\section{Introduction}
\label{intro}
Astrophysical plasmas characterized by a high Lundquist number
$S\equiv Lv_A/\eta$ ($L\equiv$ length scale of the magnetic field
\textbf{B} variability, $v_A\equiv$ Alfv\'en speed, and $\eta\equiv$
magnetic diffusivity) satisfy Alfv\'en's flux-freezing theorem in the presence of laminar plasma flow, which ensures that magnetic field lines remain tied to fluid
parcels \citep{Alfven}. The scenario is different
in a turbulent magnetofluid; see \citet{Vishnaic1999, Vishnaic2000, Eyink}
for details.
An inherently large $L$ implies a large $S$ and
ensures flux freezing in astrophysical plasmas. In particular,
the solar corona with global $L\approx 100 ~\rm Mm$, $v_{A}\approx10^{6}$
ms$^{-1}$, B$\approx10$ G, and $\eta\approx1$ m$^2$s$^{-1}$ (calculated
using the Spitzer resistivity) has $S\approx10^{14}$ \citep{Aschwanden}.
However, the coronal plasma also exhibits diffusive behavior in the
form of solar transients---such as solar flares, coronal mass
ejections (CMEs), and jets. All of these are manifestations of
magnetic reconnection, which in turn dissipates magnetic
energy into heat and kinetic energy of plasma flow, accompanied by
a rearrangement of magnetic field lines \citep{Arnab}. Magnetic
reconnection being a dissipative process, its onset requires the
generation of small scales as a consequence of large-scale dynamics,
which ultimately increases the magnetic field gradient and thereby
renders the plasma intermittently diffusive. The small scales may
naturally occur as current sheets (CSs) \citep{ParkerECS},
magnetic nulls \citep{Parnell96,Ss2020} and quasi-separatrix layers
(QSLs) \citep{Demoulin, avijeet2020}, or can develop spontaneously
during the evolution of the magnetofluid. Such spontaneous developments
(owing to discontinuities in the magnetic field) are expected from
Parker’s magnetostatic theorem \citep{ParkerECS} and have also been
established numerically by MHD simulations \citep{Ss2020, DKRB,
SKRB, Sanjay2016, SK2017, avijeet2017, avijeet2018, Ss2019,
Sanjay2021}. Identification of the small (viz. the dissipation)
scale depends on the specific physical system under consideration.
For example, the length scale at which the reconnection occurs is
found to be $L_{\eta}\equiv\sqrt{\tau_{d}\eta}\approx$32~m, based on
$\eta\approx1$ m$^2$s$^{-1}$ and the magnetic diffusion time scale
$\tau_{d}$ approximated by the impulsive rise time of hard X-ray
flux $\approx 10^3$ s \citep{PF200} during a flare. Consequently,
the estimated ion inertial length scale
$\delta_i\approx 2.25$ m in the solar corona \citep{PF200} suggests
that the order of the dissipation term, $1/S\approx 10^{-5}$ (approximated
with $L_{\eta}$), is
smaller than the order of the Hall term, $\delta_i/L_\eta\approx 10^{-2}$,
in the standard dimensionless induction equation
\citep{Westerberg07, 2021ApJ...906..102B}
\begin{equation}
\label{inducresist}
\frac{{\partial\bf{B}}}{\partial t} =
\nabla\times \left({\bf{v}}\times{\bf{B}}\right)
-\frac{1}{S}\nabla\times{\bf{J}}
-\frac{\delta_i}{L_\eta}\nabla\times\left({\bf{J}}\times{\bf{B}}\right)~,
\end{equation}
where ${\bf{J}}(=\nabla\times{\bf{B}})$ and ${\bf{v}}$ are the
volume current density and the plasma flow velocity, respectively.
This difference in the order of magnitude clearly indicates the
importance of the Hall term in the diffusive limit {\bf{\citep{BIRN, BhattacharjeeReview}}} of the solar
coronal plasma, which further signifies that HMHD can play a
crucial role in coronal transients, since magnetic reconnections
are their underlying mechanism. Importantly, the aforesaid activation
of the Hall term only in the diffusive limit is crucial in setting up an
HMHD-based numerical simulation, invoked later in the paper.
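For reference, the order-of-magnitude estimates quoted above can be reproduced with a few lines of code. The following sketch is our illustration, not part of the original analysis; the numbers are the representative coronal values used in the text, not precise measurements.
\begin{verbatim}
import numpy as np

# Representative coronal values quoted in the text (SI units).
L_global = 100e6    # global length scale, 100 Mm in metres
v_A      = 1.0e6    # Alfven speed, m/s
eta      = 1.0      # magnetic diffusivity, m^2/s
tau_d    = 1.0e3    # diffusion time ~ impulsive rise time of HXR flux, s
delta_i  = 2.25     # ion inertial length, m

S_global = L_global * v_A / eta   # ~1e14, global Lundquist number
L_eta    = np.sqrt(tau_d * eta)   # ~32 m, reconnection length scale
hall     = delta_i / L_eta        # relative size of the Hall term (order 1e-2)

print(S_global, L_eta, hall)
\end{verbatim}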
Important insight into magnetic reconnection can be gained by casting
(\ref{inducresist}) in the absence of dissipation as
\begin{equation}
\label{inducresist1}
\frac{{\partial\bf{B}}}{\partial t} =
\nabla\times \left({\bf{w}}\times{\bf{B}}\right)~,
\end{equation}
\noindent following \citet{hornig-schindler}. The velocity ${\bf{w}}={\bf{v}}-(\delta_i/L_\eta){\bf{J}}$,
which is also the electron fluid velocity, conserves magnetic flux \citep{schindler} and topology \citep{hornig-schindler} since
field lines are tied to it. Consequently, field lines slip out from the fluid parcels advecting with velocity
{\bf{v}}, to which the lines are frozen in ideal MHD. Importantly, the resulting breakdown of flux freezing is
localized to the region where the current density is large and the Hall term is effective. Because of the slippage, two fluid parcels do not remain connected by the same field lines over time---a change in field line connectivity. Quoting \citet{schindler}, such localized breakdown of flux freezing along with the resulting change in connectivity can be considered the basis of reconnection \citep{axford}. Additional slippage of field lines
occurs in the presence of the dissipation term, but with a change in magnetic topology. The present paper extensively relies on this interpretation of reconnection as the slippage of magnetic field lines and the resulting change in magnetic connectivity.
The importance of HMHD is by no means limited to
coronal transients. For example, HMHD is important in the Earth's
magnetosphere, particularly at the magnetopause and the magnetotail
where CSs are present \citep{Mozer2002}. Generally, HMHD is
expected to support faster magnetic reconnection, although the Hall
term in the induction equation does not directly affect the dissipation
rates of magnetic energy and helicity \citep{PF200, chenshi}. The faster reconnection may be associated with a more effective
slippage of field lines in HMHD compared to resistive MHD, compatible
with the arguments presented earlier. Nevertheless,
these unique properties of the HMHD are expected to bring
subtle changes in the dynamical evolution of plasma, particularly
in the small scales dominated by magnetic reconnections, presumably
bringing a change in the large scales as a consequence. Such subtle
changes were found in the recent HMHD simulation
\citep{2021ApJ...906..102B}, performed by extending the computational
model EULAG-MHD \citep{PiotrJCP} to include the Hall effects.
Notably, the faster reconnection compared to MHD led to a breakage
of a magnetic flux rope, generated from analytically constructed
initial bipolar magnetic field lines \citep{Sanjay2016}. In turn,
the flux rope breakage resulted in the generation of magnetic
islands as theorized by \citet{Shibata}. Clearly, it is compelling
to study the HMHD evolution in a more realistic scenario with the
initial magnetic field obtained from a solar magnetogram. To attain
this objective, we select the active region (AR) NOAA 12734, recently
reported by \citet{2021Joshi}, which produced a C1.3 class
flare.
In the absence of reliable direct measurements of the coronal magnetic field,
several extrapolation models such as nonlinear force-free
field (NLFFF) \citep{2008Wglman, 2012WglmnSakurai} and non-force-free
field (non-FFF) \citep{HuDas08, Hu2010} have been developed to construct
the coronal magnetic field using photospheric magnetograms. The
standard is the NLFFF, and the recent data-based MHD simulations
initialized with it have been reasonably successful in simulating
the dynamics of various coronal transients \citep{2013Jiang,
2014NaturAm, 2014Innoue, 2016Savcheva}. However, the NLFFF
extrapolations require the photosphere to be treated as force-free, while
it is actually not so \citep{Gary}. Hence, a ``preprocessing technique''
is usually employed to minimize the Lorentz force on the photosphere
in order to provide a boundary condition suitable for NLFFF
extrapolations \citep{2006SoPhWgl, 2014SoPhJiang}, thereby
compromising the realism. Recently, the non-force-free-field (non-FFF)
model, based on the principle of minimum energy dissipation rate
\citep{bhattaJan2004, bhattaJan2007}, has emerged as a plausible
alternative to the force-free models \citep{HuDas08, Hu2010,
2008ApJHu}. In the non-FFF model, the magnetic field \textbf{B} satisfies
the double-curl-Beltrami equation \citep{MahajanYoshida} and the
corresponding Lorentz force on the photosphere is non-zero while
it decreases to small values at the coronal heights \citep{avijeet2018,
Ss2019, avijeet2020}---concurring with the observations. In this
paper, we use the non-FFF extrapolation \citep{Hu2010} to obtain the
magnetic field in the corona using the photospheric vector magnetogram
obtained from the Helioseismic and Magnetic Imager (HMI) \citep{HMI}
onboard the Solar Dynamics Observatory (SDO) \citep{SDO}.
The paper is organized as follows. Section \ref{obs} describes the
flaring event in AR NOAA 12734; Section \ref{extrapolation} presents
the magnetic field line morphology of AR NOAA 12734 along with the
preferable sites for magnetic reconnection---QSLs, a 3D null
point, and a null line---found from the non-FFF extrapolation. Section
\ref{simulation-results} focuses on the numerical model, the numerical
set-up, and the evolution of the magnetic field lines obtained from
the extrapolation along with their realizations in observations.
Section \ref{summary} highlights the key findings.
\section{Salient features of the C1.3 class flare in AR NOAA 12734}
\label{obs}
The AR NOAA 12734 produced an extended C1.3 class flare
on March 08, 2019 \citep{2021Joshi}. The impulsive phase of the
flare started at 03:07 UT, as reported in Figure 3 of
\citet{2021Joshi}, which shows the X-ray flux in the 1-8 {\AA} and
0.5-4 {\AA} channels detected by the Geostationary Operational Environmental
Satellite (GOES) \citep{Gracia}. The flux exhibits
two successive peaks after the onset of the flare,
one around 03:19 UT and another roughly around 03:38 UT. \citet{2021Joshi}
suggested the eruptive event to take place in a coronal sigmoid
with two distinct stages of energy release. Additional observations
using the multi-wavelength channels of Atmospheric Imaging Assembly
(AIA) \citep{AIA} onboard SDO are listed below to highlight important
features pertaining to simulations reported in this paper. Figure
\ref{observations} illustrates a spatio-temporal
observational overview of the event. Panel (a)
shows the remote semicircular brightening (C1) prior to the impulsive
phase of the flare (indicated by the yellow arrow). Panels (b) to (d)
indicate the flare by a yellow arrow and the eruption by a white arrow
in the 94 {\AA}, 171 {\AA}, and 131 {\AA} channels, respectively.
Notably, the W-shaped brightening appears in panels (b) to (d) along
with the flare in different wavelength channels of SDO/AIA. Panel
(e) shows the circular structure of the chromospheric material (C2)
during the impulsive phase of the flare. It also highlights the
developed W-shaped flare ribbon (enclosed by the white box) which has
a tip at the center (marked by the white arrow). Panel (f) depicts
the post-flare loops in 171 {\AA} channel, indicating the post-flare
magnetic field line connectivity between various negative and
positive polarities on the photosphere.
\section{non-FFF Extrapolation of the AR NOAA 12734}
\label{extrapolation}
As stated upfront, the non-FFF extrapolation technique proposed by
\citet{HuDas08} and based on the minimum dissipation rate theory
(MDR) \citep{bhattaJan2004, bhattaJan2007} is used to obtain the
coronal magnetic field for the AR NOAA 12734. The extrapolation
essentially solves the equation
\begin{eqnarray}
\label{tc}
\nabla\times\nabla\times\nabla\times \textbf{B}+a_1 \nabla\times\nabla\times
\textbf{B}+b_1 \nabla\times\textbf{B}=0~,
\end{eqnarray}
where the parameters $a_1$ and $b_1$ are constants. Following
\citet{Hu2010}, the field is constructed as
\begin{eqnarray}
\textbf{B}=\sum_{i=1,2,3} \textbf{B}_{i}~,~~ \nabla\times \textbf{B}_{i}
=\alpha_{i} \textbf{B}_{i}~,
\end{eqnarray}
where $\alpha_i$ is constant for a given $\textbf{B}_i$. The subfields
$\textbf{B}_1$ and $\textbf{B}_3$ are linear force-free having
$\alpha_1\neq\alpha_3$, whereas $\textbf{B}_2$ is a potential field
with $\alpha_2=0$. An optimal pair of $\alpha=\{\alpha_1,\alpha_3\}$
is iteratively found by minimizing the average deviation
between the observed transverse field ($\textbf{B}_t$) and the
computed ($\textbf{b}_t$) transverse field, quantified by
\begin{equation}
\label{En}
E_n=\left(\sum_{i=1}^{M} |\textbf{B}_{t,i}-\textbf{b}_{t,i}|\times |\textbf{B}_{t,i}|\right)/\left(\sum_{i=1}^{M}|\textbf{B}_{t,i}|^2\right)~,
\end{equation}
on the photosphere. Here, $M=N^2$ represents
the total number of grid points on the transverse plane. The grid
points are weighted with respect to the strength of the observed
transverse field to minimize the contribution from weaker fields,
see \citep{HuDas08, Hu2010} for further details.
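For illustration, the error metric in (\ref{En}) can be evaluated as sketched below. This is a schematic rendition, not the actual extrapolation code; the arrays \texttt{Bt\_obs} and \texttt{bt\_comp} are placeholders for the observed and computed transverse field components on the $N\times N$ photospheric grid.
\begin{verbatim}
import numpy as np

def transverse_error(Bt_obs, bt_comp):
    """Transverse-field error E_n defined above: deviation between the
    observed (Bt_obs) and computed (bt_comp) transverse fields, both of
    shape (N, N, 2), weighted by the observed transverse field strength."""
    Bmag = np.linalg.norm(Bt_obs, axis=-1)            # |B_t,i|
    dev  = np.linalg.norm(Bt_obs - bt_comp, axis=-1)  # |B_t,i - b_t,i|
    return np.sum(dev * Bmag) / np.sum(Bmag ** 2)

# Hypothetical usage with placeholder data on a 128 x 128 grid:
rng = np.random.default_rng(0)
Bt_obs  = rng.normal(size=(128, 128, 2))
bt_comp = Bt_obs + 0.1 * rng.normal(size=(128, 128, 2))
print(transverse_error(Bt_obs, bt_comp))
\end{verbatim}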
Since (\ref{tc}) involves the evaluation of the second-order
derivative, $(\nabla\times\nabla\times \textbf{B})_z=-(\nabla^2
\textbf{B})_z$ at $z=0$, evaluation of \textbf{B} requires magnetograms
at two different values of $z$. In order to work with the generally
available single-layer vector magnetograms, an algorithm was
introduced by \cite{Hu2010} that involves additional
iterations to successively fine-tune the potential subfield
$\textbf{B}_2$. The system is reduced to second order by taking
initial guess $\textbf{B}_2=0$, which makes it easier to determine
the boundary condition for $\textbf{B}_1$ and $\textbf{B}_3$. If the
calculated value of $E_n$ turns out unsatisfactory---i.e.,
overly large---then a potential field corrector to $\textbf{B}_2$
is calculated from the difference in the observed and computed
transverse fields and subsequently summed with the previous
$\textbf{B}_2$ to further reduce $E_n$. Notably, recent simulations
initiated with the non-FFF model have successfully explained the
circular ribbon flares in AR NOAA 12192 \citep{avijeet2018} and AR
NOAA 11283 \citep{avijeet2020} as well as a blowout
jet in AR NOAA 12615 \citep{Ss2019}, thus validating the credibility of the non-FFF approach.
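For clarity, the iteration described above can be summarized by the following schematic sketch. This is our paraphrase of the procedure, not the authors' implementation: the callables \texttt{solve\_subfields} (returning the computed transverse field and $E_n$ for the optimal $\{\alpha_1,\alpha_3\}$ pair given the current $\textbf{B}_2$) and \texttt{potential\_correction} are placeholders.
\begin{verbatim}
def nonfff_iterate(Bt_obs, solve_subfields, potential_correction,
                   n_outer=20, tol=0.25):
    """Schematic control flow of the single-layer non-FFF iteration:
    the potential subfield B2 is successively corrected until the
    transverse error E_n is acceptably small."""
    B2 = 0.0                       # initial guess: zero potential subfield
    for _ in range(n_outer):
        # computed transverse field and E_n for the optimal
        # (alpha_1, alpha_3) pair at the current potential subfield B2
        bt, En = solve_subfields(B2)
        if En < tol:               # E_n acceptable: stop correcting
            break
        # potential-field corrector built from the residual transverse field
        B2 = potential_correction(B2, Bt_obs - bt)
    return B2, En
\end{verbatim}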
The vector magnetogram is selected for 2019 March 08, at 03:00 UT
($\approx$ 7 minutes prior to the start of the flare). The original
magnetogram cutout, of dimensions 342$\times$195 pixels at a resolution of 0.5 arcsec per pixel (an extent of $124~ \rm Mm\times 71$ Mm), is taken from the
``hmi.sharp$\_$cea$\_$720s" series and ensures an
approximate magnetic flux balance at the bottom boundary. To
optimize the computational cost with the available resources, the
original field is rescaled and non-FFF-extrapolated over a volume of
256$\times$128$\times$128 pixels while keeping the physical extent the same
and preserving all magnetic structures throughout the region. The reduction, in effect, changes the conversion factor of 1 pixel to $\approx 0.484$ Mm along $x$ and $\approx 0.554$ Mm along the $y$ and $z$ directions of the employed Cartesian coordinate system.
Panel (a) of Figure~\ref{lfcombnd} shows $E_n$ in the transverse
field, defined in (\ref{En}), as a function of the number of iterations.
It shows that $E_n$ tends to saturate at a value of $\approx$0.22.
Panel (b) of Figure \ref{lfcombnd} shows logarithmic decay of the
normalized horizontally averaged magnetic field, current density,
and Lorentz force with height. It is clear that the Lorentz force
is appreciable on the photosphere but decays off rapidly with height,
agreeing with the general perception that the corona is force-free
while the photosphere is not \citep{Liu2020, Yalim20}. Panel (c)
shows that the Pearson-r correlation between the extrapolated and
observed transverse fields is $\approx$0.96, implying strong
correlation. The direct volume rendering of the Lorentz force in
panel (d) also reveals a sharp decay of the Lorentz force with
height, expanding on the result of panel~(b).
To facilitate description, Figure \ref{regions}~(a) shows the
SDO/AIA 304 {\AA} image at 03:25 UT, where the flare ribbon brightening
has been divided into four segments marked as B1-B4.
Figure \ref{regions}~(b) shows the initial global magnetic field
line morphology of AR NOAA 12734, partitioned into
four regions R1-R4, corresponding to the flare ribbon brightening
segments B1-B4. The bottom boundary of panel (b) consists of
$B_z$ maps in grey scale, where the lighter shade indicates positive
polarity regions and the darker shade marks the negative polarity
regions. The magnetic field line topologies and structures
belonging to a specific region and contributing to the flare are
documented below. \bigskip
\noindent {\bf{Region R1:}} The top-down view of the global magnetic field
line morphology is shown in the panel (a) of Figure~\ref{region1}.
To help locate QSLs, the bottom boundary is overlaid
with the $\log Q$ map of the squashing factor $Q$ \citep{Liu} in all
panels of the figure. Distribution of high $Q$ values along with
$B_z$ on the bottom boundary helps in identifying differently
connected regions. The region with a large $Q$ is prone to the onset
of slipping magnetic reconnections \citep{Demoulin}. Foot points
of magnetic field lines constituting QSL1 and QSL2 trace along the
high $Q$ values near the bottom boundary. QSL1, involving the
magnetic field lines Set I (green) and Set II (maroon), is shown
in panel (b). Particularly, magnetic field lines Set
I (green) extends higher in the corona forming the largest loops
in R1. Panel~(c) illustrates a closer view of QSL2
(multicolored) and the flux rope (black) beneath,
situated between the positive and negative polarities P1, P2 and
N1, respectively. In panel~(d), the flux rope (constituted by the
twisted black magnetic field lines) is depicted using the side view.
The twist value $T_w$ \citep{Liu} in the three vertical planes along the cross
section of the flux rope is also overlaid. Notably, the twist value
is 2 at the center of the rope and decreases outward (cf. the vertical
plane in the middle of the flux rope in panel (d)). \bigskip
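For reference, the twist number of \citet{Liu} is commonly computed as the field-line integral $T_w = \int (\nabla\times{\bf B})\cdot{\bf B}/(4\pi |{\bf B}|^{2})\, dl$ (in the units used here, where ${\bf J}=\nabla\times{\bf B}$). A minimal sketch of its evaluation, assuming the integrand has already been sampled along a traced field line (the tracing itself is not shown), is
\begin{verbatim}
import numpy as np

def twist_number(J_dot_B, B_squared, dl):
    """T_w = (1/4pi) * integral of (J . B)/|B|^2 along a field line,
    with J = curl(B); the three arguments are 1D arrays sampled along
    the traced line (dl = local segment lengths)."""
    return np.sum(J_dot_B / B_squared * dl) / (4.0 * np.pi)
\end{verbatim}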
\noindent {\bf{Region R2:}} Figure~\ref{R2R3R4exp} (a) shows the
side view of a 3D null point geometry of magnetic
field lines and the bottom boundary $B_z$ overlaid
with log $Q$ ranging between 5 and 10. Panel~(b) depicts an enlarged
view of the 3D null location, marked black. The height of the null
is found to be $\approx$ 3~Mm from the photosphere. The null is
detected using the bespoke procedure \citep{DKRB, Ss2020} that
approximates the Dirac delta on the grid as
\begin{equation}
\label{ndefine}
n(B_i) = \exp\big[-\sum_{i=x,y,z}{(B_{i} -B_{o})^2}/{d_{o}^2}\big]~,
\end{equation}
where the small constants $B_o$ and $d_o$ correspond to the isovalue
of $B_i$ and the Gaussian spread, respectively. The function $n(B_i)$ takes
significant values only if $B_i\approx 0~\forall i$, whereupon a
3D null is identified as the point where the three isosurfaces having isovalues
$B_i=B_o$ intersect.\bigskip
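A minimal sketch of evaluating this detector on a gridded field is given below. This is our illustration only: the values of $B_o$ and $d_o$ are arbitrary placeholders, and the actual implementation of \citet{DKRB, Ss2020} may differ in detail.
\begin{verbatim}
import numpy as np

def null_detector(Bx, By, Bz, B_o=0.0, d_o=0.05):
    """n(B) as defined above: close to 1 only where all three field
    components are simultaneously near B_o, i.e. near a 3D null."""
    return np.exp(-((Bx - B_o) ** 2 + (By - B_o) ** 2
                    + (Bz - B_o) ** 2) / d_o ** 2)

# A null can then be localized from the cells where, e.g., n > 0.9:
# candidates = np.argwhere(null_detector(Bx, By, Bz) > 0.9)
\end{verbatim}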
\noindent {\bf{Region R3:}} Side view of the magnetic field line
morphology in region R3 is shown in Figure \ref{R2R3R4exp} (c),
where the yellow surface corresponds to $n=0.9$. Panel~(d) highlights
a ``fish-bone-like'' structure, similar to the
schematic in Figure 5 of \citet{WangFB}. To show
that in the limiting case the $n=0.9$ surface reduces to a null line, we plot the
corresponding contours in the range $0.6\leq n \leq 0.9$ on three
pre-selected planes highlighted in panel (e). The size reduction
of the contours with increasing $n$ indicates the surface converging
to a line. Such null lines are also conceptualized as favorable
reconnection sites \citep{WangFB}. \bigskip
\noindent {\bf{Region R4:}} Figure \ref{R2R3R4exp} (f) shows magnetic
field lines relevant to the plasma rotation in B4. Notably, the null
line from R3 intrudes into R4, and the leftmost plane in R3 (Figure \ref{R2R3R4exp} (e)) is also shared by R4.
\section{HMHD and MHD simulations of AR NOAA 12734}
\label{simulation-results}
\subsection{Governing Equations and Numerical Model}
In the spirit of our earlier related works
\citep{avijeet2018, Ss2019, avijeet2020}, the plasma is idealized
to be incompressible and thermodynamically inactive as well as
explicitly nonresistive. While this relatively simple
idealization is naturally limited, it exposes the basic dynamics
of magnetic reconnections unobscured by the effects due to
compressibility and heat transfer. Albeit the latter are important
for coronal loops \citep{2002ApJ...577..475R}, they do not directly
affect the magnetic topology---the focus of this paper. Historically
rooted in classical hydrodynamics, such idealizations have a proven
record in theoretical studies of geo/astrophysical phenomena
\citep{Rossby38, 1991ApJ...383..420D, RBCLOW, 2021ApJ...906..102B}.
Inasmuch as their cognitive value depends on an a posteriori validation
against the observations, the present study offers yet another
opportunity to do so.
The Hall forcing has been incorporated \citep{2021ApJ...906..102B}
in the computational model EULAG-MHD \citep{PiotrJCP} to solve the
dimensionless HMHD equations,
\begin{eqnarray}
\label{momtransf}
\frac{\partial{\bf v}}{\partial t} +({\bf v}\cdot \nabla){\bf v}&=&
-\nabla p + (\nabla\times{\bf B})\times{\bf B} +
\frac{1}{R_F^A}\nabla^2 {\bf v}~,\\
\label{induc}
\frac{\partial{\bf B}}{\partial t}&=& \nabla\times(\textbf{v}\times{\bf B})
-d_H\nabla\times((\nabla\times{\bf B})\times{\bf B})~,\\
\label{incompv}
\nabla\cdot {\bf v}&=& 0~, \\
\label{incompb}
\nabla\cdot {\bf B}&=& 0~,
\end{eqnarray}
where $R_F^A= v_A L/\nu$, with $\nu$ the kinematic viscosity, is an effective fluid Reynolds number
in which the plasma speed is replaced by the Alfv\'en speed $v_A$.
Hereafter, $R_F^A$ is referred to as the fluid Reynolds number for convenience. The transformation of the dimensional quantities (expressed in cgs units)
into the corresponding non-dimensional quantities,
\begin{equation}
\label{norm}
{\bf{B}}\longrightarrow \frac{{\bf{B}}}{B_0},
\quad{\bf{x}}\longrightarrow \frac{\bf{x}}{L_0},
\quad{\bf{v}}\longrightarrow \frac{\bf{v}}{v_A},
\quad t \longrightarrow \frac{t}{\tau_A},
\quad p \longrightarrow \frac{p}{\rho_0 {v_{A}}^2}~,
\end{equation}
assumes arbitrary $B_0$ and $L_0$ while the Alfv\'en speed $v_A \equiv
B_0/\sqrt{4\pi\rho_0}$. Here $\rho_0$ is a constant mass density,
and $d_H$ is the Hall parameter. In the limit of $d_H=0$,
(\ref{momtransf})-(\ref{incompb}) reduce to the MHD equations
\citep{avijeet2018}.
The governing equations (\ref{momtransf})-(\ref{incompb})
are numerically integrated using EULAG-MHD---a magnetohydrodynamic
extension \citep{PiotrJCP} of the established Eulerian/Lagrangian
comprehensive fluid solver EULAG \citep{Prusa08} predominantly used
in atmospheric research. The EULAG solvers are based on the
spatio-temporally second-order-accurate nonoscillatory forward-in-time
advection scheme MPDATA (for {\it multidimensional positive definite
advection transport algorithm}) \citep{Piotrsingle}. Importantly,
unique to MPDATA is its widely
documented dissipative property mimicking the action of explicit
subgrid-scale turbulence models wherever the concerned advective
field is under-resolved; this property is known as implicit
large-eddy simulation (ILES) \citep{Grinstein07}. In effect,
magnetic reconnections arising in our simulations dissipate the
under-resolved magnetic field along with other advected
field variables and restore flux freezing. These reconnections,
being intermittent and local, successfully mimic physical reconnections.
\subsection{Numerical Setup}
The simulations are carried out by mapping the physical domain of $256\times128\times128$ pixels onto the computational domain of $x\in\{-1, 1\}$, $y\in\{-0.5,0.5\}$, $z\in\{-0.5,0.5\}$ in a Cartesian coordinate system. The dimensionless spatial step sizes are $\Delta x=\Delta y=\Delta z \approx 0.0078$. The dimensionless time step is $\Delta t=5\times 10^{-4}$, set to resolve the whistler speed---the fastest
speed in incompressible HMHD. The rationale is briefly presented in Appendix \ref{appnd}.
The corresponding initial state is motionless ($\textbf{v}=0$) and the initial
magnetic field is provided from the non-FFF extrapolation. The non-zero
Lorentz force associated with the extrapolated field pushes the
magnetofluid to initiate the dynamics. Since the maximal variation
of the magnetic flux through the photosphere is only 2.28$\%$ of its
initial value during the flare (not shown), $B_z$ at the
bottom boundary (at $z=0$) is kept fixed throughout the simulation,
while all other boundaries are
kept open. For the velocity, all boundaries are set open. The mass density is set to $\rho_0=1$.
The fluid Reynolds number is set to $500$, which is smaller by a factor of $\approx 50$ than its coronal value of $\approx 25000$ (calculated using the kinematic viscosity $\nu=4\times 10^9 ~\rm m^2s^{-1}$ \citep{Aschwanden} of the solar corona).
Without any loss of generality, the reduction in $R_F^A$ can be envisaged
to cause a reduction in the computed Alfv\'en speed, $v_A|_\text{computed} \approx 0.02\times v_A|_\text{corona}$, where $L$ for the computational and coronal length scales is set to 71 Mm and 100 Mm, respectively. This diminished Alfv\'en speed reduces the requirement on computational resources and also relates the simulated time to the observation time. The results presented herein pertain to a run for 1200$\Delta t$, which, along with the normalizing $\tau_A\approx 3.55\times 10^3$ s, roughly corresponds to an observation time of $\approx$ 35 minutes. For ease of reference in comparison with observations, we present the time in units of 0.005$\tau_A$ (i.e., 17.75 s) in the discussions of the figures in subsequent sections.
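The correspondence between code time and observation time quoted above follows directly from the normalization; a small arithmetic check (our illustration, using the values given in the text) is
\begin{verbatim}
# Alfven time and observation-time conversion (values as quoted in the text).
L_code   = 71e6                # smallest physical extent, m
v_A_comp = 0.02 * 1.0e6        # reduced (computed) Alfven speed, m/s
tau_A    = L_code / v_A_comp   # ~3.55e3 s

dt      = 5e-4                 # dimensionless time step
n_steps = 1200
total_obs_minutes = n_steps * dt * tau_A / 60.0   # ~35 minutes
time_unit_seconds = 0.005 * tau_A                 # ~17.75 s per figure time unit
print(tau_A, total_obs_minutes, time_unit_seconds)
\end{verbatim}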
Although idealizing the coronal plasma with a reduced Reynolds number is inconsequential here, since the focus is on a comparison of MHD and HMHD evolution, we believe the above rationale merits further contemplation. Undeniably, such a coronal plasma is not a reality. Nevertheless, the reduced $R_F^A$ does not affect the reconnections or their
consequences, but only slows down the dynamics between two such events and, importantly, reduces the computational cost, making data-based simulations realizable even with reasonable computing resources.
A recent work by \citet{JiangNat} used a homologous approach toward simulating a realistic and self-consistent flaring region.
In the present simulations, all
parameters are identical for the MHD and the HMHD runs
except for $d_H$, set to 0 and 0.004, respectively.
The value 0.004 is motivated by recognizing that the ILES dissipation models intermittent magnetic reconnections at the ${\mathcal O}(\parallel\Delta{\bf x}\parallel)$ length scales.
Consistent with the thesis put forward in the Introduction, we therefore specify
an appreciable Hall coefficient as $d_H = 0.5 \Delta z/L \approx
0.004$, where $L=1$ is the smallest extent of the computational volume
and $\Delta y= \Delta z \approx 0.0078$ serve as the dissipation scales because of the ILES
property of the model. Correspondingly, the value is also at the lower bound of the pixel- or scale-order
approximation and, in particular, an order of magnitude smaller
than its coronal value valid at the actual dissipation scale. An
important practical benefit of this selection is the optimization
of the computational cost while keeping the magnetic field line dynamics
tractable. Importantly, with the dissipation and Hall scales being tied, an increased current density at the dissipation scale introduces additional slippage of field lines in HMHD over MHD (due to the Hall term) and may be responsible for the more effective and faster reconnections found in the Hall simulation reported below.
\subsection{Comparison of the HMHD and MHD simulations}
The simulated HMHD and MHD dynamics leading to the flare show
unambiguous differences. This section documents these differences
by methodically comparing the simulated evolution of the magnetic
structures and topologies in AR NOAA 12734---namely,
the flux rope, QSLs, and null points---identified in the extrapolated
initial data in the regions R1-R4.
\subsubsection{Region R1}
The dynamics of region R1 are by far the most complex among the
four selected regions. To facilitate future reference as well as to outline the
organization of the discussion that follows, Table~\ref{tab:r1} provides a brief
summary of our findings---in the spirit of theses to be proven by the simulation results.
\begin{table}
\caption{Salient features of magnetic field lines dynamics in R1}
\label{tab:r1}
\begin{tabular}{ |p{3cm}|p{5.5cm}|p{5.5cm}| }
\hline
Magnetic field lines structure& HMHD & MHD \\ [4ex]
\hline
QSL1 & Fast reconnection followed by a significant rise of loops,
eventually reconnecting higher in the corona. &Slow reconnection
followed by a limited rise of loops. \\ [6ex]
\hline
QSL2 & Fast reconnection causing the magnetic field lines to entirely
disconnect from the polarity P2. & Due to slow reconnection magnetic
field lines remain connected to P2. \\ [6ex]
\hline
Flux rope &Fast slipping reconnection of the flux-rope foot points,
followed by the expansion and rise of the rope envelope. & Slow
slipping reconnection and rise of the flux-rope envelope; the
envelope does not reach the QSL1. \\ [6ex]
\hline
\end{tabular}
\end{table}
\bigskip
The global dynamics of magnetic field lines in region R1 are
illustrated in Figure~\ref{fullR1}; consult
Figure~\ref{region1} for the initial condition and terminology. The
snapshots from the HMHD and MHD simulations are shown in panels
(a)-(d) and (e)-(f), respectively. In panels (a) and (b), corresponding
to $t=19$ and $t=46$, the foot points of magnetic field lines Set
II (near P2, marked maroon) exhibit slipping reconnection along
high values of the squashing factor $Q$ indicated by black arrows.
Subsequently, between $t=80$ and 81 in panels (c) and (d), the
magnetic field lines Set II rise in the corona and reconnect with
magnetic field lines Set I to change connectivity. The MHD counterpart
of the slipping reconnection in panels (e) and (f) corresponds to
magnetic field lines Set II between $t=19$ and $t=113$. It lags behind
the HMHD evolution, thus implying slower dynamics. Furthermore, the
magnetic field lines Set II, unlike in the HMHD case, do not reach up
to the magnetic field lines Set I constituting QSL1 and hence do
not reconnect. A more informative visualization of the highlighted
dynamics is supplemented in an online animation. The decay index is calculated at each time instant for both simulations and is found to be less than 1.5 above the flux rope, indicating an absence of the torus instability \citep{Torok}.
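The decay index mentioned above is customarily estimated from the fall-off of the external (strapping) field with height; a minimal sketch, assuming the relevant horizontal field strength has already been sampled at heights above the flux rope (the exact field component and sampling used in the simulations may differ), is
\begin{verbatim}
import numpy as np

def decay_index(h, B_ext):
    """n = -d ln(B_ext) / d ln(h), with B_ext the external (strapping)
    field strength sampled at heights h above the flux rope; torus
    instability is commonly associated with n exceeding ~1.5."""
    return -np.gradient(np.log(B_ext), np.log(h))
\end{verbatim}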
For more detail,
Figures~\ref{R1QSL} and \ref{ropeHMHD-MHD} illustrate evolution of
QSL2 and flux rope separately.
Figure~\ref{R1QSL} panels (a)-(b) and (c)-(d) show,
respectively, the instants from the HMHD and MHD simulations of
QSL2 between P1, P2, and N1. The HMHD instants show that
magnetic field lines which were anchored between P2
and N1 at $t=10$ have moved to P1 around $t=102$, marked by black
arrows in both panels. The magnetic field lines anchored at P2
moved to P1 along the high $Q$ values---signifying the slipping
reconnection. The MHD instants in panels (c)-(d)
show the connectivity changes of the violet and white colored
magnetic field lines. The white field line was initially connecting
P1 and N1, whereas the violet field line was connecting P2 and N1.
As a result of reconnection along the QSL, the white field line changed
its connectivity from P1 to P2 and the violet field line changed its
connectivity from P2 to P1 (marked by black arrows). Notably, in
contrast to the HMHD evolution, not all magnetic field lines initially
anchored in P2 change their connectivity from P2 to P1 during
the MHD evolution, indicating slower dynamics.
The flux rope has been introduced in panels (c) and
(d) of Figure~\ref{region1}, respectively, below the QSL2 and in
enlargement. Its HMHD and MHD evolutions along with the twists on
three different vertical cross sections are shown in panels (a)-(f)
and (g)-(i) of Figure~\ref{ropeHMHD-MHD}, respectively. Magnetic
field lines constituting the rope rise substantially higher during
the HMHD evolution as a result of slipping reconnection along the high $Q$
values in panels (c)-(f). In panel (c), at $t=32$, the foot points of the
rope that are anchored on the right side (marked by a black arrow) change
their connectivity from one high $Q$ regime to another in panel (d)
at $t=33$; i.e., the foot points on the right have moved to the left
side (marked by a black arrow). Afterwards, the magnetic field lines rise because of the
continuous slipping reconnection, as evidenced in panels (e) to (f)
and the supplemented animation. Comparing panels (a) with (g) at
$t=10$ and (c) with (h) at $t=32$, we note that the twist
value $T_w$ is higher in the HMHD simulation. Panels
(h)-(i) highlight the displaced foot points of the flux rope due to slipping reconnection
at $t=32$ and $t=120$ (cf. black arrow). The rope is preserved throughout the
HMHD and MHD simulations.
The rise and expansion of the flux-rope envelope
owing to slipping reconnection is remarkable in the
HMHD simulation. \citet{dudik} have already shown such a flux-rope
reconnection along QSL in a J-shaped current region,
with slipping reconnection causing the flux rope to form a sigmoid
(S-shaped hot channel observed in EUV images of SDO/AIA) followed
by its rise and expansion. Further insight is gained by overlaying
the flux rope evolution shown in Figure \ref{ropeHMHD-MHD} with direct volume rendering of
$|{\bf J}|/|{\bf B}|$ (Figures \ref{ropecs} and \ref{ropecsmhd}) as a measure of magnetic field gradient for the HMHD and MHD simulations.
In the HMHD case, the appearance of large values of $|{\bf J}|/|{\bf B}|>475$ inside the rope
(panels (a) to (c)) and at the foot points on the left of the rope (panels (d) to (e)) is apparent.
The development of the large $|{\bf J}|/|{\bf B}|$ is indicative of reconnection
within the rope. Contrarily, the MHD simulation lacks such high values of $|{\bf J}|/|{\bf B}|$
in the same time span (panels (a)-(b)) and the field lines show no slippage---agreeing with the proposal that large currents magnify the Hall term, resulting in more effective slippage of field lines.
\subsubsection{Region R2}
To compare the simulated magnetic field line dynamics in region
R2 with the observed tip of the W-shaped flare ribbon
B2 (Figure \ref{regions} (a)) during the HMHD and MHD evolution,
we present the instants from both
simulations at $t=70$ in panels (a) and (b) of Figure \ref{R2comp},
respectively. Importantly, the lower spine remains anchored to the bottom boundary during the HMHD simulation (evident from the supplemented animation along with Figure \ref{R2comp}). Further, Figure \ref{R2comp-CS} shows the evolution of the lower spine along with $|\textbf{J}|/|\textbf{B}|$ on the bottom boundary for the HMHD (panels (a) to (d)) and MHD (panels (e) to (h)) cases. In the HMHD case, noteworthy is the slipping motion of the lower spine (marked by the black arrows) tracing the $|\textbf{J}|/|\textbf{B}|>350$ regions on the bottom boundary (panels (a) to (b)). In the MHD case, such high values of $|\textbf{J}|/|\textbf{B}|$ are absent on the bottom boundary---suggesting that the slippage of the field lines on the bottom boundary is less effective than in the HMHD case. The finding is in agreement with the idea of enhanced slippage of field lines due to high current densities, as conceptualized in the Introduction.
The anchored lower spine provides a path for the plasma to flow downward
to the brightening segment B2. In the actual corona, such flows result in
flare brightening \citep{Benz}.
In contrast, the lower
spine gets completely disconnected from the bottom boundary (Figure
\ref{R2comp} (b)) in the MHD simulation, hence failing to explain
the tip of the W-shaped flare ribbon in B2. The
anchored lower spine in the HMHD simulation is caused by a complex
series of magnetic field lines reconnections at the 3D null and
along the QSLs in R2, as depicted in the animation.
\subsubsection{Region R3}
HMHD and MHD simulations of magnetic field lines dynamics around
the null-line are shown in Figures~\ref{R3HMHD} and \ref{R3MHD}
respectively. Figure~\ref{R3HMHD} shows the blue magnetic field
lines before and after the reconnections (indicated
by black arrows) between $t=4$ and 5 (panels (a)-(b)), $t=52$ and 53
(panels (c)-(d)), and $t=102$ and 103 (panels (e)-(f)) during the HMHD
simulation. Figure \ref{R3MHD} shows the same blue
magnetic field lines before and after the reconnections
(indicated by black arrows) between $t=12$ and 13 (panels (a)-(b)),
$t=59$ and 60 (panels (c)-(d)), and $t=114$ and 115 (panels (e)-(f))
during the MHD simulation. Comparison of the panels (a)-(f) of
Figure \ref{R3HMHD} with the same panels of Figure \ref{R3MHD}
reveals earlier reconnections of the blue magnetic
field lines in the HMHD simulation. In both figures, green
velocity vectors on the right represent the local plasma flow. They
get aligned downward along the foot points of the fan magnetic field
lines, as reconnection progresses. Consequently, the plasma flows
downward and impacts the denser and cooler chromosphere to give
rise to the brightening in B3. The velocity vectors
pointing upward represent a flow toward the null-line. The plasma
flow pattern in R3 is the same in the HMHD and in
the MHD simulation. The vertical $yz-$plane passing through the cross section
of the null-line surface (also shown in Figure \ref{R2R3R4exp} (d))
in all the panels of Figures \ref{R3HMHD} and \ref{R3MHD} shows the
variation of $n$ with time. It is evident that the null is not
destroyed throughout the HMHD and MHD evolution. Structural changes in the field lines caused by reconnection are near-identical in both simulations, indicating inefficacy of the Hall term. This inefficacy is justifiable as $|\textbf{J}|/|\textbf{B}|$ remains small, $\approx 10$ (not shown), in both the HMHD and MHD evolution.
\subsubsection{Region R4} The development of the circular motion of magnetic field lines in region R4 during the HMHD simulation is depicted in Figure \ref{lftcrclrmotion}. The figure shows the global dynamics of magnetic field lines in R4, while the inset images show a zoomed-in view highlighting their circular motion. The bottom boundary shows $B_z$ in the main figure, while the inset images show the $z$-component of the plasma flow at the bottom boundary (on the $xy$-plane). The red vectors represent the plasma flow direction as well as magnitude in all the panels of Figure \ref{lftcrclrmotion}, where the anticlockwise pattern of the plasma flow is evident. The global dynamics highlight reconnection of the loop anchored between positive and negative polarities at $t=60$, as it gets disconnected from the bottom boundary in panels (c)-(d). The animation accompanying Figure \ref{lftcrclrmotion} highlights an anticlockwise motion of foot points in the
same direction as the plasma flow, indicating the field lines to be frozen in the fluid.
The trapped plasma may cause the rotating structure B4 in the observations (cf. Figure \ref{regions} (a)). However, no such motion is present during the MHD evolution of the same magnetic field lines (not shown). An interesting feature noted in the animation is the clockwise slippage of field lines after the initial anticlockwise rotation. Further analysis of R4 using the direct volume rendering of $|\textbf{J}|/|\textbf{B}|$ is presented in Figure \ref{lftcrclrmotion-SV}. The figure shows that $|\textbf{J}|/|\textbf{B}|$ attains high values $\ge225$ (enclosed by the blue rectangles) within the rotating field lines from $t\approx86$ onward. This suggests that the slippage of field lines is, once again, related to the high magnetic field gradients.
\par For completeness, we present snapshots of the overall magnetic field line morphology, including the magnetic structures and topology of regions R1, R2, R3, and R4 together, overlaid with the 304 {\AA} and 171 {\AA} images from the HMHD and MHD simulations. Figure \ref{Tv304171} (a) shows an instant (at $t=75$) from the HMHD simulation where the topologies and magnetic structures in R1, R2, R3, and R4, plus the additionally drawn locust-colored magnetic field lines between R2 and R3, are shown collectively. It shows an excellent match of the magnetic field lines in R2 with the observed tip of the W-shaped flare ribbon at B2, pointed out by the pink arrow in panel (a). Foot points of the spine-fan geometry around the 3D null orient themselves in the same fashion as the observed tip of the W-shaped flare ribbon at B2, as seen in the 304 {\AA} channel of SDO/AIA. The rising loops indicated by the white arrow correspond to the same evolution as shown in Figure \ref{fullR1}. The overall magnetic field line morphology shown in panel (a) is given at the same time ($t=75$) during the MHD simulation, overlaid with the 304 {\AA} image, in Figure \ref{Tv304171} (b). Importantly, unlike the HMHD simulation, the MHD simulation does not account for the anchored lower spine and fan magnetic field lines of the 3D null at the center of B2. Also, the significant rise of the overlying maroon magnetic field lines and the circular motion of the material in B4 are captured in the HMHD simulation only. In panel (c), the magnetic field lines overlaid with the 171 {\AA} image show that the field lines higher up in the solar atmosphere resemble the post-flare loops during the HMHD evolution. Overall, the HMHD evolution is in better agreement with the observations than the MHD evolution.
\section{Summary and Discussion}
\label{summary}
The paper compares data-based HMHD and MHD simulations using the flaring Active Region NOAA 12734 as a test bed.
The importance of HMHD stems from the realization that the Hall term in the induction equation cannot be neglected in the presence of magnetic reconnection---the underlying cause of solar flares.
The event selected for the comparison is the C1.3 class flare on March 08, 2019, around 03:19 UT. Although the event has been analyzed and reported in the literature, it is further explored here using multi-wavelength observations from SDO/AIA. The identified important features are:
an elongated extreme ultraviolet (EUV) counterpart of the eruption on the western side of the AR, a W-shaped flare ribbon, and circular motion of cool chromospheric material on the eastern part.
The magnetic field line dynamics near these features are utilized to compare the simulations.
Notably, the simulations
idealize the corona to have an Alfv\'en speed which is two orders of
magnitude smaller than its
typical value. Congruent with the general understanding, the Hall parameter is selected to tie the Hall dynamics to the dissipation scale $\mathcal{O} (\Delta \textbf{x})$
in the spirit of the ILES carried out in the paper. The magnetic reconnection here is
associated with the slippage of magnetic field lines from the plasma parcels, effective at the dissipation scale due to the local enhancement of the magnetic field gradient. The same enhancement also amplifies the Hall contribution,
presumably enhancing the slippage and thereby making the reconnection faster and more effective than in the MHD.
The coronal magnetic field is constructed by extrapolating the photospheric vector magnetic field obtained from the SDO/HMI observations employing the non-FFF technique \citep{Hu2010}. The concentrated distribution of the Lorentz force on the bottom boundary and its decrease with the height justify the use of non-FFF extrapolation for the solar corona. The initial non-zero Lorentz force is also crucial in generating self-consistent flows that initiate the dynamics and cause the magnetic reconnections.
Analyses of the extrapolated magnetic field reveal several magnetic structures and topologies of interest: a flux rope on the western part at the flaring location, a 3D null point along with the fan-spine configuration at the centre, and a ``fish-bone-like structure'' surrounding the null-line on the eastern part of the AR. All of these structures are found to be co-spatial with the observed flare ribbon brightening.
\par The HMHD simulation shows faster slipping reconnection of the flux rope foot points and overlying magnetic field lines (constituting QSLs above the flux rope) at the flaring location. Consequently, the overlying magnetic field lines rise, eventually reaching higher up in the corona and reconnecting to provide a path for plasma to eject out. The finding is in agreement with the observed elongated EUV counterpart of the eruption on western part of the AR. Contrarily, such significant rise of the flux rope and overlying field lines to subsequently reconnect higher up in the corona is absent in the MHD simulation---signifying the reconnection to be slower compared to the HMHD. Intriguingly, rise and expansion of the flux rope and overlying field lines owing to slipping reconnection on QSLs has also been modelled and observed in an earlier work by \citet{dudik}.
These are typical features of the ``standard solar flare model in 3D'', which allows for a consistent explanation of events that are
not causally connected \citep{dudik}. It also advocates that null points and true separatrices are not required for eruptive flares to occur---concurring with the results of this work.
The HMHD evolution of the fan-spine configuration surrounding the 3D null point is in better agreement with the tip of the W-shaped flare ribbon at the centre of the AR. The lower spine and fan magnetic field lines remain anchored to the bottom boundary throughout the evolution, which can account for the plasma flowing downward after the reconnection and causing the brightening. In the MHD, by contrast, the lower spine gets disconnected and cannot
account for the brightening. The reconnection dynamics around the null-line and the corresponding plasma flow direction are the same in the HMHD and the MHD simulations and agree with the observed brightening. Nevertheless, reconnection occurs earlier in the HMHD. The HMHD evolution captures an anticlockwise circular motion of magnetic field lines in the left part of the AR, which is co-spatial with the location of the rotating chromospheric material on the eastern side of the AR. No such motion was found in the MHD simulation. Importantly, the simulations explicitly associate the
generation of larger magnetic field gradients with HMHD compared to MHD, resulting in faster and more efficient field line slippage because of the enhanced Hall term.
Overall, the results documented in the paper show that the HMHD explains the flare brightening better than the MHD, prioritizing the requirement to include HMHD in future state-of-the-art data-based numerical simulations.
\section{Acknowledgement}
The simulations are performed using the 100TF cluster Vikram-100 at Physical Research Laboratory, India. We wish to acknowledge the visualization software VAPOR (\url{www.vapor.ucar.edu}), for generating relevant graphics. Q.H. and A.P. acknowledge partial support of NASA grants 80NSSC17K0016, 80NSSC21K1671, LWS 80NSSC21K0003 and NSF awards AGS-1650854 and AGS-1954503. This research was also supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262622, as well as through the Synergy Grant number 810218 (ERC-2018-SyG) of the European Research Council.
\section{Introduction}
In this paper we present and analyze a high-order time discontinuous Galerkin finite element method for the time integration of second order differential problems such as those stemming from, e.g., elastic wave propagation phenomena.
Classical approaches for the time integration of second order differential systems employ implicit and explicit finite differences, leap-frog, Runge-Kutta or Newmark schemes, see e.g. \cite{Ve07,Bu08,QuSaSa07} for a detailed review. In computational seismology, explicit time integration schemes are nowadays preferred to implicit ones, due to their computational cheapness and ease of implementation. Indeed, although being unconditionally stable, implicit methods are typically computationally expensive. The main drawback of explicit methods is that they are conditionally stable and the choice of the time step imposed by the Courant-Friedrichs-Lewy (CFL) condition can sometimes be a great limitation.
To overcome this limitation one can employ local time stepping (LTS) algorithms \cite{GrMi13,DiGr09,CoFoJo03,Dumbser2007arbitrary}, for which the CFL condition is imposed element-wise, leading to an optimal choice of the time step. The only drawback of this approach is the additional synchronization process that one needs to take into account for a correct propagation of the wave field from one element to the other.
In this work, we present an implicit time integration method based on a discontinuous Galerkin (DG) approach. Originally, DG methods \cite{ReedHill73,Lesaint74} have been developed to approximate \textit{in space} hyperbolic problems \cite{ReedHill73}, and then generalized to elliptic and parabolic equations \cite{wheeler1978elliptic,arnold1982interior,HoScSu00,CockKarnShu00,
riviere2008discontinuous,HestWar,DiPiEr}. We refer the reader to \cite{riviere2003discontinuous,Grote06} for the application of DG methods to scalar wave equations and to \cite{Dumbser2007arbitrary,WiSt2010,antonietti2012non,
ferroni2016dispersion,antonietti2016stability,AnMa2018,
Antonietti_etal2018,mazzieri2013speed,AnMaMi20,DeGl15} for the elastodynamics problem.
The DG approach has also been used to approximate initial-value problems, where the DG paradigm shows some advantages with respect to other implicit schemes such as Johnson's method, see e.g. \cite{JOHNSON1993,ADJERID2011}. Indeed, since the information follows the positive direction of time, the solution in the time-slab $[t_n,t_{n+1}]$ depends only on the solution at the time instant $t_n^-$.
Employing DG methods in both the space and time dimensions leads to a fully DG space-time formulation, as in \cite{Delfour81,Vegt2006,WeGeSc2001,AnMaMi20}.
More generally, space-time methods have been largely employed for hyperbolic problems. Indeed, high order approximations in both space and time are simple to obtain, achieving spectral convergence of the space-time error through $p$-refinement. In addition, stability can be achieved with local CFL conditions, as in \cite{MoRi05}, increasing computational efficiency.
Space-time methods can be divided according to which type of space-time partition they employ. In structured techniques \cite{CangianiGeorgoulisHouston_2014,Tezduyar06}, the space-time grid is the cartesian product of a spatial mesh and a time partition. Examples of applications to second order hyperbolic problems can be found in \cite{StZa17,ErWi19,BaMoPeSc20}. Unstructured techniques \cite{Hughes88,Idesman07} employ grids generated considering the time as an additional dimension. See \cite{Yin00,AbPeHa06,DoFiWi16} for examples of applications to first order hyperbolic problems. Unstructured methods may have better properties, however they suffer from the difficulty of generating the mesh, especially for three-dimensional problems.
Among unstructured methods, we mention Trefftz techniques \cite{KrMo16,BaGeLi17,BaCaDiSh18}, in which the numerical solution is looked for in the Trefftz space, and the tent-pitching paradigm \cite{GoScWi17}, in which the space-time elements are progressively built on top of each other in order to grant stability of the numerical scheme. Recently, in \cite{MoPe18,PeScStWi20} a combination of Trefftz and tent-pitching techniques has been proposed with application to first order hyperbolic problems.
Finally, a typical approach for second order differential equations consists in reformulating them as a system of first order hyperbolic equations. Thus, the velocity is considered as an additional unknown of the problem, which results in doubling the dimension of the final linear system, cf. \cite{Delfour81,Hughes88,FRENCH1993,JOHNSON1993,ThHe2005}.
The motivation for this work is to overcome the limitations of the space-time DG method presented in \cite{AnMaMi20} for elastodynamics problems. That method integrates the second order (in time) differential problem stemming from the spatial discretization. The resulting stiffness matrix is ill-conditioned, making the use of iterative solvers quite difficult. Hence, direct methods are used, which forces the storage of the stiffness matrix and greatly reduces the range of problems affordable by that method. Here, we propose to change the way the time integration is performed, resulting in a well-conditioned system matrix and making iterative methods employable and complex 3D problems solvable.
In this work, we present a high order discontinuous Galerkin method for the time integration of systems of second-order differential equations stemming from the space discretization of the visco-elastodynamics problem. The differential (in time) problem is first reformulated as a first order system; then, by imposing only weak continuity of tractions across time slabs, we derive a discontinuous Galerkin method. We show the well-posedness of the proposed method through the definition of a suitable energy norm, and we prove stability and \emph{a priori} error estimates. The obtained scheme is implicit, unconditionally stable and super-optimal in terms of accuracy with respect to the integration time step. In addition, the solution strategy adopted for the associated algebraic linear system reduces the complexity and computational cost of the solution, making three dimensional problems (in space) affordable.
The paper is organized as follows. In Section \ref{Sc:Method} we formulate the problem, present its numerical discretization and show that it is well-posed. The stability and convergence properties of the method are discussed in Section \ref{Sc:Convergence}, where we present \textit{a priori} estimates in a suitable norm. In Section \ref{Sc:AlgebraicFormulation}, the equations are rewritten into the corresponding algebraic linear system and a suitable solution strategy is shown. Finally, in Section \ref{Sc:NumericalResults}, the method is validated through several numerical experiments both in two and three dimensions.
Throughout the paper, we denote by $||\aa||$ the Euclidean norm of a vector $\aa \in \mathbb{R}^d$, $d\ge 1$ and by $||A||_{\infty} = \max_{i=1,\dots,m}\sum_{j=1}^n |a_{ij}|$, the $\ell^{\infty}$-norm of a matrix $A\in\mathbb{R}^{m\times n}$, $m,n\ge1$. For a given $I\subset\mathbb{R}$ and $v:I\rightarrow\mathbb{R}$ we denote by $L^p(I)$ and $H^p(I)$, $p\in\mathbb{N}_0$, the classical Lebesgue and Hilbert spaces, respectively, and endow them with the usual norms, see \cite{AdamsFournier2003}. Finally, we indicate the Lebesgue and Hilbert spaces for vector-valued functions as $\bm{L}^p(I) = [L^p(I)]^d$ and $\bm{H}^p(I) = [H^p(I)]^d$, $d\ge1$, respectively.
\section{Discontinuous Galerkin approximation of a second-order initial value problem}
\label{Sc:Method}
For $T>0$, we consider the following model problem \cite{kroopnick}: find $\bm{u}(t) \in\bm{H}^2(0,T]$ such that
\begin{equation}
\label{Eq:SecondOrderEquation}
\begin{cases}
P\ddot{\bm{u}}(t) + L\dot{\bm{u}}(t)+K\bm{u}(t) = \bm{f}(t) \qquad \forall\, t \in (0,T], \\
\bm{u}(0) = \hat{\bm{u}}_0, \\
\dot{\bm{u}}(0) = \hat{\bm{u}}_1,
\end{cases}
\end{equation}
where $P,L,K \in \mathbb{R}^{d\times d}$, $d\geq 1$ are symmetric, positive definite matrices, $\hat{\bm{u}}_0, \hat{\bm{u}}_1 \in \mathbb{R}^d$ and $\bm{f} \in \bm{L}^2(0,T]$. Then, we introduce a variable $\bm{w}:(0,T]\rightarrow\mathbb{R}^{d}$ that is the first derivative of $\bm{u}$, i.e. $\bm{w}(t) = \dot{\bm{u}}(t)$, and reformulate problem \eqref{Eq:SecondOrderEquation} as a system of first order differential equations:
\begin{equation}
\label{Eq:FirstOrderSystem1}
\begin{cases}
K\dot{\bm{u}}(t) - K\bm{w}(t) = \boldsymbol{0} &\forall\, t\in(0,T], \\
P\dot{\bm{w}}(t) +L\bm{w}(t) + K\bm{u}(t) = \bm{f}(t) &\forall\, t\in(0,T], \\
\bm{u}(0) = \hat{\bm{u}}_0, \\
\bm{w}(0) = \hat{\bm{u}}_1.
\end{cases}
\end{equation}
Note that, since $K$ is a positive definite matrix, the first equation in \eqref{Eq:FirstOrderSystem1} is consistent with the definition of $\bm{w}$. By defining $\bm{z} = [\bm{u},\bm{w}]^T\in\mathbb{R}^{2d}$, $\bm{F}=[\bm{0},\bm{f}]^T\in\mathbb{R}^{2d}$, $\bm{z}_0 = [\hat{\bm{u}}_0,\hat{\bm{u}}_1]^T\in\mathbb{R}^{2d}$ and
\begin{equation}\label{def:KA}
\widetilde{K} = \begin{bmatrix}
K & 0 \\
0 & P
\end{bmatrix}\in\mathbb{R}^{2d\times2d}, \quad
A = \begin{bmatrix}
0 & -K \\
K & L
\end{bmatrix}\in\mathbb{R}^{2d\times2d},
\end{equation}
we can write \eqref{Eq:FirstOrderSystem1} as
\begin{equation}
\label{Eq:FirstOrderSystem2}
\begin{cases}
\widetilde{K}\dot{\bm{z}}(t) + A\bm{z}(t) = \bm{F}(t) & \forall\, t\in(0,T], \\
\bm{z}(0) = \bm{z}_0.
\end{cases}
\end{equation}
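For illustration purposes only, the block structure \eqref{def:KA} can be checked on a small example: for any sufficiently smooth $\bm{u}$, the residual of the first-order system \eqref{Eq:FirstOrderSystem2} coincides with that of the second-order problem \eqref{Eq:SecondOrderEquation}. The following sketch (the $2\times2$ matrices are arbitrary sample data, not taken from the paper) verifies this numerically.
\begin{verbatim}
import numpy as np

d = 2
P = np.array([[2.0, 0.3], [0.3, 1.5]])   # arbitrary SPD sample matrices
L = np.array([[1.0, 0.2], [0.2, 2.0]])
K = np.array([[4.0, 1.0], [1.0, 3.0]])

K_tilde = np.block([[K, np.zeros((d, d))], [np.zeros((d, d)), P]])
A = np.block([[np.zeros((d, d)), -K], [K, L]])

rng = np.random.default_rng(0)
u, du, ddu = rng.standard_normal((3, d))          # u, u', u'' at some time t
f = P @ ddu + L @ du + K @ u                      # second-order residual
z, dz = np.concatenate([u, du]), np.concatenate([du, ddu])
assert np.allclose(K_tilde @ dz + A @ z, np.concatenate([np.zeros(d), f]))
\end{verbatim}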
To integrate in time system \eqref{Eq:FirstOrderSystem2}, we first partition the interval $I=(0,T]$ into $N$ time-slabs $I_n = (t_{n-1},t_n]$ having length $\Delta t_n = t_n-t_{n-1}$, for $n=1,\dots,N$ with $t_0 = 0$ and $t_N = T$, as it is shown in Figure \ref{Fig:TimeDomain}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{time_domain.png}
\caption{Example of time domain partition (bottom). Zoom of the time domain partition: values $t_n^+$ and $t_n^-$ are also reported (top).}\label{Fig:TimeDomain}
\end{figure}
Next, we incrementally build (on $n$) an approximation of the exact solution $\bm{u}$ in each time slab $I_n$. In the following we will use the notation
\begin{equation*}
(\bm{u},\bm{v})_I = \int_I \bm{u}(s)\cdot\bm{v}(s)\text{d}s, \quad \langle \bm{u},\bm{v} \rangle_t = \bm{u}(t)\cdot \bm{v}(t),
\end{equation*}
where $\aa\cdot\bm{b}$ stands for the Euclidean scalar product between two vectors $\aa,\bm{b}\in\mathbb{R}^d$. We also denote, for a regular enough $\bm{v}$, the jump operator at $t_n$ as
\begin{equation*}
[\bm{v}]_n = \bm{v}(t_n^+) - \bm{v}(t_n^-) = \bm{v}^+ -\bm{v}^-, \quad \text{for } n\ge 0,
\end{equation*}
where
\begin{equation*}
\bm{v}(t_n^\pm) = \lim_{\epsilon\rightarrow 0^\pm}\bm{v}(t_n+\epsilon), \quad \text{for } n\ge 0.
\end{equation*}
Thus, we focus on the generic interval $I_n$ and assume that the solution on $I_{n-1}$ is known. We multiply equation \eqref{Eq:FirstOrderSystem2} by a (regular enough) test function $\bm{v}(t)\in\mathbb{R}^{2d}$ and integrate in time over $I_n$ obtaining
\begin{equation}
\label{Eq:Weak1}
(\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} = (\bm{F},\bm{v})_{I_n}.
\end{equation}
Next, since $\bm{u} \in\bm{H}^2(0,T]$ and $\bm{w} = \dot{\bm{u}}$, then $\bm{z}\in\bm{H}^1(0,T]$. Therefore, we can add to \eqref{Eq:Weak1} the null term $\widetilde{K}[\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+)$ getting
\begin{equation}
\label{Eq:Weak2}
(\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} +\widetilde{K}[\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+) = (\bm{F},\bm{v})_{I_n}.
\end{equation}
Summing up over all time slabs we define the bilinear form $\mathcal{A}:\bm{H}^1(0,T)\times\bm{H}^1(0,T)\rightarrow\mathbb{R}$
\begin{equation}
\label{Eq:BilinearForm}
\mathcal{A}(\bm{z},\bm{v}) = \sum_{n=1}^N (\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{z}]_n\cdot\bm{v}(t_n^+) + \widetilde{K}\bm{z}(0^+)\cdot\bm{v}(0^+),
\end{equation}
and the linear functional $\mathcal{F}:\bm{L}^2(0,T)\rightarrow\mathbb{R}$ as
\begin{equation}
\label{Eq:LinearFunctional}
\mathcal{F}(\bm{v}) = \sum_{n=1}^N (\bm{F},\bm{v})_{I_n} + \widetilde{K}\bm{z}_0\cdot\bm{v}_{0}^+,
\end{equation}
where we have used that $\bm{z}(0^-) = \bm{z}_0$. Now, we introduce the functional spaces
\begin{equation}
\label{Eq:PolynomialSpace}
V_n^{r_n} = \{ \bm{z}:I_n\rightarrow\mathbb{R}^{2d} \text{ s.t. } \bm{z}\in[\mathcal{P}^{r_n}(I_n)]^{2d} \},
\end{equation}
where $\mathcal{P}^{r_n}(I_n)$ is the space of polynomials of degree at most $r_n$ on $I_n$,
\begin{equation}
\label{Eq:L2Space}
\mathcal{V}^{\bm{r}} = \{ \bm{z}\in\bm{L}^2(0,T] \text{ s.t. } \bm{z}|_{I_n} = [\bm{u},\bm{w}]^T\in V_n^{r_n} \},
\end{equation}
and
\begin{equation}
\label{Eq:CGSpace}
\mathcal{V}_{CG}^{\bm{r}} = \{ \bm{z}\in[\mathbb{C}^0(0,T]]^{2d} \text{ s.t. } \bm{z}|_{I_n} = [\bm{u},\bm{w}]^T\in V_n^{r_n} \text{ and } \dot{\bm{u}} = \bm{w} \},
\end{equation}
where $\bm{r} = (r_1,\dots,r_N) \in \mathbb{N}^N$ is the polynomial degree vector.
Before presenting the discontinuous Galerkin formulation of problem~\eqref{Eq:FirstOrderSystem2}, we introduce, as in \cite{ScWi2010}, the following operator $\mathcal{R}$, which is used only for the purpose of the analysis and does not need to be computed in practice.
\begin{mydef}
\label{Def:Reconstruction}
We define a reconstruction operator $\mathcal{R}:\mathcal{V}^{\bm{r}}\rightarrow\mathcal{V}^{\bm{r}}_{CG}$ such that
\begin{equation}
\label{Eq:Reconstruction}
\begin{split}
(\mathcal{R}(\bm{z})',\bm{v})_{I_n} &= (\bm{z}',\bm{v})_{I_n} + [\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+) \quad \forall\, \bm{v}\in[\mathcal{P}^{r_n}(I_n)]^{2d}, \\ \mathcal{R}(\bm{z})(t_{n-1}^+) &= \bm{z}(t_{n-1}^-) \quad \forall\, n =1,\dots,N.
\end{split}
\end{equation}
\end{mydef}
\noindent Now, we can properly define the functional space
\begin{equation}
\label{Eq:DGSpace}
\begin{split}
\mathcal{V}_{DG}^{\bm{r}} = \{& \bm{z}\in\mathcal{V}^{\bm{r}} \text{ s.t. } \exists\, \hat{\bm{z}} = \mathcal{R}(\bm{z}) \in\mathcal{V}_{CG}^{\bm{r}}\},
\end{split}
\end{equation}
and introduce the DG formulation of \eqref{Eq:FirstOrderSystem2}, which reads as follows: find $\bm{z}_{DG}\in\mathcal{V}_{DG}^{\bm{r}}$ such that
\begin{equation}
\label{Eq:WeakProblem}
\mathcal{A}(\bm{z}_{DG},\bm{v}) = \mathcal{F}(\bm{v}) \qquad \forall\, \bm{v}\in\mathcal{V}_{DG}^{\bm{r}}.
\end{equation}
For the forthcoming analysis we introduce the following mesh-dependent energy norm.
\begin{myprop}
\label{Pr:Norm}
The function $|||\cdot|||:\mathcal{V}_{DG}^{\bm{r}}\rightarrow\mathbb{R}^{+}$ defined as
\begin{equation}
\label{Eq:Norm}
|||\bm{z}|||^2 = \sum_{n=1}^N ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1}(\widetilde{K}^{\frac{1}{2}}[\bm{z}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(T^-))^2,
\end{equation}
with
$
\widetilde{L} = \begin{bmatrix}
0 & 0 \\
0 & L^{\frac{1}{2}}
\end{bmatrix}\in\mathbb{R}^{2d\times2d}.
$
is a norm on $\mathcal{V}_{DG}^{\bm{r}}$.
\end{myprop}
\begin{proof}
It is clear that homogeneity and subadditivity hold. In addition, it is trivial that if $\bm{z} = 0$ then $|||\bm{z}|||=0$. Therefore, we suppose $|||\bm{z}||| = 0$ and observe that
\begin{equation*}
||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}=||L^{\frac{1}{2}}\bm{w}||_{\bm{L}^2(I_n)}=0 \quad \forall n=1,\dots,N.
\end{equation*}
Since $L$ is positive definite we have $\bm{w} = \textbf{0} $ on $[0,T]$. Hence, $\bm{w}'=\textbf{0}$ on $[0,T]$. Using this result in \eqref{Eq:Reconstruction} and writing $\bm{v} = [\bm{v}_1,\bm{v}_2]^T$, we get
\begin{equation*}
(\hat{\bm{w}}',\bm{v}_2)_{I_n} = 0 \quad \forall \bm{v}_2 \in [\mathcal{P}^{r_n}(I_n)]^d \text{ and }\forall n=1,\dots,N.
\end{equation*}
Therefore $\hat{\bm{w}}'=\textbf{0}$ on $[0,T]$. In addition, from \eqref{Eq:Reconstruction} we get $\textbf{0}=\bm{w}(t_1^-)=\hat{\bm{w}}(t_1^+)$, which combined with the previous result gives $\hat{\bm{w}}=\textbf{0}$ on $[0,T]$.
Now, since $\hat{\bm{z}}\in \mathcal{V}^{\bm{r}}_{CG}$, we have $\hat{\bm{u}}' = \hat{\bm{w}} = \textbf{0}$ on $[0,T]$. Therefore, using again \eqref{Eq:Reconstruction} we get
\begin{equation*}
(\bm{u}',\bm{v}_1)_{I_n} + [\bm{u}]_{n-1}\cdot \bm{v}_1(t_{n-1}^+)= 0 \quad \forall \bm{v}_1 \in [\mathcal{P}^{r_n}(I_n)]^d \text{ and }\forall n=1,\dots,N.
\end{equation*}
Take $n = N$, then $[\bm{u}]_{N-1}=\textbf{0}$ (from $|||\bm{z}||| = 0$) and therefore $\bm{u}'=\textbf{0}$ on $I_N$. Combining this result with $\bm{u}(T^-)=\textbf{0}$ we get $\bm{u}=\textbf{0}$ on $I_N$ from which we derive $\textbf{0}=\bm{u}(t_{N-1}^+)=\bm{u}(t_{N-1}^-)$. Iterating until $n=2$ we get $\bm{u}=\textbf{0}$ on $I_n$, for any $n=2,\dots,N$. Moreover
\begin{equation*}
\textbf{0}=\bm{u}(t_1^+)=\bm{u}(t_1^-)=\hat{\bm{u}}(t_1^+)=\hat{\bm{u}}(t_1^-)=\hat{\bm{u}}(0^+)=\bm{u}(0^-),
\end{equation*} since $\hat{\bm{u}}' = \textbf{0}$ on $I_1$. Using again $|||\bm{z}|||=0$ we get $\bm{u}(0^+)=\textbf{0}$, hence $[\bm{u}]_0=\textbf{0}$. Taking $n=1$ we get $\bm{u}=\textbf{0}$ on $I_1$. Thus, $\bm{z}=\textbf{0}$ on $[0,T]$.
\end{proof}
The following result states the well-posedness of \eqref{Eq:WeakProblem}.
\begin{myprop}
\label{Pr:WellPosedness} Problem~\eqref{Eq:WeakProblem} admits a unique solution $\bm{z}_{DG} \in \mathcal{V}_{DG}^{\bm{r}}$.
\end{myprop}
\begin{proof}
By taking $\bm{v} = \bm{z}$ we get
\begin{equation*}
\mathcal{A}(\bm{z},\bm{z}) = \sum_{n=1}^N (\widetilde{K}\dot{\bm{z}},\bm{z})_{I_n} + (A\bm{z},\bm{z})_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{z}]_n\cdot\bm{z}(t_n^+) + (\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2.
\end{equation*}
Since $\widetilde{K}$ is symmetric, integrating by parts we have that
\begin{equation*}
(\widetilde{K}\dot{\bm{z}},\bm{z})_{I_n} = \frac{1}{2}\langle \widetilde{K}\bm{z},\bm{z} \rangle_{t_n^-} - \frac{1}{2}\langle \widetilde{K}\bm{z},\bm{z} \rangle_{t_{n-1}^+}.
\end{equation*}
Then, the second term can be rewritten as
\begin{equation*}
(A\bm{z},\bm{z})_{I_n} = (-K\bm{w},\bm{u})_{I_n} + (K\bm{u},\bm{w})_{I_n} + (L\bm{w},\bm{w})_{I_n} = ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2,
\end{equation*}
cf. also \eqref{def:KA}. Therefore
\begin{equation*}
\mathcal{A}(\bm{z},\bm{z}) = \sum_{n=1}^N ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1} (\widetilde{K}^{\frac{1}{2}}[\bm{z}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(T^-))^2 = |||\bm{z}|||^2.
\end{equation*}
The result follows from Proposition~\ref{Pr:Norm}, the bilinearity of $\mathcal{A}$ and the linearity of $\mathcal{F}$.
\end{proof}
\section{Convergence analysis}\label{Sc:Convergence}
In this section, we first present an \textit{a priori} stability bound for the numerical solution of \eqref{Eq:WeakProblem}, which can be obtained by a direct application of the Cauchy--Schwarz inequality. Then, we use the latter to prove an optimal error estimate for the numerical error in the energy norm \eqref{Eq:Norm}.
\begin{myprop}
Let $\bm{f} \in \bm{L}^2(0,T]$, $\hat{\bm{u}}_0, \hat{\bm{u}}_1 \in \mathbb{R}^d$, and let $\bm{z}_{DG} \in \mathcal{V}_{DG}^{\bm{r}}$ be the solution of \eqref{Eq:WeakProblem}, then it holds
\begin{equation}
\label{Eq:Stability}
|||\bm{z}_{DG}||| \lesssim \Big(\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2+(K^{\frac{1}{2}}\hat{\bm{u}}_0)^2+(P^{\frac{1}{2}}\hat{\bm{u}}_1)^2\Big)^{\frac{1}{2}}.
\end{equation}
\end{myprop}
\begin{proof}
From the definition of the norm $|||\cdot|||$ given in \eqref{Eq:Norm} and the arithmetic-geometric inequality we have
\begin{equation*}
\begin{split}
|||\bm{z}_{DG}|||^2 &= \mathcal{A}(\bm{z}_{DG},\bm{z}_{DG}) = \mathcal{F}(\bm{z}_{DG}) = \sum_{n=1}^N (\bm{F},\bm{z}_{DG})_{I_n} + \widetilde{K}\bm{z}_0\cdot\bm{z}_{DG}(0^+) \\
&\lesssim \frac{1}{2}\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}\sum_{n=1}^N ||\widetilde{L}\bm{z}_{DG}||_{\bm{L}^2(I_n)}^2 + (\widetilde{K}^{\frac{1}{2}} \bm{z}_{0})^2 + \frac{1}{4}(\widetilde{K}^{\frac{1}{2}} \bm{z}_{DG}(0^+))^2 \\
&\lesssim \frac{1}{2}\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + (\widetilde{K}^{\frac{1}{2}} \bm{z}_{0})^2 + \frac{1}{2}|||\bm{z}_{DG}|||^2.
\end{split}
\end{equation*}
Hence,
\begin{equation*}
|||\bm{z}_{DG}|||^2 \lesssim \sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + (K^{\frac{1}{2}} \hat{\bm{u}}_{0})^2 + (P^{\frac{1}{2}}\hat{\bm{u}}_{1})^2.
\end{equation*}
\end{proof}
Before deriving an a priori estimate for the numerical error we introduce some preliminary results. We refer the interested reader to \cite{ScSc2000} for further details.
\begin{mylemma}
\label{Le:Projector}
Let $I=(-1,1)$ and $u\in L^2(I)$ continuous at $t=1$, the projector $\Pi^r u \in \mathcal{P}^r(I)$, $r\in\mathbb{N}_0$, defined by the $r+1$ conditions
\begin{equation}
\label{Eq:Projector}
\Pi^r u (1) = u(1), \qquad (u - \Pi^r u,q)_{I} = 0 \quad\forall\, q\in\mathcal{P}^{r-1}(I),
\end{equation}
is well posed. Moreover, let $I=(a,b)$, $\Delta t = b-a$, $r\in\mathbb{N}_0$ and $u\in H^{s_0+1}(I)$ for some $s_0\in\mathbb{N}_0$. Then
\begin{equation}
\label{Eq:ProjectionError}
||u-\Pi^r u||_{L^2(I)}^2 \le C\bigg(\frac{\Delta t}{2}\bigg)^{2(s+1)}\frac{1}{r^2}\frac{(r-s)!}{(r+s)!}||u^{(s+1)}||_{L^2(I)}^2
\end{equation}
for any integer $0\le s \le \min(r,s_0)$. The constant $C$ depends on $s_0$ but is independent of $r$ and $\Delta t$.
\end{mylemma}
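To make the projector of Lemma~\ref{Le:Projector} concrete, note that, in the Legendre basis, the orthogonality conditions fix the first $r$ Legendre coefficients of $\Pi^r u$ to those of $u$, while the endpoint condition fixes the remaining one. The following sketch (our own illustration, with an arbitrary smooth test function) realizes $\Pi^r$ in this way and shows the decay of the $L^2$ error predicted by \eqref{Eq:ProjectionError}.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as Leg

def project(u, r, nq=60):
    xq, wq = Leg.leggauss(nq)                 # Gauss quadrature on (-1,1)
    c = np.zeros(r + 1)
    for j in range(r):                        # (u - Pi u, P_j) = 0 for j < r
        c[j] = (2*j + 1)/2.0 * np.sum(wq * u(xq) * Leg.Legendre.basis(j)(xq))
    c[r] = u(1.0) - c[:r].sum()               # P_j(1) = 1: enforce Pi u(1) = u(1)
    return Leg.Legendre(c)

u = lambda t: np.exp(t) * np.sin(3*t)         # arbitrary smooth test function
xq, wq = Leg.leggauss(80)
for r in (2, 4, 6, 8):
    err = np.sqrt(np.sum(wq * (u(xq) - project(u, r)(xq))**2))
    print(r, err)                             # L2 error decays rapidly in r
\end{verbatim}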
Proceeding similarly to \cite{ScSc2000}, we now prove the following preliminary estimate for the derivative of the projection $\Pi^r u$.
\begin{mylemma}
\label{Le:DerivativeProjectionErrorInf}
Let $u\in H^1(I)$ be continuous at $t=1$. Then, it holds
\begin{equation}
\label{Eq:DerivativeProjectionErrorInf}
||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \le C(r+1)\inf_{q \in \mathcal{P}^r(I)} \Bigg\{||u'-q'||_{L^2(I)}^2 \Bigg\}.
\end{equation}
\end{mylemma}
\begin{proof}
Let $u' =\sum_{i=1}^{\infty} u_i L'_i$ be the Legendre expansion of $u'$ with coefficients $u_i\in\mathbb{R}$, $i\ge 1$. Then (cf. Lemma 3.2 in \cite{ScSc2000})
\begin{equation*}
\big(\Pi^r u\big)'=\sum_{i=1}^{r-1} u_i L'_i + \sum_{i=r}^{\infty} u_i L'_r
\end{equation*}
Now, for $r\in\mathbb{N}_0$, we denote by $\widehat{P}^r$ the $L^2(I)$-projection onto $\mathcal{P}^r(I)$. Hence,
\begin{equation*}
u' - \big(\Pi^r u\big)'= \sum_{i=r}^{\infty} u_i L'_i - \sum_{i=r}^{\infty} u_i L'_r = \sum_{i=r+1}^{\infty} u_i L'_i - \sum_{i=r+1}^{\infty} u_i L'_r = u' - \big(\widehat{P}^r u\big)' - \sum_{i=r+1}^{\infty} u_i L'_r.
\end{equation*}
Recalling that $||L'_r||^2_{L^2(I)} = r(r+1)$ we have
\begin{equation*}
||u' - \big(\Pi^r u\big)'||_{L^2(I)}^2 \lesssim ||u' - \big(\widehat{P}^r u\big)'||_{L^2(I)}^2 + \Bigg|\sum_{i=r+1}^{\infty} u_i\Bigg|^2 r(r+1).
\end{equation*}
Finally, we use that
$ \Bigg|\sum_{i=r+1}^{\infty} u_i\Bigg| \le \frac{C}{r}||u'||_{L^2(I)}
$ (cf. Lemma~3.6 in \cite{ScSc2000}) and get
\begin{equation}
\label{Eq:DerivativeProjectionError}
||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \le C\big\{||u'-\big(\widehat{P}^r u\big)'||_{L^2(I)}^2+(r+1)||u'||_{L^2(I)}^2 \big\}.
\end{equation}
Now consider an arbitrary $q\in\mathcal{P}^r(I)$ and apply \eqref{Eq:DerivativeProjectionError} to $u-q$. The thesis follows from the polynomial reproduction properties of the projectors $\Pi^r$ and $\widehat{P}^r$ and from the fact that $||u-\widehat{P}^r u||_{L^2(I)} \le ||u-q||_{L^2(I)}$ for any $q\in\mathcal{P}^r(I)$.
\end{proof}
By employing Proposition~3.9 in \cite{ScSc2000} and Lemma \ref{Le:DerivativeProjectionErrorInf} we obtain the following result.
\begin{mylemma}
\label{Le:DerivativeProjectionError}
Let $I=(a,b)$, $\Delta t = b-a$, $r\in\mathbb{N}_0$ and $u\in H^{s_0+1}(I)$ for some $s_0\in\mathbb{N}_0$. Then
\begin{equation*}
||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \lesssim \bigg(\frac{\Delta t}{2}\bigg)^{2(s+1)}(r+2)\frac{(r-s)!}{(r+s)!}||u^{(s+1)}||_{L^2(I)}^2
\end{equation*}
for any integer $0\le s \le \min(r,s_0)$. The hidden constants depend on $s_0$ but are independent of $r$ and $\Delta t$.
\end{mylemma}
Finally we observe that the bilinear form appearing in formulation \eqref{Eq:WeakProblem} is strongly consistent, i.e.
\begin{equation}
\label{Eq:Consistency}
\mathcal{A}(\bm{z}-\bm{z}_{DG},\bm{v}) = 0 \qquad \forall\,\bm{v}\in\mathcal{V}^{\bm{r}}_{DG}.
\end{equation}
We now state the following convergence result.
\begin{myth}
\label{Th:ErrorEstimate}
Let $\hat{\bm{u}}_{0},\hat{\bm{u}}_{1} \in \mathbb{R}^{d}$. Let $\bm{z}$ be the solution of problem~\eqref{Eq:FirstOrderSystem2} and let $\bm{z}_{DG}\in\mathcal{V}_{DG}^{\bm{r}}$ be its finite element approximation. If $\bm{z}|_{I_n}\in \bm{H}^{s_n}(I_n)$, for any $n=1,\dots,N$ with $s_n\geq2$, then it holds
\begin{equation}
\label{Eq:ErrorEstimate}
|||\bm{z}-\bm{z}_{DG}||| \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{\mu_n+\frac{1}{2}}\Bigg((r_n+2)\frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}\Bigg)^{\frac{1}{2}}||\bm{z}||_{H^{\mu_n+1}(I_n)},
\end{equation}
where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$ and the hidden constants depend on the norm of matrices $L$, $K$ and $A$.
\end{myth}
\begin{proof}
We set $\bm{e} = \bm{z} - \bm{z}_{DG} = (\bm{z} - \Pi_I^r \bm{z}) + (\Pi_I^r \bm{z} - \bm{z}_{DG}) = \bm{e}^{\pi} + \bm{e}^{h}$. Hence we have $|||\bm{e}||| \le |||\bm{e}^{\pi}||| + |||\bm{e}^{h}|||$. Employing the properties of the projector \eqref{Eq:Projector} and estimates \eqref{Eq:ProjectionError} and \eqref{Eq:DerivativeProjectionError}, we can bound $|||\bm{e}^{\pi}|||$ as
\begin{equation*}
\begin{split}
|||\bm{e}^{\pi}|||^2 &= \sum_{n=1}^N ||\widetilde{L}\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{e}^{\pi}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1}(\widetilde{K}^{\frac{1}{2}}[\bm{e}^{\pi}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{e}^{\pi}(T^-))^2 \\
& = \sum_{n=1}^N ||\widetilde{L}\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2} \sum_{n=1}^N \Bigg(-\int_{t_{n-1}}^{t_{n}}\widetilde{K}^{\frac{1}{2}}\dot{\bm{e}}^{\pi}(s)ds\Bigg)^2 \\
& \lesssim \sum_{n=1}^N \Big(||\bm{e}^{\pi}||_{L^2(I_n)}^2 + \Delta t ||\dot{\bm{e}^{\pi}}||_{L^2(I_n)}^2 \Big) \\
& \lesssim \sum_{n=1}^N \bigg[\bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+2} \frac{1}{r_n^2} + \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+1} (r_n+2)\bigg] \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||^2_{H^{\mu_n+1}(I_n)} \\
& \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+1} (r_n+2) \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||^2_{H^{\mu_n+1}(I_n)},
\end{split}
\end{equation*}
where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$.
For the term $|||\bm{e}^{h}|||$ we use \eqref{Eq:Consistency} and integrate by parts to get
\begin{equation*}
\begin{split}
|||\bm{e}^{h}|||^2 &= \mathcal{A}(\bm{e}^h,\bm{e}^h) = -\mathcal{A}(\bm{e}^{\pi},\bm{e}^h) \\
& = \sum_{n=1}^N (\widetilde{K}\dot{\bm{e}}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{e}^{\pi}]_n\cdot\bm{e}^h(t_n^+) + \widetilde{K}\bm{e}^{\pi}(0^+)\cdot\bm{e}^h(0^+) \\
& = \sum_{n=1}^N (\widetilde{K}\bm{e}^{\pi},\dot{\bm{e}}^h)_{I_n} + \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{e}^{h}]_n\cdot\bm{e}^{\pi}(t_n^-) - \widetilde{K}\bm{e}^{\pi}(T^-)\cdot\bm{e}^h (T^-).
\end{split}
\end{equation*}
Thanks to \eqref{Eq:Projector}, only the second term of the last equation above does not vanish. Thus, we employ the Cauchy-Schwarz and arithmetic-geometric inequalities to obtain
\begin{equation*}
|||\bm{e}^{h}|||^2 = \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} \lesssim \frac{1}{2} \sum_{n=1}^N ||\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2} \sum_{n=1}^N ||\widetilde{L}\bm{e}^{h}||_{L^2(I_n)}^2 \lesssim \frac{1}{2} \sum_{n=1}^N ||\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2}|||\bm{e}^h|||^2.
\end{equation*}
Hence,
\begin{equation*}
|||\bm{e}^{h}|||^2 \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+2} \frac{1}{r_n^2} \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||^2_{H^{\mu_n+1}(I_n)},
\end{equation*}
where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$ and the thesis follows.
\end{proof}
\section{Algebraic formulation}
\label{Sc:AlgebraicFormulation}
In this section we derive the algebraic formulation stemming from the DG discretization of \eqref{Eq:WeakProblem} on the time slab $I_n$.
We consider on $I_n$ a local polynomial degree $r_n$. In practice, since we use discontinuous functions, we can compute the numerical solution one time slab at a time, assuming that the initial conditions inherited from the previous time slab are known. Hence, problem \eqref{Eq:WeakProblem} reduces to: find $\bm{z}\in V_n^{r_n}$ such that
\begin{equation}
\label{Eq:WeakFormulationReduced}
(\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} + \langle\widetilde{K}\bm{z},\bm{v}\rangle_{t_{n-1}^+} = (\bm{F},\bm{v})_{I_n} + \widetilde{K}\bm{z}(t_{n-1}^-)\cdot\bm{v}({t_{n-1}^+}), \quad \forall\,n=1,\dots,N.
\end{equation}
Introducing a basis $\{\psi^{\ell}(t)\}_{{\ell}=1,\dots,r_n+1}$ for the polynomial space $\mathcal{P}^{r_n}(I_n)$ we define a vectorial basis $\{ \boldsymbol{\Psi}_i^{\ell}(t) \}_{i=1,\dots,2d}^{{\ell}=1,\dots,r_n+1}$ of $V_n^{r_n}$ where
\begin{equation*}
\{ \boldsymbol{\Psi}_i^{\ell}(t) \}_j =
\begin{cases}
\psi^{\ell}(t) & {\ell} = 1,\dots,r_n+1, \quad \text{if } i=j, \\
0 & {\ell} = 1,\dots,r_n+1, \quad \text{if } i\ne j.
\end{cases}
\end{equation*}
Then, we set $D_n=d(r_n+1)$ and write the trial function $\bm{z}_n = \bm{z}_{DG}|_{I_n} \in V_n^{r_n}$ as
\begin{equation*}
\bm{z}_n(t) = \sum_{j=1}^{2d} \sum_{m=1}^{r_n+1} \alpha_{j}^m \boldsymbol{\Psi}_j^m(t),
\end{equation*}
where $\alpha_{j}^m\in\mathbb{R}$ for $j=1,\dots,2d$, $m=1,\dots,r_n+1$. Writing \eqref{Eq:WeakFormulationReduced} for any test function $\boldsymbol{\Psi}_i^{\ell}(t)$, $i=1,\dots,2d$, $\ell=1,\dots,r_n+1$ we obtain the linear system
\begin{equation}
\label{Eq:LinearSystem}
M\bm{Z}_n = \bm{G}_n,
\end{equation}
where $\bm{Z}_n,\bm{G}_n \in \mathbb{R}^{2D_n}$ are the vectors of the expansion coefficients of the numerical solution and of the right-hand side on the interval $I_n$ with respect to the chosen basis. Here $M\in\mathbb{R}^{2D_n\times2D_n}$ is the local stiffness matrix defined as
\begin{equation}
\label{Eq:StiffnessMatrix}
M = \widetilde{K} \otimes (N^1+N^3) + A \otimes N^2
= \begin{bmatrix}
K \otimes (N^1 + N^3) & -K \otimes N^2 \\
K \otimes N^2 & P \otimes (N^1+N^3) + L \otimes N^2
\end{bmatrix},
\end{equation}
where $N^1,N^2,N^3 \in \mathbb{R}^{(r_n+1)\times(r_n+1)}$ are the local time matrices
\begin{equation}
\label{Eq:TimeMatrices}
N_{{\ell}m}^1 = (\dot{\psi}^m,\psi^{\ell})_{I_n}, \qquad N_{{\ell}m}^2 = (\psi^m,\psi^{\ell})_{I_n}, \qquad N_{{\ell}m}^3 = \langle\psi^m,\psi^{\ell}\rangle_{t_{n-1}^+},
\end{equation}
for $\ell,m=1,...,r_n+1$. Similarly to \cite{ThHe2005}, we reformulate system \eqref{Eq:LinearSystem} to reduce the computational cost of its resolution phase. We first introduce the vectors $\bm{G}_n^u,\, \bm{G}_n^w,\, \bm{U}_n,\, \bm{W}_n \in \mathbb{R}^{D_n}$ such that
\begin{equation*}
\bm{G}_n = \big[\bm{G}_n^u, \bm{G}_n^w\big]^T, \qquad \bm{Z}_n = \big[\bm{U}_n, \bm{W}_n\big]^T
\end{equation*}
and the matrices
\begin{equation}
N^4 = (N^1+N^3)^{-1}, \qquad N^5 = N^4N^2, \qquad N^6 = N^2N^4, \qquad N^7 = N^2N^4N^2.
\end{equation}
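For the reader's convenience, we sketch below one possible numerical assembly of the local time matrices $N^1$, $N^2$, $N^3$ and of the derived matrices $N^4$--$N^7$. The snippet is purely illustrative: the modal Legendre basis $\psi^{\ell}(t) = P_{\ell-1}(2(t-t_{n-1})/\Delta t_n - 1)$ and the quadrature rule are our own choices and are not prescribed by the method.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as Leg

def time_matrices(t0, t1, r):
    dt = t1 - t0
    xq, wq = Leg.leggauss(r + 2)              # Gauss quadrature on (-1,1)
    V  = np.stack([Leg.Legendre.basis(l)(xq) for l in range(r + 1)])
    dV = np.stack([Leg.Legendre.basis(l).deriv()(xq) * 2.0/dt
                   for l in range(r + 1)])
    e0 = np.array([Leg.Legendre.basis(l)(-1.0) for l in range(r + 1)])
    N1 = (V * wq) @ dV.T * dt/2.0             # N1[l,m] = (psi_m', psi_l)_{I_n}
    N2 = (V * wq) @ V.T  * dt/2.0             # N2[l,m] = (psi_m , psi_l)_{I_n}
    N3 = np.outer(e0, e0)                     # N3[l,m] = psi_m(t0+) psi_l(t0+)
    N4 = np.linalg.inv(N1 + N3)
    return N1, N2, N3, N4, N4 @ N2, N2 @ N4, N2 @ N4 @ N2   # ..., N5, N6, N7
\end{verbatim}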
Next, we apply a block Gaussian elimination getting
\begin{equation*}
M = \begin{bmatrix}
K \otimes (N^1 + N^3) & -K \otimes N^2 \\
0 & P \otimes (N^1+N^3) + L \otimes N^2 + K \otimes N^7
\end{bmatrix},
\end{equation*}
and
\begin{equation*}
\bm{G}_n = \begin{bmatrix}
\bm{G}_n^u \\
\bm{G}_n^w - \mathcal{I}_d\otimes N^6 \bm{G}_n^u
\end{bmatrix}.
\end{equation*}
We define the matrix $\widehat{M}_n\in\mathbb{R}^{D_n\times D_n}$ as
\begin{equation}\label{Eq:TimeMatrix}
\widehat{M}_n = P \otimes (N^1+N^3) + L \otimes N^2 + K \otimes N^7,
\end{equation}
and the vector $\widehat{\bm{G}}_n\in\mathbb{R}^{D_n}$ as
\begin{equation}
\widehat{\bm{G}}_n = \bm{G}_n^w - \mathcal{I}_{d}\otimes N^6 \bm{G}_n^u.
\end{equation}
Then, we multiply the first block by $K^{-1}\otimes N^4$ and, exploiting the properties of the Kronecker product, we get
\begin{equation*}
\begin{bmatrix}
\mathcal{I}_{D_n} & -\mathcal{I}_{d} \otimes N^5 \\
0 & \widehat{M}_n
\end{bmatrix}
\begin{bmatrix}
\bm{U}_n \\
\bm{W}_n
\end{bmatrix} =
\begin{bmatrix}
(K^{-1}\otimes N^4)\bm{G}_n^u \\
\widehat{\bm{G}}_n
\end{bmatrix}.
\end{equation*}
Therefore, we first obtain the velocity $\bm{W}_n$ by solving the linear system
\begin{equation}\label{Eq:VelocitySystem}
\widehat{M}_n \bm{W}_n = \widehat{\bm{G}}_n,
\end{equation}
and then, we can compute the displacement $\bm{U}_n$ as
\begin{equation}\label{Eq:DisplacementUpdate1}
\bm{U}_n = \mathcal{I}_{d} \otimes N^5 \bm{W}_n + (K^{-1}\otimes N^4)\bm{G}_n^u.
\end{equation}
Finally, since $\big[\bm{G}_n^u\big]_i^{\ell} = K\bm{U}(t_{n-1}^-)\cdot\boldsymbol{\Psi}_i^{\ell}(t_{n-1}^+)$, by defining $\bar{\bm{G}}_n^u\in \mathbb{R}^{D_n}$ as
\begin{equation}
\big[\bar{\bm{G}}_n^u\big]_i^{\ell} = \bm{U}(t_{n-1}^-)\cdot\boldsymbol{\Psi}_i^{\ell}(t_{n-1}^+),
\end{equation}
we can rewrite \eqref{Eq:DisplacementUpdate1} as
\begin{equation}\label{Eq:AltDisplacementUpdate2}
\bm{U}_n = \mathcal{I}_{d} \otimes N^5 \bm{W}_n + (\mathcal{I}_{d}\otimes N^4)\bar{\bm{G}}_n^u.
\end{equation}
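A minimal sketch of the resulting per-slab solution procedure is reported below. It is only meant to illustrate the use of the Kronecker structure; in particular, the explicit factors $K^{-1}$ and $\widehat{M}_n^{-1}$ are acceptable only for small systems, whereas in the actual implementation \eqref{Eq:VelocitySystem} is solved iteratively and the update \eqref{Eq:AltDisplacementUpdate2} avoids $K^{-1}$ altogether.
\begin{verbatim}
import numpy as np

def slab_solve(P, L, K, N1, N2, N3, Gu, Gw):
    # P, L, K: d x d matrices; N1, N2, N3: (r+1) x (r+1) local time matrices;
    # Gu, Gw: right-hand side blocks on the current slab (component-wise ordering)
    d = P.shape[0]
    N4 = np.linalg.inv(N1 + N3)
    N5, N6, N7 = N4 @ N2, N2 @ N4, N2 @ N4 @ N2
    M_hat = np.kron(P, N1 + N3) + np.kron(L, N2) + np.kron(K, N7)
    G_hat = Gw - np.kron(np.eye(d), N6) @ Gu
    W = np.linalg.solve(M_hat, G_hat)                    # velocity dofs W_n
    U = np.kron(np.eye(d), N5) @ W + np.kron(np.linalg.inv(K), N4) @ Gu
    return U, W                                          # displacement, velocity
\end{verbatim}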
\section{Numerical results}
\label{Sc:NumericalResults}
In this section we report a wide set of numerical experiments to validate the theoretical estimates and assess the performance of the DG method proposed in Section \ref{Sc:Method}. We first present a set of verification tests for scalar- and vector-valued problems, then we test our formulation on two- and three-dimensional elastodynamic wave propagation problems, using the open-source software SPEED (\url{http://speed.mox.polimi.it/}).
\subsection{Scalar problem}
\label{Sec:1DConvergence}
For a time interval $I=[0,T]$, with $T=10$, we solve the scalar problem
\begin{equation}
\label{Eq:ScalarProblem}
\begin{cases}
\dot{u}(t) = w(t) & \forall t\in [0,10],\\
\dot{w}(t) + 5 w(t) + 6u(t) = f(t) & \forall t\in [0,10], \\
u(0) = 2, \\
w(0) = -5,
\end{cases}
\end{equation}
whose exact solution (with $f\equiv 0$) is $\bm{z}(t) = (w(t),u(t)) = (-3e^{-3t}-2e^{-2t},\,e^{-3t}+e^{-2t})$ for $t\in[0,10]$.
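As a simple sanity check (not part of the analysis), the exact solution above can be cross-checked against a standard ODE integrator; the following sketch uses SciPy and is purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# u' = w,  w' = -5w - 6u  (f = 0),  u(0) = 2,  w(0) = -5
sol = solve_ivp(lambda t, z: [z[1], -5.0*z[1] - 6.0*z[0]],
                (0.0, 10.0), [2.0, -5.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 10.0, 200)
u_ex = np.exp(-3*t) + np.exp(-2*t)
w_ex = -3*np.exp(-3*t) - 2*np.exp(-2*t)
print(np.abs(sol.sol(t)[0] - u_ex).max(), np.abs(sol.sol(t)[1] - w_ex).max())
\end{verbatim}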
We partition the time domain $I$ into $N$ time slabs of uniform length $\Delta t$ and we suppose the polynomial degree to be constant for each time-slab, i.e. $r_n = r$, for any $n=1,\dots,N$. We first compute
the error $|||\bm{z}_{DG} -\bm{z} |||$ as a function of the time-step $\Delta t$ for several choices of the polynomial degree $r$, as shown in Figure \ref{Fig:ConvergenceTest0D} (left). The obtained results confirm the super-optimal convergence properties of the scheme, as predicted by \eqref{Eq:ErrorEstimate}. Finally, since $\bm{z} \in C^{\infty}(\mathbb{R})$, from Figure \ref{Fig:ConvergenceTest0D} (right) we can observe that the numerical error decreases exponentially with respect to the polynomial degree $r$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{ConvergenceTest0D.png}
\includegraphics[width=0.49\textwidth]{ConvergenceTest0D_Degree.png}
\caption{Test case of Section~\ref{Sec:1DConvergence}. Left: computed error $|||\bm{z}_{DG}-\bm{z}|||$ as a function of time-step $\Delta t$, with $r = 2,3,4,5$. Right: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of polynomial degree $r$, using a time step $\Delta t = 0.1$.}\label{Fig:ConvergenceTest0D}
\end{figure}
\subsection{Application to the visco-elastodynamics system}
\label{Sec:AppVE}
In the following experiments we employ the proposed DG method to solve the second-order differential system of equations stemming from the spatial discretization of the visco-elastodynamics equation:
\begin{equation}
\label{Eq:Elastodynamic}
\begin{cases}
\partial_t \bold{u} - \bold{w} = \textbf{0}, & \text{in } \Omega\times(0,T],\\
\rho\partial_{t}\bold{w} + 2\rho\zeta\bold{w} + \rho \zeta^2\bold{u} - \nabla\cdot\boldsymbol{\sigma}(\bold{u}) = \textbf{f}, & \text{in } \Omega\times(0,T],\\
\end{cases}
\end{equation}
where $\Omega\subset\mathbb{R}^\mathsf{d}$, $\mathsf{d}=2,3$, is an open bounded polygonal domain. Here, $\rho$ represents the density of the medium, $\zeta$ is a decay factor whose dimension is the inverse of time, $\textbf{f}$ is a given source term (e.g.\ a seismic source) and $\boldsymbol{\sigma}$ is the stress tensor encoding Hooke's law
\begin{equation}
\boldsymbol{\sigma}(\bold{u})_{ij} = \lambda\,\delta_{ij}\sum_{k=1}^\mathsf{d} \frac{\partial u_k}{\partial x_k} + \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right), \quad {\rm for} \; i,j=1,...,\mathsf{d},
\end{equation}
being $\lambda$ and $\mu$ the first and the second Lam\'e parameters, respectively. Problem \eqref{Eq:Elastodynamic} is usually supplemented with boundary conditions for $\bold{u}$ and initial conditions for $\bold{u}$ and $\bold{w}$, that we do not report here for brevity.
Finally, we suppose that the problem's data are regular enough to guarantee its well-posedness \cite{AntoniettiFerroniMazzieriQuarteroni_2017}.
By employing a finite element discretization (either in its continuous or discontinuous variant) for the semi-discrete approximation (in space) of \eqref{Eq:Elastodynamic} we obtain the following system
\begin{equation*}
\left( \begin{matrix}
I & 0 \\
0 & P
\end{matrix} \right)\left( \begin{matrix}
\dot{\bm{u}} \\
\dot{\bm{w}}
\end{matrix} \right) + \left( \begin{matrix}
0 & -I \\
K & L
\end{matrix} \right)\left( \begin{matrix}
{\bm{u}} \\
{\bm{w}}
\end{matrix} \right) = \left( \begin{matrix}
\textbf{0} \\
\bm{f}
\end{matrix} \right),
\end{equation*}
that can be easily rewritten as in \eqref{Eq:FirstOrderSystem1}.
We remark that the boundary conditions associated to \eqref{Eq:Elastodynamic} are encoded in the matrices and in the right-hand side.
For the space discretization of \eqref{Eq:Elastodynamic}, we consider in the following a high order Discontinuous Galerkin method based either on general polygonal meshes (in two dimensions) \cite{AnMa2018} or on unstructured hexahedral meshes (in three dimensions) \cite{mazzieri2013speed}.
For the forthcoming experiments we denote by $h$ the granularity of the spatial mesh and by $p$ the order of the polynomials employed for the space approximation. The combination of the space and time DG methods yields a high-order space-time DG method that we denote by STDG.
We remark that the latter has been implemented in the open source software SPEED (\url{http://speed.mox.polimi.it/}).
\subsubsection{A two-dimensional test case with space-time polyhedral meshes}
\label{Sec:2DConvergence}
As a first verification test we consider problem~\eqref{Eq:Elastodynamic} in a bidimensional setting, i.e. $\Omega = (0,1)^2 \subset \mathbb{R}^2$.
We set the mass density $\rho=1$, the Lamé coefficients $\lambda=\mu=1$, $\zeta = 1$ and choose the data $\textbf{f}$ and the initial conditions such that the exact solution of \eqref{Eq:Elastodynamic} is $\textbf{z} = (\bold{u},\bold{w})$ where
\begin{equation*}
\bold{u} = e^{-t}
\begin{bmatrix}
-\sin^2(\pi x)\sin(2\pi y) \\
\sin(2\pi x)\sin^2(\pi y)
\end{bmatrix}, \qquad \bold{w} = \partial_t\bold{u}.
\end{equation*}
We consider a polygonal mesh (see Figure~\ref{fig:dgpolyspace-time}) made of 60 elements and set $p=8$. We take $T=0.4$ and divide the temporal interval $(0,T]$ into $N$ time-slabs of uniform length $\Delta t$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{poly_spacetime.png}
\caption{Test case of Section~\ref{Sec:2DConvergence}. Example of space-time polygonal grid used for the verification test.}
\label{fig:dgpolyspace-time}
\end{figure}
In Figure~\ref{Fig:ConvergenceTest2D} (left) we show the energy norm \eqref{Eq:Norm} of the numerical error $|||\bm{z}_{DG}-\bm{z}|||$ computed for several choices of the time polynomial degree $r=1,2,3$ by varying the time step $\Delta t$. We can observe that the error estimate \eqref{Eq:ErrorEstimate} is confirmed by our numerical results. Moreover, from Figure~\ref{Fig:ConvergenceTest2D} (right) we can observe that the numerical error decreases exponentially with respect to the polynomial degree $r$. In the latter case we fix $\Delta t = 0.1$ and use 10 polygonal elements for the space mesh, cf. Figure~\ref{fig:dgpolyspace-time}.
\begin{figure}[h!]
\includegraphics[width=0.49\textwidth]{ConvergenceTest2D_Time.png}
\includegraphics[width=0.49\textwidth]{ConvergenceTest2D_Degree.png}
\caption{Test case of Section~\ref{Sec:2DConvergence}. Left: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of time-step $\Delta t$ for $r = 1,2,3$, using a space discretization with a polygonal mesh composed of $60$ elements and $p=8$. Right: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the polynomial degree $r=p$, using a spatial grid composed of 10 elements and a time step $\Delta t = 0.1$. }
\label{Fig:ConvergenceTest2D}
\end{figure}
\subsubsection{A three-dimensional test case with space-time polytopal meshes}
\label{Sec:3DConvergence}
As a second verification test we consider problem~\eqref{Eq:Elastodynamic} in a three-dimensional setting. Here, we consider $\Omega = (0,1)^3 \subset \mathbb{R}^3$, $T=10$ and we set the external force $\boldsymbol{f}$ and the initial conditions so that the exact solution of \eqref{Eq:Elastodynamic} is $\textbf{z} = (\bold{u},\bold{w})$ given by
\begin{equation}
\label{Testcase}
\bold{u} = \cos(3\pi t)
\begin{bmatrix}
\sin(\pi x)^2\sin(2\pi y)\sin(2\pi z) \\
\sin(2\pi x)\sin(\pi y)^2\sin(2\pi z) \\
\sin(2\pi x)\sin(2\pi y)\sin(\pi z)^2
\end{bmatrix}, \quad \bold{w} = \partial_t \bold{u}.
\end{equation}
We partition $\Omega$ by using a conforming hexahedral mesh of granularity $h$, and we use a uniform time domain partition of step size $\Delta t$ for the time interval $[0,T]$. We choose a polynomial degree $ p \ge 2$ for the space discretization and $ r \ge 1$ for the temporal one. We first set $h=0.125$, corresponding to $512$ elements, fix $p=6$, and let the time step $\Delta t$ vary from $0.4$ to $0.00625$ for $r=1,2,3,4$. The computed energy errors are shown in Figure \ref{Fig:ConvergenceTest3D} (left). We can observe that the numerical results are in agreement with the theoretical ones, cf. Theorem~\ref{Th:ErrorEstimate}. We note that with $r=4$ the error reaches a plateau for $\Delta t \leq 0.025$. However, this effect could be easily overcome by increasing the spatial polynomial degree $p$ and/or by refining the mesh size $h$.
Then, we fix a grid size $h=0.25$ and a time step $\Delta t=0.1$, and vary the polynomial degrees together, $p=r=2,3,4,5$. Figure \ref{Fig:ConvergenceTest3D} (right) shows an exponential decay of the error.
\begin{figure}
\includegraphics[width=0.49\textwidth]{ConvergenceTest3D_Time.png}
\includegraphics[width=0.49\textwidth]{ConvergenceTest3D_Degree.png}
\caption{Test case of Section~\ref{Sec:3DConvergence}. Left: computed errors $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the time-step $\Delta t$, with $r=1,2,3,4$, $h=0.125$ and $p=6$. Right: computed errors $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the polynomial degree $p=r$, with $\Delta t = 0.1$, $h=0.25$.}
\label{Fig:ConvergenceTest3D}
\end{figure}
\subsubsection{Plane wave propagation}
\label{Sec:PlaneWave}
The aim of this test is to compare the performance of the proposed STDG method with the space-time DG method (here referred to as STDG$_0$) first presented in \cite{Paper_Dg-Time} and then applied to 3D problems in \cite{AnMaMi20}. The difference between STDG$_0$ and STDG lies in the way the time approximation is obtained. Indeed, the former integrates the second-order (in time) differential problem, whereas the latter discretizes the first-order (in time) differential system.
On the one hand, as pointed out in \cite{AnMaMi20}, the main limitation of the STDG$_0$ method is the ill-conditioning of the resulting stiffness matrix, which makes the use of iterative solvers quite difficult. Hence, for STDG$_0$ direct solvers are used, which forces the storage of the stiffness matrix and greatly reduces the range of problems affordable by that method.
On the other hand, even if the final linear systems stemming from the STDG$_0$ and STDG methods are very similar (in fact, they differ only in the definition of the local time matrices), the latter yields a well-conditioned system matrix, making iterative methods employable and complex 3D problems solvable.
Here, we consider a plane wave propagating along the vertical direction in two (horizontally stratified) heterogeneous domains. The source plane wave is polarized in the $x$ direction and its time dependency is given by a unit-amplitude Ricker wavelet with peak frequency at $2~{\rm Hz}$. We impose a free surface condition on the top surface, absorbing boundary conditions on the bottom surface and homogeneous Dirichlet conditions along the $y$ and $z$ directions on the remaining boundaries. We solve the problem in two domains that differ in dimensions and material properties, referred to as Domain A and Domain B, respectively.
Domain A has dimension $\Omega=(0,100)~{\rm m}\times(0,100)~{\rm m}\times(-500,0)~{\rm m}$, cf. Figure~\ref{Fig:TutorialDomain}, and is partitioned into 3 subdomains corresponding to the different material layers, cf. Table~\ref{Tab:TutorialMaterials}. The subdomains are discretized in space with a uniform cartesian hexahedral grid of size $h = 50~{\rm m}$ that results in 40 elements.
Domain B has dimensions $\Omega=(0,100)~{\rm m}\times(0,100)~{\rm m}\times(-1850,0)~{\rm m}$, and has more layers, cf. Figure~\ref{Fig:TorrettaDomain} and Table~\ref{Tab:TorrettaMaterials}. The subdomains are discretized in space with a cartesian hexahedral grid of size $h$ ranging from $15~{\rm m}$ in the top layer to $50~{\rm m}$ in the bottom layer. Hence, the total number of elements is 1225.
\begin{figure}
\begin{minipage}{\textwidth}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{TutorialDomain}%
\captionof{figure}{Test case of Section~\ref{Sec:PlaneWave}-Domain A. Computational domain $\Omega = \cup_{\ell=1}^{3}\Omega_{\ell}$.}
\label{Fig:TutorialDomain}
\end{minipage}
\hfill
\begin{minipage}{0.65\textwidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Layer & Height $[m]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\
\hline
\hline
$\Omega_1$ & $ 150 $ & $1800$ & $600$ & $300$ & $0.166$ \\
\hline
$\Omega_2$ & $ 300 $ & $2200$ & $4000$ & $2000$ & $0.025$ \\
\hline
$\Omega_3$ & $ 50 $ & $2200$ & $4000$ & $2000$ & $0.025$ \\
\hline
\end{tabular}
\captionof{table}{Mechanical properties for test case of Section~\ref{Sec:PlaneWave}-Domain A. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -2\mu$.}
\label{Tab:TutorialMaterials}
\end{minipage}
\end{minipage}
\end{figure}
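For convenience, the conversion of the tabulated densities and wave speeds into the Lam\'e parameters used by the solver can be carried out as in the following sketch (values taken from Table~\ref{Tab:TutorialMaterials}; the snippet is purely illustrative).
\begin{verbatim}
# rho [kg/m^3], c_p [m/s], c_s [m/s] for the three layers of Domain A
layers = {"Omega1": (1800.0,  600.0,  300.0),
          "Omega2": (2200.0, 4000.0, 2000.0),
          "Omega3": (2200.0, 4000.0, 2000.0)}
for name, (rho, cp, cs) in layers.items():
    mu  = rho * cs**2               # second Lame parameter
    lam = rho * cp**2 - 2.0*mu      # first Lame parameter
    print(name, lam, mu)
\end{verbatim}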
\begin{figure}
\begin{minipage}{\textwidth}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{TorrettaDomain}%
\captionof{figure}{Test case of Section~\ref{Sec:PlaneWave}-Domain B. Computational domain $\Omega = \cup_{\ell=1}^{11}\Omega_{\ell}$.}
\label{Fig:TorrettaDomain}
\end{minipage}
\hfill
\begin{minipage}{0.65\textwidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Layer & Height $[m]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\
\hline
\hline
$\Omega_1$ & $ 15 $ & $1800$ & $1064$ & $236$ & $0.261$ \\
\hline
$\Omega_2$ & $ 15 $ & $1800$ & $1321$ & $294$ & $0.216$ \\
\hline
$\Omega_3$ & $ 20 $ & $1800$ & $1494$ & $332$ & $0.190$ \\
\hline
$\Omega_4$ & $ 30 $ & $1800$ & $1664$ & $370$ & $0.169$ \\
\hline
$\Omega_5$ & $ 40 $ & $1800$ & $1838$ & $408$ & $0.153$ \\
\hline
$\Omega_6$ & $60 $ & $1800$ & $2024$ & $450$ & $0.139$ \\
\hline
$\Omega_7$ & $ 120 $ & $2050$ & $1988$ & $523$ & $0.120$ \\
\hline
$\Omega_8$ & $500 $ & $2050$ & $1920$ & $600$ & $0.105$ \\
\hline
$\Omega_9$ & $ 400 $ & $2400$ & $3030$ & $1515$ & $0.041$ \\
\hline
$\Omega_{10}$ & $ 600 $ & $2400$ & $4180$ & $2090$ & $0.030$ \\
\hline
$\Omega_{11}$ & $ 50 $ & $2450$ & $5100$ & $2850$ & $0.020$ \\
\hline
\end{tabular}
\captionof{table}{Mechanical properties for test case of Section~\ref{Sec:PlaneWave}-Domain B. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -2\mu$.}
\label{Tab:TorrettaMaterials}
\end{minipage}
\end{minipage}
\end{figure}
In Figure~\ref{Fig:PlanewaveDisplacement} on the left (resp. on the right) we report the computed displacement $\bm{u}$ along the $x-$axis, registered at point $P=(50, 50, 0)~{\rm m}$ located on the top surface
for Domain A (resp. Domain B).
We compare the results with those obtained in \cite{AnMaMi20}, choosing a polynomial degree $p=r=2$ in both space and time variables and a time step $\Delta t = 0.01$. In both cases, we can observe a perfect agreement of the two solutions.
\begin{figure}
\includegraphics[width=0.49\textwidth]{TutorialDisp.png}
\includegraphics[width=0.49\textwidth]{TorrettaDisp.png}
\caption{Test case of Section~\ref{Sec:PlaneWave}. Computed displacement $\bm{u}$ along $x-$axis registered at $P=(50, 50, 0)~{\rm m}$ obtained employing the proposed formulation, i.e. STDG method, and the method \cite{AnMaMi20}, i.e. STDG$_0$, for Domain A (left) and Domain B (right). We set the polynomial degree $p=r=2$ in both space and time dimensions and time step $\Delta t = 0.01$.}
\label{Fig:PlanewaveDisplacement}
\end{figure}
In Table~\ref{Tab:Comparison} we collect the condition number of the system matrix, the number of GMRES iterations and the execution time for the STDG$_0$ and STDG methods applied on a single time integration step, computed by using Domain A and Domain B, respectively.
From the results we can observe that the proposed STDG method outperforms the STDG$_0$ one, in terms of condition number and GMRES iteration counts for the solution of the corresponding linear system. Clearly, for small problems, when the storage of the system matrix and the use of a direct solver are possible, the STDG$_0$ method remains the most efficient solution.
\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dom.} &
\multirow{2}{*}{$p$} &
\multicolumn{2}{c|}{Condition number} & \multicolumn{2}{c|}{\# GMRES it.} & \multicolumn{2}{c|}{Execution time [s]} \\ \cline{3-8}
&& STDG$_0$ & STDG & STDG$_0$ & STDG & STDG$_0$ & STDG \\
\hline
\hline
A & 2 & $1.2\cdot10^9$ & $1.3\cdot10^2$ & $1.5\cdot10^4$ & $27$ & $1.1$ & $3.0\cdot10^{-3}$\\
\hline
A & 4 & $2.7\cdot10^{10}$ & $2.8\cdot10^3$ & $>10^6$ & $125$ & $>2200$ & $0.3\cdot10^{-1}$\\
\hline
B & 2 & $1.3\cdot10^{14}$ & $5.0\cdot10^2$ & $4.2\cdot10^5$ & $56$ & $452.3$ & $6.5\cdot10^{-2}$\\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:PlaneWave}. Comparison between the proposed formulation \eqref{Eq:WeakProblem} and the method presented in \cite{AnMaMi20} in terms of conditioning and iterative resolution. We set $p=r$ and we fix the relative tolerance for the GMRES convergence at $10^{-12}$. }
\label{Tab:Comparison}
\end{table}
\subsubsection{Layer over a half-space}
\label{Sec:LOH1}
In this experiment, we test the performance of the STDG method by considering a benchmark test case for a realistic elastodynamic application, known in the literature as layer over a half-space (LOH), cf. \cite{DaBr01}. We let $\Omega=(-15,15)\times(-15,15) \times(0,17)~{\rm km}$ be composed of two layers with different material properties, cf. Table~\ref{Table:LOH1Materials}. The domain is partitioned employing two conforming meshes of different granularity. The ``fine'' (resp. ``coarse'') grid is composed of $352800$ (resp. $122400$) hexahedral elements, whose size varies from $86~{\rm m}$ (resp. $167~{\rm m}$) in the top layer to $250~{\rm m}$ (resp. $500~{\rm m}$) in the bottom half-space, cf. Figure~\ref{Fig:LOH1Domain}. On the top surface we impose a free surface condition, i.e. $\boldsymbol{\sigma} \textbf{n} = \textbf{0}$, whereas on the lateral and bottom surfaces we consider absorbing boundary conditions \cite{stacey1988improved}.
\begin{figure} [h!]
\centering
\includegraphics[width=0.9\textwidth]{LOHDomain}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Computational domain $\Omega = \cup_{\ell=1}^{2}\Omega_{\ell}$ and its partition.}
\label{Fig:LOH1Domain}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Layer & Height $[km]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\
\hline
\hline
$\Omega_1$ & $ 1 $ & $2600$ & $4000$ & $2000$ & $0$ \\
\hline
$\Omega_2$ & $ 16 $ & $2700$ & $6000$ & $3464$ & $0$ \\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:LOH1}. Mechanical properties of the medium. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -2\mu$.}
\label{Table:LOH1Materials}
\end{table}
The seismic excitation is given by a double-couple point source located at $2~{\rm km}$ depth below the center of the free surface, expressed by
\begin{equation}
\label{Eq:LOH1Source}
\bm{f}(\bm{x},t) = \nabla \delta (\bm{x}-\bm{x}_S)M_0\bigg(\frac{t}{t_0^2}\bigg)\exp{(-t/t_0)},
\end{equation}
where $\bm{x}_S = (0,0,2)~{\rm km}$, $M_0 = 10^8~{\rm Nm}$ is the scalar seismic moment, $t_0 = 0.1~{\rm s}$ is the smoothness parameter, regulating the frequency content and amplitude of the source time function. The semi-analytical solution is available in \cite{DaBr01} together with further details on the problem's setup.
We employ the STDG method with different choices of polynomial degrees and time integration steps. In Figures~\ref{Fig:LOH1ResultsFine41}-\ref{Fig:LOH1ResultsCoarse44-2} we show the velocity wave field computed at point $(6,8,0)~{\rm km}$ along with the reference solution, in both the time and frequency domains, for the sets of parameters tested. We also report the relative seismogram error
\begin{equation}
\label{Eq:LOH1Error}
E = \frac{\sum_{i=1}^{n_S}(\bm{u}_{\delta}(t_i)-\bm{u}(t_i))^2}{\sum_{i=1}^{n_S}(\bm{u}(t_i)^2)},
\end{equation}
where $n_S$ is the number of samples of the seismogram, and $\bm{u}_{\delta}(t_i)$ and $\bm{u}(t_i)$ are, respectively, the value of the computed seismogram at sample $t_i$ and the corresponding reference value. In Table~\ref{Table:LOH1Sensitivity} we report the set of discretization parameters employed, together with the results obtained in terms of accuracy and computational efficiency.
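A possible implementation of the error indicator \eqref{Eq:LOH1Error} is sketched below (purely illustrative; the computed and reference seismograms are assumed to be sampled on the same time axis).
\begin{verbatim}
import numpy as np

def seismogram_error(u_num, u_ref):
    u_num, u_ref = np.asarray(u_num), np.asarray(u_ref)
    return np.sum((u_num - u_ref)**2) / np.sum(u_ref**2)
\end{verbatim}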
\begin{figure} [h!]
\centering
\includegraphics[width=0.5\textwidth]{LOH_4_1_Fine_Vel.png}%
\includegraphics[width=0.5\textwidth]{LOH_4_1_Fine_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``fine'' grid, polynomial degree $p=4$ for space and $r=1$ for time domain, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsFine41}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{LOH_4_2_Fine_Vel.png}%
\includegraphics[width=0.49\textwidth]{LOH_4_2_Fine_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``fine'' grid, polynomial degree $p=4$ for space and $r=2$ for time domain, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsFine42}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{LOH_4_4_-3_Coarse_Vel.png}%
\includegraphics[width=0.49\textwidth]{LOH_4_4_-3_Coarse_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``coarse'' grid, polynomial degree $p=4$ for space and $r=4$ for time domain, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsCoarse44-3}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{LOH_4_4_-2_Coarse_Vel.png}%
\includegraphics[width=0.49\textwidth]{LOH_4_4_-2_Coarse_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``coarse'' grid, polynomial degree $p=4$ for space and $r=4$ for time domain, and time-step $\Delta t = 5\cdot10^{-2}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsCoarse44-2}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Grid} & \multirow{2}{*}{$p$} & \multirow{2}{*}{$r$} & \multirow{2}{*}{$\Delta t~[{\rm s}]$} & GMRES & Exec. Time & Tot. Exec. & \multirow{2}{*}{Error $E$}\\
&&&&iter.&per iter. [s] &Time [s]&\\
\hline
\hline
Fine & $4$ & $1$ & $10^{-3}$ & $6$ & $2.9$ & $3.08\cdot10^{4}$ & $0.015$ \\
\hline
Fine & $4$ & $2$ & $10^{-3}$ & $8$ & $5.6$ & $6.59\cdot10^{4}$ & $0.020$ \\
\hline
Coarse & $4$ & $4$ & $10^{-3}$ & $12$ & $7.6$ & $8.14\cdot10^{4}$ & $0.229$ \\
\hline
Coarse & $4$ & $4$ & $5\cdot10^{-2}$ & $24$ & $27.9$ & $7.22\cdot10^{4}$ & $0.329$ \\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:LOH1}. Set of discretization parameters employed, and corresponding results in terms of computational efficiency and accuracy. The execution times are computed employing $512$ parallel processes, on \textit{Marconi100} cluster located at CINECA (Italy).}
\label{Table:LOH1Sensitivity}
\end{table}
By employing the ``fine'' grid we obtain very good results both in terms of accuracy and efficiency. Indeed, the minimum relative error is less than $2\%$ with time polynomial degree $r=1$, see Figure~\ref{Fig:LOH1ResultsFine41}. Choosing $r=2$, as in Figure~\ref{Fig:LOH1ResultsFine42}, the error is larger (by about $40\%$), but the solution is still sufficiently accurate. Moreover, in terms of total execution time, with $r=1$ the algorithm performs better than with $r=2$, cf. Table~\ref{Table:LOH1Sensitivity}, column 7.
As shown in Figure~\ref{Fig:LOH1ResultsCoarse44-3}, the ``coarse'' grid produces larger errors and also worsens the computational efficiency, since the number of GMRES iterations for a single time step increases. Increasing the integration time step $\Delta t$, see Figure~\ref{Fig:LOH1ResultsCoarse44-2}, causes an increase of the execution time for a single time step that partly compensates for the decrease in the total number of time steps. Consequently, the total execution time reduces, but only by 12\%. In addition, this choice causes some non-physical oscillations in the coda part of the signal that contribute to increasing the relative error.
We can conclude that, for this test case, the spatial discretization is the most crucial aspect: refining the mesh produces a significant decrease of the relative error and increases the overall efficiency of the method. Concerning the time integration, the method performs well even with low-order polynomial degrees, both in terms of computational efficiency and accuracy.
The method achieves its goal of accurately solving this elastodynamics problem, which counts between 119 (``coarse'' grid) and 207 (``fine'' grid) million unknowns. The good properties of the proposed STDG method are once again highlighted by the fact that all the presented results are achieved without any preconditioning of the linear system.
\subsection{Seismic wave propagation in the Grenoble valley}
\label{Sec:Grenoble}
In this last experiment, we apply the STDG method to a real geophysical study \cite{ChSt10}. This application consists of the simulation of the seismic wave propagation generated by a hypothetical earthquake of magnitude $M_w = 6$ in the Grenoble valley, in the French Alps. The Y-shaped Grenoble valley, whose location is represented in Figure~\ref{Fig:GrenobleDomain}, is filled with late Quaternary deposits, a much softer material than the one composing the surrounding mountains. We approximate the mechanical characteristics of the ground by employing three different material layers, whose properties are listed in Table~\ref{Table:GrenobleMaterials}. The alluvial basin layer contains the soft sediments that fill the Grenoble valley and corresponds to the yellow portion of the domain in Figure~\ref{Fig:GrenobleDomain}. The two bedrock layers approximate the stiff materials composing the surrounding Alps and the first crustal layer. The earthquake generation is simulated through a kinematic fault rupture along a plane whose location is represented in Figure~\ref{Fig:GrenobleDomain}.
\begin{figure} [h!]
\centering
\includegraphics[width=0.9\textwidth]{grenoble_paraview_2}%
\caption{Test case of Section~\ref{Sec:Grenoble}. Geophysical domain and its location.}
\label{Fig:GrenobleDomain}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|@{}|l|r|r|r|r|}
\hline
Layer & $\rho~[{\rm kg/m^3}]$ & $c_s~[\rm{m/s}]$ & $c_p~[\rm{m/s}]$ & $ \zeta~[\rm {1/s}]$ \\
\hline
\hline
Alluvial basin & 2140 + 0.125 $z_{d}$ & 300 + 19 $\sqrt{z_{d}}$ & 1450 + 1.2 $z_{d}$ & 0.01 \\
\hline
Bedrock $(0-3)$ km & 2720 & 3200 & 5600 & 0 \\
\hline
Bedrock $(3-7)$ km & 2770 & 3430 & 5920 & 0 \\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:Grenoble}. Mechanical properties of the medium. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -2\mu$. $z_{d}$ measures the depth of a point calculated from the top surface.}
\label{Table:GrenobleMaterials}
\end{table}
The computational domain $\Omega=(0,50)\times(0,47)\times (-7,3)~{\rm km}$ is discretized with a fully unstructured hexahedral mesh represented in Figure~\ref{Fig:GrenobleDomain}. The mesh, composed of $202983$ elements, is refined in the valley with a mesh size $h=100~{\rm m}$, while it is coarser in the bedrock layers reaching $h\approx 1~{\rm km}$.
\begin{figure} [h!]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{monitors}%
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{GrenobleCrossSection}%
\end{minipage}
\caption{Left: surface topography in the Grenoble area. The white line indicates the monitor points examined in Figure~\ref{Fig:GrenobleVel}. Right: cross section of the valley in correspondence of the monitor points.}
\label{Fig:GrenoblePoints}
\end{figure}
\begin{figure} [h!]
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{sismogrammi}%
\end{minipage}
\caption{Test case of Section~\ref{Sec:Grenoble}. Computed velocity field at the monitored points in Figure~\ref{Fig:GrenoblePoints}, together with the computed peak ground velocity for each monitor point.
Comparison between the STDG (black) solution and the SPECFEM (red) solution \cite{Chaljub2010QuantitativeCO}.}
\label{Fig:GrenobleVel}
\end{figure}
On the top surface we impose a free surface condition, i.e. $\boldsymbol{\sigma} \textbf{n} = \textbf{0}$, whereas on the lateral and bottom surfaces we consider absorbing boundary conditions \cite{stacey1988improved}. We employ the STDG method with polynomial degrees $p=3$ for the space discretization and $r=1$ for the time integration, together with a time step $\Delta t = 10^{-3}~{\rm s}$. We focus on a set of monitor points whose locations are represented in Figure~\ref{Fig:GrenoblePoints}. In Figure~\ref{Fig:GrenobleVel}, we report the velocity field registered at these points, compared with the one obtained with a different code, namely SPECFEM \cite{Chaljub2010QuantitativeCO}. The results are consistent with the different locations of the points. Indeed, we observe highly perturbed waves at points $1$--$7$, which are located in the valley, i.e. in the alluvial material. This is caused by a refraction effect that arises when a wave moves from a stiffer material into a softer one. Moreover, the wave remains trapped inside the layer, bouncing off the stiffer interfaces. The absence of this effect is evident at monitors $8$ and $9$, which are located in the bedrock material. These typical behaviors are also clearly visible in Figure~\ref{Fig:GrenobleSnap}, where the magnitude of the ground velocity is represented at different time instants.
Finally, concerning the computational efficiency of the scheme, we report that, with this choice of discretization parameters, we obtain a linear system with approximately $36$ million degrees of freedom that is solved in $17.5$ hours, employing $512$ parallel processes on the \textit{Marconi100} cluster located at CINECA (Italy).
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{snapshot5}
\includegraphics[width=0.49\textwidth]{snapshot9}
\includegraphics[width=0.49\textwidth]{snapshot13}
\includegraphics[width=0.49\textwidth]{snapshot17}
\caption{Test case of Section~\ref{Sec:Grenoble}. Computed ground velocity at different time instants obtained with polynomial degrees $p=3$ and $r=1$, for space and time, respectively, and $\Delta t = 10^{-3}~s$.}
\label{Fig:GrenobleSnap}
\end{figure}
\section{Conclusions}
In this work we have presented and analyzed a new time Discontinuous Galerkin method for the solution of systems of second-order differential equations. We have built an energy norm that arises naturally from the variational formulation of the problem, and we have employed it to prove well-posedness, stability and error bounds. Through a manipulation of the resulting linear system, we have reduced the computational cost of the solution phase, and we have implemented and tested our method in the open-source software SPEED (\url{http://speed.mox.polimi.it/}). Finally, we have verified and validated the proposed numerical algorithm through two- and three-dimensional benchmarks, as well as real geophysical applications.
\section{Acknowledgements}
This work was partially supported by "National Group of Computing Science" (GNCS-INdAM). P.F. Antonietti has been supported by the PRIN research grant n. 201744KLJL funded by the Ministry of Education, Universities and Research (MIUR).
\section{Introduction}
The properties of particle interactions determine the evolution of a quantum chromodynamical (QCD) system. Thorough understanding of these properties can help answer many fundamental questions in physics, such as the origin of the Universe or the unification of forces. This is one of the important reasons to collect data with particle accelerators, such as the Large Hadron Collider (LHC) at CERN. However, when collecting this data, we only register complex signals of high dimensionality which we can later interpret as signatures of final particles in the detectors. This interpretation stems from the fact that we, more or less, understand the underlying processes that produce the final particles.
In essence, of all the particles produced in a collision at the accelerator, only the electron, the proton, the photon and the neutrinos are stable and can be reconstructed with certainty, given a suitable detector. Other particles are sometimes also directly detected, provided that they reach the active volume of the detector before decaying. These include muons, neutrons, charged pions and charged kaons. On the other hand, short-lived particles will almost surely decay before reaching the detector, and we can only register the particles they decay into.
A similar situation arises with quarks, antiquarks and gluons, the building blocks of colliding nuclei. When a high energy collision happens, a quark within a nucleus behaves almost as if it does not interact with neighbouring particles, because of a property called asymptotic freedom. If it is struck by a particle from the other nucleus, it can be given sufficient momentum pointing outwards from the parent nucleus. However, we know that there are no free quarks in nature and that this quark needs to undergo a process called hadronisation. This is a process in which quark-antiquark pairs are generated such that they form hadrons. Most of the hadrons are short-lived and they decay into other, more stable, hadrons. The end result of this process is a jet of particles whose average momentum points in the direction of the original outgoing quark. Unfortunately, we do not know the exact decay properties of quarks and gluons, which serves as a motivation for this work.
The determination of these properties is a long standing problem in particle physics. To determine them, we turn to already collected data and try to fit decay models onto them. With every new set of data our understanding changes. This is evident from the fact that, when simulating a collision event, we can obtain, on average, slightly different results with different versions of the same tool \cite{pythia}. Therefore, even though simulation tools are regularly reinforced with new observations from data, we cannot expect the complete physical truth from them.
Instead of trying to perform direct fits to data, we propose the use of machine learning methods to determine the decay properties. In fact, the onset of these methods is already hinted in the traditional approach, since a multivariate fit of decay models to data is already a form of a machine learning technique. It is only natural to extend the existing methods since we can't rely entirely on simulated data. In this work, we develop an interpretable model by first simulating a system of particles with well defined masses, decay channels, and decay probabilities. We take this to be the ,,true system'', whose decay properties we pretend not to know and want to reproduce. Mimicking the real world, we assume to only have the data that this system produces in the detector. Next, we employ an iterative method which uses a neural network as a classifier between events produced in the detector by the ,,true system'' and some arbitrary ,,test system''. In the end, we compare the distributions obtained with the iterative method to the ,,true'' distributions.
This paper is organized as follows: in the Materials and methods section we describe the developed artificial physical system and the algorithm used to recover the underlying probability distributions of the system, and we present in detail the methodology used to obtain the presented results. In the Results section we present our findings and check whether our hypothesis holds true. We conclude the paper with the Discussion section.
\section{Materials and methods}
The code used for the development of the particle generator, the neural network models and the calculations is written in the Python programming language using the Keras module with the TensorFlow2 backend \cite{keras}. The calculations were performed on a standard PC setup equipped with an NVIDIA Quadro P6000 graphics processing unit.
\subsection{The physical system}
In particle physics, jets are detected as collimated streams of particles. The jet production mechanism is in essence clear: partons from the initial hard process undergo the fragmentation and hadronization processes. In this work, we develop a simplified physical model in which the fragmentation process is modeled as cascaded $1 \rightarrow 2$ independent decays of partons with a constant number of decays. This way, any single jet can be represented as a perfect binary tree of depth $N$, corresponding to $2^N$ particles in the final state. Since the initial parton properties are set, jets can be described by $2^N - 1$ decay parameters. We represent each decay of a mother parton of mass $M$ by four real numbers $(\frac{m_1}{M}, \frac{m_2}{M}, \theta, \phi)$, where $m_1$ and $m_2$ are the masses of the daughter particles and $\theta$ and $\phi$ are the polar and azimuthal angle of the lighter particle, as measured from the rest frame of the mother particle. For simplicity we make all the decays isotropic, which isn't necessarily true in real processes. To fully define our physical system we set a decay probability distribution function $p(m_1, m_2 | M)$, the details of which are given in the following subsection. The aim of our proposed algorithm is to recover these underlying probability distributions, assuming we have no information on them, using only a dataset consisting of jets described with final particles' four-momenta, as one would get from a detector.
\subsection{Particle generator}
\label{ParticleGenerator}
To generate the jets, we developed an algorithm where we take a particle of known mass that undergoes three successive decays. We consider only the possibility of discrete decays, in the sense that the decay product masses and decay probabilities are well defined. We consider a total of 10 types of particles, labelled A -- J, which can only decay into each other. The masses and the decay probabilities of these particles are given in Table \ref{TableParticles}. In this scenario, the ,,decay probabilities'' $p$ are given by the ratios of decay amplitudes. Thus, the total sum of the probabilities for a given particle to decay into others has to be one, and the probabilities describe the number of produced daughters per $N$ decays, scaled by $1/N$.
\vskip 5mm
\begin{table}[h!t!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{particle} & \multicolumn{2}{|c|}{A} & \multicolumn{2}{|c|}{B} & \multicolumn{2}{|c|}{C} & \multicolumn{2}{|c|}{D}& \multicolumn{2}{|c|}{E} \\\hline
\multicolumn{2}{|c|}{mass} & \multicolumn{2}{|c|}{0.1} & \multicolumn{2}{|c|}{0.6} & \multicolumn{2}{|c|}{1.3} & \multicolumn{2}{|c|}{1.9}& \multicolumn{2}{|c|}{4.4} \\\hline
\multicolumn{2}{|c|}{$p$ / channel} & 1 & A & 0.7 & B & 1 & C & 0.3 & A+C & 0.6 & C+C \\\hline
\multicolumn{2}{|c|}{} & & & 0.3 & A+A & & & 0.3 & A+A & 0.4 & E \\\hline
\multicolumn{2}{|c|}{} & & & & & & & 0.4 & D & & \\\hline\hline
\multicolumn{2}{|c|}{particle} & \multicolumn{2}{|c|}{F} & \multicolumn{2}{|c|}{G} & \multicolumn{2}{|c|}{H} & \multicolumn{2}{|c|}{I}& \multicolumn{2}{|c|}{J} \\\hline
\multicolumn{2}{|c|}{mass} & \multicolumn{2}{|c|}{6.1} & \multicolumn{2}{|c|}{8.4} & \multicolumn{2}{|c|}{14.2} & \multicolumn{2}{|c|}{18.1}& \multicolumn{2}{|c|}{25} \\\hline
\multicolumn{2}{|c|}{$p$ / channel} & 0.5 & A+A & 0.9 & B+B & 0.6 & D+D & 1 & F+G & 0.5 & F+I \\\hline
\multicolumn{2}{|c|}{} & 0.5 & B+C & 0.1 & A+F & 0.25 & D+E & & & 0.4 & G+H \\\hline
\multicolumn{2}{|c|}{} & & & & & 0.15 & E+F & & & 0.1 & E+E \\\hline
\end{tabular}
\caption{Allowed particle decays in the discrete model. The designation $p$/channel shows the probability that a mother particle will decay into specific daughters.}
\label{TableParticles}
\end{table}
\vskip 5mm
Particles A--E are set to be long lived and can thus be detected in the detector, which only sees the decay products after several decays. This can be seen in Table \ref{TableParticles} as a probability for a particle to decay into itself. In this way, we assure two things: first, that we have stable particles and second, that each decay in the binary tree is recorded, even if it is represented by a particle decaying into itself. Particles A and C are completely stable, since they only have one ,,decay'' channel, in which they decay back into themselves. On the other hand, particles F--I are hidden resonances: if one of them appears in the $i$-th step of the decay chain, it will surely decay into other particles in the next, $(i+1)$-th step of the chain.
To create a jet, we start with particle J, which we call the mother particle, and allow it to decay in one of the decay channels. Each of the daughter particles then decays according to their decay channels, and this procedure repeats a total of 3 times. In the end, we obtain a maximum of 8 particles from the set A--E, with known momenta as measured from the rest frame of the mother particle. An example of a generated jet is given in Fig.\ref{FigRaspadi}.
\begin{figure}[h!t!]
\centering
\begin{forest}
for tree={
grow=east,
edge={->},
parent anchor=east,
child anchor=west,
s sep=1pt,
l sep=1cm
},
[J
[F
[A[A]]
[A[A]]
]
[I
[F
[B]
[C]
]
[G
[B]
[B]
]
]
]
\end{forest}
\caption{An example of the operation of the discrete jet generator. The mother particle J decays into particles I and F. According to decay probabilities, this happens in half the generated jets. The daughter particles subsequently decay two more times, leaving only stable, detectable particles in the final state.}
\label{FigRaspadi}
\end{figure}
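For illustration, the decay table can be encoded directly in code. The following sketch (in Python, with names and structure of our own choosing, and with the decay kinematics omitted) mirrors Table \ref{TableParticles} and produces the list of final-state particle types after three successive decays:
\begin{verbatim}
import random

# Masses and decay channels mirroring Table 1 (illustrative sketch only).
MASS = {'A': 0.1, 'B': 0.6, 'C': 1.3, 'D': 1.9, 'E': 4.4,
        'F': 6.1, 'G': 8.4, 'H': 14.2, 'I': 18.1, 'J': 25.0}
CHANNELS = {
    'A': [(1.0, ('A',))],
    'B': [(0.7, ('B',)), (0.3, ('A', 'A'))],
    'C': [(1.0, ('C',))],
    'D': [(0.3, ('A', 'C')), (0.3, ('A', 'A')), (0.4, ('D',))],
    'E': [(0.6, ('C', 'C')), (0.4, ('E',))],
    'F': [(0.5, ('A', 'A')), (0.5, ('B', 'C'))],
    'G': [(0.9, ('B', 'B')), (0.1, ('A', 'F'))],
    'H': [(0.6, ('D', 'D')), (0.25, ('D', 'E')), (0.15, ('E', 'F'))],
    'I': [(1.0, ('F', 'G'))],
    'J': [(0.5, ('F', 'I')), (0.4, ('G', 'H')), (0.1, ('E', 'E'))],
}

def decay_once(name):
    """Pick one decay channel of a particle according to its probabilities."""
    probs, products = zip(*CHANNELS[name])
    return random.choices(products, weights=probs, k=1)[0]

def generate_jet(mother='J', steps=3):
    """Apply `steps` successive decays and return the final particle types."""
    particles = [mother]
    for _ in range(steps):
        next_level = []
        for p in particles:
            next_level.extend(decay_once(p))
        particles = next_level
    return particles

print(generate_jet())   # e.g. ['A', 'A', 'A', 'A', 'B', 'C', 'B', 'B']
\end{verbatim}
In the full generator, each decay is additionally assigned isotropic angles $(\theta, \phi)$ and the daughter four-momenta are boosted to the frame of the original mother particle.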
\subsection{Introduction to the algorithm}
Let's assume we have two distinct datasets: one that consists of samples from a random variable X distributed with an unknown probability density $p(x)$, which we call the ,,real'' dataset, and the other, which consists of samples from a random variable Y distributed with a known probability density $q(x)$, which we call the ,,test'' dataset. We would like to perform a hypothesis test between $H_{0}:p = p(x)$ and $H_{1}:p = q(x)$ using a likelihood-ratio test. The approach we use follows earlier work employing the Neyman--Pearson lemma \cite{NNNP1, NNNP2, NNNP3}. This lemma states that the likelihood ratio, $\Lambda$, given by:
\begin{equation}
\Lambda (p \mid q)\equiv \frac {{\mathcal {L}}(x \mid real)}{{\mathcal {L}}(x \mid test)} = \frac{p(x)}{q(x)}
\label{NP}
\end{equation}
is the most powerful test at the given significance level \cite{NeyPear}.
We can obtain an approximate likelihood ratio $\Lambda$ by transforming the output of a classifier used to discriminate between the two datasets. Assume that the classifier is a neural network optimized by minimizing the \textit{crossentropy} loss. In this case, the network output gives the probability of $x$ being a part of the real dataset, $C_{nn}(x) = p(real \mid x)$ \cite{NNProbability}. If the datasets consist of the same number of samples, we can employ Bayes' theorem in a simple manner:
\begin{eqnarray}
p(real \mid x) &=& \frac{p(x \mid real)p(real)}{p(x \mid real) p(real)+p(x \mid test)p(test)} \nonumber \\
&=& \frac{p(x \mid real)}{p(x \mid real)+p(x \mid test)} = \frac{\Lambda}{\Lambda+1}\,.
\label{Bayes}
\end{eqnarray}
A simple inversion of Eq.\ref{Bayes} gives:
\begin{equation}
\Lambda = \frac{p(x)}{q(x)} = \frac{C_{\textrm{NN}}(x)}{1 - C_{\textrm{NN}}(x)},
\end{equation}
\begin{equation}
p(x) = \frac{C_{\textrm{NN}}(x)}{1 - C_{\textrm{NN}}(x)} q(x).
\label{pq}
\end{equation}
Therefore, in ideal conditions, the unknown probability density $p(x)$ describing the real dataset can be recovered with the help of the known probability density $q(x)$ and a classifier, using Eq.\ref{pq}. It must be noted that Eq.\ref{pq} is strictly correct only for optimal classifiers, which are unattainable. In our case, the classifier is optimized by minimizing the \textit{crossentropy} loss defined by:
\begin{equation}
L = -\frac{1}{n}\sum_{i=1}^{n}\left[y(x_i)\ln C_{\textrm{NN}}(x_i) + (1-y(x_i))\ln (1-C_{\textrm{NN}}(x_i)) \right]\,,
\end{equation}
where $y(x_i)$ is 1 if $x_i$ is a part of the real dataset, and 0 if $x_i$ is a part of the test dataset. We can safely assume that the final value of loss of the suboptimal classifier is greater than the final value of loss of the optimal classifier:
\begin{equation}
L_{\textrm{optimal}} < L < \ln{2} \,.
\end{equation}
The value of $\ln 2$ is obtained under the assumption of the \textit{worst} possible classifier. To see what these bounds imply, we next split the sum in the loss function into two parts, corresponding to the real and the test distributions:
\begin{equation}
-\frac{1}{n}\sum_{i \in real}\ln C_{\textrm{NN}}^{\textrm{optimal}}(x_i) < -\frac{1}{n}\sum_{i \in real}\ln C_{\textrm{NN}}(x_i) < -\frac{1}{n}\sum_{i \in real}\ln \frac{1}{2},
\label{Lreal}
\end{equation}
\begin{equation}
-\frac{1}{n}\sum_{i \in test}\ln\left[1 - C_{\textrm{NN}}^{\textrm{optimal}}(x_i) \right]< -\frac{1}{n}\sum_{i \in test}\ln\left[1 - C_{\textrm{NN}}(x_i)\right] < -\frac{1}{n}\sum_{i \in test}\ln \frac{1}{2}.
\label{Ltest}
\end{equation}
After expanding inequality \ref{Lreal} we obtain:
\begin{equation}
-\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{C_{\textrm{NN}}^{\textrm{optimal}}(x_i)}{1 - C_{\textrm{NN}}^{\textrm{optimal}}(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln \left[\frac{C_{\textrm{NN}}(x_i)}{1 - C_{\textrm{NN}}(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln 1.
\label{Expanded}
\end{equation}
According to Eq.\ref{pq}, we can recover the real probability density $p(x)$ when using the optimal classifier. However, if one uses a suboptimal classifier, a slightly different probability density $p'(x)$ will be calculated. Since the ratios that appear as arguments of the logarithms in Eq.\ref{Expanded} correspond to distribution ratios, it follows that:
\begin{equation}
-\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{p(x_i)}{q(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{p'
(x_i)}{q(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln 1.
\end{equation}
After some simplification this becomes:
\begin{equation}
\sum_{i \in real} \ln p(x_i) > \sum_{i \in real} \ln p'(x_i) > \sum_{i \in real} \ln q(x_i).
\label{proof1}
\end{equation}
If an analogous analysis is carried out for inequality \ref{Ltest} we get:
\begin{equation}
\sum_{i \in test} \ln p(x_i) < \sum_{i \in test} \ln p'(x_i) < \sum_{i \in test} \ln q(x_i).
\label{proof2}
\end{equation}
From this, it can be seen that the probability density $p'(x)$ is on average closer to the real probability density $p(x)$ than to the test probability density $q(x)$. In a realistic case, Eq.\ref{pq} can't be used to completely recover the real probability density $p(x)$. However, it can be used in an iterative method; starting with a known distribution $q(x)$, we can approach the real distribution more and more with each iteration step.
\subsection{A simple example}
Let us illustrate the recovery of an unknown probability density by using a classifier on a simple example. We start with a set of 50 000 real numbers generated from a random variable with a probability density given by
\begin{equation}
p_{\textrm{real}}(x) = \frac{1}{4} \mathcal{N}(-1,1) + \frac{3}{4}\mathcal{N}(3,1)\,,
\label{eqpreal}
\end{equation}
where $\mathcal{N}(\mu,\sigma^2)$ denotes a normal distribution. A histogram of values in this set is shown in Fig.\ref{hsimple}. Let's now assume we don't know $p_{\textrm{real}}(x)$ and want to recover it using the procedure outlined in the previous subsection. This set will be denoted as the ,,real'' dataset and the underlying probability density will be denoted as the ,,real'' probability density.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{Images/nn_simple_real.png}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{Images/hsimple.png}
\end{subfigure}
\caption{(\textbf{a}) The normalized probability density for the example, given by Eq. \ref{eqpreal}. (\textbf{b}) A histogram of values sampled from the set generated by the same equation.}
\label{hsimple}
\end{figure}
To construct the ,,test'' dataset, we generate values with a uniform probability density in the interval $\left[-10,10 \right]$. Finally, we construct a simple neural network which is used as a classifier that distinguishes the examples from the real dataset from the examples from the test dataset. The classifier we use is a simple \textit{feed-forward} neural network with 100 hidden units using a ReLU activation function. The activation function of the final neural network output is the \textit{sigmoid} function, which we use to constrain the output values to the interval [0,1]. After the classifier is trained to discriminate between the two datasets by minimizing the \textit{binary crossentropy} loss, we evaluate its output at 200 equidistant points between -10 and 10. Using Eq.\ref{pq}, the probability distribution $p_{\textrm{calculated}}(x)$ is calculated from the classifier outputs. The calculated $p_{\textrm{calculated}}(x)$ is compared to the real probability density $p_{\textrm{real}}(x)$ in Fig.\ref{nn_simple_0}.
Although the resulting probability density differs from the real probability density due to the non-ideal classifier, we can conclude that the calculated $p_{\textrm{calculated}}(x)$ is considerably closer to $p_{\textrm{real}}(x)$ than to the uniform probability density $q(x)$ used to generate the test dataset. Now, if we use the calculated $p_{\textrm{calculated}}(x)$ to construct a new test dataset and repeat the same steps, we can improve the results even more. This procedure can therefore iteratively improve the resemblance of $p_{\textrm{calculated}}(x)$ to $p_{\textrm{real}}(x)$, up to the point where the datasets are so similar that the classifier cannot distinguish between them. In this simple example convergence is reached after the 5th iteration, since no significant improvement is observed afterwards. The calculated probability density $p_{\textrm{calculated}}(x)$ after the final iteration is shown in Fig.\ref{nn_simple_0}, compared to the real distribution $p_{\textrm{real}}(x)$. It is clear that in this case the procedure converges, and we could possibly obtain an even better match between $p_{\textrm{calculated}}(x)$ and $p_{\textrm{real}}(x)$ if we used a more optimal classifier.
\begin{figure}[h!t!]
\centering
\includegraphics[width=15cm]{Images/nn_simple_0_new.png}
\caption{The calculated $p_{\textrm{calculated}}(x)$ (blue line) compared to the real probability density $p_{\textrm{real}}$(x) (orange line). (\textbf{a}) The left panel shows the comparison after one iteration of the algorithm, alongside the starting ,,test'' distribution (green line). (\textbf{b}) The right panel shows the comparison after the 5th iteration.}
\label{nn_simple_0}
\end{figure}
In essence, a simple histogram could be used in this simple example to determine the underlying probability distribution instead of using the method described above. However, in case of multivariate probability distributions, which can be products of unknown probability distributions, a histogram approach would not prove useful.
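For reference, a single iteration of this recovery loop for the one-dimensional example can be sketched as follows; the classifier mirrors the simple network described above, while the sample counts and training settings are illustrative assumptions rather than the exact configuration used here:
\begin{verbatim}
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# "Real" samples: 1/4 N(-1,1) + 3/4 N(3,1); "test" samples: uniform on [-10, 10].
n = 50_000
mix = rng.random(n) < 0.25
x_real = np.where(mix, rng.normal(-1, 1, n), rng.normal(3, 1, n))
x_test = rng.uniform(-10, 10, n)

# Simple feed-forward classifier with one hidden layer of 100 ReLU units.
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
clf.compile(optimizer='adam', loss='binary_crossentropy')

x = np.concatenate([x_real, x_test]).reshape(-1, 1)
y = np.concatenate([np.ones(n), np.zeros(n)])
clf.fit(x, y, epochs=5, batch_size=256, verbose=0)

# Density-ratio trick: p(x) ~ C(x) / (1 - C(x)) * q(x), with q = 1/20 on [-10, 10].
grid = np.linspace(-10, 10, 200).reshape(-1, 1)
c = clf.predict(grid, verbose=0).ravel()
p_calc = c / (1.0 - c) * (1.0 / 20.0)
# p_calc would then define the test density used in the next iteration.
\end{verbatim}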
\subsection{Extension to jets}
We would now like to apply the described procedure on the datasets that contain jets. Every jet, represented by a binary tree of depth $N$, consists of $2^N-1$ independent decays producing a maximum of $2^N$ particles in the final state. Since all the decays are isotropic in space, a jet can be described with a 4 $\times$ $(2^N-1)$--dimensional vector $\vec{x}$ and a probability distribution function given by:
\begin{equation}
p\left(\vec{x} \right) = \prod_i^{2^N-1} p(m_1^i, m_2^i | M)p(\theta^i) p(\phi^i),
\label{jet_prob}
\end{equation}
where $i$ denotes the decay index and ($m_1^i$, $m_2^i$, $\theta^i$, $\phi^i$) are the components of the vector $\vec{x}$. Since both angles are uniformly distributed, they contribute to the probability with a simple constant factor. Therefore, when plugging $p\left(\vec{x} \right)$ from Eq.\ref{jet_prob} into Eq.\ref{pq} we can omit the angles, since the constant factors cancel each other out:
\begin{equation}
\prod_i^{2^N-1} p(m_1^i, m_2^i | M) = \frac{C_{NN}(\vec{x})}{1 - C_{NN}(\vec{x})} \prod_i^{2^N-1} q(m_1^i, m_2^i | M).
\label{pq_jets}
\end{equation}
Taking the logarithm of both sides:
\begin{equation}
\sum_i^{2^N-1} \ln p(m_1^i, m_2^i | M) = \ln{C_{NN}(\vec{x})} - \ln({1 - {C_{NN}(\vec{x})}}) + \sum_i^{2^N-1} \ln q(m_1^i, m_2^i | M).
\label{log_pq_jets}
\end{equation}
Unfortunately, we cannot obtain the probability $p(m_1, m_2 \mid M)$ directly from Eq.\ref{log_pq_jets} without solving a linear system of equations. This task proves to be exceptionally challenging computationally due to the high dimensionality of the dataset. In order to avoid this obstacle, we introduce a neural network $f$ to approximate $\ln p(m_1,m_2|M)$. We can optimize this neural network by minimizing the \textit{mean squared error} between the two sides of Eq.\ref{log_pq_jets}.
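As a concrete illustration, $f$ can be realized as a small fully connected network that takes the triplet $(a, b, M)$ as input and returns a single real number interpreted as an unnormalized estimate of $\ln p(m_1, m_2 \mid M)$; the layer sizes below are illustrative assumptions, while the architecture actually used is given in Appendix A:
\begin{verbatim}
import tensorflow as tf

# f(a, b, M) -> unnormalized log-probability of a decay (illustrative sizes).
f_net = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1),   # linear output: a real-valued log-density
])
\end{verbatim}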
\subsection{The 2 Neural Networks (2NN) algorithm}
At this point we are ready to recover the underlying probability distributions from an existing dataset that consists of jets described by the four-momenta of the final particles. We denote the jets from this dataset as ,,real''. The building blocks of the full recovery algorithm are two independent neural networks; the aforementioned classifier $C_{NN}$ and the neural network $f$. Based on the usage of 2 neural networks, we dubbed the algorithm \textit{2NN}. The detailed architectures of both networks are given in Appendix A.
The workflow of the 2NN algorithm is simple: first we initialize the parameters of both neural networks. Then, we generate a test dataset using the neural network $f$. The test dataset and the real dataset are fed into the classifier network, which produces a set of linear equations in the form of Eq.\ref{log_pq_jets}. We approximate the solution to these by fitting the neural network $f$, which in turn produces a new test dataset. The procedure is continued iteratively until there are no noticeable changes in the difference of the real and test distributions. More detailed descriptions of the individual steps are given in the next subsections.
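Schematically, one run of the 2NN algorithm can be summarized as in the sketch below, where each callable argument is a placeholder for one of the steps detailed in the following subsections, and the numerical choices (800 iterations, 50 000 jets per dataset, accuracy threshold 0.65) anticipate the settings used in the Results section:
\begin{verbatim}
def run_2nn(real_jets, f_net, classifier, generate_test_jets, train_classifier,
            classifier_log_ratio, fit_f_to_targets, classifier_accuracy,
            n_iterations=800, batch=50_000):
    """Schematic 2NN loop; each callable stands for a step described below."""
    for _ in range(n_iterations):
        # 1. Generate a test dataset of jets from the current f;
        #    log_q holds the summed ln q of the decays of each generated jet.
        test_jets, log_q = generate_test_jets(f_net, batch)
        # 2. Train the classifier to separate real and test jet images.
        train_classifier(classifier, real_jets[:batch], test_jets)
        # 3. Per-jet targets ln p(x) = ln C - ln(1 - C) + sum ln q, then update f.
        log_p = classifier_log_ratio(classifier, test_jets) + log_q
        fit_f_to_targets(f_net, test_jets, log_p)
        # 4. Stop once the classifier can no longer separate the datasets well.
        if classifier_accuracy(classifier, real_jets[:batch], test_jets) <= 0.65:
            break
\end{verbatim}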
\subsubsection{Generating the test dataset}
After the parameters of the neural network $f$ are initialized, we need to generate a test dataset of jets with known decay probabilities $q(\vec{x})$. The input of the neural network $f$ is a vector consisting of 3 real numbers: $a = m_1/M$, $b = m_2/M$ and $M$. We denote the output of the neural network with $f(a,b,M)$. Due to conservation laws, the sum $a+b$ needs to be less than or equal to 1. We can assume $a \leq b$ without any loss of generality. In order to turn the network outputs into probabilities, a partition function
\begin{equation}
Z(M) = \int_{\Omega} e^{f(a,b,M)} \,\mathrm{d}a \mathrm{d}b
\label{Z}
\end{equation}
needs to be calculated. Here, $\Omega$ denotes the entire probability space and is shown as the gray area in the left panel of Fig.\ref{prob_space}. To calculate the integral in the above expression, the probability space is discretized into 650 equal areas, shown in the right panel of Fig.\ref{prob_space}. These areas are obtained by discretizing the parameters $a$ and $b$ into equidistant segments of length 0.02. After the discretization, the partition function $Z(M)$ becomes:
\begin{equation}
Z(M) \approx \sum_j \sum_{k} e^{f(a_j,b_k,M)} \,.
\label{Z_discrete}
\end{equation}
\begin{figure}[h!t!]
\begin{center}
\resizebox{\columnwidth}{!}{%
\begin{tikzpicture}
\draw[->,thick,>=stealth] (-1,0) -- (12,0) node[right] {{\huge $a$}};
\draw[->,thick,>=stealth] (0,-1) -- (0,12) node[left] {{\huge $b$}};
\draw[dashed,thick] (0,10)--(10,0);
\draw[dashed,thick] (0,0)--(10,10);
\node[rotate=45] at (8,8.5) {\huge $a = b$};
\node[rotate=-45] at (8,2.5) {\huge $a + b = 1$};
\fill[black!10] (0,0) -- (5,5) -- (0,10) -- cycle;
\node[] at (2,5) {{\fontsize{40}{60}\selectfont $\Omega$}};
\draw[->,thick,>=stealth] (14,0) -- (27,0) node[right] {{\huge $a$}};
\draw[->,thick,>=stealth] (15,-1) -- (15,12) node[left] {{\huge $b$}};
\draw[dashed,thick] (15,10)--(25,0);
\draw[dashed,thick] (15,0)--(25,10);
\node[rotate=45] at (23,8.5) {\huge $a = b$};
\node[rotate=-45] at (23,2.5) {\huge $a + b = 1$};
\foreach \x in {0,1,2,...,25} {
\draw[thick] (15+0.2*\x,-0.2+0.2*\x) -- (15+0.2*\x,10.2-0.2*\x);};
\foreach \y in {0,1,2,...,25} {
\draw[thick] (15,-0.2+0.2*\y) -- (15+0.2*\y,-0.2+0.2*\y);};
\draw[thick] (15,-0.2+0.2*26) -- (15+0.2*25,-0.2+0.2*26);
\foreach \y in {0,1,2,...,25} {
\draw[thick] (15,5.2+0.2*\y) -- (20-0.2*\y,5.2+0.2*\y);};
\end{tikzpicture}%
}
\caption{(\textbf{a}) The left panel shows the entire allowed probability space of the parameters $a$ and $b$, designated by $\Omega$. Due to conservation laws, $a+b \leq 1$ needs to hold true. To describe our system, we selected the case where $a \leq b$, which we can do without loss of generality. (\textbf{b}) The right panel shows the discretized space $\Omega$, as used to evaluate the partition function.}
\label{prob_space}
\end{center}
\end{figure}
To generate the jets which form the test dataset, we must generate each decay in the cascading evolution using the neural network $f$. Each of the decays is generated by picking a particular pair of parameters $(a,b)$ from the 650 possible pairs which form the probability space for a given mass $M$. The decay probability is then given by:
\begin{equation}
q(m_1, m_2 \mid M) = \frac{e^{f(a,b,M)}}{Z(M)}\,.
\label{q}
\end{equation}
After applying this procedure we have a test dataset in which each jet is represented as a list of $2^N$ particles and their four-momenta. For each decay, we also store the pairs $(a^i,b^i)$ as well as the corresponding decay probabilities.
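A possible implementation of this sampling step is sketched below; the grid of cell centers follows the discretization described above, while the helper names and the use of a numerically stabilized exponential are our own illustrative choices:
\begin{verbatim}
import numpy as np

def grid_points(n=50):
    """Cell centers (a, b) with a <= b and a + b <= 1 (650 cells for n = 50)."""
    pts = []
    for i in range(n):
        for j in range(i, n - i):
            pts.append(((i + 0.5) / n, (j + 0.5) / n))
    return np.array(pts)

def sample_decay(f_net, M, pts):
    """Draw one (m1, m2) pair for a mother of mass M from q = exp(f) / Z(M)."""
    inputs = np.column_stack([pts, np.full(len(pts), M)])
    logits = f_net.predict(inputs, verbose=0).ravel()
    q = np.exp(logits - logits.max())        # stabilized exp(f)
    q /= q.sum()                             # division by the partition function Z(M)
    idx = np.random.choice(len(pts), p=q)
    a, b = pts[idx]
    return a * M, b * M, np.log(q[idx])      # m1, m2 and ln q, stored for later use
\end{verbatim}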
\subsubsection{Optimizing the classifier}
The classifier used in this work is a convolutional neural network. The input to this type of network is a set of images. For this purpose, all the jets are preprocessed by transforming the list of particles' four-momenta into jet images. Two 32$\times$32 images are produced for a single jet. In both images the axes correspond to the decay angles $\theta$ and $\phi$, while the pixel values are either the energy or the momentum of the particles found in that particular pixel. If a pixel contains two or more particles, their energies and momenta are summed and stored as pixel values. The transformation of the jet representations is done on both the real and the test datasets. We label the ,,real'' jet images with the digit 1 and the ,,test'' jet images with the digit 0. The classifier is then optimized by minimizing the \textit{binary crossentropy} loss between the real and the test datasets. The optimization is performed by the ADAM algorithm \cite{adam}. It is important to note that the sizes of both datasets need to be the same.
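The construction of the jet images can be sketched as follows; the binning of the angles and the input format of the particle list are illustrative assumptions:
\begin{verbatim}
import numpy as np

def jet_to_images(particles, n_pix=32):
    """Map a list of final-state particles to two 32x32 images.

    `particles` is assumed to be a list of dicts with keys
    'theta', 'phi', 'E', 'p' (energy and momentum magnitude).
    """
    img_E = np.zeros((n_pix, n_pix))
    img_p = np.zeros((n_pix, n_pix))
    for part in particles:
        # Map theta in [0, pi] and phi in [0, 2*pi] to pixel indices.
        i = min(int(part['theta'] / np.pi * n_pix), n_pix - 1)
        j = min(int(part['phi'] / (2 * np.pi) * n_pix), n_pix - 1)
        img_E[i, j] += part['E']   # overlapping particles are summed
        img_p[i, j] += part['p']
    return np.stack([img_E, img_p], axis=-1)   # shape (32, 32, 2)
\end{verbatim}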
\subsubsection{Optimizing the neural network $f$}
After the classifier is optimized, a new jet dataset is generated by using the neural network $f$. Just as earlier, the generated jets are first transformed to jet images and then fed to the classifier. Since we have access to each of the decay probabilities for each jet, the right side of Eq.\ref{log_pq_jets} can be easily calculated for all the jet vectors $\vec{x}$ in the dataset. This way we can obtain the desired log value of the total probability for each jet $p(\vec{x})$:
\begin{equation}
\ln p(\vec{x}) = \ln{C_{NN}(\vec{x})} - \ln({1 - {C_{NN}(\vec{x})}}) + \sum_i^{2^N-1} \ln q(m_1^i, m_2^i | M).
\label{p}
\end{equation}
Finally, we update the parameters of the neural network $f$ by minimizing the expression given by:
\begin{equation}
L = \frac{1}{n} \sum_i^n \left[ \sum_{j}^{2^N-1} f(a_i^j,b_i^j,M_i^j) - \ln p_i(\vec{x})\right]^2,
\label{loss}
\end{equation}
where $i$ denotes the jet index and $j$ denotes the decay index in a particular jet. After this step, the weights of the neural network are updated in such a way that the network output values $f(a,b,M)$ are on average closer to the real log value of $p(m_1,m_2 \mid M)$. The updated network $f$ is used to generate the test dataset in the next iteration.
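A sketch of this update step is given below, assuming the per-decay inputs $(a_i^j, b_i^j, M_i^j)$ stored during generation and the per-jet targets $\ln p_i(\vec{x})$ of Eq.\ref{p} are available as arrays; the optimizer settings are illustrative:
\begin{verbatim}
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-4)

def update_f(f_net, decay_inputs, log_p_targets):
    """One gradient step on the mean squared error between the two sides.

    decay_inputs: array of shape (n_jets, n_decays, 3) holding (a, b, M),
    log_p_targets: array of shape (n_jets,) holding the per-jet ln p(x).
    """
    n_jets, n_decays, _ = decay_inputs.shape
    flat = tf.reshape(decay_inputs, (-1, 3))
    with tf.GradientTape() as tape:
        f_vals = tf.reshape(f_net(flat), (n_jets, n_decays))
        pred = tf.reduce_sum(f_vals, axis=1)     # sum of f over the decays of a jet
        loss = tf.reduce_mean((pred - log_p_targets) ** 2)
    grads = tape.gradient(loss, f_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, f_net.trainable_variables))
    return float(loss)
\end{verbatim}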
\subsection{Evaluation of the 2NN algorithm}
Upon completion of each iteration of the algorithm, the underlying probability densities can be obtained from the output values of the neural network $f$ according to Eq.\ref{q}. In the Results section the 2NN algorithm is evaluated in terms of the Kullback-Leibler divergence (KL) in the following way \cite{KLD}:
\begin{equation}
KL(M) = \sum_{j} \sum_{k} p_{\textrm{real}} (m_1^j, m_2^k \mid M)\left[
\ln p_{\textrm{real}} (m_1^j, m_2^k \mid M) - f(a^j, b^k, M) + \ln{Z(M)}\right]
\label{kl}
\end{equation}
where the sum is performed over the whole probability space. The KL-divergence is a non-negative measure of the difference between two probability densities defined on the same probability space. If the probability densities are identical, the KL-divergence is zero.
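For completeness, the discrete KL estimate can be evaluated on the same grid of $(a, b)$ cells, for instance as in the following sketch, where \texttt{p\_real} is assumed to hold the true discrete density on the grid:
\begin{verbatim}
import numpy as np

def kl_divergence(f_net, M, pts, p_real):
    """KL(M) between the true discrete density p_real and the one implied by f."""
    inputs = np.column_stack([pts, np.full(len(pts), M)])
    f_vals = f_net.predict(inputs, verbose=0).ravel()
    # log Z(M) via a numerically stable log-sum-exp
    log_z = np.log(np.exp(f_vals - f_vals.max()).sum()) + f_vals.max()
    mask = p_real > 0                      # 0 * log(0) terms contribute nothing
    return np.sum(p_real[mask] * (np.log(p_real[mask]) - f_vals[mask] + log_z))
\end{verbatim}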
\subsection{Hardware and software}
The code for the calculations in this research is written in the Python programming language using the \textit{Tensorflow 2} and \textit{Numpy} modules. An NVIDIA Quadro P6000 GPU, obtained through the NVIDIA Grant for academic research, is used to speed up the calculations.
\section{Results}
In this section we present our findings after applying the $2NN$ algorithm to 500 000 jets created using the particle generator described in Section~\ref{ParticleGenerator}. In each iteration, the classifier is optimized using 50 000 randomly picked jets from the ,,real'' dataset and 50 000 jets generated using the neural network $f$. To optimize the neural network $f$, we use 50 000 jets as well. The algorithm performed 800 iterations. After the final iteration of the $2NN$ algorithm we obtain the calculated probability densities, which can then be used to generate samples of jets. First, we show the energy spectrum of the final-state particles in jets generated by the calculated probabilities, directly compared to the energy spectrum of particles taken from jets belonging to the ,,real'' dataset, in Figure \ref{hE}.
\begin{figure}[h!t!]
\centering
\includegraphics[width=15cm]{Images/hE.png}
\caption{The energy spectrum of the particles in the final state in jets generated by the calculated probabilities, compared to the energy spectrum of particles taken from jets belonging to the ,,real'' dataset.}
\label{hE}
\end{figure}
The plotted spectra are obtained using 10 000 jets from each dataset. The error bars in the histogram are smaller than the marker size and are hence not visible. A resemblance between the two spectra is notable, especially at higher energies. This points to the fact that the calculated probabilities are approximately correct, so we can use them to generate samples of jets that resemble ,,real'' jets. To further examine the calculated probability densities, we need to reconstruct the hidden resonances which are not found in the final state. For this purpose, the calculated probability densities for mother particle masses of $M = 25.0$, $M = 18.1$, $M = 14.2$ and $M = 1.9$ are analyzed and compared to the real probability densities in the following subsections. These masses are chosen since they match the masses of the hidden resonances introduced in Table \ref{TableParticles}.
\subsection{Mother particle with mass $M$ = 25.0}
The calculated 2$d$-probability density $p(m_1,m_2 \mid M)$ is shown in Figure \ref{probs25}, compared to the real probability density. Visual inspection reveals that the 3 possible decays of the particle of mass $M = 25.0$ are recognized by the algorithm. After dividing the probability space as in panel (c) of Figure \ref{probs25} with the lines $m_2 > 16.0$ and $m_2 < 10.0$, we calculate the mean and the variance of the data on each subspace. As a result, we obtain $(m_1, m_2) = (18.1 \pm 0.5, 6.1 \pm 0.5)$ for $m_2 > 16.0$, $(m_1, m_2) = (14.0 \pm 0.7, 8.4 \pm 0.7)$ for $10.0 \leq m_2 \leq 16.0$ and $(m_1, m_2) = (4.8 \pm 0.2, 4.6 \pm 0.2)$ for $m_2 \leq 10.0$. These mean values closely agree with the masses of the resonances expected as the products of decays of the particle with mass $M = 25.0$. The calculated small variances indicate that the algorithm is very precise. The total decay probabilities for each of the subspaces are equal to $p_1 = 0.48$, $p_2 = 0.47$, $p_3 = 0.05$, which approximately agree with the probabilities of the decay channels of the particle with mass $M = 25.0$, as defined in Table \ref{TableParticles}.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/probability_25.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/preal_25.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/pl_25.png}
\caption{}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 25.0$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.}
\label{probs25}
\end{figure}
These results show that we can safely assume that the $2NN$ algorithm successfully recognizes all the decay modes of the particle that initiates a jet. To quantify the difference between the calculated probability density and the real probability density, we use the KL-divergence.
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/kl_25.png}
\caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of particle of mass $M = 25.0$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.}
\label{kl25}
\end{figure}
Figure \ref{kl25} shows the dependence of the KL-divergence on the iteration of the $2NN$ algorithm. First, we observe an initial steep decrease in the value of the divergence. Large variations in divergence value are observed later. This is an indicator that the approximate probability density is found relatively quickly - after a few hundred iterations. As the algorithm decreases the width of the peaks found in the probability distribution, the KL-divergence becomes very sensitive to small variations in the location of these peaks and can therefore vary by a large relative amount.
\subsection{Mother particle with mass $M$ = 18.1}
A similar analysis is performed for the particle with mass $M = 18.1$. The calculated probability density is shown in Figure \ref{probs18}, compared to the expected probability density. In this case, only one decay is allowed, so a division into probability subspaces is not necessary, as it was in the case $M = 25.0$. The calculated mean and variance of the shown probability density are $(m_1, m_2) = (5.9 \pm 0.4, 8.2 \pm 0.6)$. In this case, just as in the former, the calculated values closely agree with the only possible decay, in which the mother particle decays into two particles of masses 6.1 and 8.4. Also, just as in the previous subsection, the obtained result is very precise. Therefore, the algorithm can successfully find hidden resonances, as well as recognize the decay channels, without ever seeing them in the final state in the ,,real'' dataset.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/probability_18.png}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/preal_18.png}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 18.1$. (\textbf{a}) The calculated density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. }
\label{probs18}
\end{figure}
The calculated KL-divergence in the case of the particle with mass $M = 18.1$ decreases over time in a very smooth manner, as can be seen in Figure \ref{kl18}. We believe this could be due to the simpler expected probability density, which the algorithm manages to find very quickly.
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/kl_18.png}
\caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of particle of mass $M = 18.1$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.}
\label{kl18}
\end{figure}
\subsection{Mother particle with mass $M$ = 14.2}
Figure \ref{probs14} shows the 2$d$-probability density for the decaying particle of mass $M = 14.2$. In this case, we can identify 3 possible decay channels, which are not as clearly separated as the channels in the previous subsections. Similar to the case of the decaying particle of mass $M = 25.0$, we divided the probability space into 3 subspaces, each of which covers one of the possible decays. In this case, the three subspaces cover the areas where $m_2 \leq 4.0$, $4.0 < m_2 \leq 5.5 $ and $m_2 > 5.5$. The mean values of the probability density on each of the subspaces are $(m_1,m_2) = (2.4 \pm 0.5, 2.9 \pm 0.7)$, $(m_1,m_2)= (2.7 \pm 0.7, 4.3 \pm 0.3)$ and $(m_1,m_2) = (4.4 \pm 0.4, 6.2 \pm 0.3)$, respectively. The allowed decays of a mother particle with mass $M$ = 14.2 in the ,,real'' data are into channels with masses $(1.9,1.9)$, $(1.9, 4.4)$ and $(4.4, 6.1)$, which agree with the calculated results. However, in this case the calculations show a higher variance, especially for decays where one of the products is a particle with mass 1.9. The total probabilities of decay in each of the subspaces are 0.89, 0.05 and 0.06, respectively. The relative probabilities of the decay channels into particles with masses (4.4, 6.1) and (1.9, 4.4) are approximately as expected. However, the algorithm predicts more decays in the channel (1.9,1.9) than expected. The KL-divergence shows a steady decrease with occasional spikes, as shown in Figure \ref{kl14}.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/probability_14.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/preal_14.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/pl_14.png}
\caption{}
\label{probs2c}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 14.2$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.}
\label{probs14}
\end{figure}
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/kl_14.png}
\caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of particle of mass $M = 14.2$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.}
\label{kl14}
\end{figure}
\subsection{Mother particle with mass $M$ = 1.9}
The last probability density we analyze is the one for the mother particle with mass $M$ = 1.9. Figure \ref{probs2} shows the calculated probability density. It can be seen that one of the decay modes present in the ,,real'' data, namely the decay into the $(0.1, 0.1)$ channel, is not recognized by the algorithm, while the decay into the $(0.1, 1.3)$ channel is visible. If we isolate this decay as shown in the right panel of Figure \ref{probs2}, we get a mean value of $(m_1, m_2) = (0.14 \pm 0.09, 1.27 \pm 0.09)$, which agrees with the expected decay. We also observe significant decay probabilities along the line $m_1 + m_2 = 1.9$. The decays that correspond to the points on this line in effect create particles with zero momentum in the rest frame of the mother particle. In the lab frame this corresponds to the daughter particles flying off in the same direction as the mother particle. Since they reach the detector at the same time, they are registered as one particle of total mass $M = 1.9$. Thus, we can conclude that the probabilities on this line have to add up to the total probability of the mother particle not decaying. The calculated probabilities in the case of no decay and in the case of decaying into particles with masses $(0.1,1.3)$ are 0.71 and 0.29, respectively. We note that the relative probabilities are not correct, but 2 of the 3 decay modes are still recognized by the algorithm. The KL-divergence in this case cannot produce reasonable results, simply because multiple points in the $(m_1,m_2)$ phase space produce the same decay, and it is therefore omitted from the analysis.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/probability_2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/preal_2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/pl_2.png}
\caption{}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 1.9$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.}
\label{probs2}
\end{figure}
\subsection{The accuracy of the classifier}
The accuracy of the classifier is defined as the fraction of correctly ,,guessed'' samples on a given dataset. The criterion used for guessing is checking whether the output of the classifier, $C_{NN}$, is greater than 0.5. The accuracy can indirectly indicate how distinguishable two datasets are. In our algorithm, after starting from a test probability density, we approach the real probability density with increasing iteration number, so we can expect that the two jet datasets, the ,,real'' and the ,,test'' dataset, become less and less distinguishable over time. In Figure \ref{acc} we show the accuracy of the classifier as a function of the iteration number.
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/acc.png}
\caption{The calculated accuracy of the classifier as a function of the iteration number.}
\label{acc}
\end{figure}
After an initially high value, the accuracy decreases with growing iteration number, which demonstrates that the test dataset becomes more and more similar to the real dataset. Ideally, the datasets are no longer distinguishable by a given classifier if the evaluated accuracy reaches 0.5. Therefore, we can use the evaluated accuracy of the classifier as a criterion for stopping the algorithm. Other measures can also be used as the stopping criterion, such as the loss value of the classifier or the area under the receiver operating characteristic (ROC) curve of the classifier. In this work, the algorithm is stopped after the accuracy reaches a value of 0.65, because we did not see any significant decrease in the accuracy once it reached this value. An accuracy value of 0.65 clearly shows that the classifier is capable of further discriminating between the two datasets. This is explained by the fact that the neural network $f$ and its hyperparameters are not fully optimized. For the algorithm to perform better, we would need to optimize the neural network $f$ and possibly improve its architecture for the selected task.
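As an illustration, the stopping check based on the classifier accuracy can be written as below; the 0.5 decision threshold follows the definition above and the 0.65 target follows the choice made in this work:
\begin{verbatim}
import numpy as np

def classifier_accuracy(classifier, real_images, test_images):
    """Fraction of correctly guessed samples, using a 0.5 output threshold."""
    c_real = classifier.predict(real_images, verbose=0).ravel()
    c_test = classifier.predict(test_images, verbose=0).ravel()
    correct = np.sum(c_real > 0.5) + np.sum(c_test <= 0.5)
    return correct / (len(c_real) + len(c_test))

# stop = classifier_accuracy(classifier, real_imgs, test_imgs) <= 0.65
\end{verbatim}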
\newpage
\section{Discussion}
In this work we propose a method for calculating the underlying probability distributions in particle decays, using only the data that can be collected in a real-world physical system. First, we developed an artificial physical system based on the QCD fragmentation process. Next, we presented the core part of the method: the $2NN$ algorithm, which we described in detail. The algorithm performs very well when tested on the developed physical system. It accurately predicts most of the hidden resonant particles, as well as their decay channels, which can occur in the evolution of jets. The energy spectra of the particles in the final state can also be accurately reproduced.
Although tested only on the developed artificial physical system, we believe that the method is general enough to be applicable to real-world physical systems, such as collisions of high-energy particles, with a few possible modifications. For example, we hope that this method can in the future prove helpful in measuring the fragmentation functions of quarks and gluons. Also, one could employ such a method in the search for supersymmetric particles of unknown masses, or in measuring the branching ratios of known decays.
The $2NN$ algorithm does not specify the exact architecture of the neural networks, nor the representation of the data used. Furthermore, the classifier does not need to be a neural network - it can be any machine learning technique which maximizes likelihood. Although the algorithm has a Generative Adversarial Network (GAN)-like structure, it converges readily and does not show the usual issues associated with GANs, such as mode collapse or vanishing gradients. The downside of the presented algorithm is its high computational requirements. Continuous probability distributions, which we expect to occur in nature, are approximated by discrete probability distributions. In the quest for higher precision and a better description of reality, one always aims to increase the resolution of the discrete steps, but this carries a high computational cost. Also, the used neural networks are not fully optimized, which slows down the convergence of the algorithm. In conclusion, in order to cut down computational costs, a more thorough analysis of convergence is needed to achieve better performance.
In future work we hope to make the method even more general and thus even more applicable to real-world physical systems. In particular, we want to introduce angle dependent probability distributions, which can be retrieved from some detector data. We would also like to investigate the possibility of including other decay modes, such as $1 \rightarrow 3$ type decays.
\newpage
\section{Introduction}
\IEEEPARstart{I}{mages} captured under low-light conditions often suffer from poor visibility, unexpected noise, and color distortion.
In order to take high-quality images in low-light conditions, several operations including setting long exposures, high ISO, and flash are commonly applied. However, solely turning up the brightness of dark regions will inevitably amplify image degradation.
To further mitigate the degradation caused by low-light conditions, several traditional methods have been proposed. Histogram Equalization (HE)~\cite{pizer1990contrast} rearranges the pixels of the low-light image to improve the dynamic range of the image. Retinex-based methods~\cite{wang2013naturalness, wang2014variational} decompose the low-light images into illumination and reflection maps and obtain the intensified image by fusing the enhanced reflection map and illumination map. Dehazing-based methods~\cite{dong2011fast, li2015low} regard the inverted low-light image as a haze image and improve visibility by applying dehazing. Although these methods can improve brightness, especially for dark pixels, they barely consider realistic lighting factors, often making the enhanced results visually tenuous and inconsistent with the actual scene.
\begin{figure}
\begin{center}
\subfigure[Input]{
\includegraphics[width=0.48\linewidth]{Figures/Fig-1/Input.jpg}
}\hspace*{-2mm}
\subfigure[Zero-DCE~\cite{guo2020zero}]{
\includegraphics[width=0.48\linewidth]{Figures/Fig-1/Zero-DCE.jpg}
}
\subfigure[EnglightenGAN~\cite{jiang2021enlightengan}]{
\includegraphics[width=0.48\linewidth]{Figures/Fig-1/EnlightenGAN.jpg}
}\hspace*{-2mm}
\subfigure[Ours]{
\includegraphics[width=0.48\linewidth]{Figures/Fig-1/Ours.jpg}
}
\caption{Visual results of the proposed method compared with the state-of-the-art unsupervised low-light enhancement methods. The low-light image of (a) is from EnlightenGAN test set~\cite{jiang2021enlightengan}.}
\label{fig:intro}
\end{center}
\end{figure}
Recently, Deep Convolutional Neural Networks (CNNs) have set the state-of-the-art in low-light image enhancement. Compared with traditional methods, CNNs learn better feature representations and obtain enhanced results with superior visual quality, benefiting from large datasets and powerful computational resources. However, most CNN-based methods require training examples with references, whereas it is extremely challenging to simultaneously capture low-light and normal-light images of the same visual scene. To eliminate the reliance on paired training data, several unsupervised deep learning-based methods~\cite{guo2020zero,zhang2020self,jiang2021enlightengan} have been proposed. These algorithms are able to restore images with better illumination and contrast in some cases. However, most unsupervised methods heavily rely on carefully selected multi-exposure training data or unpaired training data, which prevents these approaches from generalizing well to various types of images. Therefore, it is of great interest to seek a novel strategy to deal with different scenarios in the wild.
In this study, we propose an unsupervised low-light image enhancement algorithm based on an effective prior termed histogram equalization prior~(HEP). Our work is motivated by an interesting observation on pre-trained networks: the feature maps of a histogram equalization enhanced image and of the ground truth are similar. Intuitively, the feature maps of histogram equalization enhanced images can directly provide abundant texture and luminance information~\cite{geirhos2018imagenet}. We show theoretically and empirically that this generic property of the histogram equalization enhanced image holds for many low-light images; more details are given in Section~\ref{method}. This inspires us to regularize the feature similarity between the histogram equalization enhanced images and the restored images.
Following~\cite{Chen2018Retinex}, we split the low-light image enhancement process into two stages: image brightening and image denoising. The first stage decomposes the low-light images into illumination and reflectance maps, and the reflectance maps can be regarded as restored images. We formulate the histogram equalization prior to guide the training process and add an illumination smoothness loss to suppress the texture and color information in the illumination map. However, according to the derivation based on Retinex theory~\cite{zhang2021beyond}, the reflectance maps are contaminated by noise. To improve the image quality, the second stage works as an enhancer to denoise the reflectance map. In this stage, we propose an unsupervised denoising model based on disentangled representation to remove the noise and generate the final enhanced image. The disentanglement is achieved by splitting the content and noise features in a reflectance map using content encoders and noise encoders. Inspired by~\cite{bao2018towards}, we add a KL divergence loss to regularize the distribution range of the extracted noise features to suppress the contained content information. Moreover, we adopt the adversarial loss and the cycle-consistency loss as regularizers to assist the generator networks in yielding more realistic images and preserving the content of the original image. Extensive experiments demonstrate that our method performs favorably against the state-of-the-art unsupervised low-light enhancement algorithms and even matches the state-of-the-art supervised algorithms. Fig.\ref{fig:intro} shows an example of enhancing a low-light image. In comparison to state-of-the-art methods, our method delivers improved image brightness while preserving the details.
In summary, the main contributions of this work are as follows:
1. We propose an effective prior termed histogram equalization prior (HEP) for low-light image decomposition and add an illumination smoothness loss to suppress the texture and color information in the illumination map.
2. We introduce a noise disentanglement module to disentangle the noise and content in the reflectance maps with the reliable aid of unpaired clean images.
3. We build an unsupervised low-light image enhancement framework based on Retinex and disentangled representation, possessing more effective training and faster convergence speed.
4. We demonstrate that the proposed method achieves remarkable performance compared with the state-of-the-art unsupervised algorithms and even matches the state-of-the-art supervised algorithms.
The rest of this paper is organized as follows. Section~\ref{related} provides a brief review of some related works. Section~\ref{method} presents our proposed histogram equalization prior first, then introduces the decomposition network, finally, presents the proposed noise encoder. Section~\ref{experiment} illustrated the experimental results. Section~\ref{ablation} provided the ablation studies on each component. Finally, concluding remarks are provided in Section~\ref{conclusion}.
\section{Related Work}
\label{related}
\textbf{Conventional Methods} The conventional methods for low-light image enhancement can be roughly divided into three categories: Gamma Correction (GC)~\cite{farid2001blind}, Histogram Equalization (HE)~\cite{pizer1990contrast}, and Retinex~\cite{land1971lightness}. Gamma correction applies a nonlinear mapping to the tone curve of the image, raising the ratio between the dark and bright parts of the signal to improve contrast. However, the global parameter leads to local over/under-exposure, and its value is difficult to select. Rahman~{\emph{et al.}}~\cite{rahman2016adaptive} proposed an adaptive gamma correction method that dynamically determines the intensity transformation function based on the statistical characteristics of the image.
Histogram Equalization stretches the dynamic range of the image by evenly distributing the pixel values, improving its contrast and brightness. However, it applies the adjustment globally, which leads to unexpected local overexposure and amplifies noise. Adaptive Histogram Equalization (AHE)~\cite{pizer1987adaptive} has been proposed to map the histogram of each local region to a simple mathematical distribution. Pizer~{\emph{et al.}}~\cite{pizer1990contrast} proposed Contrast Limited Adaptive Histogram Equalization (CLAHE). This method sets a threshold; if a histogram bin exceeds the threshold, it is clipped and the excess is redistributed evenly over all bins.
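For reference, global histogram equalization and its contrast-limited adaptive variant are readily available in standard image processing libraries; the following sketch, using OpenCV and applying the equalization to the luminance channel, is one possible illustration (the clip limit and tile size are common defaults, not values prescribed by the cited works):
\begin{verbatim}
import cv2

img = cv2.imread('low_light.jpg')                        # BGR image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

y_he = cv2.equalizeHist(y)                               # global HE on luminance
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
y_clahe = clahe.apply(y)                                 # contrast-limited AHE

he_img = cv2.cvtColor(cv2.merge([y_he, cr, cb]), cv2.COLOR_YCrCb2BGR)
clahe_img = cv2.cvtColor(cv2.merge([y_clahe, cr, cb]), cv2.COLOR_YCrCb2BGR)
\end{verbatim}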
Retinex theory is a computational theory of color constancy. As a model of human visual perception, these methods decompose images into reflectance and illumination maps. MSR~\cite{jobson1997multiscale} obtains enhanced results by fusing different single-scale Retinex outputs. MSRCR~\cite{jobson1997multiscale} improves the color distortion problem of the previous methods. However, the Retinex methods can lead to unrealistic or partially over-enhanced results. Inspired by the Retinex theory, NPE~\cite{wang2013naturalness} was proposed for the enhancement of non-uniform illumination images. MF~\cite{fu2016fusion} was proposed to apply multi-layer fusion to image enhancement under different lighting conditions. LIME~\cite{guo2016lime} estimates the illumination map of the image and smooths it for enhancement. SRIE~\cite{fu2016weighted} estimates the illumination map and the reflectance map simultaneously through a weighted variational model.
\textbf{Deep learning based Methods} Deep learning-based methods have dominated the research of low-light image enhancement. Lore~{\emph{et al.}}~\cite{lore2017llnet} proposed the first convolutional neural network for low-light image enhancement, termed LL-Net, which performs contrast enhancement and denoising based on a deep auto-encoder. Chen~{\emph{et al.}}~\cite{Chen2018Retinex} proposed Retinex-Net, which includes a Decom-Net that splits the input images into reflectance and illumination maps, and an Enhance-Net that adjusts the illumination map for low-light enhancement. Zhang~{\emph{et al.}} proposed KinD~\cite{zhang2019kindling}, which is similar to Retinex-Net. It presented a new decomposition network, a reflectance map restoration network, and an illumination map adjustment network, and achieved outstanding performance in low-light image enhancement. Zhang~{\emph{et al.}} later proposed KinD++~\cite{zhang2021beyond}, which improves upon KinD and achieves state-of-the-art performance. Guo~{\emph{et al.}}~\cite{guo2020zero} proposed a zero-reference learning method named Zero-DCE, which enhances images through an intuitive and straightforward nonlinear curve mapping. However, Zero-DCE heavily relies on multi-exposure training data. Zhang~{\emph{et al.}}~\cite{zhang2020self} proposed a self-supervised method that uses a maximum-entropy loss for better image decomposition, but the restored images still suffer from noise contamination.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{Figures/Fig-2.jpg}
\caption{Overview of the framework. The proposed method consists of two stages: (a) light-up and (b) noise disentanglement. The light-up module first decomposes the low-light image into an illumination map and reflectance map. Then the noise disentanglement module denoises the reflectance map to yield the final enhanced image. In (a), the bright channel is a 1-channel image, which is obtained by calculating the maximum channel value of the input RGB image. Then the bright channel and the input image are concatenated together to form a 4-channel image as the input of the network. In (b), blue arrows represent the data flow of the noise domain, orange arrows represent the data flow of the clean domain. $E^N$ is the noise encoder for noise images; $E^C_Y$ and $E^C_X$ are the content encoders for noise and clean images; $G_X$ and $G_Y$ are noise image and clean image generators.}
\label{fig arc}
\end{figure*}
\textbf{Image to Image Translation} The Generative Adversarial Network~(GAN) is among the most influential generative models in computer vision. Building on the powerful generative capability of GANs, image-to-image translation has become an important way to achieve image enhancement by converting corrupted images to sharp images. Zhu~{\emph{et al.}}~\cite{zhu2017unpaired} proposed CycleGAN, which showed a tremendous capacity for image domain transfer. Liu~{\emph{et al.}}~\cite{liu2017unsupervised} proposed UNIT, which learns a shared latent representation for diverse image translation. Lee~{\emph{et al.}}~\cite{lee2018diverse} proposed DRIT, which separates the latent space into a content space and an attribute space; the content space is shared while the attribute space is domain-specific. Yuan~{\emph{et al.}}~\cite{yuan2018unsupervised} proposed a nested CycleGAN to achieve unsupervised image super-resolution. Lu~{\emph{et al.}}~\cite{lu2019unsupervised} extended DRIT and proposed to decompose the image into an image content domain and a noise domain to achieve unsupervised image deblurring. Based on Lu's work, Du~{\emph{et al.}}~\cite{du2020learning} added a Background Consistency Module and a Semantic Consistency Module to the networks, learning robust representations under dual-domain constraints in both the feature and image domains. Jiang~{\emph{et al.}}~\cite{jiang2021enlightengan} proposed EnlightenGAN, a backbone model for low-light image enhancement based on adversarial learning. However, EnlightenGAN relies on a large number of parameters for good performance.
\section{Methodology}
\label{method}
The main purpose of our method is to recover texture details, reduce noise and color bias, and maintain sharp edges for low-light image enhancement. As shown in Fig.\ref{fig arc}, the proposed method consists of two components: 1) a Light Up Module (LUM); 2) a Noise Disentanglement Module (NDM). The first stage improves the brightness of the images, and the second stage removes the noise from the images.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{Figures/Fig-3.jpg}
\caption{Feature maps on the $conv4\_{1}$ layer of VGG-19 networks pre-trained on ImageNet.}
\label{fig prior}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/Fig-4.jpg}
\caption{Histogram of the cosine similarities. Green: cosine similarities between the feature maps of the input low-light images and the ground truth. Blue: cosine similarities between the histogram equalization prior and the ground truth.}
\label{fig cos}
\end{figure}
For low-light image enhancement, unsupervised learning-based methods are complicated to implement. The main reason is that texture and color information in low-light images is difficult to extract without the aid of paired ground-truth data or prior information. Therefore, we investigate an effective prior to guide the training process and maintain the texture and structure. In the following subsections, we first introduce the proposed histogram equalization prior in Section~\ref{hist}. Then, we present the method to decompose the low-light images into reflectance maps and illumination maps in Section~\ref{lightup}. In Section~\ref{noise}, we discuss the approach to disentangle the noise and content in the reflectance maps.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Figures/Fig-5.jpg}
\caption{Feature maps on different layers of VGG-19 networks pre-trained on ImageNet with a histogram equalization enhanced image.}
\label{fig fea}
\end{figure}
\subsection{Histogram Equalization Prior}
\label{hist}
The histogram equalization prior is based on the histogram equalization enhanced image. Traditional histogram equalization can make dark images visible by stretching their dynamic range via manipulating the corresponding histogram. However, it is not flexible enough for visual property adjustment in local regions and leads to undesirable local appearances, e.g., under/over-exposure and amplified noise. Directly encouraging the pixels of the output image to match the histogram equalization enhanced image would therefore transfer these unpleasant local artifacts to the output. Inspired by \cite{johnson2016perceptual}, we instead adopt VGG feature maps to constrain the perceptual similarity between the enhanced output and the histogram equalization enhanced version of the low-light image. As shown in Fig.\ref{fig prior}, we can observe that the feature map of the input low-light image carries little semantic information~\cite{wang2021rethinking}. In contrast, the feature map of the histogram equalization enhanced image has rich semantic information and is remarkably similar to the feature map of the ground truth.
To further verify the validity of the histogram equalization prior, we selected 500 paired images from the LOL dataset~\cite{Chen2018Retinex}. We calculate the cosine similarity between the feature maps of the histogram equalization enhanced image and those of the ground truth. Fig.\ref{fig cos} shows the histogram of cosine similarities over all 500 low-light images. We can observe that about $80\%$ of the cosine similarities are concentrated above $0.8$. Compared with the cosine similarities between the feature maps of the input low-light images and those of the ground truth, the cosine similarities have been substantially improved. This statistic provides strong support for our histogram equalization prior and indicates that we can adopt this prior instead of the ground truth to guide the training process.
Fig.\ref{fig fea} shows feature maps from different layers of a VGG-19~\cite{simonyan2014very} network pre-trained on ImageNet~\cite{deng2009imagenet}, computed on a histogram equalization enhanced image. Feature maps closer to the input layer pay more attention to specific texture details, and some feature maps can also show the shape of the toy's face. Feature maps farther away from the input layer are more concerned with semantic and abstract information, such as the characteristics of the toy's eyes and nose. The feature maps of the deepest layers become obscure and no longer provide adequate information, while the features within each group of feature maps are similar. Based on these observations, we select the feature maps of the $conv4\_{1}$ layer to compute the feature similarity.
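To make the verification above concrete, a minimal PyTorch sketch of the feature extraction and cosine-similarity computation is given below. The assumption that $conv4\_1$ corresponds to index 19 of torchvision's VGG-19 feature stack, as well as the function names, are ours and not part of any released code.
\begin{verbatim}
import torch.nn.functional as F
from torchvision import models

# conv4_1 is assumed to be layer index 19 of torchvision's VGG-19 features.
vgg_conv4_1 = models.vgg19(
    weights=models.VGG19_Weights.IMAGENET1K_V1).features[:20].eval()
for p in vgg_conv4_1.parameters():
    p.requires_grad_(False)

def conv4_1_features(img):
    # img: (N, 3, H, W) RGB tensor scaled to [0, 1]
    mean = img.new_tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    std = img.new_tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    return vgg_conv4_1((img - mean) / std)

def feature_cosine_similarity(img_a, img_b):
    # one cosine-similarity score per image pair
    fa = conv4_1_features(img_a).flatten(1)
    fb = conv4_1_features(img_b).flatten(1)
    return F.cosine_similarity(fa, fb, dim=1)
\end{verbatim}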
\subsection{Light Up}
\label{lightup}
The first stage improves the brightness of the images based on the Retinex theory. According to the Retinex theory, the images can be decomposed into reflectance maps and illumination maps. Mathematically, a degraded low-light image can be naturally modeled as follows:
\begin{equation}
I=R \circ L + N
\end{equation}
\noindent where $I$ stands for the input image, $R$ for the reflectance map, $L$ for the illumination map, and $N$ for the noise component; $\circ$ represents element-wise multiplication.
As the illumination map determines the dynamic range of the image, it is assumed to be unaffected by noise. In contrast, the reflectance map represents the intrinsic properties of the image, which are often corrupted by noise during the imaging process. Hence, by taking simple algebraic steps~\cite{zhang2021beyond}, we have the following formula:
\begin{equation}
I=R \circ L + N=R \circ L + \tilde{N} \circ L =(R + \tilde{N}) \circ L=\tilde{R} \circ L
\end{equation}
\noindent where $\tilde{N}$ stands for the degradation with the illumination decoupled, and $\tilde{R}$ represents the polluted reflectance map.
According to the above theory, the reflectance map can be regarded as a restored image contaminated by noise. Therefore, we design a neural network to decompose the low-light images into reflectance and illumination maps, and then send the reflectance maps to the NDM for further denoising. We follow a network architecture similar to the one used in~\cite{zhang2020self}; the module framework is shown in Fig.\ref{fig arc}(a). It first uses a 9$\times$9 convolutional layer to extract features from the input image. Then, three 3$\times$3 convolutional+ReLU layers and one 3$\times$3 deconvolutional+ReLU layer follow. A residual feature from the conv2 layer is concatenated with the feature from the deconv layer and fed to a 3$\times$3 convolutional+ReLU layer. The feature from this layer is concatenated with the feature from another 3$\times$3 convolutional+ReLU layer that extracts features directly from the input image. Finally, two 3$\times$3 convolutional layers project the reflectance map and the illumination map from the feature space. A sigmoid function constrains both the reflectance map and the illumination map to the range of [0,1].
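A minimal PyTorch sketch of such a decomposition network is given below. The channel width (32), the use of stride-1 layers throughout, and the class and variable names are assumptions made for illustration; only the overall layout (a 9$\times$9 stem, 3$\times$3 convolution/deconvolution layers, two skip concatenations, and sigmoid-bounded outputs) follows the description above.
\begin{verbatim}
import torch
import torch.nn as nn

class DecomNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.conv0 = nn.Conv2d(4, ch, 9, padding=4)  # 9x9 feature extraction
        self.conv1 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.conv2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.conv3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.fuse = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(True))
        self.shallow = nn.Sequential(nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(True))
        self.out_R = nn.Conv2d(2 * ch, 3, 3, padding=1)
        self.out_L = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, rgb):
        bright = rgb.max(dim=1, keepdim=True)[0]     # bright channel
        x = torch.cat([rgb, bright], dim=1)          # 4-channel input
        f0 = self.conv0(x)
        f2 = self.conv2(self.conv1(f0))
        f3 = self.conv3(f2)
        up = self.deconv(f3)
        fused = self.fuse(torch.cat([up, f2], dim=1))      # skip from conv2
        feat = torch.cat([fused, self.shallow(x)], dim=1)  # skip from the input
        R = torch.sigmoid(self.out_R(feat))                # reflectance map
        L = torch.sigmoid(self.out_L(feat))                # illumination map
        return R, L
\end{verbatim}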
Due to the lack of ground-truth data to guide the training process, it is difficult to recover these two components from low-light images. We adopt the histogram equalization prior to constrain the reflectance map. We define an MSE loss between the feature map of the output reflectance map and the feature map of the histogram equalization enhanced input image, which we call the histogram equalization prior loss. The loss function can be formulated as follows:
\begin{equation}
\mathcal{L}_{hep} = \parallel F(\tilde{R}) - F(H(I))\parallel_2^2
\end{equation}
\noindent where $F(\cdot)$ denotes the feature map extracted from a VGG-19 model pre-trained on ImageNet, and $H(\cdot)$ denotes the histogram equalization operation.
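A possible implementation of this loss, assuming OpenCV-style per-channel histogram equalization and a fixed VGG-19 feature extractor such as the $conv4\_1$ sketch given earlier, is:
\begin{verbatim}
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def histogram_equalization(img):
    # img: (N, 3, H, W) tensor in [0, 1]; per-channel HE via OpenCV
    out = []
    for im in img.detach().cpu().numpy():
        chans = [cv2.equalizeHist((c * 255).astype(np.uint8)) for c in im]
        out.append(np.stack(chans) / 255.0)
    return torch.tensor(np.array(out), dtype=img.dtype, device=img.device)

def hep_loss(reflectance, low_light, feature_extractor):
    # MSE between VGG features of the reflectance map and of the
    # histogram-equalized low-light input (the histogram equalization prior)
    target = histogram_equalization(low_light)
    return F.mse_loss(feature_extractor(reflectance),
                      feature_extractor(target))
\end{verbatim}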
Since the network decomposes the image into an illumination map and a reflectance map, the two decomposed maps should reproduce the input image. We introduce a reconstruction loss to ensure the quality of the generated image. The formula is as follows:
\begin{equation}
\mathcal{L}_{recon} = \parallel \tilde{R} \circ L - I\parallel_1
\end{equation}
The reflectance map should preserve more texture and color details; in other words, the illumination map should be smooth in textural information while still preserving the structural boundaries. To make the illumination map aware of the image structure boundaries, we modify the illumination smoothness loss proposed in \cite{zhang2021beyond}. Different from the previous loss, our illumination smoothness loss only takes the low-light input image as the reference. This term constrains the relative structure of the illumination map to be consistent with the input image, which reduces the risk of over-smoothing the structure boundaries. The illumination smoothness loss is formulated as:
\begin{equation}
\mathcal{L}_{is} = \parallel \frac{\nabla L}{max(\mid \nabla I \mid, \epsilon)}\parallel_1
\end{equation}
\noindent where $\mid\!\cdot\!\mid$ means the absolute value operator, $\epsilon$ is a small positive constant for avoiding zero denominators, $\nabla$ denotes the gradient including $\nabla h$ (horizontal) and $\nabla v$ (vertical).
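A simple PyTorch realization of this term, using forward differences and the maximum RGB channel of the input as the reference luminance (an assumption of this sketch), could read:
\begin{verbatim}
import torch

def gradients(x):
    # horizontal and vertical forward differences of a (N, C, H, W) tensor
    dh = x[:, :, :, 1:] - x[:, :, :, :-1]
    dv = x[:, :, 1:, :] - x[:, :, :-1, :]
    return dh, dv

def illumination_smoothness_loss(L, I, eps=0.01):
    # L: 1-channel illumination map, I: low-light input image
    I_ref = I.max(dim=1, keepdim=True)[0]
    dLh, dLv = gradients(L)
    dIh, dIv = gradients(I_ref)
    loss_h = (dLh.abs() / torch.clamp(dIh.abs(), min=eps)).mean()
    loss_v = (dLv.abs() / torch.clamp(dIv.abs(), min=eps)).mean()
    return loss_h + loss_v
\end{verbatim}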
As a result, the loss function of the LUM is as follows:
\begin{equation}
\mathcal{L} = \mathcal{L}_{recon} + \lambda_{hep}\mathcal{L}_{hep} + \lambda_{is}\mathcal{L}_{is}
\end{equation}
In our experiments, these parameters are set to $\lambda_{hep}=\lambda_{is}= 0.1$ and $\epsilon=0.01$. With these carefully designed loss terms, the light-up module performs sufficiently well. Still, the light-up image is constrained by histogram equalization, which often introduces noise and blur. Although the images generated by the network are brightened, their noise level is still unsatisfactory compared with normal-light images. Therefore, they need to be further denoised.
\subsection{Noise Disentanglement}
\label{noise}
Although the content information of the low-light image becomes visible after it is decomposed into a reflectance map, the noise contained in it seriously degrades the clarity of the image. Therefore, we adopt a domain transfer method to eliminate the noise while retaining the content information, as shown in Fig.\ref{fig arc}(b). The noise disentanglement module consists of six parts: 1) content encoders $E_X^C$ and $E_Y^C$ (due to parameter sharing, we regard the content encoders of the two domains as the same); 2) a noise encoder $E^N$; 3) a noise-domain image generator $G_X$; 4) a clean-domain image generator $G_Y$; 5) a noise-domain image discriminator $D_X$; 6) a clean-domain image discriminator $D_Y$. Given a training sample $I_n$ from the noise domain and a training sample $I_c$ from the clean domain, the content encoders $E_X^C$ and $E_Y^C$ extract the content features from the corresponding samples, and the noise encoder $E^N$ extracts the noise feature from the noise-domain sample. $G_X$ takes the content feature of the clean domain and the noise feature of the noise domain to generate a noise image $I_{gn}$, while $G_Y$ takes the content feature of the noise domain to generate a clean image $I_{gc}$. The discriminators $D_X$ and $D_Y$ distinguish between real and generated examples.
Due to the unpaired setting, it is not trivial to disentangle the content information from a noise image. To restrict the noise encoder to encoding only noise information, we impose a KL-divergence constraint on the distribution of the noise features extracted by the noise encoder, forcing it to be close to the standard normal distribution. The KL divergence is defined as:
\begin{equation}
KL(q(z_n)\parallel p(z))=\int q(z_n)\log \frac{q(z_n)}{p(z)}dz
\end{equation}
\noindent where $q(z_n)$ stands for the distribution of the noise features $z_n$, and $p(z)$ is the standard normal distribution $N(0,1)$.
As proved in \cite{kingma2013auto}, the KL divergence loss suppresses the content information contained in the noise feature $z_n$, and minimizing the KL divergence is equivalent to minimizing the following loss function, as shown in \cite{bao2018towards}:
\begin{equation}
\mathcal{L}_{KL}=\frac{1}{2}\sum_{i=1}^d(-\log(\sigma_i^2)+\mu_i^2+\sigma_i^2-1)
\end{equation}
\noindent where $d$ is the dimension of the noise feature, and $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th component of the noise feature.
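Assuming the noise encoder outputs the mean and the log-variance of a diagonal Gaussian, this loss can be written as:
\begin{verbatim}
import torch

def kl_loss(mu, logvar):
    # 0.5 * sum(-log(sigma^2) + mu^2 + sigma^2 - 1), with sigma^2 = exp(logvar)
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1, dim=1).mean()
\end{verbatim}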
In order to make the enhanced images look like realistic normal-light images, we adopt an adversarial loss to minimize the distance between the real image distribution and the output distribution. We slightly modify the discriminator and replace its loss function with the least-squares GAN (LSGAN) loss. The adversarial loss function is as follows:
\vspace{-1mm}
\begin{equation}
\label{adv_loss_x}
\mathcal{L}_{adv}=\frac{1}{2}\mathbb{E}_{x\sim p_r}[(D(x)-b)^2]+\frac{1}{2}\mathbb{E}_{z\sim p_z}[(D(G(z))-a)^2]
\end{equation}
\noindent where $a$ is the label for the generated samples, $b$ is the label for the real samples, and $z$ is the latent vector.
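A compact sketch of this LSGAN objective, split into its discriminator and generator parts, is shown below; the function names are ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # least-squares discriminator loss with fake label a and real label b
    return 0.5 * F.mse_loss(d_real, torch.full_like(d_real, b)) + \
           0.5 * F.mse_loss(d_fake, torch.full_like(d_fake, a))

def lsgan_g_loss(d_fake, c=1.0):
    # the generator pushes the discriminator output towards the real label
    return 0.5 * F.mse_loss(d_fake, torch.full_like(d_fake, c))
\end{verbatim}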
Without pairwise supervision, the denoised image may lose some content information. Similar to~\cite{zhu2017unpaired}, we introduce a cycle-consistency loss to guarantee that the generated corrupted image $I_{gn}$ can be translated back to the original clean domain, and that the denoised image $I_{gc}$ can be reconstructed back to the original corrupted sample. We define the cycle-consistency loss on both domains as:
\begin{equation}
\mathcal{L}_{cc}=\parallel I - \tilde{I}\parallel_1
\end{equation}
\noindent where $I$ denotes the input samples and $\tilde{I}$ is the backward translation of the input samples.
In addition to the cycle-consistency loss, we introduce a self-reconstruction loss to improve the perceived quality of the generated image. The formula of the loss function is as follows:
\begin{equation}
\mathcal{L}_{rec} = \parallel I_{rec} - I\parallel_1
\end{equation}
Following the observation from \cite{taigman2016unsupervised} that features extracted from the deep layers of a pre-trained model contain rich semantic information, we add a perceptual loss between the denoised images and the original corrupted images to recover finer image texture details. It can be formulated as:
\begin{equation}
\mathcal{L}_{per}=\parallel \phi_l(I_g) - \phi_l(I)\parallel_2^2
\end{equation}
\noindent where $\phi_l(\cdot)$ represents the feature extracted from the $l$-th layer of the pre-trained VGG network, and $I_g$ is the generated sample. In our experiments, we use the $conv3\_{2}$ layer of the VGG-19 network pre-trained on ImageNet.
To eliminate potential color deviations in the denoised image, we adopt the color constancy loss proposed in \cite{guo2020zero}, which follows the Gray-World color constancy hypothesis that the color in each sensor channel averages to gray over the entire image. The loss function can be expressed as:
\begin{equation}
\mathcal{L}_{col}\!=\!\sum\nolimits_{\forall(p,q)\in\epsilon}\! (\mathcal{J}^{p}-\mathcal{J}^{q})^2,\!\epsilon=\{(R,G),(R,B),(G,B)\}
\end{equation}
\noindent where $\mathcal{J}^{p}$ represents the average intensity value of channel $p$ in the denoised image, and $(p,q)$ represents a pair of channels.
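A direct translation of this Gray-World penalty into PyTorch might look like:
\begin{verbatim}
import torch

def color_constancy_loss(img):
    # penalize differences between the per-channel mean intensities
    mean_rgb = img.mean(dim=(2, 3))                 # (N, 3)
    r, g, b = mean_rgb[:, 0], mean_rgb[:, 1], mean_rgb[:, 2]
    return ((r - g) ** 2 + (r - b) ** 2 + (g - b) ** 2).mean()
\end{verbatim}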
From our preliminary experiments, we find that the generated denoised samples are often over-smoothed in the background. We therefore adopt the background consistency loss proposed by \cite{du2020learning}, which uses a multi-scale Gaussian-blur operator to obtain features at multiple scales. The loss function is formulated as:
\begin{equation}
\mathcal{L}_{bc}=\sum_{\sigma=i,j,k} \lambda_\sigma \parallel B_\sigma(I) - B_\sigma(I_g)\parallel_1
\end{equation}
\noindent where $\lambda_\sigma$ is the hyper-parameter balancing the errors at different Gaussian-blur levels, and $B_\sigma(\cdot)$ represents the Gaussian-blur operator with blur kernel $\sigma$. In our experiments, we set $\lambda_\sigma=\{0.25, 0.5, 1.0\}$ for $\sigma=\{5,9,15\}$, respectively.
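A sketch of this multi-scale term using torchvision's Gaussian blur is given below; interpreting the three blur levels as kernel sizes is an assumption of this sketch.
\begin{verbatim}
import torch
from torchvision.transforms.functional import gaussian_blur

def background_consistency_loss(reference, generated,
                                kernels=(5, 9, 15),
                                weights=(0.25, 0.5, 1.0)):
    # multi-scale Gaussian-blur consistency between the input and output
    loss = 0.0
    for k, w in zip(kernels, weights):
        loss = loss + w * torch.abs(gaussian_blur(reference, k) -
                                    gaussian_blur(generated, k)).mean()
    return loss
\end{verbatim}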
The entire loss function for the NDM is summarized as follows:
\begin{equation}
\begin{split}
\mathcal{L}=& \quad\mathcal{L}_{adv}+\lambda_{KL}\mathcal{L}_{KL}+\lambda_{cc}\mathcal{L}_{cc}+\lambda_{col}\mathcal{L}_{col}\\
& \quad+\lambda_{per}\mathcal{L}_{per}+\lambda_{bc}\mathcal{L}_{bc}+\lambda_{rec}\mathcal{L}_{rec}
\end{split}
\end{equation}
We empirically set these parameters to $\lambda_{KL}=0.01$, $\lambda_{per}=0.1$, $\lambda_{col}=0.5$, $\lambda_{bc}=5$, and $\lambda_{cc}=\lambda_{rec}=10$. At test time, given a corrupted sample, $E^N$ and $E_X^C$ extract the noise and content feature maps, respectively. Then $G_Y$ takes the content feature and generates the denoised image as the output.
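The overall two-stage inference can then be sketched as follows; the module interfaces are assumptions that mirror Fig.~\ref{fig arc} rather than a released implementation.
\begin{verbatim}
import torch

@torch.no_grad()
def enhance(low_light, decom_net, content_encoder, generator_clean):
    R, L = decom_net(low_light)          # stage 1: light up (LUM)
    content = content_encoder(R)         # E_X^C on the noisy reflectance
    return generator_clean(content)      # G_Y outputs the denoised image
\end{verbatim}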
\begin{figure*}
\begin{center}
\hspace*{-4mm}
\subfigure[Input]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Input_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Input_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Input_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Input_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[HE~\cite{pizer1990contrast}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_HE_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_HE_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_HE_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_HE_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[LIME~\cite{guo2016lime}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_LIME_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_LIME_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_LIME_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_LIME_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[Retinex-Net~\cite{Chen2018Retinex}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_RetinexNet_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_RetinexNet_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_RetinexNet_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_RetinexNet_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[KinD++~\cite{zhang2021beyond}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_KinD++_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_KinD++_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_KinD++_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_KinD++_2_magnifier_0.png}
\end{tabular}
}
\hspace*{-4mm}
\subfigure[Zero-DCE~\cite{guo2020zero}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Zero-DCE_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Zero-DCE_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Zero-DCE_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Zero-DCE_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[EnlightenGAN~\cite{jiang2021enlightengan}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_EnlightenGAN_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_EnlightenGAN_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_EnlightenGAN_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_EnlightenGAN_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[Self-Supervised~\cite{zhang2020self}]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Self-Supervised_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Self-Supervised_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Self-Supervised_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Self-Supervised_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[Ours]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Ours_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Ours_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Ours_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Ours_2_magnifier_0.png}
\end{tabular}
}\hspace*{-5mm}
\subfigure[Ground-Truth]{
\begin{tabular}[]{c}
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_GT_1.jpg}\\
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_GT_2.jpg}\\
\includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_GT_1_magnifier_0.png}
\includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_GT_2_magnifier_0.png}
\end{tabular}
}
\caption{Visual comparison with other state-of-the-art methods on LOL dataset~\cite{Chen2018Retinex}. Best viewed in color and by zooming in.}
\label{fig:LOL}
\end{center}
\end{figure*}
\section{Experimental Validation}
\label{experiment}
In this section, we first introduce the implementation details of the proposed method for low-light image enhancement. Then we qualitatively and quantitatively compare the proposed method with state-of-the-art methods (including supervised and unsupervised methods), using traditional metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM)~\cite{wang2004image}, and the Natural Image Quality Evaluator (NIQE)~\cite{mittal2012making}. Furthermore, we test the proposed method on several real-world datasets and compare it with the state-of-the-art methods in terms of visual performance and NIQE. Finally, we conduct ablation studies to demonstrate the effectiveness of each component and loss in the proposed method.
\subsection{Implementation Details}
Since the proposed method is a two-stage model, we train the two stages separately. In the first stage, our training data are selected from the low-light part of the LOL dataset~\cite{Chen2018Retinex}, which includes 500 low/normal-light image pairs. During training, we use the Adam~\cite{kingma2014adam} optimizer with a weight decay of 0.0001. The initial learning rate is set to $10^{-4}$, which decreases to $10^{-5}$ after 20 epochs and then to $10^{-6}$ after 40 epochs. The batch size is set to 16 and the patch size to 48$\times$48. In the second stage, we assemble a mixture of 481 low-light images from the LOL dataset and 481 normal-light images from the EnlightenGAN dataset~\cite{jiang2021enlightengan}. The Adam optimizer is adopted with the momentum term equal to 0.9 and the weight decay equal to 0.0001. The learning rate is initially set to $10^{-4}$ and decays exponentially over 10K iterations. The batch size is set to 16 and the patch size to 64$\times$64. All experiments are conducted using the PyTorch~\cite{paszke2017automatic} framework on an NVIDIA RTX 2080Ti GPU.
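The optimizer and learning-rate schedules described above can be set up roughly as follows; the exponential decay factor for the second stage is an assumed value, since only the total number of iterations is specified.
\begin{verbatim}
import torch

# Stage 1 (LUM): Adam with a step-wise learning-rate schedule (per epoch).
lum = torch.nn.Conv2d(4, 4, 3)   # placeholder for the decomposition network
opt1 = torch.optim.Adam(lum.parameters(), lr=1e-4, weight_decay=1e-4)
sched1 = torch.optim.lr_scheduler.MultiStepLR(opt1, milestones=[20, 40], gamma=0.1)

# Stage 2 (NDM): Adam (beta1 = 0.9) with exponential decay over ~10K iterations.
ndm = torch.nn.Conv2d(3, 3, 3)   # placeholder for the NDM encoders/generators
opt2 = torch.optim.Adam(ndm.parameters(), lr=1e-4, betas=(0.9, 0.999),
                        weight_decay=1e-4)
sched2 = torch.optim.lr_scheduler.ExponentialLR(opt2, gamma=0.9995)  # assumed rate
\end{verbatim}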
\begin{figure*}
\begin{center}
\subfigure[Input]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/Input.jpg}
}\hspace*{-2mm}
\subfigure[HE~\cite{pizer1990contrast}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/HE.jpg}
}\hspace*{-2mm}
\subfigure[LIME~\cite{guo2016lime}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/LIME.jpg}
}
\subfigure[Retinex-Net~\cite{Chen2018Retinex}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/RetinexNet.jpg}
}\hspace*{-2mm}
\subfigure[KinD++~\cite{zhang2021beyond}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/KinD++.jpg}
}\hspace*{-2mm}
\subfigure[Zero-DCE~\cite{guo2020zero}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/Zero-DCE.jpg}
}
\subfigure[EnlightenGAN~\cite{jiang2021enlightengan}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/EnlightenGAN.jpg}
}\hspace*{-2mm}
\subfigure[Self-Supervised~\cite{zhang2020self}]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/Self-Supervised.jpg}
}\hspace*{-2mm}
\subfigure[Ours]{
\includegraphics[width=0.32\linewidth]{Figures/Fig-7/Ours.jpg}
}
\caption{Visual comparison with state-of-the-art methods on the SCIE dataset~\cite{cai2018learning}. Best viewed in color and by zooming in.}
\label{fig:real}
\end{center}
\end{figure*}
\subsection{Qualitative Evaluation}
We first visually evaluate our proposed network on the classical low-light dataset, the LOL dataset, and compare it with other state-of-the-art approaches with publicly available code, including HE~\cite{pizer1990contrast}, LIME~\cite{guo2016lime}, Retinex-Net~\cite{Chen2018Retinex}, KinD++~\cite{zhang2021beyond}, Zero-DCE~\cite{guo2020zero}, EnlightenGAN~\cite{jiang2021enlightengan}, and Self-Supervised~\cite{zhang2020self}. We fine-tuned all models on the LOL train set and then evaluated them on the LOL test set. Fig.\ref{fig:LOL} shows some representative results for visual comparison. The enhanced results show that EnlightenGAN and Zero-DCE fail to recover the images. HE significantly improves the brightness of the low-light image; however, it applies a contrast pull-up to each RGB channel separately, which leads to color distortion (for example, the wall in Fig.\ref{fig:LOL}(b)). LIME enhances the images by directly estimating the illumination map, but this approach enhances both details and noise. Retinex-Net notably improves the visual quality of the low-light images, but it over-smoothes details, amplifies noise, and even produces color bias. The results of Self-Supervised, KinD++, and our method have better visual quality among all the methods. To further investigate the differences between these three methods, we zoom in on the details inside the red and green bounding boxes. We can see from Fig.\ref{fig:LOL}(h) that Self-Supervised produces blurred results for the rotation switch in the red rectangle, while the results of KinD++ and ours show a better reconstruction. For the platform area in the green rectangle, the image estimated by Self-Supervised is corrupted, while KinD++ and our method give clearer results. In summary, the best visual quality is obtained by our proposed method and KinD++. Considering that KinD++ is a supervised method, this shows that our proposed unsupervised method is very effective.
\begin{table}[t]
\centering
\caption{Quantitative comparisons on the LOL test set in terms of PSNR, SSIM, and NIQE. The best results are in red and the second-best results are in blue. T, SL, and UL represent traditional, supervised learning, and unsupervised learning methods, respectively.}
\begin{tabular}{c|c|c|c|c}
\hline
\textbf{Learning} &\textbf{Method} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$\\ \hline
&Input &7.77 &0.191 &6.749\\ \hline
\multirow{2}{*}{T}
&HE~\cite{pizer1990contrast} &14.95 &0.409 &8.427\\
&LIME~\cite{guo2016lime} &17.18 &0.484 &8.221\\ \hline
\multirow{2}{*}{SL}
&Retinex-Net~\cite{Chen2018Retinex} &16.77 &0.425 &8.879\\
&KinD++~\cite{zhang2021beyond} &{\textcolor{red}{21.32}} &{\textcolor{red}{0.829}} &5.120\\ \hline
\multirow{4}{*}{UL}
&Zero-DCE~\cite{guo2020zero} &14.86 &0.562 &7.767\\
&EnlightenGAN~\cite{jiang2021enlightengan} &17.48 &0.652 &{\textcolor{blue}{4.684}}\\
&Self-Supervised~\cite{zhang2020self} &19.13 &0.651 &4.702\\
&Ours &{\textcolor{blue}{20.23}} &{\textcolor{blue}{0.790}} &{\textcolor{red}{3.780}}\\ \hline
\end{tabular}
\label{table:LOL}
\end{table}
\subsection{Quantitative Evaluation}
We also quantitatively compare our method with other state-of-the-art methods. We fine-tuned all models on the LOL train set and then evaluated them on the LOL test set. As shown in Table~\ref{table:LOL}, the proposed method achieves the best performance among unsupervised methods, with an average PSNR of 20.23 dB, SSIM of 0.790, and NIQE of 3.780, exceeding the second-best unsupervised method (Self-Supervised) by 1.1 dB in PSNR, 0.139 in SSIM, and 0.922 in NIQE. This demonstrates that the proposed method possesses the highest capability among all unsupervised methods, and its performance approaches the level of the state-of-the-art supervised methods. NIQE, which evaluates real image restoration without ground truth, has recently been used to assess the image quality of low-light image enhancement; a smaller NIQE score indicates better visual quality. We can see from Table~\ref{table:LOL} that our method obtains the best NIQE score among all unsupervised methods and even surpasses the state-of-the-art supervised method KinD++. This indicates that the low-light images enhanced with our method have the best visual quality.
\begin{table*}[t]
\centering
\caption{NIQE scores on low-light image sets (MEF, LIME, NPE, VV, DICM, SCIE, ExDark, EnlightenGAN, COCO). The best results are in red and the second-best results are in blue. Smaller NIQE scores indicate better perceptual quality.}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
\hline
\textbf{Learning} &\textbf{Method} &MEF &LIME &NPE &VV &DICM &EnlightenGAN &SCIE &ExDark &COCO &Avg\\ \hline
\multirow{2}{*}{T}
&HE~\cite{pizer1990contrast} &3.472 &4.125 &4.289 &3.202 &3.643 &6.993 &3.373 &4.135 &4.206 &4.530\\
&LIME~\cite{guo2016lime} &3.56 &4.138 &4.194 &2.456 &3.818 &6.956 &3.222 &4.759 &4.24 &4.667\\ \hline
\multirow{2}{*}{SL}
&Retinex-Net~\cite{Chen2018Retinex} &4.386 &4.68 &4.567 &2.461 &4.451 &8.063 &3.705 &5.274 &4.89 &5.296\\
&KinD++~\cite{zhang2021beyond} &3.734 &4.81 &4.381 &2.352 &3.787 &{\textcolor{blue}{4.572}} &3.143 &{\textcolor{blue}{4.074}} &{\textcolor{blue}{3.896}} &{\textcolor{blue}{3.926}}\\ \hline
\multirow{4}{*}{UL}
&Zero-DCE~\cite{guo2020zero} &3.283 &3.782 &4.273 &3.217 &3.56 &6.582 &3.284 &4.149 &3.903 &4.386\\
&EnlightenGAN~\cite{jiang2021enlightengan} &{\textcolor{blue}{3.221}} &{\textcolor{blue}{3.678}} &{\textcolor{blue}{4.125}} &{\textcolor{red}{2.251}} &{\textcolor{blue}{3.546}} &4.609 &{\textcolor{blue}{2.939}} &4.357 &3.953 &3.973\\
&Self-Supervised~\cite{zhang2020self} &4.477 &4.966 &4.743 &3.364 &4.588 &4.872 &3.978 &5.176 &4.947 &4.758\\
&Ours &{\textcolor{red}{3.188}} &{\textcolor{red}{3.484}} &{\textcolor{red}{3.504}} &{\textcolor{blue}{2.336}} &{\textcolor{red}{3.425}} &{\textcolor{red}{3.711}} &{\textcolor{red}{2.864}} &{\textcolor{red}{3.422}} &{\textcolor{red}{3.037}} &{\textcolor{red}{3.325}}\\ \hline
\end{tabular}
\label{table:real}
\end{table*}
\subsection{Generalization Ability on Real-World Images}
To further demonstrate the generalization ability of the proposed method, we test it on several real-world low-light image sets, including MEF~\cite{lee2011power} (17 images), LIME~\cite{guo2016lime} (10 images), NPE~\cite{wang2013naturalness} (84 images), VV\footnote{https://sites.google.com/site/vonikakis/datasets} (24 images), DICM~\cite{lee2013contrast} (64 images), EnlightenGAN~\cite{jiang2021enlightengan} (148 images), and SCIE~\cite{cai2018learning} (100 low-light images selected from the dataset). Furthermore, in order to showcase the practical value of our method, we also conduct experiments using low-light images from datasets built for object detection and recognition: 216 low-light images from ExDark~\cite{loh2019getting} and 100 nighttime images from COCO~\cite{lin2014microsoft}. We fine-tuned all models on the EnlightenGAN train set\footnote{Since the EnlightenGAN dataset is unpaired and cannot be used to train supervised methods, we use the LOL dataset as the train set for the supervised methods.} and then evaluated them on all the low-light image sets. As all these datasets are unpaired, we employ the NIQE metric, which evaluates real image restoration without ground truth, to provide quantitative comparisons with the state-of-the-art methods. The NIQE results on the nine publicly available image sets used by previous works are reported in Table~\ref{table:real}. Our method achieves the best performance on eight of these nine datasets and ranks first in the average score. Fig.\ref{fig:real} shows the results on a challenging image from the SCIE dataset. We can observe that our proposed method and KinD++ enhance the dark regions and simultaneously preserve the color of the input image; the results are visually pleasing without obvious noise or color casts. In contrast, HE, LIME, and EnlightenGAN generate visually good results, but they contain some undesired artifacts (e.g., the white wall). Zero-DCE fails to recover the image. Retinex-Net and Self-Supervised over-smooth the details, amplify noise, and even produce color deviation. This demonstrates that our method has a strong generalization ability on real-world images with more naturalistic quality.
\section{Ablation Study}
\label{ablation}
To demonstrate the effectiveness of each component proposed in Section~\ref{method}, we conduct several ablation experiments. We primarily analyze the components in our Light Up Module (LUM), which are the core contribution and play critical roles in this work.
\subsection{Contribution of Light Up}
\subsubsection{Effect of Histogram Equalization Prior}
Since the histogram equalization prior is the main contribution of our work, a comparative assessment of its validity has been carried out. We use the histogram equalization enhanced image as the reference image and evaluate different loss functions defined on it: the L1 loss $\mathcal{L}_{L1}$, the MSE loss $\mathcal{L}_{MSE}$, the SSIM loss $\mathcal{L}_{SSIM}$, and the max information entropy loss $\mathcal{L}_{max}$~\cite{zhang2020self}. The formulas of these losses are as follows:
\begin{equation}
\mathcal{L}_{L1} = \parallel R - H(I)\parallel_1
\end{equation}
\begin{equation}
\mathcal{L}_{MSE} = \parallel R - H(I)\parallel_2^2
\end{equation}
\begin{equation}
\mathcal{L}_{SSIM} = 1-SSIM(R, H(I))
\end{equation}
\begin{equation}
\mathcal{L}_{max} = \parallel \mathop{max}\limits_{c\in\{R,G,B\}}(R^c) - H(\mathop{max}\limits_{c\in\{R,G,B\}} (I^c))\parallel_1
\end{equation}
\noindent where $H(\cdot)$ stands for the histogram equalization operation, $R$ represents the reflectance map, $I$ denotes the input low-light image, and $R^c$ and $I^c$ denote channel $c$ of the reflectance map and of the input low-light image, respectively.
The comparison results are shown in Table \ref{table:prior}. Using $\mathcal{L}_{L1}$ or $\mathcal{L}_{MSE}$ achieves similar SSIM and NIQE scores; nevertheless, in terms of PSNR, $\mathcal{L}_{MSE}$ exceeds $\mathcal{L}_{L1}$ by 0.33 dB. $\mathcal{L}_{SSIM}$ improves the NIQE score by a large margin and surpasses HEP on this metric, but HEP outperforms it by 1.58 dB in PSNR and 0.047 in SSIM. $\mathcal{L}_{max}$ achieves an SSIM score similar to HEP but falls behind in NIQE and PSNR by a large margin. Fig.\ref{Figure prior} shows a visual comparison of these loss functions. $\mathcal{L}_{L1}$ and $\mathcal{L}_{MSE}$ significantly improve the brightness of the low-light images; however, they produce obvious color deviations (e.g., the color of the floor) and undesired artifacts (e.g., the dark region of the wall). $\mathcal{L}_{SSIM}$ reveals the color and texture, but with a blurry mask. $\mathcal{L}_{max}$ suffers from color distortion. Both the quantitative and qualitative results demonstrate the effectiveness of the proposed prior.
\begin{figure*}[t]
\centering
\subfigure[Input]{
\includegraphics[width=0.16\linewidth]{Figures/Fig-8/input.jpg}
}\hspace*{-2mm}
\subfigure[with $\mathcal{L}_{L1}$]{
\includegraphics[width=0.16\linewidth]{Figures/Fig-8/L1.jpg}
}\hspace*{-2mm}
\subfigure[with $\mathcal{L}_{MSE}$]{
\includegraphics[width=0.16\linewidth]{Figures/Fig-8/MSE.jpg}
}\hspace*{-2mm}
\subfigure[with $\mathcal{L}_{SSIM}$]{
\includegraphics[width=0.16\linewidth]{Figures/Fig-8/SSIM.jpg}
}\hspace*{-2mm}
\subfigure[with $\mathcal{L}_{max}$]{
\includegraphics[width=0.16\linewidth]{Figures/Fig-8/Max.jpg}
}\hspace*{-2mm}
\subfigure[with HEP]{
\includegraphics[width=0.16\linewidth]{Figures/Fig-8/Ours.jpg}
}
\caption{Ablation study of the contribution of the histogram equalization prior in LUM (replacing the histogram equalization prior loss $\mathcal{L}_{hep}$ with the L1 loss $\mathcal{L}_{L1}$, the MSE loss $\mathcal{L}_{MSE}$, the SSIM loss $\mathcal{L}_{SSIM}$, and the max information entropy loss $\mathcal{L}_{max}$).}
\label{Figure prior}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfigure[Input]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-9/input.jpg}
}\hspace*{-2mm}
\subfigure[w/o $\mathcal{L}_{recon}$]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-9/without_recon.jpg}
}\hspace*{-2mm}
\subfigure[w/o $\mathcal{L}_{is}$]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-9/withour_IS.jpg}
}\hspace*{-2mm}
\subfigure[w/o $\mathcal{L}_{hep}$]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-9/without_RS.jpg}
}\hspace*{-2mm}
\subfigure[full loss]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-9/LUM.jpg}
}
\caption{Ablation study of the contribution of loss functions in LUM (reconstruction loss $\mathcal{L}_{recon}$, illumination smoothness loss $\mathcal{L}_{is}$, histogram equalization prior loss $\mathcal{L}_{hep}$). Red rectangles indicate the obvious differences and amplified details.}
\label{Figure abs1loss}
\end{figure*}
\begin{table}[t]
\centering
\caption{Ablation study of the contribution of histogram equalization prior in LUM in terms of PSNR, SSIM and NIQE.}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Loss Function} & \textbf{PSNR}$\uparrow$ & \textbf{SSIM}$\uparrow$ & \textbf{NIQE}$\downarrow$ \\ \hline
Input &7.77 &0.191 &6.749\\
with $\mathcal{L}_{L1}$ &17.51 &0.687 &6.343\\
with $\mathcal{L}_{MSE}$ &17.84 &0.698 &6.649\\
with $\mathcal{L}_{SSIM}$ &17.94 &0.654 &4.869\\
with $\mathcal{L}_{max}$ &18.29 &0.690 &7.294\\
with HEP &19.52 &0.701 &5.480\\ \hline
\end{tabular}
\label{table:prior}
\end{table}
\subsubsection{Effect of Loss functions}
We present the results of LUM trained by various combinations of losses in Fig.~\ref{Figure abs1loss}. Removing the reconstruction loss $\mathcal{L}_{recon}$ fails to brighten the image, which shows the importance of the reconstruction loss in enhancing the quality of the generated image. The results without the illumination smoothness loss $\mathcal{L}_{is}$ have relatively lower contrast than the full results, which shows that a smooth illumination map can help brighten the reflectance map. Finally, removing the histogram equalization prior loss $\mathcal{L}_{hep}$ hampers the correlations between neighboring regions, leading to obvious artifacts. To further demonstrate the effectiveness of each loss, we conduct several experiments on the LOL dataset. The evaluation results for each loss are shown in Table~\ref{table:loss1}. Without the histogram equalization prior loss, the PSNR decreases from 19.52 to 9.00 and the SSIM decreases from 0.701 to 0.540, which demonstrates the importance of the histogram equalization prior loss, as already examined in the ablation study on the prior above.
\begin{table}[t]
\centering
\caption{Ablation study of the contribution of loss functions in LUM in terms of PSNR, SSIM and NIQE.}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Loss Function} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$\\ \hline
Input &7.77 &0.191 &6.749\\
w/o $\mathcal{L}_{hep}$ &9.00 &0.540 &4.539\\
w/o $\mathcal{L}_{recon}$ &17.06 &0.675 &6.782\\
w/o $\mathcal{L}_{is}$ &17.93 &0.621 &6.350\\
full loss &19.52 &0.701 &5.480\\ \hline
\end{tabular}
\label{table:loss1}
\end{table}
\subsection{Contribution of Noise Disentanglement}
\subsubsection{Effect of Network Architecture}
In this part, we compare three different denoising approaches: the traditional denoising tool BM3D~\cite{dabov2007image}, a GAN-based denoising method~\cite{du2020learning} with an architecture similar to ours, and our proposed NDM. Fig.\ref{fig noise} shows the comparison results of these three methods. BM3D and the GAN-based method are state-of-the-art denoising methods. However, the results show that BM3D can handle the noise but blurs the image. The GAN-based method is visually similar to our proposed NDM, but its output is overexposed compared to the ground truth. The result of our proposed NDM contains more delicate details and more vivid colors than the other methods. As shown by the quantitative results in Table~\ref{table:arc2}, the NDM improves over the GAN-based denoising method by a large margin in terms of PSNR and outperforms BM3D by about 0.66 dB in PSNR, 0.014 in SSIM, and 2.497 in NIQE. The design of the NDM thus proves its effectiveness with the best results in this comparison.
\begin{figure*}[htbp]
\centering
\subfigure[Input]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-10/Input.jpg}
}\hspace*{-2mm}
\subfigure[LUM + BM3D~\cite{dabov2007image}]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-10/BM3D.jpg}
}\hspace*{-2mm}
\subfigure[LUM + Du~{\emph{et al.}}~\cite{du2020learning}]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-10/LIR.jpg}
}\hspace*{-2mm}
\subfigure[LUM + NDM]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-10/Ours.jpg}
}\hspace*{-2mm}
\subfigure[Ground-Truth]{
\includegraphics[width=0.19\linewidth]{Figures/Fig-10/GT.jpg}
}
\caption{Ablation study of the contribution of the noise encoder in NDM (compared with BM3D and a GAN-based denoising model). Red rectangles indicate the obvious differences and amplified details.}
\label{fig noise}
\end{figure*}
\begin{table}[t]
\centering
\caption{Ablation study of the contribution of noise encoder in NDM in terms of PSNR, SSIM, and NIQE.}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Denoise Model} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$ \\ \hline
LUM &19.52 &0.701 &5.480\\
LUM + BM3D~\cite{dabov2007image} &19.57 &0.776 &6.277\\
LUM + Du~{\emph{et al.}}~\cite{du2020learning} &18.74 &0.791 &4.539\\
LUM + NDM &20.23 &0.790 &3.780\\ \hline
\end{tabular}
\label{table:arc2}
\end{table}
\subsubsection{Effect of Loss functions}
We evaluate the loss functions of the NDM, and the results are shown in Table~\ref{table:loss2}. From the results, we can conclude that removing the self-reconstruction loss $\mathcal{L}_{recon}$ noticeably degrades the PSNR and NIQE scores. Without the KL divergence loss $\mathcal{L}_{KL}$, the background consistency loss $\mathcal{L}_{bc}$, or the perceptual loss $\mathcal{L}_{per}$, all metrics degrade considerably. Removing the adversarial loss $\mathcal{L}_{adv}$ causes the SSIM and NIQE scores to degrade the most. Finally, when removing the cycle-consistency loss $\mathcal{L}_{cc}$, the NIQE score improves slightly by 0.028, but at the same time the PSNR and SSIM drop by 0.32 dB and 0.01. The entire loss function of the NDM is designed to transfer noise images to clean images, and it performs stronger noise suppression on regions whose brightness is significantly promoted after the brightness enhancement guided by the histogram equalization prior.
\begin{table}[t]
\centering
\caption{Ablation study of the contribution of loss functions in NDM in terms of PSNR, SSIM, and NIQE.}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Loss Function} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$ \\ \hline
w/o $\mathcal{L}_{adv}$ &19.66 &0.705 &5.299\\
w/o $\mathcal{L}_{KL}$ &19.68 &0.778 &4.394\\
w/o $\mathcal{L}_{per}$ &19.83 &0.781 &4.389\\
w/o $\mathcal{L}_{cc}$ &19.91 &0.780 &3.752\\
w/o $\mathcal{L}_{bc}$ &19.92 &0.785 &4.143\\
w/o $\mathcal{L}_{recon}$ &19.96 &0.783 &4.234\\
full loss &20.23 &0.790 &3.780\\ \hline
\end{tabular}
\label{table:loss2}
\end{table}
\section{Conclusion}
\label{conclusion}
In this work, we propose an unsupervised network for low-light image enhancement. Inspired by Retinex theory, we design a two-stage network to enhance the low-light image. The first stage is an image decomposition network termed the light-up module (LUM), and the second stage is an image denoising network termed the noise disentanglement module (NDM). The LUM brightens the image by decomposing it into reflectance and illumination maps. In the absence of ground truth, we introduce an effective prior, termed the histogram equalization prior, to guide the training process; it is an extension of histogram equalization that investigates the spatial correlation between feature maps. Benefiting from the abundant information of the histogram equalization prior, the reflectance maps generated by the LUM improve brightness while preserving texture and color information. The NDM further denoises the reflectance maps to obtain the final images while preserving natural color and texture details. Both qualitative and quantitative experiments demonstrate the advantages of our model over state-of-the-art algorithms.
In future work, we intend to explore more effective priors for low-light image enhancement and investigate GAN-based methods for low-light to normal-light image transfer. Since low-light enhancement alone has limited application value, we also expect to integrate enhancement algorithms with high-level tasks, such as object detection and semantic segmentation, which can be used in autonomous driving to provide reliable visual aids in dark and challenging environments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sect1}
In the past decades, magnetic fields in galaxy clusters have been
observed and studied \citep[see review
of][]{han17,ct02,gf04,fgs+08,fgg+12}. The magnetic fields are
crucial for a comprehensive understanding of radio emission from the
diffuse intracluster medium (ICM). The presence of diffuse radio halos
and radio relics in galaxy clusters is the direct evidence for
magnetic fields in the ICM \citep[e.g.][]{gbf+09,vrbh10}. Under the
minimum energy hypothesis or equipartition approach, magnetic fields
permeating the ICM are roughly estimated from the radio emission
intensity maps with a strength of a few micro-Gauss
\citep[e.g.][]{gf04}.
Statistical study of Faraday rotation measures (RMs) of radio sources
within or behind galaxy clusters is an alternative way to investigate
magnetic fields in galaxy clusters
\citep[e.g.][]{ktk91,ckb01,gdm+10,bfm+10,bvb+13,pjds13,bck16}. When a
linearly polarized electromagnetic wave signal travels through a
magnetized plasma, the plane of polarization is rotated by an angle
$\Delta \psi$ proportional to the wavelength squared $\lambda^2$, i.e.
\begin{equation}
\Delta \psi = \psi-\psi_0= \rm{RM} \cdot \lambda^2,
\end{equation}
where $\psi$ and $\psi_0$ are the measured and intrinsic polarization
angle, and RM is the rotation measure which is an integrated quantity
of the product of the thermal electron density $n_e$ and magnetic
field strength ${ B}$ from the source to us, most effectively
probing the fields along the line of sight. For a polarized radio
source at redshift $z_{\rm s}$, RM is expressed by
\begin{equation}
{\rm RM} = 812\int_{\rm source}^{\rm us} n_e { B} \cdot d{ l}
=812 \int_{\rm z_s}^{\rm us}\frac{n_e(z)B_{||}(z)}{(1+z)^{2}}\frac{dl}{dz} dz.
\label{rmz}
\end{equation}
The electron density $n_e$ is in cm$^{-3}$, the magnetic field is a
vector ${ B}$ (and magnetic field along the line of sight $B_{||}$)
in units of $\mu$G, and $d{ l}$ is the unit vector
along the light path towards us in units of kpc. The comoving path
increment per unit redshift $\frac{dl}{dz}$ is in kpc and $(1+z)^2$
reflects the change of wavelength at redshift $z$ over the path transformed
to the observer's frame.
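For a rough sense of scale, assuming for illustration a uniform line-of-sight field of 1~$\mu$G, an electron density of $10^{-3}$~cm$^{-3}$, and a path length of 500~kpc at low redshift, Eq.~(\ref{rmz}) gives
\[
{\rm RM} \approx 812 \times 10^{-3} \times 1 \times 500 \approx 4\times10^{2}~{\rm rad~m^{-2}};
\]
a turbulent field that reverses sign along the path yields a much smaller net RM than this uniform-field estimate.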
The observed rotation measure ${\rm RM}_{\rm obs}$, is a sum of the foreground
Galactic RM (GRM) from the Milky Way, the rotation measure from
intergalactic medium ${\rm RM}_{\rm IGM}$ and intrinsic to the source
${\rm RM}_{\rm in}$
i.e.
\begin{equation}
{\rm RM}_{\rm obs} = {\rm GRM} + {\rm RM}_{\rm IGM} + {\rm RM}_{\rm in}.
\label{rmobs}
\end{equation}
When studying RMs of sources at a cosmological distance, one has to
account for RM contributions from all kinds of intervening medium
along the line of sight. For most extragalactic radio sources, the
foreground Galactic RM is the dominant contribution. If the foreground
GRM is not assessed properly, it is impossible to get small
extragalactic contributions. There have been many efforts to
investigate the foreground GRM \citep[e.g.][]{hmbb97, ojr+12, xh14,
ojg+15}. RM values intrinsic to a radio source (${\rm RM}_{\rm in}$)
at a redshift of $z_{\rm s}$ are reduced by a factor
$(1+z_{\rm s})^{-2}$ due to change of $\lambda$ when the values are transformed
to the observer's frame. The typical spread of source-intrinsic
RMs of distant quasar-like sources is only several rad~m$^{-2}$
\citep{bsg+14}. The RMs from the intergalactic medium ${\rm RM}_{\rm
IGM}$ may have several contributors, such as rotation measures from
the cosmic webs, intervening galaxy halos and intracluster medium on
the line of sight. The rotation measure from the cosmic webs might be
traced by Ly$\alpha$ forest, and there have been some simulations on
their contribution \citep[e.g.][]{bbo99,ar10,ar11,ptu16}. It is very
small ($\sim$1--2 rad~m$^{-2}$) that it hardly be detected from
present available data \citep{xh14b,omv+19,obv+20}. The excess of
rotation measure from galaxy halos or protogalactic environments has
been studied by intervening absorbers like Mg~II absorption lines
\citep[e.g.][]{bml+08,foc+14,frg+17}. \citet{jc13} and \citet{mcs20}
obtained an increase in the distribution deviation of around 8
rad~m$^{-2}$ for quasars with Mg~II absorption lines. Statistics of
RMs of polarized radio sources located inside or behind galaxy
clusters \citep[e.g.][]{ktk91,ckb01,gdm+10,bfm+10,bvb+13,pjds13,bck16}
show the RM excess for the contributions from the intracluster medium
with an amplitude from a few to a few tens of rad~m$^{-2}$
\citep{xh14b,gdm+10,ckb01}.
It is now well established that the magnetic fields are ubiquitous in
the ICM \citep[e.g.][]{ct02}. The intracluster magnetic fields are
dominated by turbulent fluctuations over a range of scales. The field
strength decreases from the central regions to the outskirts. The
spatial power spectrum is well represented by a Kolmogorov power
spectrum \citep{bfm+10}. Turbulent magnetic fields with a coherence
length of a few kpc are indicated by RM dispersion studies of
polarized radio sources \citep[e.g.][]{ktk91,gdm+10} and found in both
relaxed clusters and merging clusters regardless of dynamic states
\citep{ckb01,bck16,sd19}. Coherent rotation measures of radio relics
reveal large-scale ($>$100 kpc) compressed magnetic fields
\citep{ore+14,kbh+17}. The organized magnetic fields are responsible
for systematic RM gradient over lobes of radio galaxies
\citep[e.g.][]{tp93}. The ordered net magnetic fields can be
considered as the large-scale fluctuations at the outer scale of
turbulent magnetic fields where the energy is injected
\citep{vmg+10}. The magnetic fields close to the center of galaxy
clusters are more disturbed and tangled with a strength of a few
micro-Gauss while those near the outskirts are more representative for
the large-scale fluctuation component with a field strength of an
order of magnitude smaller \citep{rkcd08}.
\begin{figure}
\centering
\includegraphics[angle=0,width=80mm,trim=50 100 0 50,clip]{DeltaRMsche.ps}
\caption{A schematic diagram showing a pair of lobes from a FR~II
radio galaxy with observed RMs on each side (RM1 and RM2,
respectively). The RMs of the pair with such a small angular
separation ($\Delta$r) of an order of arcminutes have almost the
same Galactic contributions (in general coherent from several
degrees at low Galactic latitudes to tens of degrees at high
Galactic latitudes) and the same intergalactic contributions in
front of the lobes. Therefore the RM difference ($\Delta$RM) between
the two lobes is the best probe of the magnetic properties of the
ICM.}
\label{FRIIsch}
\end{figure}
The RM difference of a pair of lobes from an embedded FR~II radio
galaxy \citep{fr74} is the best probe of the magnetic fields in the
ICM and their redshift evolution, because both the foreground Galactic
RM and the RM contributions on the way to the cluster in all
intervening galactic and intergalactic medium can be diminished, as
depicted in Figure~\ref{FRIIsch}. The real physical pair of lobes are
the bulk of radio emission from a galaxy on opposite sides, formed
when central active galactic nuclei produce two opposite collimated
jets that drive relativistic electrons running in magnetic fields into
the lobes to generate synchrotron emission \citep{br74}. The environs
of the host galaxy must be rich in gas. The jets travel through the
interstellar medium of the host galaxy, and stay supersonic out to a
great distance, pushing their way through the external medium where a
shock front is formed, as shown by hot spots. The ends of the jets move
outwards much more slowly than the material flowing along the jets. A
back flow of relativistic plasma deflected at the ends of the jets
forms the lobes. The gaseous environment they inhabit is very important
for providing a working surface at which the jets terminate; therefore,
the ICM provides an ideal environment for producing FR~II radio
sources. The observed radio radiation from FR~II type radio sources is
often highly linearly polarized \citep[e.g.][]{bfm+10}. { The
Laing-Garrington effect strongly suggests the existence of
intracluster magneto-ionic material surrounding the radio sources
causing asymmetry in the polarization properties of double radio
sources with one jet \citep{lai88,glcl88}.} Many double radio
sources have been detected from galaxies at low redshifts ($z<0.3$),
and a large number of sources have been found in dense cluster-like
gaseous environments at higher redshifts
\citep{ymp89,hl91,wd96,pvc+00,md08}.
It is not known if there is any evolution of intracluster magnetic
fields at different cosmological epochs. Statistical studies of the
redshift evolution of {\it net} rotation measures contributed by the
ICM are the key to this puzzle. { Cosmological simulations by
\citet{ar11} predicted the redshift dependence of extragalactic
rotation measures caused by the intergalactic medium. Contributions
by galaxy clusters, however, could not be properly modeled given the
cell size in their simulations.} Previously, there have been a
number of works to investigate the redshift evolution of extragalactic
rotation measures \citep{hrg12,nsb13,xh14b,ptu15,lrf+16,opa+17}, which
were generally made for the whole contribution along the path from the
observer to the sources. A marginal dependence on redshift was
found. In the early days, the RM differences were also studied for a
small number of double radio galaxies at low Galactic latitudes to
investigate the enhanced turbulence in the interstellar medium
\citep{sc86,prm+89,lsc90,ccsk92,ms96}. \citet{akm+98} studied 15
radio galaxies at high redshift $z>2$ with large rotation measures,
and claimed that their RM contributions likely originate in the
vicinity of the radio sources themselves. \citet{gkb+04} and \citet{opa+17}
concluded that no statistically significant trend was found for the RM
difference of two lobes against redshift. \citet{vgra19} classified a
large sample of close pairs and found a significant difference of
$\sim$5--10 rad~m$^{-2}$ between physical pairs (separate components
of a multi-component radio galaxy or multiple RMs within one of the
components) and random pairs, though the redshift dependence of the
physical pairs is not evident. \citet{obv+20} used a similar method
but with high-precision RM data from the LOFAR Two-Metre Sky Survey, and
they found no significant difference between the $\Delta$RM
distributions of the physical and non-physical pairs. In fact, the
uncertainty of RM measurement is a very important factor for the
evolution investigation. For example, very small RM differences
(1$\sim$2 rad~m$^{-2}$) between the lobes of large radio galaxies at
low redshifts can be ascertained with high precision observations
\citep{omv+19,bowe19,sob+20}. RM differences for a larger sample of
pure double radio sources are necessary to further investigate their
correlation with redshift.
A real pair of two physically associated lobes, seen as double radio
sources, has a small separation and almost the same flux density, and
such pairs can be found in the NRAO VLA Sky Survey
\citep[NVSS;][]{ccg+98}. \citet{tss09} have reprocessed the 2-band
polarization data of the NVSS, and obtained the two-band RMs for
37,543 sources. \citet{xh14} compiled a catalog of reliable RMs for
4553 extragalactic point radio sources. In addition to the previously
cataloged RMs, many new RM data have been published in the literature. In
this paper, we have classified RM pairs in the NVSS RM data and in the
compiled catalog together with literature published since 2014, and cross-identified
available galaxy redshift data to obtain RMs and redshifts for 627
pairs. We use these data to study the redshift evolution of RM
differences. We introduce the rotation measure data in
Section~\ref{sect2} and study the distributions of RM differences of
pairs in Section~\ref{sect3}. { Finally, we discuss our results and
present conclusions in Section~\ref{sect4} and Section~\ref{sect5},
respectively.}
Throughout this paper, a standard $\Lambda$CDM cosmology is used, taking
$H_0=100h$~km~s$^{-1}$Mpc$^{-1}$, where $h=0.7$, $\Omega_m=0.3$ and
$\Omega_{\Lambda}=0.7$.
\section{Rotation measure data of pairs}
\label{sect2}
We obtain the RM data for a sample of pairs from the NVSS RM catalog
\citep{tss09} and literature \citep[][and afterwards]{xh14}. We search
for real pairs for the two RM datasets separately, since observation
frequencies and resolutions for RM measurements are very
different. The NVSS radio images are visually inspected to confirm
physical pairs.
\subsection{The NVSS RM pairs}
In the NVSS RM catalog, RM data and flux density measurements are
available for 37,543 ``sources''. Here a ``source'' is an independent
radio emission component, while a galaxy can produce a few radio
components, e.g. two unresolved lobes in addition to a compact core of
a radio galaxy. We cross-matched the catalog against itself, and found
1513 source pairs with a flux density ratio $S_{\rm large}/S_{\rm small}$
less than 1.5 and an angular separation between $45''$
(i.e. the angular resolution of the NVSS survey) and $10'$. Flux
densities of real pairs from two lobes of radio galaxies are most
likely to be consistent with each other because of a similar radio
power ejected from the same central black hole. The ratio limit is
therefore used to largely exclude false pairs of two physically
unrelated sources. The maximum separation of $10'$ is set for two
reasons. The first is that it would be difficult to identify
physically related double sources at a larger separation without a
clear connection such as diffuse emission between two sources.
Second, the number of physical pairs at larger separations is small.
In the sample of \citet{vgra19}, only a few pairs have angular sizes
greater than $10'$. The minimum separation was set to the beam
size of $45''$ of the NVSS survey, so that two very close sources can be
just resolved. \citet{vgra19} adopted two times the beam size,
i.e. $1'.5$, while we found the number of physical pairs with
separation $\Delta r < 1'.5$ is more than twice that for pairs with
$\Delta r > 1'.5$, which is important for obtaining pairs for
high-redshift galaxies.
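As an illustration, the candidate selection described above can be
sketched in a few lines (a minimal example assuming the NVSS RM catalog
is available as a table with columns \texttt{ra}, \texttt{dec} and
\texttt{flux}; the file and column names are hypothetical and not part
of the published pipeline):
\begin{verbatim}
# Minimal sketch of the pair-candidate selection (assumed column names).
import numpy as np
from astropy.table import Table
from astropy.coordinates import SkyCoord
import astropy.units as u

cat = Table.read("nvss_rm_catalog.fits")        # hypothetical file name
coords = SkyCoord(cat["ra"], cat["dec"], unit="deg")

# Self cross-match within the maximum separation of 10 arcmin.
idx1, idx2, sep2d, _ = coords.search_around_sky(coords, 10 * u.arcmin)

# Keep each pair once, drop self matches and pairs closer than the
# 45 arcsec beam, and require a flux-density ratio below 1.5.
keep = (idx1 < idx2) & (sep2d > 45 * u.arcsec)
ratio = np.maximum(cat["flux"][idx1], cat["flux"][idx2]) / \
        np.minimum(cat["flux"][idx1], cat["flux"][idx2])
keep &= np.asarray(ratio) < 1.5

print(keep.sum(), "pair candidates for visual inspection")
\end{verbatim}
Only the candidates passing these cuts would then be inspected visually
on the NVSS contour maps.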
\begin{figure}
\centering
\includegraphics[angle=-90,width=40mm]{005226+121929.ps}
\includegraphics[angle=-90,width=40mm]{025209+025422.ps}\\%3.05'
\includegraphics[angle=-90,width=40mm]{000042-342401.ps}
\includegraphics[angle=-90,width=40mm]{045434-162638.ps}
\caption{Example images of paired sources from radio galaxies with
available RMs in the NVSS RM catalog. The top left is the pair of
J005226+121929 ($\Delta r \simeq 0'.89$); the top right is the pair
of J025209+025422 ($\Delta r \simeq 3'.05$); the bottom left is the
pair of J000042-342401 ($\Delta r \simeq 0'.88$); and the bottom
right is the pair of J045434-162638 ($\Delta r \simeq 3'.05$). The
pair names here correspond to the mean RA and Dec of each
pair. The top two pairs are located in the FIRST survey area and
therefore the FIRST contours are shown in red. All contours are
plotted at levels of $\pm$1, 2, 4, ... mJy beam$^{-1}$, with the
plus ``+'' indicating the central coordinate of double radio
sources.}
\label{dbsch}
\end{figure}
Visual inspection was carried out to identify real physical pairs. We
obtain the NVSS image centered on the mean RA and Dec of each pair,
and make a contour map, as shown in Figure~\ref{dbsch}. For candidates
with angular separations $\Delta r >3'$, the clear presence of fainter
emission connecting the two ``sources'' is the signature for a real
pair, so we get 34 real pairs with $\Delta r >3'$. For pairs with a
smaller angular separation, we check candidates in the survey coverage
area of the VLA Faint Images of the Radio Sky at Twenty centimeters
\citep[FIRST;][]{bwh95} survey to verify true pairs. With the experience
of classifying real pairs from the NVSS contour maps in the
FIRST area, we extrapolate the method to the sources outside the
survey area of FIRST. We noticed that physically unrelated pairs
are very scarce at much smaller angular separations
\citep{vgra19,obv+20}. We get 1007 real pairs from the NVSS sources in
total. Four examples of identified real pairs are shown in
Figure~\ref{dbsch}.
For these 1007 pairs, we search for the redshifts of the host
galaxies in several large optical redshift surveys and online
databases. First, we cross-match the mean coordinates of RM pairs with
the released spectroscopic redshift of 2.8 million galaxies from Data
Release 16 of the Sloan Digital Sky Survey \citep[SDSS
DR16,][]{aaa+20}, and we obtain spectroscopic redshift data for
galaxies within 10 arcsec of the given position for 100 pairs. Second,
we get additional spectroscopic redshifts from the cross-identification
of galaxies in the 6dF Galaxy Survey Redshift Catalogue Data Release 3
\citep{jrs+09} for 10 pairs. We get photometric redshifts for 227
pairs from the cross-match with the SDSS DR8. For the remaining sources, we
cross-identified with the NASA/IPAC Extragalactic Database (NED), and
we get redshifts for 64 pairs. In total, we get redshifts and RMs for
401 pairs, as listed in Table~\ref{samplenvss}. The reliability of
such a cross-match is about 80\%, as discussed in
Appendix~\ref{appen}. This is the largest sample of RMs for pairs with
redshifts currently available for the NVSS RM data.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=75mm]{dRMz.ps}
\includegraphics[angle=-90,width=75mm]{dRMz10.ps}
\caption{In the left panel, the RM differences $\Delta$RM for 401
pairs from the NVSS data ({\it top-subpanel}) and for 226 pairs from
the compiled data ({\it middle-subpanel}) and their histograms ({\it
bottom subpanel}) against redshift are shown together with the
histograms for uncertainties $\sigma_{\Delta \rm RM}$. There are
2 and 9 pairs with the $\Delta$RM values outside the value range of
the subpanels for the NVSS and compiled data, respectively.
%
The distributions for the same data but with $\sigma_{\Delta \rm RM}
\leqslant$ 10 rad~m$^{-2}$ are shown in the right panel.}
\label{dRMz}
\end{figure*}
\subsection{The compiled RM pairs}
In the compiled RM catalog \citep{xh14} and literature published more
recently, RMs are available for many pairs, as listed or
presented with radio images in the original references. We inspected
all the literature and found 444 double sources that are real physical
pairs. Among them 95 pairs have redshifts already listed in the
references or from the NED. For the remaining 349 double sources
without redshifts and known host galaxies, we adopted the same
procedure for redshift search as for the NVSS RM pairs. The central
coordinates of each pair are cross-matched with the SDSS DR16, and we
find spectroscopic redshifts for 40 pairs within 10 arcsec. No
objects can get the spectroscopic redshift from the 6dF Galaxy Survey
data. From the catalog of SDSS DR8, we obtain photometric redshifts
for 83 pairs. For the remaining sources, we found 8 redshifts from the NED. In
total, we have 226 physical pairs with both RMs and redshifts, as
listed in Table~\ref{samplecomp}. The redshifts for 95 pairs are very
reliable, as marked with '*' in the 10th column, but for the remaining 131
pairs, the redshift reliability is about 80\%. Notice that the redshifts
of pairs of $z>0.9$ are very reliable, because 34 of the 37 pairs have
redshifts well measured.
\subsection{The RM differences of pairs}
For a physical pair, i.e. the two lobes of a radio galaxy seen as
double radio sources, the radio waves experience almost the same
integration path for the Faraday rotation from their local
environment in front of the radio galaxy to us, as shown in
Figure~\ref{FRIIsch}. The RM difference of a pair indicates mostly the
immediate difference of the magnetoionic medium in their local
environment on a scale comparable to the projected source separation
on the sky plane, i.e. a scale from tens of kpc to a few Mpc, though
we do not know the angle between the line of sight and the pair
connection in 3D. All pairs of sources collected in this work are
unresolved point sources, so that their RMs are produced by almost the
same intervening medium between the source and the observer. The RM
difference $\Delta$RM$=$ RM1 -- RM2 with an uncertainty of
$\sigma_{\Delta \rm RM} = \sqrt{\sigma_{\rm RM1}^2+\sigma_{\rm
RM2}^2}$ therefore is the cleanest measurement of Faraday
rotation in the ICM, avoiding any additional uncertainties caused by
the not-well-measured foreground GRM and by the unknown intergalactic
contributions such as those from cosmic webs and galaxy halos. These
unknown uncertainties caused by the foreground of sources are
inherent in all traditional statistics of extragalactic RMs.
The RM difference can be negative or positive as we randomly take one
to subtract the other, so that statistically a zero mean is expected
for a large sample. For our sample the mean of the RM differences is
--0.21 and --0.11 rad~m$^{-2}$ for the NVSS and compiled RM pairs,
respectively, which are close to zero as expected. The distributions
of $\Delta$RM for the two samples of pairs are shown in
Figure~\ref{dRMz}. The RMs, their differences and redshifts of all
these 401 and 226 pairs from the NVSS data and the compiled data are
listed in Table~\ref{samplenvss} and \ref{samplecomp}, respectively,
together with the angular separation $\Delta$r and projected linear separation
LS. Only 12 of the 401 pairs (3\%) of the NVSS RM sources have redshifts
larger than 0.9, compared with 37 of 226 pairs (16\%) in the compiled
sources. { In the compiled RM data, 34 double sources marked with '--'
in columns 11 and 12 have coordinates for the host galaxies but no
coordinates for the two radio lobes, and thus the angular and linear
separations are not available.}
Because the RM uncertainty is a very important factor for the study
of the small RM differences of pairs, and because the formal uncertainties
of the NVSS RM measurements are much larger than those of the compiled
data, the two samples should be analyzed separately. The RM data with
small uncertainties are more valuable for revealing the possible evolution
with redshift; therefore, the subsamples with $\sigma_{\Delta \rm RM} \leqslant$
10 rad~m$^{-2}$ are given particular attention here and their distribution is
shown in the right panel of Figure~\ref{dRMz}.
\begin{figure}
\centering
\includegraphics[angle=-90,width=0.8\columnwidth]{dRMGB.ps}
\caption{The absolute values of RM difference $|\Delta$RM$|$ of pairs
from the NVSS data ({\it top panel}) and the compiled data ({\it
lower panel}) against the Galactic latitudes $|b|$. No apparent
dependence implies no significant contribution from the ISM. The
uncertainties of the NVSS RM data are not shown for clarity. }
\label{GB}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=\columnwidth]{dRMalsep.ps}
\caption{The absolute values of RM difference $|\Delta$RM$|$ of pairs
for the NVSS data ({\it top panels}) and the compiled data ({\it
lower panels}) against the angular separation ($\Delta$r) and the
projected linear separation (LS). The uncertainties of the NVSS data
are not shown for clarity. A few pairs without the separation values
or having a RM difference out of the plotted ranges are not shown.}
\label{dRMalsep}
\end{figure}
\section{Large RM difference at high redshifts}
\label{sect3}
Based on the largest samples of pairs with both RMs and redshift data
available so far, we study the evolution of RM differences with redshift,
and check if the RM difference is related to the separation of the two sources.
Figure~\ref{GB} shows the distribution of the absolute RM differences
$|\Delta$RM$|$ of pairs against Galactic latitude. Because the RM
differences of double sources at low Galactic latitudes may be
contaminated by enhanced turbulence in the interstellar medium when
the radio waves pass through the Galactic plane \citep[e.g.][]{sc86,
ccsk92,ms96}, we discard 9 NVSS pairs and 3 pairs from the compiled
data at low Galactic latitudes of $|b|<10\degr$, though these few
pairs may not affect our statistics (see Figure~\ref{GB}). A Spearman
rank test demonstrates that $|\Delta$RM$|$ of the NVSS data is
uncorrelated with Galactic latitude, with a correlation coefficient of
$\sim$ --0.004 ($p$-value $\sim$ 0.93). For the pairs from the
compiled data, only a very weak correlation was found,
with a correlation coefficient of --0.22 ($p$-value $\sim$ 0.002).
We therefore conclude that the ``leakage'' into the RM differences
from the Galactic interstellar medium can be ignored.
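Such a rank test is straightforward to reproduce. The following minimal
sketch (our own illustration; the arrays \texttt{abs\_drm} and
\texttt{gal\_b}, here filled with random placeholders, stand for
$|\Delta{\rm RM}|$ and $|b|$ of one sample) shows the computation:
\begin{verbatim}
# Spearman rank correlation between |Delta RM| and Galactic latitude |b|.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
abs_drm = np.abs(rng.normal(0.0, 10.0, 387))   # placeholder |Delta RM| values
gal_b = rng.uniform(10.0, 90.0, 387)           # placeholder |b| values (deg)

rho, pval = spearmanr(abs_drm, gal_b)
print(f"Spearman rho = {rho:.3f}, p-value = {pval:.2f}")
\end{verbatim}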
Figure~\ref{dRMalsep} shows the absolute RM difference as a function
of the angular separation and projected linear separation of two lobes
on the sky plane. To explore the magnetic fields in
the intracluster medium, we discard 4 pairs with LS $\geqslant$
1~Mpc from the NVSS data and 25 pairs from the compiled data, because
the sight lines of these pairs probably intersect much less of the ICM
and their RM differences may reflect more the contribution from the
intergalactic medium, given the typical size of galaxy clusters of about 1~Mpc.
In addition, one pair from a very distant radio galaxy in the compiled
RM data and one pair from the NVSS data have a host galaxy with a
redshift of $z>3$. They are also discarded for the following
statistics.
{ All these discarded pairs are marked with '$\dag$' in the column 13 of
Table~\ref{samplecomp} and \ref{samplenvss}.} We finally have a
cleaned sample of 387 NVSS pairs and 197 compiled pairs with a separation of LS
$<$ 1~Mpc, $|b|>10\degr$ and $z<3$ for further analysis.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=70mm]{dRMzsta_bl10.ps}
\includegraphics[angle=-90,width=70mm]{dRMzsta_bl10_rm10.ps}
\caption{Distribution of absolute values of RM difference $|\Delta \rm
RM|$ and the data dispersions as a function of redshift for 387 NVSS
pairs and the 197 pairs of the compiled data with a projected
separation of LS $<$ 1~Mpc, $|b|>10\degr$ and $z<3$ in the left
panel. Sources with $|\Delta \rm RM| >$ 100 rad~m$^{-2}$ are plotted
at the top boundary. The vertical dotted lines in the top two rows
indicate the redshift at $z=$ 0.3, 0.6, 0.9, 1.5. The dispersions of
the $\Delta$RM distribution are calculated with a Gaussian fitting
with a characteristic width $W_{\rm \Delta RM}$, or simply taken as
the median absolute values, as shown in the third and fourth rows of
panels, respectively. The open circles represent the values from the
NVSS RM data, and the filled dots stand for values from the compiled
data, plotted at the median redshift for each redshift range.
The same plots but for 152 NVSS pairs and 186 compiled pairs with a
formal $\Delta$RM uncertainty $\sigma_{\Delta \rm RM} \leqslant$ 10
rad~m$^{-2}$ are shown in the right.}
\label{dRMzsta}
\end{figure*}
\begin{table*}
\centering
\caption{Statistics of the $\Delta$RM distribution for pairs in redshift bins.\label{dataresult}}
\begin{tabular}{crccccrccc}
\hline
\multicolumn{1}{c}{ } & \multicolumn{5}{c}{Subsamples from the NVSS RM data} & \multicolumn{4}{c}{Subsamples from the compiled RM data} \\
Redshift & No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$ &$W_{\Delta \rm RM_{mock}}$~~~&
No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$~~~~~~\\
range & pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) & (rad~m$^{-2}$) &
pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) \\
\hline
\multicolumn{10}{c}{584 pairs with no uncertainty constraint: 387 NVSS RM pairs and 197 compiled RM pairs}\\% & \multicolumn{4}{c}{$\sigma_{RRM} <$ 20 rad~m$^{-2}$} \\
\hline
0.0--0.3 & 116 & 0.171 & 13.3$\pm$1.3 & 9.9$\pm$1.2 &10.2$\pm$3.1 & 57 & 0.198 & 2.1$\pm$0.3 & 1.6$\pm$0.3 \\
0.3--0.6 & 174 & 0.455 & 11.5$\pm$0.9 &11.0$\pm$0.8 &10.2$\pm$1.6 & 67 & 0.439 & 1.5$\pm$0.2 & 1.0$\pm$0.2 \\
0.6--0.9 & 86 & 0.668 & 12.3$\pm$1.3 &13.9$\pm$1.2 &10.7$\pm$2.1 & 39 & 0.708 & 2.0$\pm$0.4 & 1.1$\pm$0.4 \\
0.9--1.5 & 9 & 1.148 & 18.7$\pm$6.6 &17.5$\pm$6.1 & -- & 25 & 1.131 & 46.5$\pm$9.7 &28.5$\pm$10.6 \\
2.0--3.0 & 2 & -- & -- & -- & -- & 9 & 2.430 & 35.1$\pm$12.4 &38.5$\pm$10.7 \\
0.9--3.0 & 11 & 1.198 & 17.3$\pm$5.5 &17.0$\pm$5.1 & -- & 34 & 1.222 & 43.7$\pm$7.7 &28.8$\pm$8.2 \\
\hline
\multicolumn{10}{c}{338 pairs of $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$: 152 NVSS RM pairs and 186 compiled RM pairs}\\% & \multicolumn{4}{c}{$\sigma_{RRM} <$ 20 rad~m$^{-2}$} \\
\hline
0.0--0.3 & 54 & 0.150 & 8.6$\pm$1.2 & 7.1$\pm$1.2 &7.4$\pm$1.4 & 53 & 0.198 & 1.7$\pm$0.3 & 1.6$\pm$0.2 \\
0.3--0.6 & 60 & 0.454 & 9.0$\pm$1.2 & 7.9$\pm$1.1 &8.2$\pm$0.8 & 66 & 0.438 & 1.5$\pm$0.2 & 1.0$\pm$0.2 \\
0.6--0.9 & 33 & 0.647 & 9.7$\pm$1.7 & 8.5$\pm$1.6 &8.4$\pm$1.9 & 36 & 0.709 & 2.0$\pm$0.4 & 1.1$\pm$0.4 \\
0.9--1.5 & 4 & -- & -- & -- & -- & 23 & 1.131 & 46.8$\pm$10.2 &28.5$\pm$11.2 \\
2.0--3.0 & 1 & -- & -- & -- & -- & 8 & 2.414 & 33.3$\pm$12.6 &30.9$\pm$11.5 \\
0.9--3.0 & 5 & -- & -- & -- & -- & 31 & 1.201 & 43.6$\pm$8.1 &28.5$\pm$8.7 \\
\hline
\multicolumn{10}{l}{$W_{\Delta \rm RM_{mock}}$ denotes the ``intrinsic'' dispersions
of the NVSS data derived by the mock method in Appendix~\ref{appenB}.}
\end{tabular}
\end{table*}
\subsection{The RM difference versus redshift}
In order to reveal the possible redshift evolution of the small RM
difference caused by the intracluster medium, the $\Delta$RM data have
to be carefully analyzed.
From Figure~\ref{dRMz} and Table~\ref{samplecomp} and
\ref{samplenvss}, we see that the uncertainties $\sigma_{\Delta RM}$
from the NVSS RM measurements have a value between 0 and 25
rad~m$^{-2}$, and those for the compiled RM data are mostly less than
10 rad~m$^{-2}$, with more than half being less than 1 rad~m$^{-2}$.
\citet{xh14b} showed that large uncertainties would leak into the
$\Delta$RM distribution. Therefore, we have to study the two samples
of pairs with very different $\Delta$RM uncertainties separately. We
examine two cases, one for the $\Delta$RMs from the whole samples
without a threshold of uncertainty, and the other with the threshold
of $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$.
According to the number distribution in Figure~\ref{dRMz}, we divide the
samples of pairs into five redshift ranges, z$=$(0.0,0.3), (0.3,0.6),
(0.6,0.9), (0.9,1.5) and (2.0,3.0), and examine the data dispersion in
these ranges as shown in Figure~\ref{dRMzsta}, assuming an
insignificant evolution of RM differences in a given redshift range.
The RM difference of a pair of lobes can be negative or positive, and
in the ideal case of a large sample the $\Delta$RM values should
follow a Gaussian distribution with zero mean. The dispersion,
i.e. the width of the Gaussian function, $W_{\Delta \rm RM_{rms}}$,
can be estimated from the real data distribution of $\Delta {\rm RM}$
by calculating the root mean square (rms) of the $\Delta$RMs:
\begin{equation}
W_{\Delta \rm RM_{rms}}
=\sqrt{\frac{\sum_{i=1,N}(RM1-RM2)_i^2}{N}},
\label{rms}
\end{equation}
where $N$ is the total number of pairs. Alternatively, a more robust
approach is to get the median absolute deviation $\rm W_{\Delta \rm
RM_{mad}}$, which is good for small data samples and robust in the
presence of outliers \citep[cf.][]{mcs20}. For our $\Delta$RM data,
the zero mean is expected. Therefore, we consider the median of the
absolute values of the RM difference, i.e.
\begin{equation}
\rm W_{\Delta \rm RM_{mad}}^{\rm ori} = Median(|RM1-RM2|_{i=1,N}).
\label{madfm}
\end{equation}
For normally distributed data, this can be linked to $W_{\Delta \rm
RM_{rms}}$ by
$ W_{\Delta \rm RM_{mad}} = 1.4826 \times W_{\Delta \rm RM_{mad}}^{\rm ori} \simeq W_{\Delta \rm RM_{rms}}$
\citep{llk+13}.
In the redshift ranges with more than five pairs, we calculate the
dispersion of RM differences, $ W_{\Delta \rm RM_{rms}} $ and $
W_{\Delta \rm RM_{mad}} $, see Table~\ref{dataresult} and
Figure~\ref{dRMzsta}. { Though a large $\Delta$RM is possible for
embedded double sources owing to the contribution from the intracluster medium,
with a value of maybe up to a few hundred rad~m$^{-2}$
\citep[e.g.][]{ckb01}, a few outliers are cleaned in our statistics
since they affect the calculation of the dispersion of the main
body of the data. For the rms calculation, data points scattered away
from the main distribution by more than three times the standard
deviation are marked as outliers, and removed iteratively until no
outliers remain. The trimmed rms of $\Delta \rm RM$ is
taken as $ W_{\Delta \rm RM_{rms}} $ for a subsample in a redshift
bin.} The uncertainty of $ W_{\Delta \rm RM_{rms}} $ is taken as the
standard error for the zero mean, as done by \citet{vgra19}. { For
the median calculation, the outliers are also cleaned first, and the
median is found from the remaining $|\Delta {\rm RM}|$, which is
taken as $ W_{\Delta \rm RM_{mad}}^{\rm ori}$ and then converted to $
W_{\Delta \rm RM_{mad}}$ with a factor of 1.4826. Its uncertainty is
taken as being $\sigma_{\left < |\Delta \rm RM_i| \right >}$, the
error of the estimated mean value of $|\Delta {\rm RM}|$, also
multiplied by the factor of 1.4826. }
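The two dispersion estimators and the iterative outlier clipping
described above can be summarized by the following sketch (our own
illustration; \texttt{drm} is assumed to hold the $\Delta$RM values of
one redshift bin):
\begin{verbatim}
# Dispersion of Delta RM in one redshift bin with iterative 3-sigma clipping.
import numpy as np

def drm_dispersion(drm, clip=3.0):
    """Return (W_rms, W_mad) after iteratively removing >clip-sigma outliers."""
    d = np.asarray(drm, dtype=float)
    while True:
        keep = np.abs(d) <= clip * d.std()     # zero mean is expected
        if keep.all():
            break
        d = d[keep]
    w_rms = np.sqrt(np.mean(d**2))             # rms width of Delta RM
    w_mad = 1.4826 * np.median(np.abs(d))      # MAD-based width, Gaussian-scaled
    return w_rms, w_mad

# Example with synthetic values (for illustration only).
rng = np.random.default_rng(1)
print(drm_dispersion(rng.normal(0.0, 12.0, 116)))
\end{verbatim}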
The dispersion calculated above in fact includes a ``noise'' term
coming from various uncertainties of RM values. In principle, the
noise term should be discounted from the $\Delta$RM dispersion to get
real astrophysical contributions. For each pair, the noise term can be
expressed from the quadrature sum of the uncertainty of RMs of two
lobes, i.e. for the $i$th pair, the noise $ \sigma_{\Delta \rm RM_i}^2
= (\sigma_{RM1}^2+\sigma_{RM2}^2)_i$. The procedure of noise
subtraction for the dispersion width $\sqrt{ \ W_{\Delta \rm
RM_{rms}}^2 - \langle \sigma_{\Delta \rm RM_i}^2 \rangle }$
should be carried out under the assumption that the uncertainties in
the observed RMs provide a realistic estimate of the measurement
error. However, RM uncertainties of the NVSS data are underestimated
for most sources \citep{sts11} or probably overestimated for physical
pairs \citep{vgra19}, probably caused by a previously unknown
systematic uncertainty \citep{mgh+10,xh14}. For the compiled RM data,
different estimation methods were used for the measurement errors, and
observations without ionospheric RM correction introduce an extra
RM uncertainty of about 3 rad~m$^{-2}$. It is hard to get a realistic,
uniform estimate of the measurement error for the pair sample in
this paper. Fortunately, this work concerns the RM difference $(\Delta \rm
RM)^2$, which largely cancels any systematic
uncertainties that contribute the same amount to the RM measurements
of two closely located sources, though a small unknown amount of noise
leakage may still occur. We found that $ W_{\Delta \rm RM_{mad}}$
is even much smaller than the average noise power $\langle
\sigma_{\Delta \rm RM}^2 \rangle$; thus no correction for the noise
term is made to the dispersion quantities $ W_{\Delta \rm RM_{rms}} $ and
$ W_{\Delta \rm RM_{mad}} $ in Table~\ref{dataresult}.
With the careful considerations above, we now examine
the dispersion of RM differences of pairs as a function of redshift
$z$, with or without a threshold on the $\Delta$RM uncertainty, for
the NVSS RM pairs and the compiled RM pairs, respectively.
First of all, the amplitudes of dispersion represented by $ W_{\Delta
\rm RM_{rms}} $ and $ W_{\Delta \rm RM_{mad}} $ are consistent with
each other within error bars, as shown in Table~\ref{dataresult} and
Figure~\ref{dRMzsta}.
{ Second, for the NVSS RM pairs, no significant variation of the
dispersion with redshift is seen in either the whole sample or the high
precision sample with $\sigma_{\Delta \rm RM} \leqslant$ 10
rad~m$^{-2}$, which is consistent with the results for physical
pairs obtained by \citet{vgra19}. However, a systematically
larger dispersion is obtained from the whole sample than from
the high precision sample,} which implies that the large uncertainties of
the NVSS RM values \citep[a noise term around 10.4 rad~m$^{-2}$ given
by][]{sch10} significantly affect the dispersion of $\Delta$RM, and
probably bury the small-amplitude evolution at low redshifts. This
is a sign of some noise leakage which cannot be removed.
Third, for pairs from the compiled RM data, which have a very small
noise, a much larger dispersion appears for pairs of $z>0.9$ in both
samples, with or without the $\sigma_{\Delta \rm RM}$ threshold,
compared to the small dispersion for pairs of $z<0.9$. { The
amplitude of dispersion for pairs of $z<0.9$ is mostly less than 2
rad~m$^{-2}$, but for pairs of $z>0.9$ the dispersion is about 30 to
40 rad~m$^{-2}$. Even if the measurement noise, which is about 5.6/4.7
rad~m$^{-2}$ at $z>0.9$ without/with the $\sigma_{\Delta \rm
RM} \leqslant 10$~rad~m$^{-2}$ threshold, is discounted, the result of a larger
dispersion does not change. Since the dispersion values for the two
redshift ranges at $z>0.9$ are similar, the data of all pairs in the
redshift range of $0.9<z<3.0$ are therefore jointly analyzed, and the
uncertainty becomes smaller. The large dispersion for the
high-redshift pairs of $z>0.9$ is therefore a good detection at
about a 5-sigma level. }
{ We note that the pairs with a low redshift in the compiled data
are mainly measured at low frequencies by LOFAR (144~MHz)
\citep[e.g.][]{obv+20} and MWA (200~MHz) \citep[e.g.][]{rgs+20}.
Low frequency data may probe the outer part of galaxy clusters or
poor clusters, hence the dispersion amplitude of around 2 rad~m$^{-2}$
calculated from pairs of $z<0.9$ should be read as a lower limit of
the Faraday rotation from the intracluster medium. The dispersion of about
7$\sim$9 rad~m$^{-2}$ estimated from the NVSS RM data with
$\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$ should be taken as an
upper limit. The ``intrinsic'' dispersions of the NVSS RM data in the
three low redshift bins at $z<0.9$ are verified by the mock method
introduced by \citet{xh14b}, see Appendix~\ref{appenB}.
Based on the above results, we conclude that the dispersion of RM
differences for pairs of $z<0.9$ should be a value in the range of
2$\sim$8 rad~m$^{-2}$, much smaller than the value of 30$\sim$40
rad~m$^{-2}$ for high-redshift pairs of $z>0.9$}.
\begin{figure}
\centering
\includegraphics[angle=-90,width=0.7\columnwidth]{LSz.ps}
\caption{Projected separation of pairs at various redshifts from the
NVSS sample (\textit{top}) and the compiled data
(\textit{bottom}). Note that 34 pairs (14 pairs at $z>0.9$) in the
compiled data are not included since their angular and hence linear
separations are not available. }
\label{LSz}
\end{figure}
\begin{table*}
\centering
\caption{Statistics of the $\Delta$RM distribution for pairs with a
separation larger or smaller than 500~kpc.\label{dataresultls500}}
\begin{tabular}{crcccrccc}
\hline
\multicolumn{1}{c}{ } & \multicolumn{4}{c}{Subsamples from the NVSS RM data} & \multicolumn{4}{c}{Subsamples from the compiled RM data} \\
Redshift & No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$~~~&
No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$~~~~~~\\
range & pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) &
pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) \\
\hline
\multicolumn{9}{c}{pairs with a separation larger than 500 kpc: 76 NVSS pairs and 54 compiled pairs}\\%
\hline
0.0--0.3 & 7 & 0.218 & 10.9$\pm$4.5 &15.7$\pm$3.4 & 15 & 0.199 & 2.6$\pm$0.7 & 2.1$\pm$0.7 \\
0.3--0.6 & 37 & 0.467 & 13.7$\pm$2.3 & 8.5$\pm$2.4 & 19 & 0.467 & 1.1$\pm$0.2 & 1.1$\pm$0.2 \\
0.6--0.9 & 27 & 0.704 & 9.7$\pm$1.9 & 7.1$\pm$1.8 & 18 & 0.754 & 4.0$\pm$1.0 & 2.1$\pm$1.0 \\
0.9--1.5 & 5 & 1.247 & 23.2$\pm$11.6&29.8$\pm$9.6 & 2 & -- & -- & -- \\
\hline
\multicolumn{9}{c}{pairs with a separation smaller than 500 kpc: 309 NVSS pairs and 109 compiled pairs}\\%
\hline
0.0--0.3 & 109 & 0.167 & 13.4$\pm$1.3 & 9.7$\pm$1.3 & 34 & 0.210 & 1.7$\pm$0.3 & 1.5$\pm$0.3 \\
0.3--0.6 & 137 & 0.453 & 11.3$\pm$1.0 &11.2$\pm$0.9 & 38 & 0.397 & 1.6$\pm$0.3 & 0.9$\pm$0.3 \\
0.6--0.9 & 59 & 0.653 & 13.3$\pm$1.8 &15.3$\pm$1.6 & 19 & 0.684 & 1.4$\pm$0.4 & 0.6$\pm$0.4 \\
0.9--1.5 & 4 & -- & -- & -- & 18 & 1.114 & 28.1$\pm$6.8 &27.7$\pm$6.9 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=70mm]{dRMzls5_nv.ps}
\includegraphics[angle=-90,width=70mm]{dRMzls5.ps}
\caption{Absolute RM difference ($|\Delta \rm RM|$) distributions and
their dispersion ($ W_{\Delta \rm RM_{rms}} $ and $ W_{\Delta \rm
RM_{mad}} $) against redshift for pairs with a separation larger
and smaller than 500~kpc for the NVSS RM sample ({\it left}) and the
compiled data sample ({\it right}). }
\label{dRMzls5}
\end{figure*}
\subsection{The RM difference and projected separation}
Is the significant change of $\Delta$RM dispersion for pairs at
$z>0.9$ biased by the linear sizes of double radio sources or their
separation? Figure~\ref{LSz} shows the projected separation of pairs
versus the redshift for both the NVSS and compiled RM samples. The
majority of high-redshift pairs ($z>0.9$) in the compiled data have a
separation less than 500~kpc.
As seen in Figure~\ref{dRMalsep}, the absolute values of RM
differences decline to small values when a projected separation is
larger than 1~Mpc, the typical size of a galaxy cluster. The pairs
with such a projected separation greater than 1~Mpc probably lie at a
large angle to the line of sight, and their light paths pass through
much less of the intracluster medium.
To examine whether the larger RM dispersion of high-redshift pairs is caused
by different separations, in the following we split the NVSS sample and
the compiled data sample into two cases, i.e. the subsamples with a
separation larger or smaller than 500~kpc. In the compiled sample, 34
pairs (14 sources of $z>0.9$) are omitted since the angular and hence
the linear separations are not available though they are probably
smaller than 1~Mpc. Statistics results are shown in
Figure~\ref{dRMzls5} and listed in Table~\ref{dataresultls500}. No
obvious difference in the $\Delta \rm RM$ dispersion can be seen
between the two subsamples with separations smaller and larger than
500~kpc in any of the three low redshift bins for either the NVSS data or the
compiled data. The dispersion values of the subsamples are consistent with
the results derived from the whole sample, which means that the
redshift-dependent dispersion is not caused by the different sizes of pair
separation. For high-redshift pairs of $z>0.9$, statistics can be made
for the NVSS subsample with a separation larger than 500~kpc and also
the compiled subsample with a smaller separation. They both show
a larger dispersion, though with different uncertainties. The larger RM
difference is detected at a $4\sigma$ level for the compiled subsample.
\section{Discussion}
\label{sect4}
If the larger RM differences of high-redshift pairs were caused by
intergalactic medium between the pair and us, the larger the
separation between a pair of two lobes, the more likely their radio
waves experience different foreground cosmic filaments and intervening
medium along the lines of sight. That is to say, the larger the
separation of the lobe positions, the more likely a larger RM
difference would be \citep[e.g.][]{omv+19}.
However, for the compiled RM samples in Figure~\ref{dRMalsep} and
Figure~\ref{dRMzls5}, this is not the case, and the results are just
the opposite, which means that the main RM differences are caused by
the local ICM environment surrounding the double radio sources,
instead of the intervening intergalactic medium in the foreground of a
pair of two lobes. Therefore the RM differences of pairs are excellent
probes of the ICM.
\subsection{Strong magnetic fields in the intracluster medium in the early Universe}
Evidence for larger RM differences for higher-redshift pairs, having
carefully excluded any obvious influence by the Galactic and
intergalactic contributions and also a possible dependence on the linear
separations of pairs, demonstrates strong magnetic fields in the
ICM in the early Universe. We can estimate the field strengths in the
ICM from the dispersion of RM differences at the present epoch and at
high redshift.
As mentioned in Section~\ref{sect1}, pairs of lobes are believed to
mainly reside in dense environments of galaxy clusters/groups. Such
dense ambient gas plays a key role in forming the Faraday screens that
contribute to the difference between the RM values of the lobes. The
RM asymmetry of a pair of lobes indicates that there probably exists a
large-scale ordered net magnetic field in the foreground ICM on the
scale of the pair separation. Because of the turbulent nature of intracluster
magnetic fields, large-scale fluctuations ($>$ 100~kpc) should be
responsible for the RM differences of pairs, and a very large outer
scale for turbulent intracluster magnetic fields of $\sim$450~kpc is
possible, as used in the modeling of magnetic fields for a giant
radio halo \citep{vmg+10}. The small-scale field fluctuations at a few
kpc could be averaged out over a path length comparable to the
projected separation.
A pair of radio sources in our sample could have any separation and
arbitrary orientations in space. The path difference along the line of
sight of the two lobes may vary from zero to the largest linear size.
Assuming a unidirectional large-scale magnetic field geometry
and a constant electron density in the ambient environs, we get an
RM difference of
\begin{equation}
\Delta {\rm RM} = 812~n_e B L_{||} \cos{\theta},
\end{equation}
where $L_{||}$ is the separation of the pair (in kpc) projected onto
the line of sight, and $\theta$ is the angle between the magnetic
field direction and the line of sight. For a sample of pairs with the
same separation but random directions of magnetic fields, the mean of
$\Delta$RM is
\begin{equation}
\left<\Delta {\rm RM}\right> = 812~n_e B L_{||} \int_0^\pi\cos{\theta} \sin{\theta} d\theta \left/\int_0^\pi \sin{\theta} d\theta=0 \right. ,
\end{equation}
and the variance is given by
\begin{equation}
\begin{split}
\left<(\Delta {\rm RM})^2\right>
& = (812~n_e B L_{||})^2 \int_0^\pi\cos^2{\theta}\sin{\theta} d\theta \left/\int_0^\pi \sin{\theta} d\theta \right. \\
& = \frac{1}{3}(812~n_e B L_{||})^2.
\end{split}
\end{equation}
Furthermore, we consider a pair of sources with a random separation
$L$ along a random orientation $\phi$, i.e. $L_{||} = L \cos{\phi}$,
where $L$ is the size and $\phi$ is the angle between the orientation
and the line of sight. Hence, we expect
\begin{equation}
\begin{split}
\left<(\Delta {\rm RM})^2\right>
& = \frac{1}{3}(812~n_e B)^2 \left<L^2\right> \left<\cos^2{\phi}\right> \\
& =\frac{1}{9}(812~n_e B)^2\left<L^2\right>.
\end{split}
\end{equation}
Here $\left<L^2\right>$ denotes the mean square of the separation of pairs.
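The angular averages used above are elementary; a short symbolic
check (our own verification) gives the factor of $1/3$:
\begin{verbatim}
# Verify <cos^2(theta)> = 1/3 for an isotropic distribution of angles,
# the factor entering both averages above.
import sympy as sp

theta = sp.symbols('theta')
avg = sp.integrate(sp.cos(theta)**2 * sp.sin(theta), (theta, 0, sp.pi)) \
      / sp.integrate(sp.sin(theta), (theta, 0, sp.pi))
print(avg)   # -> 1/3
\end{verbatim}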
The rest-frame RM of a Faraday screen at redshift $z$ is
reduced in the observed frame by a factor of $(1+z)^2$.
Then we can derive an analytical formulation by assuming the field strength
and the electron density to be constant in the environs around the double
radio sources at redshift $z$, i.e.
\begin{equation}
\begin{split}
\left<(\Delta {\rm RM})^2\right>
=\frac{1}{9}812^2\left<L(z)^2\right> \left[\frac{n_e(z)B(z)}{(1+z)^2}\right]^2.
\end{split}
\end{equation}
Finally, we get
\begin{equation}
\begin{split}
W_{\Delta \rm RM_{rms}}
& = \left<(\Delta {\rm RM})^2\right>^{1/2}\\
& = 271~n_{e}(z)B(z)\left<L(z)^2\right>^{1/2} (1+z)^{-2}.
\end{split}
\label{Wmodel}
\end{equation}
From Equation~(\ref{Wmodel}), we can derive the magnetic fields in
the ICM if the dispersion of the RM differences, the electron density $n_{e}$
and the rms pair separation $\left<L^2\right>^{1/2}$ at
redshift $z$ are known.
Based on the results shown in Figure~\ref{dRMzsta}, the dispersion of
the RM difference of pairs remains nearly flat at $z<0.9$, with an
amplitude of about 2 to 8 rad~m$^{-2}$. We take a typical value of 3.5
rad~m$^{-2}$ to represent the dispersion at the present epoch. For
pairs of $z>0.9$, the dispersion increases to 30 to 40 rad~m$^{-2}$ at
a median redshift of $z=1.1$. We take a typical value of 35
rad~m$^{-2}$ at $z=1.1$. For the rms pair separation
$\left<L^2\right>^{1/2}$ at redshift $z$, we take the same typical
value of 350~kpc for pairs at low and high redshifts\footnote{The
average projected linear separation is 281~kpc and 234~kpc for
samples of $z<0.9$ and $z>0.9$ with a separation smaller than
500~kpc, based on the fact that the majority of pairs at $z>0.9$
have small separations and their dispersions are consistent with
those from the whole sample. Considering the random projection
effect, we estimate that the real pair separations should be larger
by a factor of $\sqrt{2}$=1.4, i.e. 396~kpc or 329~kpc, respectively. }.
At low redshifts, the mean electron density $n_{e}$ in the ICM is
taken to be $4\times 10^{-4}$~cm$^{-3}$, which is obtained by
integrating the $\beta$-model profile of electron density over a
sphere with a radius of 1~Mpc for 12 galaxy clusters
\citep{gdm+10}. According to Equation~(\ref{Wmodel}), from
$W_{\Delta \rm RM_{rms}} = 3.5$ rad~m$^{-2}$, $n_{e}$ = $4\times
10^{-4}$~cm$^{-3}$ and $\left<L^2\right>^{1/2}$ = 350~kpc at $z=0$, we
can obtain a simple estimate of the magnetic field strength over
this scale as $B= 0.1 \mu$G at the present epoch. At high redshift
$z>0.9$, we do not know the exact properties of the ICM. If we assume
the mean electron density $n_{e}(z)$ at $z>0.9$ is the same as the
density at the present epoch, along with $W_{\Delta \rm RM_{rms}} =
35$ rad~m$^{-2}$ and $\left<L^2\right>^{1/2}$ = 350~kpc as well at
$z=1.1$, the magnetic field would be $B(z) = 4~\mu$G. To get this
value, any field reversals on scales smaller than 350~kpc are ignored. If
field reversals on a scale of 30~kpc are considered, the field strength
would be boosted by a factor of $\sqrt{350/30}$, reaching a field
strength of 14~$\mu$G.
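These order-of-magnitude estimates follow directly from
Equation~(\ref{Wmodel}); a short numerical sketch (our own
illustration, with $n_e$ in cm$^{-3}$, $B$ in $\mu$G and $L$ in kpc)
reproduces them:
\begin{verbatim}
# Magnetic field strength from the RM-difference dispersion:
# W = 271 * n_e[cm^-3] * B[uG] * <L^2>^{1/2}[kpc] * (1+z)^{-2}  [rad m^-2]
import numpy as np

def b_field(w_rms, n_e, l_rms, z):
    """Invert the dispersion relation for B in micro-Gauss."""
    return w_rms * (1.0 + z)**2 / (271.0 * n_e * l_rms)

n_e, l_rms = 4e-4, 350.0                    # cm^-3, kpc
print(b_field(3.5, n_e, l_rms, 0.0))        # ~0.1 uG at the present epoch
b_high = b_field(35.0, n_e, l_rms, 1.1)     # ~4 uG at z = 1.1
print(b_high)
print(b_high * np.sqrt(350.0 / 30.0))       # ~14 uG with 30-kpc field reversals
\end{verbatim}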
\subsection{Implication of strong magnetic fields in the ICM}
The field strength estimated above for the ICM from the pairs at
$z<0.9$, if in the form of a uniform large-scale field geometry, is
0.1 $\mu$G, close to the minimum intracluster magnetic field obtained
by \citet{pjds13}. More tangled fields would have a strength a few
times larger. The estimated field strength is smaller than that of
some targeted clusters, such as a few $\mu$G on scales of tens of kpc
in merging clusters and a few 10 $\mu$G in cool core clusters
\citep[see e.g.][]{ct02,vda+19}. There are two possible reasons.
First, the well-measured RM differences at low redshifts are
dominated by the RM data with very small uncertainties, which were
mainly measured at low frequencies by LOFAR \citep[e.g.][]{obv+20} and
MWA \citep[e.g.][]{rgs+20}. Those observations at such low frequencies
may probe the medium in the outer part of galaxy clusters or poor
clusters, so that the estimated field strength is close to that of the
large-scale intergalactic magnetic fields around galaxy clusters, as
illustrated by simulations \citep{rkcd08}. In contrast, the small
number of RM data with larger uncertainties and more scatter in the
distribution were mostly observed at 1.4 GHz or higher frequencies,
which are more likely to probe the inner part of galaxy clusters.
Secondly, at low redshifts most powerful radio sources reside in
comparatively sparse environments, with a few exceptions [e.g. Cygnus A
\citep{dcp87} and other sources with large RM differences in the
compiled data], as pointed out by \citet{pvc+00}, so that the dispersion
of RM difference is small. This is supported by the NVSS sample with
a similar small dispersion of RM difference, i.e. upper limit of 7--9
rad~m$^{-2}$ derived by this work and 4.6$\pm$1.1 rad~m$^{-2}$ by
\citet{vgra19}.
The value of the uniform intracluster magnetic field strength of 4 $\mu$G
(or $\sim$14 $\mu$G for tangled fields) at $z>0.9$ derived from the RM
differences of pairs is intriguing, as it is comparable to the field
strength of galaxy clusters at low redshifts \citep[see a review
by][]{han17}, for example a central field strength of 4.7 $\mu$G in
the Coma cluster \citep{bfm+10} and a few microGauss in a sample of
X-ray selected clusters \citep{ckb01,bck16}. This is evidence for
strong organized magnetic fields in galaxy clusters in the early
Universe. If this scenario is correct, it poses a considerable
challenge to theories on the origin of intracluster magnetic fields,
because the time available at $z>0.9$ is not sufficient to generate and
align strong magnetic fields on such large scales. The building-up
of large-scale coherent magnetic fields via the inverse cascade of the
$\alpha-\Omega$ dynamo that often works in normal spiral
galaxies cannot operate in galaxy clusters because they do not show an
observed organized rotation. Even if they did, only one or two
rotations within the age of the Universe under slow cluster rotation ($v
\leq 100$ km s$^{-1}$) would be insufficient for the generation of such a strong
mean field \citep{ct02}.
The origin and the growth of magnetic fields in galaxy clusters are an
enigma. The widely accepted hypothesis is that they are amplified from
much weaker seed fields (either primordial or injected by galactic
outflows) through a variety of processes \citep[see the review
by][]{dvbz18}. Simulations show evidence of significant magnetic
field amplification with a small-scale dynamo driven by turbulence and
compression during structure formation \citep{vbbb18,dvbb19}.
Assuming the dynamo growth can start soon after the cluster forms, it
often takes a time-span of several Gyr to amplify magnetic fields to a
few $\mu$G \citep[e.g.][]{dvbb19}. Increasing the Reynolds number can
reduce the time scale for magnetic amplification, but the number is
limited by the efficiency of the transfer of kinetic energy into
magnetic energy. Merger induced shocks that sweep through the ICM or
motions induced by sloshing cool cores may play additional roles in
fast amplification of intracluster magnetic field at high redshifts
\citep{dvbz18}, but not up to such a large scale. The recent
observations of diffuse radio emission in distant galaxy clusters
\citep{dvb+21} have put a strong limit on the time scale of the
magnetic growth by discovering field strengths of $\mu$G at $z\sim$
0.7. The time available for the amplification in their case is about
3.7~Gyr. Our results provide strong evidence for strong magnetic field
strengths on such large scales at $z>$ 0.9 and even up to $z\sim$ 2,
comparable to those in nearby clusters, which places a more stringent
constraint on magnetic field generation and evolution.
\section{Conclusions}
\label{sect5}
The Faraday rotation measure differences between the two lobes of a sample
of radio galaxies, which are completely free from the Faraday rotation
contributed by the interstellar medium inside the Milky Way
and the intergalactic medium between the radio galaxies and us, are
significantly larger at $z>0.9$, indicating average intracluster
magnetic fields of about 4 $\mu$G (or 14 $\mu$G for tangled fields), in
contrast to the weaker intracluster fields at the present epoch of about
0.1 $\mu$G (or 0.3 $\mu$G for tangled fields). Such strong magnetic
fields in the early Universe pose a great challenge to the generation
of cosmic magnetic fields.
More RM data for pairs at high redshift are desired to reach a firm
conclusion, since the current data sets are limited in number and have
somewhat large measurement uncertainties. Polarization observations of
a larger sample of double radio sources with better RM precision
should be available soon, which are necessary to further
constrain the evolution of magnetic fields in the ICM.
\section{Introduction}
On 22 September 2017, IceCube reported a neutrino event with energy $\sim 290\ \rm TeV$, which was shown to be associated with the blazar TXS 0506+056 \citep{aartsen1807science}. This opened a window for high-energy neutrino astrophysics. The origin of high-energy neutrinos is not clear yet, and tidal disruption events (TDEs) are another possible source.
Recently, \cite{stein2021tidal} reported a correlation between a neutrino event with energy $\sim$0.2 PeV detected by IceCube on 1 October 2019 (IceCube-191001A) and a TDE (AT2019dsg) discovered by the Zwicky Transient Facility (ZTF), with the neutrino event lagging the onset of the TDE by 6 months. The redshift of AT2019dsg is $z=0.051$, i.e., the luminosity distance is $D=230$ Mpc.
The optical luminosity was observed to decrease from $10^{44.5} \rm erg\ s^{-1}$ to $10^{43.5} \rm erg\ s^{-1}$ \citep{van2020} on a timescale of half a year. AT2019dsg is among the top 10\% of the 40 known optical TDEs in luminosity. The peak radiation is well described by a $10^{14}\rm cm$-sized blackbody photosphere of temperature $10^{4.59}$ K.
AT2019dsg is an unusual TDE \citep{2102.11879}; it belongs to neither the typical soft X-ray TDEs nor the typical optical/UV TDEs, because it emits optical/UV radiation as well as X-ray and radio radiation.
The Fermi Large Area Telescope (Fermi-LAT) provides an upper limit on the flux of $1.2\times 10^{-11}\rm erg\ cm^{-2}\ s^{-1}$ in the 0.1--800 GeV range. The HAWC observatory also set an upper limit for the period from 30 September to 2 October, $ E^{2}\Phi=3.51\times 10^{-13}(\frac{E}{\rm TeV})^{-0.3}\rm TeV\ cm^{-2}\ s^{-1} $ for 300 GeV--100 TeV \citep{van2020}.
In the previous literature (e.g., \citealt{wang2016tidal,liu2020}), a jet from the TDE is assumed to accelerate protons and generate neutrinos via p$\gamma$ interactions, which dominate over pp interactions since the density of photons is extremely high. Moreover, in the TDE model by \cite{murase2020high}, high-energy neutrinos and soft gamma-rays may be produced via hadronic interactions in the corona, the radiatively inefficient accretion flows and the hidden sub-relativistic wind.
It is known that, in addition to the radiative outburst, a TDE is also expected to launch ultra-fast and energetic outflows. First, due to the relativistic apsidal precession, after passing the pericenter, the stream collides with the still infalling debris, leading to a collision-induced outflow \citep{lu2020self}. Second, after the debris settles into an accretion disk, the high accretion mode will launch energetic outflows \citep{curd2019grrmhd}. Since the duration of both processes above is of the order of months, the duration of launching energetic TDE outflows is also roughly months. For AT2019dsg, the physics of the outflow is estimated via the radio emission. The velocities inferred in different models are similar: 0.07c in the outflow--circumnuclear medium (CNM) interaction model \citep{cendes2021radio}, or 0.06--0.1c in the outflow--cloud interaction model \citep{mou2021radio}. However, the outflow energy or kinetic luminosity of outflows varies considerably between models. \citet{cendes2021radio} estimated an energy of $4\times 10^{48}$ erg, which is much smaller than the energy budget of the TDE system ($\sim 10^{53}$ erg). \citet{mou2021radio} inferred that the outflow power should be around $10^{44}$ erg s$^{-1}$, which is consistent with numerical simulations \citep{curd2019grrmhd}, and the total energy should be of the order of $10^{51}$ erg if the outflow continues for months.
When the outflow impacts a cloud, a bow shock (BS) can be produced outside the cloud, which could effectively accelerate particles via diffusive shock acceleration (DSA) processes.
The electrons accelerated at BSs give rise to synchrotron emission, which can be detected in the $\sim$GHz radio band \citep{mou2021radio}. The accelerated high-energy protons may be the precursors of the neutrinos, especially considering the high-density cloud near the BS, which is favorable for pp collisions \citep{mou2021years2}. A basic premise is whether there are clouds around the black hole, especially at the distance of $10^{-2}$ pc inferred from the delay of the neutrino event and the possible outflow speed. It is well known that for active galactic nuclei (AGN), there exists a so-called broad line region (BLR) around the supermassive black hole (SMBH) \citep{antonucci1993unified}, which is frequently referred to as ``clouds''. However, for a quiescent SMBH or low-luminosity AGN (the case for AT2019dsg), due to the lack of ionizing radiation irradiating the clouds, the existence of the clouds becomes difficult to verify, and the physics of such clouds remains largely unknown.
To distinguish them from the BLR cloud concept in AGN, we hereby call the possibly existing clouds around a quiescent or low-luminosity SMBH at a position similar to that of the BLR ``dark clouds''. Transient events may help reveal the physics of dark clouds. For AT2019dsg, in addition to the indirect speculation on the existence of dark clouds via radio emission \citep{mou2021radio}, direct evidence arises from the dusty echo and broad emission line components.
First, \cite{van2021establishing} reported that AT2019dsg was detected with a remarkable infrared echo about 30 days after the optical onset, suggesting that clouds should exist at a distance of 0.02 pc from the SMBH (note that there may exist clouds in more inward regions not surveyed by WISE/neoWISE). Second, it is reported that there exist broad emission line components (line widths $>$ 70 \r{A}) of H$\alpha$, H$\beta$, and He\uppercase\expandafter{\romannumeral2} \citep{cannizzaro2021accretion}, implying the existence of material with velocities over several thousand kilometers per second, although its nature is unclear.
In our TDE outflow--cloud interaction model, we assume that dark clouds exist at $\sim 0.01$ pc from the BH, and we simply set the parameters (covering factor, cloud size and density) of the dark clouds to values similar to those of classical BLR clouds.
This paper is organized as follows. We introduce the general physical picture of the model in Sec.~\ref{Pp}; the products (GeV-TeV gamma-rays and PeV neutrinos) from hadronic emission are described in Sec.~\ref{dp}; in Sec.~\ref{cwo}, we compare the calculations with the present observations; the conclusions and discussion are presented in the last section.
\section{Physical picture of outflow-cloud interactions}\label{Pp}
As shown in Fig. \ref{S}, consider that there are dark clouds surrounding the SMBH.
The TDE outflows released from the SMBH collide with the dark clouds, forming two shock waves \citep{mckee1975}, i.e., a bow shock (BS) outside the cloud and a cloud shock (CS) sweeping through the cloud. Following \cite{celli2020spectral}, protons may be accelerated to very high energies with a power-law distribution. The high-energy protons may partly propagate into the cloud and interact with the matter therein.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{9f.eps}
\caption{Schematic plot of the TDE outflow--cloud interaction model. A star enters the Roche radius of the SMBH and is disrupted, whereby part of the debris is blown away. The collision-induced outflows hit clouds, forming shock waves: a bow shock and a cloud shock.
The outflow--cloud interactions occur at 0.01 pc from the SMBH.
See section \ref{poo} for details. }
\label{S}
\end{figure*}
\subsection{Dynamics}\label{poo}
We consider a simplified spherically symmetric outflow, with kinetic luminosity $L_{\rm kin}$ and velocity $V_{\rm o}$. We take $L_{\rm kin}=10^{45} \rm erg~s^{-1}$ as the fiducial value \citep{curd2019grrmhd}, which is also close to the constraint given by \cite{mou2021radio}. Following the interpretation of the radio observations \citep{stein2021tidal} as synchrotron radiation from non-thermal electrons in the outflow--CNM model, we take the outflow velocity derived from that model, $V_{\rm o} = 0.07\rm c$ \citep{cendes2021radio}. The duration of the outflow launching is assumed to be $T_{\rm o}\sim 6$ months.
Defining $\rho_{\rm o}$ as the density of the outflow material, we write the kinetic luminosity as $L_{\rm kin}=\frac{1}{2}\dot{M_{\rm o}}V_{\rm o}^{2}=2\pi r_{\rm o}^{2}\rho_{\rm o}V_{\rm o}^3$, with $\dot{M_{\rm o}}$ the mass outflow rate. Since the time delay between the neutrino event and the TDE is $t_{\rm delay}\sim 6$ months \citep{van2020}, we assume the typical distance of the dark clouds from the central SMBH is $r_{\rm o}=V_{\rm o}t_{\rm delay} \simeq 0.01$ pc.
Thus the number density of the outflow is $n_{\rm o}=\frac{\rho_{\rm o}}{m_{\rm H}}\sim 1.14\times 10^{7}(\frac{L_{\rm kin}}{{{10^{45}\rm erg\ s^{-1}}}})(\frac{V_{\rm o}}{0.07{\rm c}})^{-3}(\frac{r_{\rm o}}{0.01{\rm pc}})^{-2}\rm cm^{-3}$.
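As a quick sanity check of this estimate (a minimal numerical sketch, not part of the original calculation; constants in cgs units), one may evaluate $n_{\rm o}=L_{\rm kin}/(2\pi r_{\rm o}^{2}m_{\rm H}V_{\rm o}^{3})$ with the fiducial parameters:
\begin{verbatim}
import math

# fiducial parameters (cgs units)
L_kin = 1e45              # outflow kinetic luminosity [erg/s]
c     = 3.0e10            # speed of light [cm/s]
V_o   = 0.07 * c          # outflow velocity [cm/s]
pc    = 3.086e18          # 1 parsec [cm]
r_o   = 0.01 * pc         # distance of the clouds from the SMBH [cm]
m_H   = 1.67e-24          # hydrogen mass [g]

# n_o = L_kin / (2 pi r_o^2 m_H V_o^3)
n_o = L_kin / (2.0 * math.pi * r_o**2 * m_H * V_o**3)
print("n_o ~ %.2e cm^-3" % n_o)    # ~1e7 cm^-3, consistent with the text
\end{verbatim}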
The interaction between the outflows and the clouds drives a CS that sweeps across the cloud. The velocity of the CS is related to the outflow velocity by $V_{\rm c}=\chi^{-0.5}V_{\rm o}$ \citep{mckee1975, mou2020years1}, where $\chi\equiv \frac{n_{\rm c}}{n_{\rm o}}$, with $n_{\rm c}$ the particle density in the cloud. According to photoionization models of BLRs in AGN, we assume the cloud particle density to be $n_{\rm c} \sim 10^{10} \rm cm^{-3}$ \citep{osterbrock2006astrophysics}. So, here $\chi\simeq 8.8\times 10^{2}$.
Let us assume the size of the clouds is typically $r_{\rm c}=10^{14}$cm\footnote{This can be obtained from the column density ($\sim10^{24}{\rm cm^{-2}}$) and the cloud density $ n_{\rm c}\sim10^{10}\rm cm^{-3}$ \citep{osterbrock2006astrophysics}.}. The CS crosses the cloud in a timescale of
\begin{equation}
T_{\rm cloud}=\frac{r_{\rm c}}{V_{\rm c}}=\frac{r_{\rm c}}{V_{\rm o}} \chi^{0.5},
\end{equation}
i.e., $T_{\rm cloud}\sim1(\frac{r_{\rm c}}{10^{14}\rm cm})(\frac{V_{\rm o}}{0.07\rm c})^{-1}(\frac{n_{\rm c}}{10^{10}\rm cm^{-3}})^{0.5}(\frac{n_{\rm o}}{1.14 \times 10^{7}\rm cm^{-3}})^{-0.5}$ month. Note that the cloud could be destroyed by the outflow, and the survival timescale of the cloud after the CS crossing is comparable to $T_{\rm cloud}$ \citep{klein1994hydrodynamic}, so $T_{\rm cloud}$ can also be regarded as the survival timescale of the cloud.
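The corresponding density contrast, cloud-shock velocity, and crossing time can be checked in the same way (again a hedged numerical sketch with the fiducial values; $T_{\rm cloud}$ comes out at $\sim 10^{6}$ s, i.e., of the order of a month):
\begin{verbatim}
import math

c   = 3.0e10
V_o = 0.07 * c          # outflow velocity [cm/s]
n_o = 1.14e7            # outflow density at r_o [cm^-3]
n_c = 1e10              # cloud density [cm^-3]
r_c = 1e14              # cloud size [cm]

chi = n_c / n_o                  # density contrast, ~8.8e2
V_c = V_o / math.sqrt(chi)       # cloud-shock velocity, ~7e7 cm/s
T_cloud = r_c / V_c              # CS crossing / cloud survival time [s]
print("chi ~ %.1e, V_c ~ %.1e cm/s, T_cloud ~ %.1e s" % (chi, V_c, T_cloud))
\end{verbatim}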
\subsection{Particle acceleration and propagation}
\label{swa}
Both the BS and the CS may accelerate particles. According to the diffusive shock acceleration (DSA) mechanism, the acceleration timescale in the BS for a particle with energy $E_p$ and charge number $Z$ is \citep{drury1983introduction}
\begin{equation}
T_{\rm acc,BS}\approx\frac{8}{3}\frac{cE_{\rm p}}{ZeB_{\rm o}V_{\rm o}^2},
\label{AM}
\end{equation}
where $B_{\rm o}$ is the magnetic field strength in the outflow. For the CS, the acceleration timescale is
\begin{equation}
T_{\rm acc,CS}\approx\frac{8}{3}\frac{cE_{\rm p}}{ZeB_{\rm c}V_{\rm c}^2}
\end{equation}
where $B_{\rm c}$ is the magnetic field in the cloud. We assume $B_{\rm o}= B_{\rm c}=B$ in the region of the outflow--cloud interaction, and we take $B=1$ G.
The particle acceleration is limited by several factors. The first is the particle energy loss due to hadronic interactions, namely the pp interactions. In the BS, the timescale is
\begin{equation}
t_{\rm pp,BS}=\frac{1}{cn_{\rm o}\sigma_{\rm pp}} ,\label{tpp}
\end{equation}
whereas in the CS,
\begin{equation}
t_{\rm pp,CS}=\frac{1}{cn_{\rm c}\sigma_{\rm pp}} .\label{tpp,cs}
\end{equation}
Here $\sigma_{\rm pp}\simeq 30\,$mb is the pp cross section. The other suppression factor is the lifetime of the relevant shocks. Since the cloud survival timescale is comparable to the CS crossing time, both the BS and the CS end once the cloud is destroyed, so the acceleration in either the BS or the CS is only available within a time period of $T_{\rm cloud}$ \citep{klein1994hydrodynamic}. Finally, the maximum energy of the accelerated particles is determined by equating the acceleration time to the shorter of the pp interaction time and the CS crossing time of the cloud.
All the timescales are plotted in Fig. \ref{tt}.
For the BS, $T_{\rm cloud} \sim 1$ month is a more restrictive constraint than the pp energy-loss time, owing to the low density in the outflow, $t_{\rm pp,BS}=3.1(\frac{n_{\rm o}}{1.14\times 10^{7}{\rm cm^{-3}}})^{-1}$yr. By equating $T_{\rm acc,BS}=T_{\rm cloud}$ we obtain the maximum energy of particles accelerated in the BS, $E_{\rm p,max}\simeq 60(\frac{B}{1{\rm G}})(\frac{V_{\rm o}}{0.07{\rm c}})^{2}(\frac{T_{\rm acc,BS}}{{\rm 1\ month}})$PeV.
For the CS, due to the dense cloud, the pp collision time is short, $t_{\rm pp,CS}=1(\frac{n_{\rm c}}{10^{10}\rm cm^{-3}})^{-1}$ day, and is the more important factor suppressing acceleration. By setting $T_{\rm acc,CS}=t_{\rm pp,CS}$, one obtains the maximum energy $E_{\rm p,max} \simeq 2.9(\frac{B}{1{\rm G}})(\frac{V_{\rm c}}{7.1\times 10^{7}{\rm cm\ s^{-1}}})^{2}(\frac{T_{\rm acc,CS}}{1 {\rm day}})$TeV. Thus only the BS can accelerate particles up to the PeV scale.
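The pp energy-loss timescales quoted above follow from $t_{\rm pp}=(c\,n\,\sigma_{\rm pp})^{-1}$; a minimal sketch of this arithmetic (with $\sigma_{\rm pp}\simeq 30$ mb) is:
\begin{verbatim}
sigma_pp = 3.0e-26      # pp cross section, ~30 mb [cm^2]
c   = 3.0e10            # [cm/s]
n_o = 1.14e7            # outflow (bow-shock) density [cm^-3]
n_c = 1e10              # cloud density [cm^-3]
yr, day = 3.15e7, 8.64e4

t_pp_BS = 1.0 / (c * n_o * sigma_pp)    # energy-loss time in the BS
t_pp_CS = 1.0 / (c * n_c * sigma_pp)    # energy-loss time in the cloud
print("t_pp,BS ~ %.1f yr" % (t_pp_BS / yr))     # ~3 yr
print("t_pp,CS ~ %.1f day" % (t_pp_CS / day))   # ~1 day
\end{verbatim}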
\begin{figure}
\centering
\includegraphics[scale=0.6]{timescale_10f.eps}
\caption{The timescales of particle acceleration (solid) and pp interaction (dotted) in the BS (blue) and CS (red), and the cloud survival timescale (black). }
\label{tt}
\end{figure}
Note that we neglect the energy loss due to p$\gamma$ interactions between the high-energy particles and the TDE photons. Given the average cross section $\sigma_{\rm p\gamma}\sim0.2\,$mb and the TDE photon number density at $r_{\rm o}$, $n_{\rm ph}\simeq 10^{9} \rm cm^{-3}$ (see Sec.~\ref{gammarays}), the timescale of $\rm p\gamma$ interactions is relatively long, $t_{\rm p\gamma}\sim 3.2(\frac{n_{\rm ph}}{1\times 10^{9}{\rm cm^{-3}}})^{-1}\rm yr$. In previous works (e.g., \citealt{liu2020}), the $\rm p\gamma$ reaction is important because a site closer to the center is considered, where the photon density is high, $n_{\rm ph}\sim 10^{16}(\frac{L}{10^{43}\rm erg\ s^{-1}})(\frac{r_{\rm o}}{10^{14.5}\rm cm})^{-2}\rm cm^{-3}$ (with $L$ being the TDE luminosity, see below). In our case, neither p$\gamma$ nor pp reactions in the BS consume a significant fraction of the energy of the accelerated particles.
After acceleration in the BS, the high-energy particles may diffuse away from the BS \citep{bultinck2010origin,taylor1961diffusion,kaufman1990explanation}. As suggested in the literature \citep{1997ApJ...478L...5D,2010ApJ...724.1517B,2012ApJ...755..170B,2018arXiv180900601W}, we assume that a significant fraction $F$ of the accelerated particles can effectively reach and enter the cloud, whereas the others propagate away, bypassing the cloud. Basically, the relatively low-energy protons are likely to be advected with the shocked material,
while the more energetic particles ($\gtrsim 1\,\rm TeV$) tend to diffuse up to the cloud. Besides, the entry of these high-energy particles into the cloud could become even more important if the possible advection escape is suppressed under certain magnetic configurations \citep{bosch2012}. A detailed treatment of the particle propagation is beyond the scope of this paper. To parameterize this uncertainty, $F\simeq 0.5$ is invoked in our calculations. For the other high-energy particles that do not propagate into the cloud, no hadronic interactions are expected, given the low density of cold particles outside the cloud.
After entering the cloud, the particles may propagate in the cloud by diffusion. The residence time in the cloud before escaping can be estimated by
\begin{equation}
\tau_{\rm es}=C_{\rm e}\frac{r_{\rm c}^{2}}{D_{\rm B}},
\label{eqescape}
\end{equation}
where $D_{\rm B}$ is the Bohm diffusion coefficient, and $C_{\rm e}$ is a correction factor that accounts for the difference between the actual diffusion coefficient and the Bohm diffusion. We take $C_{\rm e}=0.75$. The Bohm diffusion coefficient of protons is given by \citep{kaufman1990explanation} $D_{\rm B}=\frac{r_{\rm g}^{2}\omega_{\rm g}}{16}$, where $r_{\rm g}=\frac{E_{\rm p}}{eB}$ is the cyclotron radius, and $\omega_{\rm g}= \frac{eBc}{E_{\rm p}}$ is the cyclotron frequency. Thus we get $\tau_{\rm es}\sim 1.3(\frac{E_{\rm p}}{7\rm PeV})^{-1}(\frac{B}{1\rm G})$ day, a value similar to $t_{\rm pp,CS}$ for $E_{\rm p}\sim7$ PeV.
\section{Hadronic emission}\label{dp}
In the outflow--cloud interaction, the kinetic energy of the outflow will be converted into the BS and CS. The energy ratio between the CS and BS is $\chi^{-0.5}\simeq0.034$, so the energy dissipation in the CS can be neglected compared to the BS (see appendix A in \citealt{mou2020years1}).
The covering factor of the clouds is $C_{\rm v}\sim 0.1$, and the shock acceleration efficiency, i.e., the fraction of the shock energy converted to accelerated particles, is $\eta \sim 0.1$. Given the kinetic energy of the outflow, the average luminosity of the accelerated particles is, in the BS,
\begin{equation}
L_{\rm b}=C_{\rm v}\eta L_{\rm kin}, \label{Eb}
\end{equation}
and in the CS,
\begin{equation}
L_{\rm c}=C_{\rm v}\chi^{-0.5}\eta L_{\rm kin}. \label{Ec}
\end{equation}
Plugging in the numbers, $L_{\rm c}\approx 3.4\times 10^{41}\rm erg\ s^{-1}$ and $L_{\rm b}\approx 10^{43}\rm erg\ s^{-1}$ for $\eta=0.1$. The luminosity of the CS is small enough that we can neglect its contribution to the emission.
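For the fiducial parameters, these two luminosities follow from the expressions above; a one-line check (a sketch only):
\begin{verbatim}
L_kin = 1e45        # [erg/s]
C_v   = 0.1         # covering factor of the clouds
eta   = 0.1         # shock acceleration efficiency
chi   = 8.8e2       # cloud-to-outflow density contrast

L_b = C_v * eta * L_kin            # particle luminosity from the BS
L_c = chi ** (-0.5) * L_b          # particle luminosity from the CS
print("L_b ~ %.1e erg/s, L_c ~ %.1e erg/s" % (L_b, L_c))
\end{verbatim}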
We assume that the accelerated relativistic particles follow a power-law spectrum with spectral index $\Gamma$ and an exponential cutoff at high energy:
\begin{equation}
\frac{dn(E_{\rm p})}{dE_{\rm p}}=K_{\rm p}E_{\rm p}^{-\Gamma}e^{-\frac{E_{\rm p}}{E_{\rm p,max}}}\label{Ep}.
\end{equation}
The normalization factor $K_{\rm p}$ can be determined from the normalization of the particle luminosity $L_{\rm p}=\int E_{\rm p}\frac{dn(E_{\rm p})}{dE_{\rm p}}dE_{\rm p}$.
Since the contribution from the CS is neglected, we have $L_{\rm p}=L_{\rm b}$. We will consider a range of $\Gamma$ values, from 1.5 to 2 \citep{celli2020spectral}.
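In practice $K_{\rm p}$ is obtained numerically from this normalization. The sketch below illustrates one way to do it; the lower integration bound (taken as 1 GeV here) and the choices $\Gamma=1.7$ and $E_{\rm p,max}=60$ PeV are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

L_p    = 1e43                       # particle luminosity L_b [erg/s]
Gamma  = 1.7                        # assumed spectral index
GeV    = 1.602e-3                   # 1 GeV in erg
E_max  = 60.0e6 * GeV               # 60 PeV in erg
E_min  = 1.0 * GeV                  # assumed lower bound of the spectrum

def dn_dE(E, K=1.0):
    """Power law with exponential cutoff, up to the factor K_p."""
    return K * E ** (-Gamma) * np.exp(-E / E_max)

# integrate E * dn/dE in log space:  int E dn/dE dE = int E^2 dn/dE dlnE
lnE = np.linspace(np.log(E_min), np.log(10.0 * E_max), 4000)
E = np.exp(lnE)
I = np.sum(E ** 2 * dn_dE(E)) * (lnE[1] - lnE[0])

K_p = L_p / I
print("K_p ~ %.2e (cgs)" % K_p)
\end{verbatim}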
The pp collisions produce neutral and charged pions,
\begin{eqnarray}
&p+p\to p+p+a\pi^{0}+b(\pi^{+}+\pi^{-}),\\
& p+p\to p+n+\pi^{+}+a\pi^{0}+b(\pi^{+}+\pi^{-}),\label{pp2}
\end{eqnarray}
where $a\approx b$.
The pions decay and generate $\gamma$-rays and leptons:
\begin{eqnarray}
& \pi^{0}\to 2\gamma\\
& \pi^{+}\to \mu^{+}+\nu_{\mu},~ \mu^{+}\to e^{+}+\nu_{e}+\bar{\nu}_{\mu},\\
& \pi^{-}\to \mu^{-}+\bar{\nu}_{\mu},~ \mu^{-}\to e^{-}+\bar{\nu}_{e}+\nu_{\mu}. \label{n2}
\end{eqnarray}
The final product particles produced by pp collisions in the clouds per unit time on average can be given by \citep[e.g.][]{kamae2006,aartsen2014}:
\begin{equation}
\frac{dn_{\rm f}}{dE_{\rm f}}= 1.5Fcn_{\rm H} \int\frac{d\sigma(E_{\rm p},E_{\rm f})}{dE_{\rm f}}\frac{dn(E_{\rm p})}{dE_{\rm p}}dE_{\rm p},\label{nf}
\end{equation}
where ${\rm f}=\gamma, \nu$, etc., denotes the type of final particle, $\sigma$ is the inclusive cross section as a function of the final particle's and the proton's energies, and $n_{\rm H}$ is the background number density of protons.
The coefficient 1.5 is a correction factor accounting for the contribution of Helium (we assume that the Helium abundance in the BS is similar to that of Galactic cosmic rays \citep{mori1997galactic}).
Here the integration is calculated using the cparamlib package\footnote{https://github.com/niklask/cparamlib}. If the particle escape from the cloud is fast, the calculated spectrum above should be multiplied by a factor $\frac{\tau_{\rm es}(E_{\rm p})}{t_{\rm pp,CS}}$ to take into account the reduction of secondary products by escape.
\begin{table*}
\centering
\caption{Model parameters}
\label{123}
\begin{tabular}{lcr}
\hline
Parameters & Descriptions & Fiducial Values \\
\hline
$L_{\rm kin}$ & the kinetic luminosity of outflow & $10^{45}\rm erg\ s^{-1}$ \\
$V_{\rm o}$ & the velocity of outflow & 0.07c \\
$T_{\rm o}$ & outflow launching duration & 6 months \\
$r_{\rm o}$ & the typical distance of clouds from the SMBH & $0.01\,\rm pc$ \\
$n_{\rm c}$& the particle density of cloud & $10^{10}\,\rm cm^{-3}$ \\
$r_{\rm c}$ & the typical size of clouds & $10^{14}\,\rm cm$ \\
$B$ & magnetic field strength around outflow-cloud interaction region & 1G \\
$C_{\rm v}$ & covering factor of clouds & 0.1 \\
$\eta$ & the fraction of shock energy converted to accelerated particles & 0.1 \\
$F$&the fraction of accelerated particles propagating into the cloud & 0.5\\
$C_{\rm e}$ & the correction factor of diffusion coefficient relative to Bohm limit & 0.75\\
\hline
\end{tabular}
\end{table*}
\subsection{Neutrino}
Given the neutrino luminosity and spectrum, we calculate the neutrino event number expected to be detected by IceCube in a time period of $T_{\rm o}$,
\begin{equation}
N_{\nu}=\frac{T_{\rm o}}{4\pi D^{2}}\int_{0.1\rm PeV}^{1\rm PeV}dE_{\nu}A_{\rm eff}(E_{\nu})\frac{dn_{\nu}}{dE_{\nu}}.\label{n}
\end{equation}
The detected event IceCube-191001A has a neutrino energy of $>0.2$ PeV, thus we only calculate the sub-PeV neutrino events in the $0.1-1$ PeV range. The real-time effective area of IceCube is described by \citep{blaufuss2021next}
\begin{equation}
A_{\rm eff}=2.058\times \left(\frac{E_{\nu}}{1\rm TeV}\right)^{0.6611}-32
\end{equation}
The number of neutrino events is calculated to be $N_{\nu}\simeq 3.5\times10^{-3}$ for $\Gamma=1.5$, taking into account particle escape from the cloud (see details in Fig. \ref{NF}).
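Equation (\ref{n}) is a simple folding of the neutrino spectrum with the real-time effective area. The sketch below shows the numerical evaluation; the neutrino spectrum is left as a purely illustrative placeholder (the actual ${\rm d}n_\nu/{\rm d}E_\nu$ comes from the pp calculation of Sec.~\ref{dp}), and the effective area is taken to be in units of m$^2$ (an assumption of this sketch).
\begin{verbatim}
import numpy as np

D    = 230.0 * 3.086e24        # 230 Mpc in cm
T_o  = 6.0 * 30.0 * 8.64e4     # outflow duration, ~6 months [s]
TeV  = 1.602                   # 1 TeV in erg
PeV  = 1.602e3                 # 1 PeV in erg

def A_eff(E):
    """Effective-area parametrization quoted in the text (assumed m^2 -> cm^2)."""
    return (2.058 * (E / TeV) ** 0.6611 - 32.0) * 1.0e4

def dn_dE(E):
    """Placeholder neutrino spectrum [1/(erg s)]; replace with the pp result."""
    return 1.0e40 * E ** (-2.0)

E = np.linspace(0.1 * PeV, 1.0 * PeV, 2000)       # 0.1-1 PeV grid
integrand = A_eff(E) * dn_dE(E)
N_nu = T_o / (4.0 * np.pi * D ** 2) * np.sum(integrand) * (E[1] - E[0])
print("N_nu ~ %.1e (value depends entirely on the placeholder spectrum)" % N_nu)
\end{verbatim}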
\begin{figure}
\includegraphics[scale=0.57]{11FN.eps}
\caption{
The energy distribution of the neutrino luminosity. The blue, red, and green lines correspond to $\Gamma=1.5$, 1.7, and 2, respectively. The solid (dotted) lines correspond to the cases with (without) the consideration of particle escape from the cloud.
}
\label{NF}
\end{figure}
\subsection{$\gamma$-ray}\label{gammarays}
The intrinsic $\gamma$-ray spectrum accompanying the neutrino emission can also be calculated with equation \ref{nf}, but high-energy $\gamma$-ray photons may be attenuated by interacting with low-energy background photons via $\gamma\gamma\to e^{+}e^{-}$. The background photons may come from the TDE, host galaxy, extragalactic background light (EBL), cosmic microwave background (CMB), and even radiation in the cloud, etc.
Consider the TDE photons first. At a distance $r_{\rm o}$ from the SMBH, the number density of TDE photons around typical energy $E_{\rm ph}\sim10$eV is estimated by
\begin{equation}
n_{\rm ph}=\frac{L}{4\pi r_{\rm o}^{2}cE_{\rm ph}},
\label{n_ph}
\end{equation}
where $L$ is the TDE radiation luminosity, which is given by \cite{stein2021tidal} for AT2019dsg.
The TDE luminosity may evolve with time, approximately estimated as following the accretion rate evolution, i.e.,
$ L\propto \dot{M}\propto \left(\frac{t}{T_{\ast}} \right)^{-5/3}$,
where $T_{\ast}\approx 0.1(\frac{R_{\ast}}{1R_{\odot}})^{3/2}(\frac{M_{\ast}}{1M_{\odot}})^{-1/2}$ yr is the minimum orbit period of the disrupted material \citep{evans1989} depending on the radius $R_{\ast}$ and the mass of the star $M_{\ast}$. The TDE luminosity decreases to about $6\%$ of the peak luminosity after $t_{\rm delay}=6$ months, i.e., $L\sim10^{43}\rm erg\ s^{-1}$.
Thus, the TDE photon density is estimated as $n_{\rm ph}\sim 10^{9}(\frac{L}{10^{43}\, \rm erg\ s^{-1}})(\frac{r_{\rm o}}{0.01\,\rm pc})^{-2} (\frac{E_{\rm ph}}{10\,\rm eV})^{-1}\,\rm cm^{-3}$.
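Both the decline factor and the photon density are simple arithmetic; a minimal sketch reproducing the $\sim6\%$ decline and $n_{\rm ph}\sim10^{9}\,\rm cm^{-3}$ quoted above is:
\begin{verbatim}
import math

yr      = 3.15e7
T_star  = 0.1 * yr                   # minimum orbital period, solar-type star [s]
t_delay = 0.5 * yr                   # ~6 months [s]
decline = (t_delay / T_star) ** (-5.0 / 3.0)   # ~0.07, i.e. a few percent

L    = 1e43                          # TDE luminosity after 6 months [erg/s]
pc   = 3.086e18
r_o  = 0.01 * pc
c    = 3.0e10
E_ph = 10.0 * 1.602e-12              # 10 eV in erg

n_ph = L / (4.0 * math.pi * r_o ** 2 * c * E_ph)
print("decline ~ %.2f, n_ph ~ %.1e cm^-3" % (decline, n_ph))   # ~10^9 cm^-3
\end{verbatim}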
The $\gamma$-rays are emitted from clouds $\sim0.01$ pc away from the SMBH, so the absorption by the TDE photon field depends on the emergent angle. We calculate the angle-averaged optical depth for the $\gamma$-rays (see appendix J of \citealt{mou2020years1}). The optical depth due to TDE photons is found to be moderate for $\gamma$-rays of tens of GeV, as presented in Fig. \ref{GF}.
Next, the absorption by the host galaxy's background light is considered. The photons are assumed to be isotropic, and we calculate the $\gamma\gamma$ absorption following \cite{aharonian2004}. For the host galaxy photon field, there are no further observations of the host galaxy 2MASS J20570298+1412165 except the infrared photometry \citep{skrutskie2006}, namely J ($1.25\rm \mu m$), H ($1.65\rm \mu m$), and K ($2.16\rm \mu m$). From these we obtain a mean luminosity in the J, H, and K bands of about $10^{42} \rm erg\ s^{-1}$. For the spectral profile we adopt the model in \cite{finke2010modeling} (the black line for redshift $z=0$ in Fig. 5 therein), normalized to the infrared luminosity. The size of the host galaxy is typically of kpc scale. We find only mild absorption beyond TeV (see Fig. \ref{GF}).
Furthermore, there is significant absorption by the EBL and CMB. Considering the spectrum of the EBL as model C in \cite{finke2010modeling}, our calculation of the EBL and CMB absorption is presented in Fig. \ref{GF}.
Both intrinsic and attenuated spectra are plotted for comparison. The absorption significantly changes the emergent spectrum, mainly due to the EBL and CMB absorption.
Notice that we have also considered the absorption within the cloud, since the high-energy $\gamma$-rays are produced there, but we find that the attenuation in the cloud is negligible. First, the shocked cloud is optically thin to the high-energy gamma-rays with respect to scattering off the thermal electrons: for a high-energy photon with $E_\gamma\sim\,\rm GeV$, the optical depth is ${\tau _{e\gamma }} \simeq {r_{\rm c}}{n_{\rm c}}{\sigma _{\rm KN,GeV}} \simeq 6\times 10^{-4} r_{\rm c,14} n_{\rm c,10}$, with $\sigma _{\rm KN,GeV} \sim \sigma _{\rm T} (\varepsilon_{\gamma}/m_e c^2)^{-1} \simeq \sigma _{\rm T}/1000$, and that of the Bethe-Heitler process is ${\tau _{\rm BH}} = {r_{\rm c}}{n_{\rm c}}{\sigma _{\rm BH}} \simeq 0.05{r_{\rm c,14}}{n_{\rm c,10}}$ for the parameters adopted in our model, i.e., cloud density $n_{\rm c,10}=n_{\rm c}/10^{10}\,\rm cm^{-3}$ and cloud radius $r_{\rm c,14}=r_{\rm c}/10^{14}\,\rm cm$. In addition to the e-$\gamma$ scattering and the Bethe-Heitler process, we next evaluate the $\gamma\gamma$ absorption due to the thermal radiation of the clouds. The shocked cloud emits free-free radiation with a temperature $T_{\rm c} \approx {m_{\rm p}}V_{\rm c}^2/3k \approx 10^{7}\left( \frac{V_{\rm c}}{7\times 10^7 \, \rm cm/s} \right)^2\,\rm K$ and, for a single cloud, a luminosity ${L_{\rm X}} \simeq kT_{\rm c} \cdot 4\pi r_{\rm c} ^3 n_{\rm c}/\max ({t_{\rm ff}},{T_{\rm cloud}})\simeq 5\times 10^{37} \,\rm erg/s$, where $t_{\rm ff}\sim 2\times 10^{4} T_{\rm c,7}^{1/2}n_{\rm c,10}^{-1}\,\rm s$ is the free-free cooling timescale and $T_{\rm cloud}\sim 1.4 \times 10^{6}\,\rm s$ is the CS crossing timescale. The optical depth of the cloud's thermal radiation field to the high-energy gamma-rays can then be estimated as
\begin{equation}
{\tau _{\gamma \gamma,c }} \sim {n_{\rm X}}{r_{\rm c}}{\sigma _{\gamma \gamma }} \sim 0.2{n_{\rm X}}{r_{\rm c}}{\sigma _{\rm T}} \sim {10^{ - 4}}n_{\rm X,7}r_{\rm c,14}\,
\end{equation}
with the most optimistic cross section, where the number density of the thermal photons can be written as ${n_{\rm X}} = \frac{{{L_{\rm X}}}}{{4\pi ck{T_{\rm c}}r_{\rm c}^2}} \simeq 1 \times {10^7}\,\rm cm^{-3}$, since the cloud is only moderately optically thin to its own radiation, with optical depth ${\tau _{e\gamma,{\rm c} }} \simeq {r_{\rm c}}{n_{\rm c}}{\sigma _{\rm T}} \simeq 0.6 r_{\rm c,14} n_{\rm c,10}$ (the contribution of other clouds to the photon number density can easily be neglected). Therefore, the opacity ($\tau _{e\gamma }, \tau _{\rm BH}, \tau _{\gamma \gamma }$) of the cloud to the high-energy gamma-rays can be neglected. In addition, considering the superposition of the free-free emission of $C_{\rm v} r_{\rm o}^2/r_{\rm c}^2 \sim 10^3$ clouds (total luminosity $\sim 10^3 L_{\rm X}\sim 5 \times 10^{40} \,\rm erg/s$), the corresponding total flux is quite low, $\sim 5 \times 10^{-15}\,\rm erg\, cm^{-2}\, s^{-1}$ at the keV energy band, which is much lower than the observational upper limit on the X-rays, even the deep upper limit of $9 \times 10^{-14}\,\rm erg\, cm^{-2}\, s^{-1}$ (0.3-10 keV) given by \emph{XMM} observations in \cite{stein2021tidal}.
\begin{figure}
\centering
\includegraphics[scale=0.55]{odf.eps}
\includegraphics[scale=0.59]{G13F.eps}
\caption{ The $\gamma\gamma$ optical depth (upper panel) and energy distribution (bottom panel) of the $\gamma$-ray emission. {\bf Upper panel:} the blue, yellow, and red lines correspond to absorption due to TDE photons, host galaxy background light, and the EBL and CMB, respectively.
{\bf Bottom panel:} the predicted gamma-ray emission spectra during the outflow-cloud interactions. The blue, red, and green lines present the spectrum for $\Gamma=1.5$, 1.7, and 2, respectively. The dotted and solid lines present the intrinsic and attenuated spectra, respectively. Also shown are the cumulative upper limits on the $\gamma$-ray flux observed by HAWC (dash-dotted black line) and Fermi-LAT (dashed line).
}
\label{GF}
\end{figure}
\subsection{Other radiation}
The BS accelerates both electrons and protons. The leptonic processes of the accelerated electrons also produce radiation. However, the ratio between the energy budgets of electrons and protons could be around $\sim 10^{-2}$ \citep{mou2020years1}, leading to a radiation luminosity from electrons of at most $\sim 10^{41}\rm erg\ s^{-1}$ in the fast-cooling case; the corresponding flux is quite low, $\sim 10^{-14}\rm erg\ cm^{-2}\ s^{-1}$, and can be neglected.
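The flux estimate is again a one-line computation (a sketch, using the 230 Mpc distance quoted in Sec.~\ref{cwo}):
\begin{verbatim}
import math

L_e  = 1e41                     # radiation luminosity from electrons [erg/s]
D    = 230.0 * 3.086e24         # 230 Mpc in cm
flux = L_e / (4.0 * math.pi * D ** 2)
print("flux ~ %.1e erg cm^-2 s^-1" % flux)   # ~1e-14, negligible
\end{verbatim}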
Moreover, the secondary electrons from $\gamma\gamma$ absorption may generate photons again via inverse Compton scattering, leading to electromagnetic cascades. As shown in Fig.~\ref{GF}, only the EBL and CMB absorption is significant, and it may result in electromagnetic cascades in the intergalactic medium. The deflection of the electrons by the intergalactic magnetic field is expected to spread out the cascade emission, which therefore contributes little to the observed flux.
\section{Results compared with observations}\label{cwo}
We summarize in Table \ref{123} the fiducial values of the model parameters used in the calculation. The neutrino luminosity is presented in Fig. \ref{NF}. According to equation \ref{n}, we obtain the expected number of 0.1-1 PeV neutrino events detected by IceCube. Without considering particle escape from the cloud, the expected number is $ 7.1\times10^{-3}$, $ 2.3\times10^{-3}$, $ 7.6\times10^{-4}$, and $3.8\times10^{-4}$ for $\Gamma=1.5$, 1.7, 1.9, and 2, respectively. If particle escape is considered, as described by equation \ref{eqescape}, the numbers change to $ 3.5\times10^{-3}$, $1.3\times10^{-3}$, $4\times10^{-4}$, and $1.9\times10^{-4}$, respectively.
Thus, for the fiducial model parameters, the expected neutrino event number is somewhat lower than the expected neutrino number of $0.008 \lesssim N_\nu \lesssim 0.76$ for AT2019dsg \citep{stein2021tidal}.
The interactions produce $\gamma$-rays in the GeV to TeV bands. Considering the host galaxy distance of 230 Mpc, photons above $\sim 100$ TeV cannot reach the Earth due to absorption by the CMB and EBL. In addition, the absorption by the host galaxy photon field, based on the infrared observations \citep{skrutskie2006}, is moderate. Finally, the $pp$ processes produce a maximum $\gamma$-ray flux of up to $\sim 10^{-13}\ \rm erg\ cm^{-2}\ s^{-1}$ for $\Gamma =1.5-2$ in the 0.1 GeV - 1 TeV band, which is lower than the present gamma-ray observational limits from Fermi/LAT and HAWC (see Fig. \ref{GF}).
\section{Conclusion and discussion}
In this work, we considered high-speed TDE outflows colliding with clouds to produce high-energy neutrinos and gamma-rays, which can explain the sub-PeV neutrino event associated with AT2019dsg. The assumed outflow velocity is $V_{\rm o}\sim 0.07\rm c$ and the kinetic luminosity is $L_{\rm kin}\sim 10^{45}\rm erg~s^{-1}$. The outflow-cloud interactions produce a BS ahead of each cloud. Particle acceleration is efficient in the BS, and the pp process would contribute to the observed high-energy neutrinos and gamma-rays. We assumed an escape parameter of $F=0.5$, which means that half of the accelerated protons escape from the BS region while the rest enter the dense cloud and participate in pp collisions. We should point out that the parameter $F$ cannot be constrained well, so its value is quite uncertain, and $F$ may take much smaller values in more realistic cosmic-ray escape models.
For the fiducial model parameters, the expected neutrino event number is relatively low compared to the observations. In order to reach the observed neutrino number, one has to invoke somewhat challenging model parameters. For instance, (1) for a higher cloud density or a larger cloud size, the escape time $\tau_{\rm es}$ becomes much longer than $t_{\rm pp,CS}$ (see equation \ref{eqescape}) and the interactions of the protons that produce 0.1-1 PeV neutrinos become more efficient; the expected number of neutrinos would then increase by a factor of $\sim 2$ (see Fig.~\ref{NF}); (2) the fraction of the outflow energy converted to protons depends on the covering factor $C_{\rm v}$ and the shock acceleration efficiency $\eta$, so the expected number of neutrinos could increase by a factor of $\sim 10 (C_{\rm v} / 0.3)(\eta / 0.3)$ for values of $C_{\rm v}$ and $\eta$ larger than the fiducial ones listed in Table~\ref{123}. Conversely, the expected neutrino number could be reduced if a lower cloud column density, smaller $C_{\rm v}$ and $\eta$, or a softer proton index is adopted. As a result, the predicted neutrino number in our model depends on the uncertainties of the model parameters, and matching the observations requires somewhat challenging parameter values.
In the above calculations, we anchored two parameters of the outflows: the kinetic luminosity $L_{\rm kin}=10^{45} \rm erg~s^{-1}$ and the outflow velocity $V_{\rm o}=0.07\rm c$.
Numerical simulations indicate that TDEs can launch powerful winds with a kinetic luminosity of $10^{44-45}$ erg s$^{-1}$ \citep{curd2019grrmhd}, or even higher \citep{dai2018unified}.
AT2019dsg also exhibits radio flares, arising fifty days post burst and lasting for more than one year \citep{stein2021tidal,cendes2021radio}. Modeling the radio flare by the outflow-CNM (circumnuclear medium) model suggests that the averaged kinetic luminosity is $10^{43}$ erg s$^{-1}$ \citep{stein2021tidal} or even lower \citep{cendes2021radio}.
However, if the radio flare originates from outflow-cloud interactions, the same scenario as in our current model, the inferred kinetic luminosity may be on the order of $10^{44}$ erg s$^{-1}$ \citep{mou2021radio}.
The radio flare and the delayed neutrino may then be explained by the same physical process.
The detected neutrino number is linearly proportional to the kinetic luminosity. For the case of $L_{\rm kin}=10^{44}\rm erg ~s^{-1}$, the modeled neutrino luminosity is presented in Fig. \ref{N44}.
The expected neutrino number is then about one order of magnitude lower than the values obtained above for $L_{\rm kin}\sim 10^{45}\rm erg ~s^{-1}$.
The outflow velocity is taken as $V_{\rm o}=0.07\rm c$, but this value is still uncertain: the radio observations suggest an outflow velocity in AT2019dsg of $V_{\rm o}=0.12\rm c$ \citep{stein2021tidal}, 0.07c \citep{cendes2021radio},
or around 0.1c \citep{mou2021radio}. If the outflow velocity is higher, the maximum energy of the protons accelerated in the BS will
also be correspondingly higher, and the peak of the neutrino luminosity will move to higher energies. Since we integrate the neutrino number over $0.1-1$ PeV, the detected neutrino number would then change. For comparison, we also plot the neutrino luminosity versus energy for the case of $L_{\rm kin}=10^{44}~\rm{erg~s^{-1}}$, $V_{\rm o}=0.12\rm c$ in Fig. \ref{N44}. On the other hand, if $V_{\rm o}$ is about 0.04c and $E_{\rm p,max}\sim 20$ PeV, the neutrino SED will peak at $0.1-1$ PeV, and the expected number of neutrinos will increase by $50\%$.
After the submission of this work, we noticed recent reports on more neutrino events associated with time-variable emission from accreting SMBHs \citep{van2021establishing,reusch2021candidate}, among which AT2019fdr is a TDE candidate in a Narrow-Line Seyfert 1 AGN, in which BLR clouds should exist. Moreover, the neutrino events lag the optical outbursts by half a year to one year (\citealt{van2021establishing}), consistent with the assumption that clouds exist at a distance of $\sim 10^{-2}$ pc from the central BH if the outflow velocity is on the order of $10^9$ cm s$^{-1}$. The outflow--cloud interaction may also contribute to the high-energy neutrino background \citep{abbasi2021icecube}.
\begin{figure}
\centering
\includegraphics[scale=0.57]{diss5f.eps}
\caption{The neutrino luminosity as function of neutrino energy for different $L_{\rm kin}$ and $V_{\rm o}$, assuming $\Gamma =1.7$. The red line is the same as in Fig.3 with fiducial values. The blue line represents the case of $L_{\rm kin} = 10^{44}\ \rm erg/s$, resulting in the expected neutrino events of $1.3\times 10^{-5}$, about one order of magnitude lower than the fiducial value case. When the velocity of outflows is 0.12c, the maximum energy of accelerated protons reaches 180 PeV. The green line represents the case of $L_{\rm kin} = 10^{44}\ \rm erg/s$ and $V_{\rm o}= 0.12c$, leading to $E_{\rm max} = 180\ \rm PeV$ and the expected neutrino events of $9\times10^{-6}$.
}
\label{N44}
\end{figure}
\section*{Acknowledgments}
We are grateful to the referee for the useful suggestions to improve the manuscript. This work is supported by the National Key Research and Development Program of China (Grants No. 2021YFA0718500, 2021YFA0718503), the NSFC (12133007, U1838103, 11622326, 11773008, 11833007, 11703022, 12003007, 11773003, and U1931201), the Fundamental Research Funds for the Central Universities (No. 2020kfyXJJS039, 2042021kf0224), and the China Manned Space Project (CMS-CSST-2021-B11).
\section*{Data Availability}
The data used in this paper were collected from the literature. The X-ray, gamma-ray, and neutrino observational data are publicly available.
\bibliographystyle{mnras}
\section{Introduction}\label{sec1}
Let $\Omega\subseteq \mathbb{R}^{n}~(n\geq 2)$ be a domain and $1<p<\infty$. In the monograph \cite{HKM} by Heinonen, Kilpel\"{a}inen, and Martio, the authors studied a nonlinear potential theory for the second-order quasilinear elliptic operator~$\dive(\mathcal{A}(x,\nabla u))$, which is called the $\mathcal{A}$-Laplace operator (or in short, the $\mathcal{A}$-Laplacian). We recall that the $\mathcal{A}$-Laplacian might be a degenerate or singular elliptic operator that satisfies some natural local regularity assumptions. In addition, it is assumed that the $\mathcal{A}$-Laplacian is $(p-1)$-homogeneous and monotone in its second variable (for details, see Assumption \ref{ass8}). Prototypes of the $\mathcal{A}$-Laplace operator are the $p$-Laplacian $\dive{\left(|\nabla u|^{p-2}\nabla u\right)}$ and the $(p,A)$-Laplacian
$$\dive{\left(|\nabla u|^{p-2}_{A}A\nabla u\right)}\triangleq
\mathrm{div}\,\left((A(x)\nabla u\cdot\nabla u)^{(p-2)/2}A(x)\nabla u\right),$$
where $A$ is a locally bounded and locally uniformly elliptic matrix (see, \cite{HKM,Pinchover,Regev,Tintarev}).
A systematic criticality theory has been developed for the $p$-Laplace operator and the~$(p,A)$-Laplace operator with a locally bounded potential in \cite{Tintarev} and \cite{Regev}, respectively. Furthermore, in \cite{Pinchover}, Pinchover and Psaradakis have extended the theory to the case of the $(p,A)$-Laplace operator with a potential in the local Morrey space. See \cite{Murata, Pinchoverlinear} for the criticality theory for the second-order linear elliptic (not necessarily symmetric) case. We refer also to Pinsky's book \cite{Pinsky}, where the author studies this topic from the probabilistic point of view. Moreover, a criticality theory for Schr\"{o}dinger operators on graphs has also been established by Keller, Pinchover, and Pogorzelski in \cite{Keller}. The theory has witnessed its applications in the works of Murata and Pinchover and their collaborators (see recent examples in \cite{Beckus, KellerHardy, MT}). For the case of generalized Schr\"{o}dinger forms, we refer to \cite{Takeda2014, Takeda2016}.
Criticality theory has applications in a number of areas of analysis. For example in spectral theory of Schr\"odinger operators \cite{Pinchoverlinear}, variational inequalities (like Hardy, Rellich, and Hardy-Sobolev-Maz'ya type inequalities) \cite{Kovarik,HSM}, and stochastic processes \cite{Pinsky}. Among the applications in PDE we mention results concerning the large time behavior of the heat kernel \cite{PinchoverGreen}, Liouville-type theorems \cite{Lioupincho}, the behavior of the minimal positive Green function \cite{PinchoverGreen2,Pinchoverlinear}, and the asymptotic behavior of positive solutions near an isolated singularity \cite{Fraas}.
The goal of the present paper is to extend the results in \cite{Pinchover,Regev,Tintarev} concerning positive solutions of the homogeneous quasilinear equation$$Q'_{p,A,V}[u]\triangleq -\dive{\left(|\nabla u|^{p-2}_{A(x)}A(x)\nabla u\right)}+V(x)|u|^{p-2}u=0\quad \mbox{ in } \Omega,$$ to the equation
$$Q'_{p,\mathcal{A},V}[u]\triangleq -\dive{\mathcal{A}(x,\nabla u)}+V(x)|u|^{p-2}u=0\quad \mbox{ in } \Omega.$$
The latter equation is the {\em local} Euler-Lagrange equation of the energy functional
$$Q_{p,\mathcal{A},V}[\vgf]\triangleq Q_{p,\mathcal{A},V}[\vgf;\Omega]\triangleq \int_{\Omega}\left(\mathcal{A}(x,\nabla \vgf)\cdot\nabla \vgf + V(x)|\vgf|^{p}\right)\,\mathrm{d}x \qquad \vgf\in C_c^{\infty}(\Omega).$$
Note that the equation $Q'_{p,\mathcal{A},V}[u]=0$ (and in particular, $Q'_{p,A,V}[u]=0$) is {\em half-linear}, that is, if $v$ is a solution of this equation, then for every $c\in\mathbb{R}$, $cv$ is also a solution.
We assume that the potential $V$ belongs to
the local Morrey space $M^{q}_{{\rm loc}}(p;\Omega)$ associated with the exponent $p$ (see Definitions~\ref{Morreydef1} and \ref{Morreydef2}), which is almost the largest class of potentials that guarantees the validity of the Harnack inequality and the H\"older continuity of solutions. The assumptions on the $\mathcal{A}$-Laplacian are as in \cite{HKM} (see Assumption \ref{ass8}). In addition, a strong convexity of $\mathcal{A}$ (Assumption \ref{ass2})
is assumed to prove certain important results in Sections~\ref{sec_eigenvalue}, \ref{criticality}, and \ref{minimal}. In fact, the local strong convexity of $\mathcal{A}$ is utilized in two different ways. One is direct (see Proposition \ref{mainlemma}). The other is indirect, i.e., via the D\'{\i}az-Sa\'{a}-type inequality (Lemma~\ref{elementary}), see Theorem \ref{maximum}.
Our main results include the existence, uniqueness, and simplicity of the principal eigenvalue of the operator~$Q'_{p,\mathcal{A},V}$ in a domain~$\omega\Subset\Omega$, a weak comparison principle, and the criticality theory for $Q'_{p,\mathcal{A},V}$. Moreover, based on a Picone-type identity and a generalized H\"older inequality (see Lemma~\ref{ass1}), two alternative proofs of Agmon-Allegretto-Piepenbrink type (AAP) theorem are given (see Lemma \ref{lem_alter} and Theorem \ref{thm_AAP}, see also \cite{Agmon, Allegretto1974}, and also \cite{Pinchover} for a short updated review on the AAP theorem). In addition, we characterize in a Lipschitz domain~$\omega\Subset\Omega$ the validity of the generalized strong/weak maximum principles and the unique solvability in $W^{1,p}_0(\gw)$ of a nonnegative solution of the Dirichlet problem $Q'_{p,\mathcal{A},V}[v]=g\geq 0$ with~$g\in L^{p'}(\omega)$ via the strict positivity of the principal eigenvalue.
The paper is organized as follows. In Section \ref{back},
we introduce
a variational Lagrangian $F$
and then obtain from $F$ the operator~$\mathcal{A}$ by virtue of \cite[Lemma 5.9]{HKM}. We establish a generalized H\"{o}lder inequality (Lemma~\ref{ass1}) which is a key result used to prove several fundamental results, and formulate the additional assumption discussed above (Assumption~\ref{ass2}). In addition, we recall the definition of the associated local Morrey spaces, and the Morrey-Adams theorem, which is an essential tool for our study. Finally, we define the notion of weak solutions of the quasilinear equation $Q'_{p,\mathcal{A},V}[u]=0$.
In Section \ref{toolbox}, we present certain a priori properties of weak solutions of the quasilinear equation $Q'_{p,\mathcal{A},V}[u]=0$, including Harnack-type inequalities, local H\"{o}lder estimate, and the Harnack convergence principle.
In Section \ref{sec_eigenvalue}, we first extend D\'{\i}az-Sa\'{a} type inequalities, and then prove the coercivity and weak lower semicontinuity of certain related functionals. We also establish a Picone-type identity. Then we show that in a domain~$\omega\Subset\Omega$, the generalized principal eigenvalue is a principal eigenvalue, that is a Dirichlet eigenvalue with a nonnegative eigenfunction. Moreover, we prove that the generalized principal eigenvalue is simple. With these preliminaries, we also study the generalized weak and strong maximum principles, the positivity of the generalized principal eigenvalue and other related properties. Furthermore, we establish a weak comparison principle by virtue of the super/sub-solution technique.
In Section \ref{AP}, we prove for our setting the corresponding AAP type theorem which turns out to be closely related to the existence of solutions of a certain nonlinear first-order equation of the divergence type. As a result, we show that the AAP theorem implies the uniqueness of the principal eigenvalue in a domain~$\omega\Subset\Omega$.
In Section \ref{criticality}, we establish a systematic criticality theory for the operator $Q'_{p,\mathcal{A},V}$ with applications to a Hardy-Sobolev-Maz'ya inequality and the $(\mathcal{A},V)$-capacity.
In Section \ref{minimal}, we study the removability of an isolated singularity. We also show that the criticality of~$Q_{p,\mathcal{A},V}$ is equivalent to the existence of a global minimal positive solution. Moreover, we prove that the existence of a minimal positive Green function, with an additional assumption in the case of~$p>n$, implies the subcriticality of $Q'_{p,\mathcal{A},V}$. Finally, we extend the results in \cite{Kovarik} and answer the question: how large can Hardy-weights be?
\section{$\mathcal{A}$-Laplacian and Morrey potentials}\label{back}
In this section, we introduce the~$\mathcal{A}$-Laplace operator. We recall the local Morrey space where our potential~$V$ lies and the Morrey-Adams theorem, both are defined and proved in \cite{Pinchover}. Finally, we define weak solutions and supersolutions of the quasilinear elliptic equation~$Q'_{p,\mathcal{A},V}[v]=g$.
Let $g_1,g_2$ be two positive functions defined in $\Omega$. We use the notation $g_1\asymp g_2$ in
$\Omega$ if there exists a positive constant $C$ such that $C^{-1}g_{2}(x)\leq g_{1}(x) \leq Cg_{2}(x)$ for all $x\in \Omega$.
\subsection{Variational Lagrangian $F$ and its gradient $\mathcal{A}$}
In this subsection, we present a variational Lagrangian $F$ which satisfies certain desired conditions. Then we define the $\mathcal{A}$-Laplacian as the divergence of the gradient of $F$.
\subsubsection{Variational Lagrangian $F$}
Following the assumptions in {\cite[Page 97]{HKM}}, we list below our structural and regularity assumptions on the variational Lagrangian $F.$
\begin{assumptions}\label{ass9}
{\em
\label{assump1}
Let~$\Omega\! \subseteq \!\mathbb{R}^{n}$ be a nonempty domain, let $F:\Omega\times \mathbb{R}^{n} \! \rightarrow \!\mathbb{R}_+$, and let $1\!<\!p\!<\!\infty$. We assume that $F$ satisfies the following conditions:
\begin{itemize}
\item {\bf Measurability:} For all~$\xi\in\mathbb{R}^{n}$, the mapping $x\mapsto F(x,\xi)$ is measurable in $\Omega$.
\item {\bf Ellipticity:} For all $\omega\Subset \Omega$ there exist $0<\kappa_\omega\leq\nu_\omega<\infty$ such that for almost all $x\in \omega$ and all $\xi\in \mathbb{R}^n$,
$\kappa_\omega|\xi|^{p}\leq F(x,\xi)\leq\nu_\omega|\xi|^{p}$.
\item {\bf Convexity and differentiability with respect to $\xi$}: For a.e.~$x\in \Omega$, the mapping $\xi\mapsto F(x,\xi)$ is strictly convex and continuously differentiable in $\mathbb{R}^n$.
\item {\bf Homogeneity:} $F(x,\lambda\xi)=|\lambda|^{p}F(x,\xi)$ for a.e.~$x\in \Omega$, all~$\lambda\in\mathbb{R}$, and all~$\xi\in\mathbb{R}^{n}$.
\end{itemize}
}
\end{assumptions}
The following is a useful inequality derived directly from the strict convexity of $F$.
\begin{lemma}[{\cite[Lemma 5.6]{HKM}}]\label{strictconvexity}
For a.e.~$x\in\Omega$ and all~$\xi_{1},\xi_{2}\in\mathbb{R}^{n}$ with~$\xi_{1}\neq\xi_{2}$, we have:
$$F(x,\xi_{1})-F(x,\xi_{2})>\nabla_{\xi}F(x,\xi_{2})\cdot(\xi_{1}-\xi_{2}).$$
\end{lemma}
\subsubsection{$\mathcal{A}$-Laplacian}
\begin{Def}
{\em
Let~$\Omega\subseteq \mathbb{R}^{n}$ be a nonempty domain and $F(x,\xi)$ satisfy Assumptions \ref{ass9}. For a.e.~$x\in \Omega$, we denote by $\mathcal{A}(x,\xi) \triangleq \nabla_{\xi}F(x,\xi)$ the classical gradient of $F(x,\xi)$ with respect to~$\xi$. The \emph{$\mathcal{A}$-Laplacian} is defined as the divergence of~$\mathcal{A}$.
}
\end{Def}
\begin{remark}
\emph{ By Euler's homogeneous function theorem, for a.e.~$x\in\omega$,
$$\mathcal{A}(x,\xi)\cdot\xi =
p F(x,\xi) \geq p\gk_\gw |\xi|^p \qquad \forall \xi\in \mathbb{R}^n .$$
Moreover, since for a.e. $x\in \Omega$ the nonnegative function
\begin{equation}\label{newformula}
\vert\xi\vert_{\mathcal{A}}=\vert\xi\vert_{\mathcal{A}(x)}\triangleq (\mathcal{A}(x,\xi)\cdot\xi)^{1/p}
\end{equation}
is positively homogeneous of degree $1$ in $\xi$, and $\{\xi \in \mathbb{R}^n \mid \vert\xi\vert_{\mathcal{A}}\leq 1\}$ is a convex set, it follows that for a.e. $x\in \Omega$,
$\vert\xi\vert_{\mathcal{A}}$
is a norm on $\mathbb{R}^n$ (see, for example, \cite[Theorem 1.9]{Simon}).}
\end{remark}
\begin{Thm}[{\cite[Lemma 5.9]{HKM}}]\label{thm_1}
Let~$\Omega\subseteq \mathbb{R}^{n}$ be a nonempty domain. For every domain $\omega\Subset\Omega$, denote $\alpha_{\omega}=\kappa_{\omega}$, $\beta_{\omega}=2^{p}\nu_{\omega}$. Then the vector-valued function~$\mathcal{A}(x,\xi): \Omega\times \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ satisfies the following conditions:
\begin{itemize}
\item {\bf Regularity:} For a.e. $x\in \Omega$, the function
$\mathcal{A}(x,\xi ): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$
is continuous with respect to $\xi$, and $x \mapsto \mathcal{A}(x,\xi)$ is Lebesgue measurable in $\Omega$ for all~$\xi\in \mathbb{R}^{n}$.
\item {\bf Homogeneity:} For all~$\lambda\in {\mathbb{R}\setminus\{0\}}$,
$\mathcal{A}(x,\lambda \xi)=\lambda\,|\lambda|^{p-2}\,\mathcal{A}(x,\xi).$
%
\item {\bf Ellipticity:} For all domains $\omega\Subset \Omega$, all $\xi \in \mathbb{R}^{n}$, and a.e. $x\in \omega$,
\begin{equation}\label{structure}
\alpha_\omega|\xi|^{p}\le\mathcal{A}(x,\xi)\cdot\xi,
\quad
|\mathcal{A}(x,\xi)|\le \beta_\omega\,|\xi|^{{p}-1}.
\end{equation}
\item {\bf Monotonicity:} For a.e.~$x\!\in\! \Omega$ and all~$\xi\!\neq \! \eta \! \in \! \mathbb{R}^{n}$,
$\big(\mathcal{A}(x,\xi)-\mathcal{A}(x,\eta)\big) \! \cdot \! (\xi-\eta)> 0.$
\end{itemize}
\end{Thm}
\begin{ass}\label{ass8}
{\em Throughout the paper we assume that $\mathcal{A}(x,\xi)=\nabla_{\xi}F(x,\xi)$, where $F$ satisfies Assumptions~\ref{ass9}. In particular, we assume that $\mathcal{A}$ satisfies all the conditions mentioned in Theorem~\ref{thm_1}.
}
\end{ass}
\subsubsection{Generalized H\"older inequality}
In the proof of the AAP Theorem (Theorem~\ref{thm_AAP}),
we use the following generalized H\"older inequality. The inequality follows similarly to the proof of \cite[Lemma 2.2]{newpicone}, where the case $\mathcal{A}=\mathcal{A}(\xi)$ is considered. Nevertheless, since the generalized H\"older inequality is a pointwise inequality with respect to $x$, the proof holds also for $\mathcal{A}=\mathcal{A}(x,\xi)$.
\begin{lemma}[Generalized H\"older inequality]\label{ass1}
Let~$p'$ be the conjugate exponent of~$1<p<\infty$. Then the following inequality holds $$\big|\mathcal{A}(x,\xi)\cdot\eta\big|
\leq\big(\mathcal{A}(x,\xi)\cdot\xi\big)^{1/p'}\big(\mathcal{A}(x,\eta)\cdot\eta\big)^{1/p}
=\vert\xi\vert_{\mathcal{A}}^{p-1}\vert\eta\vert_{\mathcal{A}}, \qquad \forall \xi,\eta\in\mathbb{R}^{n} \mbox{ and a.e. } x\in \Omega} \def\Gx{\Xi} \def\Gy{\Psi.$$
\end{lemma}
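\begin{remark}
\emph{As a simple illustration (included only as an example), in the model case of the $p$-Laplacian, $\mathcal{A}(x,\xi)=|\xi|^{p-2}\xi$ and $\vert\xi\vert_{\mathcal{A}}=|\xi|$, so the generalized H\"older inequality reduces to the Cauchy-Schwarz inequality
$$\big|\mathcal{A}(x,\xi)\cdot\eta\big|=|\xi|^{p-2}|\xi\cdot\eta|\leq|\xi|^{p-1}|\eta|
=\big(\mathcal{A}(x,\xi)\cdot\xi\big)^{1/p'}\big(\mathcal{A}(x,\eta)\cdot\eta\big)^{1/p}.$$}
\end{remark}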
\subsubsection{Local strong convexity of $|\xi|_{\mathcal{A}}^p$}
By our assumptions, for a.e. $x\in \Omega$, the function $\xi\mapsto |\xi|_{\mathcal{A}}^p$ defined by \eqref{newformula} is strictly convex. For certain important results in Sections~\ref{sec_eigenvalue}, \ref{criticality}, and \ref{minimal} we need to assume:
\begin{ass}[Local strong convexity of $|\xi|_{\mathcal{A}}^p$]\label{ass2}
{\em
We suppose that $|\xi|_{\mathcal{A}}^p$ is a locally strongly convex function with respect to~$\xi$, that is, there exists $\bar{p}\geq p$ such that for every subdomain $\gw\Subset \Omega$ there exists
a positive constant $C_\gw(\bar{p}, \mathcal{A})$ such that
$$|\xi|^{p}_{\mathcal{A}}-|\eta|^{p}_{\mathcal{A}}-p\mathcal{A}(x,\eta)\cdot(\xi-\eta)\geq C_\gw(\bar{p}, \mathcal{A}) |\xi-\eta|^{\bar{p}}_\mathcal{A} \qquad \forall \xi,\eta\in \mathbb{R}^n \mbox{ and a.e. } x\in \gw.$$
}
\end{ass}
\begin{Rem}\label{pAlaplacian}
{\em
See \cite[Lemma 3.4]{Regev} and \cite[Lemma 2.2]{Lioupincho} for a similar inequality for $Q'_{A,p,V}$, and in particular, for the~$(p,A)$-Laplacian.
Note that in \cite{Pinchover}, for $p<2$ the authors assume the local boundedness of a positive supersolution of $Q'_{A,p,V}[u]=0$ in $\Omega$ and its gradient, and use the H\"{o}lder inequality to obtain the desired result.
}
\end{Rem}
\subsubsection{Pseudo-$p$-Laplacian}
We present further examples of operators which fulfill the assumptions above.
\begin{Def}
\emph{A measurable matrix function $A:\Omega} \def\Gx{\Xi} \def\Gy{\Psi\to \mathbb{R}^{n^2}$ is called \emph{locally bounded} if for every subdomain $\omega\Subset\Omega$, there exists a positive constant~$C(\omega)$ such that,~$|A(x)\xi|\leq C(\omega)|\xi|$ for all $\xi\in\mathbb{R}^n$ and a.e. $x\in \omega$.}
\end{Def}
\begin{exa}\label{exa}
\emph{For a.e.~$x\in\Omega$ and every~$\xi =(\xi_1,\ldots,\xi_n)\in\mathbb{R}^{n}$, let
$$F(x,\xi)\triangleq \frac{1}{p}\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p},$$
where~$p\geq 2$ and the Lebesgue measurable functions locally satisfy $a_{i}\asymp 1$. }
\end{exa}
\begin{lemma}\label{pseudo}
Let~$F$ be as in Example \ref{exa}. For a.e.~$x\in\Omega$ and every~$\xi =(\xi_1,\ldots,\xi_n)\in\mathbb{R}^{n}$, we have
\begin{enumerate}
\item[$(1)$] $\nabla_{\xi}F(x,\xi)=\mathcal{A}(x,\xi)=(a_{1}(x)|\xi_{1}|^{p-2}\xi_{1},\ldots,a_{n}(x)|\xi_{n}|^{p-2}\xi_{n})$, $|\xi|_{\mathcal{A}}^{p}=\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}$;
\item[$(2)$] the operator $\mathcal{A}$ satisfies Assumptions~\ref{ass8} and \ref{ass2}.
\end{enumerate}
Furthermore,
\begin{enumerate}
\item[$(3)$]for $0\leq t\leq 1$, consider the Lagrangian $F_{t,A}\triangleq tF+((1-t)/p)|\xi|_{A}^{p}$, where $A$ is a locally bounded, symmetric, and locally uniformly positive definite matrix function. Then $\mathcal{A}_{t,A}$, the gradient of $F_{t,A}$, satisfies assumptions~\ref{ass8} and \ref{ass2}.
\end{enumerate}
\end{lemma}
\begin{remark}
\emph{If $a_{i}= 1$ for all~$i=1,2,\ldots,n$, then the operator $\mathrm{div}\,\!(\mathcal{A})$ is called the {\em pseudo-$p$-Laplacian}.}
\end{remark}
\begin{proof}[Proof of Lemma \ref{pseudo}]
Part $(1)$ is obtained by a straightforward differentiation.
$(2)$ Our proof is inspired by \cite[Lemma 4.2]{Lindqvist}. Since $\displaystyle{\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}}$ is convex with respect to $\xi$, we get by Lemma \ref{strictconvexity},
$$\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}\geq \sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}+p\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p-2}\eta_{i}(\xi_{i}-\eta_{i}),$$
for a.e.~$x\in\Omega$ and all~$\xi,\eta\in\mathbb{R}^{n}$.
Hence,
$$\sum_{i=1}^{n}a_{i}(x)\left\vert\frac{\xi_{i}+\eta_{i}}{2}\right\vert^{p}\geq \sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}+\frac{p}{2}\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p-2}\eta_{i}(\xi_{i}-\eta_{i}).$$
By Clarkson's inequality for $p\geq 2$ \cite[Theorem 4.10]{Brezis}, we obtain
$$\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}+\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}\geq 2\sum_{i=1}^{n}a_{i}(x)\left\vert\frac{\xi_{i}+\eta_{i}}{2}\right\vert^{p} +2\sum_{i=1}^{n}a_{i}(x)\left\vert\frac{\xi_{i}-\eta_{i}}{2}\right\vert^{p}.$$
Then
$$\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}\geq \sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}+ p\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p-2}\eta_{i}(\xi_{i}-\eta_{i}) +2^{1-p}\sum_{i=1}^{n}a_{i}(x)\left\vert\xi_{i}-\eta_{i}\right\vert^{p},$$ which gives Assumption \ref{ass2} for $p\geq 2$ because locally~$a_{i}\asymp 1$ for all~$i=1,2,\ldots,n$.
Moreover, on the unit Euclidean sphere in~$\mathbb{R}^{n}$, the function~$f(\xi)\triangleq\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}$ has a positive lower bound and a finite upper bound, and therefore, the local ellipticity conditions follow.
$(3)$ For all~$\xi\in\mathbb{R}^{n}$ and a.e.~$x\in\Omega$, $|\xi|_{\mathcal{A}_{t,A}}^{p}= t|\xi|_{\mathcal{A}}^{p}+(1-t)|\xi|_{A}^{p}$. Hence,
the local strong convexity and the ellipticity conditions of~$|\cdot|_{\mathcal{A}_{t,A}}^{p}$ follow from Remark \ref{pAlaplacian} and~$(2)$.
\end{proof}
\subsection{Morrey potentials}
In this subsection, we give a short review of the local Morrey space $M^{q}_{\mathrm{loc}}(p;\Omega)$, the functional space where the potential~$V$ belongs to, and recall the Morrey-Adams theorem.
\subsubsection{Local Morrey space $M^{q}_{\mathrm{loc}}(p;\Omega)$}
The following is a revised definition of the local Morrey space $M^{q}_{\mathrm{loc}}(p;\Omega)$, where $q=q(p)$.
\begin{Def}[{\cite[definitions 2.1 and 2.3]{Pinchover}}]\label{Morreydef1}{\em
Let $\omega\Subset \Omega$ be a domain and $f\in L^1_{\rm loc}(\omega)$ be a real-valued function. Then
\begin{itemize}
\item for $p<n$, we say that $f\in M^{q}(p;\omega)$ if $q>n/p$ and
$$\Vert f\Vert_{M^{q}(p;\omega)}\triangleq \sup_{\substack{y\in\gw\\0<r<\diam(\gw)}}
\frac{1}{r^{n/q'}}\int_{\omega\cap B_{r}(y)}|f|\,\mathrm{d}x<\infty,$$ where $\mathrm{diam}(\omega)$ is the diameter of~$\omega$;
\item for $p=n$, we say that~$f\in M^{q}(n;\omega)$ if $q>n$ and
$$\Vert f\Vert_{M^{q}(n;\omega)}\triangleq \sup_{\substack{y\in\gw\\0<r<\diam(\gw)}} \varphi_{q}(r)\int_{\omega\cap B_{r}(y)}|f|\,\mathrm{d}x<\infty,$$
where $\varphi_{q}(r)\triangleq \Big(\log\big(\mathrm{diam}(\omega)/r\big)\Big)^{n/q'};$
\item for $p>n$ and $q=1$, we define~$M^{q}(p;\omega)\triangleq L^{1}(\omega)$.
\end{itemize}
}
\end{Def}
\begin{Def}[{\cite[Definition 2.3]{Pinchover}}]\label{Morreydef2}{\em
For every real-valued function $f\in L^1_{\rm loc}(\Omega)$ and~$1<p<\infty$, we say that $f\in M^{q}_{{\rm loc}}(p;\Omega)$ if $f\in M^{q}(p;\omega)$ for every domain~$\omega\Subset\Omega$.
}
\end{Def}
For a more detailed discussion on Morrey spaces, see \cite{Maly,Pinchover} and references therein.
\subsubsection{Morrey-Adams theorem}
We present the Morrey-Adams theorem proved in \cite{Pinchover}, which is crucial when dealing with the potential term. See \cite{Maly,Morrey1966,Rakotoson1990,Trudinger1967} for relevant earlier results.
\begin{Thm}[{\cite[Theorem 2.4]{Pinchover}}]\label{MA_thm}
Let~$\omega\Subset\mathbb{R}^{n}$ be a domain and~$V\in M^{q}(p;\omega)$.
\begin{enumerate}
\item[$(1)$] There exists a constant~$C(n,p,q)>0$ such that for any~$\delta>0$ and all~$u\in W^{1,p}_{0}(\omega)$,
\begin{equation*}
\int_{\omega}|V||u|^{p}\,\mathrm{d}x\leq \delta\Vert\nabla u\Vert^{p}_{L^{p}(\omega;\mathbb{R}^{n})}+\frac{C(n,p,q)}{\delta^{n/(pq-n)}}\Vert V\Vert^{pq/(pq-n)}_{M^{q}(p;\omega)}\Vert u\Vert^{p}_{L^{p}(\omega)}.
\end{equation*}
\item[$(2)$] For any~$\omega'\Subset\omega$ with Lipschitz boundary, there exists $\delta_{0}$ such that for any~$0<\delta\leq \delta_{0}$ and all~$u\in W^{1,p}(\omega')$,
\begin{equation*}
\int_{\omega'}|V||u|^{p}\,\mathrm{d}x\leq \delta\Vert\nabla u\Vert^{p}_{L^{p}(\omega';\mathbb{R}^{n})}+C\left(n,p,q,\omega',\omega,\delta,\Vert V\Vert_{M^{q}(p;\omega)}\right)\Vert u\Vert^{p}_{L^{p}(\omega')}.
\end{equation*}
\end{enumerate}
\end{Thm}
\subsection{Weak solutions of $Q'_{p,\mathcal{A},V}[u]=g$}
With the preliminaries of the previous subsections in hand, we may define weak solutions of the equation~$Q'_{p,\mathcal{A},V}[u]=g$.
\begin{Def}\label{def_sol}
{\em Suppose that $\mathcal{A}$ satisfies Assumption~\ref{ass8} and~$V, g\in M^{q}_{\mathrm{loc}}(p;\Omega)$. A function~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ is a {\em (weak) solution} of the equation
\begin{equation}\label{half}
Q'_{p,\mathcal{A},V}[v]\triangleq -\dive\mathcal{A}(x,\nabla v)+V|v|^{p-2}v=g,
\end{equation}
in~$\Omega$ if for all~$\vgf \in C_{c}^{\infty}(\Omega)$,$$\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot \nabla \vgf\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v \vgf\,\mathrm{d}x=\int_{\Omega} g\vgf\,\mathrm{d}x,$$ and a \emph{supersolution} of \eqref{half} if for all nonnegative~$\vgf \in C_{c}^{\infty}(\Omega)$,$$\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot \nabla \vgf\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v \vgf\,\mathrm{d}x\geq \int_{\Omega} g\vgf\,\mathrm{d}x.$$
A supersolution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of \eqref{half} is said to be \emph{proper} if~$v$ is not a solution of \eqref{half}.
}
\end{Def}
By virtue of the Morrey-Adams theorem and an approximation argument, we obtain:
\begin{lemma}\label{lem4.11}
Suppose that $\mathcal{A}$ satisfies Assumption~\ref{ass8} and~$V\in M^{q}_{\mathrm{loc}}(p;\Omega)$.
\begin{enumerate}
\item[$(1)$] All the integrals in Definition \ref{def_sol} are well defined.
\item[$(2)$] The test function space~$C_{c}^{\infty}(\Omega)$ in Definition \ref{def_sol} can be replaced with~$W^{1,p}_{c}(\Omega)$.
\end{enumerate}
\end{lemma}
\section{Properties of weak solutions of~$Q'_{p,\mathcal{A},V}[u]=0$}\label{toolbox}
In this section, we present various properties of weak solutions of~$Q'_{p,\mathcal{A},V}[u]=0$ which are frequently used subsequently, including Harnack and weak Harnack inequalities, standard elliptic H\"{o}lder estimates, and a Harnack convergence principle.
\subsection{Harnack inequality}
By \cite[Theorem 3.14]{Maly} for~$p\!\leq \!n$ and \cite[Theorem 7.4.1]{Pucci} for~$p\!>\!n$, we have the following local Harnack inequality for nonnegative solutions of $Q'_{p,\mathcal{A},V}[u]\!=\!0$. See \cite{Trudinger,Maly,Moser,Serrin1964} for Harnack's inequalities for linear and quasilinear equations in divergence form.
\begin{Thm}
Assume that~$\mathcal{A}$ satisfies Assumption \ref{ass8} and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let~$v$ be a nonnegative solution of $Q'_{p,\mathcal{A},V}[u]=0$ in a domain $\gw\Subset \Omega$. Then for any~$\omega'\Subset\omega$,
$$\sup_{\omega'}v\leq C\inf_{\omega'} v,$$ where~$C$ is a positive constant depending only on~$n,p,q,\omega',\omega,\alpha_{\omega},\beta_{\omega},$ and $\Vert V\Vert_{M^{q}(p;\omega)}$.
\end{Thm}
\subsection{H\"{o}lder estimate}
Let $v$ be a H\"{o}lder continuous function of the order $0<\gamma\leq 1$ in $\gw$. We denote
$$[v]_{\gamma,\omega}\triangleq\sup_{x,y\in\omega,x\neq y}\frac{\big|v(x)-v(y)\big|}{|x-y|^{\gamma}}\,.$$
The H\"{o}lder continuity of solutions of $Q'_{p,\mathcal{A},V}[u]=0$ follows from \cite[Theorem 4.11]{Maly} for~$p\leq n$ and \cite[Theorem 7.4.1]{Pucci} for~$p>n$. For further regularity of solutions of quasilinear elliptic equations, see \cite{Trudinger, Maly, Pucci}. We need the following result:
\begin{Thm}
Assume that~$\mathcal{A}$ satisfies Assumption \ref{ass8} and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let~$v$ be a solution of $Q'_{p,\mathcal{A},V}[u]=0$ in a domain $\gw\Subset \Omega$. Then~$v$ is locally H\"{o}lder continuous of order $0<\gamma\leq 1$ (depending on~$n,p,q,\alpha_{\omega}$, and~$\beta_{\omega}$). Furthermore, for any~$\omega'\Subset\omega$,$$[v]_{\gamma,\omega'}\leq C\sup_{\omega}|v|,$$ where~$C$ is a positive constant depending only on~$n,p,q,\omega',\omega,\alpha_{\omega},\beta_{\omega}$, and~$\Vert V\Vert_{M^{q}(p;\omega)}$.
\end{Thm}
\subsection{Weak Harnack inequality}
If $v$ is a nonnegative supersolution of \eqref{half}, then the Harnack inequality still holds for $p>n$ by \cite[Theorem 7.4.1]{Pucci} (See also \cite{Trudinger1967}).
On the other hand, for~$p\leq n$, we have:
\begin{Thm}[{\cite[Theorem 3.13]{Maly}}]
Assume that~$\mathcal{A}$ satisfies Assumption \ref{ass8} and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let~$p\leq n$ and~$s=n(p-1)/(n-p)$. For any nonnegative supersolution~$v$ of $Q'_{p,\mathcal{A},V}[u]=0$ in a domain $\omega\Subset \Omega$, any~$\omega'\Subset\omega$, and any~$0<t<s$,
$$\Vert v\Vert_{L^{t}(\omega')}\leq C\inf_{\omega'}v,$$ where~$C$ is a positive constant depending only on~$n,p,t,\omega,\omega',$ and~$\Vert V\Vert_{M^{q}(p;\omega)}$. In particular, such a supersolution is either strictly positive in the domain $\gw$ or vanishes identically.
\end{Thm}
\subsection{Harnack convergence principle}
In this subsection, we generalize the Harnack convergence principle \cite[Proposition 2.11]{Pinchover} to our setting. See \cite[Proposition 2.7]{Giri} for a slightly more general Harnack convergence principle, in which the second-order term is also not fixed but varies along a sequence.
\begin{Def}
\emph{ By a \emph{Lipschitz exhaustion} of~$\Omega$, we mean a sequence of Lipschitz domains~$\{\omega_{i}\}_{i\in\mathbb{N}}$ satisfying for all~$i\in\mathbb{N}$,~$\omega_{i}\Subset\omega_{i+1}\Subset\Omega$ and~$\cup_{i=1}^{\infty}\omega_{i}=\Omega.$}
\end{Def}
For the existence of a Lipschitz exhaustion of~$\Omega$, see for example \cite[Proposition 8.2.1]{smooth}.
\begin{Thm}[Harnack convergence principle]\label{HCP}
Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $\{\omega_{i}\}_{i\in\mathbb{N}}$ be a Lipschitz exhaustion of~$\Omega$ and~$x_{0}\in \omega_{1}$. Assume that~$\mathcal{V}_{i}\in M^{q}(p;\omega_{i})$ converges weakly in~$M^{q}_{{\rm loc}}(p;\Omega)$ to~$\mathcal{V}\in M^{q}_{{\rm loc}}(p;\Omega)$ as~$i\rightarrow\infty$. For every~$i\in\mathbb{N}$, suppose that~$v_{i}$ is a positive solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}_{i}}[u]=0$ in~$\omega_{i}$ with~$v_{i}(x_{0})=1$. Then there exists a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ converging weakly in $W^{1,p}_{{\rm loc}}(\Omega)$ and locally uniformly in~$\Omega$ to a positive weak solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$.
\end{Thm}
\begin{proof}
We use the same approach as in the proof of \cite[Proposition 2.11]{Pinchover}.
Note our convention throughout the proof: when extracting a suitable subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$, we keep denoting the obtained subsequence by~$\{v_{i}\}_{i\in\mathbb{N}}$ without stating it.
By the local H\"{o}lder continuity, each~$v_{i}$ is continuous in~$\omega_{i}$. Fix a subdomain $\omega\Subset \Omega$. By the local Harnack inequality, $\{v_{i}\}_{i\in\mathbb{N}}$ is uniformly bounded in $\omega$. Therefore, the local H\"{o}lder continuity guarantees that $\{v_{i}\}_{i\in\mathbb{N}}$ is equicontinuous in~$\omega$. Applying the Arzel\`{a}-Ascoli theorem, we obtain a subsequence converging uniformly in $\omega$ to a positive continuous function $v$.
Now we aim to find a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ converging weakly in~$W^{1,p}(\omega)$ to a positive solution of~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\omega$. Fix $k\in\mathbb{N}$. Then for any~$\varphi\in C^{\infty}_{c}(\omega_{k})$, we have~$v_{i}|\varphi|^{p}\in W^{1,p}_{c}(\omega_{k})$ for $i>k$. Testing~$v_{i}|\varphi|^{p}$ in the definition of~$v_{i}$ being a positive weak solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}_{i}}[u]=0$ in $\omega_{k}$, we obtain
%
$$\left\Vert|\nabla v_{i}|_{\mathcal{A}}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})}\leq p\int_{\omega_{k}}|\nabla v_{i}|_{\mathcal{A}}^{p-1}|\varphi|^{p-1}v_{i}|\nabla \varphi|_{\mathcal{A}}\,\mathrm{d}x+\int_{\omega_{k}}|\mathcal{V}_{i}|v_{i}^{p}|\varphi|^{p}\,\mathrm{d}x.$$
%
Applying Young's inequality $pab\leq \varepsilon a^{p'}+\left((p-1)/\varepsilon\right)^{p-1}b^{\,p}$ to $p\!\int_{\omega_{k}}\!|\nabla v_{i}|_{\mathcal{A}}^{p-1}|\varphi|^{p-1}v_{i}|\nabla \varphi|_{\mathcal{A}}\!\,\mathrm{d}x$ with $\varepsilon\in (0,1)$, $a=|\nabla v_{i}|_{\mathcal{A}}^{p-1}|\varphi|^{p-1}$, and~$b=v_{i}|\nabla \varphi|_{\mathcal{A}}$, and the Morrey-Adams theorem (Theorem~\ref{MA_thm}) to $\int_{\omega_{k}}|\mathcal{V}_{i}|v_{i}^{p}|\varphi|^{p}\,\mathrm{d}x$, we conclude:
\begin{equation*}
(1-\varepsilon)\!\left\Vert|\nabla v_{i}|_{\mathcal{A}}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})}
\leq \!\left(\frac{p-1}{\varepsilon}\right)^{p-1}\!\!\left\Vert|\nabla \varphi|_{\mathcal{A}}v_{i}\right\Vert^{p}_{L^{p}(\omega_{k})}+\delta\left\Vert\nabla(v_{i}\varphi)\right\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}
+C\left\Vert v_{i}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})},
\end{equation*}
where $C=C\left(n,p,q,\delta,\Vert\mathcal{V}\Vert_{M^{q}(p;\omega_{k+1})}\right)$.
By virtue of the structural properties of~$\mathcal{A}$ and the frequently used inequality:
$$\big\Vert\nabla(v_{i}\varphi)\big\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}\leq 2^{p-1}\Big(\big\Vert v_{i}\nabla \varphi\big\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}+\big\Vert \varphi\nabla v_{i}\big\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}\Big),$$
we observe that for all~$i>k$ and all~$\varphi\in C^{\infty}_{c}(\omega_{k})$:
\begin{equation*}
\left((1\!-\!\varepsilon)\alpha_{\omega_{k}} \! \!- \! 2^{p-1}\delta\right)\!\left\Vert|\nabla v_{i}|\varphi\right\Vert_{L^{p}(\omega_{k})}^{p}
\!\leq \!\! \left(\!\!\left(\!\frac{p-1}{\varepsilon} \! \right)^{p-1} \!\!\beta_{\omega_{k}} \! + \! 2^{p-1}\delta \!\right)\!\!\left\Vert v_{i}|\nabla \varphi|\right\Vert_{L^{p}(\omega_{k})}^{p}
\!\! + \!C\left\Vert v_{i}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})} \!,
\end{equation*}
where~$C=C\left(n,p,q,\delta,\Vert\mathcal{V}\Vert_{M^{q}(p;\omega_{k+1})}\right)$.
Let $\delta>0$ be such that~$(1-\varepsilon)\alpha_{\omega_{k}}-2^{p-1}\delta>0$, and fix $\omega\Subset\omega'\Subset\omega_{k}$. Choose~$\varphi\in C^{\infty}_{c}(\omega_{k})$ \cite[Theorem 1.4.1]{cutoff} such that$$\supp(\varphi)\subseteq\omega',\quad0\leq \varphi\leq 1~\mbox{in}~\omega',\quad \varphi=1~\mbox{in}~\omega, \mbox{ and } |\nabla \varphi|\leq C(\omega',\omega)~\mbox{in}~\omega'.$$ Consequently, with~$C'=C\left(n,p,q,\delta,\varepsilon,\alpha_{\omega_{k}},\Vert\mathcal{V}\Vert_{M^{q}(p;\omega_{k+1})}\right)$ and~$C''=C(p,\delta,\varepsilon,\alpha_{\omega_{k}},\beta_{\omega_{k}})$, we have
\begin{eqnarray*}
\Vert\nabla v_{i}\Vert_{L^{p}(\omega;\mathbb{R}^{n})}^{p}+\Vert v_{i}\Vert_{L^{p}(\omega)}^{p}
&\leq& \Vert|\nabla v_{i}|\varphi\Vert_{L^{p}(\omega_{k})}^{p}+\Vert v_{i}\varphi\Vert_{L^{p}(\omega_{k})}^{p}\\
&\leq& C'\big\Vert v_{i}\varphi\big\Vert^{p}_{L^{p}(\omega_{k})}
+ C''\big\Vert v_{i}|\nabla \varphi|\big\Vert_{L^{p}(\omega_{k})}^{p}
\leq \tilde{C},
\end{eqnarray*}
where the positive constant $\tilde{C}$ does not depend on~$v_{i}$.
So~$\{v_{i}\}_{i\in\mathbb{N}}$ is bounded in~$W^{1,p}(\omega)$. Hence, there exists a subsequence converging weakly in~$W^{1,p}(\omega)$ to the nonnegative function~$v\in W^{1,p}(\omega)$ with~$v(x_{0})=1$ because~$\{v_{i}\}_{i\in\mathbb{N}}$ converges uniformly to~$v$ in~$\omega$.
The task is now to show that~$v$ is a positive solution of~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in any subdomain $\tilde{\omega}\Subset\omega$ with~$x_{0}\in\tilde{\omega}$. For any~$\psi\in C^{\infty}_{c}(\tilde{\omega})$, we have
\begin{eqnarray*}
&&\left\vert\int_{\tilde{\omega}}\mathcal{V}_{i}v_{i}^{p-1}\psi\,\mathrm{d}x-\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x\right\vert\\
&=&\left\vert\int_{\tilde{\omega}}\mathcal{V}_{i}v_{i}^{p-1}\psi\,\mathrm{d}x-\int_{\tilde{\omega}}\mathcal{V}_{i}v^{p-1}\psi\,\mathrm{d}x+\int_{\tilde{\omega}}\mathcal{V}_{i}v^{p-1}\psi\,\mathrm{d}x-\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x\right\vert\\
&\leq& \int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert \vert v_{i}^{p-1}-v^{p-1}\vert|\psi|\,\mathrm{d}x+\left\vert\int_{\tilde{\omega}}(\mathcal{V}_{i}-\mathcal{V}) v^{p-1}\psi\,\mathrm{d}x\right\vert\\
&\leq& C(\psi)\int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert \vert v_{i}^{p-1}-v^{p-1}\vert\,\mathrm{d}x+\left\vert\int_{\tilde{\omega}}(\mathcal{V}_{i}-\mathcal{V}) v^{p-1}\psi\,\mathrm{d}x\right\vert.
\end{eqnarray*}
The sequence~$\{v_{i}\}_{i\in\mathbb{N}}$ is uniformly bounded by Harnack's inequality in~$\tilde{\omega}$. The limit function~$v$ is continuous in~$\omega$ and hence bounded in~$\tilde{\omega}$. The function~$f(t)\triangleq t^{p-1}$ is uniformly continuous on~$[0,L]$ for any~$L>0$. Then~$\{v_{i}^{p-1}\}_{i\in\mathbb{N}}$ converges uniformly in~$\tilde{\omega}$ to~$v^{p-1}$ as~$\{v_{i}\}_{i\in\mathbb{N}}$ converges uniformly to~$v$. Furthermore, by a standard finite covering argument, because~$\mathcal{V}_{i}$ converges weakly to~$\mathcal{V}$ in~$M^{q}_{{\rm loc}}(p;\Omega)$, we infer that~$\int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert\,\mathrm{d}x$ is bounded with respect to~$i$. Hence, $$\int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert \vert v_{i}^{p-1}-v^{p-1}\vert\,\mathrm{d}x\to 0 \qquad \mbox{ as } i\to \infty.$$
Moreover, by the weak convergence of $\{\mathcal{V}_{i}\}_{i\in\mathbb{N}}$ to
$\mathcal{V}$, it follows that $$ \int_{\tilde{\omega}}(\mathcal{V}_{i}-\mathcal{V}) v^{p-1}\psi\,\mathrm{d}x\to 0 \qquad \mbox{ as } i\to\infty.$$
Consequently, it follows that
\begin{equation}\label{potentialconvergence}
\lim_{i\rightarrow\infty}\int_{\tilde{\omega}}\mathcal{V}_{i}v_{i}^{p-1}\psi\,\mathrm{d}x=\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x.
\end{equation}
{\bf Claim:} The sequence $\{\mathcal{A}(x,\nabla v_{i})\}$ converges weakly in~$L^{p'}(\tilde{\omega};\mathbb{R}^{n})$ to
$ \mathcal{A}(x,\nabla v)$. This will imply that for any~$\psi\in C^{\infty}_{c}(\tilde{\omega})$ we have
$$\int_{\tilde{\omega}}\mathcal{A}(x,\nabla v)\cdot\nabla \psi\,\mathrm{d}x+\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x=\lim_{i\to\infty}\int_{\tilde{\omega}}\left(\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi+\mathcal{V}_{i}v_{i}^{p-1}\psi\right)\,\mathrm{d}x=0.$$ In other words, $v$ is a nonnegative solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}$.
To this end, choose~$\psi\in C^{\infty}_{c}(\omega)$ \cite[Theorem 1.4.1]{cutoff} such that$$\supp(\psi)\subseteq\omega,\quad0\leq \psi\leq 1~\mbox{in}~\omega,\quad \psi=1~\mbox{in}~\tilde{\omega}, \mbox{ and } |\nabla \psi|\leq C(\tilde{\omega},\omega)~\mbox{in}~\omega.$$ Testing~$\psi(v_{i}-v)$ in the definition of~$v_{i}$ being a solution of~$Q'_{p,\mathcal{A},\mathcal{V}_{i}}[w]=0$ in~$\omega_{i}$, we get$$\int_{\omega}\psi\mathcal{A}(x,\nabla v_{i})\cdot \nabla(v_{i}-v)\,\mathrm{d}x=-\int_{\omega}(v_{i}-v)\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi\,\mathrm{d}x-\int_{\omega}\mathcal{V}_{i}v_{i}^{p-1}\psi(v_{i}-v)\,\mathrm{d}x.$$
We claim that
\begin{equation}\label{ineq7}
\int_{\omega}\psi\mathcal{A}(x,\nabla v_{i})\cdot \nabla(v_{i}-v)\,\mathrm{d}x\rightarrow 0\mbox{ as } i\rightarrow \infty.
\end{equation}
As in the proof of \eqref{potentialconvergence}, $\int_{\omega}\mathcal{V}_{i}v_{i}^{p-1}\psi(v_{i}-v)\,\mathrm{d}x \to 0$ as $i\rightarrow\infty$.
In addition,
\begin{eqnarray*}
\left\vert-\int_{\omega}(v_{i}-v)\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi\,\mathrm{d}x\right\vert
&\leq& \beta_{\omega}\int_{\omega}|\nabla v_{i}|^{p-1}|(v_{i}-v)\nabla \psi|\,\mathrm{d}x\\
&\leq& \beta_{\omega}\Big(\int_{\omega}|(v_{i}-v)\nabla \psi|^{p}\,\mathrm{d}x\Big)^{1/p}\Vert\nabla v_{i}\Vert^{p/p'}_{L^{p}(\omega;\mathbb{R}^{n})}\\
&\leq &C\left(\beta_{\omega},\omega,\tilde{\omega}, \psi\right)\Vert v_{i}-v\Vert_{L^{p}(\omega)}\Vert\nabla v_{i}\Vert^{p/p'}_{L^{p}(\omega;\mathbb{R}^{n})}.
\end{eqnarray*}
Because the norms $\Vert\nabla v_{i}\Vert^{{p}/{p'}}_{L^{p}(\omega;\mathbb{R}^{n})}$ are uniformly bounded for all~$i\in\mathbb{N}$, and~$v_{i}$ converges to~$v$ uniformly in~$\omega$ as~$i \to \infty$,
we get $$\left\vert-\int_{\omega}(v_{i}-v)\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi\,\mathrm{d}x\right\vert \to 0 \mbox{ as } i\to \infty.$$
\begin{comment}
For every~$X,Y\in\mathbb{R}^{n},n\geq 2$,
\begin{multline}
\big(|X|^{p-2}_{A}AX-|Y|^{p-2}_{A}AY\big)\cdot(X-Y)\\
=|X|^{p}_{A}-|X|^{p-2}_{A}AX\cdot Y+|Y|^{p}_{A}-|Y|^{p-2}_{A}AY\cdot X\\
\geq |X|^{p}_{A}-|X|^{p-1}_{A}|Y|_{A}+|Y|^{p}_{A}-|Y|^{p-1}_{A}|X|_{A}\\
=(|X|^{p-1}_{A}-|Y|^{p-1}_{A})(|X|_{A}-|Y|_{A})\geq 0.
\end{multline}
\end{comment}
It follows that
\begin{eqnarray*}
0\leq \mathcal{I}_{i}&\triangleq&\int_{\tilde{\omega}}\big(\mathcal{A}(x,\nabla v_{i})-\mathcal{A}(x,\nabla v)\big)\cdot(\nabla v_{i}-\nabla v)\,\mathrm{d}x\\
&\leq&\int_{\omega}\psi\big(\mathcal{A}(x,\nabla v_{i})-\mathcal{A}(x,\nabla v)\big)\cdot(\nabla v_{i}-\nabla v)\,\mathrm{d}x \to 0 \mbox{ as } i\to \infty,
\end{eqnarray*}
which is derived from \eqref{ineq7} and the weak convergence of $\{\nabla v_{i}\}_{i\in\mathbb{N}}$ to~$\nabla v$.
Hence, $\lim_{i\rightarrow\infty}\mathcal{I}_{i}=0$. By means of~\cite[Lemma 3.73]{HKM}, we obtain in~$L^{p'}(\tilde{\omega};\mathbb{R}^{n})$,
$$\mathcal{A}(x,\nabla v_{i})\rightharpoonup \mathcal{A}(x,\nabla v)
\qquad \mbox{ as } i\to \infty.$$
Hence, $v$ is a positive solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}$ satisfying $v(x_{0})=1$.
In conclusion, for any~$\tilde{\omega}\Subset\omega\Subset\Omega$ with~$x_{0}\in \tilde{\omega}$, there exists a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ converging weakly and locally uniformly in~$\tilde{\omega}$ to a positive weak solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}$. Using a standard diagonal argument, we may extract a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ which converges weakly in~$W^{1,p}(\omega_{i})$ for all~$i\in\mathbb{N}$ and locally uniformly in~$\Omega$ to a positive weak solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$.
\end{proof}
\begin{comment}
\subsection{Gradient estimate}
In the sequel, we will need for the case $1<p<2$ the local boundedness of the modulus of the gradient of a solution of \eqref{half}. The following theorem is a consequence of \red{\cite[Theorem~1.1]{DM} and} \cite[Theorem~5.3]{Lieberman}.
\red{Please change the formulation of the gradient estimate using the result in \cite{DM}, and prove that this estimate implies that solutions are in fact $C^1$.}
\begin{Thm}\label{localbound}
Let $1<p<2$, $\mathcal{A}\triangleq(\mathcal{A}^{1},\mathcal{A}^{2},\ldots,\mathcal{A}^{n})$ satisfy \red{Assumption~\ref{ass8}} and $V\in L^{\infty}_{{\rm loc}}(\Omega)$. Suppose further that there exist positive constants $\theta\leq 1,\mu,\Lambda,\Lambda_{1}$, and a nonnegative constant~$\iota\leq 1$ such that for all~$(x,y,\eta)\in\Omega\times\Omega\times(\mathbb{R}^{n}\setminus\{0\})$ and all~$\xi\in\mathbb{R}^{n}$,
$$\frac{\partial \mathcal{A}^{i}}{\partial \eta^{j}}(x,\eta)\xi_{i}\xi_{j}\geq \mu(\iota+|\eta|)^{p-2}|\xi|^{2};$$
$$\left|\frac{\partial \mathcal{A}}{\partial \eta}(x,\eta)\right|\leq \Lambda(\iota+|\eta|)^{p-2};$$
$$|\mathcal{A}(x,\eta)-\mathcal{A}(y,\eta)|\leq \Lambda_{1}(1+|\eta|)^{p-1}|x-y|^{\theta}.$$
Then there exists a positive constant~$\gamma'=\gamma'\left(p,n,\theta,\frac{\Lambda}{\mu}\right)$ such that any locally bounded weak solution~$u$ of \eqref{half} is in~$C^{1,\gamma'}_{{\rm loc}}(\Omega)$. In particular, the modulus of the gradient of such a solution is locally bounded.
\end{Thm}
\end{comment}
\section{Generalized principal eigenvalue}\label{sec_eigenvalue}
Throughout the present section we consider solutions in a fixed domain $\omega\Subset \Omega$. First, by virtue of the weak lower semicontinuity as well as the coercivity of certain functionals related to the functional $Q_{p,\mathcal{A},V}$, we prove that the generalized principal eigenvalue of the operator $Q'_{p,\mathcal{A},V}$ in $\omega$ is, in fact, a principal eigenvalue of $Q'_{p,\mathcal{A},V}$. Moreover, the principal eigenvalue is simple, which is proved by virtue of the Picone-type identity (Lemma \ref{Picone}). After that, we show in Theorem \ref{maximum} (together with Theorem~\ref{complement}) that the following properties are equivalent: the positivity of the principal eigenvalue, the validity of the generalized weak or strong maximum principles, and the unique solvability of the Dirichlet problem $Q'_{p,\mathcal{A},V}[u]=g$ in~$W^{1,p}_{0}(\omega)$. We also establish a weak comparison principle, which is of core importance in Section~\ref{minimal}. See \cite{Pinchovergp} for more on the generalized principal eigenvalue.
\subsection{D\'{\i}az-Sa\'a type inequalities}
In this subsection, we generalize the D\'{\i}az-Sa\'a type inequalities as a counterpart of \cite[Lemma 3.3]{Pinchover};
see also \cite{Anane1987, Diaz, Lindqvist} for related results. To this end, we impose Assumption \ref{ass2}, which concerns the local strong convexity of $|\cdot|_{\mathcal{A}}^p$. The D\'{\i}az-Sa\'a type inequalities are used to prove
the uniqueness of solutions of two Dirichlet problems (see Theorems~\ref{maximum} and \ref{5proposition}). However, this assumption is not required in Lemma \ref{newDiaz}, and hence not in Theorem \ref{5proposition}.
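For orientation, we recall the classical model case, which is purely illustrative and is not used below: for the $p$-Laplacian, that is, $\mathcal{A}(x,\xi)=|\xi|^{p-2}\xi$, and for $V_{1}=V_{2}=0$, the inequality of D\'{\i}az and Sa\'a states that
$$\int_{\omega}\left(\frac{-\Delta_{p}w_{1}}{w_{1}^{p-1}}-\frac{-\Delta_{p}w_{2}}{w_{2}^{p-1}}\right)\left(w_{1}^{p}-w_{2}^{p}\right)\,\mathrm{d}x\geq 0$$
for positive functions $w_{1},w_{2}$ with sufficient regularity and integrability, and equality forces $\nabla w_{1}/w_{1}=\nabla w_{2}/w_{2}$. Lemma~\ref{elementary} below provides a quantitative analogue of this fact for the operator $Q'_{p,\mathcal{A},V}$.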
\begin{Def}
{\em Assume that $\mathcal{A}$ satisfies Assumption \ref{ass8}, and let $V\in M_{\rm loc}^{q}(p;\Omega)$. Let $\omega\Subset\Omega$ be a subdomain.
A real number~$\lambda$ is called an \emph{eigenvalue with an eigenfunction}~$v$ of the Dirichlet eigenvalue problem
\begin{equation}\label{evp}
\begin{cases}
%
Q'_{p,\mathcal{A},V}[u]=\lambda|u|^{p-2}u&\text{in~$\omega$},\\
%
u=0& \text{on~$\partial\omega$},
\end{cases}
\end{equation}
if~$v\in W^{1,p}_{0}(\omega)\setminus\{0\}$ is a solution of the equation $Q'_{p,\mathcal{A},V}[u]=\lambda |u|^{p-2}u$ in~$\omega$.
}
\end{Def}
\begin{lem}[D\'{\i}az-Sa\'a type inequalities]\label{elementary}
Assume that~$\mathcal{A}$ satisfies Assumption~\ref{ass8} and Assumption \ref{ass2}. Let~$\omega\Subset\Omega$ be a domain and~$g_{i},V_{i}\in M^{q}(p;\omega)$, where~$i=1,2$. There exist an exponent~$\bar{p}\geq p$ and a positive constant~$C(\bar{p},\mathcal{A},\omega)$ for which the following conclusions hold.
\begin{enumerate}
\item[$(1)$] Let $w_{i}\in W^{1,p}_{0}(\omega)\setminus\{0\}$ be nonnegative solutions of $Q'_{p,\mathcal{A},V_{i}}[u]=g_{i}$ in $\gw$, where $i=1,2$, and let~$w_{i,h}\triangleq w_{i}+h$, where~$h$ is a positive constant. Then
\begin{multline*}
I_{h,g_{1},g_{2},w_{1},w_{2}}
\!\!\triangleq \!\!\int_{\omega}\!\!\left(\!\frac{g_{1} \! - \! V_{1}w_{1}^{p-1}}{w_{1,h}^{p-1}}-
\frac{g_{2}\!-\!V_{2}w_{2}^{p-1}}{w_{2,h}^{p-1}}\!\right)\!\!(w^{p}_{1,h}-w^{p}_{2,h})\!\,\mathrm{d}x
\!\geq \! C(\bar p,\mathcal A,\gw)L_{h,w_{1},w_{2}},
\end{multline*}
where
$$L_{h,w_{1},w_{2}}\triangleq\int_{\omega}(w^{p}_{1,h}+w^{p}_{2,h})\left|\frac{\nabla w_{1,h}}{w_{1,h}}-\frac{\nabla w_{2,h}}{w_{2,h}}\right|_{\mathcal{A}}^{\bar{p}}\,\mathrm{d}x.$$
\item[$(2)$] Let $w_{\lambda}$ and $w_{\mu}$ be nonnegative eigenfunctions of the operators $Q'_{p,\mathcal{A},V_{1}}$ and $Q'_{p,\mathcal{A},V_{2}}$ in $\omega$ with eigenvalues $\lambda$ and $\mu$, respectively. Then
$$I_{0,g_{\lambda},g_{\mu},w_{\lambda},w_{\mu}}=\int_{\omega}\big((\lambda-\mu)-(V_{1}-V_{2})\big)(w_{\lambda}^{p}-w_{\mu}^{p})\,\mathrm{d}x\geq
C(\bar p,\mathcal A,\gw)L_{0,w_{\lambda},w_{\mu}}.$$
\end{enumerate}
\end{lem}
\begin{proof}
$(1)$ Let~$\psi_{1,h}\triangleq(w^{p}_{1,h}-w^{p}_{2,h})w_{1,h}^{1-p}$. Then~$\psi_{1,h}\in W^{1,p}_{0}(\omega)$. It follows that
\begin{equation*}
\int_{\omega}\mathcal{A}(x,\nabla w_{1})\cdot \nabla \psi_{1,h}\,\mathrm{d}x+\int_{\omega}V_{1}|w_{1}|^{p-2}w_{1}\psi_{1,h}\,\mathrm{d}x=\int_{\omega}g_{1}\psi_{1,h}\,\mathrm{d}x.
\end{equation*}
We thus get
\begin{eqnarray*}\label{w1h}
&&\int_{\omega}(w^{p}_{1,h}-w^{p}_{2,h})\left\vert\frac{\nabla w_{1,h}}{w_{1,h}}\right\vert_{\mathcal{A}}^{p}\,\mathrm{d}x
-p\int_{\omega}w_{2,h}^{p}\mathcal{A}\left(x,\frac{\nabla w_{1,h}}{w_{1,h}}\right)\cdot\left(\frac{\nabla w_{2,h}}{w_{2,h}}-\frac{\nabla w_{1,h}}{w_{1,h}}\right) \!\! \,\mathrm{d}x\\
&=&\int_{\omega}\frac{g_{1}-V_{1}w_{1}^{p-1}}{w_{1,h}^{p-1}}(w^{p}_{1,h}-w^{p}_{2,h})
\,\mathrm{d}x.
\end{eqnarray*}
Similarly, we see that
\begin{eqnarray*}\label{w2h}
&&\int_{\omega}(w^{p}_{2,h}-w^{p}_{1,h})\left\vert\frac{\nabla w_{2,h}}{w_{2,h}}\right\vert_{\mathcal{A}}^{p}\,\mathrm{d}x
-p\int_{\omega}w_{1,h}^{p}\mathcal{A}\left(x,\frac{\nabla w_{2,h}}{w_{2,h}}\right)\cdot\left(\frac{\nabla w_{1,h}}{w_{1,h}}-\frac{\nabla w_{2,h}}{w_{2,h}}\right)\!\!\,\mathrm{d}x\\
&=&\int_{\omega}\frac{g_{2}-V_{2}w_{2}^{p-1}}{w_{2,h}^{p-1}}(w^{p}_{2,h}-w^{p}_{1,h})\,\mathrm{d}x.
\end{eqnarray*}
Adding the two previously derived equalities yields
\begin{eqnarray*}
& &I_{h,g_{1},g_{2},w_{1},w_{2}}
\!=\!\int_{\omega}\!\!w^{p}_{1,h} \! \left(\left\vert\frac{\nabla w_{1,h}}{w_{1,h}}\right\vert_{\mathcal{A}}^{p}\!\!-\!\left\vert\frac{\nabla w_{2,h}}{w_{2,h}}\right\vert_{\mathcal{A}}^{p}
\!-\! p\mathcal{A}\!\!\left(x,\frac{\nabla w_{2,h}}{w_{2,h}}\right)\!\!\cdot\!\!\left(\frac{\nabla w_{1,h}}{w_{1,h}}-\frac{\nabla w_{2,h}}{w_{2,h}}\right) \! \right)\!\!\,\mathrm{d}x\\[2mm]
&+& \!\!\int_{\omega}w^{p}_{2,h}\left(\left\vert\frac{\nabla w_{2,h}}{w_{2,h}}\right\vert_{\mathcal{A}}^{p}-\left\vert\frac{\nabla w_{1,h}}{w_{1,h}}\right\vert_{\mathcal{A}}^{p}-p\mathcal{A}\left(x,\frac{\nabla w_{1,h}}{w_{1,h}}\right)\cdot\left(\frac{\nabla w_{2,h}}{w_{2,h}}-\frac{\nabla w_{1,h}}{w_{1,h}}\right)\right)\!\!\,\mathrm{d}x.
\end{eqnarray*}
Applying the local strong convexity of $\mathcal{A}$ (Assumption~\ref{ass2}), we obtain the conclusion $(1)$.
$(2)$ Using part $(1)$, we have
\begin{eqnarray*}
&&\left\vert\left((\lambda-V_{1})\left(\frac{w_{\lambda}}{w_{\lambda,h}}\right)^{p-1}-(\mu-V_{2})\left(\frac{w_{\mu}}{w_{\mu,h}}\right)^{p-1}\right)(w_{\lambda,h}^{p}-w_{\mu,h}^{p})\right\vert\\
&\leq& \left(|\lambda-V_{1}|+|\mu-V_{2}|\right)\left((w_{\lambda}+1)^{p}+(w_{\mu}+1)^{p}\right)\\
&\leq& 2^{p-1}\left(|\lambda-V_{1}|+|\mu-V_{2}|\right)(w_{\lambda}^{p}+w_{\mu}^{p}+2)\in L^{1}(\omega).
\end{eqnarray*}
On the other hand, for a.e.~$x\in\omega$, we have
\begin{eqnarray*}
&&\lim_{h\rightarrow 0}\left((\lambda-V_{1})\left(\frac{w_{\lambda}}{w_{\lambda,h}}\right)^{p-1}-(\mu-V_{2})\left(\frac{w_{\mu}}{w_{\mu,h}}\right)^{p-1}\right)(w_{\lambda,h}^{p}-w_{\mu,h}^{p})\\
&=&(\lambda-\mu-V_{1}+V_{2})(w_{\lambda}^{p}-w_{\mu}^{p}).
\end{eqnarray*}
Hence, the dominated convergence theorem and Fatou's lemma imply $(2)$.
\end{proof}
\begin{lemma}\label{newDiaz}
Assume that $\omega\Subset \Omega$ is a bounded Lipschitz domain, $\mathcal{A}$ satisfies Assumption~\ref{ass8}, and $g_{i},V_{i}\in M^{q}(p;\omega)$ for $i=1,2$. Let $w_{i}\in W^{1,p}(\omega)$, $i=1,2$, be positive solutions of the equations $Q'_{p,\mathcal{A},V_{i}}[w]=g_{i}$ in $\omega$ which are bounded away from zero in~$\omega$ and satisfy $w_{1}=w_{2}>0$ on~$\partial\omega$ in the trace sense.
Then
$$I_{0,g_{1},g_{2},w_{1},w_{2}}=\int_{\omega}\left(\left(\frac{g_{1}}{w_{1}^{p-1}}-\frac{g_{2}}{w_{2}^{p-1}}\right)-(V_{1}-V_{2})\right)(w_{1}^{p}-w_{2}^{p})\,\mathrm{d}x
\geq 0,$$ and~$I_{0,g_{1},g_{2},w_{1},w_{2}}=0$ if and only if $\nabla w_{1}/w_{1}=\nabla w_{2}/w_{2}$.
\end{lemma}
\begin{proof}
Letting~$h=0$, we see at once that the lemma follows from the proof of part $(1)$ of Lemma \ref{elementary} and Lemma \ref{strictconvexity} (without assuming local strong convexity).
\end{proof}
\begin{comment}
The following lemma is a direct corollary of \cite[Theorem 3.23]{HKM} which will be used only when we prove that the principal eigenvalue is isolated. This lemma is a weak counterpart of \cite[Lemma 3.4]{Pinchover}. We do not know whether such a conclusion in our setting holds if~$\mathcal{V}$ is nontrivial.
\begin{lem}
Let~$\mathcal{V}=0$. For any supersolution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$,~$v^{-}\in W^{1,p}_{{\rm loc}}(\Omega)$ is a subsolution of the same equation.
\end{lem}
\end{comment}
\subsection{Weak lower semicontinuity and coercivity}
In this subsection, we study the weak lower semicontinuity and coercivity of certain functionals related to the functional $Q_{p,\mathcal{A},V}$. See also \cite[Section 8.2]{Evans}.
\begin{Def}
{\em
Let~$(X,\Vert\cdot\Vert_{X})$ be a Banach space. A functional $J:X\to\mathbb{R}\cup\{\infty\}$ is said to be {\em coercive} if
$J[u] \to \infty\mbox{ as }\Vert u\Vert_{X} \to \infty.$
\noindent The functional~$J$ is said to be {\em (sequentially) weakly lower semicontinuous} if $$J[u] \leq \liminf_{k\to\infty} J[u_{k}] \qquad \mbox{ whenever }u_{k}\rightharpoonup u.$$}
\end{Def}
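As a purely illustrative example (not used in what follows), consider the model case $\mathcal{A}(x,\xi)=|\xi|^{p-2}\xi$ and $V=0$ on a bounded domain $\omega$. The functional
$$u\mapsto \int_{\omega}|\nabla u|^{p}\,\mathrm{d}x$$
is convex and strongly continuous on $W^{1,p}(\omega)$, and hence weakly lower semicontinuous there, while the Poincar\'{e} inequality shows that it is coercive on $W^{1,p}_{0}(\omega)$. The theorems below provide analogous statements for general $\mathcal{A}$ and for potentials $V$ in the relevant Morrey spaces.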
\begin{notation}
\emph{For a domain~$\omega\subseteq \Omega$,~$\mathcal{A}$ satisfying Assumption~\ref{ass8}, and $V\in M^{q}_{\rm loc} (p;\Omega)$, let $Q_{p,\mathcal{A},V}[\varphi;\omega]$ be the functional on $C^{\infty}_{c}(\omega)$ given by $$\varphi\mapsto \int_{\omega}\Big(\vert\nabla \varphi\vert_{\mathcal{A}}^{p}+V\vert \varphi\vert^{p}\Big)\,\mathrm{d}x.$$ When~$\omega=\Omega$, we write $Q_{p,\mathcal{A},V}[\varphi]\triangleq Q_{p,\mathcal{A},V}[\varphi;\Omega].$}
\end{notation}
The next four theorems can be proved by standard arguments similar to the proofs of \cite[Propositions 3.6 and 3.7]{Pinchover}, and therefore their proofs are omitted. We state them as four separate theorems for the sake of clarity.
\begin{Thm}\label{ThmJ}
Consider the domains~$\omega\Subset\omega'\Subset \Omega$, where~$\omega$ is Lipschitz, let $\mathcal{A}$ satisfy Assumption \ref{ass8}, and let $g,\mathcal{V}\in M^{q}(p;\omega')$. Then the functional
$$\bar{J}:W^{1,p}(\omega)\rightarrow\mathbb{R}\cup\{\infty\},\quad \bar{J}[u]\triangleq Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]-\int_{\omega}g\vert u\vert\,\mathrm{d}x,$$ is weakly lower semicontinuous in~$W^{1,p}(\omega)$.
\end{Thm}
\begin{Thm}\label{ThmJ1}
Consider the domains~$\omega\Subset\omega'\Subset \Omega$, and let $\mathcal{A}$ satisfy Assumption \ref{ass8}, $\mathcal{V}\in M^{q}(p;\omega)$, and~$g\in L^{p'}(\omega)$. Then the functional
$$J:W^{1,p}_{0}(\omega)\rightarrow\mathbb{R}\cup\{\infty\},\quad J[u]\triangleq Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]-\int_{\omega}gu\,\mathrm{d}x,$$ is weakly lower semicontinuous in~$W^{1,p}_{0}(\omega)$.
\end{Thm}
\begin{Thm}
Consider the domains~$\omega\Subset\omega'\Subset \Omega$, where~$\omega$ is a Lipschitz domain. Let $\mathcal{A}$ satisfy Assumption \ref{ass8} and $g, \mathcal{V}\in M^{q}(p;\omega')$, where~$\mathcal{V}$ is nonnegative. For any~$f\in W^{1,p}(\omega)$, the functional~$\bar{J}$ is coercive in $$\mathbf{A}\triangleq\{u\in W^{1,p}(\omega):u-f\in W^{1,p}_{0}(\omega)\}.$$
\end{Thm}
\begin{Thm}\label{thm-coercive}
Consider a domain~$\omega\Subset\Omega$, and let $\mathcal{A}$ satisfy Assumption \ref{ass8}, $\mathcal{V}\in M^{q}(p;\omega)$, and~$g\in L^{p'}(\omega)$. If for some~$\varepsilon>0$ and all~$u\in W^{1,p}_{0}(\omega)$ we have,
$$Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]\geq\varepsilon\Vert u\Vert^{p}_{L^{p}(\omega)},$$
then~$J$ is coercive in~$W^{1,p}_{0}(\omega)$.
\end{Thm}
\subsection{Picone identity}
This subsection concerns a Picone-type identity for $Q_{p,\mathcal{A},V}$. Picone's identities for the $(p,A)$-Laplacian and the $p$-Laplacian are crucial tools in \cite{Regev, Tintarev} (see also \cite{Regev1, Regev2}). In the present work, it will be used to give an alternative and direct proof (without Assumption~\ref{ass2}) of the AAP type theorem (see Lemma~\ref{lem_alter}), and in Theorem~\ref{newthm}.
\begin{lem}[{cf. \cite[ Lemma 2.2]{newpicone}}]\label{RL}
Let~$\mathcal{A}$ satisfy Assumption \ref{ass8}, and define
$$L(u,v)\triangleq |\nabla u|_{\mathcal{A}}^{p}+(p-1)\frac{u^{p}}{v^{p}}|\nabla v|^{p}_{\mathcal{A}}-p\frac{u^{p-1}}{v^{p-1}}\mathcal{A}(x,\nabla v)\cdot\nabla u,$$
and
$$R(u,v) \triangleq |\nabla u|_{\mathcal{A}}^{p}- \mathcal{A}(x,\nabla v)
\cdot\nabla\left(\frac{u^{p}}{v^{p-1}}\right),$$
where the functions~$u\in W^{1,p}_{{\rm loc}}(\Omega)$ and~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ are respectively nonnegative and positive with
$u^{p}/v^{p-1}\in W^{1,p}_{{\rm loc}}(\Omega)$ such that the product rule for $u^{p}/v^{p-1}$ holds.
Then
$$L(u,v)(x)=R(u,v)(x) \qquad \mbox{for a.e.~$x\in \Omega$}.$$
Furthermore,
we have~$L(u,v) \geq 0$ a.e. in $\Omega$ and $L(u,v)=0$ a.e. in $\Omega$ if and only if $u=kv$ for some constant~$k\geq 0$.
\end{lem}
\begin{remark}
\emph{The lemma concerns pointwise equality and inequality. Therefore, the proof in \cite[ Lemma 2.2]{newpicone} applies to our more general case where $\mathcal{A}$ depends also on $x$. Hence, the proof is omitted.}
\end{remark}
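For orientation only, in the model case of the $p$-Laplacian, that is, $\mathcal{A}(x,\xi)=|\xi|^{p-2}\xi$ so that $|\xi|_{\mathcal{A}}=|\xi|$, the expression $L(u,v)$ reduces to the classical Picone term
$$L(u,v)=|\nabla u|^{p}+(p-1)\frac{u^{p}}{v^{p}}|\nabla v|^{p}-p\frac{u^{p-1}}{v^{p-1}}|\nabla v|^{p-2}\nabla v\cdot\nabla u,$$
whose pointwise nonnegativity follows from the Cauchy-Schwarz inequality $\nabla v\cdot\nabla u\leq |\nabla v||\nabla u|$ together with Young's inequality $pab\leq (p-1)a^{p'}+b^{p}$ applied to $a=(u/v)^{p-1}|\nabla v|^{p-1}$ and $b=|\nabla u|$.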
\begin{lem}[Picone-type identity]\label{Picone}
Let $\mathcal{A}$ satisfy Assumption \ref{ass8} and $V\!\in\!M^{q}_{{\rm loc}}(p;\Omega)$. For any positive
solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega$, and any nonnegative function~$u\in W^{1,p}_{c}(\Omega)$ with $u^{p}/v^{p-1}\in W^{1,p}_{c}(\Omega)$ such that the product rule for $u^{p}/v^{p-1}$ holds, we have
$$Q_{p,\mathcal{A},V}[u]=\int_{\Omega} L(u,v)(x)\,\mathrm{d}x.$$ If instead,~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ is either a positive subsolution or a positive supersolution, and all the other conditions are satisfied, then respectively,
$$Q_{p,\mathcal{A},V}[u]\leq\int_{\Omega} L(u,v)(x)\,\mathrm{d}x, \quad
\mbox{or} \quad Q_{p,\mathcal{A},V}[u]\geq\int_{\Omega} L(u,v)(x)\,\mathrm{d}x.$$
\end{lem}
\begin{proof}
The proof is similar to that of \cite[Proposition 3.3]{Regev}, and therefore it is omitted.
\end{proof}
\begin{lem}\label{lem_alter}
Let~$\mathcal{A}$ satisfy Assumption \ref{ass8}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
\begin{enumerate}
\item[$(1)$] For any positive
solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[\psi]=0$ in~$\Omega$ and any nonnegative function~$u\in W^{1,p}_{c}(\Omega)$ such that~$u^{p}/v^{p-1}\in W^{1,p}_{c}(\Omega)$ and the product rule for~$u^{p}/v^{p-1}$ holds, if~$vw$ satisfies the product rule for~$w\triangleq u/v$, then
$$Q_{p,\mathcal{A},V}[vw]=\int_{\Omega} \left(|v\nabla w+w\nabla v|^{p}_{\mathcal{A}}-w^{p}|\nabla v|^{p}_{\mathcal{A}}-pw^{p-1}v\mathcal{A}(x,\nabla v)\cdot\nabla w\right)\,\mathrm{d}x.$$
\item[$(2)$] For a positive subsolution or a positive supersolution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[\psi]=0$ in~$\Omega$ and any nonnegative function~$u\in W^{1,p}_{c}(\Omega)$ such that~$u^{p}/v^{p-1}\in W^{1,p}_{c}(\Omega)$ and the product rule for~$u^{p}/v^{p-1}$ holds, if~$vw$ satisfies the product rule for~$w\triangleq u/v$, then, respectively,
%
$$Q_{p,\mathcal{A},V}[vw]\leq\int_{\Omega}\left(|v\nabla w+w\nabla v|^{p}_{\mathcal{A}}-w^{p}|\nabla v|^{p}_{\mathcal{A}}-pw^{p-1}v\mathcal{A}(x,\nabla v)\cdot\nabla w \right) \,\mathrm{d}x,$$ or
$$Q_{p,\mathcal{A},V}[vw]\geq\int_{\Omega}\left(|v\nabla w+w\nabla v|^{p}_{\mathcal{A}}-w^{p}|\nabla v|^{p}_{\mathcal{A}}-pw^{p-1}v\mathcal{A}(x,\nabla v)\cdot\nabla w \right) \,\mathrm{d}x.$$
\item[$(3)$]If $v\in W^{1,p}_{{\rm loc}}(\Omega)$ is either a positive
solution or a positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$, then the functional
$Q_{p,\mathcal{A},V}$ is nonnegative on $W^{1,p}_{c}(\Omega)$.
\end{enumerate}
\end{lem}
\begin{Rem}
\emph{The third part of the lemma gives an alternative proof of~$(2)\Rightarrow (1)$ and~$(3)\Rightarrow (1)$ of the AAP type theorem (Theorem \ref{thm_AAP}).}
\end{Rem}
\begin{proof}[Proof of Lemma~\ref{lem_alter}]
For the first two parts of the lemma, we apply the product rule directly in the final equality/inequalities of Lemma \ref{Picone}. The third part follows from the first two parts, the strict convexity of the function $|\cdot|^{p}_{\mathcal{A}}$, and an approximation argument. For details, see \cite[Theorem 5.2]{Regev} and \cite[Theorem 2.3]{Tintarev}.
\end{proof}
\subsection{Principal eigenvalues in domains~$\omega\Subset\Omega$}\label{eigenvalueunique}
\begin{Def}{\em
Let~$\mathcal{A}$ satisfy Assumption \ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. The \emph{generalized principal eigenvalue} of $Q'_{p,\mathcal{A},V}$ in a domain $\omega \subseteq \Omega$
is defined by
$$\lambda_{1}=\lambda_{1}(Q_{p,\mathcal{A},V};\omega)\triangleq\inf_{u\in C^{\infty}_{c}(\omega) \setminus\{0\}}\frac{Q_{p,\mathcal{A},V}[u;\omega]}{\Vert u\Vert_{L^{p}(\omega)}^{p}}\,.$$}
\end{Def}
\begin{Rem}\label{rem_lambda1}{\em
It follows that for a domain $\omega\Subset\Omega$ we have
\begin{equation}\label{eq_pev}
\lambda_{1}(Q_{p,\mathcal{A},V};\omega) = \inf_{u\in W^{1,p}_{0}(\omega)\setminus\{0\}}\frac{Q_{p,\mathcal{A},V}[u;\omega]}{\Vert u\Vert_{L^{p}(\omega)}^{p}}\,.
\end{equation}
}
\end{Rem}
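The following two elementary observations are included only for orientation. First, since $Q_{p,\mathcal{A},V+c}[u;\omega]=Q_{p,\mathcal{A},V}[u;\omega]+c\Vert u\Vert_{L^{p}(\omega)}^{p}$ for every constant $c\in\mathbb{R}$, the generalized principal eigenvalue shifts accordingly:
$$\lambda_{1}(Q_{p,\mathcal{A},V+c};\omega)=\lambda_{1}(Q_{p,\mathcal{A},V};\omega)+c.$$
Second, in the model case $\mathcal{A}(x,\xi)=|\xi|^{p-2}\xi$ and $V=0$, the quantity $\lambda_{1}(Q_{p,\mathcal{A},0};\omega)$ is the usual first Dirichlet eigenvalue of the $p$-Laplacian in $\omega$; for instance, for $p=2$ and $\omega=(0,\pi)^{n}$ it equals $n$, with eigenfunction $\prod_{j=1}^{n}\sin x_{j}$.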
\begin{lemma}\label{easylemma}
Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption \ref{ass8}, and let $V\in M^{q}(p;\omega)$. Every eigenvalue of \eqref{evp} is greater than or equal to $\lambda_{1}$.
\end{lemma}
\begin{proof}
Let~$\lambda$ be an eigenvalue of \eqref{evp} with eigenfunction~$v$. Testing the equation with~$v$ itself gives $Q_{p,\mathcal{A},V}[v;\omega]=\lambda\Vert v\Vert_{L^{p}(\omega)}^{p}$, and the conclusion follows from \eqref{eq_pev}.
\end{proof}
\begin{Def}
{\em A \emph{principal eigenvalue} of \eqref{evp} is an eigenvalue with a nonnegative eigenfunction, which is called a \emph{principal eigenfunction}.}
\end{Def}
We first state a useful lemma.
\begin{lemma}\label{functionalcv}
Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. For every domain~$\omega\Subset\Omega$, if~$u_{k}\rightarrow u$ as~$k\rightarrow\infty$ in~$W^{1,p}_{0}(\omega)$, then
$\displaystyle{\lim_{k\rightarrow\infty}}Q_{p,\mathcal{A},V}[u_{k}]=Q_{p,\mathcal{A},V}[u]$.
\end{lemma}
\begin{proof}
By \cite[Lemma 5.23]{HKM}, we get~$\lim_{k\rightarrow\infty}\int_{\omega}|\nabla u_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x=\int_{\omega}|\nabla u|_{\mathcal{A}}^{p}\,\mathrm{d}x$. The elementary inequality
$$|x^{p}-y^{p}|\leq p|x-y|(x^{p-1}+y^{p-1}) \qquad \forall x,y\geq0,$$
the H\"{o}lder inequality, and the Morrey-Adams theorem, imply
\begin{eqnarray*}
\left\vert\int_{\omega}\!V(|u_{k}|^{p}-|u|^{p})\!\,\mathrm{d}x \right\vert&\!\!\leq\! \!& \int_{\omega}\!|V|||u_{k}|^{p}-|u|^{p}|\!\,\mathrm{d}x
\!\leq p\!\int_{\omega}\!|V||u_{k}-u|||u_{k}|^{p-1}+|u|^{p-1}|\!\,\mathrm{d}x\\
&\!\!\leq\!\!& C(p)\!\left(\int_{\omega}\!\!|V||u_{k}\!-\! u|^{p}\!\,\mathrm{d}x\!\!\right)^{\!1/p}\!\!\left(\int_{\omega}\!|V||u_{k}|^{p} \!+\! |V||u|^{p}\!\,\mathrm{d}x\!\right)^{\!1/p'}\!
\underset{ k\rightarrow\infty}{\rightarrow 0}.
\end{eqnarray*}
Hence, the desired convergence follows.
\end{proof}
Now we prove that in every domain~$\omega\Subset\Omega$, the generalized principal eigenvalue is a principal and simple eigenvalue, whose uniqueness is proved in Corollary \ref{newuniqueness}.
\begin{Thm}\label{principaleigenvalue}
Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption~\ref{ass8},
and let $V\in M^{q}(p;\omega)$.
\begin{enumerate}
\item[$(1)$] The generalized principal eigenvalue is a principal eigenvalue of the operator~$Q'_{p,\mathcal{A},V}$.
\item[$(2)$] The principal eigenvalue is simple, i.e., for any two eigenfunctions $u$ and~$v$ associated with the eigenvalue $\gl_1$, we have $u=cv$ for some~$c\in\mathbb{R}$.
\end{enumerate}
\end{Thm}
\begin{proof}[Proof of Theorem \ref{principaleigenvalue}]
$(1)$ Applying the Morrey-Adams theorem (Theorem~\ref{MA_thm}) with the positive number~$\delta=\alpha_{\omega}$ and the ellipticity condition \eqref{structure}, we obtain that
$$\lambda_{1}\geq -C(n,p,q)\alpha_{\omega}^{-n/(pq-n)}\Vert V\Vert^{pq/(pq-n)}_{M^{q}(p;\omega)}>-\infty.$$
%
For any~$\varepsilon>0$, letting~$\mathcal{V}=V-\lambda_{1}+\varepsilon$, we immediately see that for all~$u\in W^{1,p}_{0}(\omega)$, $$Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]\geq \varepsilon\Vert u\Vert_{L^{p}(\omega)}^{p}.$$
Therefore, the functional~$Q_{p,\mathcal{A},V-\lambda_{1}+\varepsilon}[\;\cdot\; ;\omega]$ is coercive and weakly lower semicontinuous in $W^{1,p}_{0}(\omega)$, and hence also in~$W^{1,p}_{0}(\omega)\cap\{\Vert u\Vert_{L^{p}(\omega)}=1\}$. Consequently, the infimum in \eqref{eq_pev}
is attained in~$ W^{1,p}_{0}(\omega)\setminus\{0\}$.
Let~$v\in W^{1,p}_{0}(\omega)\setminus\{0\}$ be a minimizer of \eqref{eq_pev}.
By standard variational calculus techniques, we conclude that the minimizer $v\in W^{1,p}_{0}(\omega)$ satisfies the equation $Q'_{p,\mathcal{A},V}[u]=\lambda_1|u|^{p-2}u$ in the weak sense. Note that~$|v|\in W^{1,p}_{0}(\omega)$. In addition, almost everywhere in~$\omega$, we have~$\big|\nabla(|v|)\big|=|\nabla v|$ and~$\big|\nabla(|v|)\big|_{\mathcal{A}}=|\nabla v|_{\mathcal{A}}.$ Thus~$|v|$ is also a minimizer, and therefore, it satisfies the equation $Q'_{p,\mathcal{A},V}[u]=\lambda_1|u|^{p-2}u$ in the weak sense. So~$\lambda_{1}$ is a principal eigenvalue. The Harnack inequality and H\"{o}lder estimates guarantee that~$|v|$ is strictly positive and continuous in~$\omega$. Therefore, we may assume that $v>0$.
$(2)$ The proof is inspired by \cite[Theorem 2.1]{Regev1}. Let $v,u\in W^{1,p}_{0}(\omega)$ be, respectively, a positive principal eigenfunction and any eigenfunction associated with~$\lambda_{1}$. It suffices to show~$u=cv$ for some~$c\in\mathbb{R}$.
By part $(1)$ we may assume that $u>0$ in~$\omega$. Let~$\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C^{\infty}_{c}(\omega)$ be a nonnegative sequence approximating $u$ in~$W^{1,p}_{0}(\omega)$ and a.e. in~$\omega$. Then the product rule for~$\varphi_{k}^{p}/v^{p-1}$ holds for all~$k\in\mathbb{N}$. By Lemma \ref{RL}, we get, for all~$k\in\mathbb{N}$,
$$\int_{\omega} L(\varphi_{k},v)(x)\,\mathrm{d}x=\int_{\omega}|\nabla \varphi_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x-\int_{\omega}\mathcal{A}(x,\nabla v)
\cdot\nabla\left(\frac{\varphi_{k}^{p}}{v^{p-1}}\right)\,\mathrm{d}x.$$
Since~$\varphi_{k}^{p}/v^{p-1}\in W^{1,p}_{c}(\omega)$, we obtain
$$\int_{\omega}\mathcal{A}(x,\nabla v)\cdot\nabla\left(\frac{\varphi_{k}^{p}}{v^{p-1}}\right)\,\mathrm{d}x+\int_{\omega}(V-\lambda_{1})v^{p-1}\frac{\varphi_{k}^{p}}{v^{p-1}}\,\mathrm{d}x=0.$$
It follows that~$Q_{p,\mathcal{A},V-\lambda_{1}}[\varphi_{k};\gw] =\int_{\omega} L(\varphi_{k},v)(x)\,\mathrm{d}x.$
By Fatou's lemma and Lemma \ref{functionalcv}, we obtain
\begin{eqnarray*}
0 & \leq &\int_{\omega} L(u,v)(x)\,\mathrm{d}x
\leq \int_{\omega}\liminf_{k\rightarrow\infty}L(\varphi_{k},v)(x)\,\mathrm{d}x \leq
\liminf_{k\rightarrow\infty}\int_{\omega}L(\varphi_{k},v)(x)\,\mathrm{d}x\\
&=&\liminf_{k\rightarrow\infty}Q_{p,\mathcal{A},V-\lambda_{1}}[\varphi_{k};\gw]
=Q_{p,\mathcal{A},V-\lambda_{1}}[u;\gw]=0.
\end{eqnarray*}
Lemma \ref{RL} and the connectedness of $\gw$ imply that $u=cv$ in $\gw$ for some~$c>0$.
\end{proof}
\subsection{Positivity of the principal eigenvalues}\label{localtheory}
In this subsection, we consider positivity features of the operator $Q'_{p,\mathcal{A},V}$ in a {\em Lipschitz} domain
$\omega\Subset \Omega$. In particular, we study the relationship between the validity of the generalized strong/weak maximum principles, the existence of a proper positive supersolution, the unique solvability in $W^{1,p}_{0}(\omega)$ of the nonnegative Dirichlet problem $Q'_{p,\mathcal{A},V}[u]=g \geq 0$ in $\omega$, and the positivity of the principal eigenvalue.
\begin{Def}
\emph{Let $\omega$ be a bounded Lipschitz domain. A function $v\in W^{1,p}(\omega)$ is said to be \emph{nonnegative} on~$\partial\omega$ if $v^{-}\in W^{1,p}_{0}(\omega)$. A function~$v$ is said to be \emph{zero} on~$\partial\omega$ if $v\in W^{1,p}_{0}(\omega)$.}
\end{Def}
\begin{Def}
\emph{ Let $\omega\Subset\Omega$ be a Lipschitz domain,~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}(p;\omega)$.
\begin{itemize}
\item The operator~$Q'_{p,\mathcal{A},V}$ is said to satisfy the \emph{generalized weak maximum principle in $\gw$} if every solution~$v \in W^{1,p}(\omega)$ of the equation $Q'_{p,\mathcal{A},V}[u]=g$ in $\gw$ with $0\leq g\in L^{p'}(\omega)$ and $v\geq 0$ on~$\partial\omega$ is nonnegative in~$\omega$;
\item the operator~$Q'_{p,\mathcal{A},V}$ satisfies the \emph{generalized strong maximum principle in $\gw$} if any such a solution $v$ is either the zero function or strictly positive in~$\omega$.
\end{itemize}}
\end{Def}
Under Assumption~\ref{ass2}, by Theorem \ref{complement}, all the assertions in the following theorem are in fact equivalent, even though we cannot prove this completely at this point.
\begin{Thm}\label{maximum}
Let~$\omega\Subset\Omega$, where~$\omega$ is a Lipschitz domain,~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and $V\in M^{q}(p;\omega)$. Consider the following assertions:
\begin{enumerate}
\item[$(1)$] The operator~$Q'_{p,\mathcal{A},V}$ satisfies the generalized weak maximum principle in~$\omega$.
\item[$(2)$] The operator~$Q'_{p,\mathcal{A},V}$ satisfies the generalized strong maximum principle in~$\omega$.
\item[$(3)$] The principal eigenvalue $\lambda_{1} =\lambda_{1}(Q_{p,\mathcal{A},V};\omega)$ is positive.
\item[$(4)$] The equation $Q'_{p,\mathcal{A},V}[u]=0$ has a proper positive supersolution in~$W^{1,p}_{0}(\omega)$.
\item[$(4')$] The equation $Q'_{p,\mathcal{A},V}[u]=0$ has a proper positive supersolution in~$W^{1,p}(\omega)$.
\item[$(5)$] For any nonnegative~$g\in L^{p'}(\omega)$, there exists a nonnegative solution $v\in W^{1,p}_{0}(\omega)$ of the equation~$Q'_{p,\mathcal{A},V}[u]=g$ in~$\omega$ which is either zero or positive.
\end{enumerate}
Then $(1)\Leftrightarrow (2)\Leftrightarrow (3)\Rightarrow (4)\Rightarrow (4')$, and~$(3)\Rightarrow (5)\Rightarrow (4).$
\medskip
Furthermore,
\begin{enumerate}
\item[$(6)$] If Assumption~\ref{ass2} is satisfied and $\gl_1>0$, then the solution in $(5)$ is unique.
\end{enumerate}
\begin{comment}
Assume that$$\lambda_{1}\triangleq\inf_{u\in W^{1,p}_{0}(\omega)\setminus\{0\}}\frac{Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]}{\Vert u\Vert_{L^{p}(\omega)}^{p}}>0.$$ Then for~$0\leq g\in L^{p'}(\omega)$, the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[v]=g$ has a nonnegative solution. Any such a solution is either strictly positive or the zero function.
\end{comment}
\end{Thm}
\begin{proof}
$(1)\Rightarrow (2)$ The generalized weak maximum principle implies that any solution~$v$ of $Q'_{p,\mathcal{A},V}[u]=g$ with $g\geq 0$, which is nonnegative on~$\partial\omega$, is nonnegative in~$\omega$. So, $v$ is a nonnegative supersolution of \eqref{half}. The weak Harnack inequality implies that either $v>0$ or $v=0$ in $\omega$.
$(2)\Rightarrow (3)$ Suppose that~$\lambda_{1}\leq 0$, and let $v>0$ be a corresponding principal eigenfunction. By the homogeneity, the function $w=-v$ satisfies $Q'_{p,\mathcal{A},V}[w]=\lambda_{1}|w|^{p-2}w\geq 0$ in~$\omega$ and $w=0$ on $\partial\omega$ in the weak sense, but this contradicts the generalized strong maximum principle.
$(3)\Rightarrow (1)$ Let~$v$ be a solution of $Q'_{p,\mathcal{A},V}[u]=g$ with $g\geq 0$ and $v\geq 0$ on~$\partial\omega$. Suppose that $v^{-}\neq 0$. Testing~$v^{-}$ in the definition of a solution of $Q'_{p,\mathcal{A},V}[u]=g\geq 0$, we get $$Q_{p,\mathcal{A},V}[v^{-};\omega]=\int_{\{x\,\in\,\omega\,:\,v\,<\,0\}}gv\,\mathrm{d}x\leq 0.$$
Therefore, $\lambda_{1}\leq 0$, which contradicts the assumption.
$(3)\Rightarrow (4)$ Since $\lambda_{1}>0$, its principal eigenfunction is a proper positive supersolution of \eqref{half} in~$\omega$.
$(4)\Rightarrow (4')$ This implication follows from~$W^{1,p}_{0}(\omega)\subseteq W^{1,p}(\omega)$.
$(3)\Rightarrow (5)$ By Theorems \ref{ThmJ1} and \ref{thm-coercive}, the functional
$J[u]= Q_{p,\mathcal{A},V}[u;\omega]- p\int_{\omega}gu\,\mathrm{d}x $
is weakly lower semicontinuous and coercive in~$W^{1,p}_{0}(\omega)$ for $g\in L^{p'}(\gw)$. Therefore, the functional~$J$ has a minimizer in~$W^{1,p}_{0}(\omega)$ (see for example \cite[Theorem 1.2]{Struwe}). Consequently, the corresponding equation~$Q'_{p,\mathcal{A},V}[u]=g$ has a weak solution $v_{1}\in W^{1,p}_{0}(\omega)$. Note that~$(3)\Rightarrow (2)$. Therefore, the solution~$v_{1}$ is either zero or positive in~$\omega$.
$(5)\Rightarrow (4)$ Use $(5)$ with~$g = 1$ to obtain a proper positive supersolution.
$(6)$ Assume now that Assumption~\ref{ass2} is satisfied and $\lambda_{1}>0$, and let us prove that the solution~$v_{1}$ obtained in $(5)$ is unique. If~$v_{1}=0$, then~$g=0$, and testing any other nonnegative solution $v\in W^{1,p}_{0}(\omega)$ in the equation gives $Q_{p,\mathcal{A},V}[v;\omega]=0$; since $\lambda_{1}>0$, this yields $v=0=v_{1}$.
Assume now that~$v_1>0$. Let~$v_{2}\in W^{1,p}_{0}(\omega)$ be any other positive solution. By part $(1)$ of Lemma~\ref{elementary} with~$g_{i}=g,V_{i}=V$ and $i=1,2$, we conclude
\begin{equation*}
\int_{\omega}V\!\!\left(\left(\frac{v_{1}}{v_{1,h}}\right)^{p-1}\!\!\! -
\!\!\left(\frac{v_{2}}{v_{2,h}}\right)^{p-1}\!\right)\!\!\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\!\!\,\mathrm{d}x
\leq \! \int_{\omega} \! g\left(\!\frac{1}{v_{1,h}^{p-1}}-\frac{1}{v_{2,h}^{p-1}} \!\right)\!\!
\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\!\!\,\mathrm{d}x
\leq 0.
\end{equation*}
We note that $$\lim_{h\rightarrow 0}V\left(\left(\frac{v_{1}}{v_{1,h}}\right)^{p-1}-\left(\frac{v_{2}}{v_{2,h}}\right)^{p-1}\right)\left(v_{1,h}^{p}-v_{2,h}^{p}\right)=0,$$
and
\begin{eqnarray*}
\left\vert V\left(\left(\frac{v_{1}}{v_{1,h}}\right)^{\!p-1}\!\!-\left(\frac{v_{2}}{v_{2,h}}\right)^{\!p-1}\right)\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\right\vert
\leq 2|V|\left(\left(v_{1}+1\right)^{p}+\left(v_{2}+1\right)^{p}\right)
\in L^{1}(\omega).
\end{eqnarray*}
It follows that$$\lim_{h\rightarrow 0}\int_{\omega}g\left(\frac{1}{v_{1,h}^{p-1}}-\frac{1}{v_{2,h}^{p-1}}\right)\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\,\mathrm{d}x=0.$$
By Fatou's lemma, and Lemma~\ref{elementary}, we infer that $L_{0,v_1,v_2}\!=\!0$. Hence, $v_{2}\!=\!v_{1}$ in~$\omega$.
\begin{comment}
Moreover,~$v$ is a nonnegative supersolution of the equation~\ref{half}. Assume that~$v(x_{0})>0$ and~$v(x_{1})=0$. Let~$\omega'$ containing~$x_{1}$ and~$x_{0}$ be a domain compactly included in~$\omega$. By virtue of the Harnack inequality($p\leq n$) or the weak Harnack inequality($p>n$), it must be that~$v\equiv0$ in~$\omega'$. Contradiction! So~$v$ is either strictly positive or the zero function.
\end{comment}
\end{proof}
\subsection{Weak comparison principle}\label{WCP}
\subsubsection{Super/sub-solution technique}
The following two results can be obtained by arguments similar to those of \cite[Lemma 5.1 and Proposition 5.2]{Pinchover}. We first state a weak comparison principle under the assumption that the potential is nonnegative.
\begin{lem}\label{5lemma}
Let $\omega\Subset\Omega$ be a Lipschitz domain, let $\mathcal{A}$ satisfy Assumption \ref{ass8}, and let $g,\mathcal{V}\in M^{q}(p;\omega)$, where $\mathcal{V}$ is nonnegative. For any subsolution~$v_{1}$ and any supersolution~$v_{2}$ of the equation $Q'_{p,\mathcal{A},\mathcal{V}}[u]=g$ in $\omega$ with $v_{1},v_{2}\in W^{1,p}(\omega)$ satisfying $\left(v_{2}-v_{1}\right)^{-}\in W^{1,p}_{0}(\omega)$, we have
$v_{1}\leq v_{2}\quad \mbox{in}~\omega$.
\end{lem}
\begin{proof} Testing the integral inequalities for the subsolution~$v_{1}$ and the supersolution~$v_{2}$ with ~$\left(v_{2}-v_{1}\right)^{-}$ and then subtracting one from the other, we arrive at
\begin{equation*}
\int_{\omega}\left(\mathcal{A}(x,\nabla v_{1})-\mathcal{A}(x,\nabla v_{2})\right)\cdot\nabla\left((v_{2}-v_{1})^-\right)\,\mathrm{d}x
+\int_{\omega}\mathcal{V}v_{1,2}\left(v_{2}-v_{1}\right)^{-}\,\mathrm{d}x\leq 0,
\end{equation*}
where~$v_{1,2}\triangleq|v_{1}|^{p-2}v_{1}-|v_{2}|^{p-2}v_{2}$.
It follows that
\begin{equation*}
\int_{\{v_{2}<v_{1}\}}\left(\mathcal{A}(x,\nabla v_{1})-\mathcal{A}(x,\nabla v_{2})\right)\cdot\left(\nabla v_{1}-\nabla v_{2}\right)\,\mathrm{d}x\\
+\int_{\{v_{2}<v_{1}\}}\mathcal{V}v_{1,2}\left(v_{1}-v_{2}\right)\,\mathrm{d}x\leq 0.
\end{equation*}
Since the above two terms are nonnegative, the two integrals are zero. Hence,
$\nabla (v_{2} - v_{1})^-\!=\!0$ a.e. in $\omega$. Therefore, the negative part of $v_{2} -v_{1}$ is a constant a.e. in $\omega$. Hence, Poincar\'{e}'s inequality implies
$(v_{2}\! - \! v_{1})^{-} \! = \! 0$ a.e. in $\omega$. Namely, $v_{1}\leq v_{2}$ a.e. in $\omega$.
\end{proof}
In order to establish a weak comparison principle when $V$ is not necessarily nonnegative, we need the following result which is of independent interest.
\begin{Thm}[Super/sub-solution technique]\label{5proposition}
Let~$\omega\Subset\Omega$ be a Lipschitz domain, let~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $g,V\in M^{q}(p;\omega)$, where~$g$ is nonnegative a.e.~in~$\omega$. We assume that~$f,\varphi,\psi\in W^{1,p}(\omega)\cap C(\bar{\omega}),$ where~$f\geq 0$ a.e. in~$\omega$, and
\[\begin{cases}
%
Q'_{p,\mathcal{A},V}[\psi]\leq g\leq Q'_{p,\mathcal{A},V}[\varphi]&\text{in~$\omega$ in the weak sense,}\\
%
\psi\leq f\leq \varphi& \text{on~$\partial\omega$,}\\
0\leq \psi\leq\varphi&\text{in~$\omega$.}
\end{cases}\]
Then there exists a nonnegative function $u\in W^{1,p}(\omega)\cap C(\bar{\omega})$ satisfying
\[\begin{cases}
%
Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\
%
u=f& \text{on~$\partial\omega$,}
\end{cases}\]
and $\psi\leq u\leq \varphi$ in~$\omega$.
Moreover, if $f>0$ a.e. on~$\partial\omega$, then the above boundary value problem has a unique positive solution.
\end{Thm}
\begin{proof}
Set$$\mathcal{K}\triangleq\left\{v\in W^{1,p}(\omega)\cap C(\bar{\omega}):0\leq \psi\leq v\leq \varphi \mbox{ in } \omega\right\}.$$
For every~$x\in\omega$ and~$v\in\mathcal{K}$, let $G(x,v)\triangleq g(x)+2V^{-}(x)v(x)^{p-1}$. Then~$G\in M^{q}(p;\omega)$ and~$G\geq 0$ a.e. in~$\omega$.
%
Let the functionals
$J$,$\bar{J}:W^{1,p}(\omega)\rightarrow \mathbb{R}\cup\{\infty\}$, be as in Theorems \ref{ThmJ} and \ref{ThmJ1} with~$|V|$ and~$G(x,v)$ as the potential and the right hand side, respectively. Choose a sequence~$\{u_{k}\}_{k\in\mathbb{N}}$ in
$$\mathbf{A}\triangleq\{u\in W^{1,p}(\omega):u=f \mbox{ on }\partial\omega\},$$
satisfying
$$J[u_{k}]\downarrow m\triangleq \inf_{u\in\mathbf{A}}J[u].$$
Because~$f\geq 0$, we have $\{|u_{k}|\}_{k\in\mathbb{N}}\subseteq \mathbf{A}$, and therefore $$m\leq J[|u_{k}|]=\bar{J}[u_{k}]\leq J[u_{k}],$$
where the last inequality holds because~$G(x,v)\geq 0$ a.e. in~$\omega$. Then~$\lim_{k\rightarrow\infty}\bar{J}[u_{k}]=m$. It is also immediate that~$\inf_{u\in\mathbf{A}}\bar{J}[u]=m$. On the other hand,~$\bar{J}$ is weakly lower semicontinuous and coercive. Noting that~$\mathbf{A}$ is weakly closed, it follows from \cite[Theorem 1.2]{Struwe} that~$m$ is attained by a nonnegative function~$u_0\in\mathbf{A}$, that is,~$\bar{J}[u_0]=m$. In addition,~$J[u_0]=\bar{J}[u_0]=m$. Then~$u_0$ is a solution of
\[\begin{cases}\label{problem1}
%
Q'_{p,\mathcal{A},|V|}[u]=G(x,v)&\text{in~$\omega$,}\\
%
u=f& \text{in the trace sense on~$\partial\omega$,}
\end{cases}\]
and, for any~$v\in\mathcal{K}$, we denote by~$T(v)$ such a solution of this Dirichlet problem.
%
%
Then the map
$$T:\mathcal{K}\longrightarrow W^{1,p}(\omega)$$ is increasing.
Indeed, pick any~$v_{1}\leq v_{2}$ in~$\mathcal{K}$. Because~$G(x,v_{1})\leq G(x,v_{2})$, we infer that~$T(v_{1})$ and $T(v_{2})$ are respectively a solution and a supersolution of$$Q'_{p,\mathcal{A},|V|}[u]=G(x,v_{1}).$$ On~$\partial\omega$, we have~$T(v_{1})=f=T(v_{2})$. By Lemma \ref{5lemma}, we obtain~$T(v_{1})\leq T(v_{2})$ in~$\omega$.
Consider any subsolution~$v\in W^{1,p}(\omega)\cap C(\bar{\omega})$ of the boundary value problem
\[\begin{cases}
%
Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\
%
u=f& \text{on~$\partial\omega$.}
\end{cases}\]
Then in the weak sense in~$\omega$,$$Q'_{p,\mathcal{A},|V|}[v]=Q'_{p,\mathcal{A},V}[v]+G(x,v)-g(x)\leq G(x,v).$$ Furthermore, $T(v)$ is a solution of
\[\begin{cases}
%
Q'_{p,\mathcal{A},|V|}[u]=G(x,v)&\text{in~$\omega$,}\\
%
u=f& \text{in the trace sense on~$\partial\omega$.}
\end{cases}\]
Invoking Lemma \ref{5lemma}, we get~$v\leq T(v)$ a.e. in~$\omega$. Furthermore, $$Q'_{p,\mathcal{A},V}[T(v)]=g+2V^{-}\left(|v|^{p-2}v-|T(v)|^{p-2}T(v)\right)\leq g\; \mbox{ in } \omega.$$
Analogously, for any supersolution $v\in W^{1,p}(\omega)\cap C(\bar{\omega})$ of the boundary value problem
\[\begin{cases}
Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\
u=f& \text{on~$\partial\omega$,}\\
\end{cases}\]
the function~$T(v)$ is a supersolution of the same problem, and~$v\geq T(v)$ a.e. in~$\omega$.
We define two sequences by recursion: for any~$k\in\mathbb{N}$,
$$\underline{u}_{0}\triangleq \psi,\; \underline{u}_{k}\triangleq T(\underline{u}_{k-1})=T^{(k)}(\psi),
\quad \mbox{ and } \; \bar{u}_{0} \triangleq \varphi,\; \bar{u}_{k}\triangleq T(\bar{u}_{k-1})=T^{(k)}(\varphi).$$
Then the monotone sequences~$\{\underline{u}_{k}\}_{k\in\mathbb{N}}$ and~$\{\bar{u}_{k}\}_{k\in\mathbb{N}}$ converge to $\underline{u}$ and~$\bar{u}$ a.e. in~$\omega$, respectively. Using \cite[Theorem 1.9]{Lieb}, we conclude that the convergence is also in~$L^{p}(\omega)$. Arguing as in the Harnack convergence principle, we may assert that~$\underline{u}$ and~$\bar{u}$ are both fixed points of~$T$ and solutions of \[\begin{cases}
%
Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\
%
u=f& \text{on~$\partial\omega$,}
\end{cases}\]
with~$\psi\leq\underline{u}\leq \bar{u}\leq\varphi$ in~$\omega$. The uniqueness is derived from Lemma \ref{newDiaz}.
\end{proof}
%
%
%
%
%
%
%
%
%
%
\subsubsection{Weak comparison principle}
The following weak comparison principle extends \cite[Theorem 5.3]{Pinchover} to our setting; its proof is similar to that of \cite[Theorem 5.3]{Pinchover} and is therefore omitted.
\begin{Thm}[Weak comparison principle]\label{thm_wcp}
Let~$\omega\Subset\Omega$ be a Lipschitz domain, let~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $g,V\in M^{q}(p;\omega)$ with~$g\geq 0$ a.e. in~$\omega$. Assume that~$\lambda_{1}>0$. If~$u_{2}\in W^{1,p}(\omega)\cap C(\bar{\omega})$ satisfies
\[\begin{cases}
Q'_{p,\mathcal{A},V}[u_{2}]=g&\text{in~$\omega$,}\\
u_{2}>0& \text{on~$\partial\omega$,}\\
\end{cases}\]
and $u_{1}\in W^{1,p}(\omega)\cap C(\bar{\omega})$ satisfies
\[\begin{cases}
Q'_{p,\mathcal{A},V}[u_{1}]\leq Q'_{p,\mathcal{A},V}[u_{2}]&\text{in~$\omega$,}\\
u_{1}\leq u_{2}& \text{on~$\partial\omega$,}\\
\end{cases}\]
then $u_{1}\leq u_{2}$ in $\omega$.
\end{Thm}
%
%
\section{Agmon-Allegretto-Piepenbrink (AAP) theorem}\label{AP}
In this section, we establish an Agmon-Allegretto-Piepenbrink (in short, AAP) type theorem. See \cite{Agmon, Allegretto1974}, \cite{Pinchover}, and \cite{Keller, AAPform} for the counterparts in the linear case, the quasilinear case, and the cases of graphs and certain Dirichlet forms, respectively.
\subsection{Divergence-type equation}
We introduce a divergence-type equation of the first order. For a related study, see \cite{firstreference}.
\begin{Def}\label{def2}
{\em Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. A vector field
$S\in L^{p}_{\mathrm{loc}}(\Omega;\mathbb{R}^{n})$ is a {\em solution} of the first order partial differential equation
\begin{equation}\label{first}
-\dive{\mathcal{A}(x,S)}+(1-p)\mathcal{A}(x,S)\cdot S+V=0 \qquad \mbox{ in } \Omega,
\end{equation}
if
$$\int_{\Omega}\mathcal{A}(x,S)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}(\mathcal{A}(x,S)\cdot S)\psi\,\mathrm{d}x + \int_{\Omega}V\psi\,\mathrm{d}x=0,$$ for every function~$\psi\in C_{c}^{\infty}(\Omega)$, and a {\em supersolution} of the same equation
if$$\int_{\Omega}\mathcal{A}(x,S)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}(\mathcal{A}(x,S)\cdot S)\psi\,\mathrm{d}x+ \int_{\Omega}V\psi\,\mathrm{d}x\geq 0,$$ for every nonnegative function~$\psi\in C_{c}^{\infty}(\Omega)$.
}
\end{Def}
We state a straightforward assertion without proof.
\begin{assertion}
All the integrals in Definition~\ref{def2} are finite.
Moreover, for any solution $S$ defined as above, the corresponding integral equality holds for all bounded functions in $W^{1,p}_{c}(\Omega)$, and for any supersolution~$S$, the corresponding integral inequality holds for all nonnegative bounded functions in $W^{1,p}_{c}(\Omega)$.
\end{assertion}
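To indicate, for illustration only, why equation \eqref{first} arises, we sketch the following formal computation. It assumes that $\mathcal{A}(x,\cdot)$ is positively homogeneous of degree $p-1$ (as is the case for the $p$-Laplacian) and that $v$ is a smooth positive solution of $Q'_{p,\mathcal{A},V}[u]=0$, so that the product rule applies. Setting $S\triangleq\nabla v/v$, the homogeneity yields $\mathcal{A}(x,S)=v^{1-p}\mathcal{A}(x,\nabla v)$, and therefore
$$-\dive{\mathcal{A}(x,S)}=-v^{1-p}\dive{\mathcal{A}(x,\nabla v)}+(p-1)v^{-p}\mathcal{A}(x,\nabla v)\cdot\nabla v=-V+(p-1)\mathcal{A}(x,S)\cdot S,$$
so that $S$ solves \eqref{first}. This is the mechanism behind the implications $(2)\Rightarrow(4)$ and $(3)\Rightarrow(5)$ in Theorem~\ref{thm_AAP} below.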
\begin{comment}
\begin{proof} For any bounded function~$\psi\in W^{1,p}_{c}(\Omega)$, we can find a sequence of uniformly bounded functions~$\{\psi_{k}\}_{k\in\mathbb{N}}\subseteq C_{c}^{\infty}(\Omega)$ approximating~$\psi$ in~$W^{1,p}(\Omega)$. For any nonnegative bounded function~$\psi\in W^{1,p}_{c}(\Omega)$, we can find a sequence of nonnegative uniformly bounded functions~$\{\psi_{k}\}_{k\in\mathbb{N}}\subseteq C_{c}^{\infty}(\Omega)$ approximating~$\psi$ in~$W^{1,p}(\Omega)$. By \cite[Page 250, Theorem 1]{Evans} and \cite[Page 630, Theorem 6]{Evans}, we may assume that all the support sets of~$\psi_{k}$ and~$\psi$ are compactly included in a Lipschitz domain~$\omega$ and~$|\psi|\leq M, |\psi_{k}|\leq M$ for all~$k\in \mathbb{N}$ and some~$M>0$. Then
\begin{eqnarray*}
\int_{\omega}\!|\mathcal{A}(x,S)\cdot(\nabla \psi_{k}-\nabla \psi)|\!\,\mathrm{d}x
&\!\!\leq\!\!&\beta_{\omega} \int_{\omega}|S|^{p-1}|\nabla \psi_{k}-\nabla \psi|_{A}\,\mathrm{d}x\\
&\!\!\leq\!\!&\beta_{\omega}\!\left(\!\!\int_{\omega}|S|^{p}\!\,\mathrm{d}x\!\!\right)^{\!1/p'}
\!\!\left(\int_{\omega}|\nabla \psi_{k}-\nabla \psi|^{p}\!\,\mathrm{d}x\!\!\right)^{\!1/p} \!\!\rightarrow 0\mbox{ as } k\to \infty,
\end{eqnarray*}
and
$$
\int_{\omega}|V|| \psi_{k}- \psi|\,\mathrm{d}x
\leq \left(\int_{\omega}|V||\psi_{k}- \psi|^{p}\,\mathrm{d}x\right)^{1/p}\left(\int_{\omega}|V|\,\mathrm{d}x\right)^{1/p'}\rightarrow 0\mbox{ as } k\to \infty.
$$
For any~$m>0$, we have~$\lim_{k\rightarrow \infty}\left|\left\{x\in\omega:|\psi_{k}(x)-\psi(x)|>m\right\}\right|=0,$ and hence,
\begin{eqnarray*}
&\!\!\!\!&\int_{\omega}|\psi_{k}- \psi|\mathcal{A}(x,S)\cdot S\,\mathrm{d}x
\leq \beta_{\omega}\int_{\omega}|S|^{p}| \psi_{k}- \psi|\,\mathrm{d}x\\
&\!\!=\!\!&\beta_{\omega}\int_{\{x\in\omega:|\psi_{k}(x)-\psi(x)|>m\}}|S|^{p}| \psi_{k}- \psi|\,\mathrm{d}x+\beta_{\omega}\int_{\{x\in\omega:|\psi_{k}(x)-\psi(x)|\leq m\}}|S|^{p}| \psi_{k}- \psi|\,\mathrm{d}x\\
&\!\!\leq\!\!& 2M\beta_{\omega}\int_{\{x\in\omega:|\psi_{k}(x)-\psi(x)|>m\}}|S|^{p}\,\mathrm{d}x+m\beta_{\omega}\int_{\omega}|S|^{p}\,\mathrm{d}x
\rightarrow 0\;\mbox{ as } k\to \infty, m\rightarrow 0. \qedhere
\end{eqnarray*}
\end{proof}
\end{comment}
\subsection{AAP type theorem}
Following the approach in \cite{Pinchover}, we establish the AAP type theorem. In other words, we prove that the nonnegativity of the functional~$Q_{p,\mathcal{A},V}$ on $C_c^{\infty}(\Omega)$ is equivalent to the existence of a positive solution or positive supersolution of the equation~$Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$. We also obtain certain other conclusions involving the first-order equation \eqref{first} defined above. Recall that for every~$\vgf \in C_c^{\infty}(\Omega)$,
$$Q_{p,\mathcal{A},V}[\vgf]=\int_{\Omega}\left(\vert\nabla \vgf\vert_{\mathcal{A}}^{p}+V\vert \vgf\vert^{p}\right)\,\mathrm{d}x.$$
The functional~$Q_{p,\mathcal{A},V}$ is said to be {\em nonnegative} in $\Omega$ if $Q_{p,\mathcal{A},V}[\varphi]\geq 0$ for all~$\varphi\in C^{\infty}_{c}(\Omega)$.
\begin{theorem}[AAP type theorem]\label{thm_AAP}
Let the operator~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega).$ Then the following assertions are equivalent:
\begin{enumerate}
\item[$(1)$] the functional~$Q_{p,\mathcal{A},V}$ is nonnegative in $\Omega$;
\item[$(2)$] the equation~$Q'_{p,\mathcal{A},V}[u]= 0$ admits a positive solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$;
\item[$(3)$] the equation~$Q'_{p,\mathcal{A},V}[u]= 0$ admits a positive supersolution~$\tilde{v}\in W^{1,p}_{{\rm loc}}(\Omega)$;
\item[$(4)$]the first-order equation \eqref{first} admits a solution~$S\in L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$;
\item[$(5)$] the first-order equation \eqref{first} admits a supersolution $\tilde{S}\in L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm_AAP}]
The proof of the theorem is similar to that of \cite[Theorem 4.3]{Pinchover}, but the arguments for the implications~$(2)\Rightarrow (4)$ and~$(3)\Rightarrow (5)$ are simpler.
It suffices to show~$(1)\Rightarrow (2)\Rightarrow (j)\Rightarrow (5)$, where~$j=3,4$, $(3)\Rightarrow (1)$, and~$(5)\Rightarrow (1)$.
$(1)\Rightarrow (2)$
Fix a point~$x_{0}\in \Omega$ and let~$\{\omega_{i}\}_{i\in \mathbb{N}}$ be a Lipschitz exhaustion of $\Omega$ such that~$x_{0}\in \omega_{1}$. Assertion (1) yields for~$i\in \mathbb{N}$,$$\lambda_{1}\big(Q_{p,\mathcal{A},V+1/i};\omega_{i}\big)\triangleq\inf_{\substack{u\in W^{1,p}_{0}(\omega_{i})\setminus\{0\}}}\frac{Q_{p,\mathcal{A},V+1/i}[u;\omega_{i}]}{\Vert u\Vert^{p}_{L^{p}(\omega_{i})}}\geq \frac{1}{i},$$ which, combined with Theorem \ref{maximum}, gives a positive solution~$v_{i}\in W^{1,p}_{0}(\omega_{i})$ of the problem~$Q'_{p,\mathcal{A},V+1/i}[u]=f_{i}$ in~$\omega_{i}$ with~$u=0$ on~$\partial\omega_{i}$, where the~$f_{i}\in C^{\infty}_{c}(\omega_{i}\setminus\bar{\omega}_{i-1})\setminus\{0\}$, $i\geq 2$, are nonnegative, and where$$Q_{p,\mathcal{A},V+1/i}[u;\omega_{i}]\triangleq\int_{\omega_{i}}(\vert\nabla u\vert_{\mathcal{A}}^{p}+(V+1/i)\vert u\vert^{p})\,\mathrm{d}x.$$
Setting~$\omega'_{i}=\omega_{i-1}$, we get for all~$u\in W^{1,p}_{c}(\omega'_{i})$,
$$\int_{\omega_{i}'}\mathcal{A}(x,\nabla v_{i})\cdot \nabla u\,\mathrm{d}x+\int_{\omega_{i}'}\Big(V+\frac{1}{i}\Big)v_{i}^{p-1}u\,\mathrm{d}x=0.$$
Normalize~$f_{i}$ so that~$v_{i}(x_{0})=1$ for all~$i\geq 2$. Applying the Harnack convergence principle with~$\mathcal{V}_{i}\triangleq V+1/i$, we get the second assertion.
$(2)\Rightarrow (3)$ We may choose~$\tilde{v}=v$.
$(3)\Rightarrow (1)$ Let~$\tilde{v}$ be a positive supersolution of~$Q'_{p,\mathcal{A},V}[u]=0$ and set~$T\triangleq -\left\vert\nabla \tilde{v}/\tilde{v}\right\vert_{\mathcal{A}}^{p-2}\,\nabla \tilde{v}/\tilde{v}.$ For any~$\psi\in C^{\infty}_{c}(\Omega)$, picking~$|\psi|^{p}\tilde{v}^{1-p}\in W^{1,p}_{c}(\Omega)$ as a test function, we obtain:
$$(p-1)\int_{\Omega}|T|_{\mathcal{A}}^{p'}|\psi|^{p}\,\mathrm{d}x\leq p\int_{\Omega}|T|_{\mathcal{A}}|\psi|^{p-1}|\nabla \psi|_{\mathcal{A}}\,\mathrm{d}x+\int_{\Omega}V|\psi|^{p}\,\mathrm{d}x.$$ Then Young's inequality~$pab\leq (p-1)a^{p'}+b^{p}$ with~$a=|T|_{\mathcal{A}}|\psi|^{p-1}$ and~$b=|\nabla \psi|_{\mathcal{A}}$ yields the first assertion. For an alternative proof, see Lemma \ref{lem_alter}.
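Spelled out, the pointwise Young inequality with these choices reads
$$p\,|T|_{\mathcal{A}}|\psi|^{p-1}|\nabla \psi|_{\mathcal{A}}\leq (p-1)|T|_{\mathcal{A}}^{p'}|\psi|^{p}+|\nabla \psi|_{\mathcal{A}}^{p}\qquad \mbox{a.e. in } \Omega,$$
so, integrating over $\Omega$ and subtracting the (finite) term $(p-1)\int_{\Omega}|T|_{\mathcal{A}}^{p'}|\psi|^{p}\,\mathrm{d}x$ from both sides of the previous display, we arrive at
$$0\leq \int_{\Omega}\left(|\nabla \psi|_{\mathcal{A}}^{p}+V|\psi|^{p}\right)\mathrm{d}x=Q_{p,\mathcal{A},V}[\psi].$$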
$(2)\Rightarrow (4)$ Let~$v$ be a positive solution of ~$Q'_{p,\mathcal{A},V}[u]= 0$. Then~$1/v\in L^{\infty}_{{\rm loc}}(\Omega)$ by the weak Harnack inequality (or by the Harnack inequality if~$p>n$). Let $S\triangleq \nabla v/v$. Because~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ and~$1/v\in L^{\infty}_{{\rm loc}}(\Omega)$, it follows that~$S\in L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$.
\begin{comment}
Let~$u\in C^{\infty}_{c}(\Omega)$. Using~$|u|^{p}v^{1-p}\in W^{1,p}_{c}(\Omega)$ as a test function in the definition of the equation~$Q'_{p,\mathcal{A},V}[w]= 0$ with~$v$ as a (super)solution, we get
$$
\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot\nabla\big(|u|^{p}v^{1-p}\big)\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v\cdot|u|^{p}v^{1-p}\,\mathrm{d}x\geq 0.
$$
Noting that~$\nabla\big(|u|^{p}v^{1-p}\big)=p|u|^{p-1}v^{1-p}\nabla |u|+(1-p)|u|^{p}v^{-p}\nabla v$, we may deduce that
\begin{multline}
\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot\big(p|u|^{p-1}v^{1-p}\nabla |u|\big)\,\mathrm{d}x+\int_{\Omega}V|u|^{p}\,\mathrm{d}x\\
\geq (p-1)\int_{\Omega}\big(\mathcal{A}(x,\nabla v)\cdot\nabla v\big)|u|^{p}v^{-p}\,\mathrm{d}x\\
=(p-1)\int_{\Omega}\Bigg(\mathcal{A}\Big(x,\frac{\nabla v}{v}\Big)\cdot \frac{\nabla v}{v}\Bigg)|u|^{p}\,\mathrm{d}x\\
\geq(p-1)\alpha\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x
\end{multline}
Moreover,
\begin{multline}
\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot\Big(p|u|^{p-1}v^{1-p}\nabla |u|\Big)\,\mathrm{d}x\\
=p\int_{\Omega}|u|^{p-1}\mathcal{A}\Big(x,\frac{\nabla v}{v}\Big)\cdot \nabla|u|\,\mathrm{d}x\\
\leq p\beta\int_{\Omega} |u|^{p-1}|S|^{p-1}|\nabla u|\,\mathrm{d}x.
\end{multline}
Then$$(p-1)\alpha\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x\leq p\beta\int_{\Omega} |u|^{p-1}|S|^{p-1}|\nabla u|\,\mathrm{d}x+\int_{\Omega}V|u|^{p}\,\mathrm{d}x.$$
Let~$\eta>0, a,b\geq 0$.
$$
\frac{\eta a^{p'}}{p}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p}
=\frac{\eta}{p-1}\frac{a^{p'}}{\frac{p}{p-1}}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p}
=\frac{\eta}{p-1}\frac{a^{p'}}{p'}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p}.
$$
It follows that$$ab=\Big(\frac{\eta}{p-1}\Big)^{\frac{p-1}{p}}a\Big(\frac{p-1}{\eta}\Big)^{\frac{p-1}{p}}b\leq \frac{\eta a^{p'}}{p}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p}.$$
Applying Young's inequality$$pab\leq \eta a^{p'}+\Big(\frac{p-1}{\eta}\Big)^{p-1}b^{p},$$ where~$\eta\in \big(0,(p-1)\alpha\big), a=|u|^{p-1}|S|^{p-1}$, and ~$b=\beta|\nabla u|$, we see at once that
\begin{multline}
\big((p-1)\alpha-\eta\big)\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x\leq \beta^{p}\Big(\frac{p-1}{\eta}\Big)^{p-1}\int_{\Omega}|\nabla u|^{p}\,\mathrm{d}x+\int_{\Omega}|V||u|^{p}\,\mathrm{d}x
\end{multline}
For any~$\omega\Subset \Omega$, choose~$u\in C^{\infty}_{c}(\Omega)$ such that~$u|_{\omega}\equiv 1$
Then
$$
\big((p-1)\alpha-\eta\big)\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x\geq \big((p-1)\alpha-\eta\big)\int_{\omega}|S|^{p}\,\mathrm{d}x.
$$
\end{comment}
We claim that~$S$ is a solution of the equation \eqref{first}. For any~$\psi\in C^{\infty}_{c}(\Omega)$, we employ~$\psi v^{1-p}\in W^{1,p}_{c}(\Omega)$, with$$\nabla\big(\psi v^{1-p}\big)=v^{1-p}\nabla \psi+(1-p)\psi v^{-p}\nabla v,$$ as a test function in the definition of the equation~$Q'_{p,\mathcal{A},V}[w]= 0$ with~$v$ as a solution.
\begin{eqnarray*}
&&\int_{\Omega}\mathcal{A}(x,\nabla v)\!\cdot \! v^{1-p}\nabla \psi \!\,\mathrm{d}x
+\int_{\Omega}\mathcal{A}(x,\nabla v) \! \cdot \! (1-p)\psi v^{-p}\nabla v\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v\psi v^{1-p}\,\mathrm{d}x\\
&=&\int_{\Omega}\mathcal{A}\left(x,\frac{\nabla v}{v}\right)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}\psi\mathcal{A}\left(x,\frac{\nabla v}{v}\right)\cdot\frac{\nabla v}{v}\,\mathrm{d}x+\int_{\Omega}V\psi\,\mathrm{d}x\\
&=&\int_{\Omega}\mathcal{A}(x,S)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}\psi\mathcal{A}(x,S)\cdot S\,\mathrm{d}x+\int_{\Omega}V\psi\,\mathrm{d}x=0.
\end{eqnarray*}
$(3)\Rightarrow (5)$ For a positive supersolution~$\tilde{v}$ of~$Q'_{p,\mathcal{A},V}[u]= 0$, we adopt the same argument as above with~$S$ replaced by~$\tilde{S}\triangleq \nabla \tilde{v}/\tilde{v}$, except that nonnegative test functions~$\psi\in C^{\infty}_{c}(\Omega)$ are used and the equality in the last step is replaced by the inequality~$\geq 0$ (since $\tilde{v}$ is only a supersolution).
$(4)\Rightarrow (5)$ We may choose~$ \tilde{S}=S$.
$(5)\Rightarrow (1)$
For any~$\psi\in C_{0}^{\infty}(\Omega)$, we get
\begin{eqnarray}
\int_{\Omega}\!\!\mathcal{A}(x,\tilde{S})\! \cdot \! \nabla |\psi|^{p}\!\!\,\mathrm{d}x
&\!\!=\!\!& p\!\int_{\Omega}\!|\psi|^{p-1}\mathcal{A}(x,\tilde{S}) \!\cdot \! \nabla |\psi| \!\,\mathrm{d}x\\
&\!\!\leq \!\!& p\!\int_{\Omega} \! |\psi|^{p-1}|\tilde{S}|_{\mathcal{A}}^{p-1}|\nabla \psi|_{\mathcal{A}} \!\,\mathrm{d}x\\
&\!\leq \!& (p\!-\! 1)\int_{\Omega}\!\!|\psi|^{p}|\tilde{S}|_{\mathcal{A}}^{p}\!\,\mathrm{d}x \!+ \! \int_{\Omega}\!|\nabla \psi|_{\mathcal{A}}^{p} \! \,\mathrm{d}x, \label{eqS}
\end{eqnarray}
where the first inequality is derived from the generalized H\"older inequality (Lemma \ref{ass1}),
and in the last step, Young's inequality $pab\leq (p-1)a^{p'}+b^{p}$ is applied with~$a=|\psi|^{p-1}|\tilde{S}|_{\mathcal{A}}^{p-1}$ and~$b=|\nabla \psi|_{\mathcal{A}}$.
Because $\tilde{S}$ is a supersolution of \eqref{first}, we have
$$\int_{\Omega}\mathcal{A}(x,\tilde{S})\cdot\nabla |\psi|^{p}\,\mathrm{d}x+(1-p)\int_{\Omega}|\tilde{S}|_{\mathcal{A}}^{p}|\psi|^{p}\,\mathrm{d}x+ \int_{\Omega}V|\psi|^{p}\,\mathrm{d}x\geq 0, $$
which together with \eqref{eqS}
implies $Q_{p,\mathcal{A},V}[\psi]\geq 0$ for all $\psi\in C_{0}^{\infty}(\Omega)$.
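Explicitly, the last two displays combine to
$$(p-1)\int_{\Omega}|\tilde{S}|_{\mathcal{A}}^{p}|\psi|^{p}\,\mathrm{d}x-\int_{\Omega}V|\psi|^{p}\,\mathrm{d}x
\leq\int_{\Omega}\mathcal{A}(x,\tilde{S})\cdot\nabla|\psi|^{p}\,\mathrm{d}x
\leq(p-1)\int_{\Omega}|\tilde{S}|_{\mathcal{A}}^{p}|\psi|^{p}\,\mathrm{d}x+\int_{\Omega}|\nabla\psi|_{\mathcal{A}}^{p}\,\mathrm{d}x,$$
and cancelling the common (finite) term yields $0\leq\int_{\Omega}\big(|\nabla\psi|_{\mathcal{A}}^{p}+V|\psi|^{p}\big)\,\mathrm{d}x=Q_{p,\mathcal{A},V}[\psi]$.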
\end{proof}
\begin{corollary}\label{newuniqueness}
Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let~$V\in M^{q}(p;\omega)$. Then the principal eigenvalue is unique.
\end{corollary}
\begin{proof}
Let~$\lambda$ be any eigenvalue with an eigenfunction~$v_{\lambda}\geq 0$. By Harnack's inequality, the eigenfunction~$v_{\lambda}$ is positive in~$\omega$. Then~$v_{\lambda}$ is a positive solution of~$Q'_{p,\mathcal{A},V-\gl}[u]=0$. By the AAP type theorem, the functional~$Q_{p,\mathcal{A},V-\gl}$ is nonnegative in~$\omega$ and hence by the definition of~$\gl_{1}$ and Lemma \ref{easylemma}, we get $\gl_{1}\leq\gl\leq \gl_{1}$. Thus, $\lambda_{1}=\gl$.
\end{proof}
\section{Criticality theory}\label{criticality}
In this section we introduce the notions of criticality and subcriticality and establish fundamental results in criticality theory for the functional $Q_{p,\mathcal{A},V}$ that generalize the counterpart results in \cite[Section 4B]{Pinchover} and \cite[Theorem~6.8]{Regev}.
\subsection{Characterizations of criticality}\label{subsec_null}
\subsubsection{Null-sequences and ground states}
\begin{Def}{\em
Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$.
\begin{itemize}
\item If there exists a nonnegative function~$W\in M^{q}_{{\rm loc}}(p;\Omega)\setminus\{0\}$ such that
\begin{equation*}\label{subcritical}
Q_{p,\mathcal{A},V}[\varphi]\geq \int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x,
\end{equation*}
for all $\varphi\in C^{\infty}_{c}(\Omega)$, we say that the functional $Q_{p,\mathcal{A},V}$ is \emph{subcritical} in $\Omega$, and $W$ is a \emph{Hardy-weight} for $Q_{p,\mathcal{A},V}$ in $\Omega$;
\item if $Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$ but $Q_{p,\mathcal{A},V}$ does not admit a Hardy-weight in $\Omega$, we say that the functional $Q_{p,\mathcal{A},V}$ is \emph{critical} in $\Omega$;
\item if there exists~$\varphi\in C^{\infty}_{c}(\Omega)$ such that $Q_{p,\mathcal{A},V}[\varphi]<0$, we say that the functional~$Q_{p,\mathcal{A},V}$ is \emph{supercritical} in $\Omega$.
\end{itemize}
}
\end{Def}
\begin{Def}
\emph{ Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. A nonnegative sequence~$\{u_{k}\}_{k\in\mathbb{N}}\subseteq W^{1,p}_{c}(\Omega)$ is called a \emph{null-sequence} with respect to the nonnegative functional~$Q_{p,\mathcal{A},V}$ in~$\Omega$ if
\begin{itemize}
\item for every $k\in\mathbb{N}$, the function~$u_{k}$ is bounded in~$\Omega$;
\item there exists a fixed open set~$U\Subset\Omega$ such that~$\Vert u_{k}\Vert_{L^{p}(U)}=1$ for all~$k\in\mathbb{N}$;
\item and~$\displaystyle{\lim_{k\rightarrow\infty}}Q_{p,\mathcal{A},V}[u_{k}]=0.$
\end{itemize}}
\end{Def}
\begin{Def}
\emph{ Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. A \emph{ground state} of the nonnegative functional~$Q_{p,\mathcal{A},V}$ is a positive function~$\phi\in W^{1,p}_{{\rm loc}}(\Omega)$, which is an~$L^{p}_{{\rm loc}}(\Omega)$ limit of a null-sequence.}
\end{Def}
\begin{lem}\label{simplelemma}
Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. If a nonnegative sequence~$\{u_{k}\}_{k\in\mathbb{N}}\subseteq W^{1,p}_{c}(\Omega)$ satisfies:
\begin{itemize}
\item for every $k\in\mathbb{N}$, the function~$u_{k}$ is bounded in~$\Omega$;
\item there exists a fixed open set~$U\Subset\Omega$ such that~$\Vert u_{k}\Vert_{L^{p}(U)}\asymp 1$ for all~$k\in\mathbb{N}$;
\item
$\displaystyle{\lim_{k\rightarrow\infty}}Q_{p,\mathcal{A},V}[u_{k}]=0;$
\item and~$\{u_{k}\}_{k\in\mathbb{N}}$ converges in~$L^{p}_{{\rm loc}}(\Omega)$ to a positive~$u\in W^{1,p}_{{\rm loc}}(\Omega)$,
\end{itemize}
then $u$ is a ground state up to a multiplicative constant.
\end{lem}
\begin{proof}
By the second condition, we may assume that up to a subsequence $\displaystyle{\lim_{k\rightarrow\infty}}\Vert u_{k}\Vert_{L^{p}(U)}=C_{0}$ for some positive constant~$C_{0}$. Set $C_{k}\triangleq\Vert u_{k}\Vert_{L^{p}(U)}$. Then $\left\{u_{k}/C_{k}\right\}_{k\in\mathbb{N}}$ is a null-sequence converging in~$L^{p}_{{\rm loc}}(\Omega)$ to~${u/C_{0}}$.
\end{proof}
\begin{corollary}\label{nullrem}
Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption~\ref{ass8}
and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Then a positive principal eigenfunction~$v$ associated to the principal eigenvalue $\gl_1=\gl_1(Q_{p,\mathcal{A},V};\omega)$ is a ground state of the functional~$Q_{p,\mathcal{A},V-\lambda_{1}}$ in~$\omega$.
\end{corollary}
\begin{proof}
Let $v'\in W^{1,p}_{0}(\omega)$ be a principal eigenfunction associated to $\gl_1$, and let $\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C^{\infty}_{c}(\omega)$
be a sequence approximating $v'$ in $W^{1,p}_{0}(\omega)$. Then Lemma~\ref{functionalcv} implies that
$$\lim_{k\rightarrow\infty}Q_{p,\mathcal{A},V-\lambda_{1}}[\varphi_{k};\omega]=Q_{p,\mathcal{A},V-\lambda_{1}}[v';\omega]=0,\; \mbox{and } \; \Vert \varphi_{k}\Vert_{L^{p}(U)}\asymp 1 \;\; \forall k\in\mathbb{N},$$
where~$U\Subset\omega$ is a fixed nonempty open set. By Lemma \ref{simplelemma}, for some positive constant~$C$, the principal eigenfunction~$v\triangleq Cv'$ is a ground state of~$Q_{p,\mathcal{A},V-\lambda_{1}}$ in~$\omega$.
\end{proof}
\begin{comment}
\begin{Def}
\emph{ Let~$1<p<2$. A positive supersolution of the equation \eqref{half} is called \emph{regular} if the supersolution, together with the modulus of its gradient, is locally bounded a.e. in~$\Omega$.}
\end{Def}
\end{comment}
\begin{proposition}\label{mainlemma}
Let~$\{u_{k}\}_{k\in\mathbb{N}}$ be a null-sequence with respect to a nonnegative functional $Q_{p,\mathcal{A},V}$ in~$\Omega$, where $\mathcal{A}$ satisfies Assumptions~\ref{ass8} and \ref{ass2}, and $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let $v$ be a positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$ and let $w_{k}=u_{k}/v$, where $k\in\mathbb{N}$. Then the sequence~$\{w_{k}\}_{k\in\mathbb{N}}$ is bounded in~$W^{1,p}_{{\rm loc}}(\Omega)$ and~$\nabla w_{k}\rightarrow 0$ as~$k\rightarrow\infty$ in~$L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$.
\end{proposition}
\begin{proof
Let $U$ be a fixed open set such that~$\Vert u_{k}\Vert_{L^{p}(U)}=1$ for all~$k\in\mathbb{N}$, and let $\omega$ be a Lipschitz domain with $U\Subset\omega\Subset\Omega$. Using the Minkowski inequality, the Poincar\'{e} inequality, and the weak Harnack inequality, we obtain for every $k\in\mathbb{N}$,
\begin{eqnarray*}
\Vert w_{k}\Vert_{L^{p}(\omega)}&\leq& \Vert w_{k}-\langle w_{k}\rangle_{U}\Vert_{L^{p}(\omega)}+\langle w_{k}\rangle_{U}\left(\mathcal{L}^{n}(\omega)\right)^{1/p}\\
&\leq& C(n,p,\omega,U)\Vert \nabla w_{k}\Vert_{L^{p}(\omega;\mathbb{R}^{n})}+\frac{1}{\inf_{U}v}\langle u_k\rangle_{U}\left(\mathcal{L}^{n}(\omega)\right)^{1/p}.
\end{eqnarray*}
By H\"{o}lder's inequality, noting that $\Vert u_{k}\Vert_{L^{p}(U)}=1$, we obtain
\begin{equation}\label{estimate}
\Vert w_{k}\Vert_{L^{p}(\omega)}\leq C(n,p,\omega,U)\Vert \nabla w_{k}\Vert_{L^{p}(\omega;\mathbb{R}^{n})}+\frac{1}{\inf_{U}v}\left(\frac{\mathcal{L}^{n}(\omega)}{\mathcal{L}^{n}(U)}\right)^{1/p}.
\end{equation}
By Lemma \ref{strictconvexity} with $\xi_{1}=\nabla(vw_{k})=\nabla(u_{k})$ and $\xi_{2}=w_{k}\nabla v$, we obtain
$$|\nabla u_{k}|_{\mathcal{A}}^{p} -w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p} - p\mathcal{A}(x,w_k\nabla v)\!\cdot\! v\nabla w_{k}\geq 0,$$
which together with the local strong convexity of $|\xi|_{\mathcal{A}}^p$ (Assumption~\ref{ass2}) implies
\begin{eqnarray*}
C_\gw(\bar{p}, \mathcal{A})\int_{\gw}v^{\bar{p}}|\nabla w_{k}|^{\bar{p}}_{\mathcal{A}}\,\mathrm{d}x
&\leq& \int_{\omega}\left(|\nabla u_{k}|_{\mathcal{A}}^{p} -w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p} - p\mathcal{A}(x,w_k\nabla v)\!\cdot\! v\nabla w_{k}\right)\,\mathrm{d}x\\
&\leq& \int_{\Omega}\left(|\nabla u_{k}|_{\mathcal{A}}^{p} - w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p} - v\mathcal{A}(x,\nabla v)\!\cdot\! \nabla\left(w_{k}^{p}\right)\right)\,\mathrm{d}x\\
&=&\int_{\Omega}|\nabla u_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x-\int_{\Omega}\mathcal{A}(x,\nabla v)\!\cdot\! \nabla\left(w_{k}^{p}v\right)\!\,\mathrm{d}x.
\end{eqnarray*}
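For the reader's convenience, the last two steps rely on the following elementary identities (valid for a.e. $x\in\Omega$, using that $w_{k}\geq 0$, the $(p-1)$-homogeneity of $\mathcal{A}$ in its second variable, and $\mathcal{A}(x,\xi)\cdot\xi=|\xi|_{\mathcal{A}}^{p}$, both of which are also used in the proof of Theorem~\ref{thm_AAP}):
$$p\,\mathcal{A}(x,w_{k}\nabla v)\cdot v\nabla w_{k}=p\,w_{k}^{p-1}v\,\mathcal{A}(x,\nabla v)\cdot\nabla w_{k}=v\,\mathcal{A}(x,\nabla v)\cdot\nabla\big(w_{k}^{p}\big),$$
$$\mathcal{A}(x,\nabla v)\cdot\nabla\big(w_{k}^{p}v\big)=v\,\mathcal{A}(x,\nabla v)\cdot\nabla\big(w_{k}^{p}\big)+w_{k}^{p}\,|\nabla v|_{\mathcal{A}}^{p}.$$
Moreover, the passage from $\omega$ to $\Omega$ in the second inequality is allowed because the integrand is pointwise nonnegative by the inequality obtained above from Lemma \ref{strictconvexity}.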
Since $v$ is a positive supersolution, we obtain
$$ C_\gw(\bar{p}, \mathcal{A})\int_{\gw}v^{\bar{p}}|\nabla w_{k}|^{\bar{p}}_{\mathcal{A}}\,\mathrm{d}x
\leq \int_{\Omega}|\nabla u_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{\Omega}Vu_{k}^{p}\,\mathrm{d}x=Q_{p,\mathcal{A},V}[u_{k}].$$
By the weak Harnack inequality, and the ellipticity condition \eqref{structure}, we get for a positive constant~$c$ which does not depend on~$k$,
$$c\int_{\omega}|\nabla w_{k}|^{\bar{p}}\,\mathrm{d}x\leq C_\gw(\bar{p}, \mathcal{A})\int_{\gw}v^{\bar{p}}|\nabla w_{k}|^{\bar{p}}_{\mathcal{A}}\,\mathrm{d}x\leq Q_{p,\mathcal{A},V}[u_{k}]\rightarrow 0\; \mbox{ as } k\to \infty.$$
Consequently, by H\"{o}lder's inequality, because~$\bar{p}\geq p$,
$$\nabla w_{k}\rightarrow 0\; \mbox{ as } k\to \infty\; \mbox{ in } L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n}),$$ and this and \eqref{estimate} also imply that~$\{w_{k}\}_{k\in\mathbb{N}}$ is bounded in~$W^{1,p}_{{\rm loc}}(\Omega)$.
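To make the last application of H\"{o}lder's inequality explicit (a routine estimate): since $\bar{p}\geq p$, for every Lipschitz domain $\omega\Subset\Omega$ as above,
$$\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x\leq\left(\mathcal{L}^{n}(\omega)\right)^{1-p/\bar{p}}\left(\int_{\omega}|\nabla w_{k}|^{\bar{p}}\,\mathrm{d}x\right)^{p/\bar{p}}\rightarrow 0\qquad\mbox{as } k\to\infty.$$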
\begin{comment}
Now we deal with the case of~$p<2$. Let~$q_{k}\triangleq Q_{p,\mathcal{A},V}[u_{k}]$. Applying H\"{o}lder's inequality with the conjugate indexes~$\frac{2}{p}$ and~$\frac{2}{2-p}$, we have
\begin{multline*}
\int_{\omega}v^{p}|\nabla w_{k}|^{p}_{\mathcal{A}}\,\mathrm{d}x\\
\leq\left(\int_{\omega}v^{2}|\nabla w_{k}|^{2}_{\mathcal{A}}\left(\left\vert\nabla(vw_{k})\right\vert_{\mathcal{A}}+w_{k}\left\vert\nabla v\right\vert_{\mathcal{A}}\right)^{p-2}\,\mathrm{d}x\right)^{\!\frac{p}{2}}\!\!\left(\int_{\omega}\left(\left\vert\nabla(vw_{k})\right\vert_{\mathcal{A}}+w_{k}\left\vert\nabla v\right\vert_{\mathcal{A}}\right)^{p}\,\mathrm{d}x\right)^{\!1-\frac{p}{2}}\\
\leq Cq_{k}^{\frac{p}{2}}\left(\int_{\omega}v^{p}|\nabla w_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{\omega}w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p}\,\mathrm{d}x\right)^{1-\frac{p}{2}}\\
\leq Cq_{k}^{\frac{p}{2}}\left(\int_{\omega}v^{p}|\nabla w_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{\omega}w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p}\,\mathrm{d}x+1\right),
\end{multline*}
where~$C$ is a constant depending only on~$p$ but may be different from~$C(p)$.
Because~$v$ is regular and locally has a positive lower bound, we may estimate, for some positive constants~$c_{j},j=1,2,3,4$ independent of~$k$, in view of \red{the ellipticity condition} \eqref{structure} and the estimate \eqref{estimate},
$$c_{1}\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x\leq c_{2}q_{k}^{\frac{p}{2}}\left(\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x+\int_{\omega}w_{k}^{p}\,\mathrm{d}x+1\right)\leq c_{2}q_{k}^{\frac{p}{2}}\left(c_{3}\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x+c_{4}\right).$$
Once more, by virtue of~$\lim_{k\rightarrow\infty}q_{k}=0$, we get$$\nabla w_{k}\rightarrow 0\; \mbox{ as } k\rightarrow \infty \mbox{ in } L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n}).$$ Recalling \eqref{estimate}, we conclude that~$w_{k}$ is bounded in~$W^{1,p}_{{\rm loc}}(\Omega)$.
\end{comment}
\end{proof}
\begin{comment}
\begin{Rem}
\emph{ One can see that in the proof, the strong convexity with the coefficient depending only on~$p$ is used.}
\end{Rem}
\end{comment}
\begin{Thm}\label{mainthm}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$, and assume that the functional $Q_{p,\mathcal{A},V}$ is nonnegative on $C_c^{\infty}(\Omega)$.
\begin{comment}
where if~$p\geq 2$,~$A$ is symmetric, locally uniformly positive definite, and bounded and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$ or if~$1<p<2$,~$A\in C^{0,\gamma}_{{\rm loc}}(\Omega;\mathbb{R}^{n\times n})$(still symmetric, locally uniformly positive definite, and bounded) and~$V\in M^{q}_{{\rm loc}}(\Omega),q>n$.
\end{comment}
Then every null-sequence of~$Q_{p,\mathcal{A},V}$ converges, both in~$L^{p}_{{\rm loc}}(\Omega)$ and a.e. in~$\Omega$, to a unique (up to a multiplicative constant) positive supersolution of the equation $Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$.
Furthermore, a ground state is in~$C^{\gamma}_{{\rm loc}}(\Omega)$ for some~$0<\gamma\leq 1$, and it is the unique positive
solution and the unique positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$.
\end{Thm}
\begin{Rem}
\emph{By uniqueness we mean uniqueness up to a multiplicative constant.}
\end{Rem}
\begin{proof}[Proof of Theorem~\ref{mainthm}]
By the AAP type theorem, there exist a positive
supersolution $v\in W^{1,p}_{{\rm loc}}(\Omega)$ and a positive
solution~$\tilde{v}\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$. Let $\{u_{k}\}_{k\in\mathbb{N}}$ be a null-sequence of $Q_{p,\mathcal{A},V}$ in $\Omega$, and set $w_{k}=u_{k}/v$. Using Proposition \ref{mainlemma}, we obtain$$\nabla w_{k}\rightarrow 0\; \mbox{ as } k\rightarrow \infty \mbox{ in } L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n}).$$ The Rellich-Kondrachov theorem yields a subsequence, still denoted by~$w_{k}$, with~$w_{k}\rightarrow c$ in~$W^{1,p}_{{\rm loc}}(\Omega)$ as~$k\rightarrow\infty$ for some constant~$c\geq 0$ (the limit is constant because~$\nabla w_{k}\rightarrow 0$).
Since $v$ is locally bounded away from zero, it follows that up to a subsequence, $u_{k}\rightarrow cv$ a.e. in~$\Omega$ and also in~$L^{p}_{{\rm loc}}(\Omega)$.
Therefore, $c=1/\Vert v\Vert_{L^{p}(U)}>0$. Furthermore, any null-sequence~$\{u_{k}\}_{k\in\mathbb{N}}$ converges (up to a positive constant multiple) to the same positive supersolution~$v$. Noting that the solution~$\tilde{v}$ is
also a positive supersolution, we conclude that~$v=C\tilde{v}$ for some~$C>0$. It follows that~$v$ is also the unique positive solution.
\end{proof}
As a corollary of the above theorem we have:
\begin{Thm}\label{complement}
Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}(p;\omega)$.
Suppose that the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\gw$ admits a proper positive supersolution in~$W^{1,p}(\omega)$. Then the principal eigenvalue $\lambda_{1} =\lambda_{1}(Q_{p,\mathcal{A},V};\omega)$ is strictly positive.
Therefore, all the assertions in Theorem \ref{maximum} are equivalent if~$\mathcal{A}$ and~$V$ are as above and~$\omega\Subset\Omega$ is a Lipschitz domain.
\end{Thm}
\begin{proof}
We need to prove $(4')\Rightarrow (3)$ in Theorem \ref{maximum}. Indeed, by the AAP type theorem,~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\gw$; in particular, $\lambda_{1}\geq 0$. If~$\lambda_{1}=0$, then every positive principal eigenfunction is a positive solution of the equation $Q'_{p,\mathcal{A},V}[u]=0$ in~$\gw$. By Corollary \ref{nullrem} and Theorem \ref{mainthm}, such an eigenfunction is (up to a multiplicative constant) the unique
positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in $\gw$, which contradicts our assumption that the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\gw$ admits a proper positive supersolution in~$W^{1,p}(\omega)$. Hence, $\lambda_{1}>0$.
\end{proof}
\subsubsection{Characterizations of criticality}
The next theorem contains fundamental characterizations of criticality or subcriticality.
\begin{Thm}\label{thm_Poincare}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let
~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Assume that~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$.
Then the following assertions hold true.
\begin{enumerate}
\item[$(1)$] The functional~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$ if and only if~$Q_{p,\mathcal{A},V}$ has a null-sequence in~$C_c^{\infty}(\Omega)$.
\item[$(2)$] The functional~$Q_{p,\mathcal{A},V}$ has a null-sequence if and only if the equation $Q'_{p,\mathcal{A},V}[u]=0$ has a unique positive supersolution.
\item[$(3)$] The functional $Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$ if and only if $Q_{p,\mathcal{A},V}$ admits a strictly positive continuous Hardy-weight $W$ in~$\Omega$.
\item[$(4)$] Assume that~$Q_{p,\mathcal{A},V}$ admits a ground state $\phi$ in $\Omega$. Then there exists a strictly positive continuous
function~$W$ such that
for every $\psi\in C^{\infty}_{c}(\Omega)$ with~$\int_{\Omega}\phi\psi\,\mathrm{d}x\neq 0$ there exists a constant $C>0$ such that the following Poincar\'{e}-type inequality holds:
$$Q_{p,\mathcal{A},V}[\varphi]+C\left\vert\int_{\Omega}\varphi\psi\,\mathrm{d}x\right\vert^{p}\geq \frac{1}{C}\int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x\qquad \forall \varphi\in C^{\infty}_{c}(\Omega).$$
\end{enumerate}
\end{Thm}
\begin{proof}
$(1)$ Suppose that~$Q_{p,\mathcal{A},V}$ is critical. For every nonempty open~$U\Subset\Omega$, let$$c_{U}\triangleq\inf_{\substack{0\leq \varphi\in C^{\infty}_{c}(\Omega)\\\Vert \varphi\Vert_{L^{p}(U)}=1}}Q_{p,\mathcal{A},V}[\varphi]=\inf_{\substack{\varphi\in C^{\infty}_{c}(\Omega)\\\Vert \varphi\Vert_{L^{p}(U)}=1}}Q_{p,\mathcal{A},V}[\varphi],$$ where the equality is an immediate corollary of Lemma \ref{functionalcv}. Pick $W\in C^{\infty}_{c}(U)\setminus\{0\}$ such that~$0\leq W\leq 1$. Then for all~$\varphi\in C^{\infty}_{c}(\Omega)$ with~$\Vert \varphi\Vert_{L^{p}(U)}=1$, we have
$$c_{U}\int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x\leq c_{U}\leq Q_{p,\mathcal{A},V}[\varphi].$$ Because $Q_{p,\mathcal{A},V}$ is critical in $\Omega$,
it follows that $c_{U}=0$. Hence, a minimizing sequence of the above variational problem is a null-sequence of $Q_{p,\mathcal{A},V}$ in $\Omega$.
Conversely, suppose that $Q_{p,\mathcal{A},V}$ admits a null-sequence in $\Omega$. By Theorem \ref{mainthm}, we get a positive solution~$v$ of~$Q'_{p,\mathcal{A},V}[u]=0.$ If $Q_{p,\mathcal{A},V}$ were subcritical in $\Omega$ with a nontrivial nonnegative Hardy-weight $W$, then the AAP type theorem would give a positive solution $w$ of the equation
$Q'_{p,\mathcal{A},V-W}[u]=0$ in $\Omega$. The function $w$ is then a proper positive supersolution of the equation $Q'_{p,\mathcal{A},V}[u]=0$, so this equation admits both a positive solution and a proper positive supersolution, which contradicts the uniqueness in Theorem~\ref{mainthm}.
$(2)$ Suppose that the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$ admits a unique positive supersolution. If~$Q_{p,\mathcal{A},V}$ does not admit a null-sequence, then~$Q_{p,\mathcal{A},V}$ is subcritical by~$(1)$. The same argument as in the second part of the proof of~$(1)$ leads to a contradiction. The other direction follows from Theorem \ref{mainthm}.
$(3)$ Suppose that~$Q_{p,\mathcal{A},V}$ is subcritical. Let~$\{U_{k}\}_{k\in\mathbb{N}}$ be an open covering of $\Omega$ with $U_k\Subset \Omega$ and $\cup_{k\in\mathbb{N}}U_{k}=\Omega$, and let $\{\chi_{k}\}_{k\in\mathbb{N}}$ be a locally finite smooth partition of unity subordinated to the covering. By the proof of $(1)$, we have~$c_{U_k}>0$ for every~$k\in \mathbb{N}$. Hence, for all $\varphi\in C^{\infty}_{c}(\Omega)$ and every~$k\in\mathbb{N}$ we have
$$2^{-k}Q_{p,\mathcal{A},V}[\varphi]\geq 2^{-k}c_{U_k}\int_{U_{k}}|\varphi|^{p}\,\mathrm{d}x\geq 2^{-k}C_{k}\int_{\Omega}\chi_{k}|\varphi|^{p}\,\mathrm{d}x,$$
where~$C_{k}\triangleq \min\{c_{U_{k}},1\}$ for $k\in\mathbb{N}$.
Adding together all the above inequalities over all~$k\in\mathbb{N}$, we get
$$Q_{p,\mathcal{A},V}[\varphi]\geq \int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty}_{c}(\Omega),$$
where $W\triangleq \sum_{k=1}^{\infty}2^{-k}C_{k}\chi_{k} >0$ is smooth (recall that this series is locally finite).
The other direction follows from the definition of subcriticality.
$(4)$ For every nonempty open set~$U\Subset\Omega$ and every~$\varphi\in C^{\infty}_{c}(\Omega)$, let
$$Q_{p,\mathcal{A},V}^{U}[\varphi]\triangleq Q_{p,\mathcal{A},V}[\varphi]+\int_{U}|\varphi|^{p}\,\mathrm{d}x,$$
which is subcritical because $Q_{p,\mathcal{A},V}$ is nonnegative. By~$(3)$, for every nonempty open set
~$U\Subset\Omega$, there is a positive continuous function~$W$ in~$\Omega$ such that for all~$\varphi\in C^{\infty}_{c}(\Omega)$,
\begin{equation}\label{eq_W}
Q_{p,\mathcal{A},V}^{U}[\varphi]\geq \int_{\Omega}W(x)|\varphi|^{p}\,\mathrm{d}x.
\end{equation}
Fix $\psi\in C^{\infty}_{c}(\Omega)$ with~$\int_{\Omega}\phi\psi\,\mathrm{d}x\neq 0$.
Assume that for every~$U\!\Subset\!\Omega$, there exists a nonnegative sequence $\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C^{\infty}_{c}(\Omega)$ such that
$$\int_{U}|\varphi_{k}|^{p}\,\mathrm{d}x=1,\quad Q_{p,\mathcal{A},V}[\varphi_{k}]\rightarrow 0,\quad\mbox{and}~\int_{\Omega}\varphi_{k}\psi\,\mathrm{d}x\rightarrow 0,\quad\mbox{as}~k\rightarrow\infty.$$ Because~$\{\varphi_{k}\}_{k\in\mathbb{N}}$ is a null-sequence, Theorem \ref{mainthm} implies that~$\{\varphi_{k}\}_{k\in\mathbb{N}}$ converges in~$L^{p}_{{\rm loc}}(\Omega)$ to~$c\phi$ for some constant~$c>0$. Consequently,$$\lim_{k\rightarrow\infty}\int_{\Omega}\varphi_{k}\psi\,\mathrm{d}x=c\int_{\Omega}\phi\psi\,\mathrm{d}x\neq 0,$$
and we arrive at a contradiction.
Therefore, there exist a nonempty open~$U\Subset\Omega$ and a positive constant~$C$ such that for all~$\varphi\in C^{\infty}_{c}(\Omega)$,
$$\int_{U}|\varphi|^{p}\,\mathrm{d}x\leq C\Big(Q_{p,\mathcal{A},V}[\varphi]+\Big\vert\int_{\Omega}\varphi\psi\,\mathrm{d}x\Big\vert^{p}\Big).$$
By combining the above inequality with \eqref{eq_W}, we obtain the desired inequality.
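For completeness, the final combination reads as follows: by \eqref{eq_W} (applied with this particular $U$ and its associated $W$) and the last inequality,
$$\int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x\leq Q_{p,\mathcal{A},V}[\varphi]+\int_{U}|\varphi|^{p}\,\mathrm{d}x\leq(1+C)\,Q_{p,\mathcal{A},V}[\varphi]+C\left\vert\int_{\Omega}\varphi\psi\,\mathrm{d}x\right\vert^{p},$$
so the asserted Poincar\'{e}-type inequality holds with $C$ replaced by $1+C$.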
\end{proof}
\begin{corollary}\label{subcriticaleg}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Then $Q_{p,\mathcal{A},V}$ is subcritical in a domain~$\omega\Subset\Omega$ if and only if $\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$.
\end{corollary}
\begin{proof}
Suppose that $Q_{p,\mathcal{A},V}$ is subcritical in $\omega $. Therefore, it admits a Hardy-weight $W$ in~$\omega$. The AAP type theorem (Theorem~\ref{thm_AAP}) implies that there exists a positive solution $v$ of $Q_{p,\mathcal{A},V-W}'[u]=0$ in~$\omega$. Clearly, $v$ is a proper positive supersolution of $Q_{p,\mathcal{A},V}'[u]=0$ in~$\omega$. By Theorem \ref{complement}, we have~$\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$.
On the other hand, if $\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$, then~$\lambda_{1}$ is a Hardy-weight for~$Q_{p,\mathcal{A},V}$ in~$\omega$ and hence~$Q_{p,\mathcal{A},V}$ is subcritical.
\end{proof}
\subsection{Perturbation results and applications}\label{ssect_pert}
The present subsection is mainly devoted to certain perturbation results, which are divided into two cases: one concerns a domain perturbation, and the other concerns certain potential perturbations. As an application, we show that a critical operator admits a null-sequence that converges locally uniformly to its ground state.
\subsubsection{Criticality theory under a domain perturbation}
The following is a straightforward result (see \cite[Proposition 4.2]{Tintarev}).
\begin{proposition}\label{two}
Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Let $\Omega_{1}\subseteq\Omega_{2} \subseteq \Omega$ be subdomains such that $\Omega_{2}\setminus \overline{\Omega_{1}}\neq \emptyset$.
\begin{enumerate}
\item[$(a)$] If~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega_{2}$, then~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega_{1}$.
\item[$(b)$] If~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega_{1}$, then~$Q_{p,\mathcal{A},V}$ is supercritical in~$\Omega_{2}$.
\end{enumerate}
\end{proposition}
\begin{corollary}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. If~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$, then for all domains~$\omega\Subset\Omega$, we have~$\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$.
\end{corollary}
\begin{proof}
The result follows directly from Proposition \ref{two} and Corollary \ref{subcriticaleg}.
\end{proof}
\subsubsection{Criticality theory under potential perturbations}
We state here certain results on perturbations by a potential; their proofs are analogous to those of \cite[Proposition 4.8]{Pinchover}, \cite[Corollary 4.17]{Pinchover}, and \cite[Propositions 4.4 and 4.5]{Tintarev}.
\begin{proposition}\label{prop1}
Suppose that~$\mathcal{A}$ satisfies Assumption \ref{ass8}, $V_{2}\geq V_{1}$ a.e. in $\Omega$, where $V_i\in M^{q}_{{\rm loc}}(p;\Omega)$ for~$i=1,2$, and
$\mathcal{L}^{n}(\{x\in \Omega : V_{2}(x)>V_{1}(x) \})>0$.
\begin{enumerate}
\item[$(1)$] If~$Q_{p,\mathcal{A},V_{1}}$ is nonnegative in~$\Omega$, then~$Q_{p,\mathcal{A},V_{2}}$ is subcritical in~$\Omega$.
\item[$(2)$] If~$Q_{p,\mathcal{A},V_{2}}$ is critical in~$\Omega$, then~$Q_{p,\mathcal{A},V_{1}}$ is supercritical in~$\Omega$.
\end{enumerate}
\end{proposition}
\begin{cor}\label{interval}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V_{i}\in M^{q}_{{\rm loc}}(p;\Omega)$, where $i=0,1$.
Assume that~$Q_{p,\mathcal{A},V_{i}}$ is nonnegative in~$\Omega$ for~$i=0,1$.
Let~$V_{t}\triangleq(1-t)V_{0}+tV_{1} $ for~$t\in [0,1]$.
Then $Q_{p,\mathcal{A},V_{t}}$ is nonnegative in~$\Omega$ for all~$t\in [0,1]$. Moreover, if $\mathcal{L}^{n}\left(\{V_{0}\neq V_{1}\}\right)\!>\!0$, then~$Q_{p,\mathcal{A},V_{t}}$ is subcritical in~$\Omega$ for every~$t\in (0,1)$.
\end{cor}
\begin{proposition}\label{prop_subcritical}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Assume that~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$ and $\mathbf{V}\in L_c^{\infty}(\Omega)\setminus\{0\}$ is such that~$\mathbf{V}\ngeq 0$. Then there exist~$\tau_{+}>0$ and~$\tau_{-}\in[-\infty,0)$ such that $Q_{p,\mathcal{A},V+t\mathbf{V}}$ is subcritical in~$\Omega$ if and only if~$t\in(\tau_{-},\tau_{+})$. In addition,~$Q_{p,\mathcal{A},V+\tau_{+}\mathbf{V}}$ is critical in~$\Omega$.
\end{proposition}
\begin{proposition}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Assume that~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$ with a ground state $v$. Let $\mathbf{V}\in L_c^{\infty}(\Omega)$. Then there is~$0<\tau_{+}\leq \infty$ such that~$Q_{p,\mathcal{A},V+t\mathbf{V}}$ is subcritical in~$\Omega$ for~$t\in (0,\tau_{+})$ if and only if~$\int_{\Omega}\mathbf{V}|v|^{p}\,\mathrm{d}x>0.$
\end{proposition}
\subsubsection{Locally uniformly convergent null-sequence}
The following is an important application of the above perturbation results.
\begin{lem}\label{localuniform}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Assume that~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$. Then~$Q_{p,\mathcal{A},V}$ admits a null-sequence $\{\phi_{i}\}_{i\in\mathbb{N}}\subseteq C^{\infty}_{c}(\Omega)$ converging locally uniformly to the ground state $\phi$.
\end{lem}
\begin{proof}
Let~$\{\omega_{i}\}_{i\in\mathbb{N}}$ be a Lipschitz exhaustion of~$\Omega$,~$x_{0}\in\omega_{1}$, and~$\mathbf{V}\in C^{\infty}_{c}(\Omega)\setminus\{0\}$ a nonnegative function such that~$\supp(\mathbf{V})\Subset\omega_{1}$. By virtue of Proposition~\ref{prop_subcritical}, for every~$i\in\mathbb{N}$, there exists~$t_{i}>0$ such that the functional~$Q_{p,\mathcal{A},V-t_{i}\mathbf{V}}$ is critical in~$\omega_{i}$.
Let $\phi_{i}'$ be the ground state of $Q_{p,\mathcal{A},V-t_{i}\mathbf{V}}$ in~$\omega_{i}$ satisfying $\phi_{i}'(x_0)=1$. Clearly, $\lim_{i\to\infty}t_i =0$ and $\lambda_{1}(Q_{p,\mathcal{A},V-t_{i}\mathbf{V}};\omega_{i})\!=\!0$; hence, $\phi_{i}' \!\in\! W^{1,p}_{0}(\omega_{i})$ and~$Q_{p,\mathcal{A},V-t_{i}\mathbf{V}}[\phi_{i}']\!=\!0$.
By Theorems~\ref{HCP} and \ref{thm_Poincare}, it follows that the sequence~$\{\phi_{i}'\}_{i\in\mathbb{N}}$ converges locally uniformly to $c\phi$ for some constant $c>0$, where $\phi$ is the ground state of $Q_{p,\mathcal{A},V}$ in $\Omega$, and $\int_{\omega_{1}}|\phi'_i|^{p}\,\mathrm{d}x \asymp \int_{\omega_{1}}|\phi|^{p}\,\mathrm{d}x\asymp 1$.
It follows that~$\displaystyle{\lim_{i\rightarrow\infty}}Q_{p,\mathcal{A},V}[\phi_{i}'] \!=\!\displaystyle{\lim_{i\rightarrow\infty}}t_{i} \! \int_{\omega_{1}}\!\mathbf{V}(\phi_{i}')^{p}\,\mathrm{d}x = 0$.
By virtue of \cite[Page 250, Theorem 1]{Evans} and \cite[Page 630, Theorem 6]{Evans}, there exists a nonnegative approximating sequence~$\{\phi_{i}\}_{i\in\mathbb{N}} \! \subseteq \! C^{\infty}_{c}(\Omega)$ such that~$\displaystyle{\lim_{i\rightarrow\infty}}Q_{p,\mathcal{A},V}[\phi_{i}] \!=\!0,$ and $\{\phi_{i}\}$ converges locally uniformly to~$\phi$ in~$\Omega$. Hence, $\int_{\omega_{1}}|\phi_i|^{p}\,\mathrm{d}x \!\asymp \!1.$ By Lemma \ref{simplelemma}, the desired result follows.
\end{proof}
\subsection{Hardy–Sobolev–Maz’ya inequality and $(\mathcal{A},V)$-capacity}\label{A,V-capacity}
The following definition of capacity is a counterpart of \cite[Definition 6.7]{Regev}.
\begin{Def}\label{AVcapacity}
\emph{ Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that the functional~$Q_{p,\mathcal{A},V}$ is nonnegative on~$C^{\infty}_{c}(\Omega)$. For every compact subset~$K$ of~$\Omega$, we define the\emph{~$(\mathcal{A},V)$-capacity} of~$K$ in~$\Omega$ as
$$\capacity_{\mathcal{A},V}(K,\Omega)\triangleq \inf\left\{Q_{p,\mathcal{A},V}[\varphi]:\varphi\in C^{\infty}_{c}(\Omega), \varphi\geq 1 \mbox{ on } K\right\}.$$
}
\end{Def}
\begin{remark}
\emph{For the $p$-capacity and the $(p;r)$-capacity, see \cite[Chapter 2]{HKM} and \cite[Section 2.1]{Maly}. For a relationship between the~$p$-capacity and the~$p$-parabolicity in a Riemannian manifold, see \cite{parabolicity1,parabolicity2}. Recall that $|\xi|_{\mathcal{A}}^{p}=pF(x,\xi)$ for a.e.~$x\in\Omega$ and all~$\xi\in\mathbb{R}^{n}$. For the variational~$F$-capacity, which is a Choquet capacity as guaranteed by \cite[Theorem 5.31]{HKM}, we refer to \cite[Section 5.30]{HKM}. }
\end{remark}
The following theorem is an extension of \cite[Theorem 6.8]{Regev}, \cite[Theorem 4.5]{Regev20}, and \cite[Theorem 3.4]{Regev21}. The proof is omitted since it is similar to that of \cite[Theorem 4.5]{Regev20}.
\begin{Thm}\label{newthm}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that $Q_{p,\mathcal{A},V}$ is nonnegative on~$C^{\infty}_{c}(\Omega)$. Then the following assertions are equivalent.
\begin{enumerate}
\item[$(1)$] The functional~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$;
\item[$(2)$] there exists a positive continuous function~$W^\ast$ in~$\Omega$ such that for all~$\varphi\in C_c^{\infty}(\Omega)$,
$$Q_{p,\mathcal{A},V}[\varphi]\geq \int_{\Omega}W^\ast(x)\left(|\nabla\varphi|_{\mathcal{A}}^{p}+|\varphi|^{p}\right)\,\mathrm{d}x;$$
\item[$(3)$] for every nonempty open set~$U\Subset\Omega$ there exists~$c_{U}>0$ such that for all~$\varphi\in C_c^{\infty}(\Omega)$,
$$Q_{p,\mathcal{A},V}[\varphi]\geq c_{U}\left(\int_{U}|\varphi|\,\mathrm{d}x\right)^{p};$$
\item[$(4)$] the~$(\mathcal{A},V)$-capacity of all closed balls~$B\Subset\Omega$ with positive radii in~$\Omega$ is positive;
\item[$(4')$] the~$(\mathcal{A},V)$-capacity of some closed ball~$B\Subset\Omega$ with a positive radius in~$\Omega$ is positive.
\medskip
Furthermore, in the case of~$p<n$,~$(1)$ holds if and only if
\item[$(5)$] there exists a positive continuous function~$\tilde{W}$ in~$\Omega$ such that the following weighted Hardy–Sobolev–Maz’ya inequality holds true:
$$Q_{p,\mathcal{A},V}[\varphi]\geq \left(\int_{\Omega}\tilde{W}(x)|\varphi|^{p^{\ast}}\,\mathrm{d}x\right)^{p/p^{\ast}} \qquad \forall~\varphi\in C_c^{\infty}(\Omega),$$ where~$p^{\ast}\triangleq pn/(n-p)$ is the critical Sobolev exponent.
\end{enumerate}
\end{Thm}
\begin{comment}
\subsection{Liouville comparison theorem}\label{Liouville}
In the present section we establish a Liouville comparison theorem in our setting, following the results and methods in \cite[Theorem 8.1]{Regev} and \cite{Lioupincho}.
To this end, we need the following two additional assumptions related to the {\em simplified energy} \cite{Lioupincho}. In particular, the assumptions are valid for the $(p,A)$-Laplacian in \cite{Pinchover}.
\begin{ass}\label{ass3}
\emph{We assume that there exists a constant~$C(p)>0$ such that
$$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\leq C(p)|\eta|_{\mathcal{A}}^{2}\left(|\xi|_{\mathcal{A}}+|\eta|_{\mathcal{A}}\right)^{p-2},$$
for all~$\xi,\eta\in\mathbb{R}^{n}$ and a.e.~$x\in\Omega$.}
\end{ass}
\begin{remark}\label{1rem}
\emph{The operator~$\mathcal{A}$ in Example \ref{exa} satisfies Assumption \ref{ass3} for~$p\geq 2$. Indeed, by \cite[(3.11)]{Regev}, we have for all~$i=1,2,\ldots,n$ and a.e.~$x\in\Omega$,
$$a_{i}(x)|\xi_{i}+\eta_{i}|^{p}-a_{i}(x)|\xi_{i}|^{p}-pa_{i}(x)|\xi_{i}|^{p-2}\xi_{i}\eta_{i}\leq C(p)|\eta_{i}|'^{2}\left(|\xi_{i}|'+|\eta_{i}|'\right)^{p-2},$$
where~$|\xi_{i}|'\triangleq\sqrt[p]{a_{i}(x)}|\xi_{i}|$ and~$|\eta_{i}|'\triangleq\sqrt[p]{a_{i}(x)}|\eta_{i}|$.
Adding these inequalities over all~$i=1,2,\ldots,n$, gives
$$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\leq C(p)\sum_{i=1}^{n}|\eta_{i}|'^{2}\left(|\xi_{i}|'+|\eta_{i}|'\right)^{p-2}.$$
Noting that~$|\xi_{i}|'\leq (\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p})^{1/p}=|\xi|_{\mathcal{A}}$ and~$|\eta_{i}|'\leq (\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p})^{1/p}=|\eta|_{\mathcal{A}}$, we get
$$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\leq C(p)|\eta|_{\mathcal{A}}^{2}\left(|\xi|_{\mathcal{A}}+|\eta|_{\mathcal{A}}\right)^{p-2}.$$}
\end{remark}
\begin{ass}\label{ass4}
\emph{We assume that there exists a constant~$C(p)>0$ such that
$$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\geq C(p)|\eta|_{\mathcal{A}}^{2}\left(|\xi|_{\mathcal{A}}+|\eta|_{\mathcal{A}}\right)^{p-2},$$
for all~$\xi,\eta\in\mathbb{R}^{n}$ and a.e.~$x\in\Omega$.}
\end{ass}
\begin{remark}
\emph{If~$p\geq 2$, Assumption \ref{ass4} implies Assumption \ref{ass2}.}
\end{remark}
\begin{remark}\label{pless2}
\emph{The operator~$\mathcal{A}$ in Example \ref{exa} satisfies Assumption \ref{ass4} for~$p<2$. This remark has a similar proof to Remark \ref{1rem}
}
\end{remark}
\begin{lem}[Simplified energy]\label{simplified}
Let~$\mathcal{A}$ satisfy assumptions \ref{ass8} and \ref{ass3}, and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Consider any positive
subsolution $v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[f]=0$ and any nonnegative function~$u\in W^{1,p}_{{\rm loc}}(\Omega)$ such that~$u^{p}/v^{p-1} \in W^{1,p}_{c}(\Omega)$, the product rule for~$u^{p}/v^{p-1}$ holds, and~$vw$ satisfies the product rule for~$w\triangleq u/v$.
Then,
$$Q_{p,\mathcal{A},V}[vw]\leq C(p) \int_{\Omega}v^{2}|\nabla w|^{2}_{\mathcal{A}}\left(w|\nabla v|_{\mathcal{A}}+v|\nabla w|_{\mathcal{A}}\right)^{p-2}\,\mathrm{d}x,$$
where~$C(p)$ is as in Assumption \ref{ass3}.
Similarly, if~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ is a positive supersolution of~$Q'_{p,\mathcal{A},V}[f]=0$ and~$\mathcal{A}$ satisfies assumptions \ref{ass8} and \ref{ass4} with all the other above conditions on $u$ and $v$, then
$$Q_{p,\mathcal{A},V}[vw]\geq C(p)\int_{\Omega}v^{2}|\nabla w|^{2}_{\mathcal{A}}\left(w|\nabla v|_{\mathcal{A}}+v|\nabla w|_{\mathcal{A}}\right)^{p-2}\,\mathrm{d}x,$$
where~$C(p)$ is as in Assumption \ref{ass4}.
\end{lem}
\begin{proof}
The two inequalities follow from Assumption \ref{ass3} or \ref{ass4}, respectively, and Lemma \ref{lem_alter} with~$\xi=w\nabla v$ and~$\eta=v\nabla w$.
\end{proof}
The Liouville comparison theorem has a similar proof to \cite[Theorem 8.1]{Regev}.
\begin{Thm}[Liouville comparison theorem]\label{thm_Liouville}
Let~$\mathcal{A}_{0}$ satisfy assumptions \ref{ass8}, \ref{ass2}, and \ref{ass3},~$\mathcal{A}_{1}$ satisfy assumptions \ref{ass8} and \ref{ass4} (if~$p\geq 2$) or assumptions \ref{ass8}, \ref{ass2}, and \ref{ass4} (if~$p<2$), and $V_{i}\in M^{q}_{{\rm loc}}(p;\Omega)$, where $i=0,1$.
If the following conditions hold true:
\begin{enumerate}
\item[$(1)$] the functional~$Q_{p,\mathcal{A}_{1},V_{1}}$ is critical in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$ with a ground state~$\phi$ in~$\Omega$;
\item[$(2)$] the functional~$Q_{p,\mathcal{A}_{0},V_{0}}$ is nonnegative in~$\Omega$ and the equation~$Q'_{p,\mathcal{A}_{0},V_{0}}[u]=0$ in~$\Omega$ has a positive subsolution~$\psi\in W^{1,p}_{{\rm loc}}(\Omega)$;
\item[$(3)$] there is~$M>0$ such that a.e. in~$\Omega$, for all~$\xi\in\mathbb{R}^{n}$,~$\psi|\xi|_{\mathcal{A}_{0}}\leq M\phi|\xi|_{\mathcal{A}_{1}};$
\item[$(4)$] there is~$N>0$ such that a.e. in~$\Omega$,~$|\nabla \psi|_{\mathcal{A}_{0}}^{p-2}\leq N^{p-2}|\nabla\phi|_{\mathcal{A}_{1}}^{p-2},$
\end{enumerate}
then the functional~$Q_{p,\mathcal{A}_{0},V_{0}}$ is critical in~$\Omega$, and~$\psi$ is its ground state. In particular,~$\psi$ is the unique positive supersolution of the equation~$Q'_{p,\mathcal{A}_{0},V_{0}}[u]=0$ in~$\Omega$.
\end{Thm}
\begin{Rem}
\emph{ In contrast to the counterparts in \cite{Pinchover} and \cite{Regev} of the above theorem, we assume here the $\psi$ is a positive subsolution, since we were unable to extend to our setting \cite[Lemma 2.4]{Regev} saying that $v^+$ is a subsolution if $v$ is a subsolution. Note that by~$(3)$, $\psi\in L^{\infty}_{{\rm loc}}(\Omega)$.}
\end{Rem}
%
\end{comment}
\section{Positive solution of minimal growth}\label{minimal}
This section concerns the removability of an isolated singularity, the existence of positive solutions of minimal growth in a neighborhood of infinity in $\Omega$, and their relationship to criticality and subcriticality. We also study the minimal decay of Hardy-weights.
\subsection{Removability of isolated singularity}
In this subsection, we consider the removability of an isolated singularity (see also \cite{Fraas, Serrin1964, Regev25} and references therein).
\begin{lemma}\label{newev}
Fix $x_0\in \Omega$. Denote by~$B_r\triangleq B_{r}(x_0)$ the open ball of radius $r>0$ centered at $x_0$. Suppose that $\mathcal{A}$ satisfies Assumption~\ref{ass8}, and let $V\in M^{q}(p;B_R)$ for some $R>0$ with~$B_{R}\Subset\Omega$.
Then there exists~$R_1\in(0,R)$ such that $\lambda_{1}(Q_{p,\mathcal{A},V};B_r)>0$ for all $0<r<R_1$.
\end{lemma}
\begin{proof}
By \cite[Theorem 13.19]{Leoni}, for all~$0<r<R$ and~$u\in W^{1,p}_{0}(B_{r})\setminus\{0\}$, we have the lower bound~$\Vert \nabla u\Vert^{p}_{L^{p}(B_{r})}\geq C(n,p)r^{-p}\Vert u\Vert^{p}_{L^{p}(B_{r})}.$ Let $\gd= \alpha_{B_{R}}/2$. Then by the Morrey-Adams theorem, for all~$0\!<\!r\!<\!R$ and $u\!\in\! W^{1,p}_{0}(B_{r})\setminus\{0\}$ with
$\Vert u\Vert^{p}_{L^{p}(B_{r})}=1$, we get
\begin{eqnarray*}
Q_{p,\mathcal{A},V}[u;B_{r}]&= & \int_{B_{r}}|\nabla u|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{B_{r}}V|u|^{p}\,\mathrm{d}x
\geq \alpha_{B_{R}}\int_{B_{r}}|\nabla u|^{p}\,\mathrm{d}x+\int_{B_{r}}V|u|^{p}\,\mathrm{d}x\\
&\geq& (\alpha_{B_{R}}-\delta)\int_{B_{r}}|\nabla u|^{p}\,\mathrm{d}x-\frac{C(n,p,q)}{\delta^{n/(pq-n)}}\Vert V\Vert^{n/(pq-n)}_{M^{q}(p;B_{R})}
\geq \delta C(n,p)r^{-p}-\frac{C(n,p,q)}{\delta^{n/(pq-n)}}\Vert V\Vert^{n/(pq-n)}_{M^{q}(p;B_{R})} \,.
\end{eqnarray*}
Taking the infimum over all such $u$ yields the same lower bound for $\lambda_{1}(Q_{p,\mathcal{A},V};B_{r})$.
Thus, for all sufficiently small radii~$r>0$, the principal eigenvalue $\lambda_{1}(Q_{p,\mathcal{A},V};B_{r})>0$.
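For concreteness (a direct rearrangement of the last display): if $V\not\equiv 0$ in $B_{R}$, the above lower bound is positive precisely when
$$r^{p}<\frac{\delta^{1+n/(pq-n)}\,C(n,p)}{C(n,p,q)\,\Vert V\Vert^{n/(pq-n)}_{M^{q}(p;B_{R})}}\,,$$
so any $R_{1}\in(0,R)$ below the $p$-th root of the right-hand side works; if $V=0$ a.e. in $B_{R}$, every $r\in(0,R)$ works.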
\end{proof}
The following theorem can be proved by essentially the same arguments as those of \cite[Theorem 5.4]{Pinchover}, and therefore its proof is omitted.
\begin{Thm}\label{singularity}
Assume that~$p\leq n$,~$x_{0}\in\Omega$, $\mathcal{A}$ satisfies Assumption~\ref{ass8}, and $V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Consider a positive solution~$u$ of~$Q'_{p,\mathcal{A},V}[w]=0$ in a punctured neighborhood $U_{x_0}$ of $x_0$. If $u$ is bounded in some punctured neighborhood of~$x_{0}$, then~$u$ can be extended to a nonnegative solution in $U_{x_0}\cup\{x_{0}\}$. Otherwise,~$\displaystyle{\lim_{x\rightarrow x_{0}}}u(x)=\infty$.
\end{Thm}
\subsection{Positive solutions of minimal growth}
In this subsection, we study positive solutions of minimal growth at infinity in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$, a notion that was introduced by Agmon in \cite{Agmon} for second-order linear elliptic operators, and was later extended to the quasilinear case \cite{Tintarev} and graphs \cite{Keller}. In particular, we give a further characterization of criticality in terms of global minimal positive solutions.
\subsubsection{Positive solutions of minimal growth}
\begin{Def}
\emph{Let $\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let $K_{0}$ be a compact subset of~$\Omega$. A positive solution~$u$ of~$Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega\setminus K_{0}$, is called a \emph{positive solution of minimal growth in a neighborhood of infinity} in $\Omega$ if for any smooth compact subset~$K$ of~$\Omega$ with~$K_{0}\Subset \mathring{K}$, any positive supersolution~$v\in C\big(\Omega\setminus \mathring{K}\big)$ of ~$Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega\setminus K$ such that~$u\leq v$ on~$\partial K$, satisfies $u\leq v$ in~$\Omega\setminus K$. For such a positive solution~$u$, we write~$u\in \mathcal{M}_{\Omega;K_{0}}=\mathcal{M}_{\Omega;K_{0}}^{\mathcal{A},V}$. If~$K_{0}=\emptyset$, then $u \in \mathcal{M}_{\Omega;\emptyset}$ is said to be a \emph{global minimal positive solution} of~$Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega$.}
\end{Def}
\begin{comment}
\begin{Rem}
\emph{A compact set is \emph{smooth} if its interior is nonempty and has a smooth boundary.}
\end{Rem
} \end{comment}
\begin{Thm}
Let $Q_{p,\mathcal{A},V}\geq 0$ in~$C_c^{\infty}(\Omega)$ with~$\mathcal{A}$ satisfying Assumption~\ref{ass8}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Then for every~$x_{0}\in\Omega$, the equation~$Q'_{p,\mathcal{A},V}[w]=0$ has a solution~$u\in \mathcal{M}_{\Omega;\{x_{0}\}}$.
\end{Thm}
\begin{proof}
Let~$\{\omega_{i}\}_{i\in\mathbb{N}}$ be a Lipschitz exhaustion of~$\Omega$ with~$x_{0}\in\omega_{1}$. We define the inradius of~$\omega_{1}$ as $r_{1}\triangleq\sup_{x\in\omega_{1}}\mathrm{d}(x,\partial\omega_{1})$, and consider the open sets
$U_{i}\triangleq\omega_{i}\setminus \overline B_i=\omega_{i}\setminus \overline{B_{r_{1}/(i+1)}(x_{0})},$ for $i\in\mathbb{N}$. Fix a point~$x_{1}\in U_{1}$. Note that~$\{U_{i}\}_{i\in\mathbb{N}}$ is an exhaustion of~$\Omega\setminus\{x_{0}\}.$ Pick a sequence of nonnegative functions $f_{i}\in C^{\infty}_{c}\left(B_{r_{1}/i}(x_{0})\setminus \overline{B_{r_{1}/(i+1)}(x_{0})}\right)\setminus\{0\}$, for all~$i\in\mathbb{N}$, so that the supports of the~$f_{i}$ shrink towards~$x_{0}$. The principal eigenvalue
$$\lambda_{1}\left(Q_{p,\mathcal{A},V+1/i};U_{i}\right)>0,$$
because~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$. Then, by virtue of Theorem \ref{maximum}, for every~$i\in\mathbb{N}$, there exists a positive solution~$v_{i}\in W^{1,p}_{0}(U_{i})$ of~$Q'_{p,\mathcal{A},V+1/i}[u]=f_{i}$ in~$U_{i}$. The Harnack convergence principle yields a subsequence of
~$\big\{u_{i}\triangleq v_{i}/v_{i}(x_{1})\big\}_{i\in\mathbb{N}}$ converging locally uniformly in $\Omega\setminus\{x_{0}\}$ to a positive solution~$u$ of $Q'_{p,\mathcal{A},V}[w]=0$ in $\Omega\setminus\{x_{0}\}$.
We claim that~$u\in\mathcal{M}_{\Omega;\{x_{0}\}}$. Consider any smooth compact subset~$K$ of~$\Omega$ with~$x_{0}\in\mathring{K}$ and any positive supersolution~$v\in C\big(\Omega\setminus\mathring{K}\big)$ of~$Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega\setminus K$ with~$u\leq v$ on~$\partial K$. For an arbitrary~$\delta>0$, there exists~$i_{K}\in\mathbb{N}$ such that~$\supp{f_{i}}\Subset K$ for all~$i\geq i_{K}$ and~$u_{i}\leq (1+\delta)v$ on~$\partial\left(\omega_{i}\setminus K\right)$. The weak comparison principle (Theorem~\ref{thm_wcp}) gives~$u_{i}\leq (1+\delta)v$ in~$\omega_{i}\setminus K$. Then by letting~$i\rightarrow\infty$ and then $\delta\rightarrow 0$, we obtain~$u\leq v$ in~$\Omega\setminus K$.
\end{proof}
\begin{Def}
\emph{A function~$u\in \mathcal{M}_{\Omega;\{x_{0}\}}$ is called a \emph{minimal positive Green function of~$Q'_{p,\mathcal{A},V}$ in~$\Omega$ with singularity} at~$x_{0}$, if~$u$ admits a nonremovable singularity at~$x_{0}$. We denote such a Green function by~$G^{\Omega}_{\mathcal{A},V}(x,x_{0})$.}
\end{Def}
\begin{Rem}
\emph{See \cite{PinchoverGreen, PinchoverGreen2,Pinchoverlinear} for more on minimal positive Green functions of linear elliptic operators of the second order.}
\end{Rem}
\subsubsection{Further characterization of criticality}
We characterize the criticality and subcriticality of $Q_{p,\mathcal{A},V}$ in terms of the existence of a global minimal positive solution and the existence of a Green function.
\begin{Thm}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Consider the nonnegative functional~$Q_{p,\mathcal{A},V}$. Then~$Q_{p,\mathcal{A},V}$ is subcritical in $\Omega$ if and only if the equation $Q'_{p,\mathcal{A},V}[u]=0$ does not admit a global minimal positive solution in~$\Omega$. Moreover, a ground state of~$Q_{p,\mathcal{A},V}$ in~$\Omega$ is a global minimal positive solution of~$Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$.
\end{Thm}
\begin{proof}
The proof is similar to that of \cite[Theorem 5.9]{Pinchover} and hence omitted.
\end{proof}
\begin{Thm}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Assume that the functional~$Q_{p,\mathcal{A},V}$ is nonnegative in $\Omega$, and fix $u\in \mathcal{M}_{\Omega;\{x_{0}\}}$ for some $x_{0}\in\Omega$.
\begin{enumerate}
\item[$(1)$] If~$u$ has a removable singularity at $x_0$, then~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$.
\item[$(2)$] If~$p\leq n$ and~$u$ has a nonremovable singularity at~$x_{0}$, then~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$.
\item[$(3)$] If~$p> n$,~$u$ has a nonremovable singularity at~$x_{0},$ and~$\lim_{x\rightarrow x_{0}}u(x)=c$ for some positive constant~$c,$ then~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$.
\end{enumerate}
\end{Thm}
\begin{proof}
The proof is similar to that of \cite[Theorem 5.10]{Pinchover} and hence omitted.
\end{proof}
\subsection{How large can Hardy-weights be?}
The following theorem is a generalization of \cite[Theorems~3.1 and 3.2]{Kovarik}.
\begin{Thm}
Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$.
Assume that~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$. For $K\Subset\Omega$,
let $\phi\in W^{1,p}_{{\rm loc}}(\Omega\setminus K)$ be a positive solution of the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega\setminus K$ of minimal growth in a neighborhood of infinity in $\Omega$.
Then for every~$K\Subset\mathring{\mathcal{K}}\Subset\Omega$ and every Hardy-weight $W$ of $Q_{p,\mathcal{A},V}$ in $\Omega\setminus K$, we have $$\int_{\mathcal{K}^{c}}W|\phi|^{p}\,\mathrm{d}x<\infty.$$
\end{Thm}
\begin{proof}
Let $K\Subset\mathring{\mathcal{K}}\Subset\Omega$, and let $\tilde V\in C_0^\infty(\mathring{\mathcal{K}})$ be a nonnegative function such that $Q_{p,\mathcal{A},V-\tilde V}$ is critical in $\Omega$. There exists a null sequence $\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C_c^{\infty}(\Omega)$ for $Q_{p,\mathcal{A},V-\tilde V}$ in $\Omega$ converging locally uniformly to its ground state $\varphi$. So, $\varphi_{k}\geq 0$,~$\Vert\varphi_{k}\Vert_{L^{p}(K)}=1$, and~$\lim_{k\rightarrow\infty}Q_{p,\mathcal{A},V-\tilde V}[\varphi_{k}]\!=\!0$.
Let $f\in C^{1}(\Omega)$ satisfy $0\leq f\leq 1$,~$f|_{K} = 0$, $f|_{\mathcal{K}^{c}}=1$, and~$|\nabla f(x)|_{\mathcal{A}}\leq C_{0}$ for some constant~$C_{0}$ and all~$x\in\Omega$. Then~$Q_{p,\mathcal{A},V}[f\varphi_{k}]\geq \int_{K^{c}}W|f\varphi_{k}|^{p}\,\mathrm{d}x\geq \int_{\mathcal{K}^{c}}W|\varphi_{k}|^{p}\,\mathrm{d}x$. Moreover,
\begin{eqnarray*}
\int_{\mathcal{K}^{c}}W|\varphi_{k}|^{p}\,\mathrm{d}x \!&\leq&\! Q_{p,\mathcal{A},V}[f\varphi_{k}]\!=\!\int_{\mathcal{K}^{c}} \!\!\! (|\nabla\varphi_{k}|_{\mathcal{A}}^{p} \!+\! V|\varphi_{k}|^{p})\,\mathrm{d}x+\int_{\mathcal{K}\setminus K} \!\!\!( |\nabla(f\varphi_{k})|_{\mathcal{A}}^{p}+V|f\varphi_{k}|^{p})\!\,\mathrm{d}x\\
\!&\leq&\!\!Q_{p,\mathcal{A},V-\tilde V}[\varphi_{k}]+2\!\int_{\mathcal{K}}\!(|V|+\tilde V) |\varphi_{k}|^{p}\!\,\mathrm{d}x +C\|\varphi_k\|^p_{W^{1,p}(\mathcal{K})}.
\end{eqnarray*}
Since the null-sequence $\{\varphi_{k}\}$ is locally bounded in $L^\infty(\Omega)\cap W^{1,p}(\Omega)$, it follows that $\int_{\mathcal{K}^{c}}W|\varphi_{k}|^{p}\,\mathrm{d}x < C_1$. Consequently, Fatou's lemma implies that $\int_{\mathcal{K}^{c}}W|\varphi|^{p}\,\mathrm{d}x\leq C_1$. Note that the ground state $\varphi$ is a positive solution of $Q'_{p,\mathcal{A},V}[u]=0$ of minimal growth at infinity of $\Omega$; hence, $\phi\asymp \varphi$ in $\mathcal{K}^c$. Thus, $\int_{\mathcal{K}^{c}}W|\phi|^{p}\,\mathrm{d}x< \infty$.
\end{proof}
\subsection*{Data Availability Statement}
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
\subsection*{Acknowledgements}
This paper is based on the thesis of the first author for the degree of Master of Science in Mathematics at the Technion-Israel Institute of Technology under the supervision of Professors Yehuda Pinchover and Antti Rasila. Y.H. and A.R. gratefully acknowledge the generous financial help of NNSF of China (No. 11971124) and NSF of Guangdong Province (No. 2021A1515010326). Y.P. acknowledges the support of the Israel Science Foundation (grant 637/19) founded by the Israel Academy of Sciences and Humanities.
\section{Introduction}
Hyperspectral remote sensing organically combines the spectrum of ground objects, which is determined by their material composition, with spatial imagery reflecting the shape, texture and layout of those objects, in order to realize accurate detection, recognition and attribute analysis of ground objects. The resulting hyperspectral images (HSIs) not only contain abundant spectral information reflecting the physical properties of ground features but also provide rich spatial information about them. Therefore, HSIs can be utilized to solve problems that cannot be solved well with multispectral or natural images, such as the precise identification of each pixel.
Since different materials exhibit distinct spectral characteristics, HSI classification can achieve high accuracy. Due to these advantages, hyperspectral remote sensing has been widely used in many applications, such as precision agriculture~\cite{teke2013agriculture}, crop monitoring~\cite{strachan2002environmental}, and land resources~\cite{bannari2006agricultural, chabrillat2014soilerosion}.
In environmental protection, HSI has been employed to detect gas~\cite{gas_dectection}, oil spills~\cite{salem2001hyperspectral}, water quality~\cite{awad_2014, jay_guillaume_2014} and vegetation coverage~\cite{0Brightness, 2020Tree}, to better protect our living environment. In the medical field, HSI has been utilized for skin testing to examine the health of human skin~\cite{skin_detection}.
As a general pattern recognition problem, HSI classification has received a substantial amount of attention, and a large number of research results have been achieved in the past several decades.
According to previous work~\cite{2019Deep_overview_domestic}, existing research can be divided into spectral-feature methods, spatial-feature methods, and spectral-spatial-feature methods. The spectral feature is the primitive characteristic of a hyperspectral image and is also called the spectral vector or spectral curve. The spatial feature~\cite{2009Incorporation} describes the relationship between the central pixel and its context, which can greatly increase the robustness of a model.
In the early period of the study on HSI classification, researchers mainly focused on the pure spectral feature-based methods, which simply apply classifiers to pixel vectors, such as support vector machines (SVM)~\cite{2004A_Melgani}, neural networks~\cite{zhong2011immune_network}, logistic regression~\cite{li2012logistic}, to obtain classification results without any feature extraction.
However, raw spectra contain much redundant information, and the relation between spectra and ground objects is nonlinear, which increases the difficulty of classification. Therefore, most later methods pay more attention to dimension reduction and feature extraction in order to learn more discriminative features.
Among the approaches based on dimension reduction, principal component analysis~\cite{licciardi2011linear}, independent component analysis~\cite{2009C_Villa}, linear discriminant analysis~\cite{2014C_Zhang}, and low-rank methods~\cite{2016A_He_Gabor} are widely used.
Nevertheless, the performance of those models is still unsatisfactory because of a common phenomenon in hyperspectral images: different surface objects may have the same spectral characteristics, while the same surface objects may have different spectral characteristics. This spectral variability of ground objects is caused by illumination, environmental, atmospheric, and temporal conditions and increases the probability of misclassification. Thus, methods based only on spectral information ignore spatial information, resulting in unsatisfactory classification performance. The spatial characteristics of ground objects supply abundant information about their shape, context, and layout, and neighboring pixels belong to the same class with high probability, which is useful for improving the classification accuracy and robustness of methods.
Then, a large number of feature extraction methods that integrate the spatial structural and texture information with the spectral features have been developed, including morphological~\cite{2010A_DallaMura,2015A_Falco_TGRS,2011A_Mura}, filtering~\cite{2015_Jia_p1118_1129,2013A_Qian}, coding~\cite{li2015local}, etc. Since deep learning-based methods are mainly concerned in this paper, the readers are referred to~\cite{2015_Ghamisi_p2335_2353} for more details on these conventional techniques.
In the past decade, deep learning technology has developed rapidly and received widespread attention. Compared with traditional machine learning models, deep learning does not need manually designed feature patterns and can automatically learn patterns from data. Therefore, it has been successfully applied in the fields of natural language processing, speech recognition, semantic segmentation, autonomous driving, and object detection, and has achieved excellent performance. Recently, it has also been introduced into the field of HSI classification.
Researchers have proposed a number of new deep learning-based HSI classification approaches, as shown in the left part of Figure \ref{frame}. Currently, all methods based on the joint spectral-spatial feature can be divided into two categories, two-stream and single-stream, according to whether they extract the joint spectral-spatial feature simultaneously. A two-stream architecture usually includes two branches: a spectral branch and a spatial branch. The former extracts the spectral feature of the pixel, and the latter captures the spatial relation between the central pixel and its neighboring pixels. The existing methods cover all common deep learning modules, such as the fully connected layer, the convolutional layer, and the recurrent unit.
In the general deep learning framework, a large number of training samples should be provided to well train the model and tune the numerous parameters. However, in practice, manual labeling is often very time-consuming and expensive due to the need for expert knowledge, and thus, a sufficient training set is often unavailable. As shown in Figure \ref{sample-distribution} (here the widely used Kennedy Space Center (KSC) hyperspectral image is utilized for illustration), the left figure shows 10 randomly selected samples per class (130 labeled samples in total), which are so scattered that they can hardly be seen. In contrast, the right figure in Figure \ref{sample-distribution} displays 50\% of the labeled samples, which is more suitable for deep learning-based methods. Hence, there is a vast gap between the training samples required by deep learning models and the labeled samples that can be collected in practice.
Many learning paradigms have been proposed for solving the problem of few labeled samples, as shown in the right part of Figure \ref{frame}. In Section \ref{learning-paradigm}, we discuss them in detail. They can be integrated with any model architecture.
Some pioneering works such as~\cite{yu2017convolutional} started this topic by training a deep model with good generalization using only a few labeled samples. However, there are still many challenges for this topic.
\begin{figure}[hbpt]
\centering
\includegraphics[width=\textwidth]{figure/sample_distribution.pdf}
\caption{Illustration of the massive gap between practical situations (i.e., few labeled samples) and a large number of labeled samples of deep learning-based methods. Here, the widely used Kennedy Space Center (KSC) hyperspectral image is employed, which contains 13 land covers and 5211 labeled samples (detailed information can be found in the experimental section). Generally, sufficient samples are required to well train a deep learning model (as illustrated in the right figure), which is hard to be achieved in practice due to the difficulty of manually labeling (as shown in the left figure).}
\label{sample-distribution}
\end{figure}
In this paper, we hope to provide a comprehensive review of the state-of-the-art deep learning-based methods for HSI classification with few labeled samples. First, instead of separating the various methods according to feature fusion manner, such as spectral-based, spatial-based, and joint spectral-spatial-based methods, the research progress of methods related to few training samples is categorized according to the learning paradigm, including transfer learning, active learning, and few-shot learning. Second, a number of experiments with various state-of-the-art approaches have been carried out, and the results are summarized to reveal the potential research directions. Further, it should be noted that different from the previous review papers~\cite{2019Deep_overview_domestic, 2019Deep_overview_foreign}, this paper mainly focuses on the few labeled sample issue, which is considered as the most challenging problem in the HSI classification scenario. For reproducibility, the source codes of the methods conducted in the paper can be found at the web site for the paper\footnote{\url{https://github.com/ShuGuoJ/HSI-Classification.git}}.
The remainder of this paper is organized as follows. Section \ref{deep-learning-model} introduces the deep models that are popular in recent years. In Section \ref{learning-paradigm}, we divide the previous works into three mainstream learning paradigms: transfer learning, active learning, and few-shot learning.
In Section \ref{experiments}, we perform a number of experiments in which representative deep learning-based classification methods are compared on several real hyperspectral image data sets. Finally, conclusions and suggestions are provided in Section \ref{conclutions}.
\begin{figure}[hbpt]
\centering
\includegraphics[scale=0.2]{figure/frame/model_frame_solid.pdf}
\caption{Categorization of deep learning-based methods for hyperspectral image classification. The left part is from the model architecture point of view, while the right part is from the learning paradigm point of view. It is worth noting that both kinds of methods can be combined arbitrarily.}
\label{frame}
\end{figure}
\section{Deep learning models for HSI classification}
\label{deep-learning-model}
In this section, three classical deep learning models, including the autoencoder, convolutional neural network (CNN), and recurrent neural network (RNN), for HSI classification are respectively described, and the relevant references are reviewed.
\subsection{Autoencoder for HSI classification}
An autoencoder~\cite{hinton_2006_reducing} is a classic neural network, which consists of two parts: an encoder and a decoder. The encoder $p_{encoder}(\bm{h} \vert \bm{x})$ maps the input $\bm{x}$ as a hidden representation $\bm{h}$, and then, the decoder $p_{decoder}(\hat{\bm{x}} \vert \bm{h})$ reconstructs $\hat{\bm{x}}$ from $\bm{h}$. It aims to make the input and output as similar as possible. The loss function can be formulated as follows:
\begin{equation}
\mathcal{L}(\bm{x},\hat{\bm{x}})=\min \vert \bm{x}-\hat{\bm{x}} \vert
\end{equation}
where $\mathcal{L}$ is the similarity measure. If the dimension of $\bm{h}$ is smaller than that of $\bm{x}$, the autoencoder is undercomplete and can be used to reduce the data dimension. Evidently, if there is no constraint on $\bm{h}$, the autoencoder may simply learn the identity function; in other words, the network does not learn anything useful. To avoid such a situation, the usual way is to add a regularization term $\Omega(h)$ to the loss. In~\cite{2011An_sparse_autoencoder, zeng2018facial}, the regularized autoencoder, referred to as a sparse autoencoder, uses $\Omega(h)=\lambda \sum_ih_i$, which makes most of the hidden activations very close to zero. Therefore, it is equipped with a certain degree of noise immunity and can produce a sparse representation of the input. Another way to avoid the identity mapping is to add some noise to $\bm{x}$ to obtain a corrupted input $\bm{x_{noise}}$ and then force the decoder to reconstruct $\bm{x}$. In this situation, it becomes the denoising autoencoder~\cite{2008Extracting_denoise_autoencoder}, which can remove the additional noise from $\bm{x_{noise}}$ and produce a powerful hidden representation of the input. In general, the autoencoder plays the role of a feature extractor~\cite{windrim2019unsupervised} that learns the internal pattern of the data without labeled samples. Figure \ref{auto-encoder} illustrates the basic architecture of the autoencoder model.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.75\textwidth]{figure/architecture/auto-encoder.pdf}
\caption{The architecture of the autoencoder. The solid line represents training, while the dashed line represents inference.}
\label{auto-encoder}
\end{figure}
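To make the encoder--decoder structure above concrete, the following minimal PyTorch-style sketch (our illustration; the band number, layer sizes and optimizer settings are assumptions rather than the configuration of any cited method) shows an undercomplete autoencoder trained to reconstruct a single spectral vector.
\begin{verbatim}
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    """Undercomplete autoencoder for one spectral vector (illustrative sizes)."""
    def __init__(self, n_bands=103, hidden=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 60), nn.ReLU(),
                                     nn.Linear(60, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, 60), nn.ReLU(),
                                     nn.Linear(60, n_bands))

    def forward(self, x):
        h = self.encoder(x)           # hidden representation h
        return self.decoder(h), h     # reconstruction x_hat and feature h

model = SpectralAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 103)               # a batch of unlabeled spectra
x_hat, h = model(x)
loss = nn.functional.mse_loss(x_hat, x)   # reconstruction loss
loss.backward()
optimizer.step()
\end{verbatim}
In the two-stage schemes discussed below, such an encoder is first trained on unlabeled spectra and then reused as a feature extractor for a lightweight classifier.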
Therefore, Chen \emph{et al.}~\cite{chen2014deep} used an autoencoder for the first time for feature extraction and classification of HSIs. First, in the pretraining stage, the spectral vector of each pixel is fed directly into the encoder module, and then the decoder is used to reconstruct it so that the encoder acquires the ability to extract spectral features. Meanwhile, to obtain the spatial features, principal component analysis (PCA) is utilized to reduce the dimensionality of the hyperspectral image, and then, the image patch is flattened into a vector. Another autoencoder is employed to learn the spatial features. Finally, the spatial-spectral joint information obtained above is fused and classified. Subsequently, a large number of hyperspectral image classification methods~\cite{abdi2017deep_sparse_autoencoder, xing2016stacked} based on autoencoders appeared. Most of these methods adopt the same training strategy as~\cite{chen2014deep}, which is divided into two stages: fully training the encoder in an unsupervised manner and fine-tuning the classifier in a supervised manner. Each of these methods attempts different types of encoders or preprocessing methods to adapt to HSI classification under the condition of small samples. For example, Xing \emph{et al.}~\cite{xing2016stacked} stack multiple denoising autoencoders to form a feature extractor, which has a stronger anti-noise ability to extract more robust representations. Given that the same ground objects may have different spectra while different ground objects may exhibit similar spectra, spectral-based classification methods often fail to achieve satisfactory performance, and spatial structural information of objects provides an effective supplement. To gain a better spatial description of an object, some autoencoder models combined with convolutional neural networks (CNNs) have been developed~\cite{yue2016spatial_pyramid_pooling, hao2017two_stream}.
Concretely, the autoencoder module is able to extract spectral features on large unlabeled samples, while the CNN is proven to be able to extract spatial features well. After fusion, the spatial-spectral features can be achieved. Further, to reduce the number of trainable parameters, some researchers use the lightweight models, such as SVMs~\cite{sun2017encoding, mei2019_3d_convolutional_autoencoder}, random forests~\cite{zhao2017autoencoder_random_forest, wan2017multifractal_spectrum_features} or logistic regression~\cite{chen2014deep,wang2016multi_label}, to serve as the classifier.
Due to the three-dimensional (3D) pattern of hyperspectral images, it is desirable to simultaneously investigate the spectral and spatial information such that the joint spatial-spectral correlation can be better examined. Some three-dimensional operators and methods have been proposed. In the preprocessing stage, Li \emph{et al.}~\cite{li2015deep_gabor} utilized the 3D Gabor operator to fuse spatial information and spectral information to obtain spatial-spectral joint features, which were then fed into the autoencoder to obtain more abstract features. Mei \emph{et al.}~\cite{mei2019_3d_convolutional_autoencoder} used a 3D convolutional operator to construct an autoencoder to extract spatial-spectral features directly. In addition, image segmentation has been introduced to
characterize the region structure of objects to avoid misclassification of pixels at the boundary~\cite{mughees2016efficient}. Therefore, Liu \emph{et al.}~\cite{liu2015learnt_features} utilized superpixel segmentation technology as a postprocessing method to perform boundary regularization on the classification map.
\subsection{Convolutional Neural Networks (CNNs) for HSI classification}
The CNN uses a group of shared parameters, referred to as a kernel, to scan the image and produce a specific feature map. It has three main characteristics that make it very powerful for feature representation, and thus, the CNN has been successfully applied in many research fields. The first one is local connectivity, which greatly decreases the number of trainable parameters and makes the network suitable for processing large images. This is the most obvious difference from the fully connected network, which has full connections between neighboring layers and is therefore impractical for large spatial images. The second characteristic is weight sharing: the same convolutional kernel uses the same parameters at every spatial position, further reducing the number of parameters. In contrast, in the traditional neural network, the parameters associated with different outputs are independent of each other. Weight sharing also leads to the third characteristic: shift invariance. This means that even if the feature of an object has shifted from one position to another, the CNN model still has the capacity to capture it regardless of where it appears. Specifically, a common convolutional layer consists of three components: linear mapping, the activation function and the pooling function.
Similar to other modern neural network architectures, activation functions are used to introduce nonlinearity into the network. Generally, the rectified linear unit (ReLU) is the preferred choice.
Pooling makes use of the statistics of a local region to represent the output at a specified position. Taking max pooling as an example, it uses the maximum value to represent the corresponding region of the input. Clearly, the pooling operation is robust to small changes and noise interference, which are smoothed out in the output, and thus, more abstract features
can be preserved.
In the early works of applying CNNs for HSI classification, two-dimensional convolution was the most widely used method, which is mainly employed to extract spatial texture information~\cite{lee2017contextual_cnn, yu2017convolutional, leng2016cube_cnn_svm}, but the redundant bands greatly enlarge the size of the convolutional kernel, especially the channel dimensionality.
Later, a combination of one-dimensional convolution and two-dimensional convolution appeared~\cite{zhang2017_dual_channel_convolutional} to solve the above problem. Concretely, one-dimensional and two-dimensional convolutions are responsible for extracting spectral and spatial features, respectively. The two types of features are then fused before being input to the classifier. For the small training sample problem, due to insufficient labeled samples, it is difficult for CNNs to learn effective features. For this reason, some researchers have introduced traditional machine learning methods, such as attribute profiles~\cite{aptoula2016_cnn_attribute_profiles}, GLCM~\cite{zhao2019_cnn_textural_feature}, hash learning~\cite{yu2019cnn_embedding_semantic}, and Markov random fields~\cite{qing2018cnn_markov}, to bring prior information into the convolutional network and improve its performance.
Similar to the trend of autoencoder-based classification methods, three-dimensional CNN models have also been applied to HSI classification in recent years and have shown better feature fusion capabilities~\cite{zhong20173d_residual, liu2018_3d_convolution}. However, due to the large number of parameters, three-dimensional convolution is not suitable for solving small-sample classification problems under supervised learning. To reduce the number of parameters of 3D convolution, Fang \emph{et al.}~\cite{fang2020lightweight_deep_clustering} designed a 3D separable convolution. In contrast, Mou \emph{et al.}~\cite{mou2017residual_conv_deconv, sellami2019_3D_network_band_selection} introduced an autoencoder scheme into the three-dimensional convolution module to solve this problem. In combination with the classic autoencoder training method, the three-dimensional convolutional autoencoder can be trained in an unsupervised manner; then, the decoder is replaced with a classifier while the parameters of the encoder are frozen, and finally a small classifier is trained by supervised learning. Moreover, due to the success of ResNet~\cite{he2016deep_residual}, scholars have studied the HSI classification problem based on convolutional residuals~\cite{mou2017residual_conv_deconv, sellami2019_3D_network_band_selection, paoletti2018_pyramidal_residual,ma2018_deconvolution_skip_architecture}. These methods use skip connections to enable the network to learn complex features with a small number of labeled samples. Similarly, CNNs with dense connections have also been introduced into this field~\cite{paoletti2018dense_convolutional, wang2018dense_convolution}. In addition, the attention mechanism is another hot topic for fully mining sample features. Concretely, Haut \emph{et al.} and Xiong \emph{et al.}~\cite{haut2019visual_attention_driven, xiong2018attention_inception} incorporated the attention mechanism into CNNs for HSI classification. Although the above models can work well on HSI, they cannot overcome the low spatial resolution of HSIs, which may cause mixed pixels. To make up for this shortcoming, multimodality CNN models have been proposed. These methods~\cite{feng2019multisource_convolutional, xu2017multisource_convolutional, li2018_three_stream_convolutional} combine HSIs and LiDAR data to increase the discriminability of sample features. Moreover, to achieve good performance under the small-sample scenario, Yu \emph{et al.}~\cite{yu2017convolutional} enlarged the training set through data augmentation by rotation and flipping. On the one hand, this method increases the number of samples and improves their diversity; on the other hand, it enhances the model's rotation invariance, which is important in fields such as remote sensing. Subsequently, Li \emph{et al.}~\cite{li2018data_augmentation, wei2018_cube_pair_network} designed a data augmentation scheme for HSI classification. They combined the samples in pairs so that the model no longer learns the characteristics of the samples themselves but learns the differences between the samples. Different combinations make the training set larger, which is more conducive to model training.
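As a simple illustration of the rotation and flipping augmentation mentioned above, the following NumPy sketch (our own; the patch size and band number are assumptions) generates additional training views of one HSI patch.
\begin{verbatim}
import numpy as np

def augment_patch(patch):
    """Return the original patch plus rotated and flipped copies.

    patch: array of shape (height, width, bands).
    """
    views = [patch]
    for k in (1, 2, 3):                          # 90, 180 and 270 degree rotations
        views.append(np.rot90(patch, k, axes=(0, 1)))
    views.append(patch[::-1, :, :])              # vertical flip
    views.append(patch[:, ::-1, :])              # horizontal flip
    return views

patch = np.random.rand(9, 9, 103)                # dummy 9x9 patch with 103 bands
augmented = augment_patch(patch)                 # 6 views of the same labeled sample
\end{verbatim}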
\subsection{Recurrent neural network (RNN) for HSI classification}
Compared with other forms of neural networks, recurrent neural networks (RNNs)~\cite{hochreiter1997long} have memory capabilities and can record the context information of sequential data. Because of this memory characteristic, recurrent neural networks are widely used in tasks such as speech recognition and machine translation. More precisely, the input of a recurrent neural network is usually a sequence of vectors. At each time step $t$, the network receives an element $\bm{x}_t$ in a sequence and the state $\bm{h}_{t-1}$ of the previous time step, and produces an output $\bm{y}_t$ and a state $\bm{h}_t$ representing the context information at the current moment. This process can be formulated as:
\begin{equation}
\bm{h}_t= f(\mathbf{W}_{hh}\bm{h}_{t-1}+\mathbf{W}_{xh}\bm{x}_t+\mathbf{b})
\end{equation}
where $\mathbf{W}_{xh}$ represents the weight matrix from the input layer to the hidden layer, $\mathbf{W}_{hh}$ denotes the state transition weight in the hidden layer, and $\mathbf{b}$ is the bias. It can be seen that the current state of the recurrent neural network is controlled by both the state of the previous time step and the current input. This mechanism allows the recurrent neural network to capture the contextual semantic information implicitly between the input vectors. For example, in the machine translation task, it can enable the network to understand the semantic relationship between words in a sentence.
However, the classic RNN is prone to gradient explosion or gradient vanishing during training. When the input sequence is too long, the chain of derivatives in backpropagation becomes very long, driving the gradient toward infinity or zero. Therefore, the classic RNN model is replaced by a long short-term memory (LSTM) network~\cite{hochreiter1997long} or a gated recurrent unit (GRU)~\cite{cho2014GRU} in the HSI classification task.
Both LSTM and GRU use gating technology to filter the input and the previous state so that the network can forget unnecessary information and retain the most valuable context. LSTM maintains an internal memory state, and there are three gates: input gate $\bm{i}_t$, forget gate $\bm{f}_t$ and output gate $\bm{o}_t$, which are formulated as:
\begin{equation}
\bm{i}_t = \sigma(\mathbf{W_{i}} \cdot [\bm{x}_t, \bm{h}_{t-1}])
\end{equation}
\begin{equation}
\bm{f}_t = \sigma(\mathbf{W_{f}} \cdot [\bm{x}_t, \bm{h}_{t-1}])
\end{equation}
\begin{equation}
\bm{o}_t = \sigma(\mathbf{W_{o}} \cdot [\bm{x}_t, \bm{h}_{t-1}])
\end{equation}
It can be seen that the three gates are generated from the current input and the previous state. First, the current input and the previous state are concatenated and mapped to a new candidate input $\bm{g}_t$ according to the following formula:
\begin{equation}
\bm{g}_t = \tanh(\mathbf{W_{g}} \cdot [\bm{x}_t, \bm{h}_{t-1}])
\end{equation}
Subsequently, the input gate, the forget gate, the new input $\bm{g}_t$ and the internal memory unit $\hat{\bm{h}}_{t-1}$ update the internal memory state together. In this process, the LSTM discards invalid information and adds new semantic information.
\begin{equation}
\hat{\bm{h}}_t = \bm{f}_t \odot \hat{\bm{h}}_{t-1} + \bm{i}_t \odot \bm{g}_t
\end{equation}
Finally, the new internal memory state is filtered by the output gate to form the output of the current time step
\begin{equation}
\bm{h}_t = \bm{o}_t \odot \tanh(\hat{\bm{h}}_t)
\end{equation}
Concerning HSI processing, the spectrum of each pixel is a high-dimensional vector and can be regarded as sequential data. There are many works using LSTM for HSI classification tasks. For instance, Mou \emph{et al.}~\cite{mou2017deep_recurrent_hyperspectral} proposed an LSTM-based HSI classification method for the first time, and their work only focused on spectral information. For each sample pixel vector, each band is input into the LSTM step by step. To improve the performance of the model, spatial information is considered in subsequent research. For example, Liu \emph{et al.} fully considered the spatial neighborhood of the sample and used a multilayer LSTM to extract spatial-spectral features~\cite{liu2018spectral_spatia_recurrent}. Specifically, in each time step, the sampling points of the neighborhood are sequentially input into the network to deeply mine the context information in the spatial neighborhood. In~\cite{zhou2019hyperspectral_ss_LSTMs}, Zhou \emph{et al.} used two LSTMs to extract spectral features and spatial features. In particular, for the extraction of spatial features, PCA is first used to extract principal components from the sample rectangular space neighborhood. Then, the first principal component is divided into several lines to form a set of sequence data, and gradually input into the network. In contrast, Ma and Zhang \emph{et al.}~\cite{ma2019hyperspectral_measurements_recurrent, zhang2018spatial_sequential_recurrent} measure the similarity between the sample points in the spatial neighborhood and the center point. The sample points in the neighborhood are reordered according to the similarity and then input into the network step by step. This approach allows the network to focus on learning sample points that are highly similar to the center point, and the memory of the internal hidden state can thus be enhanced. Pan \emph{et al.}~\cite{pan2020spectral_spatial_GRU} proposed an effective tiny model for spectral-spatial classification on HSIs based on a single gated recurrent unit (GRU). In this work, the rectangular space neighborhood is flattened into a vector, which is used to initialize the hidden vector $h_0$ of the GRU, and the center point pixel vector is input into the network to learn features.
In addition, Wu and Saurabh argue that it is difficult to dig out the internal features of the sample by directly inputting a single original spectral vector into the RNN~\cite{wu2017pseudo_labels_deep_learning, wu2017convolutional_recurrent}. The authors use a one-dimensional convolution operator to extract multiple feature vectors from the spectrum vector, which form a feature sequence and are then input to the RNN. Finally, the fully connected layer and the softmax function are adopted to obtain the classification result.
It can be seen that only using recurrent neural networks or one-dimensional convolution to extract the spatial-spectrum joint features is actually not efficient because this will cause the loss of spatial structure information. Therefore, some researchers combine two-dimensional/three-dimensional CNNs with an RNN and use convolution operators to extract spatial-spectral joint features. For example, Hao \emph{et al.}~\cite{hao2020geometry_aware_recurrent} utilized U-Net to extract features and input them into an LSTM or GRU so that the contextual information between features could be explored. Moreover, Shi \emph{et al.}~\cite{shi2018hierarchical_recurrent} introduced the concept of the directional sequence to fully extract the spatial structure information of HSIs. First, the rectangular area of the sampling point is divided into nine overlapping patches. Second, the patch will be mapped to a set of feature vectors through a three-dimensional convolutional network, and the relative position of the patch can generate 8 combinations of directions (for example, top, middle, bottom, left, center, and right) to form a direction sequence. Finally, the sequence is input into the LSTM or GRU to obtain the classification result. In this way, the spatial distribution and structural characteristics of the features can be explored.
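As a minimal sketch of the spectral-sequence idea described above (our illustration; the hidden size, band number and class number are assumptions), the spectrum of one pixel can be fed to an LSTM band by band and the last hidden state used for classification.
\begin{verbatim}
import torch
import torch.nn as nn

class SpectralLSTMClassifier(nn.Module):
    """Treat each band of a pixel spectrum as one time step of a sequence."""
    def __init__(self, n_bands=103, hidden=64, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, spectra):                  # spectra: (batch, n_bands)
        seq = spectra.unsqueeze(-1)              # (batch, n_bands, 1)
        _, (h_n, _) = self.lstm(seq)             # h_n: (1, batch, hidden)
        return self.fc(h_n.squeeze(0))           # class logits

logits = SpectralLSTMClassifier()(torch.rand(8, 103))
\end{verbatim}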
\section{Deep learning paradigms for HSI classification with few labeled samples}
\label{learning-paradigm}
Although different HSI classification methods have different specific designs, they all follow some learning paradigms. In this section, we mainly introduce several learning paradigms that are applied to HSI classification with few labeled training samples. These learning paradigms are based on specific learning theories. We hope to provide a general guide for researchers to design algorithms.
\subsection{Deep Transfer Learning for HSI classification}
Transfer learning~\cite{pan_yang_2010} is an effective method to deal with the small-sample problem. Transfer learning tries to transfer knowledge learned from one domain to another. There are two data sets/domains: one is called the source domain and contains abundant labeled samples, and the other is called the target domain and contains only a few labeled samples. To facilitate the subsequent description, we define the source domain as $\mathbf{D}_s$, the target domain as $\mathbf{D}_t$, and their label spaces as $\mathbf{Y}_s$ and $\mathbf{Y}_t$, respectively. Usually, the data distributions of the source domain and the target domain are inconsistent: $P(\bm{X}_s) \neq P(\bm{X}_t)$. Therefore, the purpose of transfer learning is to use the knowledge learned from $\mathbf{D}_s$ to identify the labels of samples in $\mathbf{D}_t$.
Fine-tuning is a general method in transfer learning that uses $\mathbf{D}_s$ to train the model and adjust it by $\mathbf{D}_t$.
Its original motivation is to reduce the number of samples needed during the training process. Deep learning models generally contain a vast number of parameters, so a model trained only on the target domain $\mathbf{D}_t$ easily overfits and performs poorly in practice. Fine-tuning instead brings the model parameters to a nearly optimal state on the source domain, and a small number of training samples from the target domain then suffice to tune the model to the optimal state. It involves two steps. First, the model is fully trained on the source domain $\mathbf{D}_s$ with abundant labeled samples so that its parameters arrive at a good state. Then, the model, except for some task-related modules, is transferred to the target domain $\mathbf{D}_t$ and slightly tuned on $\mathbf{D}_t$ so that it fits the data distribution of the target domain.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.5\textwidth]{figure/architecture/transfer-learning.pdf}
\caption{Flowchart of the fine-tuning method. The solid line represents pretraining, and the dashed line represents fine-tuning. $f_\omega$ is a learning function.}
\label{transfer-learning}
\end{figure}
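A minimal PyTorch-style sketch of the two-step procedure is given below (our illustration; the backbone, the class numbers and the choice of frozen layers are assumptions, and in practice the backbone may instead be tuned with a small learning rate).
\begin{verbatim}
import torch
import torch.nn as nn

# Step 1: pretrain backbone + source head on the source domain D_s (abundant labels).
backbone = nn.Sequential(nn.Conv2d(103, 64, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
source_head = nn.Linear(64, 16)        # assumed 16 source-domain classes
# ... supervised training on D_s ...

# Step 2: transfer to the target domain D_t (few labels).
target_head = nn.Linear(64, 9)         # new task-related module for 9 target classes
for p in backbone.parameters():
    p.requires_grad = False            # keep the pretrained feature extractor fixed
optimizer = torch.optim.Adam(target_head.parameters(), lr=1e-3)
# ... fine-tune target_head (and optionally the last backbone layers) on D_t ...
\end{verbatim}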
Because the fine-tuning method is relatively simple, it is widely used in transfer learning for hyperspectral image classification. To our knowledge, Yang \emph{et al.}~\cite{yang2016two_channel_transfer} are the first to combine deep learning with transfer learning to classify hyperspectral images. The model consists of two convolutional neural networks, which are used to extract spectral features and spatial features. Then, the joint spectral-spatial feature is input into the fully connected layer to obtain the final result. Following the fine-tuning scheme, the model is first fully trained on the hyperspectral image of the source domain. Next, the fully connected layer is replaced while the parameters of the convolutional network are retained. Finally, the transferred model is trained on the target hyperspectral image to adapt to the new data distribution. Later transfer learning models based on fine-tuning basically follow this architecture~\cite{yang2017_deep_joint_transferring,lin2019deep_transfer_information_measure,zhang2019transfer_lightweight_3DCNN,jiang2019transfer_3Dseparable_ResNet}. It is worth noting that Deng \emph{et al.}~\cite{deng2018active_transfer} combined transfer learning with active learning to classify HSI.
Data distribution adaptation is another commonly used transfer learning method. The basic idea of this theory is that in the original feature space, the data probability distributions of the source domain and the target domain are usually different. However, they can be mapped to a common feature space together. In this space, their data probability distributions become similar. In 2014, Ghifary \emph{et al.}~\cite{ghifary2014deep_domain_adaptive} first proposed a shallow neural network-based domain adaptation model, called DaNN.
The innovation of this work is that a maximum mean discrepancy (MMD) adaptation layer is added to calculate the distance between the source domain and the target domain. Moreover, the distance is merged into the loss function to reduce the difference between the two data distributions.
Subsequently, Tzeng \emph{et al.}~\cite{tzeng2014deep_domain_confusion} extended this work with a deeper network and proposed deep domain confusion to solve the adaptive problem of deep networks.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.8\textwidth]{figure/architecture/DANN.pdf}
\caption{Flowchart of DANN.}
\label{DANN}
\end{figure}
Wang \emph{et al.}~\cite{Wang2019deep_domain_hyperspectral} introduced the deep domain adaptation model to the field of hyperspectral image classification for the first time. In~\cite{Wang2019deep_domain_hyperspectral}, two hyperspectral images from different scenes are mapped to two low-dimensional subspaces by the deep neural network, in which the samples are represented as manifolds. MMD is used to measure the distance between the two low-dimensional subspaces and is added to the loss function so that the two subspaces become highly similar. In addition, they also add the sum of the distances between samples and their neighbors to the loss function to ensure that the low-dimensional manifold is discriminative.
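As a minimal sketch (our own, with a linear kernel; the cited works may use different kernels or additional terms), an MMD penalty between batches of source and target features can be added to the training loss as follows.
\begin{verbatim}
import torch

def linear_mmd(feat_src, feat_tgt):
    """Squared MMD with a linear kernel: distance between domain feature means."""
    delta = feat_src.mean(dim=0) - feat_tgt.mean(dim=0)
    return torch.dot(delta, delta)

# total_loss = classification_loss + lambda_mmd * linear_mmd(f_source, f_target)
\end{verbatim}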
Motivated by the excellent performance of generative adversarial net (GAN), Yaroslav \emph{et al.}~\cite{ganin2016domain_adversarial_training} first introduced it into transfer learning. The network is named DANN (domain-adversarial neural network), which is different from DaNN proposed by Ghifary \emph{et al.}~\cite{ghifary2014deep_domain_adaptive}.
The generator $\mathbf{G}_f$ and the discriminator $\mathbf{G}_d$ compete with each other until they have converged. In transfer learning, the data in one of the domains (usually the target domain) are regarded as the generated sample. The generator aims to learn the characteristics of the target domain sample so that the discriminator cannot distinguish which domain the sample comes from to achieve the purpose of domain adaptation. Therefore, $\mathbf{G}_f$ is used to represent the feature extractor here.
Elshamli \emph{et al.}~\cite{elshamli2017domain_DANN} first introduced the concept of DANN to the task of hyperspectral image classification. Compared to the general GAN, it has two discriminators. One is the class discriminator predicting the class labels of samples, and the other is the domain discriminator predicting the source of the samples. Different from the two-stage method, DANN is an end-to-end model that can perform representation learning and classification tasks simultaneously. Moreover, it is easy to train. Further, it outperforms two-stage frameworks such as the denoising autoencoder and traditional approaches such as PCA in hyperspectral image classification.
\subsection{Deep Active Learning for HSI classification}
Active learning~\cite{settles2009active}, a supervised learning paradigm, can efficiently deal with small-sample problems. It learns discriminative features effectively by autonomously selecting representative or highly informative samples from the training set, especially when labeled samples are scarce. Generally speaking, active learning consists of five components, $A=(C, L, U, Q, S)$. Among them, $C$ represents one classifier or a group of classifiers. $L$ and $U$ represent the labeled samples and unlabeled samples, respectively. $Q$ is the query function, which is used to query the most informative samples among the unlabeled ones. $S$ is an expert who can label unlabeled samples. In general, active learning has two stages. The first stage is initialization, in which a small number of samples are randomly selected to form the training set $L$ and are labeled by experts to train the classifier. The second stage is the iterative query: based on the results of the previous iteration, $Q$ selects new samples from the unlabeled sample set $U$ for $S$ to label, and they are added to the training set $L$.
Active learning methods applied to hyperspectral image classification are mainly committee-based algorithms and posterior-probability-based algorithms.
Among the committee-based active learning algorithms, the EQB method uses entropy to measure the amount of information in unlabeled samples. Specifically, the training set $L$ is divided into $k$ subsets to train $k$ classifiers, and these $k$ classifiers are then used to classify all unlabeled samples. Therefore, each unlabeled sample corresponds to $k$ predicted labels, from which the entropy value is calculated:
\begin{equation}
\bm{x}^{EQB}=\mathop{\arg\max}_{x_i \in U}\frac{H^{EQB}(x_i)}{\log(N_i)}
\end{equation}
where $H$ represents the entropy value, and $N_i$ represents the number of classes predicted by the sample $x_i$. Samples with large entropy will be selected and manually labeled~\cite{haut2018active_deep}. In~\cite{liu2016active_deep}, the deep belief network is used to generate the mapping feature $h$ of the input $x$ in an unsupervised way, and then, $h$ will be used to calculate the information entropy. At the same time, sparse representation is used to estimate the representations of the sample. In the process of selecting samples for active learning, the information entropy and representations of the samples are comprehensively considered.
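A small NumPy sketch of this normalized vote-entropy criterion is given below (our illustration; variable names are ours).
\begin{verbatim}
import numpy as np

def normalized_vote_entropy(votes):
    """votes: length-k array of class labels predicted by the k committee members."""
    _, counts = np.unique(votes, return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log(p)).sum()
    n_i = len(counts)                   # number of distinct predicted classes
    return entropy / np.log(n_i) if n_i > 1 else 0.0

# The unlabeled sample whose committee votes have the largest normalized entropy
# is queried and sent to the expert for labeling.
\end{verbatim}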
In contrast, active learning methods based on the posterior probability~\cite{li2015active_autoencoders, sun2016active_autoencoder, cao2020convolutional_active} are more widely used. Breaking ties is a posterior-probability-based active learning method that is widely used in hyperspectral classification tasks. This method first uses specific models, such as convolutional networks, maximum likelihood classifiers, or support vector machines, to estimate the posterior probabilities of all samples in the candidate pool. Then, the posterior probabilities are plugged into the following formula to produce a measure of sample uncertainty:
\begin{equation}
\label{BvSB}
\bm{x}^{BT}=\mathop{\arg\min}_{x_i \in U} \left\{ \mathop{\max}_{w \in N}p \left ( y_i^*=w|x_i\right ) - \mathop{\max}_{w \in N\setminus w^+}p(y_i^*=w|x_i)\right\}
\end{equation}
In the above formula, we first calculate, for each candidate sample, the difference between the largest and the second-largest posterior probabilities and select the sample with the minimum difference to join the valuable data set. The smaller this difference is, the more uncertain the sample is. In~\cite{li2015active_autoencoders}, Li \emph{et al.} first used an autoencoder to construct an active learning model for hyperspectral image classification tasks. At the same time, Sun \emph{et al.}~\cite{sun2016active_autoencoder} also proposed a similar method. However, these methods only use spectral features. Because of the effectiveness of spatial information, in~\cite{deng2018active_deep_spatial_spectral}, the joint spatial-spectral features are also considered when generating the posterior probability. In contrast, Cao \emph{et al.}~\cite{cao2020convolutional_active} use convolutional neural networks to generate the posterior probability.
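A minimal NumPy sketch of this breaking-ties selection rule is shown below (our illustration; the posterior probabilities would come from whichever classifier is being used).
\begin{verbatim}
import numpy as np

def breaking_ties_select(posteriors, n_query=5):
    """posteriors: (n_unlabeled, n_classes) class probabilities of the candidates."""
    sorted_p = np.sort(posteriors, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]    # best minus second-best probability
    return np.argsort(margin)[:n_query]           # smallest margins = most uncertain

probs = np.random.dirichlet(np.ones(9), size=100) # dummy posteriors for 100 candidates
query_indices = breaking_ties_select(probs)
\end{verbatim}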
In general, the active learning method can automatically select effective samples according to certain criteria, reduce inefficient redundant samples, and thus well alleviate the problem of missing training samples in the small-sample problem.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.8\textwidth]{figure/architecture/active-learning.pdf}
\caption{Architecture of active learning.}
\label{active-learning}
\end{figure}
\subsection{Deep Few-shot Learning for HSI classification}
Few-shot learning belongs to the meta-learning approaches and, different from most other deep learning methods, aims to learn the differences between samples instead of directly learning what each sample is. It makes the model learn to learn. In few-shot classification, given a small support set with $N$ labeled samples $S_N^k=\lbrace(\bm{x}_1, y_1), \cdots, (\bm{x}_N, y_N)\rbrace$ covering $k$ categories, the classifier assigns to the query sample the label of the most similar sample in $S_N^k$. To achieve this target, many learning frameworks have been proposed, and they can be divided into two categories: meta-based models and metric-based models.
The prototype network~\cite{snell2017prototypical} is one of the metric-based models of few-shot learning. Its basic idea is that every class can be depicted by a prototype representation, and the samples that belong to the same category should be around the class prototype. First, all samples will be transformed into a metric space through an embedding function $f_\phi: \mathbb{R}^D \rightarrow \mathbb{R}^M$ and represented by the embedding vector $\mathbf{c}_k \in \mathbb{R}^M$. Due to the powerful ability of the convolutional network, it is used as the embedding function. Moreover, the prototype vector is usually the mean of the embedding vector of the samples in the support set for each class $c_i$.
\begin{equation}
\bm{c}_i = \frac{1}{|S^i|}\sum_{(\bm{x}_j, y_j)\in S^i}f_\phi(\bm{x}_j)
\end{equation}
In~\cite{liu2020deep}, Liu \emph{et al.} introduce the prototype network into the hyperspectral image classification task and use ResNet~\cite{he2016deep_residual} as a feature extractor that maps the samples into a metric space. The prototype network is then significantly improved for the hyperspectral image classification task in~\cite{tang2019SSPrototypical}. In that paper, the spatial-spectral feature is first integrated by local pattern coding, and a 1D-CNN converts it to an embedding vector. The prototype is the weighted mean of these embedding vectors, in contrast to the simple mean used in the general prototype network. In~\cite{xi2020ResidualPrototypical}, Xi \emph{et al.} replace the mapping function with hybrid residual attention~\cite{muqeet2019hran} and introduce a new loss function that forces the network to increase the interclass distance and decrease the intraclass distance.
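The core of the prototype network can be sketched in a few lines (our illustration; the embedding network is omitted and the dimensions are assumptions): prototypes are class-wise means of support embeddings, and queries are assigned to the nearest prototype.
\begin{verbatim}
import torch

def prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean embedding of the support samples of that class."""
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the class of the nearest prototype (Euclidean distance)."""
    dists = torch.cdist(query_emb, protos)        # (n_query, n_classes)
    return dists.argmin(dim=1)

# Dummy example: 9 classes, 5 support samples per class, 64-dimensional embeddings.
sup = torch.randn(9 * 5, 64)
lab = torch.arange(9).repeat_interleave(5)
pred = classify(torch.randn(10, 64), prototypes(sup, lab, 9))
\end{verbatim}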
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.75\textwidth]{figure/architecture/prototype-network.pdf}
\caption{Architecture of a prototype network.}
\label{prototype-network}
\end{figure}
The relation network~\cite{Sung2018RelationNetwork} is another metric-based model of few-shot learning. In general, it has two modules: the embedding function $f_\phi: \mathbb{R}^D \rightarrow \mathbb{R}^M$ and the relation function $f_\psi: \mathbb{R}^{2M} \rightarrow \mathbb{R}$. The embedding module plays the same role as in the prototype network, and the key idea is the relation module, which calculates the similarity between samples. It is a learnable module, different from the Euclidean distance or cosine distance; in other words, the relation network introduces a learnable metric function on top of the prototype network. Through learning, the relation module can describe the differences between samples more precisely. During inference, the query embedding $f_\phi(\bm{x}_i)$ is combined with the support embedding $f_\phi(\bm{x}_j)$ as $\mathcal{C}(f_\phi(\bm{x}_i), f_\phi(\bm{x}_j))$. Usually, $\mathcal{C}(\cdot, \cdot)$ is a concatenation operation. Then, the relation function transforms the concatenated vector into a relation score $r_{i,j}$, which indicates the similarity between $\bm{x}_i$ and $\bm{x}_j$.
\begin{equation}
r_{i,j} = f_\psi(\mathcal{C}(f_\phi(\bm{x}_i), f_\phi(\bm{x}_j)))
\end{equation}
Several works have introduced the relation network into hyperspectral image classification to solve the small sample set problem. Deng \emph{et al.}~\cite{deng2019relation} first introduced the relation network into HSI. They use a 2-dimensional convolutional neural network to construct both the embedding function and relation function. Gao \emph{et al.}~\cite{gao2020relation} and Ma \emph{et al.}~\cite{ma2019Two_Phase_Relation} have also proposed a similar architecture. In~\cite{rao2019SSRelation}, to extract the joint spatial-spectral feature, Rao \emph{et al.} implemented the embedding function with a 3-dimensional convolutional neural network.
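A minimal sketch of the relation module is given below (our illustration; the embedding network is omitted and the layer sizes are assumptions): the two embeddings are concatenated and mapped by a small learnable network to a similarity score.
\begin{verbatim}
import torch
import torch.nn as nn

embed_dim = 64
relation = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(),
                         nn.Linear(64, 1), nn.Sigmoid())   # learnable relation function

def relation_score(query_emb, support_emb):
    """Concatenate two embeddings and return their learned similarity in [0, 1]."""
    pair = torch.cat([query_emb, support_emb], dim=-1)
    return relation(pair).squeeze(-1)

score = relation_score(torch.randn(1, embed_dim), torch.randn(1, embed_dim))
\end{verbatim}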
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.9\textwidth]{figure/architecture/relation-network.pdf}
\caption{Architecture of relation network.}
\label{relation-network}
\end{figure}
The Siamese network~\cite{bromley1994signature,chopra2005similarity_learning,norouzi2012hamming} is a typical network in few-shot learning. Compared with the above networks, its input is a sample pair. Thus, it is composed of two parallel subnetworks $f_{\phi 1}: \mathbb{R}^D \rightarrow \mathbb{R}^M$ with the same structure and shared parameters. Each subnetwork accepts one input sample and maps it to a low-dimensional metric space, generating the embeddings $f_{\phi 1}(\bm{x}_i)$ and $f_{\phi 1}(\bm{x}_j)$. The Euclidean distance $D(\bm{x}_i, \bm{x}_j)$ is used to measure their similarity.
\begin{equation}
D(\bm{x}_i, \bm{x}_j) = \Vert f_{\phi 1}(\bm{x}_i)- f_{\phi 1}(\bm{x}_j) \Vert_2
\end{equation}
The higher the similarity between the two samples is, the more likely they are to belong to the same class. Recently, the Siamese network was introduced into HSI classification.
Usually, a 2-dimensional convolutional neural network~\cite{liu2017siamese, liu2018transfer} serves as the embedding function, as in the above two networks. In the same way, several methods combine the 1-dimensional convolutional neural network with the 2-dimensional one~\cite{li2020adaptation, huang2020dual_siamese} or use a 3-dimensional network~\cite{rao2020Siamese3D} for the joint spectral-spatial feature. Moreover, Miao \emph{et al.}~\cite{miao2019Siamese_Encoder} have tried to use the stacked autoencoder to construct the embedding function $f_{\phi 1}$. After training, the model has the ability to identify the differences between samples. To obtain the final classification result, we still need a classifier to classify the embedding feature of the sample, which is different from the prototype network and the relation network. To avoid overfitting under limited labeled samples, an SVM is usually used as the classifier because it is lightweight.
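A minimal sketch of the shared-weight forward pass and pair distance is given below (our illustration; the embedding network is an assumption, and the contrastive-style loss is shown as one common training choice rather than the loss used in the cited works).
\begin{verbatim}
import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Linear(103, 64), nn.ReLU(), nn.Linear(64, 32))  # shared

def pair_distance(x_i, x_j):
    """Both branches use the same embedder; similar pairs should end up close."""
    return torch.norm(embedder(x_i) - embedder(x_j), dim=-1)

def contrastive_loss(dist, same_class, margin=1.0):
    """same_class: 1.0 if the pair shares a label, else 0.0."""
    return (same_class * dist.pow(2)
            + (1 - same_class) * torch.clamp(margin - dist, min=0).pow(2)).mean()

d = pair_distance(torch.rand(4, 103), torch.rand(4, 103))
loss = contrastive_loss(d, torch.tensor([1.0, 0.0, 1.0, 0.0]))
\end{verbatim}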
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.8\textwidth]{figure/architecture/siamese-network.pdf}
\caption{Architecture of the Siamese network.}
\label{siamese-network}
\end{figure}
\section{Experiments}
\label{experiments}
In most papers, comprehensive experiments and analyses are presented to describe the advantages and disadvantages of the proposed methods. However, the problem is that different papers may choose different experimental settings. For example, even when the same number of training or test samples is used, the chosen samples normally differ between papers since they are selected randomly.
To evaluate different methods fairly, we should use the exact same experimental setting. That is the reason why we design experiments to evaluate different methods.
As described above, the main methods of small-sample learning currently include the autoencoder, few-shot learning, transfer learning, active learning, and data augmentation. Therefore, some representative networks of the following methods--S-DMM~\cite{2020deep_metric_embedding}, SSDL~\cite{yue2016spatial_pyramid_pooling}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, TwoCnn~\cite{Yang2017deep_transferring_SS}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs} and 3DVSCNN~\cite{2020valuable_selection_cnn}, which contain convolutional network models and recurrent network models, are selected to conduct experiments on three benchmark data sets--PaviaU, Salinas and KSC. All models are based on deep learning. Here, we only focus on the robustness of the model on a small-sample data set, so they classify hyperspectral images based on joint spectral-spatial features.
According to the sample size per category in the training data set, the experiment is divided into three groups. The first has 10 samples for each category, the second has 50 samples for each category and the third has 100 samples for each category. At the same time, to ensure the stability of the model, each group of experiments is performed ten times, and the training data set is different each time. Finally, models are evaluated by average accuracy (AA) and overall accuracy (OA).
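For reproducibility, the sampling protocol and the two metrics can be summarized by the following NumPy sketch (our illustration; function names are ours).
\begin{verbatim}
import numpy as np

def split_per_class(labels, n_per_class, rng):
    """Randomly pick n_per_class training indices per class; the rest is the test set."""
    train = np.concatenate([rng.choice(np.flatnonzero(labels == c),
                                       n_per_class, replace=False)
                            for c in np.unique(labels)])
    test = np.setdiff1d(np.arange(len(labels)), train)
    return train, test

def overall_and_average_accuracy(y_true, y_pred):
    oa = (y_true == y_pred).mean()
    aa = np.mean([(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)])
    return oa, aa

# Example: 10 labeled training samples per class, repeated with different seeds.
labels = np.random.randint(0, 13, size=5211)          # dummy labels for illustration
train_idx, test_idx = split_per_class(labels, 10, np.random.default_rng(0))
\end{verbatim}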
\subsection{Introduction of data sets}
\begin{itemize}
\item \textbf{Pavia University (PaviaU)}: The Pavia University data set consists of a hyperspectral image with 610$\times$340 pixels and a spatial resolution of 1.3 meters, which was taken by the ROSIS sensor above Pavia University in Italy. The spectral imagery continuously covers 115 wavelengths in the range of 0.43$\sim$0.86 $\mu$m. Since 12 of the wavelengths are polluted by noise, each pixel in the final data set contains 103 bands. It contains 42,776 labeled samples in total, covering 9 objects. In addition, its sample size for each object is shown in Table \ref{PaviaU}.
\item \textbf{Salinas}: The Salinas data set consists of a hyperspectral image with 512$\times$217 pixels and a spatial resolution of 3.7 meters, taken over the Salinas Valley in California by the AVIRIS sensor. The spectral imagery continuously covers 224 wavelengths in the range of 0.2$\sim$2.4 $\mu$m. Since 20 of the bands are affected by water absorption, they are removed, and each pixel in the final data set contains 204 bands. It contains 54,129 labeled samples in total, covering 16 objects. In addition, its sample size for each object is shown in Table \ref{Salinas}.
\item \textbf{Kennedy Space Center (KSC)}: The KSC data set was taken at the Kennedy Space Center (KSC) in Florida using the AVIRIS sensor. The hyperspectral image contains 512$\times$641 pixels, and the spatial resolution is 18 meters. The spectral imagery continuously covers 224 wavelengths in the range of 400$\sim$2500 nm. Similarly, after removing 48 bands that are absorbed by water or have a low signal-to-noise ratio, each pixel in the final data set contains 176 bands. It contains 5211 labeled samples, covering 13 objects. Moreover, its sample size for each object is shown in Table \ref{KSC}.
\end{itemize}
\begin{table}[htbp]
\centering
\small
\caption{Pavia University. It contains 9 classes. The second and last columns give the class name and the number of samples, respectively.}
\begin{tabular*}{0.6\textwidth}{c@{\extracolsep{\fill}}cr}
\toprule
NO.&Class&Total \\
\midrule
C1&Asphalt&6631 \\
C2&Meadows&18649 \\
C3&Gravel&2099 \\
C4&Trees&3064 \\
C5&Painted metal sheets&1345 \\
C6&Bare Soil&5029 \\
C7&Bitumen&1330 \\
C8&Self-Blocking Bricks&3682 \\
C9&Shadows&947 \\
\bottomrule
\end{tabular*}
\label{PaviaU}
\end{table}
\begin{table}[htbp]
\centering
\caption{Salinas. It contains 16 classes. The second and last columns give the class name and the number of samples, respectively.}
\begin{tabular*}{0.6\textwidth}{c@{\extracolsep{\fill}}cr}
\toprule
NO.&Class&Total \\
\midrule
C1&Broccoli green weeds 1&2009 \\
C2&Broccoli green weeds 2&3726 \\
C3&Fallow&1976 \\
C4&Fallow rough plow&1394 \\
C5&Fallow smooth&2678 \\
C6&Stubble&3959 \\
C7&Celery&3579 \\
C8&Grapes untrained&11271 \\
C9&Soil vineyard develop&6203 \\
C10&Corn senesced green weeds&3278 \\
C11&Lettuce romaine 4wk&1068 \\
C12&Lettuce romaine 5wk&1927 \\
C13&Lettuce romaine 6wk&916 \\
C14&Lettuce romaine 7wk&1070 \\
C15&Vineyard untrained&7268 \\
C16&Vineyard vertical trellis&1807 \\
\bottomrule
\end{tabular*}
\label{Salinas}
\end{table}
\begin{table}[htbp]
\centering
\caption{KSC. It contains 13 classes. The second and last columns give the class name and the number of samples, respectively.}
\begin{tabular*}{0.6\textwidth}{c@{\extracolsep{\fill}}cr}
\toprule
NO.&Class&Total \\
\midrule
C1&Scrub&761 \\
C2&Willow swamp&243 \\
C3&Cabbage palm hammock&256 \\
C4&Cabbage palm/oak hammock&252 \\
C5&Slash pine&161 \\
C6&Oak/broadleaf hammock&229 \\
C7&Hardwood swamp&105 \\
C8&Graminoid marsh&431 \\
C9&Spartina marsh&520 \\
C10&Cattail marsh&404 \\
C11&Salt marsh&419 \\
C12&Mud flats&503 \\
C13&Water&927 \\
\bottomrule
\end{tabular*}
\label{KSC}
\end{table}
\subsection{Selected models}
Several state-of-the-art methods are chosen to evaluate their performance. They were originally implemented on different platforms, including Caffe, PyTorch, etc., and some platforms, such as Caffe, are no longer well supported by current development environments. Most models are therefore our re-implementations and are trained under the exact same setting.
Most of the model settings follow the original papers, and some are modified slightly based on our experiments. All models are trained and tested on the same randomly picked, pixel-based training and test data sets, and their settings have been tuned for the best performance. The provenance of each implementation is shown in Table \ref{code-of-model}. Descriptions of the chosen models are provided below.
\begin{table}[htbp]
\centering
\caption{Origin of the model implementations. F denotes that the code comes from the original paper; T denotes our own re-implementation.}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
S-DMM&SSDL&3DCAE&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR \\
\hline
F&T&F&T&T&T&T&T \\
\hline
\end{tabular}}
\label{code-of-model}
\end{table}
\begin{itemize}
\item \textbf{SAE\_LR~\cite{chen2014deep}}. This is the first paper to introduce the autoencoder into hyperspectral image classification, opening a new era of hyperspectral image processing. It adopts a plain autoencoder composed of linear layers to extract the feature. The size of the neighbor region is $5\times 5$, and the first 4 principal components of PCA are kept, from which a spatial feature vector is obtained. Before being input into the model, the raw spectral feature and the spatial feature are stacked to form a joint feature. To reduce the difficulty of training, a greedy layerwise pretraining method is used to train each layer, and the parameters of the encoder and decoder are symmetric. Then, the encoder is concatenated with a linear classifier for fine-tuning. According to~\cite{chen2014deep}, the hidden size is set to 60, 20, and 20 for PaviaU, Salinas, and KSC, respectively.
\item \textbf{S-DMM~\cite{2020deep_metric_embedding}}. This is a relation network that contains an embedding module and relation module implemented by 2D convolutional networks. The model aims to make samples in the feature space have a small intraclass distance and a large interclass distance through a learnable feature embedding function and a metric function. After training, all samples will be assigned to the corresponding clusters. Finally, a simple KNN is used to classify the query sample. In the experiment, the neighbor region of the pixel is fixed as $5\times 5$ and the feature dimension is set to 64.
\item \textbf{3DCAE~\cite{mei2019_3d_convolutional_autoencoder}}. This is a 3D convolutional autoencoder adopting a 3D convolution layer to extract the joint spectral-spatial feature.
First, 3DCAE is trained in an unsupervised manner to reconstruct its input, and then an SVM classifier is used to classify the hidden features on top of 3DCAE. In the experiment, the neighbor region of the pixel is set to $5\times 5$ and 90\% of the samples are used to train the 3D autoencoder.
There are two different hyperparameter settings corresponding to Salinas and PaviaU, and the model was not tested on KSC in~\cite{mei2019_3d_convolutional_autoencoder}. Therefore, on KSC, the model uses the same hyperparameter configuration as on Salinas, because the two data sets are collected by the same sensor.
\item \textbf{SSDL~\cite{yue2016spatial_pyramid_pooling}}. This is a typical two-stream structure that extracts the spectral and spatial features separately through two different branches and merges them at the end. Inspired by~\cite{chen2014deep}, the authors adopt a 1D autoencoder to extract the spectral feature.
In the spatial branch, the model uses a spatial pyramid pooling layer to replace the traditional pooling layer on top of the last convolutional layer. The spatial pyramid pooling layer enables the deep convolutional neural network to generate a fixed-length feature. On the one hand, it converts inputs of different sizes into a fixed-length representation, which is helpful for modules that are sensitive to the input size; on the other hand, it helps the model adapt to objects of different scales, and the output includes features from coarse to fine, achieving multiscale feature fusion. A simple logistic classifier is then used to classify the spectral-spatial feature. In the experiment, 80\% of the data are used to train the autoencoder through greedy layerwise pretraining.
Moreover, in the spatial branch, the size of the neighbor region is set to 42*42 and PCA is used to extract the first principal component. Then, the overall model is trained jointly.
\item \textbf{TwoCnn~\cite{Yang2017deep_transferring_SS}}. This is a two-stream structure based on fine-tuning. In the spectral branch, it adopts a 1D convolutional layer to capture local information of the spectral feature, which is entirely different from SSDL. In particular, transfer learning is used to pretrain the parameters of the model and endow it with good robustness on limited samples. The pairs of source and target data sets are Pavia Center--PaviaU, Indian Pines--Salinas, and Indian Pines--KSC. The model was also not tested on KSC in~\cite{Yang2017deep_transferring_SS}; thus, we regard Indian Pines as the source domain for KSC, given that both data sets come from the same type of sensor. The neighbor region of the pixel is set to 21*21. Additionally, the spectral channel is averaged to reduce the input dimension instead of applying PCA. In the pretraining process, 15\% of the samples of each category of Pavia Center and 90\% of the samples of each category of Indian Pines are treated as the training data set, and the rest serve as the test data set.
To make the number of bands in the source and target data sets equal, we filter out the bands with smaller variance. According to~\cite{Yang2017deep_transferring_SS}, all layers are transferred except for the softmax layer. Finally, the model is fine-tuned on the target data set with the same configuration.
\item \textbf{3DVSCNN~\cite{2020valuable_selection_cnn}}. This is a general CNN-based image classification model, but it uses a 3D convolutional network to extract spectral-spatial features simultaneously, followed by a fully connected network for classification. The main idea of~\cite{2020valuable_selection_cnn} is the use of active learning. The process can be divided into two steps: the selection of valuable samples and the training of the model. In~\cite{2020valuable_selection_cnn}, an SVM serves as a selector that iteratively selects the most valuable samples according to Eq.\eqref{BvSB}.
Then, the 3DVSCNN is trained on the valuable data set. The size of its neighbor region is set to 13*13. During data preprocessing, PCA is used to extract the top 10 components for PaviaU and Salinas and the top 30 components for KSC, which retain more than 99\% of the original spectral information while keeping a clear spatial geometry. In the experiment, the SVM picks 4 samples per iteration until 80\% of the samples have been selected to form the valuable data set, on which the model is then trained; a sketch of this selection loop is given after this list.
\item \textbf{CNN\_HSI~\cite{yu2017convolutional}}. The model combines multilayer $1\times 1$ 2D convolutions followed by local response normalization to capture the feature of hyperspectral images. To avoid the loss of information after PCA, it uses 2D convolution to extract spectral and spatial joint features directly, instead of 3D convolution. At the same time, it also adopts a dropout layer and data augmentation, including rotation and flipping, to improve the generalization of the model and reduce overfitting. After data augmentation, an image can generate eight different orientation images. Moreover, the model removes the linear classifier to decrease the number of trainable parameters. According to~\cite{yu2017convolutional}, the dropout rate is set to 0.6, the size of the neighbor region is $5\times 5$, and the batch size is 16 in the experiment.
\item \textbf{SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}}. Unlike the above methods, SSLstm adopts recurrent networks to process both spectral and spatial features.
In the spectral branch, called SeLstm, the spectral vector is treated as a sequence. In the spatial branch, called SaLstm, each row of the image patch is treated as a sequence element, so that along the column direction the image patch is converted into a sequence. In particular, the predictions of the two branches are fused in the label space to obtain the final prediction result, which is defined as
\begin{equation}
\begin{split}
P(y=j|x_i) = w_{spe}P_{spe}(y=j|x_i)+w_{spa}P_{spa}(y=j|x_i)
\end{split}
\end{equation}
where $P(y=j|x_i)$ denotes the final posterior probability, $P_{spe}(y=j|x_i)$ and $P_{spa}(y=j|x_i)$ denote the posterior probabilities from the spectral and spatial modules, respectively, and $w_{spe}$ and $w_{spa}$ are fusion weights that sum to 1; a minimal sketch of this decision-level fusion is given after this list. In the experiment, the size of the neighbor region is set to 32*32 for PaviaU and Salinas and to 64*64 for KSC. The first principal component of PCA is kept on all data sets. The numbers of hidden nodes of the spectral and spatial branches are 128 and 256, respectively, and $w_{spe}$ and $w_{spa}$ are both set to 0.5.
\end{itemize}
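As referenced in the description of 3DVSCNN above, the following is a minimal sketch of the iterative valuable-sample selection. It is our own illustration of the procedure: the helper names, the per-class seeding of the initial set, and the use of SVM decision values in place of calibrated posteriors for the best-versus-second-best (BvSB) margin are assumptions, not code from~\cite{2020valuable_selection_cnn}.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def select_valuable_samples(X, y, target_fraction=0.8,
                            per_iteration=4, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    # Seed the labeled pool with one random sample per class (assumption).
    selected = [int(rng.choice(np.flatnonzero(y == c)))
                for c in np.unique(y)]
    target = int(target_fraction * n)
    while len(selected) < target:
        pool = np.setdiff1d(np.arange(n), selected)
        svm = SVC(kernel="rbf", decision_function_shape="ovr")
        svm.fit(X[selected], y[selected])
        scores = svm.decision_function(X[pool])
        if scores.ndim == 1:                 # binary case: one margin column
            scores = np.stack([-scores, scores], axis=1)
        top2 = np.sort(scores, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]     # best vs. second best
        order = np.argsort(margin)           # smallest margin = most valuable
        selected.extend(pool[order[:per_iteration]].tolist())
    return np.asarray(selected[:target])
\end{verbatim}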
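Likewise, the label-space fusion of the two SSLstm branches referenced above can be written compactly. The sketch below assumes raw class scores (logits) from each branch and is only illustrative:
\begin{verbatim}
import torch

def fuse_branch_predictions(logits_spe, logits_spa, w_spe=0.5, w_spa=0.5):
    # Softmax turns each branch's scores into posterior probabilities.
    p_spe = torch.softmax(logits_spe, dim=1)
    p_spa = torch.softmax(logits_spa, dim=1)
    # Weighted sum in the label space (w_spe + w_spa = 1).
    p = w_spe * p_spe + w_spa * p_spa
    return p.argmax(dim=1), p
\end{verbatim}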
\subsection{Experimental results and analysis}
The accuracies on the test data set are shown in Table \ref{ACCURACY-PAVIAU-TABLE}, Table \ref{ACCURACY-SALINAS-TABLE}, and Table \ref{ACCURACY-KSC-TABLE}. The corresponding classification maps are shown in Figures~\ref{PaviaU-10}$\sim$\ref{KSC-100}. The final classification result of each pixel is decided by majority voting over the 10 experiments.
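The per-pixel voting can be sketched as follows (the array and function names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def vote_classification_map(prediction_maps):
    # prediction_maps: array of shape (runs, height, width) holding the
    # predicted class index of every pixel for each of the 10 runs.
    stacked = np.asarray(prediction_maps)
    num_classes = int(stacked.max()) + 1
    # Per pixel, count how often each class was predicted across the runs.
    counts = np.stack([(stacked == c).sum(axis=0)
                       for c in range(num_classes)])
    return counts.argmax(axis=0)      # fused (height, width) label map
\end{verbatim}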
Taking Table \ref{ACCURACY-PAVIAU-TABLE} as an example, the experiment is divided into three groups with 10, 50, and 100 samples per category, respectively. Each model is run 10 times in every group; we then report the average per-class accuracy, AA, and OA to compare performance. When the sample size is 10, S-DMM has the highest AA and OA, 91.08\% and 84.55\%, respectively, compared with 71.58\% and 60.00\% for 3DCAE, 75.34\% and 74.79\% for SSDL, 74.60\% and 78.61\% for TwoCnn, 75.64\% and 75.17\% for 3DVSCNN, 72.77\% and 69.59\% for SSLstm, 85.12\% and 82.13\% for CNN\_HSI, and 72.40\% and 66.05\% for SAE\_LR. In addition, S-DMM achieves the best per-class accuracy for the largest number of classes. When the sample size is 50, S-DMM and CNN\_HSI have the highest AA and OA, respectively, which are 96.47\% and 95.21\%. In the last group, 3DVSCNN and CNN\_HSI have the highest AA and OA, which are 97.13\% and 97.35\%. The other two tables lead to similar conclusions.
As shown in Table \ref{ACCURACY-PAVIAU-TABLE}, Table \ref{ACCURACY-SALINAS-TABLE} and Table \ref{ACCURACY-KSC-TABLE}, most models perform better on KSC than on the other two data sets, with the exception of 3DCAE. Especially when only few samples are available, the accuracy of S-DMM reaches 94\%, higher than on the other data sets. This is because the surface objects in KSC have discriminating borders between each other, despite its lower spatial resolution compared with the other data sets, as shown in Figures \ref{KSC-10}$\sim$\ref{KSC-100}. On the other data sets, the models easily misclassify objects that have a similar spatial structure, such as Meadows (class 2) and Bare Soil (class 6) in PaviaU and Grapes untrained (class 8) and Vineyard untrained (class 15) in Salinas, as shown in Figures~\ref{PaviaU-10}$\sim$\ref{Salinas-100}. The accuracy of all models on Grapes untrained is lower than on the other classes of Salinas. As Figure~\ref{accuracy-curve} shows, on all data sets the accuracy of all models improves as the number of samples increases.
As shown in Figure~\ref{accuracy-curve}, when the sample size of each category is 10, S-DMM and CNN\_HSI already achieve stable and excellent performance on all data sets; they are not sensitive to the size of the data set. In Figure~\ref{accuracy-curve-Salinas} and Figure~\ref{accuracy-curve-KSC}, the accuracy of S-DMM and CNN\_HSI improves slightly with increasing sample size, but their gains are smaller than those of the other models. In Figure~\ref{accuracy-curve-PaviaU}, the same conclusion holds when the sample size increases from 50 to 100. This result shows that both methods can be applied to the small-sample problem in hyperspectral images. S-DMM in particular obtains the best AA and OA on PaviaU and KSC in the experiments with a sample size of 10 and still ranks third on Salinas, which again demonstrates that it works well with few samples. Although TwoCnn, 3DVSCNN, and SSLstm achieve good performance on all data sets, they do not work as well when the data set contains very few samples. It is worth mentioning that 3DVSCNN uses fewer samples for training than the other models because it selects only the valuable samples. This approach may not be beneficial for classes with few samples. As shown in Table \ref{ACCURACY-KSC-TABLE}, 3DVSCNN performs well in terms of OA but poorly in terms of AA. For class 7, the accuracy drops when its sample size increases from 10 to 50 and 100, because this class has the smallest total sample size on KSC and therefore contains few valuable samples; moreover, the valuable-sample selection step causes an imbalance between the classes, which decreases the accuracy of class 7. On almost all data sets, the autoencoder-based models achieve poor performance compared with the other models. Although unsupervised learning does not need labeled samples, without any constraints the autoencoder may learn features that are of little use for the task. Moreover, its symmetric architecture results in a vast number of parameters and increases the difficulty of training. Therefore, SSDL and SAE\_LR use greedy layerwise pretraining to alleviate this problem, whereas 3DCAE does not adopt this approach and achieves the worst performance on all data sets.
As shown in Figure~\ref{accuracy-curve}, 3DCAE in particular still has considerable room for improvement.
Overall, in all experiments on limited samples, the classification results based on few-shot learning, active learning, transfer learning, and data augmentation are better than those of the autoencoder-based unsupervised methods. Few-shot learning benefits from exploring the relationships between samples to find a discriminative decision boundary. Active learning benefits from the selection of valuable samples, which lets the model focus more attention on indistinguishable samples. Transfer learning makes good use of the similarity between different data sets, reducing the amount of data required for training as well as the number of trainable parameters and improving the model's robustness. Data augmentation generates more samples from the raw data to expand sample diversity. Although the autoencoder can learn the internal structure of the unlabeled data, the resulting feature representation might not capture task-related characteristics, which is why its performance on small-sample data sets is inferior to that of supervised learning.
\begin{table}[htbp]
\centering
\caption{PaviaU. Classification accuracy obtained by S-DMM~\cite{2020deep_metric_embedding}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, SSDL~\cite{yue2016spatial_pyramid_pooling}, TwoCnn~\cite{Yang2017deep_transferring_SS}, 3DVSCNN~\cite{2020valuable_selection_cnn}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep} on PaviaU. The best accuracies are marked in bold. The "size" in the first line denotes the sample size per category.}
\large
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccc}
\hline
size&classes&S-DMM&3DCAE&SSDL&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR\\\hline
\multirow{11}{*}{10}&1&\textbf{94.34}&49.41&68.33&71.80&63.03&72.59&84.60&66.67\\
&2&73.13&51.60&72.94&\textbf{88.27}&69.22&68.86&67.57&56.68\\
&3&\textbf{86.85}&54.06&53.71&47.58&71.77&48.08&72.80&46.37\\
&4&95.04&94.81&88.58&\textbf{96.29}&85.10&79.06&93.65&80.10\\
&5&\textbf{99.98}&99.86&97.21&94.99&98.61&93.80&99.84&98.81\\
&6&\textbf{85.58}&57.40&66.21&49.75&75.17&62.53&78.35&55.87\\
&7&\textbf{98.55}&80.34&68.17&58.65&65.61&65.39&92.14&81.42\\
&8&\textbf{86.47}&57.97&64.07&66.95&55.77&67.60&78.17&66.83\\
&9&\textbf{99.81}&98.76&98.83&97.15&96.48&97.02&98.92&98.90\\\cline{2-10}
&AA&\textbf{91.08}&71.58&75.34&74.60&75.64&72.77&85.12&72.40\\
&OA&\textbf{84.55}&60.00&74.79&78.61&75.17&69.59&82.13&66.05\\\hline
\hline
\multirow{11}{*}{50}&1&\textbf{97.08}&80.76&76.11&88.50&90.60&82.96&93.66&78.83\\
&2&90.09&63.14&87.39&86.43&93.68&82.42&\textbf{94.82}&65.36\\
&3&\textbf{95.15}&62.57&70.28&69.21&90.64&81.59&94.87&65.50\\
&4&97.35&97.33&89.27&\textbf{98.80}&93.47&91.31&94.49&92.43\\
&5&\textbf{100.00}&\textbf{100.00}&98.14&99.81&99.92&99.67&\textbf{100.00}&99.47\\
&6&\textbf{96.32}&80.15&75.12&84.93&94.15&82.58&88.14&72.30\\
&7&\textbf{99.31}&88.45&75.80&83.12&94.98&92.34&97.21&86.04\\
&8&\textbf{92.97}&75.11&70.57&83.57&91.55&84.75&87.52&79.74\\
&9&\textbf{99.98}&99.69&99.61&99.91&98.72&99.39&99.78&99.29\\\cline{2-10}
&AA&\textbf{96.47}&83.02&82.48&88.25&94.19&88.56&94.50&82.10\\
&OA&94.04&64.17&84.92&90.69&94.23&84.50&\textbf{95.21}&77.42\\\hline
\hline
\multirow{11}{*}{100}&1&\textbf{97.11}&83.05&85.59&92.21&94.38&90.84&94.44&78.64\\
&2&91.64&73.45&86.17&76.86&\textbf{95.90}&83.26&97.75&74.28\\
&3&94.23&73.02&80.29&72.24&\textbf{95.96}&80.66&95.37&79.87\\
&4&98.70&97.87&97.14&\textbf{99.28}&97.65&92.54&95.88&93.54\\
&5&\textbf{100.00}&\textbf{100.00}&99.06&99.89&99.95&99.57&99.99&99.24\\
&6&93.51&86.82&83.16&95.90&\textbf{97.92}&87.61&91.01&69.83\\
&7&\textbf{99.21}&90.17&94.08&89.88&98.39&93.45&98.37&89.42\\
&8&92.73&88.31&88.43&90.03&\textbf{94.21}&90.08&92.41&85.05\\
&9&\textbf{99.99}&99.82&99.65&99.98&99.85&99.80&99.70&99.55\\\cline{2-10}
&AA&96.35&88.06&90.40&90.70&\textbf{97.13}&90.87&96.10&85.49\\
&OA&94.65&70.15&89.33&94.76&97.05&87.19&\textbf{97.35}&81.44 \\\hline
\end{tabular}}
\label{ACCURACY-PAVIAU-TABLE}
\end{table}
\begin{table}[htbp]
\centering
\caption{Salinas. Classification accuracy obtained by S-DMM~\cite{2020deep_metric_embedding}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, SSDL~\cite{yue2016spatial_pyramid_pooling}, TwoCnn~\cite{Yang2017deep_transferring_SS}, 3DVSCNN \cite{2020valuable_selection_cnn}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep} on Salinas. The best accuracies are marked in bold. The "size" in the first line denotes the sample size per category.}
\large
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccc}
\hline
size&classes&S-DMM&3DCAE&SSDL&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR\\\hline
\multirow{18}{*}{10}&1&\textbf{99.45}&99.28&76.01&88.22&97.92&79.38&98.80&86.01\\
&2&99.21&59.04&69.24&78.09&\textbf{99.71}&72.49&98.77&44.21\\
&3&\textbf{96.70}&66.54&69.89&74.80&95.09&86.83&95.48&44.72\\
&4&\textbf{99.56}&98.65&94.96&98.19&99.28&99.45&98.36&97.40\\
&5&\textbf{97.12}&81.94&89.43&96.54&93.35&94.95&92.55&83.93\\
&6&89.64&98.52&96.19&98.96&99.81&93.65&\textbf{99.96}&87.28\\
&7&\textbf{99.82}&97.31&76.83&92.52&96.73&87.82&99.61&96.94\\
&8&\textbf{70.53}&68.11&42.58&54.35&67.89&61.64&77.51&41.58\\
&9&99.02&95.06&89.58&81.22&\textbf{99.42}&90.47&97.19&78.45\\
&10&91.13&9.43&76.40&75.18&\textbf{91.75}&86.66&89.23&30.75\\
&11&\textbf{97.56}&72.26&93.04&92.26&95.26&91.37&95.45&23.52\\
&12&99.87&72.16&86.60&86.40&96.65&95.38&\textbf{99.96}&82.63\\
&13&99.25&\textbf{99.78}&95.46&98.18&96.64&96.90&99.22&92.88\\
&14&96.30&89.93&90.50&96.10&\textbf{99.68}&91.68&96.80&62.40\\
&15&72.28&56.98&65.40&55.60&\textbf{83.86}&75.55&72.03&57.10\\
&16&\textbf{95.29}&44.35&75.89&92.39&92.03&88.43&94.07&76.75\\\cline{2-10}
&AA&93.92&75.58&80.50&84.94&\textbf{94.07}&87.04&94.06&67.91\\
&OA&89.69&71.50&74.29&77.54&90.18&81.20&\textbf{91.31}&67.43\\\hline
\hline
\multirow{18}{*}{50}&1&99.97&98.81&92.70&97.99&\textbf{99.99}&94.18&99.20&85.37\\
&2&99.84&86.97&88.30&91.35&\textbf{99.94}&92.34&99.57&92.51\\
&3&\textbf{99.84}&54.83&87.50&94.87&99.74&97.02&99.62&81.25\\
&4&99.93&98.87&99.41&99.96&99.89&\textbf{99.95}&99.63&98.40\\
&5&\textbf{99.40}&95.62&95.83&98.96&99.38&98.34&98.79&95.12\\
&6&99.92&99.62&98.95&99.87&\textbf{100.00}&98.78&99.98&98.86\\
&7&\textbf{99.92}&98.17&96.47&96.60&99.85&97.80&99.78&98.55\\
&8&68.92&81.74&62.99&68.05&\textbf{85.35}&77.17&77.93&46.04\\
&9&99.76&94.87&95.34&86.01&\textbf{99.99}&96.15&99.71&94.84\\
&10&97.18&12.87&95.31&93.94&\textbf{98.23}&97.23&97.33&77.69\\
&11&\textbf{99.57}&75.82&97.73&97.10&98.59&97.71&99.54&77.14\\
&12&\textbf{99.90}&58.18&97.51&97.16&99.89&98.88&99.84&96.87\\
&13&99.84&99.98&98.55&98.60&\textbf{100.00}&99.12&99.87&97.33\\
&14&98.15&93.80&97.54&99.37&\textbf{99.91}&99.24&99.53&91.49\\
&15&76.12&41.84&69.04&67.21&\textbf{88.77}&86.24&83.39&65.15\\
&16&\textbf{98.87}&69.00&94.34&97.78&98.55&97.64&98.15&91.94\\\cline{2-10}
&AA&96.07&78.81&91.72&92.80&\textbf{98.00}&95.49&96.99&86.78\\
&OA&90.92&74.73&85.79&87.01&\textbf{95.30}&91.37&95.08&79.49\\\hline
\hline
\multirow{18}{*}{100}&1&99.86&98.81&98.22&98.74&\textbf{99.99}&97.86&99.77&92.44\\
&2&99.74&91.88&96.54&96.70&\textbf{99.99}&97.74&99.86&89.46\\
&3&\textbf{99.99}&63.20&95.40&97.47&99.16&98.91&99.79&92.05\\
&4&99.84&99.12&99.29&\textbf{99.95}&99.85&99.78&99.44&99.03\\
&5&99.58&98.24&98.09&99.61&\textbf{99.70}&98.89&99.54&96.32\\
&6&99.99&99.95&99.12&99.79&\textbf{100.00}&99.62&\textbf{100.00}&98.96\\
&7&\textbf{99.93}&98.71&97.14&97.94&99.88&98.97&99.86&98.42\\
&8&67.88&71.43&59.51&66.83&\textbf{90.54}&86.00&79.90&39.73\\
&9&99.81&95.51&94.87&90.65&\textbf{99.98}&98.15&99.75&96.34\\
&10&96.54&22.92&96.97&96.21&97.77&\textbf{98.55}&97.29&84.35\\
&11&99.75&76.67&99.28&99.25&\textbf{99.82}&99.39&99.70&92.76\\
&12&\textbf{100.00}&64.12&99.39&98.01&99.99&99.84&99.99&96.97\\
&13&99.87&\textbf{99.98}&98.74&99.34&\textbf{99.98}&99.38&99.75&97.48\\
&14&98.66&94.73&98.62&99.72&\textbf{99.91}&99.44&99.67&93.52\\
&15&78.73&63.65&83.03&70.16&91.31&86.77&\textbf{91.86}&69.09\\
&16&\textbf{99.27}&79.70&96.65&99.26&99.26&98.69&99.10&93.21\\\cline{2-10}
&AA&96.21&82.41&94.43&94.35&\textbf{98.57}&97.37&97.83&89.38\\
&OA&91.56&76.61&88.67&90.25&\textbf{96.89}&94.41&96.28&81.95\\\hline
\end{tabular}}
\label{ACCURACY-SALINAS-TABLE}
\end{table}
\begin{table}[htbp]
\centering
\caption{KSC. Classification accuracy obtained by S-DMM~\cite{2020deep_metric_embedding}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, SSDL~\cite{yue2016spatial_pyramid_pooling}, TwoCnn~\cite{Yang2017deep_transferring_SS}, 3DVSCNN~\cite{2020valuable_selection_cnn}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep} on KSC. The best accuracies are marked in bold. The "size" in the first line denotes the sample size per category.}
\large
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccc}
\hline
size&classes&S-DMM&3DCAE&SSDL&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR\\\hline
\multirow{15}{*}{10}&1&93.49&35.46&79.21&67.11&\textbf{95.33}&73.58&92.17&83.95\\
&2&\textbf{89.74}&49.40&67.68&58.37&40.39&68.45&81.67&69.01\\
&3&\textbf{95.16}&40.41&76.87&77.20&75.41&81.59&86.91&50.61\\
&4&58.72&5.54&70.33&75.12&35.87&\textbf{76.16}&60.83&20.21\\
&5&87.95&33.38&81.26&\textbf{88.08}&47.42&87.22&64.37&23.11\\
&6&\textbf{93.42}&51.05&79.18&66.44&64.29&76.71&66.16&45.39\\
&7&\textbf{98.63}&16.32&95.26&92.74&57.79&96.42&96.00&63.58\\
&8&\textbf{97.93}&46.44&72.42&61.92&71.88&52.95&85.77&58.05\\
&9&\textbf{94.88}&86.25&87.00&92.31&79.00&90.65&91.06&76.24\\
&10&\textbf{98.12}&8.76&72.59&86.27&56.57&89.04&85.13&63.12\\
&11&\textbf{97.51}&76.21&88.68&78.17&86.99&89.32&95.60&89.98\\
&12&\textbf{93.69}&8.54&83.65&78.09&60.79&83.96&89.66&69.59\\
&13&\textbf{100.00}&46.95&99.98&\textbf{100.00}&84.92&\textbf{100.00}&99.95&97.90\\\cline{2-10}
&AA&\textbf{92.25}&38.82&81.09&78.60&65.90&82.00&84.25&62.36\\
&OA&\textbf{94.48}&49.73&83.71&82.29&77.40&83.07&91.13&72.68\\\hline
\hline
\multirow{15}{*}{50}&1&97.99&22.53&96.12&72.95&\textbf{98.45}&96.77&94.40&88.21\\
&2&\textbf{98.24}&30.98&94.56&94.04&39.90&98.19&91.50&78.50\\
&3&98.69&45.10&96.55&90.10&99.13&\textbf{99.47}&94.47&83.06\\
&4&78.22&3.86&93.51&92.33&74.01&\textbf{98.32}&76.49&43.07\\
&5&92.16&40.54&96.94&97.12&64.32&\textbf{99.55}&87.03&53.33\\
&6&98.49&62.07&96.70&93.80&77.21&\textbf{99.72}&70.89&51.90\\
&7&98.36&18.00&99.64&97.82&20.36&\textbf{100.00}&98.00&84.73\\
&8&\textbf{99.21}&43.04&91.92&90.60&96.25&97.40&93.86&77.77\\
&9&\textbf{99.96}&89.77&98.57&89.55&63.91&98.83&98.77&86.47\\
&10&\textbf{99.92}&12.12&93.70&95.56&54.72&99.52&91.67&85.28\\
&11&98.62&80.38&97.86&98.40&90.95&\textbf{99.11}&87.75&96.56\\
&12&\textbf{99.07}&19.85&94.99&95.01&87.37&99.67&89.54&82.19\\
&13&\textbf{100.00}&91.24&\textbf{100.00}&90.00&96.77&99.46&98.95&99.44\\\cline{2-10}
&AA&96.84&43.04&96.24&92.10&74.10&\textbf{98.92}&90.25&77.73\\
&OA&98.68&54.01&96.88&96.61&96.03&\textbf{98.72}&97.39&84.93\\\hline
\hline
\multirow{15}{*}{100}&1&98.17&19.03&97.41&96.51&98.94&\textbf{99.74}&93.93&89.77\\
&2&98.74&34.13&98.60&99.58&56.50&\textbf{99.79}&89.93&80.77\\
&3&99.55&57.18&96.67&99.42&\textbf{99.81}&99.23&98.33&82.88\\
&4&88.29&1.38&97.96&98.68&88.29&\textbf{99.14}&85.86&53.95\\
&5&93.11&52.46&99.51&\textbf{100.00}&76.23&\textbf{100.00}&93.77&58.52\\
&6&\textbf{99.61}&59.77&98.68&97.36&80.62&99.53&74.96&58.22\\
&7&\textbf{100.00}&8.00&\textbf{100.00}&\textbf{100.00}&32.00&\textbf{100.00}&98.00&86.00\\
&8&\textbf{99.79}&51.81&95.53&98.07&98.91&99.40&97.37&83.96\\
&9&99.74&87.40&98.74&98.74&63.93&99.12&\textbf{99.76}&91.95\\
&10&\textbf{100.00}&13.16&98.22&99.61&72.47&\textbf{100.00}&97.70&91.28\\
&11&99.91&83.76&99.06&\textbf{99.97}&94.42&99.81&99.84&97.81\\
&12&99.33&24.94&97.99&99.03&94.32&\textbf{99.80}&95.31&85.73\\
&13&\textbf{100.00}&90.07&99.96&99.94&97.62&99.94&99.85&99.58\\\cline{2-10}
&AA&98.17&44.85&98.33&98.99&81.08&\textbf{99.65}&94.20&81.57\\
&OA&98.96&59.63&98.75&99.15&98.55&\textbf{99.68}&98.05&89.15\\\hline
\end{tabular}}
\label{ACCURACY-KSC-TABLE}
\end{table}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{accuracy-curve-PaviaU}
\includegraphics[width=0.47\textwidth]{figure/accuracy-curve/PaviaU.pdf}
}
\subfigure[]{
\label{accuracy-curve-Salinas}
\includegraphics[width=0.47\textwidth]{figure/accuracy-curve/Salinas.pdf}
}
\subfigure[]{
\label{accuracy-curve-KSC}
\includegraphics[width=0.47\textwidth]{figure/accuracy-curve/KSC.pdf}
}
\caption{Change in accuracy over the number of samples for each category. \subref{accuracy-curve-PaviaU} PaviaU. \subref{accuracy-curve-Salinas} Salinas. \subref{accuracy-curve-KSC} KSC.}
\label{accuracy-curve}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{PaviaU-10-Original}
\includegraphics[scale=0.25]{figure/map/Original/PaviaU.pdf}
}
\subfigure[]{
\label{PaviaU-10-S-DMM}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/S-DMM.pdf}
}
\subfigure[]{
\label{PaviaU-10-3DCAE}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/3DCAE.pdf}
}
\subfigure[]{
\label{PaviaU-10-SSDL}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/SSDL.pdf}
}
\subfigure[]{
\label{PaviaU-10-TwoCnn}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/TwoCnn.pdf}
}
\subfigure[]{
\label{PaviaU-10-3DVSCNN}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/3DVSCNN.pdf}
}
\subfigure[]{
\label{PaviaU-10-SSLstm}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/SSLstm.pdf}
}
\subfigure[]{
\label{PaviaU-10-CNN_HSI}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/CNN_HSI.pdf}
}
\subfigure[]{
\label{PaviaU-10-SAE_LR}
\includegraphics[scale=0.25]{figure/map/PaviaU/10/SAE_LR.pdf}
}
\caption{Classification maps on the PaviaU data set (10 samples per class). \subref{PaviaU-10-Original} Original. \subref{PaviaU-10-S-DMM} S-DMM. \subref{PaviaU-10-3DCAE} 3DCAE. \subref{PaviaU-10-SSDL} SSDL. \subref{PaviaU-10-TwoCnn} TwoCnn. \subref{PaviaU-10-3DVSCNN} 3DVSCNN. \subref{PaviaU-10-SSLstm} SSLstm. \subref{PaviaU-10-CNN_HSI} CNN\_HSI. \subref{PaviaU-10-SAE_LR} SAE\_LR.}
\label{PaviaU-10}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{PaviaU-50-Original}
\includegraphics[scale=0.25]{figure/map/Original/PaviaU.pdf}
}
\subfigure[]{
\label{PaviaU-50-S-DMM}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/S-DMM.pdf}
}
\subfigure[]{
\label{PaviaU-50-3DCAE}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/3DCAE.pdf}
}
\subfigure[]{
\label{PaviaU-50-SSDL}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/SSDL.pdf}
}
\subfigure[]{
\label{PaviaU-50-TwoCnn}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/TwoCnn.pdf}
}
\subfigure[]{
\label{PaviaU-50-3DVSCNN}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/3DVSCNN.pdf}
}
\subfigure[]{
\label{PaviaU-50-SSLstm}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/SSLstm.pdf}
}
\subfigure[]{
\label{PaviaU-50-CNN_HSI}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/CNN_HSI.pdf}
}
\subfigure[]{
\label{PaviaU-50-SAE_LR}
\includegraphics[scale=0.25]{figure/map/PaviaU/50/SAE_LR.pdf}
}
\caption{Classification maps on the PaviaU data set (50 samples per class). \subref{PaviaU-50-Original} Original. \subref{PaviaU-50-S-DMM} S-DMM. \subref{PaviaU-50-3DCAE} 3DCAE. \subref{PaviaU-50-SSDL} SSDL. \subref{PaviaU-50-TwoCnn} TwoCnn. \subref{PaviaU-50-3DVSCNN} 3DVSCNN. \subref{PaviaU-50-SSLstm} SSLstm. \subref{PaviaU-50-CNN_HSI} CNN\_HSI. \subref{PaviaU-50-SAE_LR} SAE\_LR.}
\label{PaviaU-50}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{PaviaU-100-Original}
\includegraphics[scale=0.25]{figure/map/Original/PaviaU.pdf}
}
\subfigure[]{
\label{PaviaU-100-S-DMM}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/S-DMM.pdf}
}
\subfigure[]{
\label{PaviaU-100-3DCAE}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/3DCAE.pdf}
}
\subfigure[]{
\label{PaviaU-100-SSDL}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/SSDL.pdf}
}
\subfigure[]{
\label{PaviaU-100-TwoCnn}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/TwoCnn.pdf}
}
\subfigure[]{
\label{PaviaU-100-3DVSCNN}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/3DVSCNN.pdf}
}
\subfigure[]{
\label{PaviaU-100-SSLstm}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/SSLstm.pdf}
}
\subfigure[]{
\label{PaviaU-100-CNN_HSI}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/CNN_HSI.pdf}
}
\subfigure[]{
\label{PaviaU-100-SAE_LR}
\includegraphics[scale=0.25]{figure/map/PaviaU/100/SAE_LR.pdf}
}
\caption{Classification maps on the PaviaU data set (100 samples per class). \subref{PaviaU-100-Original} Original. \subref{PaviaU-100-S-DMM} S-DMM. \subref{PaviaU-100-3DCAE} 3DCAE. \subref{PaviaU-100-SSDL} SSDL. \subref{PaviaU-100-TwoCnn} TwoCnn. \subref{PaviaU-100-3DVSCNN} 3DVSCNN. \subref{PaviaU-100-SSLstm} SSLstm. \subref{PaviaU-100-CNN_HSI} CNN\_HSI. \subref{PaviaU-100-SAE_LR} SAE\_LR.}
\label{PaviaU-100}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{Salinas-10-Original}
\includegraphics[scale=0.4]{figure/map/Original/Salinas.pdf}
}
\subfigure[]{
\label{Salinas-10-S-DMM}
\includegraphics[scale=0.4]{figure/map/Salinas/10/S-DMM.pdf}
}
\subfigure[]{
\label{Salinas-10-3DCAE}
\includegraphics[scale=0.4]{figure/map/Salinas/10/3DCAE.pdf}
}
\subfigure[]{
\label{Salinas-10-SSDL}
\includegraphics[scale=0.4]{figure/map/Salinas/10/SSDL.pdf}
}
\subfigure[]{
\label{Salinas-10-TwoCnn}
\includegraphics[scale=0.4]{figure/map/Salinas/10/TwoCnn.pdf}
}
\subfigure[]{
\label{Salinas-10-3DVSCNN}
\includegraphics[scale=0.4]{figure/map/Salinas/10/3DVSCNN.pdf}
}
\subfigure[]{
\label{Salinas-10-SSLstm}
\includegraphics[scale=0.4]{figure/map/Salinas/10/SSLstm.pdf}
}
\subfigure[]{
\label{Salinas-10-CNN_HSI}
\includegraphics[scale=0.4]{figure/map/Salinas/10/CNN_HSI.pdf}
}
\subfigure[]{
\label{Salinas-10-SAE_LR}
\includegraphics[scale=0.4]{figure/map/Salinas/10/SAE_LR.pdf}
}
\caption{Classification maps on the Salinas data set (10 samples per class). \subref{Salinas-10-Original} Original. \subref{Salinas-10-S-DMM} S-DMM. \subref{Salinas-10-3DCAE} 3DCAE. \subref{Salinas-10-SSDL} SSDL. \subref{Salinas-10-TwoCnn} TwoCnn. \subref{Salinas-10-3DVSCNN} 3DVSCNN. \subref{Salinas-10-SSLstm} SSLstm. \subref{Salinas-10-CNN_HSI} CNN\_HSI. \subref{Salinas-10-SAE_LR} SAE\_LR.}
\label{Salinas-10}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{Salinas-50-Original}
\includegraphics[scale=0.4]{figure/map/Original/Salinas.pdf}
}
\subfigure[]{
\label{Salinas-50-S-DMM}
\includegraphics[scale=0.4]{figure/map/Salinas/50/S-DMM.pdf}
}
\subfigure[]{
\label{Salinas-50-3DCAE}
\includegraphics[scale=0.4]{figure/map/Salinas/50/3DCAE.pdf}
}
\subfigure[]{
\label{Salinas-50-SSDL}
\includegraphics[scale=0.4]{figure/map/Salinas/50/SSDL.pdf}
}
\subfigure[]{
\label{Salinas-50-TwoCnn}
\includegraphics[scale=0.4]{figure/map/Salinas/50/TwoCnn.pdf}
}
\subfigure[]{
\label{Salinas-50-3DVSCNN}
\includegraphics[scale=0.4]{figure/map/Salinas/50/3DVSCNN.pdf}
}
\subfigure[]{
\label{Salinas-50-SSLstm}
\includegraphics[scale=0.4]{figure/map/Salinas/50/SSLstm.pdf}
}
\subfigure[]{
\label{Salinas-50-CNN_HSI}
\includegraphics[scale=0.4]{figure/map/Salinas/50/CNN_HSI.pdf}
}
\subfigure[]{
\label{Salinas-50-SAE_LR}
\includegraphics[scale=0.4]{figure/map/Salinas/50/SAE_LR.pdf}
}
\caption{Classification maps on the Salinas data set (50 samples per class). \subref{Salinas-50-Original} Original. \subref{Salinas-50-S-DMM} S-DMM. \subref{Salinas-50-3DCAE} 3DCAE. \subref{Salinas-50-SSDL} SSDL. \subref{Salinas-50-TwoCnn} TwoCnn. \subref{Salinas-50-3DVSCNN} 3DVSCNN. \subref{Salinas-50-SSLstm} SSLstm. \subref{Salinas-50-CNN_HSI} CNN\_HSI.
\subref{Salinas-50-SAE_LR} SAE\_LR.}
\label{Salinas-50}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{Salinas-100-Original}
\includegraphics[scale=0.4]{figure/map/Original/Salinas.pdf}
}
\subfigure[]{
\label{Salinas-100-S-DMM}
\includegraphics[scale=0.4]{figure/map/Salinas/100/S-DMM.pdf}
}
\subfigure[]{
\label{Salinas-100-3DCAE}
\includegraphics[scale=0.4]{figure/map/Salinas/100/3DCAE.pdf}
}
\subfigure[]{
\label{Salinas-100-SSDL}
\includegraphics[scale=0.4]{figure/map/Salinas/100/SSDL.pdf}
}
\subfigure[]{
\label{Salinas-100-TwoCnn}
\includegraphics[scale=0.4]{figure/map/Salinas/100/TwoCnn.pdf}
}
\subfigure[]{
\label{Salinas-100-3DVSCNN}
\includegraphics[scale=0.4]{figure/map/Salinas/100/3DVSCNN.pdf}
}
\subfigure[]{
\label{Salinas-100-SSLstm}
\includegraphics[scale=0.4]{figure/map/Salinas/100/SSLstm.pdf}
}
\subfigure[]{
\label{Salinas-100-CNN_HSI}
\includegraphics[scale=0.4]{figure/map/Salinas/100/CNN_HSI.pdf}
}
\subfigure[]{
\label{Salinas-100-SAE_LR}
\includegraphics[scale=0.4]{figure/map/Salinas/100/SAE_LR.pdf}
}
\caption{Classification maps on the Salinas data set (100 samples per class). \subref{Salinas-100-Original} Original. \subref{Salinas-100-S-DMM} S-DMM. \subref{Salinas-100-3DCAE} 3DCAE. \subref{Salinas-100-SSDL} SSDL. \subref{Salinas-100-TwoCnn} TwoCnn. \subref{Salinas-100-3DVSCNN} 3DVSCNN. \subref{Salinas-100-SSLstm} SSLstm. \subref{Salinas-100-CNN_HSI} CNN\_HSI. \subref{Salinas-100-SAE_LR} SAE\_LR.}
\label{Salinas-100}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{KSC-10-Original}
\includegraphics[scale=0.17]{figure/map/Original/KSC.pdf}
}
\subfigure[]{
\label{KSC-10-S-DMM}
\includegraphics[scale=0.17]{figure/map/KSC/10/S-DMM.pdf}
}
\subfigure[]{
\label{KSC-10-3DCAE}
\includegraphics[scale=0.17]{figure/map/KSC/10/3DCAE.pdf}
}
\subfigure[]{
\label{KSC-10-SSDL}
\includegraphics[scale=0.17]{figure/map/KSC/10/SSDL.pdf}
}
\subfigure[]{
\label{KSC-10-TwoCnn}
\includegraphics[scale=0.17]{figure/map/KSC/10/TwoCNN.pdf}
}
\subfigure[]{
\label{KSC-10-3DVSCNN}
\includegraphics[scale=0.17]{figure/map/KSC/10/3DVSCNN.pdf}
}
\subfigure[]{
\label{KSC-10-SSLstm}
\includegraphics[scale=0.17]{figure/map/KSC/10/SSLstm.pdf}
}
\subfigure[]{
\label{KSC-10-CNN_HSI}
\includegraphics[scale=0.17]{figure/map/KSC/10/CNN_HSI.pdf}
}
\subfigure[]{
\label{KSC-10-SAE_LR}
\includegraphics[scale=0.17]{figure/map/KSC/10/SAE_LR.pdf}
}
\caption{Classification maps on the KSC data set (10 samples per class). \subref{KSC-10-Original} Original. \subref{KSC-10-S-DMM} S-DMM. \subref{KSC-10-3DCAE} 3DCAE. \subref{KSC-10-SSDL} SSDL. \subref{KSC-10-TwoCnn} TwoCnn. \subref{KSC-10-3DVSCNN} 3DVSCNN. \subref{KSC-10-SSLstm} SSLstm. \subref{KSC-10-CNN_HSI} CNN\_HSI. \subref{KSC-10-SAE_LR} SAE\_LR.}
\label{KSC-10}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{KSC-50-Original}
\includegraphics[scale=0.17]{figure/map/Original/KSC.pdf}
}
\subfigure[]{
\label{KSC-50-S-DMM}
\includegraphics[scale=0.17]{figure/map/KSC/50/S-DMM.pdf}
}
\subfigure[]{
\label{KSC-50-3DCAE}
\includegraphics[scale=0.17]{figure/map/KSC/50/3DCAE.pdf}
}
\subfigure[]{
\label{KSC-50-SSDL}
\includegraphics[scale=0.17]{figure/map/KSC/50/SSDL.pdf}
}
\subfigure[]{
\label{KSC-50-TwoCnn}
\includegraphics[scale=0.17]{figure/map/KSC/50/TwoCNN.pdf}
}
\subfigure[]{
\label{KSC-50-3DVSCNN}
\includegraphics[scale=0.17]{figure/map/KSC/50/3DVSCNN.pdf}
}
\subfigure[]{
\label{KSC-50-SSLstm}
\includegraphics[scale=0.17]{figure/map/KSC/50/SSLstm.pdf}
}
\subfigure[]{
\label{KSC-50-CNN_HSI}
\includegraphics[scale=0.17]{figure/map/KSC/50/CNN_HSI.pdf}
}
\subfigure[]{
\label{KSC-50-SAE_LR}
\includegraphics[scale=0.17]{figure/map/KSC/50/SAE_LR.pdf}
}
\caption{Classification maps on the KSC data set (50 samples per class). \subref{KSC-50-Original} Original. \subref{KSC-50-S-DMM} S-DMM. \subref{KSC-50-3DCAE} 3DCAE. \subref{KSC-50-SSDL} SSDL. \subref{KSC-50-TwoCnn} TwoCnn. \subref{KSC-50-3DVSCNN} 3DVSCNN. \subref{KSC-50-SSLstm} SSLstm. \subref{KSC-50-CNN_HSI} CNN\_HSI. \subref{KSC-50-SAE_LR} SAE\_LR.}
\label{KSC-50}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{KSC-100-Original}
\includegraphics[scale=0.17]{figure/map/Original/KSC.pdf}
}
\subfigure[]{
\label{KSC-100-S-DMM}
\includegraphics[scale=0.17]{figure/map/KSC/100/S-DMM.pdf}
}
\subfigure[]{
\label{KSC-100-3DCAE}
\includegraphics[scale=0.17]{figure/map/KSC/100/3DCAE.pdf}
}
\subfigure[]{
\label{KSC-100-SSDL}
\includegraphics[scale=0.17]{figure/map/KSC/100/SSDL.pdf}
}
\subfigure[]{
\label{KSC-100-TwoCnn}
\includegraphics[scale=0.17]{figure/map/KSC/100/TwoCNN.pdf}
}
\subfigure[]{
\label{KSC-100-3DVSCNN}
\includegraphics[scale=0.17]{figure/map/KSC/100/3DVSCNN.pdf}
}
\subfigure[]{
\label{KSC-100-SSLstm}
\includegraphics[scale=0.17]{figure/map/KSC/100/SSLstm.pdf}
}
\subfigure[]{
\label{KSC-100-CNN_HSI}
\includegraphics[scale=0.17]{figure/map/KSC/100/CNN_HSI.pdf}
}
\subfigure[]{
\label{KSC-100-SAE_LR}
\includegraphics[scale=0.17]{figure/map/KSC/100/SAE_LR.pdf}
}
\caption{Classification maps on the KSC data set (100 samples per class). \subref{KSC-100-Original} Original. \subref{KSC-100-S-DMM} S-DMM. \subref{KSC-100-3DCAE} 3DCAE. \subref{KSC-100-SSDL} SSDL. \subref{KSC-100-TwoCnn} TwoCnn. \subref{KSC-100-3DVSCNN} 3DVSCNN. \subref{KSC-100-SSLstm} SSLstm. \subref{KSC-100-CNN_HSI} CNN\_HSI. \subref{KSC-100-SAE_LR} SAE\_LR.}
\label{KSC-100}
\end{figure}
\subsection{Model parameters}
To further explore why the models achieve different results on the benchmark data sets, we also counted the number of trainable parameters of each framework (including the decoder module, where present) on the different data sets, as shown in Table \ref{AMOUNT}. On all data sets, the model with the fewest trainable parameters is SAE\_LR, followed by CNN\_HSI, while TwoCnn has the most. SAE\_LR is a lightweight architecture consisting of simple linear layers, but its performance is poor. Different from other 2D-convolution approaches in HSI, CNN\_HSI solely uses $1\times 1$ kernels to process an image and uses a $1\times 1$ convolution layer as the classifier instead of a linear layer, which greatly reduces the number of trainable parameters; the next smallest is S-DMM. This also explains why S-DMM and CNN\_HSI are less affected by increases in sample size yet very effective on few samples, and why overfitting is of little concern in these approaches. Stacking the spectral and spatial features to generate the final fused feature is the main reason for the large number of parameters of TwoCnn. However, despite having more than a million trainable parameters, it works well on limited samples, benefiting from transfer learning, which decreases the number of parameters that actually have to be trained and achieves good performance on all target data sets.
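The counts in Table \ref{AMOUNT} refer to trainable parameters; in PyTorch, such a count can be obtained, for instance, as in the following short sketch:
\begin{verbatim}
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    # Sum the element counts of all parameters updated during training.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
\end{verbatim}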
Next, the models with the most parameters are 3DCAE and SSLstm. 3DCAE has roughly seven to nine times as many trainable parameters as SSDL, even though SSDL contains not only a 1D autoencoder in the spectral branch but also a spatial branch based on a 2D convolutional network; nevertheless, 3DCAE still performs worse than SSDL.
Although 3D convolution and pooling modules largely avoid the loss of structural information caused by flattening operations, the complexity of the 3D structure and the symmetric architecture of the autoencoder increase the number of model parameters, making the model prone to overfitting. 3DVSCNN also uses a 3D convolutional module but first reduces the number of redundant bands by PCA, and it performs better than 3DCAE; the same strategy could be applied to 3DCAE to decrease the number of model parameters while still exploiting the ability of 3D convolution to extract spectral and spatial information simultaneously. The main contribution to the parameter count of SSLstm comes from the spatial branch. Although the gate structure of the LSTM improves the model's long- and short-term memory capabilities, it increases the complexity of the model, and when the number of hidden units increases, the number of parameters grows rapidly as well.
Perhaps it is the coupling between the spectral features and the recurrent network that keeps the performance of SSLstm from being as poor as that of 3DCAE, which has a similar number of parameters, on all data sets; SSLstm even achieves superior results on KSC, although it adopts no dedicated technique for the few-sample problem. This finding also shows that supervised learning is better than unsupervised learning in some tasks.
\begin{table}[htbp]
\centering
\caption{The number of trainable parameters}
\begin{tabular}{crrr}
\toprule
&PaviaU&Salinas&KSC \\
\midrule
S-DMM&33921&40385&38593\\
3DCAE&256563&447315&425139 \\
SSDL&35650&48718&44967 \\
TwoCnn&1379399&1542206&1501003\\
3DVSCNN&42209&42776&227613\\
SSLstm&367506&370208&401818\\
CNN\_HSI&22153&33536&31753\\
SAE\_LR&\textbf{21426}&\textbf{5969}&\textbf{5496}\\
\bottomrule
\end{tabular}
\label{AMOUNT}
\end{table}
\subsection{The speed of model convergence}
In addition, we compare the convergence speed of the models according to the changes in training loss during the first 200 epochs of each group of experiments (see Figures~\ref{PAVIAU-CURVE}$\sim$\ref{KSC-CURVE}). Because the autoencoder and the classifier of 3DCAE are trained separately, and all data are used when training the autoencoder, it is not comparable to the other models and is therefore not listed here. On all data sets, S-DMM has the fastest convergence: after approximately 3 epochs the training loss becomes stable, thanks to its small number of parameters. Although CNN\_HSI performs similarly to S-DMM and has even fewer parameters, it converges more slowly, and its learning curve sometimes oscillates. The second-fastest to converge is TwoCnn, mainly because transfer learning provides well-positioned initial parameters and only few parameters actually require training.
Thus, it needs only a few epochs of fine-tuning on the target data set. Moreover, the training curves of most models stabilize after 100 epochs. The training loss of SSLstm oscillates severely on all data sets, especially in the SeLstm branch, where the loss sometimes hardly decreases. When the sequence is very long, the recurrent neural network becomes more susceptible to vanishing or exploding gradients.
Since the pixels of a hyperspectral image usually contain hundreds of bands, the training loss of SeLstm decreases with difficulty or oscillates. The spatial branch does not suffer from this as severely, because the length of the spatial sequence, which depends on the patch size, is much shorter than that of the spectral sequence. During training, the LSTM-based models also take a considerable amount of time because the recurrence cannot be parallelized.
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{PAVIAU-CURVE-10}
\includegraphics[width=0.46\textwidth]{figure/curve/PaviaU/sample10.pdf}
}
\subfigure[]{
\label{PAVIAU-CURVE-50}
\includegraphics[width=0.46\textwidth]{figure/curve/PaviaU/sample50.pdf}
}
\subfigure[]{
\label{PAVIAU-CURVE-100}
\includegraphics[width=0.46\textwidth]{figure/curve/PaviaU/sample100.pdf}
}
\caption{Training Loss on the PaviaU data set. \subref{PAVIAU-CURVE-10} 10 samples per class. \subref{PAVIAU-CURVE-50} 50 samples per class. \subref{PAVIAU-CURVE-100} 100 samples per class.}
\label{PAVIAU-CURVE}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{SALINAS-CURVE-10}
\includegraphics[width=0.46\textwidth]{figure/curve/Salinas/sample10.pdf}
}
\subfigure[]{
\label{SALINAS-CURVE-50}
\includegraphics[width=0.46\textwidth]{figure/curve/Salinas/sample50.pdf}
}
\subfigure[]{
\label{SALINAS-CURVE-100}
\includegraphics[width=0.46\textwidth]{figure/curve/Salinas/sample100.pdf}
}
\caption{Training Loss on the Salinas data set.\subref{SALINAS-CURVE-10} 10 samples per class. \subref{SALINAS-CURVE-50} 50 samples per class. \subref{SALINAS-CURVE-100} 100 samples per class.}
\label{SALINAS-CURVE}
\end{figure}
\begin{figure}[hbpt]
\centering
\subfigure[]{
\label{KSC-CURVE-10}
\includegraphics[width=0.46\textwidth]{figure/curve/KSC/sample10.pdf}
}
\subfigure[]{
\label{KSC-CURVE-50}
\includegraphics[width=0.46\textwidth]{figure/curve/KSC/sample50.pdf}
}
\subfigure[]{
\label{KSC-CURVE-100}
\includegraphics[width=0.46\textwidth]{figure/curve/KSC/sample100.pdf}
}
\caption{Training Loss on the KSC data set. \subref{KSC-CURVE-10} 10 samples per class. \subref{KSC-CURVE-50} 50 samples per class. \subref{KSC-CURVE-100} 100 samples per class.}
\label{KSC-CURVE}
\end{figure}
\section{Conclusions}
\label{conclutions}
In this paper, we introduce the current research difficulty in hyperspectral image classification, namely the scarcity of labeled samples, and discuss popular learning frameworks. Furthermore, we introduce several popular learning paradigms for the small-sample problem, such as the autoencoder, few-shot learning, transfer learning, active learning, and data augmentation. Based on these methods, we select representative models and conduct experiments on hyperspectral benchmark data sets. We design three groups of experiments to explore the performance of the models on small-sample data sets, document how they change with increasing sample size, and evaluate their effectiveness and robustness through AA and OA. We also compare the number of parameters and the convergence speed of the models to further analyze their differences. Finally, we
highlight several possible future directions of hyperspectral image classification on small samples:
\begin{itemize}
\item Autoencoders, including linear autoencoders and 3D convolutional autoencoders, have been widely explored and applied to the small-sample problem in HSI. Nevertheless, their performance is still far from satisfactory.
Future development should therefore focus on few-shot learning, transfer learning, and active learning.
\item Different learning paradigms can be fused to exploit the advantages of each approach. For example, transfer learning can be combined with active learning: valuable samples are selected on the source data set, and the model is then transferred to the target data set, avoiding an imbalance in the class sample sizes.
\item According to the experimental results, the RNN is also suitable for hyperspectral image classification, but little work has focused on combining the above learning paradigms with RNNs. Recently, the transformer, an alternative to the RNN that is capable of parallel processing, has been introduced into the computer vision domain and has achieved good performance on tasks such as object detection. It could likewise be employed for hyperspectral image classification and combined with these learning paradigms.
\item Graph convolutional networks have attracted growing interest in hyperspectral image classification. Fully connected, convolutional, and recurrent networks are only suitable for Euclidean data and cannot handle non-Euclidean data directly, and an image can be regarded as a special case of Euclidean data. Thus, many studies~\cite{wan2019multiscale, liu2020semisupervised, wan2020hyperspectral} utilize graph convolutional networks to classify HSI.
\item The need for a large number of labeled samples stems from the tremendous number of trainable parameters of deep learning models. Many methods, such as group convolution~\cite{howard2017mobilenets}, have been proposed to make deep neural networks lighter. Hence, constructing even more lightweight models is also a promising future direction.
\end{itemize}
Although classification with few labels saves much time and labor in collecting and labeling diverse samples, the resulting models easily overfit and generalize poorly. Thus, avoiding overfitting and improving generalization remain the major challenges for the practical application of few-label HSI classification.
\section*{Acknowledgments}
The work is partly supported by the National Natural Science Foundation of China (Grant No. 61976144).
\bibliographystyle{elsarticle-num}
\section{Introduction}
Computed Tomography Angiography (CTA) is a commonly used modality in many clinical scenarios, including the diagnosis of ischemic strokes. An ischemic stroke is caused by an occluded blood vessel resulting in a lack of oxygen in the affected brain parenchyma. An occlusion in the internal carotid artery (ICA), the proximal middle cerebral artery (MCA) or the basilar artery is often referred to as a large vessel occlusion (LVO). These LVOs are visible in CTA scans as a discontinuation of contrast agent in the vascular tree, which is a complex system of arteries and veins that varies from patient to patient. Consequently, the diagnosis takes time and requires expertise. Clinics and patients would therefore benefit from an automated classification of LVOs on CTA scans.
Prior research in this field has been described in the literature. Amukotuwa et al.~detected LVOs in CTA scans with a pipeline consisting of 14 steps and tested their commercially available algorithm on two different data cohorts, reporting a performance of 0.86 ROC AUC in the first trial with 477 patients \cite{amukotuwa2019automated} and 0.94 in the second trial \cite{amukotuwa2019fast} with 926 patients. Stib et al.~\cite{stib2020detecting} computed maximum-intensity projections of segmentations of the vessel tree based on multi-phase CTA scans (three CTA scans covering the arterial, peak venous and late venous phases), and trained a 2D-DenseNet \cite{huang2017densely} on 424 patients to classify the presence of an LVO. They report ROC AUC values between 0.74 and 0.85 depending on the phase. Luijten et al.'s work \cite{luijten2021diagnostic} investigated the performance of another commercially available LVO detection algorithm based on a Convolutional Neural Network (CNN) and determined a ROC AUC of 0.75 on 646 test patients.
In all studies, very large data cohorts were available. This appears mandatory to train (and test) AI-based detection algorithms, since in case-wise classification the number of training samples equals the number of available patients. In this work we present a data-efficient method that achieves performance comparable to what is seen in related work while relying on only 100 data sets for training.
\section{Materials and methods}
\subsection{Data}
Altogether, 168 thin-sliced ($0.5$ to $1$\,mm) head CTA data sets were available. Of these, 109 patients were LVO positive due to an occlusion either in the middle cerebral artery or the internal carotid. Regarding the affected hemisphere, 54 (52) LVOs were located on the left (right) side. The data was acquired from a single site with a Somatom Definition AS+ (Siemens Healthineers, Forchheim, Germany).
\subsection{Methodology}
The method we propose is based on the idea of aggressively augmenting the vessel tree segmentations in order to artificially extend the amount of trainable data. The classification pipeline itself (Fig.~\ref{fig:pipe}) consists of three subsequent steps. In the first step, the cerebrovascular tree is segmented using the segmentation approach published by Thamm et al.~\cite{thamm2020virtualdsa}. Additionally, the algorithm prunes the vessel tree to the relevant arteries by masking out all vascular structures that are a walking distance (geodesic distance w.r.t.~vessel center lines) of more than 150\,mm away from the Circle of Willis. Thereby, veins that are not relevant in the diagnosis of LVOs, such as the Sinus Sagittalis, are mostly excluded from further processing. In the second step, the original CTA scan is non-rigidly registered to the probabilistic brain atlas by Kemmling et al. \cite{kemmling2012decomposing}. The registration is based on Chefd'hotel et al.'s method \cite{chefd2002flows} but may be done using other, publicly available registration methods as well. The resulting deformation field is used to transform the segmentation mask into the atlas coordinate system. An accurate registration between the atlas and the head scan is not crucial in our work, as variations in the vessel tree are present in all patients anyway. Once the volumes are in the atlas coordinate space, they are equally sized with 182 $\times$ 205 $\times$ 205 voxels and an isotropic spacing of 1\,mm in all dimensions. The primary purpose of the registration is to consistently orient and somewhat anatomically ``normalize'' the segmentation for the next step, in which a convolutional neural network classifies the presence of an LVO. The network receives the binary segmentation masks volume-wise and predicts a softmax-activated vector of length 3, representing the three classes: no LVO, left LVO and right LVO. In our work, we tested a 3D version of DenseNet \cite{hara3dcnns} ($\approx$ 4.6m parameters) and EfficientNetB1 \cite{tan2019efficientnet} ($\approx$ 6.5m parameters), where the channel dimension has been repurposed as the z-axis. Cross entropy serves as the loss, optimized with Adam on PyTorch 1.6 \cite{pytorch} and Python 3.8.5.
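The following is a minimal sketch of this classification step. The simple backbone shown here is purely illustrative (the study uses 3D-DenseNet and EfficientNetB1); only the idea of feeding the atlas-space binary mask with the z-axis as input channels and training the three-class output with cross entropy and Adam is taken from the text above.
\begin{verbatim}
import torch
import torch.nn as nn

class LVOClassifier(nn.Module):
    # Input: binary vessel mask in atlas space, z-slices as channels
    # (batch, 182, 205, 205); output: logits for no / left / right LVO.
    def __init__(self, depth=182, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(depth, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, mask_volume):
        x = self.features(mask_volume).flatten(1)
        return self.classifier(x)     # softmax is applied inside the loss

model = LVOClassifier()
criterion = nn.CrossEntropyLoss()     # cross entropy, as stated above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
\end{verbatim}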
\begin{figure}[b]
\centering
\caption{Proposed pipeline. First, the CTA volume is registered to a probabilistic brain atlas. The cerebrovascular vessel tree is then segmented and transformed into the atlas coordinate system by applying the resulting deformation vector field. For augmentation purposes, the vessel tree segmentation masks are elastically deformed for training only. A network predicts the three mutually exclusive classes: No LVO, left or right LVO.}
\label{fig:pipe}
\includegraphics[width = \textwidth]{3214_Picture2.eps}
\end{figure}
\subsection{Augmentation}
\label{sec:augmentation}
From patient to patient, the cerebrovascular anatomy coarsely follows the same structure. However, anatomical variations (e.g.,~absence of the ICA, accessory MCA) combined with the individual course of the vessels lead to a wide variety of configurations in intracranial vascular systems, such that no vessel tree is quite like another. Considering this, augmentations can be used to artificially generate more vessel trees and, from a network's perspective, visually new patients. We therefore propose to elastically and randomly deform the segmentation masks for training.
While the use of elastic deformation for augmentation per se is not a novel technique \cite{nalepa2019data}, in our setup we are uniquely able to apply it much more aggressively than otherwise possible, enabling us to dramatically increase its benefit compared to typical use cases.
This is possible because only vessel segmentations are used as input to our CNN-based classifier. Whereas strong deformations of a conventional image volume quickly introduce resampling artifacts that render the image unrealistic, masks remain visually comparable to the original samples even when heavily deformed. As the segmentation is performed on full volumes, an online augmentation, i.e.~deforming while the network is trained, is computationally too expensive and would increase the training time to an impractical level. Instead, we suggest elastically deforming the segmentation masks prior to training, using a fixed number of random deformation fields for each original volume.
As masks, unlike regular image volumes, are highly compressible, this does not notably increase data storage requirements as would typically be associated with such an approach.
In this work we aim to demonstrate the impact of this data augmentation on the classification performance for LVOs. Using the RandomElasticDeformation of TorchIO \cite{torchio}, which interpolates displacements with cubic B-splines, we randomly deformed each segmentation mask 10 times with 4 and 10 times with 5 random anchor points, all 20 augmentations with a maximal displacement of 90 voxels (examples in Fig.~\ref{fig:deform}). Additionally, we mirror the original data sets sagittally and again apply the above procedure to create 20 more variants, which flips the right/left labels but has no effect if no LVO is present. We thus create 40 samples out of one volume, resulting in a total of 6720 vessel tree samples generated from 168 patients.
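A minimal sketch of how such an offline augmentation could be set up with TorchIO is given below; the file name, the subject handling and the bookkeeping of the flipped labels are illustrative assumptions rather than an excerpt from our implementation.
\begin{verbatim}
# Illustrative offline augmentation of a vessel-tree mask with TorchIO.
# Anchor counts and maximal displacement follow the settings above; the
# file path and the label bookkeeping are assumptions.
import torchio as tio

subject = tio.Subject(mask=tio.LabelMap('vessel_tree_atlas_space.nii.gz'))

augmented = []
for num_anchors in (4, 5):                    # two B-spline grid settings
    deform = tio.RandomElasticDeformation(
        num_control_points=num_anchors,       # 4 or 5 random anchors
        max_displacement=90,                  # maximal displacement (voxels)
    )
    augmented += [deform(subject) for _ in range(10)]   # 10 copies each

# The sagittally mirrored variant (axis 0 assumed to be the left-right
# direction; left/right LVO labels must be swapped accordingly) is created
# once and then passed through the same deformation loop.
mirror = tio.RandomFlip(axes=(0,), flip_probability=1.0)
mirrored_subject = mirror(subject)
\end{verbatim}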
\begin{figure}[b]
\centering
\caption{Two examples with four augmentations of the original tree on the left, all viewed axially caudal with the same camera parameters. The upper row shows a case with an occlusion in the left middle cerebral artery, indicated by the arrow. The lower row shows an LVO-negative vessel tree. Instead of binary masks, surface meshes of the deformed masks were rendered for a sharper and clearer visualization.}
\label{fig:deform}
\includegraphics[width = \textwidth]{3214_deformations_withLVO3.png}
\end{figure}
\section{Results}
We investigated the impact of the elastic augmentations in an ablation study considering two architectures, where we systematically disabled features (deformation and mirroring). For a fully 3D variant we evaluated the 3D-DenseNet architecture \cite{hara3dcnns}, and for a 2D variant, where the channel axis of the input is used for the axial ($z$) dimension, the EfficientNetB1 architecture \cite{tan2019efficientnet}. A 5-fold cross-validation setup with a 3-1-1 split ratio for training, validation and testing was conducted, where on average 100 original data sets were used for training per cycle. The baseline (no augmentation) was trained for 200 epochs, a variant using the original and the deformed, but not mirrored, data sets was trained for 100 epochs, and finally, as the proposed setting, models were trained for 50 epochs using the original, deformed and mirrored data. Epoch numbers differ because more samples are available when augmentation is used. All models overfitted by the end of their allotted epochs. The validation loss was used to pick the best-performing network over all epochs. The test data was not augmented to provide a fair comparison between all setups. Both architectures significantly benefit from the deformation-based augmentation (Tab.~\ref{tab:results}); in particular, EfficientNet failed to grasp the problem at all without it. The 3D-DenseNet trained with deformed and mirrored data sets outperformed the other setups by a significant margin, especially in detecting LVOs and left LVOs. Depending on the chosen threshold, this variant achieved a sensitivity of 80\% (or 90\%) and a specificity of 82\% (or 60\%, respectively) for the detection of LVOs.
\begin{table}[t]
\caption{ROC AUCs with 95\% confidence intervals (by bootstrapping) for the 3D-DenseNet and EfficientNetB1 architecture. ``D'' stands for ``deformation'' and ``M'' for ``mirroring''. To compute ``AUC Left'', the right and no-LVO class were combined to one class enabling a binary classification. ``AUC Right'' was calculated analogously.}
\label{tab:results}
\begin{tabular*}{\textwidth}{l@{\extracolsep\fill}lll}
\hline
Setup & AUC LVO & AUC Left & AUC Right \\ \hline
3D-DenseNet + D + M & \textbf{0.87} {[}0.81, 0.92{]} & \textbf{0.93} {[}0.87, 0.97{]} & 0.93 {[}0.88, 0.97{]} \\
3D-DenseNet + D & 0.84 {[}0.77, 0.90{]} & 0.89 {[}0.84, 0.94{]} & \textbf{0.94} {[}0.90, 0.97{]} \\
3D-DenseNet & 0.77 {[}0.69, 0.84{]} & 0.85 {[}0.78, 0.91{]} & 0.85 {[}0.78, 0.92{]} \\
EfficientNetB1 + D + M & 0.85 {[}0.79, 0.90{]} & 0.86 {[}0.79, 0.91{]} & 0.90 {[}0.84, 0.96{]} \\
EfficientNetB1 + D & 0.83 {[}0.77, 0.89{]} & 0.85 {[}0.79, 0.91{]} & 0.91 {[}0.85, 0.96{]} \\
EfficientNetB1 & 0.56 {[}0.47, 0.65{]} & 0.59 {[}0.49, 0.68{]} & 0.68 {[}0.58, 0.78{]} \\ \hline
\end{tabular*}
\end{table}
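The 95\% confidence intervals in Tab.~\ref{tab:results} were obtained by bootstrapping; the following sketch illustrates such a computation, where the number of resamples, the random seed and the use of scikit-learn are assumptions made only for the example.
\begin{verbatim}
# Sketch of a bootstrapped ROC-AUC confidence interval (resampling scheme,
# number of iterations and seed are assumptions).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # sample with replacement
        if len(np.unique(y_true[idx])) < 2:              # need both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    low, high = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (low, high)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])              # toy labels
y_score = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.1])
print(bootstrap_auc(y_true, y_score))
\end{verbatim}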
\section{Discussion}
We presented a method for the automated classification of LVOs based on CTA data which makes heavy use of deformation fields for augmentation. With an AUC of 0.87 for LVO detection, we achieved a performance comparable to that of other DL-based approaches while using as few as 100 patient data sets for training. While not novel in itself, elastic deformation for the purpose of augmentation could be applied much more aggressively in our setup compared to regular use cases, as our model relies exclusively on segmented vessel tree masks as input; for these, even strong deformations---which would cause severe resampling artifacts when applied to regular image volumes---still lead to anatomically meaningful representations that are virtually indistinguishable from real samples. In an ablation study we showed that the performed augmentation was crucial to properly learn the task at hand from a small number of data sets. This leads us to the conclusion that a learning-based detection of LVOs stands and falls with the number of training data sets. The cerebrovascular system is highly patient-specific, which is why the use of sophisticated augmentation techniques offers great potential. We postulate that larger data pools could also benefit from more extensive data augmentation if applied meaningfully.
\bibliographystyle{bvm}
\section{Introduction}
Assembly of galaxies in the early universe is a matter of intense debate in current astrophysics. Among others, the formation of bulges is a key ingredient which brought about the
morphological diversity of the present-day disc galaxies.
Despite much effort in clarifying the bulge formation process from both observational
and theoretical perspectives, we are still far from satisfactory understanding
of this important piece of galaxy formation.
Complex structures and kinematics of galactic bulges, especially the dichotomization into classical and pseudo bulges, suggest the contribution of several mechanisms to their formation process \citep{ko04}.
Classical bulges are usually linked to early formation by direct collapse \citep{la76,zo15} and/or minor galaxy mergers \citep{ho09}.
Pseudo bulges are often alleged to be the product of secular formation processes
from the disc material, such as gas infall induced by galactic bars
\citep[e.g.][]{at92},
the bending instability of the bars themselves
\citep[e.g.][]{ra91}, and inward migration of massive clumps formed in gas-rich
young galactic discs
\citep[e.g.][]{no98,no99,in12,bo08}.
Any consistent theory of bulge formation must explain the observed properties of other
galactic components in the same framework.
In seeking such a picture, we are working on the galaxy evolution model based on
the cold accretion picture for gas accretion onto forming galaxies
\citep[e.g.][]{fa01,ke05,de06}.
\citet{no20} suggested a new possibility that the bulge formation is fueled
by the cold gas streams
characteristic of the halo gas in massive galaxies at high redshifts.
It was found that this picture can reproduce the observed trend that
the mass fraction of the bulge relative to total stellar mass of the galaxy
increases with the galaxy mass.
We here report that the same model can also explain the observed age structures of
galactic bulges, namely the mass dependence of
the mean stellar age and the age difference within the bulge region.
\begin{figure*}
\includegraphics[width=0.6\linewidth]{Fig1}
\caption{
Evolution of the virial mass is indicated by solid lines for
eight models analyzed in the present study overlaid on the three domains
for the different gas states.
Black circles on each evolution path indicate the two epochs between which
the cold gas in Domain H arrives at the disc plane.
Red circles indicate, in increasing size,
the times $t_{\rm arr}+t_{\rm dyn}, t_{\rm arr}+10t_{\rm dyn}$,
and $t_{\rm arr}+20t_{\rm dyn}$.
Here, $t_{\rm dyn} \equiv \left(R_{\rm gal}^3/(G M_{\rm gal})\right)^{1/2}$, where the galaxy radius is
set to be $R_{\rm gal} = 0.1 R_{\rm vir}$ considering the high spin parameter $\sim 0.1$
at high redshift \citep{da15} and the galaxy mass $M_{\rm gal}$ includes all stars and
the portion of dark matter within the galaxy radius.
The peak masses for the blue nuggets observed by \citet{hu18} for three
different redshifts are shown by green crosses, whereas the blue diamonds
are the maximum compaction in three galaxies in VELA simulation by \citet{zo15}.
The virial masses for the observed BNs are derived by using the stellar-to-halo
mass ratio (SHMR) by \citet{ro17} for the corresponding redshifts. The virial masses
for the simulated galaxies are extrapolated along the expected evolutionary tracks
(black lines) from the cited values at $z=2$.
\label{Fig.1}}
\end{figure*}
\section{Models}
The cold accretion theory has been proposed on the basis of realistic simulations for
thermal and hydrodynamical evolution of the primordial gas in the cold dark matter universe
\citep[e.g.][]{fa01,bi03,ke05,de06,oc08,va12,ne13}.
It states that
the intergalactic gas flows into the hierarchically growing dark matter halos in unheated state
and fuels the forming galaxies
except in the most massive halos in recent cosmological epochs, where the cooling flow
of the shock heated halo gas prevails.
This picture represents a major modification to the long-standing paradigm which argues that heating by shock waves is the universal behaviour of the gas that enters growing dark matter halos \citep[e.g.][]{re77}.
This new scenario provides possible solutions to several observations unexplained in the shock-heating theory, including the existence of abundant luminous galaxies at high redshifts and the very red colors (and therefore complete quenching of star formation) of present massive elliptical galaxies \citep[e.g.][]{ca06,de09}.
Recently, the application of this scenario was extended to subgalactic scales.
\citet{no20} examined the morphological buildup of disc galaxies under the cold accretion while
\citet{no18} tried to explain the chemical bimodality
observed in the Milky Way disc stars
\citep[e.g.][]{ad12,ha16,qu20}.
Especially, \citet{no20} succeeded in reproducing the structural variation
of disc galaxies as a function of the galaxy mass that is revealed by the photometric decomposition
of stellar contents into thin and thick discs and bulges
\citep[e.g.][]{yo06,co14}.
Examinations in the present work are based on the same evolution model
as employed in \citet{no20}, but we here concentrate on the age structures of the bulges
and compare the model with the currently available observational data.
The cold accretion theory which underlies the present study
predicts three different regimes for the properties
of the gas distributed in the dark matter halos depending on the virial mass and the redshift
\citep[e.g.][]{de06,oc08}.
It introduces two characteristic mass scales :
$M_{\rm shock}$ above which the halo gas develops a stable shock that heats the gas
nearly to the virial temperature and
$M_{\rm stream}$ below which part of the halo gas remains cold and is confined into narrow filaments that thread the smoothly distributed shock-heated gas.
The latter mass scale is valid only for high redshifts.
These mass scales demarcate three different regions as depicted in Fig.1.
The halo gas in Domain F is unheated and expected to accrete in free-fall to the inner region (the disc plane).
In Domain G, the gas heated to the virial temperature attains near hydrostatic equilibrium in the halo gravitational field, and radiative cooling induces a cooling flow to the center on the cooling timescale.
The gas behaviour is not so clear in Domain H, where cold gas streams coexist with
the surrounding shock-heated hot gas.
They may behave independently and accrete with their own timescales
or they may interact with each other leading to modification
of accretion timescales. Because no detailed information is available,
we assume that the cold and hot gases in Domain H accrete
with the free-fall time and the radiative cooling time, respectively.
In \citet{no20}, the one-to-one correspondence was assumed between the gas components in this
diagram and the three galactic mass components of disc galaxies.
Namely, the cold gas in Domain F produces thick discs, and thin discs are formed from
the hot gas in Domain G. This part of correspondence is supported from the chemical
point of view because it
gives a satisfactory reproduction of the stellar abundance distribution
for the Milky Way thin and thick discs \citep{no18}.
The hot gas in Domain H is assumed to result in additional thin discs,
while the cold gas produces bulges. This whole hypothesis can reproduce the observed variation
in the mass ratios of thin discs, thick discs, and bulges with the galaxy mass
as shown in \citet{no20}.
The cosmological simulation of \citet{br09} also suggests the correspondence between
the gas properties and the resultant galactic components similar to the one assumed
in \citet{no20}.
The existence of surrounding hot gas, characteristic of Domain H, may indeed provide favourable
conditions for bulge formation from the embedded cold gas. \citet{bi16} have shown
that the external pressure resulting from AGN feedback triggers active star formation in galactic discs by
promoting the formation of massive clumps in the destabilized discs.
The simulation by \citet{du19} may provide another relevant result. It shows that
the ram pressure of the hot gas around massive galaxies, exerted on infalling dwarf galaxies, confines their metal-rich gas produced by supernovae and stellar winds, leading to subsequent
star formation.
The hot halo gas in Domain H
thus may help clumps formed in the cold gas streams survive until they create centrally concentrated stellar systems for example by radial migration. Based on these considerations,
we adopt the same correspondence hypothesis as in \citet{no20} in this study.
The locations of the borders of three domains shown in Fig.1 actually depend on the detailed physical state (e.g., metallicity, temperature)
of the gas infalling into dark matter halos which is only poorly constrained
by the observation
\citep[e.g.][]{de06,oc08}.
We use the same configuration as adopted in \citet{no20} because it leads to
good reproduction of the observed
mass fractions of the thin discs, thick discs, and bulges
as a function of the galaxy mass observed by \citet{yo06} and \citet{co14}.
To be specific, we assume that
$M_{\rm shock} = 1.5 \times 10^{11} M_\odot$ and
${\rm log } M_{\rm stream} = 9.2+1.067z$.
Both $M_{\rm shock}$ and
$M_{\rm stream}$ are smaller than those indicated in Fig. 7 of \citet{de06} but
closer to the \citet{ke05} shock mass and the \citet{oc08} stream mass.
The model used here treats a disc galaxy as a three-component stellar system
comprising a thin disc, a thick disc and a bulge embedded in a dark matter halo
that grows in mass as specified by the hierarchical mergers of dark matter halos.
Following \citet{we02}, the growth of the virial mass is given by
\begin{equation}
M_{\rm vir} = M_{\rm vir,0} e^{-2z/(1+z_{\rm c})}
\end{equation}
where $M_{\rm vir,0}$ is the present halo mass and
$z_{\rm c}$ is the collapse redshift explained below.
The NFW density profile is assumed with the evolving concentration parameter
\begin{equation}
c(z) = \max \left[ K{{1+z_{\rm c}} \over {1+z}} , K \right]
\end{equation}
with $K=3.7$ \citep{bu01}. $z_{\rm c}$ is calculated
once the present concentration $c(0)$ is specified.
We assume following \citet{ma08} that
\begin{equation}
\log c(0) = 0.971 - 0.094 \log(M_{\rm vir,0}/[10^{12}h^{-1}{\rm M}_{\odot}])
\end{equation}
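For illustration, the growth and concentration relations above can be evaluated numerically as sketched below; the present-day virial mass and the Hubble parameter are example values only and do not correspond to a particular model of the computed grid.
\begin{verbatim}
# Illustrative evaluation of the halo growth and concentration relations
# above; M_vir0 and h are example values, not parameters of a specific model.
import numpy as np

h = 0.7                                  # dimensionless Hubble parameter
M_vir0 = 1.0e12                          # present virial mass [Msun]
K = 3.7                                  # Bullock et al. (2001) normalization

log_c0 = 0.971 - 0.094 * np.log10(M_vir0 / (1.0e12 / h))  # Maccio et al. (2008)
z_c = 10.0**log_c0 / K - 1.0             # collapse redshift from c(0) = K(1+z_c)

def M_vir(z):
    """Virial mass growth history (Wechsler et al. 2002 form)."""
    return M_vir0 * np.exp(-2.0 * z / (1.0 + z_c))

def concentration(z):
    """Evolving NFW concentration, floored at K."""
    return np.maximum(K * (1.0 + z_c) / (1.0 + z), K)

for z in (0.0, 1.0, 2.0, 4.0):
    print(f"z={z:3.1f}  M_vir={M_vir(z):.2e} Msun  c={concentration(z):.2f}")
\end{verbatim}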
Growth of each component is driven by the accretion of gas from the halo, the timescale of which
is determined by the cold accretion theory.
Namely, the gas newly added to the halo in Domain F is assumed to accrete with the free-fall time (dynamical time) defined by $(R_{\rm vir}^3/(G M_{\rm vir}))^{1/2}$ at that moment, where $R_{\rm vir}$ is the virial radius. The accretion timescale in Domain G is the radiative cooling
time of the collisionally excited gas with the halo virial temperature and metallicity
$Z=0.01 Z_\odot$,
calculated from \citet{su93}. The gas density is taken to be the halo density at the
virial radius multiplied by the cosmic baryon fraction of 0.17, assuming the NFW
density profile.
The cold gas which occupies half the newly added gas in mass in Domain
H is assumed to accrete with the free-fall time whereas the residual hot gas accretes with
the radiative cooling time. The gas mass added is assumed to be the increase of the halo total
mass multiplied by the cosmic baryon fraction. These specifications determine the mass accretion rate for each gas
component completely.
We do not consider the internal structure (i.e., the density distribution) of each
stellar component. Each component is characterized only by its mass
and we calculate its time variation under the gas accretion from the halo.
Actually, a significant part
of the accreted gas is expected to escape from the galaxy due to feedback from
star formation events such as supernova explosions especially for
low mass galaxies. The fraction of this expelled gas
is assumed to be proportional to the inverse of the halo virial velocity at the
accretion time and the mass of the expelled gas is adjusted so that
the stellar-to-virial mass ratio at present agrees with the observed one
(Fig.5 of \citet{ro15} for blue galaxies).
In this study, we assume that the cold gas contained in the halo in Domain H is
turned into bulge stars immediately when it accretes onto the disc plane
(the disc arrival time, $t_{\rm arr}$). This is likely to be an oversimplification, and the possible effect of a delay is discussed in Section 4.
Another caveat is that it is not clear if galaxies at high redshifts have a disc.
\citet{de20} argue that thin discs cannot develop below the critical stellar mass
of $\sim 10^{10} M_\odot$ due to frequent mergers. The observation by \citet{zh19} suggests that galaxies in this mass range
tend to be not discy but prolate at $z \sim 2$. The present model cannot discuss the shape evolution
of galaxies by construction and the disc envisaged in the present study should not
be taken literally but should be more appropriately regarded as the inner part where most stars are distributed.
Bulge formation in the present scenario is restricted to relatively massive galaxies.
We run a series of models with the present halo virial masses in the range
$4.33 \times 10^{11}{\rm M}_\odot \leq M_{\rm vir,0} \leq 4.98 \times 10^{12}{\rm M}_\odot$.
The evolution of more massive galaxies
is likely to be dominated by mergers that could turn
those galaxies into elliptical galaxies.
The tracks of calculated models are shown in Fig.1. The least massive model
(and models less massive than this) does not
enter Domain H, so that no bulge component is formed.
\section{Results}
Fig.2 illustrates the star formation history for each model.
It is seen that more massive models form bulges earlier than less massive ones
and the bulge formation in those models spans longer periods in time.
These trends are illustrated in a different form in Fig.1, where two black circles on
each evolutionary track indicate the redshifts at which the cold accretion
originating in Domain H reaches the disc plane first and last.
This mass dependence of bulge formation history is quantified and compared with observations later.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Fig2}
\caption{
Star formation rate for the bulge component as a function of time,
with thicker black lines indicating models with larger virial masses
at present.
Plotted values are running means with the width of
0.28 Gyr. Tiny spikes are caused by the numerical method used in the evolution
model and do not affect our conclusions.
Red lines indicate the star formation history for which delay of
twenty dynamical times is taken into account.
}
\label{Fig.2}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.3]{Fig3}
\caption{
Model bulge fractions compared with three sets of observations.
Black dots connected by solid lines are model results, whereas
observational data are represented by small dots.
Green circles and pluses indicate, respectively, the running mean and median
in the mass bin
having the width of 0.25 dex and moved by every 0.125 dex in the galaxy total stellar mass.
In the left and central panels, orange symbols denote means and medians only for classical
bulges (orange small dots) defined to have the Sersic index larger than 2
in i-band and H-band, respectively.
The orange lines indicate the number fraction of classical bulges in each mass bin.
\citet{br18} do not derive the Sersic index and no classification of bulges is
possible.
}
\label{Fig.3}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.4]{Fig4}
\caption{
The mass-weighted mean stellar age of the bulge (upper panel)
and the age difference over the bulge radius (bottom panel)
are compared with the observational data of \citet{br18} and
\citet{br20}, respectively.
Each observed value is plotted with gray. Orange circles and pluses
indicate the mean and median in each mass bin with the width of 0.25 dex.
No star formation delay is taken into account for black circles.
Red circles indicate, in increasing size, the results with the delay time,
$t_{\rm dyn}, 10t_{\rm dyn}$,
and $20t_{\rm dyn}$.
}
\label{Fig.4}
\end{figure}
Fig.3 shows the mass fraction of the bulge as a function of the total stellar mass of the
galaxy at the present epoch.
We compare the model with three sets of the observation.
The sample of \citet{ga09} comprises nearly face-on galaxies extracted from the Sloan
Digital Sky Survey.
\citet{we09} decomposed H-band images of
S0/a-Sm galaxies in the Ohio State University Bright Spiral Galaxy Survey \citep{es02}.
Finally, \citet{br18} analyzed 135 late-type galaxies from the CALIFA survey \citep{sa12,sa16}.
The observed bulge masses likely suffer from large uncertainties, as can be inferred from this figure. Nevertheless, the three different analyses consistently indicate an increase in the bulge mass fraction with the total stellar mass. The model reproduces this qualitative trend. Model values agree well with the result of \citet{we09} but are about half the values reported by \citet{ga09} and \citet{br18}.
We discuss possible reasons for this discrepancy later.
Fig.4 summarizes the age structures of the model bulges.
The mean stellar age plotted in the upper panel increases with the total stellar mass
in qualitative agreement with the observation by \citet{br18}.
However, the model dependence (black circles) is shallower than the observed one and the discrepancy
increases toward lower galaxy masses.
We discuss later the effect of including
possible delay in the bulge star formation.
The present model predicts the lower mass limit
for bulge formation
originating in Domain H around
$M_{\rm star} \sim 10^{10}{\rm M}_\odot$.
We discuss later possible different origins for bulges in lower mass galaxies
plotted in Fig.4.
The age difference plotted in the bottom panel is simply the time at which the
bulge star formation starts minus the time at which it ends. Because the gas that accretes
to the disc plane later ends up at a larger distance from the galactic center,
the age difference thus defined essentially corresponds to the 'age gradient within
the bulge radius' shown in
Fig.2 of \citet{br20}. It should be noted that the 'gradient' given in \citet{br20}
is not the age difference per unit length but the difference
between the outer and inner edges of the bulge.
Reflecting the inside-out nature of gas accretion, all the models
produce negative gradients.
Furthermore the absolute value of the gradient increases with the galaxy
mass. Over the mass range for which the model produces bulges, the model values
are in good agreement with the observed ones.
\section{Discussion and conclusions}
We have shown that the present model which is based on the cold accretion scenario
for galactic gas accretion reproduces the
observed bulge properties despite its idealized nature, although some discrepancy remains.
Together with its success in explaining the chemical bimodality in the Milky Way
disc \citep{no18} and the morphological variation with the galaxy mass
observed for extra-galaxies \citep{no20},
this result may be regarded to reinforce the cold accretion scenario from the viewpoint
of internal structures of individual disc galaxies.
Nevertheless, there are missing ingredients in the simplified approach taken here.
We touch upon these unresolved issues in the following.
In addition to the bulge mass fraction, the bulge size is also known to increase with
the galaxy total stellar mass
\citep[e.g.][]{ga09}.
Although the present model cannot determine the bulge size
because of its one-zone nature, it may be instructive to make rough estimate
for the expected size from the virial radius $R_{\rm vir}$ and the
spin parameter $\lambda$ of the dark matter halo.
The size calculated as $r_{\rm bulge} = \lambda R_{\rm vir}$ at the bulge formation
epoch is $2 \sim 3 $ kpc assuming $\lambda=0.03$, which is similar to the observed sizes
for the most massive galaxies but depends little on the galaxy mass
for the calculated mass range.
This is because the lower-mass galaxies experience bulge formation later than
the higher-mass galaxies so that the virial radius at the bulge formation epoch
as defined in this study
is nearly constant with the galaxy mass.
We assumed that the cold gas in Domain H is turned into stars upon its arrival
at the disc plane. This assumption may be oversimplified. It is conceivable that
the cold gas streams contain gas clumps and after disc arrival individual clumps are transported inward due to
violent disc instability (VDI) before star formation occurs in them (or while making stars en route
to the galactic center). Clump formation within the cold gas filaments due to
gravitational instability is suggested by \citet{ma18} in relation to globular cluster
formation. Many cosmological simulations also reveal gas clumps
in those filaments
\citep[e.g.][]{ke05,de06,oc08,va12,ne13}, part of which are brought in to forming
galactic disks \citep{ds09}.
Radial migration timescale due to VDI is estimated to be of the order of ten times
the dynamical time \citep{de14}. Red circles in Fig.1 and Fig.4 illustrate how this delay affects
the star formation epochs and age structures of the bulges. We see that the inclusion
of star formation delay improves the agreement with the observation (especially bulge ages) with the delay time
of ten times the dynamical time giving the best fit.
The influence of the delay is larger for smaller galaxies because of their longer migration times, resulting in significantly younger bulge ages than in the fiducial case (black circles
in Fig.4).
Bulge formation in the present study may be related to
the compaction and blue nuggets (BNs) reported in the cosmological simulation by \citet{zo15}.
Fig.1 plots the simulated compaction events on the $z-M_{\rm vir}$ plane.
They are located in the bulge formation region
in the present model bordered by black and red circles.
The peak masses for the BNs observed by \citet{hu18} in different redshift ranges also fall
on the domain expected for bulge formation once a certain delay from the disc arrival
is taken into account.
The star forming galaxies at $z\sim2$ observed by \citet{ta16}
exhibit different star formation profiles depending on the stellar mass with galaxies
of intermediate masses
($10^{10.1} {\rm M}_\odot < M_{\rm star} < 10^{10.6} {\rm M}_\odot$) showing more
centrally-concentrated profiles than either less massive or more massive galaxies.
This result also seems to be in line with the present study which proposes bulge
formation in the restricted mass range,
$M_{\rm shock} < M_{\rm vir} < M_{\rm stream}$.
The present model predicts bulge formation only above a certain threshold for
the present galaxy mass
around $M_{\rm star} \sim 10^{10}{\rm M}_\odot$.
It is possible that bulge formation involves several mechanisms and those bulges
in less massive galaxies are formed by different processes. One possibility is
the secondary bulge formation from disc material in later cosmological epochs
as mentioned in introduction.
Indeed, the upper panel of Fig.4 shows a steep decrease in bulge ages below
$M_{\rm star} \sim 10^{10}{\rm M}_\odot$ in the observation by \citet{br20}.
The age gradient (the bottom panel) also turns positive below this critical mass, suggesting that a mechanism other than the inside-out gas accretion from the halo is operating.
There seems to be a tendency that classical bulges inhabit massive galaxies whereas
pseudo bulges are observed in less massive galaxies \citep{ga09,we09,fi11}.
This habitat segregation may make the cold-accretion driven bulge formation
proposed in this study a likely candidate specific to classical bulge formation.
Indeed, Fig.3 shows that the threshold mass for bulge formation in the model nearly
coincides with the mass above which the classical bulges start to emerge in actual disc galaxies.
If this inference is correct, part of the discrepancy between the model and observations
appearing in Fig.3 may be also solved. The observed excess of the bulge mass in
\citet{ga09} and \citet{br18} could be contributed by secular processes.
On the other hand, the bulge fraction in \citet{we09}, which is actually the luminosity
fraction in H-band, could be underestimated if the stellar population in the bulge
is systematically older (and therefore redder) than the disc in those galaxies, which is
quite likely.
In either case, we need not consider that the classical and pseudo bulge formation processes occur exclusively of each other. Regarding bulge formation,
the galaxy mass sequence may be a continuous sequence along which the relative
importance of two (or more) bulge formation processes changes gradually.
We applied for the first time the cold-accretion driven galaxy evolution model
to the currently available observational data for bulge properties
in galaxies with various masses. The model, despite its highly idealized nature,
can reproduce the observed behaviours at least qualitatively, although
observational data are still meager and future observations are required
to construct a more concrete picture for bulge formation.
In particular, galaxies at $z \sim 2-3$ will provide a wealth of information on
the bulge formation because galactic bulges are thought to grow vigorously
in this epoch (see Fig.1). It is interesting that \citet{ta16} found a sign for
increasing bulge dominance for more massive galaxies in this redshift range
in agreement with the theoretical result by \citet{no20}.
The scrutiny of internal properties of nearby bulges, such as that performed by
\citet{br18} and \citet{br20} will put constraints at the present cosmological epoch,
playing a complementary role with high-redshift surveys.
On the theoretical side, recent cosmological simulations start to produce disc galaxies
with realistic bulge-to-disc mass ratios unlike early simulations that produced too massive bulge components
\citep[e.g.][]{ma14,ga19}. \citet{ga19} report that their bulges in the Auriga simulation
comprise mostly in-situ stars and that the merger contribution is negligible.
The work of \citet{br09} is pioneering in that it related different structural components
of disc galaxies formed in the cosmological simulations to different modes of
gas accretion, namely accretion of clumpy, shocked, and unshocked gas. Although high-resolution cosmological simulations are
very expensive, such close inspection of even a small number of created galaxies
will provide valuable insight into the build-up of disc galaxies free from
idealization made in the present work.
\section*{Acknowledgements}
We thank Iris Breda and Polychronis Papaderos for providing the observational data for galactic bulges and stimulating discussion on the bulge formation mechanisms.
We also thank the anonymous referees for invaluable comments which helped improve the
manuscript.
\vspace{16pt}
\noindent{Data availability}
\vspace{6pt}
\noindent{The data underlying this article will be shared on reasonable request to the corresponding author.}
\section{Introduction}
This paper aims at broadening the understanding of the link between uncertainty principles and localized controllability of evolution equations. An uncertainty principle is a property which gives some limitations on the simultaneous concentration of a function and its Fourier transform. There exist different forms of uncertainty principles and one of them consists in studying the support of functions whose Fourier transforms are localized. The Logvinenko-Sereda Theorem \cite{logvinenko-sereda} ensures the equivalence of the norms $\| \cdot \|_{L^2(\mathbb{R}^d)}$ and $\| \cdot \|_{L^2(\omega)}$, where $\omega \subset \mathbb{R}^d$ is a measurable subset, on the subspace
$$\big\{f \in L^2(\mathbb{R}^d); \ \operatorname{supp} \hat{f} \subset \overline{B(0,R)} \big\} \quad \text{with} \quad R>0,$$ where $\hat{f}$ denotes the Fourier transform of $f$,
as soon as $\omega$ is thick. The thickness property is defined as follows:
\begin{definition}\label{thick_def}
Let $d \in \mathbb{N}^*$ and $\omega$ be a measurable subset of $\mathbb{R}^d$. For $0<\gamma \leq 1$ and $L>0$, the set $\omega$ is said to be $\gamma$-thick at scale $L>0$ if and only if
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad |\omega \cap (x+[0,L]^d)| \geq \gamma L^d,
\end{equation*}
where $|A|$ denotes the Lebesgue measure of the measurable set $A$.
The set $\omega$ is said to be thick if and only if $\omega$ is $\gamma$-thick at scale $L>0$ for some $0<\gamma \leq 1$ and $L>0$.
\end{definition}
We define more generally the thickness with respect to a density:
\begin{definition}\label{thick_density}
Let $d \in \mathbb{N}^*$, $0<\gamma\leq 1$, $\omega$ be a measurable subset of $\mathbb{R}^d$ and $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ a positive function. The set $\omega$ is said to be $\gamma$-thick with respect to $\rho$ if and only if
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad |\omega \cap B(x,\rho(x))| \geq \gamma |B(x,\rho(x))|,
\end{equation*}
where $B(x,L)$ denotes the Euclidean ball of $\mathbb{R}^d$ centered at $x$ with radius $L$.
\end{definition}
Of course, a measurable subset of $\mathbb{R}^d$ is thick if and only if it is thick with respect to a positive constant density.
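As a simple illustration, in dimension $d=1$ the periodic set $\omega=\bigcup_{k \in \mathbb{Z}}[k,k+\gamma]$ is $\gamma$-thick at scale $1$, whereas a half-line $[a,+\infty)$ is not thick at any scale, since it has empty intersection with cubes located sufficiently far to its left.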
Kovrijkine provided a quantitative version of the Logvinenko-Sereda Theorem in \cite[Theorem~3]{Kovrijkine}:
\begin{theorem}[Kovrijkine {\cite[Theorem~3]{Kovrijkine}}]\label{Kovrijkine1} Let $\omega \subset \mathbb{R}^d$ be a measurable subset $\gamma$-thick at scale $L>0$.
There exists a universal positive constant $C>0$ independent on the dimension $d \geq 1$ such that for all $f \in L^2(\mathbb{R}^d)$ satisfying $\operatorname{supp} \hat{f} \subset J$, with $J$ a cube with sides of length $b$ parallel to coordinate axes,
\begin{equation}\label{kovrijkine1.1}
\|f\|_{L^2(\mathbb{R}^d)} \leq c(\gamma,d, L, b) \|f\|_{L^2(\omega)},
\end{equation}
with $$c(\gamma, d, L, b)= \Big( \frac{C^d}{\gamma} \Big)^{Cd(Lb+1)}.$$
\end{theorem}
The thickness property was recently shown to play a key role in spectral inequalities for finite combinations of Hermite functions. In \cite[Theorem~2.1]{MP}, the authors establish quantitative estimates with an explicit dependence on the energy level $N$ with respect to the growth of the density appearing in Definition~\ref{thick_density}:
\medskip
\begin{theorem}[Pravda-Starov \& Martin]\label{Spectral}
Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a $\frac{1}{2}$-Lipschitz positive function with $\mathbb{R}^d$ being equipped with the Euclidean norm,
such that there exist some positive constants $0< \varepsilon \leq 1$, $m>0$, $R>0$ such that
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R{\left\langle x\right\rangle}^{1-\varepsilon}.
\end{equation*}
Let $\omega$ be a measurable subset of $\mathbb{R}^d$ which is $\gamma$-thick with respect to the density $\rho$.
Then, there exist some positive constant $\kappa_d(m, R, \gamma, \varepsilon)>0$, $\tilde{C}_d(\varepsilon, R) >0$ and a positive universal constant $\tilde{\kappa}_d >0$ such that
\begin{equation}\label{spec_ineq}
\forall N \geq 1, \ \forall f \in \mathcal{E}_N, \quad \|f\|_{L^2(\mathbb{R}^d)} \leq \kappa_d(m, R, \gamma, \varepsilon) \Big( \frac{\tilde{\kappa}_d}{\gamma} \Big)^{\tilde{C}_d(\varepsilon, R) N^{1-\frac{\varepsilon}{2}}} \|f\|_{L^2(\omega)},
\end{equation}
with $\mathcal E_{N}$ being the finite dimensional vector space spanned by the Hermite functions $(\Phi_{\alpha})_{\val \alpha \leq N}$.
\end{theorem}
\medskip
We refer the reader to Section~\ref{Hermite_functions} for the definition and some notations related to Hermite functions $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$.
We emphasize that Theorem~\ref{Spectral} ensures, in particular, the equivalence of the norms $\| \cdot \|_{L^2(\mathbb{R}^d)}$ and $\| \cdot \|_{L^2(\omega)}$ on the subspace $\mathcal{E}_N$ as soon as the measurable subset $\omega$ is thick with respect to a suitable density. Actually, contrary to the case when the functional subspace is the space of functions whose Fourier transforms are compactly supported, this fact holds true as soon as $\omega$ is a measurable subset of positive measure. As explained by the authors of \cite[Section~2]{kkj}, the analyticity property of finite combinations of Hermite functions together with an argument of finite dimension imply that for all $N \in \mathbb{N}$, there exists a positive constant $C_N(\omega)>0$ such that $$\forall f \in \mathcal{E}_N, \quad \|f\|_{L^2(\mathbb{R}^d)} \leq C_N(\omega) \|f \|_{L^2(\omega)},$$
as soon as $|\omega| >0$. The main interest of Theorem~\ref{Spectral} is the quantitative estimate from above on the growth of the positive constant $C_N(\omega)$ with respect to the energy level $N$, which is explicitly related to the growth of the density $\rho$ thanks to $\varepsilon$. As the norms $\| \cdot \|_{L^2(\mathbb{R}^d)}$ and $\| \cdot \|_{L^2(\omega)}$ are not equivalent on $L^2(\mathbb{R}^d)$ when $|\mathbb{R}^d \setminus \omega|>0$, the constant $C_N(\omega)$ does have to blow up when $N$ tends to infinity. However, the asymptotic of this blow-up is very much related to the geometric properties of the control set $\omega$, and understanding this asymptotic can be assessed as an uncertainty principle.
One of the purposes of this work is to establish new uncertainty principles holding in a general class of Gelfand-Shilov spaces and to provide sufficient conditions on the growth of the density allowing these uncertainty principles to hold. Furthermore, this paper aims at providing new null-controllability results as a byproduct of these uncertainty principles.
Indeed, some recent works have highlighted the key link between uncertainty principles and localized controllability of evolution equations. Thanks to the explicit dependence of the constant on the length of the sides of the cube in \eqref{kovrijkine1.1}, Egidi and Veseli\'c~\cite{veselic} and Wang, Wang, Zhang and Zhang \cite{Wang} have independently established that the heat equation
\begin{equation*}\label{heat}
\left\lbrace \begin{array}{ll}
(\partial_t -\Delta_x)f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x)\,, \quad & x \in \mathbb{R}^d,\ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation*}
is null-controllable in any positive time $T>0$ from a measurable control subset $\omega \subset \mathbb{R}^d$ if and only if the control subset $\omega$ is thick in $\mathbb{R}^d$. By using the same uncertainty principle, Alphonse and Bernier established in \cite{AlphonseBernier} that the thickness condition is necessary and sufficient for the null-controllability of fractional heat equations
\begin{equation}\label{fractional_heat}
\left\lbrace \begin{array}{ll}
(\partial_t + (-\Delta_x)^s)f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x)\,, \quad & x \in \mathbb{R}^d,\ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation}
when $s >\frac{1}{2}$. On the other hand, Koenig showed in \cite[Theorem~3]{Koenig} and \cite[Theorem~2.3]{Koenig_thesis} that the null-controllability of \eqref{fractional_heat} fails from any non-dense measurable subset of $\mathbb{R}$ when $0< s \leq \frac{1}{2}$. In \cite{AlphonseMartin}, Alphonse and the author point out the fact that the half heat equation, which is given by \eqref{fractional_heat} with $s= \frac{1}{2}$, turns out to be approximately null-controllable with uniform cost if and only if the control subset is thick.
Regarding the spectral inequalities in Theorem~\ref{Spectral}, thanks to the quantitative estimates \eqref{spec_ineq}, Pravda-Starov and the author established in \cite[Corollary~2.6]{MP} that the fractional harmonic heat equation
\begin{equation*}
\left\lbrace \begin{array}{ll}
\partial_tf(t,x) + (-\Delta_x+ |x|^2)^s f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d,\ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation*}
with $\frac{1}{2} < s \leq 1$, is null-controllable at any positive time from any measurable set $\omega$ which is thick with respect to the density
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad \rho(x)= R \langle x \rangle^{\delta},
\end{equation*}
with $0 \leq \delta < 2s-1$ and $R>0$.
More generally, the result of \cite[Theorem~2.5]{MP} shows that this thickness condition is a sufficient condition for the null-controllability of a large class of evolution equations associated to a closed operator whose $L^2(\mathbb{R}^d)$-adjoint generates a semigroup enjoying regularizing effects in specific symmetric Gelfand-Shilov spaces $S_{\frac{1}{2s}}^{\frac{1}{2s}}$.
The sufficiency of the thickness conditions for control subsets to ensure null-controllability results for these evolution equations is derived from an abstract observability result based on an adapted Lebeau-Robbiano method established by Beauchard and Pravda-Starov with some contributions of Miller in \cite[Theorem~2.1]{BeauchardPravdaStarov}. This abstract observability result was extended in~\cite[Theorem~3.2]{BEP} to the non-autonomous case with moving control supports under weaker dissipation estimates allowing a controlled blow-up for small times in the dissipation estimates.
The main limitation in the work \cite{MP} is that Hermite expansions can only characterize symmetric Gelfand-Shilov spaces (see Section~\ref{gelfand}) and therefore, the null-controllability results in \cite{MP} are limited to evolution equations enjoying only symmetric Gelfand-Shilov smoothing effects. This work partially addresses this matter by investigating the null-controllability of evolution equations associated to anharmonic oscillators, which are known to regularize in non-symmetric Gelfand-Shilov spaces. More generally, we establish null-controllability results for abstract evolution equations whose adjoint systems enjoy smoothing effects in non-symmetric Gelfand-Shilov spaces. This work precisely describes how the geometric properties of the control subset are related to the two indexes $\mu, \nu$ defining the Gelfand-Shilov space $S^{\mu}_{\nu}$.
This paper is organized as follows:
In Section~\ref{uncertainty_principle_general}, new uncertainty principles and quantitative estimates are presented. We first establish uncertainty principles for a general class of Gelfand-Shilov spaces in Section~\ref{general_GS}. In a second step, we deal with the particular case of spaces of functions with weighted Hermite expansions in Section~\ref{up_symmetric_GS}. These results are derived from sharp estimates for quasi-analytic functions established by Nazarov, Sodin and Volberg in \cite{NSV}. Some facts and results related to quasi-analytic functions are recalled in Sections~\ref{main_results} and \ref{qa_section}.
Thanks to these new uncertainty principles, we establish sufficient geometric conditions for the null-controllability of evolutions equations with adjoint systems enjoying quantitative Gelfand-Shilov smoothing effects in Section~\ref{null_controllability_results}.
\section{Statement of the main results}\label{main_results}
The main results contained in this work are the quantitative uncertainty principles holding for general Gelfand-Shilov spaces given in Theorem~\ref{general_uncertaintyprinciple}. The first part of this section is devoted to presenting these new uncertainty principles and to discussing the particular case of spaces of functions with weighted Hermite expansions. In a second part, we deduce from these new uncertainty principles some null-controllability results for abstract evolution equations with adjoint systems enjoying Gelfand-Shilov smoothing effects. Before stating these results, miscellaneous facts and notations need to be presented.
A sequence $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ of positive real numbers is said to be \textit{logarithmically convex} if
\begin{equation*}\label{log_conv}
\forall p \geq 1, \quad M_p^2 \leq M_{p+1} M_{p-1},
\end{equation*}
where $\mathbb{N}$ denotes the set of non-negative integers.
Let $U$ be an open subset of $\mathbb{R}^d$, with $d \geq 1$. We consider the following class of smooth functions defined on $U$ associated to the sequence $\mathcal{M}$,
\begin{equation*}\label{function_class}
\mathcal{C}_{\mathcal{M}}(U)= \left\{ f \in \mathcal{C}^{\infty}(U, \mathbb{C}): \quad \forall \beta \in \mathbb{N}^d, \; \|\partial_x^{\beta} f \|_{L^{\infty}(U)} \leq M_{|\beta|} \right\}.
\end{equation*}
A logarithmically convex sequence $\mathcal{M}$ is said to be quasi-analytic if the class of smooth functions $\mathcal{C}_{\mathcal{M}}((0,1))$ associated to $\mathcal{M}$ is quasi-analytic, that is, when the only function in $\mathcal{C}_{\mathcal{M}}((0,1))$ vanishing to infinite order at a point in $(0,1)$ is the zero function. A necessary and sufficient condition on the logarithmically convex sequence $\mathcal{M}$ to generate a quasi-analytic class is given by the Denjoy-Carleman theorem (see e.g. \cite{Koosis}):
\medskip
\begin{theorem}[Denjoy-Carleman] \label{Den_Carl_thm}
Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex sequence of positive real numbers. The sequence $\mathcal{M}$ defines a quasi-analytic sequence if and only if
\begin{equation*}
\sum_{p= 1}^{+\infty} \frac{M_{p-1}}{M_p} = + \infty.
\end{equation*}
\end{theorem}
\medskip
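As a basic illustration of this criterion, consider the Gevrey-type sequences $M_p=(p!)^s$ with $s>0$, which are logarithmically convex: since $M_{p-1}/M_p=p^{-s}$, the series $\sum_{p \geq 1} p^{-s}$ diverges exactly when $s \leq 1$, so the associated class is quasi-analytic if and only if $0<s \leq 1$, the borderline case $s=1$ corresponding to bounds of analytic type.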
Let us now introduce the notion of Bang degree defined in \cite{Bang} and \cite{NSV}, and used by Jaye and Mitkovski in \cite{JayeMitkovski},
\begin{equation}\label{Bang}
\forall 0<t \leq 1, \forall r>0, \quad 0 \leq n_{t, \mathcal{M},r}= \sup\Big\{N \in \mathbb{N}: \, \sum_{-\log t < n \leq N} \frac{M_{n-1}}{M_n} < r \Big\}\leq +\infty,
\end{equation}
where the sum is taken equal to $0$ when $N=0$. Notice that if $\mathcal{M}$ is quasi-analytic, then the Bang degree $n_{t, \mathcal{M},r}$ is finite for any $0<t \leq 1$ and $r>0$.
This Bang degree allows the authors of \cite{JayeMitkovski} to obtain uniform estimates for $L^2$-functions with fast decaying Fourier transforms and to establish uncertainty principles for a general class of Gevrey spaces. These authors also define
\begin{equation}\label{def_gamma}
\forall p \geq 1, \quad \gamma_{\mathcal{M}}(p) = \sup \limits_{1 \leq j \leq p} j \Big(\frac{M_{j+1} M_{j-1}}{M_j^2} -1\Big) \quad \text{and} \quad \Gamma_{\mathcal{M}} (p)= 4 e^{4+4\gamma_{\mathcal{M}}(p)}.
\end{equation}
We refer the reader to Section~\ref{qa_section} for some examples and useful results about quasi-analytic sequences.
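To make the definition \eqref{Bang} concrete, the following short numerical sketch computes the Bang degree of a finitely truncated sequence; the factorial sequence, the truncation length and the parameters used in the example are arbitrary illustrative choices.
\begin{verbatim}
# Illustrative numerical computation of the Bang degree n_{t,M,r} for a
# finitely truncated logarithmically convex sequence (example: M_p = p!).
import math

def bang_degree(M, t, r):
    """Largest N with sum_{-log t < n <= N} M[n-1]/M[n] < r (0 if empty)."""
    assert 0 < t <= 1 and r > 0
    threshold = -math.log(t)
    total, N = 0.0, 0
    for n in range(1, len(M)):
        if n > threshold:
            total += M[n - 1] / M[n]
            if total >= r:
                return N          # previous index was the last valid one
        N = n
    return N                      # sum stayed below r within the truncation

M = [math.factorial(p) for p in range(101)]
print(bang_degree(M, t=0.5, r=3.0))   # partial harmonic sums: returns 10
\end{verbatim}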
\newpage
\subsection{Some uncertainty principles}\label{uncertainty_principle_general}
\subsubsection{Uncertainty principles in general Gelfand-Shilov spaces}\label{general_GS}
In this section, we study uncertainty principles holding in general Gelfand-Shilov spaces. We consider the following subspaces of smooth functions
\begin{equation*}\label{gelfandshilov}
GS_{\mathcal{N},\rho} := \Big\{ f \in \mathcal{C}^{\infty}(\mathbb{R}^d), \quad \sup_{k \in \mathbb{N},\ \beta \in \mathbb{N}^d} \frac{\| \rho(x)^k \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{N_{k,|\beta|}} < +\infty \Big\},
\end{equation*}
where $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ is a positive measurable function and $\mathcal{N}=(N_{p,q})_{(p,q) \in \mathbb{N}^2}$ is a sequence of positive real numbers. Associated to these spaces, are the following semi-norms
\begin{equation*}
\forall f \in GS_{\mathcal{N},\rho}, \quad \|f\|_{GS_{\mathcal{N},\rho}} = \sup_{k \in \mathbb{N}, \ \beta \in \mathbb{N}^d} \frac{\| \rho(x)^k \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{N_{k,|\beta|}}.
\end{equation*}
When $$\forall x \in \mathbb{R}^d, \quad \rho(x)= \langle x \rangle= (1+\|x\|^2)^{\frac{1}{2}}$$ and $\mathcal{N}=\big( C^{p+q}(p!)^{\nu} (q!)^{\mu} \big)_{(p,q) \in \mathbb{N}^2}$ for some $C \geq 1$ and $\mu, \nu >0$ with $\mu+\nu \geq1$, $GS_{\mathcal{N}, \rho}$ is a subspace of the classical Gelfand-Shilov space $\mathcal{S}^{\mu}_{\nu}$, whereas when $\rho \equiv 1$, the space $GS_{\mathcal{N}, \rho}$ characterizes some Gevrey type regularity. We choose here to not discuss this particular case since it is studied in the recent works \cite{AlphonseMartin, JayeMitkovski}. In the following, a positive function $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ is said to be a contraction mapping when there exists $0\leq L <1$ such that
$$\forall x,y \in \mathbb{R}^d, \quad |\rho(x)-\rho(y)| \leq L \|x-y\|,$$
where $\| \cdot \|$ denotes the Euclidean norm. A double sequence of real numbers $\mathcal{N}=(N_{p,q})_{(p,q) \in \mathbb{N}^2}$ is said to be non-decreasing with respect to the two indexes when
$$\forall p \leq p', \forall q \leq q', \quad N_{p,q} \leq N_{p',q'}.$$
The following result provides some uncertainty principles holding for the spaces $GS_{\mathcal{N}, \rho}$:
\medskip
\begin{theorem}\label{general_uncertaintyprinciple}
Let $0<\gamma \leq 1$, $\mathcal{N}=(N_{p, q})_{(p,q) \in \mathbb{N}^2} \in (0,+\infty)^{\mathbb{N}^2}$ be a non-decreasing sequence with respect to the two indexes such that the diagonal sequence $\mathcal{M}=(N_{p,p})_{p \in \mathbb{N}} \in (0,+\infty)^{\mathbb{N}}$ defines a logarithmically-convex quasi-analytic sequence and $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ a positive contraction mapping
such that there exist some constants $m>0$, $R>0$ so that
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R \langle x \rangle.
\end{equation*}
Let $\omega$ be a measurable subset of $\mathbb{R}^d$. If $\omega$ is $\gamma$-thick with respect to $\rho$, then there exist some positive constants $ K=K(d,\rho) \geq 1$, $K'=K'(d,\rho, \gamma)\geq 1$, $r=r(d, \rho) \geq 1$ depending on the dimension $d \geq 1$, on $\gamma$ for the second and on the density $\rho$ such that for all $0<\varepsilon \leq N^2_{0,0}$,
\begin{equation*}
\forall f \in GS_{\mathcal{N},\rho}, \quad \|f\|^2_{L^2(\mathbb{R}^d)} \leq C_{\varepsilon} \|f\|^2_{L^2(\omega)} + \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}},
\end{equation*}
where
\begin{equation*}
C_{\varepsilon}= K' \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t_0, \mathcal{M}, r}) \bigg)^{4n_{t_0, \mathcal{M}, r} }
\end{equation*}
with $n_{t_0, \mathcal{M}, r}$ being defined in \eqref{Bang} and
\begin{equation*}
t_0=\frac{\varepsilon^{\frac{1}{2}}}{K N_{d,d}}.
\end{equation*}
\end{theorem}
\medskip
It is particularly interesting to notice that Theorem~\ref{general_uncertaintyprinciple} provides a quantitative estimate of the constant $C_{\varepsilon}$ with respect to the different parameters. In specific cases, the Bang degree is easily computable (see Lemma~\ref{ex_qa_sequence}) and an explicit upper bound on the constant $C_{\varepsilon}$ can be obtained. The above uncertainty principles apply in particular to the case of the classical Gelfand-Shilov spaces $S_{\nu}^{\mu}(\mathbb{R}^d)$ as follows:
\medskip
\begin{theorem}\label{specific_GS_uncertaintyprinciple}
Let $A \geq 1$, $0<\mu \leq 1$, $\nu >0$ with $\mu+\nu \geq 1$ and $0\leq \delta \leq \frac{1-\mu}{\nu}\leq 1$. Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a positive contraction mapping such that there exist some constants $m>0$, $R>0$ so that
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R{\left\langle x\right\rangle}^{\delta}.
\end{equation*}
Let $\omega$ be a measurable subset of $\mathbb{R}^d$. If $\omega$ is thick with respect to $\rho$, then for all $0<\varepsilon \leq 1$, there exists a positive constant $C_{\varepsilon,A}>0$ such that for all $f \in \mathscr{S}(\mathbb{R}^d)$,
\begin{equation}\label{up_schwartz}
\| f \|^2_{L^2(\mathbb{R}^d)} \leq C_{\varepsilon,A} \|f\|^2_{L^2(\omega)} + \varepsilon \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \bigg(\frac{\|\langle x\rangle^{p} \partial^{\beta}_{x} f\|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu}(|\beta|!)^{\mu}}\bigg)^2,
\end{equation}
where, when $\delta < \frac{1-\mu}{\nu}$, there exists a positive constant $K=K(d, \gamma, \rho,\mu, \nu) \geq 1$ depending on the dimension $d$, $\rho$ and $\nu$ such that
$$0<C_{\varepsilon,A} \leq e^{K(1-\log \varepsilon +A^{\frac{2}{1-\mu-\delta \nu}})}, $$
whereas, when $\delta= \frac{1-\mu}{\nu}$, there exists a positive constant $K=K(d, \gamma, \rho, \mu, \nu) \geq 1$ depending on the dimension $d$, $\rho$ and $\nu$ such that
$$0<C_{\varepsilon,A} \leq e^{K(1-\log \varepsilon+\log A)e^{KA^2}}.$$
\end{theorem}
\medskip
Let us notice that the estimate \eqref{up_schwartz} is only relevant when
\begin{equation*}
\sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x\rangle^{p} \partial^{\beta}_x f\|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu}(|\beta|!)^{\mu}} <+\infty,
\end{equation*}
that is, when $f \in GS_{\mathcal{N}, \tilde{\rho}}$, with $\mathcal{N}=(A^{p+q}(p!)^{\nu} (q!)^{\mu})_{(p,q) \in \mathbb{N}^2}$ and $\tilde{\rho}= \langle \cdot \rangle$. The quantitative estimates given in Theorem~\ref{specific_GS_uncertaintyprinciple} play a key role in establishing the following null-controllability results.
The proof of Theorem~\ref{general_uncertaintyprinciple} is given in Section~\ref{proof_mainprop}. It follows the strategy developed by Kovrijkine in \cite{Kovrijkine}, and its generalization given in Theorem~\ref{Spectral} together with a quantitative result on quasi-analytic functions which is a multidimensional version of \cite[Theorem~B]{NSV} from Nazarov, Sodin and Volberg. Regarding Theorem~\ref{specific_GS_uncertaintyprinciple}, its proof is given in Section~\ref{proof2}. It is a direct application of Theorem~\ref{general_uncertaintyprinciple} together with Lemma~\ref{ex_qa_sequence}.
Next section shows that Theorem~\ref{general_uncertaintyprinciple} also applies to more general sequences.
\subsubsection{Uncertainty principles in symmetric weighted Gelfand-Shilov spaces}\label{up_symmetric_GS}
Let $$\Theta : [0,+\infty) \longrightarrow [0,+\infty),$$ be a non-negative continuous function. We consider the following symmetric weighted Gelfand-Shilov spaces
\begin{equation*}
GS_{\Theta}= \Big\{ f \in L^2(\mathbb{R}^d): \quad \|f\|_{GS_{\Theta}} := \Big\|\big(e^{\Theta(|\alpha|)} \langle f, \Phi_{\alpha} \rangle_{L^2(\mathbb{R}^d)}\big)_{\alpha \in \mathbb{N}^d}\Big\|_{l^2(\mathbb{N}^d)} < +\infty \Big\},
\end{equation*}
where $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$ denotes the Hermite basis of $L^2(\mathbb{R}^d)$. The definition and basic facts about Hermite functions are recalled in Section~\ref{Hermite_functions}.
Before explaining how the spaces $GS_{\Theta}$ relate to the Gelfand-Shilov spaces defined in Section~\ref{gelfand}, the assumptions on the weight function $\Theta$ need to be specified further.
Let us consider the following logarithmically-convex sequence
\begin{equation}\label{lc_sequence}
\forall p \in \mathbb{N}, \quad M_p= \sup_{t \geq 0} t^pe^{-\Theta(t)}.
\end{equation}
Let $s >0$. We assume that the sequence $(M_p)_{p \in \mathbb{N}}$ satisfies the following conditions:
\medskip
\text{(H1)} $\forall p \in \mathbb{N}, \quad 0<M_p < +\infty$,
\medskip
\text{(H2)} There exist some positive constants $C_{\Theta}>0$, $L_{\Theta}\geq 1$ such that
\begin{equation*}\label{H2}
\forall p \in \mathbb{N}, \quad p^{p} \leq C_{\Theta} L_{\Theta}^p M_p,
\end{equation*}
with the convention $0^0=1$,
\medskip
$\text{(H3)}_s$ The sequence $(M^s_p)_{p \in \mathbb{N}}$ is quasi-analytic, that is,
\begin{equation*}
\sum_{p=1}^{+\infty} \Big(\frac{M_{p-1}}{M_p}\Big)^s = +\infty,
\end{equation*}
according to the Denjoy-Carleman theorem.
Under these assumptions, the following Bernstein type estimates hold for the spaces $GS_{\Theta}$:
\medskip
\begin{proposition}\label{bernstein_estim1}
Let $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function. If the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $(H1)$ and $(H2)$, then the space $GS_{\Theta}$ is included in the Schwartz space $\mathscr{S}(\mathbb{R}^d)$, and for all $0< s \leq 1$, there exists a positive constant $D_{\Theta, d,s}\geq 1$ such that
\begin{multline*}
\forall f \in GS_{\Theta}, \forall r \in [0,+\infty), \forall \beta \in \mathbb{N}^d,\\
\|\langle x \rangle^{r} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq (D_{\Theta,d,s})^{1+r+|\beta|} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}},
\end{multline*}
where $\lfloor \cdot \rfloor$ denotes the floor function.
\end{proposition}
\medskip
\begin{remark} Let us notice that Proposition~\ref{bernstein_estim1} implies in particular the inclusion of spaces $GS_{\Theta} \subset GS_{\mathcal{N}, \rho_s}$, when $\frac{1}{2} \leq s \leq 1$, with
$$\forall x \in \mathbb{R}^d, \quad \rho_s(x)= \langle x \rangle^{2s-1}$$
and
$$\mathcal{N}= \Big(D_{\Theta,d, s}^{(2s-1)p+q+1} M_{\left\lfloor \frac{(2s-1)p+1 +q+(2-s)(d+1)}{2s} \right\rfloor +1}^s \Big)_{(p,q) \in \mathbb{N}^2},$$
and the following estimates
\begin{equation*}
\forall f \in GS_{\Theta}, \quad \|f\|_{GS_{\mathcal{N},\rho_s}} \leq \|f \|_{GS_{\Theta}}.
\end{equation*}
\end{remark}
The proof of Proposition~\ref{bernstein_estim1} is given in the Appendix (Section~\ref{appendix}).
In order to derive uncertainty principles for functions with weighted Hermite expansions, the sequence $(M_p)_{p \in \mathbb{N}}$ must in addition satisfy the assumption $\text{(H3)}_s$ for some $\frac{1}{2} \leq s \leq 1$.
The quantitative estimates in Proposition~\ref{bernstein_estim1} together with the uncertainty principles given by Theorem~\ref{general_uncertaintyprinciple} allow us to establish the following estimates:
\medskip
\begin{theorem}\label{uncertainty_principle}
Let $0 \leq \delta \leq 1$ and $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function. Let us assume that the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{\frac{1+\delta}{2}}$. Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a positive contraction mapping satisfying
\begin{equation*}
\exists m>0, \exists R>0, \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R \left\langle x \right\rangle^{\delta}.
\end{equation*}
If $\omega$ is a measurable subset of $\mathbb{R}^d$ thick with respect to $\rho$, then there exists a positive constant $\varepsilon_0=\varepsilon_0(\Theta, d, \delta)>0$ such that for all $0<\varepsilon \leq \varepsilon_0$, there exists a positive constant $D_{\varepsilon}=D(d, \Theta, \varepsilon, \delta, \rho)>0$ so that
\begin{equation}\label{uncertainty_principle_sym}
\forall f \in GS_{\Theta}, \quad \|f\|^2_{L^2(\mathbb{R}^d)} \leq D_{\varepsilon} \|f\|^2_{L^2(\omega)}+ \varepsilon \|f\|_{GS_{\Theta}}^2 .
\end{equation}
\end{theorem}
\medskip
The above result provides some uncertainty principles for functions with weighted Hermite expansions. Its proof is given in Section~\ref{thm_hermite_proof}. Let us point out that it is possible to obtain quantitative estimates on the constant $D_{\varepsilon}$ thanks to the ones in Theorem~\ref{general_uncertaintyprinciple}, and to recover the spectral inequalities for finite combinations of Hermite functions established in \cite{MP} with a constant growing at the same rate with respect to $N$. Indeed, by taking $\Theta(t)=t$ on $[0,+\infty)$, we readily compute that
\begin{equation*}
\forall p \in \mathbb{N}, \quad M_p= \sup_{t \geq 0} t^p e^{-t} = \Big( \frac{p}{e} \Big)^p
\end{equation*}
and Stirling's formula provides
\begin{equation*}
M_p \underset{p \to +\infty}{\sim} \frac{p!}{\sqrt{2\pi p}}.
\end{equation*}
It follows that the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_1$ are satisfied.
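For completeness, let us recall the elementary calculus check behind the above formula for $M_p$: the function $t \longmapsto t^pe^{-t}$ reaches its maximum on $[0,+\infty)$ at $t=p$, since
\begin{equation*}
\frac{d}{dt}\big(t^p e^{-t}\big)= t^{p-1}(p-t)e^{-t}, \qquad \text{so that} \qquad \sup_{t \geq 0} t^p e^{-t}= p^pe^{-p}=\Big(\frac{p}{e}\Big)^p,
\end{equation*}
with the convention $0^0=1$. In particular, $p^p=e^p M_p$ for all $p \in \mathbb{N}$, so the assumption $\text{(H2)}$ holds with $C_{\Theta}=1$ and $L_{\Theta}=e$.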
By noticing that
\begin{equation*}
\|f\|^2_{GS_{\Theta}} = \sum_{|\alpha| \leq N} e^{2|\alpha|} |\langle f, \Phi_{\alpha} \rangle_{L^2(\mathbb{R}^d)} |^2 \leq e^{2N} \|f\|^2_{L^2(\mathbb{R}^d)},
\end{equation*}
when $N \in \mathbb{N}$ and $f=\sum_{|\alpha| \leq N} \langle f, \Phi_{\alpha} \rangle \Phi_{\alpha}$, we deduce from \eqref{uncertainty_principle_sym}, by taking $\varepsilon = \frac{1}{2} e^{-2N}$, that
\begin{equation*}
\forall N \in \mathbb{N}, \forall f \in \mathcal{E}_N, \quad \|f\|^2_{L^2(\mathbb{R}^d)} \leq 2 D_{\frac{1}{2} e^{-2N}} \|f\|^2_{L^2(\omega)},
\end{equation*}
where $\mathcal{E}_N= \textrm{Span}_{\mathbb{C}}\big\{\Phi_{\alpha}\big\}_{\alpha \in \mathbb{N}^d, \, |\alpha|\leq N}$.
We end this section by providing some examples of functions $\Theta$, which define a sequence $\mathcal{M}=(M_p)_{p\in \mathbb{N}}$ satisfying hypotheses $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{s}$ for some $\frac{1}{2}\leq s \leq 1$. In \cite[Proposition~4.7]{AlphonseMartin}, Alphonse and the author devise the following examples in the case $s=1$:
\medskip
\begin{proposition}[\cite{AlphonseMartin}, Alphonse \& Martin]\label{ex_theta1}
Let $k\geq1$ be a positive integer and $\Theta_{k,1} : [0,+\infty)\rightarrow[0,+\infty)$ be the non-negative function defined for all $t\geq0$ by
$$\Theta_{k,1}(t) = \frac t{g(t)(g\circ g)(t)... g^{\circ k}(t)},\quad\text{where}\quad g(t) = \log(e+t),$$
with $g^{\circ k} = g\circ\ldots\circ g$ ($k$ compositions). The associated sequence $\mathcal M^{\Theta_{k,1}}=(M^{\Theta_{k,1}}_p)_{p \in \mathbb{N}}$ defined in \eqref{lc_sequence} is a quasi-analytic sequence of positive real numbers.
\end{proposition}
\medskip
Let us notice that the assumption $\text{(H2)}$ is satisfied as
\begin{equation*}
\forall k \geq 1, \forall p \in \mathbb{N}, \quad M^{\Theta_{k,1}}_p= \sup_{t \geq 0} t^p e^{-\Theta_{k,1}(t)} \geq \sup_{t \geq 0} t^p e^{-t} = \Big(\frac{p}{e}\Big)^p.
\end{equation*}
Proposition~\ref{ex_theta1} allows us to provide some examples for the whole range $\frac{1}{2} \leq s \leq 1$:
\medskip
\begin{proposition}\label{ex_qa_bertrand}
Let $k\geq1$ be a positive integer, $\frac{1}{2} \leq s \leq 1$ and $\Theta_{k,s} : [0,+\infty)\rightarrow[0,+\infty)$ be the non-negative function defined for all $t\geq0$ by
$$\Theta_{k,s}(t) = \frac {t^s}{g(t)(g\circ g)(t)... g^{\circ k}(t)},\quad\text{where}\quad g(t) = \log(e+t),$$
with $g^{\circ k} = g\circ\ldots\circ g$ ($k$ compositions). The associated sequence $\mathcal M^{\Theta_{k,s}}=(M^{\Theta_{k,s}}_p)_{p \in \mathbb{N}}$ defined in \eqref{lc_sequence} satisfies the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{s}$.
\end{proposition}
\medskip
The proof of Proposition~\ref{ex_qa_bertrand} is given in Section~\ref{qa_section}.
\subsection{Applications to the null-controllability of evolution equations}\label{null_controllability_results}
This section is devoted to stating some null-controllability results for evolution equations whose adjoint systems enjoy Gelfand-Shilov smoothing effects. Before presenting these results, let us recall the definitions and classical facts about controllability.
The notion of null-controllability is defined as follows:
\medskip
\begin{definition} [Null-controllability] Let $P$ be a closed operator on $L^2(\mathbb{R}^d)$, which is the infinitesimal generator of a strongly continuous semigroup $(e^{-tP})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$, $T>0$ and $\omega$ be a measurable subset of $\mathbb{R}^d$.
The evolution equation
\begin{equation}\label{syst_general}
\left\lbrace \begin{array}{ll}
(\partial_t + P)f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d,\ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation}
is said to be {\em null-controllable from the set $\omega$ in time} $T>0$ if, for any initial datum $f_0 \in L^{2}(\mathbb{R}^d)$, there exists a control function $u \in L^2((0,T)\times\mathbb{R}^d)$ supported in $(0,T)\times\omega$, such that the mild \emph{(}or semigroup\emph{)} solution of \eqref{syst_general} satisfies $f(T,\cdot)=0$.
\end{definition}
\medskip
By the Hilbert Uniqueness Method, see \cite{coron_book} (Theorem~2.44) or \cite{JLL_book}, the null-controllability of the evolution equation \eqref{syst_general} is equivalent to the observability of the adjoint system
\begin{equation} \label{adj_general}
\left\lbrace \begin{array}{ll}
(\partial_t + P^*)g(t,x)=0, \quad & x \in \mathbb{R}^d, \ t>0, \\
g|_{t=0}=g_0 \in L^2(\mathbb{R}^d),
\end{array}\right.
\end{equation}
where $P^*$ denotes the $L^2(\mathbb{R}^d)$-adjoint of $P$.
The notion of observability is defined as follows:
\medskip
\begin{definition} [Observability] Let $T>0$ and $\omega$ be a measurable subset of $\mathbb{R}^d$.
The evolution equation \eqref{adj_general} is said to be {\em observable from the set $\omega$ in time} $T>0$, if there exists a positive constant $C_T>0$ such that,
for any initial datum $g_0 \in L^{2}(\mathbb{R}^d)$, the mild \emph{(}or semigroup\emph{)} solution of \eqref{adj_general} satisfies
\begin{equation*}\label{eq:observability}
\int\limits_{\mathbb{R}^d} |g(T,x)|^{2} dx \leq C_T \int\limits_{0}^{T} \Big(\int\limits_{\omega} |g(t,x)|^{2} dx\Big) dt\,.
\end{equation*}
\end{definition}
\medskip
In the following, we shall always derive null-controllability results from observability estimates on adjoint systems.
\subsubsection{Null-controllability of evolution equations whose adjoint systems enjoy non symmetric Gelfand-Shilov smoothing effects}\label{null_controllability_non_symmetric_GS}
In this section, we aim at establishing null-controllability results for evolution equations whose adjoint systems enjoy Gelfand-Shilov smoothing effects. We consider a closed operator $A$ on $L^2(\mathbb{R}^d)$ which is the infinitesimal generator of a strongly continuous contraction semigroup $(e^{-tA})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$, that is, satisfying $$\forall t \geq 0, \forall f \in L^2(\mathbb{R}^d), \quad \|e^{-tA}f\|_{L^2(\mathbb{R}^d)} \leq \|f\|_{L^2(\mathbb{R}^d)},$$ and we study the evolution equation
\begin{equation}\label{PDEgeneral}
\left\lbrace \begin{array}{ll}
\partial_tf(t,x) + Af(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d, \ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d). &
\end{array}\right.
\end{equation}
We assume that the semigroup $(e^{-tA^*})_{t \geq 0}$ generated by the $L^2(\mathbb{R}^d)$-adjoint operator $A^*$ enjoys some Gelfand-Shilov smoothing effects for any positive time, that is,
\begin{equation*}\label{GS0}
\forall t >0, \forall f \in L^2(\mathbb{R}^d), \quad e^{-tA^*}f \in S^{\mu}_{\nu} (\mathbb{R}^d),
\end{equation*}
for some $\mu, \nu >0$ satisfying $\mu +\nu \geq 1$.
More precisely, we assume that the following quantitative regularizing estimates hold: there exist some constants $C \geq 1$, $r_1>0$, $r_2\geq0$, $0<t_0 \leq 1$ such that
\begin{multline}\label{GS_estimate}
\forall 0< t \leq t_0, \forall \alpha, \beta \in \mathbb{N}^d, \forall f \in L^2(\mathbb{R}^d), \\
\|x^{\alpha} \partial^{\beta}_x (e^{-tA^*} f) \|_{L^2(\mathbb{R}^d)} \leq \frac{C^{1+|\alpha|+|\beta|}}{t^{r_1(|\alpha|+|\beta|)+r_2}} (\alpha!)^{\nu} (\beta !)^{\mu} \|f\|_{L^2(\mathbb{R}^d)},
\end{multline}
where $\alpha!=\alpha_1!...\alpha_d!$ if $\alpha=(\alpha_1,...,\alpha_d) \in \mathbb{N}^d$.
In a recent work \cite{Alphonse}, Alphonse studies the smoothing effects of semigroups generated by anisotropic Shubin operators
\begin{equation*}
\mathcal{H}_{m,k} = (-\Delta_x)^m+|x|^{2k},
\end{equation*}
equipped with domains
\begin{equation*}
D(\mathcal{H}_{m,k})= \left\{f \in L^2(\mathbb{R}^d) : \mathcal{H}_{m,k} f \in L^2(\mathbb{R}^d) \right\},
\end{equation*}
when $m,k \geq 1$ are positive integers. This author establishes in \cite[Corollary~2.2]{Alphonse} the following quantitative estimates for fractional anisotropic Shubin operators:
for all $m, k \geq 1$, $s >0$, there exist some positive constants $C\geq 1$, $r_1, r_2>0$, $t_0>0$ such that
\begin{multline*}
\forall 0< t \leq t_0, \forall \alpha, \beta \in \mathbb{N}^d, \forall f \in L^2(\mathbb{R}^d), \\ \|x^{\alpha} \partial_x^{\beta} \big(e^{-t\mathcal{H}_{m,k}^s} f\big) \|_{L^2(\mathbb{R}^d)} \leq \frac{C^{1+|\alpha|+|\beta|}}{t^{r_1(|\alpha|+|\beta|)+r_2}} (\alpha!)^{\nu_{m,k,s}} (\beta !)^{\mu_{m,k,s}} \|f\|_{L^2(\mathbb{R}^d)},
\end{multline*}
with
\begin{equation*}
\nu_{m,k,s}= \max\Big(\frac{1}{2sk}, \frac{m}{k+m} \Big) \quad \text{and} \quad \mu_{m,k,s}= \max\Big(\frac{1}{2sm}, \frac{k}{k+m} \Big).
\end{equation*}
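As a simple illustration, in the case $m=k=s=1$ of the harmonic oscillator $\mathcal{H}_{1,1}=-\Delta_x+|x|^2$, these exponents reduce to
\begin{equation*}
\nu_{1,1,1}= \max\Big(\frac{1}{2},\frac{1}{2}\Big)=\frac{1}{2} \quad \text{and} \quad \mu_{1,1,1}= \max\Big(\frac{1}{2},\frac{1}{2}\Big)=\frac{1}{2},
\end{equation*}
so that the semigroup $(e^{-t\mathcal{H}_{1,1}})_{t > 0}$ regularizes into the symmetric Gelfand-Shilov space $S^{1/2}_{1/2}(\mathbb{R}^d)$, and the critical parameter $\frac{1-\mu}{\nu}$ introduced below equals $1$.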
Thanks to these quantitative estimates and the general result of null-controllability for evolution equations whose adjoint systems enjoy symmetric Gelfand-Shilov smoothing effects in \cite[Theorem~2.5]{MP}, Alphonse derives in \cite[Theorem~2.3]{Alphonse} a sufficient growth condition on the density $\rho$ to ensure the null-controllability of evolution equations associated to the Shubin operators $\mathcal{H}_{l,l}$, with $l \geq 1$, in any positive time from any measurable thick control subset with respect to $\rho$. In this work, we extend this result to general Shubin operators $\mathcal{H}_{m,k}$, with $m, k \geq 1$, and more generally establish the null-controllability of evolution equations whose adjoint systems enjoy quantitative smoothing effects in specific Gelfand-Shilov spaces $S_{\nu}^{\mu}$.
The following result shows that null-controllability holds for the evolution equations \eqref{PDEgeneral} when the parameter $\delta$ ruling the growth of the density is strictly less than the critical parameter $\delta^*=\frac{1-\mu}{\nu}$.
\begin{theorem}\label{observability_result}
Let $(A,D(A))$ be a closed operator on $L^2(\mathbb{R}^d)$ which is the infinitesimal generator of a strongly continuous contraction semigroup $(e^{-tA})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$ whose $L^2(\mathbb{R}^d)$-adjoint generates a semigroup satisfying the quantitative smoothing estimates \eqref{GS_estimate} for some $0<\mu <1 $, $\nu >0$ such that $\mu+\nu \geq 1$. Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a contraction mapping such that there exist some constants $0 \leq \delta < \frac{1-\mu}{\nu}$, $m>0$, $R>0$ so that
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad 0< m \leq \rho(x) \leq R \langle x \rangle^{\delta}.
\end{equation*}
If $\omega \subset \mathbb{R}^d$ is a measurable subset thick with respect to the density $\rho$, the evolution equation
\begin{equation*}
\left\lbrace \begin{array}{ll}
\partial_tf(t,x) + Af(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d, \ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation*}
is null-controllable from the control subset $\omega$ in any positive time $T>0$; and equivalently, the adjoint system
\begin{equation*}\label{PDEadjoint}
\left\lbrace \begin{array}{ll}
\partial_t g(t,x) + A^*g(t,x)=0, \quad & x \in \mathbb{R}^d, \ t>0, \\
g|_{t=0}=g_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation*}
is observable from the control subset $\omega$ in any positive time $T>0$. More precisely, there exists a positive constant $K=K(d, \rho, \delta, \mu, \nu) \geq 1$ such that
\begin{equation*}
\forall g \in L^2(\mathbb{R}^d), \forall T>0, \quad \|e^{-TA^*}g\|^2_{L^2(\mathbb{R}^d)} \leq K \exp\Big(\frac{K}{T^{\frac{2r_1}{1-\mu-\delta \nu}}}\Big) \int_0^T \|e^{-tA^*}g\|^2_{L^2(\omega)} dt.
\end{equation*}
\end{theorem}
\medskip
The proof of Theorem~\ref{observability_result} is given in Section~\ref{proof_obs}. It is derived from the uncertainty principles established in Theorem~\ref{specific_GS_uncertaintyprinciple} while revisiting the adapted Lebeau-Robbiano method used in \cite[Section~8.3]{BeauchardPravdaStarov} with some inspiration taken from the work of Miller \cite{Miller}.
Contrary to \cite[Theorem~2.5]{MP}, where the authors take advantage of the characterization of symmetric Gelfand-Shilov spaces through the decomposition into Hermite basis, let us stress that the above proof does not rely on a similar characterization of general Gelfand-Shilov spaces through the decomposition into an Hilbert basis composed by the eigenfunctions of a suitable operator.
In the critical case $\delta = \delta^*$, the null-controllability of the evolution equation \eqref{PDEgeneral} whose adjoint system enjoys quantitative smoothing estimates in the Gelfand-Shilov space $S_{\nu}^{\mu}$ is still an open problem.
As mentioned above, the general Shubin operators $\mathcal{H}_{m,k}$ are self-adjoint and generate strongly continuous semigroups on $L^2(\mathbb{R}^d)$, which enjoy quantitative smoothing effects. Consequently, Theorem~\ref{observability_result} can be directly applied to obtain the following null-controllability results:
\medskip
\begin{corollary}
Let $m,k \geq 1$ be positive integers, $s> \frac{1}{2m}$ and
\begin{equation*}
\delta^*_{m,k,s} := \left\lbrace \begin{array}{ll}
1 & \text{if } s \geq \frac{m+k}{2mk}, \\
\frac{k}{m} (2sm-1) & \text{if } \frac{1}{2m} < s \leq \frac{m+k}{2mk}.
\end{array}\right.
\end{equation*}
Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a contraction mapping such that there exist some constants $0 \leq \delta < \delta^*_{m,k,s}$, $m_0>0$, $R>0$ so that
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad 0< m_0 \leq \rho(x) \leq R \langle x \rangle^{\delta}.
\end{equation*}
If $\omega \subset \mathbb{R}^d$ is a measurable subset thick with respect to the density $\rho$, the evolution equation associated to the fractional Shubin operator
\begin{equation*}\label{PDE_shubin}
\left\lbrace \begin{array}{ll}
\partial_tf(t,x) + \mathcal{H}_{m,k}^s f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d, \ t>0, \\
f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), &
\end{array}\right.
\end{equation*}
is null-controllable from the control subset $\omega$ in any time $T>0$.
\end{corollary}
\medskip
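Let us briefly indicate how the threshold $\delta^*_{m,k,s}$ in the above corollary is obtained from Theorem~\ref{observability_result}. The condition $s>\frac{1}{2m}$ ensures that $\mu_{m,k,s}<1$, as required in Theorem~\ref{observability_result}, while $\mu_{m,k,s}+\nu_{m,k,s} \geq \frac{k}{k+m}+\frac{m}{k+m}=1$ always holds. When $\frac{1}{2m}<s \leq \frac{m+k}{2mk}$, the exponents recalled above reduce to $\nu_{m,k,s}=\frac{1}{2sk}$ and $\mu_{m,k,s}=\frac{1}{2sm}$, so that
\begin{equation*}
\frac{1-\mu_{m,k,s}}{\nu_{m,k,s}}=\Big(1-\frac{1}{2sm}\Big)2sk=\frac{k}{m}(2sm-1)=\delta^*_{m,k,s},
\end{equation*}
whereas, when $s \geq \frac{m+k}{2mk}$, they reduce to $\nu_{m,k,s}=\frac{m}{k+m}$ and $\mu_{m,k,s}=\frac{k}{k+m}$, in which case the critical parameter $\frac{1-\mu_{m,k,s}}{\nu_{m,k,s}}$ equals $1=\delta^*_{m,k,s}$.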
\section{Proof of the uncertainty principles}
\subsection{Proof of Theorem~\ref{general_uncertaintyprinciple}}\label{proof_mainprop}
This section is devoted to the proof of Theorem~\ref{general_uncertaintyprinciple}.
Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a positive contraction mapping such that there exist some positive constants $m>0$, $R>0$ so that
\begin{equation}\label{rho_condi}
\forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R \left\langle x \right\rangle.
\end{equation}
Let $\omega \subset \mathbb{R}^d$ be a measurable subset $\gamma$-thick with respect to the density $\rho$, that is,
\begin{equation*}\label{thick_rho}
\exists 0 < \gamma \leq 1, \forall x \in \mathbb{R}^d, \quad |\omega \cap B(x,\rho(x))| \geq \gamma |B(x,\rho(x))|=\gamma \rho(x)^d |B(0,1)|,
\end{equation*}
where $B(x,r)$ denotes the Euclidean ball centered at $x \in \mathbb{R}^d$ with radius $r>0$, and where $|\cdot|$ denotes the Lebesgue measure.
Since $\rho$ is a positive contraction mapping, Lemma~\ref{slowmet} in Appendix and the remark made after the statement of this result show that the family of norms $(\|\cdot\|_x)_{x \in \mathbb{R}^d}$ given by
\begin{equation*}
\forall x \in \mathbb{R}^d, \forall y \in \mathbb{R}^d, \quad \|y\|_x=\frac{\|y\|}{\rho(x)},
\end{equation*}
where $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^d$, defines a slowly varying metric on $\mathbb{R}^d$.
\subsubsection{Step 1. Bad and good balls} By using Theorem~\ref{slowmetric} in the Appendix, we can find a sequence $(x_k)_{k \geq 0}$ in $\mathbb{R}^d$ such that
\begin{multline}\label{recov}
\exists K_0 \in \mathbb{N}, \forall (i_1, ..., i_{K_0+1}) \in \mathbb{N}^{K_0+1} \textrm{ with } i_k \neq i_l \textrm{ if }1 \leq k \neq l \leq K_0+1, \\ \bigcap \limits_{k=1}^{K_0+1} {B_{i_k}}=\emptyset
\end{multline}
and
\begin{equation}\label{recov1}
\mathbb{R}^d=\bigcup_{k=0}^{+\infty} {B_k},
\end{equation}
where
\begin{equation}\label{asdf1}
B_k=\{y \in \mathbb{R}^d:\ \|y-x_k\|_{x_k} <1\}=\{y \in \mathbb{R}^d:\ \|y-x_k\| <\rho(x_k)\}=B(x_k,\rho(x_k)).
\end{equation}
Let us notice from Theorem~\ref{slowmetric} that the non-negative integer $K_0 =K_0(d, L)$ only depends on the dimension $d$ and on the Lipschitz constant $L$ of $\rho$, since the constant $C \geq 1$ appearing in the slowness condition (\ref{equiv}) can be taken equal to $C=\frac{1}{1-L}$.
It follows from (\ref{recov}) and (\ref{recov1}) that
\begin{equation}\label{asdf2}
\forall x \in \mathbb{R}^d, \quad 1 \leq \sum \limits_{k=0}^{+\infty} \mathbbm{1}_{B_k} (x) \leq K_0,
\end{equation}
where $\mathbbm{1}_{B_k}$ denotes the characteristic function of $B_k$.
We deduce from (\ref{asdf2}) and the Fubini-Tonelli theorem that for all $g \in L^2(\mathbb{R}^d)$,
\begin{equation*}
\|g\|_{L^2(\mathbb{R}^d)}^2 = \int_{\mathbb{R}^d}|g(x)|^2dx \leq \sum_{k=0}^{+\infty}\int_{B_k}|g(x)|^2dx \leq K_0 \|g\|_{L^2(\mathbb{R}^d)}^2.
\end{equation*}
Let $f \in GS_{\mathcal{N}, \rho} \setminus \{0\}$ and $\varepsilon>0$. We divide the family of balls $(B_k)_{k \geq 0}$ into families of good and bad balls. A ball $B_k$, with $k \in \mathbb{N}$, is said to be good if it satisfies
\begin{equation}\label{good_ball}
\forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad
\int_{B_k} |\rho(x)^{p} \partial_{x}^{\beta} f(x)|^2 dx
\leq \varepsilon^{-1}2^{2(p+|\beta|)+d+1} K_0 N^{2}_{p, |\beta|} \int_{B_k} |f(x)|^2 dx.
\end{equation}
On the other hand, a ball $B_k$, with $k \in \mathbb{N}$, which is not good, is said to be bad, that is, when
\begin{multline}\label{bad_ball}
\exists (p_0, \beta_0) \in \mathbb{N} \times \mathbb{N}^d, \\
\int_{B_k} |\rho(x)^{p_0} \partial_{x}^{\beta_0} f(x)|^2 dx
> \varepsilon^{-1}2^{2(p_0+|\beta_0|)+d+1} K_0 N^{2}_{p_0, |\beta_0|} \int_{B_k} |f(x)|^2 dx.
\end{multline}
If $B_k$ is a bad ball, it follows from \eqref{bad_ball} that there exists $(p_0, \beta_0) \in \mathbb{N} \times \mathbb{N}^d$ such that
\begin{multline}\label{gh05}
\int_{B_k}|f(x)|^2dx \leq
\frac{\varepsilon}{2^{2(p_0+|\beta_0|)+d+1} K_0 N^{2}_{p_0, |\beta_0|}}\int_{B_k} \rho(x)^{2p_0}|\partial_x^{\beta_0}f(x)|^2dx \\ \leq \sum_{(p,\beta) \in \mathbb{N}\times \mathbb{N}^d} \frac{\varepsilon}{2^{2(p+|\beta|)+d+1} K_0 N^{2}_{p, |\beta|} }\int_{B_k}\rho(x)^{2p}|\partial_x^{\beta}f(x)|^2dx.
\end{multline}
By summing over all the bad balls and by using from (\ref{recov}) that
\begin{equation*}
\mathbbm{1}_{\bigcup_{\textrm{bad balls}} B_k} \leq \sum_{\textrm{bad balls}} \mathbbm{1}_{B_k} \leq K_0 \mathbbm{1}_{\bigcup_{\textrm{bad balls}} B_k},
\end{equation*} we deduce from (\ref{gh05}) and the Fubini-Tonelli theorem that
\begin{multline}\label{gh6}
\int_{\bigcup_{\textrm{bad balls}} B_k}|f(x)|^2dx \leq \sum_{\textrm{bad balls}}\int_{B_k}|f(x)|^2dx
\\ \leq \sum_{(p,\beta) \in \mathbb{N}\times \mathbb{N}^d}\frac{\varepsilon}{2^{2(p+|\beta|)+d+1} N^{2}_{p, |\beta|} } \int_{\bigcup_{\textrm{bad balls}} B_k} \hspace{-8mm} |\rho(x)^{p} \partial_x^{\beta} f(x)|^2dx.
\end{multline}
By using that the number of solutions to the equation $p+\beta_1+...+\beta_{d}=m$, with $m \geq 0$, $d \geq 1$ and unknowns $p \in \mathbb{N}$ and $\beta=(\beta_1,...,\beta_d) \in \mathbb{N}^{d}$, is given by $\binom{m+d}{m}$, we obtain from (\ref{gh6}) that
\begin{multline}\label{gh6y}
\int_{\bigcup_{\textrm{bad balls}} B_k}|f(x)|^2dx \leq \varepsilon \sum_{m \geq 0} \frac{\binom{m+d}{m}}{2^{d+1} 4^m} \|f\|_{GS_{\mathcal{N},\rho}}^2 \\
\leq \varepsilon \sum_{m \geq 0} \frac{2^{m+d}}{2^{d+1} 4^m} \|f\|^2_{GS_{\mathcal{N},\rho}}= \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}},
\end{multline}
since
\begin{equation*}\label{gh45}
\binom{m+d}{m} \leq \sum_{j=0}^{m+d}\binom{m+d}{j}=2^{m+d}.
\end{equation*}
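The last equality in \eqref{gh6y} follows from summing the geometric series
\begin{equation*}
\sum_{m \geq 0} \frac{2^{m+d}}{2^{d+1}4^m}= \frac{1}{2}\sum_{m \geq 0} \Big(\frac{1}{2}\Big)^m=1.
\end{equation*}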
Recalling from (\ref{recov1}) that
$$ 1 \leq \mathbbm{1}_{\bigcup_{\textrm{bad balls}}B_k}+ \mathbbm{1}_{\bigcup_{\textrm{good balls}}B_k},$$
we notice that
\begin{equation}\label{asdf5}
\|f\|_{L^2(\mathbb{R}^d)}^2 \leq \int_{\bigcup_{\textrm{good balls}} B_k}|f(x)|^2dx+ \int_{\bigcup_{\textrm{bad balls}} B_k}|f(x)|^2dx.
\end{equation}
It follows from (\ref{gh6y}) and (\ref{asdf5}) that
\begin{equation}\label{gh7}
\|f\|_{L^2(\mathbb{R}^d)}^2 \leq \int_{\bigcup_{\textrm{good balls}} B_k}|f(x)|^2dx+ \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}}.
\end{equation}
\subsubsection{Step 2. Properties of good balls}
As the ball $B(0,1)$ is a Euclidean ball, the Sobolev embedding
$$W^{d,2}(B(0,1)) \xhookrightarrow{} L^{\infty}(B(0,1)),$$
see e.g.~\cite{adams} (Theorem~4.12), implies that there exists a positive constant $C_{d}\geq 1$ depending only on the dimension $d \geq 1$ such that
\begin{equation}\label{sobolev}
\forall u \in W^{d,2}(B(0,1)), \quad
\|u\|_{L^{\infty}(B(0,1))} \leq C_{d} \|u\|_{W^{d,2}(B(0,1))}.
\end{equation}
By translation invariance and homogeneity of the Lebesgue measure, it follows from (\ref{rho_condi}), (\ref{asdf1}) and (\ref{sobolev}) that for all $u \in {W^{d,2}(B_k)}$,
\begin{multline*}
\|u\|^2_{L^{\infty}(B_k)}=\|x \mapsto u(x_k+x \rho(x_k))\|^2_{L^{\infty}(B(0,1))}
\leq C_{d}^2 \|x \mapsto u(x_k+x \rho(x_k))\|^2_{W^{d,2}(B(0,1))} \\
=C_{d}^2 \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k} \rho(x_k)^{2|\alpha|-d} |\partial^{\alpha}_x u(x)|^2 dx
=C_{d}^2 \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k}m^{2|\alpha|-d} \Big( \frac{\rho(x_k)}{m} \Big)^{2|\alpha|-d} |\partial^{\alpha}_x u(x)|^2 dx\end{multline*}
and
\begin{multline}{\label{se1}}
\|u\|^2_{L^{\infty}(B_k)}
\leq C_{d}^2 \max(m,m^{-1})^d \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k} \Big( \frac{\rho(x_k)}{m} \Big)^{d} |\partial^{\alpha}_x u(x)|^2 dx\\
= C_{d}^2 \max(1,m^{-1})^{2d} \rho(x_k)^{d} \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k} |\partial^{\alpha}_x u(x)|^2 dx.
\end{multline}
We deduce from (\ref{se1}) that for all $u \in {W^{d,2}(B_k)}$,
\begin{equation}{\label{se2}}
\|u\|_{L^{\infty}(B_k)} \leq C_{d} \max(1,m^{-1})^{d} \rho(x_k)^{\frac{d}{2}} \|u\|_{W^{d,2}(B_k)}.
\end{equation}
Let $B_k$ be a good ball. By using the fact that $\rho$ is an $L$-Lipschitz function, we notice that
\begin{equation}{\label{equi}}
\forall x \in B_k=B(x_k,\rho(x_k)), \quad 0 < \rho(x_k) \leq \frac{1}{1-L} \rho(x).
\end{equation}
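This estimate is a direct consequence of the Lipschitz property of $\rho$: for all $x \in B_k$,
\begin{equation*}
\rho(x_k) \leq \rho(x)+L\|x-x_k\| \leq \rho(x)+L\rho(x_k),
\end{equation*}
which gives $(1-L)\rho(x_k) \leq \rho(x)$ since $0<L<1$.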
We deduce from (\ref{se2}) and (\ref{equi}) that for all $\beta \in \mathbb{N}^d$ and $k \in \mathbb{N}$ such that $B_k$ is a good ball
\begin{align}\label{gh30}
& \ \rho(x_k)^{|\beta|+ \frac{d}{2}}\|\partial_x^{\beta}f\|_{L^{\infty}(B_k)} \\ \notag
\leq & \ C_d \max(1,m^{-1})^{d} \rho(x_k)^{|\beta|+ d}\Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \ |\tilde{\beta}| \leq d}}\|\partial_x^{\beta+\tilde{\beta}}f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}}\\ \notag
= & \ C_d \max(1,m^{-1})^{d} \Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \ |\tilde{\beta}| \leq d}}\| \rho(x_k)^{|\beta|+ d}\partial_x^{\beta+\tilde{\beta}}f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}} \\ \notag
\leq & \ C_d \max(1,m^{-1})^{d} \frac{1}{(1-L)^{|\beta|+d}} \Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \ |\tilde{\beta}| \leq d}}\| \rho(x)^{|\beta|+ d}\partial_x^{\beta+\tilde{\beta}}f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}}.
\end{align}
By using (\ref{rho_condi}) and the definition of good balls (\ref{good_ball}), it follows from (\ref{gh30}) and the fact that $\mathcal{N}$ is non-decreasing with respect to the two indexes that for all $\beta \in \mathbb{N}^d$ and $k \in \mathbb{N}$ such that $B_k$ is a good ball
\begin{align}\label{asdf7}
& \ \rho(x_k)^{|\beta|+ \frac{d}{2}}\|\partial_x^{\beta}f\|_{L^{\infty}(B_k)} \\ \notag
\leq & \ C_d \max(1,m^{-1})^{d} \frac{1}{(1-L)^{|\beta|+d}} \Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \\ |\tilde{\beta}| \leq d}} \varepsilon^{-1} 2^{2(2|\beta|+ |\tilde{\beta}|)+3d+1} K_0 N^{2}_{|\beta|+d, |\beta|+ |\tilde{\beta}|} \|f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}} \\ \notag
\leq & \ \varepsilon^{-\frac{1}{2}} K_{d,m, L} \Big(\frac{4}{1-L} \Big)^{|\beta|} N_{|\beta|+d, |\beta|+d} \|f\|_{L^{2}(B_k)},
\end{align}
with $$K_{d,m, L}= C_d \max(1,m^{-1})^d \sqrt{2K_0}\Big(\frac{4\sqrt{2}}{1-L} \Big)^d (d+1)^{\frac{d}{2}} \geq 1,$$
since $C_d \geq 1$.
\subsubsection{Step 3. Recovery of the $L^2(\mathbb{R}^d)$-norm} Let $B_k$ be a good ball. Let us assume that $\|f\|_{L^2(B_k)} \neq 0$. We can therefore define the following function
\begin{equation}\label{gh13b}
\forall y \in B(0,1), \quad \phi(y)=\varepsilon^{\frac{1}{2}}\rho(x_k)^{\frac{d}{2}}\frac{f(x_k+\rho(x_k) y)}{ K_{d,m,L} N_{d,d} \|f\|_{L^2(B_k)}}.
\end{equation}
We observe that
\begin{equation*}
\|\phi\|_{L^{\infty}(B(0,1))} = \varepsilon^{\frac{1}{2}}\rho(x_k)^{\frac{d}{2}}\frac{\|f\|_{L^{\infty}(B_k)}}{K_{d,m,L} N_{d,d} \|f\|_{L^2(B_k)}} \geq \frac{\varepsilon^{\frac{1}{2}}}{|B(0,1)|^{\frac{1}{2}}K_{d,m,L} N_{d,d}},
\end{equation*}
and
\begin{equation}\label{qa1}
\forall \beta \in \mathbb{N}^d, \quad \| \partial_x^{\beta} \phi \|_{L^{\infty}(B(0,1))} = \frac{\varepsilon^{\frac{1}{2}} \rho(x_k)^{|\beta|+\frac{d}{2}} \|\partial_x^{\beta} f\|_{L^{\infty}(B_k)}}{K_{d,m,L} N_{d,d}\|f\|_{L^2(B_k)}}.
\end{equation}
It follows from \eqref{asdf7} and \eqref{qa1} that
\begin{equation}\label{qa2}
\forall \beta \in \mathbb{N}^d, \quad \| \partial_x^{\beta} \phi \|_{L^{\infty}(B(0,1))} \leq \Big( \frac{4}{1-L} \Big)^{|\beta|} \frac{N_{|\beta|+d,|\beta|+d}}{N_{d,d}}.
\end{equation}
We deduce from \eqref{qa2} that $\phi \in \mathcal{C}_{\mathcal{M}'}(B(0,1))$ with $$\mathcal{M}'=(M'_p)_{p \in \mathbb{N}}= \Big(\Big( \frac{4}{1-L} \Big)^{p} \frac{N_{p+d,p+d}}{N_{d,d}}\Big)_{p \in \mathbb{N}}.$$ The assumption that the diagonal sequence $\mathcal{M}=(N_{p,p})_{p \in \mathbb{N}}$ is logarithmically-convex and quasi-analytic implies that these two properties hold true as well for the sequence $\mathcal{M}'$. Indeed, the logarithmic convexity of $\mathcal{M}'$ is straightforward and since
\begin{equation*}
\sum_{p=0}^{+\infty} \frac{M'_p}{M'_{p+1}} = \frac{1-L}{4} \sum_{p=d}^{+\infty} \frac{N_{p,p}}{N_{p+1,p+1}},
\end{equation*}
we deduce from the quasi-analyticity of $\mathcal{M}$ and from the Denjoy-Carleman's Theorem (Theorem~\ref{Den_Carl_thm}) that the sequence $\mathcal{M}'$ is also quasi-analytic.
Furthermore, we observe from the definition of the Bang degree \eqref{Bang} and the equality
\begin{equation*}
\forall 0<t \leq 1, \forall n \in \mathbb{N}^*, \quad \sum_{-\log t < p \leq n} \frac{M'_{p-1}}{M'_{p}} = \frac{1-L}{4} \sum_{-\log(t e^{-d})< p \leq n+d} \frac{N_{p-1,p-1}}{N_{p,p}}
\end{equation*}
that
\begin{equation}\label{bang1309}
\forall 0< t \leq 1, \quad n_{t, \mathcal{M}',2e d} = n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e} -d \leq n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}.
\end{equation}
Setting
\begin{equation*}\label{m_10}
E_k=\Big\{\frac{x-x_k}{\rho(x_k)} \in B(0,1): \ x \in B_k \cap \omega \Big\} \subset B(0,1),
\end{equation*}
we notice from \eqref{asdf1} that
\begin{equation}\label{m_11}
|E_k| = \frac{|\omega \cap B_k|}{\rho(x_k)^d} \geq \frac{\gamma |B_k|}{\rho(x_k)^d} \geq \gamma |B(0,1)| >0,
\end{equation}
since $\omega$ is $\gamma$-thick with respect to $\rho$ and $B_k=B(x_k,\rho(x_k))$. From now on, we shall assume that
\begin{equation}\label{small_eps}
0< \varepsilon \leq N^2_{0,0}.
\end{equation}
We deduce from \eqref{bang1309} and Proposition~\ref{NSV_multid_L2} applied with the function $\phi$ and the measurable subset $E_k$ of the bounded convex open ball $B(0,1)$ that there exists a positive constant $D_{\varepsilon}=D\big(\varepsilon, \mathcal{N}, d, \gamma, L,m\big)>1$ independent of $\phi$ and $k$ such that
\begin{equation}\label{qa3}
\int_{B(0,1)} |\phi(x)|^2 dx \leq D_{\varepsilon} \int_{E_k} |\phi(x)|^2 dx,
\end{equation}
with
\begin{equation*}
D_{\varepsilon}= \frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}'}\big(2n_{t, \mathcal{M}', 2ed}\big) \bigg)^{4n_{t, \mathcal{M}', 2ed}},
\end{equation*}
and
\begin{equation*}
t= \frac{\varepsilon^{\frac{1}{2}}}{\max\big(1,|B(0,1)|^{\frac{1}{2}}\big)K_{d,m,L} N_{d,d}}.
\end{equation*}
Notice that from \eqref{small_eps}, we have $$0< t \leq 1,$$
since $K_{d,m,L} \geq 1$ and $\mathcal{N}$ is non-decreasing with respect to the two indexes. Let us also notice from the definitions in \eqref{def_gamma} that
$$ \forall n \geq 1, \quad 1 \leq \Gamma_{\mathcal{M}'}(n) \leq \Gamma_{\mathcal{M}}(n+d).$$
We deduce from \eqref{bang1309} and the non-decreasing property of $\Gamma_{\mathcal{M}'}$ that
\begin{align}\label{C_eps}
D_{\varepsilon} & \leq \frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{t, \mathcal{M}', 2ed}+d\big) \bigg)^{4n_{t, \mathcal{M}', 2ed}} \\\nonumber
& \leq \frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}\big) \bigg)^{4n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}}.
\end{align}
Let us denote $$C_{\varepsilon}=\frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}\big) \bigg)^{4n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}}.$$
We deduce from \eqref{gh13b}, \eqref{m_11}, \eqref{qa3} and \eqref{C_eps} that
\begin{equation}\label{qa4}
\int_{B_k} |f(x)|^2 dx \leq C_{\varepsilon} \int_{\omega \cap B_k} |f(x)|^2 dx.
\end{equation}
Let us notice that the above estimate holds as well when $\| f \|_{L^2(B_k)}=0$.
By using anew from (\ref{recov}) that
\begin{equation*}
\mathbbm{1}_{\bigcup_{\textrm{good balls}} B_k} \leq \sum \limits_{\textrm{good balls}} \mathbbm{1}_{B_k} \leq K_0 \mathbbm{1}_{\bigcup_{\textrm{good balls}} B_k},
\end{equation*}
it follows from (\ref{gh7}) and (\ref{qa4}) that
\begin{align*}\label{gh56y}
\|f\|_{L^2(\mathbb{R}^d)}^2 & \leq \int_{\bigcup_{\textrm{good balls}} B_k}|f(x)|^2 dx + \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}} \\ \notag
& \leq \sum_{\textrm{good balls}}\|f\|_{L^{2}(B_k)}^2 + \varepsilon \|f\|^2_{GS_{\mathcal{N}, \rho}} \\ \notag
& \leq C_{\varepsilon} \sum_{\textrm{good balls}} \int_{\omega \cap B_k} |f(x)|^2 dx + \varepsilon \|f\|^2_{GS_{\mathcal{N}, \rho}} \\ \notag
& \leq K_0 C_{\varepsilon} \int_{\omega \cap \big(\bigcup_{\textrm{good balls}} B_k\big)}|f(x)|^2 dx +\varepsilon \|f\|^2_{GS_{\mathcal{N}, \rho}}.
\end{align*}
The last inequality readily implies that
\begin{equation*}
\|f \|^2_{L^2(\mathbb{R}^d)} \leq K_0 C_{\varepsilon} \| f \|^2_{L^2(\omega)} +\varepsilon \| f \|^2_{GS_{\mathcal{N}, \rho}}.
\end{equation*}
This ends the proof of Theorem~\ref{general_uncertaintyprinciple}.
\subsection{Proof of Theorem~\ref{specific_GS_uncertaintyprinciple}}\label{proof2}
Let $A \geq 1$, $0< \mu \leq 1$, $\nu >0$ with $\mu+\nu \geq 1$ and $0 \leq \delta \leq \frac{1-\mu}{\nu} \leq 1$. Let $f \in \mathscr{S}(\mathbb{R}^d)$. We first notice that if the quantity
\begin{equation*}
\sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}}
\end{equation*}
is infinite, then the result of Theorem~\ref{specific_GS_uncertaintyprinciple} clearly holds. We can therefore assume that
\begin{equation}\label{bernst_estim_gs}
\sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}} <+\infty.
\end{equation}
By assumption, we have
\begin{equation}\label{rho_assum}
\exists m,R >0, \forall x \in \mathbb{R}^d, \quad 0< m \leq \rho(x) \leq R \langle x \rangle^{\delta}.
\end{equation}
We deduce from \eqref{bernst_estim_gs}, \eqref{rho_assum} and Lemma~\ref{interpolation} that
\begin{align*}
\forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad & \|\rho(x)^p \partial_x^{\beta} f\|_{L^2(\mathbb{R}^d)} \leq R^p \| \langle x \rangle^{\delta p} \partial^{\beta}_x f\|_{L^2(\mathbb{R}^d)}\\
& \leq R^p (4^{\nu}e^{\nu}A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu} \sup_{q \in \mathbb{N}, \gamma \in \mathbb{N}^d} \frac{\|\langle x \rangle^q \partial_x^{\gamma} f \|_{L^2(\mathbb{R}^d)}}{A^{q+|\gamma|} (q!)^{\nu} (|\gamma|!)^{\mu}} \\
& \leq (4^{\nu}e^{\nu}\max(1,R) A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu} \sup_{q \in \mathbb{N}, \gamma \in \mathbb{N}^d} \frac{\|\langle x \rangle^q \partial_x^{\gamma} f \|_{L^2(\mathbb{R}^d)}}{A^{q+|\gamma|} (q!)^{\nu} (|\gamma|!)^{\mu}},
\end{align*}
which implies that
\begin{equation*}
\|f\|_{GS_{\mathcal{N}, \rho}} \leq \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}} < +\infty,
\end{equation*}
with the non-decreasing sequence $$\mathcal{N}=\Big(\big(4^{\nu} e^{\nu}\max(1,R)A\big)^{p+q}(p!)^{\delta \nu} (q!)^{\mu}\Big)_{(p,q) \in \mathbb{N}^2}.$$ The assumption $0\leq \delta \leq \frac{1-\mu}{\nu}$ implies that the diagonal sequence $$\mathcal{M}=(M_p)_{p \in \mathbb{N}}=\Big(\big(4^{\nu} e^{\nu}\max(1,R)A\big)^{2p} (p!)^{\delta \nu + \mu}\Big)_{p \in \mathbb{N}}$$ is a logarithmically convex quasi-analytic sequence thanks to the Denjoy-Carleman's theorem (Theorem~\ref{Den_Carl_thm}) and since $\delta \nu +\mu \leq 1$. Since $\omega$ is assumed to be $\gamma$-thick with respect to $\rho$ for some $0<\gamma \leq 1$, we deduce from Theorem~\ref{general_uncertaintyprinciple} that there exist some constants $K=K(d, \rho, \delta, \mu, \nu)\geq 1, K'=K'(d, \rho, \gamma)\geq1, r=r(d, \rho) \geq 1$ so that for all $0< \varepsilon \leq M^2_0=1$,
\begin{multline}\label{appl_thm_up}
\|f\|^2_{L^2(\mathbb{R}^d)} \leq C_{\varepsilon} \| f\|^2_{L^2(\omega)} + \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}} \\
\leq C_{\varepsilon} \|f\|^2_{L^2(\omega)} + \varepsilon \bigg(\sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}}\bigg)^2,
\end{multline}
where
\begin{equation}\label{c_eps1609}
C_{\varepsilon} =K' \bigg(\frac{2d}{\gamma}\Gamma_{\mathcal{M}}(2n_{t_0,\mathcal{M},r}) \bigg)^{4n_{t_0,\mathcal{M}, r}}
\end{equation}
with
\begin{equation*}
0< t_0=\frac{\varepsilon^{\frac{1}{2}}}{K A^{2d}} \leq 1.
\end{equation*}
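Let us also point out that the quasi-analyticity of the diagonal sequence $\mathcal{M}$ used above can be checked directly from the Denjoy-Carleman theorem (Theorem~\ref{Den_Carl_thm}): since $\delta \nu+\mu \leq 1$,
\begin{equation*}
\sum_{p=1}^{+\infty} \frac{M_{p-1}}{M_p}= \big(4^{\nu}e^{\nu}\max(1,R)A\big)^{-2}\sum_{p=1}^{+\infty} \frac{1}{p^{\delta \nu+\mu}}=+\infty.
\end{equation*}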
Direct computations
\begin{equation*}
\forall N \geq 1, \quad \sum_{p > -\log t_0}^N \frac{M_{p-1}}{M_{p}} = (4^{\nu} e^{\nu}\max(1,R) A)^{-2} \sum_{p > -\log t_0}^{N} \frac{(p-1)!^{\delta \nu+\mu}}{p!^{\delta \nu+\mu}},
\end{equation*}
show that
\begin{equation*}
n_{t_0, \mathcal{M}, r} = n_{t_0, \mathcal{M}_{\delta \nu+\mu}, r'A^2},
\end{equation*}
with
\begin{equation*}
\mathcal{M}_{\delta \nu+\mu} = \big((p!)^{\delta \nu + \mu} \big)_{p \in \mathbb{N}} \quad \text{and} \quad r'=r 16^{\nu} e^{2\nu} \max(1,R^2).
\end{equation*}
By using from Lemma~\ref{ex_qa_sequence} that $\Gamma_{\mathcal{M}_{\delta \nu +\mu}}$ is bounded, it follows that there exists a positive constant $D \geq 1$ such that
\begin{equation*}
\forall n \in \mathbb{N}^*, \quad \Gamma_{\mathcal{M}}(n) = \Gamma_{\mathcal{M}_{\delta \nu +\mu}}(n) \leq D.
\end{equation*}
By using anew Lemma~\ref{ex_qa_sequence} and \eqref{c_eps1609}, we deduce that when $0 \leq \delta \nu+\mu <1$,
\begin{align}\label{ceps}
0< C_{\varepsilon} & \leq K'\bigg(\frac{2d}{\gamma} D \bigg)^{4n_{t_0, \mathcal{M}_{\delta \nu +\mu}, r' A^2}}\\ \nonumber
& \leq K'\bigg(\frac{2d}{\gamma} D \bigg)^{ 2^{\frac{1}{1-\delta \nu-\mu}+2} \big( 1-\log t_0+(A^2 r')^{\frac{1}{1-\delta \nu-\mu}}\big)} \\ \nonumber
& =K'\bigg(\frac{2d}{\gamma} D \bigg)^{2^{\frac{1}{1-\delta \nu-\mu}+2} \big(1+\log K+ 2d\log A-\frac{1}{2}\log \varepsilon+(A^2 r')^{\frac{1}{1-\delta \nu-\mu}} \big)}.
\end{align}
Since $0 \leq \log A \leq A \leq A^{\frac{2}{1-\delta \nu-\mu}}$ and $\log \varepsilon \leq 0$, it follows from \eqref{ceps} that
\begin{equation*}\label{ceps2}
0< C_{\varepsilon} \leq K'\bigg(\frac{2d}{\gamma} D \bigg)^{D'\big(1-\log \varepsilon +A^{\frac{2}{1-\delta \nu-\mu}} \big)},
\end{equation*}
with $D'= 2^{\frac{1}{1-\delta \nu-\mu}+2}\max \Big(1+\log K, 2d+ {r'}^{\frac{1}{1-\delta \nu-\mu}},\frac{1}{2}\Big)$.
On the other hand, when $\delta \nu +\mu =1$, Lemma~\ref{ex_qa_sequence} and the estimates \eqref{c_eps1609} imply that
\begin{align}\label{ceps3}
0< C_{\varepsilon} & \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{4n_{t_0, \mathcal{M}_{1}, r'A^2}}\\ \notag
& \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{4(1-\log t_0)e^{r' A^2}}\\ \notag
& \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{4(1+\log K+2d \log A-\frac{1}{2}\log \varepsilon)e^{r' A^2}}.
\end{align}
Setting $D'=4 \max\big(1+\log K, 2d, r', \frac{1}{2}\big)$, we obtain from \eqref{ceps3}
\begin{equation*}
0< C_{\varepsilon} \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{D'(1+ \log A-\log \varepsilon)e^{D'A^2}}.
\end{equation*}
This ends the proof of Theorem~\ref{specific_GS_uncertaintyprinciple}.
\subsection{Proof of Theorem~\ref{uncertainty_principle}}\label{thm_hermite_proof}
Let $0 \leq \delta \leq 1$ and $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function such that the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{\frac{1+\delta}{2}}$. Beforehand, let us notice that the logarithmic convexity property of the sequence $(M_p)_{p \in \mathbb{N}}$, that is,
\begin{equation*}
\forall p \in \mathbb{N}^*, \quad M_{p}^2 \leq M_{p+1} M_{p-1},
\end{equation*}
implies that
\begin{equation*}
\forall p \in \mathbb{N}^*, \quad \frac{M_p}{M_{p+1}} \leq \frac{M_{p-1}}{M_p} \leq \frac{M_0}{M_1},
\end{equation*}
since $M_p >0$ for all $p \in \mathbb{N}$. It follows that the modified sequence $(M'_p)_{p \in \mathbb{N}}= \big(\big(\frac{M_0}{M_1}\big)^p M_p \big)_{p \in \mathbb{N}}$ is a non-decreasing logarithmically convex sequence.
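Indeed, the above inequalities provide
\begin{equation*}
\forall p \in \mathbb{N}^*, \quad \frac{M'_{p+1}}{M'_p}= \frac{M_0}{M_1}\,\frac{M_{p+1}}{M_p} \geq \frac{M_0}{M_1}\,\frac{M_1}{M_0}=1,
\end{equation*}
the case $p=0$ being immediate as $M'_1=\frac{M_0}{M_1}M_1=M_0=M'_0$, while the logarithmic convexity of $(M'_p)_{p \in \mathbb{N}}$ is directly inherited from the one of $(M_p)_{p \in \mathbb{N}}$, since the two sequences only differ by a geometric factor.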
Let $f \in GS_{\Theta}$ and $s=\frac{1+\delta}{2}$. According to Proposition~\ref{bernstein_estim1}, there exists a positive constant $D_{\Theta, d, \delta}\geq 1$ independent of $f$, such that for all $r \geq 0$, $\beta \in \mathbb{N}^d$,
\begin{align*}
\|\langle x \rangle^{r} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} & \leq (D_{\Theta,d,\delta})^{1+r+|\beta|} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}} \\
& =(D_{\Theta,d,\delta})^{1+r+|\beta|} \Big(\frac{M_1}{M_0}\Big)^{s\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor+s} \Big(M'_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}} \\
& \leq D'^{1+r+|\beta|} \Big(M'_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}},
\end{align*}
where $D'=D'( \Theta, d , \delta) \geq 1$ is a new positive constant. Since the sequence $(M'_p)_{p \in \mathbb{N}}$ is non-decreasing and $\frac{1}{2} \leq s \leq 1$, we deduce that
\begin{equation*}
\forall r \geq0, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^{r} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq D'^{1+r+|\beta|} \Big(M'_{\left\lfloor \frac{r+|\beta|}{2s} \right\rfloor +2d+4}\Big)^s \|f\|_{GS_{\Theta}}.
\end{equation*}
This implies that $f \in GS_{\mathcal{N}, \rho}$ and
$$\| f \|_{GS_{\mathcal{N}, \rho}} \leq \|f\|_{GS_{\Theta}}$$ with $$\mathcal{N}=(N_{p,q})_{(p,q) \in \mathbb{N}^2}=\Big(\max(R,1)^p D'^{1+p+q} \Big(M'_{\left\lfloor \frac{\delta p+q}{1+\delta} \right\rfloor +2d+4}\Big)^{\frac{1+\delta}{2}}\Big)_{(p,q) \in \mathbb{N}^2}.$$
We conclude by applying Theorem~\ref{general_uncertaintyprinciple}. To that end, we notice that the non-decreasing property of the sequence $(M'_p)_{p \in \mathbb{N}}$ ensures that the sequence $\mathcal{N}$ is non-decreasing with respect to the two indexes. We deduce from the Denjoy-Carleman Theorem (Theorem~\ref{Den_Carl_thm}) and assumption $\text{(H3)}_{\frac{1+\delta}{2}}$ that the diagonal sequence $$(N_{p,p})_{p \in \mathbb{N}} = \big(\max(R,1)^pD'^{1+2p} (M'_{p +2d+4})^{\frac{1+\delta}{2}}\big)_{p \in \mathbb{N}},$$ is quasi-analytic since
\begin{multline*}
\sum_{p=0}^{+\infty} \frac{N_{p,p}}{N_{p+1,p+1}}= \frac{1}{ \max(R,1)D'^2} \sum_{p=2d+4}^{+\infty} \Big(\frac{{M'_p}}{{M'_{p+1}}}\Big)^{\frac{1+\delta}{2}} \\= \frac{1}{\max(R,1) D'^2} \Big(\frac{M_1}{M_0}\Big)^{\frac{1+\delta}{2}} \sum_{p=2d+4}^{+\infty} \Big(\frac{{M_p}}{{M_{p+1}}}\Big)^{\frac{1+\delta}{2}}=+\infty.
\end{multline*}
The result of Theorem~\ref{uncertainty_principle} then follows from Theorem~\ref{general_uncertaintyprinciple}. It ends the proof of Theorem~\ref{uncertainty_principle}.
\section{Proof of Theorem~\ref{observability_result}}\label{proof_obs}
This section is devoted to the proof of the null-controllability result given by Theorem~\ref{observability_result}.
Let $(A,D(A))$ be a closed operator on $L^2(\mathbb{R}^d)$ which is the infinitesimal generator of a strongly continuous contraction semigroup $(e^{-tA})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$, and assume that the adjoint semigroup satisfies the following quantitative smoothing estimates: there exist some constants $0 < \mu <1$, $\nu > 0$ with $\mu+ \nu \geq 1$ and
$C \geq 1$, $r_1>0$, $r_2 \geq 0$, $0< t_0 \leq 1$ such that
\begin{multline}\label{GS_estimate2}
\forall 0< t \leq t_0, \forall \alpha, \beta \in \mathbb{N}^d, \forall g \in L^2(\mathbb{R}^d),\\
\| x^{\alpha}\partial_x^{\beta}( e^{-t A^*}g)\|_{L^2(\mathbb{R}^d)} \leq \frac{C ^{1+|\alpha|+|\beta|}}{t^{r_1 (|\alpha|+|\beta|)+r_2}}(\alpha !)^{\nu} (\beta!)^{\mu} \|g\|_{L^2(\mathbb{R}^d)},
\end{multline}
where $A^*$ denotes the $L^2(\mathbb{R}^d)$-adjoint of $A$.
Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be an $L$-Lipschitz positive function with $0 < L <1$, where $\mathbb{R}^d$ is equipped with the Euclidean norm, such that there exist some constants $0 \leq \delta < \frac{1-\mu}{\nu}$, $m>0$, $R>0$ so that
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R{\left\langle x\right\rangle}^{\delta}.
\end{equation*}
Let $\omega \subset \mathbb{R}^d$ be a measurable subset that is thick with respect to the density $\rho$. Let us show that
Theorem~\ref{observability_result} can be deduced from the uncertainty principles given in Theorem~\ref{specific_GS_uncertaintyprinciple}. To that end, we deduce from the estimates \eqref{GS_estimate2} and Lemma~\ref{croch} that there exists a positive constant $C'=C'(C, d)\geq 1$ such that
\begin{multline}\label{bernstein_estimate_GS}
\forall 0< t \leq t_0, \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \forall g \in L^2(\mathbb{R}^d),\\
\| \langle x \rangle^{p} \partial_x^{\beta}( e^{-t A^*}g)\|_{L^2(\mathbb{R}^d)} \leq \frac{C'^{1+p+|\beta|}}{t^{r_1 (p+|\beta|)+r_2}}(p !)^{\nu} (|\beta|!)^{\mu} \|g\|_{L^2(\mathbb{R}^d)}.
\end{multline}
It follows from \eqref{bernstein_estimate_GS} and Theorem~\ref{specific_GS_uncertaintyprinciple} applied with $f=e^{-tA^*}g \in \mathscr{S}(\mathbb{R}^d)$ that there exists a positive constant $K=K(\gamma, d, \rho, \mu, \nu) \geq 1$ such that
$\forall 0< t \leq t_0, \forall g \in L^2(\mathbb{R}^d), \forall 0< \varepsilon \leq 1,$
\begin{equation}
\|e^{-tA^*}g\|^2_{L^2(\mathbb{R}^d)} \leq e^{K ( 1-\log \varepsilon+(C' t^{-r_1})^{\frac{2}{1-s}})} \|e^{-tA^*}g\|^2_{L^2(\omega)} + \frac{C'^2}{t^{2r_2}} \varepsilon \|g\|^2_{L^2(\mathbb{R}^d)},
\end{equation}
with $0<s=\delta \nu +\mu<1$.
Thanks to the contraction property of the semigroup $(e^{-tA^*})_{t \geq 0}$, we deduce that for all $0< \tau \leq t_0$, $\frac{1}{2} \leq q <1$, $0<\varepsilon \leq 1$, $g \in L^2(\mathbb{R}^d)$,
\begin{align*}
\|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} & \leq \frac{1}{(1-q)\tau} \int_{q\tau}^{\tau} \|e^{-t A^*}g \|^2_{L^2(\mathbb{R}^d)} dt \\ \nonumber
& \leq \frac{ e^{K (1-\log\varepsilon+(C'(q\tau)^{-r_1})^{\frac{2}{1-s}})}}{(1-q)\tau} \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + \varepsilon \frac{C'^2}{(q \tau)^{2r_2}} \|g\|^2_{L^2(\mathbb{R}^d)} \\ \nonumber
& \leq \frac{ e^{K (1-\log\varepsilon+(C'2^{r_1}\tau^{-r_1})^{\frac{2}{1-s}})}}{(1-q)\tau} \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + \varepsilon \frac{4^{r_2}C'^2}{\tau^{2 r_2}} \|g\|^2_{L^2(\mathbb{R}^d)}.
\end{align*}
For $0< \tau \leq t_0$ and $\frac{1}{2} \leq q <1$, we choose $$0<\varepsilon = \exp\big(-\tau^{-\frac{2r_1}{1-s}}\big)\leq 1.$$ Since $1 \leq \frac{1}{\tau^{2r_1}}$, it follows that there exists a new constant $K'=K'(\gamma, d, \rho, \delta, \mu, \nu,C', r_1, s)\geq 1$ such that for all $0< \tau \leq t_0$, $\frac{1}{2} \leq q <1$, $g \in L^2(\mathbb{R}^d)$,
\begin{equation*}
\|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \frac{e^{K'\tau^{-\frac{2r_1}{1-s}}}}{(1-q)\tau} \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + \exp\big(-\tau^{-\frac{2r_1}{1-s}}\big) \frac{4^{r_2}C'^2}{\tau^{2 r_2}} \|g\|^2_{L^2(\mathbb{R}^d)}.
\end{equation*}
We follow the strategy developed by Miller in \cite{Miller}.
Let $0<t_1\leq t_0$ be such that for all $0< \tau \leq t_1$,
\begin{equation*}
\frac{\exp \Big(K' \tau^{-\frac{2r_1}{1-s}}\Big)}{\tau} \leq \exp\Big(2K' \tau^{-\frac{2r_1}{1-s}}\Big)
\end{equation*}
and
\begin{equation*}
\exp\big(-\tau^{-\frac{2r_1}{1-s}}\big) \frac{4^{r_2}C'^2}{\tau^{2 r_2}} \leq \exp\Big(-\frac{\tau^{-\frac{2r_1}{1-s}}}{2}\Big).
\end{equation*}
It follows that for all $0 < \tau \leq t_1$, $\frac{1}{2} \leq q <1$, $g \in L^2(\mathbb{R}^d)$,
\begin{multline*}
(1-q)\exp\Big(-\frac{2K'}{\tau^{\frac{2r_1}{1-s}}}\Big) \|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} \\
\leq \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + (1-q)\exp\Big(-\frac{2K'+\frac{1}{2}}{\tau^{\frac{2r_1}{1-s}}}\Big)\|g\|^2_{L^2(\mathbb{R}^d)}.
\end{multline*}
Setting $f(\tau)=(1-q)\exp\Big(-\frac{2K'}{\tau^{\frac{2r_1}{1-s}}}\Big)$ and choosing $q$ so that $$ \max\Big(\Big(\frac{2K'}{2K'+\frac{1}{2}}\Big)^{\frac{1-s}{2r_1}}, \frac{1}{2} \Big) \leq q<1,$$ we obtain that for all $0< \tau \leq t_1$ and $g \in L^2(\mathbb{R}^d)$,
\begin{equation}\label{approx_obs1609}
f(\tau) \|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + f(q \tau) \|g\|^2_{L^2(\mathbb{R}^d)}.
\end{equation}
Thanks to this estimate, the observability estimate is established as follows: let $0< T \leq t_1$ and define the two sequences $(\tau_k)_{k \geq 0}$ and $(T_k)_{k \geq 0}$ as
\begin{equation*}
\forall k \geq 0, \quad \tau_k= q^k (1-q) T \quad \text{ and } \quad \forall k \geq 0, \quad T_{k+1}= T_k-\tau_k, \quad T_0=T.
\end{equation*}
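Note that both sequences are explicit: since
\begin{equation*}
\forall k \geq 0, \quad \sum_{j=0}^{k-1} \tau_j= (1-q)T\sum_{j=0}^{k-1} q^j= (1-q^k)T,
\end{equation*}
we have $T_k=q^kT$ for all $k \geq 0$, so that $T_k \to 0$ and $\sum_{k \in \mathbb{N}} \tau_k=T$.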
By applying \eqref{approx_obs1609} with $e^{-T_{k+1}A^*}g$, it follows that for all $g \in L^2(\mathbb{R}^d)$ and $k \in \mathbb{N}$,
\begin{multline*}
f(\tau_k) \|e^{-T_k A^*}g\|^2_{L^2(\mathbb{R}^d)} -f(\tau_{k+1}) \|e^{-T_{k+1} A^*}g\|^2_{L^2(\mathbb{R}^d)} \\
\leq \int_{\tau_{k+1}}^{\tau_k} \|e^{-(t+T_{k+1})A^*}g \|^2_{L^2(\omega)} dt
= \int_{\tau_{k+1}+T_{k+1}}^{T_k} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt \leq \int_{T_{k+1}}^{T_k} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt.
\end{multline*}
By summing over all the integers $k \in \mathbb{N}$ and by noticing that $$\lim \limits_{k \to +\infty} f(\tau_k)=0, \quad \lim \limits_{k \to +\infty} T_k= T- \sum_{k \in \mathbb{N}} \tau_k =0,$$
and $$ \forall k \geq 0, \quad \|e^{-T_k A^*}g\|_{L^2(\mathbb{R}^d)} \leq \|g\|_{L^2(\mathbb{R}^d)},$$ by the contraction property of the semigroup $(e^{-tA^*})_{t \geq 0}$,
it follows that
\begin{equation*}
\|e^{-T A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \frac{1}{1-q}\exp\Big(\frac{2K'}{((1-q)T)^{\frac{2r_1}{1-\mu -\delta\nu}}} \Big) \int_{0}^{T} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt.
\end{equation*}
By using anew the contraction property of the semigroup $(e^{-tA^*})_{t \geq 0}$, we deduce that for all $g \in L^2(\mathbb{R}^d)$, $T \geq t_1$,
\begin{multline*}
\|e^{-T A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \|e^{-t_1 A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \frac{1}{1-q} \exp\Big(\frac{2K'}{((1-q)t_1)^{\frac{2 r_1}{1-\mu -\delta\nu}}} \Big) \int_{0}^{t_1} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt \\
\leq \frac{1}{1-q} \exp\Big(\frac{2K'}{((1-q)t_1)^{\frac{2 r_1}{1-\mu -\delta\nu}}} \Big) \int_{0}^{T} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt.
\end{multline*}
This ends the proof of Theorem~\ref{observability_result}.
\section{Appendix}\label{appendix}
\subsection{Bernstein type estimates}\label{Hermite_functions}
This section is devoted to the proof of the Bernstein type estimates given in Proposition~\ref{bernstein_estim1}. To that end, we begin by recalling basic facts about Hermite functions. The standard Hermite functions $(\phi_{k})_{k\geq 0}$ are defined for $x \in \mathbb{R}$ by
\begin{equation*}\label{defi}
\phi_{k}(x)=\frac{(-1)^k}{\sqrt{2^k k!\sqrt{\pi}}} e^{\frac{x^2}{2}}\frac{d^k}{dx^k}(e^{-x^2})
=\frac{1}{\sqrt{2^k k!\sqrt{\pi}}} \Bigl(x-\frac{d}{dx}\Bigr)^k(e^{-\frac{x^2}{2}})=\frac{ a_{+}^k \phi_{0}}{\sqrt{k!}},
\end{equation*}
where $a_{+}$ is the creation operator
$$a_{+}=\frac{1}{\sqrt{2}}\Big(x-\frac{d}{dx}\Big).$$
The Hermite functions satisfy the identity
\begin{equation*}\label{eq2ui1}
\forall k \in \mathbb{N}, \quad \Big(-\frac{d^2}{dx^2}+x^2\Big)\phi_{k}=(2k+1)\phi_{k}.
\end{equation*}
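For instance, the first two Hermite functions are given by
\begin{equation*}
\phi_0(x)= \pi^{-\frac{1}{4}}e^{-\frac{x^2}{2}}, \qquad \phi_1(x)= \sqrt{2}\,\pi^{-\frac{1}{4}}\,x\,e^{-\frac{x^2}{2}},
\end{equation*}
and a direct computation shows that $\big(-\frac{d^2}{dx^2}+x^2\big)\phi_0=\phi_0$ and $\big(-\frac{d^2}{dx^2}+x^2\big)\phi_1=3\phi_1$, in accordance with the above identity.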
The family $(\phi_{k})_{k\in \mathbb{N}}$ is a Hilbert basis of $L^2(\mathbb R)$.
We set for $\alpha=(\alpha_{j})_{1\le j\le d}\in\mathbb N^d$, $x=(x_{j})_{1\le j\le d}\in \mathbb R^d,$
\begin{equation*}\label{jk1}
\Phi_{\alpha}(x)=\prod_{j=1}^d\phi_{\alpha_j}(x_j).
\end{equation*}
The family $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$ is a Hilbert basis of $L^2(\mathbb R^d)$
composed of the eigenfunctions of the $d$-dimensional harmonic oscillator
\begin{equation*}\label{6.harmo}
\mathcal{H}=-\Delta_x+|x|^2=\sum_{k\ge 0}(2k+d)\mathbb P_{k},\quad \text{Id}=\sum_{k \ge 0}\mathbb P_{k},
\end{equation*}
where $\mathbb P_{k}$ is the orthogonal projection onto $\text{Span}_{\mathbb{C}}
\{\Phi_{\alpha}\}_{\alpha\in \mathbb N^d,\val \alpha =k}$, with $\val \alpha=\alpha_{1}+\dots+\alpha_{d}$.
Instrumental in the sequel are the following basic estimates proved by Beauchard, Jaming and Pravda-Starov in the proof of \cite[Proposition~3.3]{kkj} (formula (3.38)).
\begin{lemma}\label{lem1}
With $\mathcal{E}_N= \emph{\textrm{Span}}_{\mathbb{C}}\{\Phi_{\alpha}\}_{\alpha \in \mathbb{N}^d, \ |\alpha| \leq N}$, we have for all $N \in \mathbb{N}$, $f \in \mathcal{E}_N$,
\begin{equation*}
\forall (\alpha, \beta) \in \mathbb{N}^d \times \mathbb{N}^d, \quad
\|x^{\alpha}\partial_x^{\beta}f\|_{L^2(\mathbb{R}^d)}\leq 2^{\frac{|\alpha|+|\beta|}{2}}\sqrt{\frac{(N+|\alpha|+|\beta|)!}{N!}}\|f\|_{L^2(\mathbb{R}^d)}.
\end{equation*}
\end{lemma}
We can now prove Proposition~\ref{bernstein_estim1}. Let $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function such that the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $(H1)$ and $(H2)$. Let $f \in GS_{\Theta}$, $0< s \leq 1$ and $(\alpha, \beta) \in \mathbb{N}^d \times \mathbb{N}^d$. We begin by proving that there exist some positive constants $C'_{\Theta}>0$, $\tilde{C}_{\Theta}>0$, independent of $f$, $\alpha$ and $\beta$, such that
\begin{equation*}
\| x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C'_{\Theta} \tilde{C}^{|\alpha|+|\beta|}_{\Theta} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}.
\end{equation*}
It is sufficient to prove that there exist some positive constants $C'_{\Theta}>0$, $\tilde{C}_{\Theta}>0$, independent of $f$, $\alpha$ and $\beta$, such that for all $N \geq |\alpha|+|\beta|+1$,
\begin{equation}\label{bernst_goal}
\|x^{\alpha} \partial_x^{\beta} \pi_Nf \|_{L^2(\mathbb{R}^d)} \leq C'_{\Theta} \tilde{C}^{|\alpha|+|\beta|}_{\Theta} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}},
\end{equation}
with $\pi_N f$ the orthogonal projection of the function $f$ onto the space $\textrm{Span}_{\mathbb{C}}\{\Phi_{\alpha}\}_{\alpha \in \mathbb{N}^d, \ |\alpha| \leq N}$ given by
\begin{equation}\label{orth_proj}
\pi_N f = \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq N}} \left\langle f, \Phi_{\alpha} \right\rangle_{L^2(\mathbb{R}^d)} \Phi_{\alpha}.
\end{equation}
Indeed, by using that $(\pi_N f)_{N \in \mathbb{N}}$ converges to $f$ in $L^2(\mathbb{R}^d)$ and therefore in $\mathcal{D}'(\mathbb{R}^d)$, we obtain that the sequence $(x^{\alpha} \partial^{\beta}_x \pi_{N} f)_{N\in \mathbb{N}}$ converges to $x^{\alpha}\partial^{\beta}_x f$ in $\mathcal{D}'(\mathbb{R}^d)$. If the estimates \eqref{bernst_goal} hold, the sequence $(x^{\alpha} \partial^{\beta}_x \pi_{N} f)_{N\in \mathbb{N}}$ is bounded in $L^2(\mathbb{R}^d)$ and therefore weakly converges (up to a subsequence) to a limit $g \in L^2(\mathbb{R}^d)$. Thanks to the uniqueness of the limit in $\mathcal{D}'(\mathbb{R}^d)$, it follows that $g=x^{\alpha} \partial^{\beta}_x f \in L^2(\mathbb{R}^d)$. Moreover, we have
\begin{equation*}
\|x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq \liminf_{N \to +\infty} \|x^{\alpha} \partial_x^{\beta} \pi_{\phi(N)} f \|_{L^2(\mathbb{R}^d)} \leq C'_{\Theta} \tilde{C}^{|\alpha|+|\beta|}_{\Theta} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s\|f\|_{GS_{\Theta}}.
\end{equation*}
Let us prove that the estimates \eqref{bernst_goal} hold. Since $\pi_{|\alpha|+|\beta|}$ is an orthogonal projection on $L^2(\mathbb{R}^d)$ and therefore satisfies $$\| \pi_{|\alpha|+|\beta|} f \|_{L^2(\mathbb{R}^d)} \leq \| f \|_{L^2(\mathbb{R}^d)},$$ we deduce from Lemma~\ref{lem1} and \eqref{orth_proj} that for all $N \geq |\alpha|+ |\beta|+1$,
\begin{align*}
& \quad \|x^{\alpha}\partial_x^{\beta} \pi_N f \|_{L^2} \leq \|x^{\alpha}\partial_x^{\beta} \pi_{|\alpha|+|\beta|} f \|_{L^2} + \|x^{\alpha}\partial_x^{\beta} (\pi_{N}-\pi_{|\alpha|+|\beta|}) f \|_{L^2} \\ \notag
& \leq 2^{\frac{|\alpha|+|\beta|}{2}}\sqrt{\frac{(2(|\alpha|+|\beta|))!}{(|\alpha|+|\beta|)!}} \|f\|_{L^2(\mathbb{R}^d)} + \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| \|x^{\alpha} \partial_x^{\beta} \Phi_{\gamma} \|_{L^2(\mathbb{R}^d)}.
\end{align*}
By using anew Lemma~\ref{lem1}, it follows that for all $N \geq |\alpha|+ |\beta|+1$,
\begin{align}\label{gse_1709}
& \|x^{\alpha}\partial_x^{\beta} \pi_N f \|_{L^2} \\ \notag
& \leq 2^{|\alpha|+|\beta|}(|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} \|f\|_{L^2(\mathbb{R}^d)} + \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| 2^{\frac{|\alpha|+|\beta|}{2}}\sqrt{\frac{(|\gamma|+|\alpha|+|\beta|)!}{|\gamma|!}} \\ \notag
& \leq 2^{|\alpha|+|\beta|}(|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} \|f\|_{L^2(\mathbb{R}^d)} + \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| 2^{|\alpha|+|\beta|} |\gamma|^{\frac{|\alpha|+|\beta|}{2}}.
\end{align}
On the one hand, it follows from $0< s \leq 1$ that for all $N \geq |\alpha|+ |\beta|+1$,
\begin{align}\label{gse_2}
& \quad \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| |\gamma|^{\frac{|\alpha|+|\beta|}{2}} \\ \notag
&\leq \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{s\Theta(|\gamma|)} |\gamma|^{-\frac{(2-s)(d+1)}{2}} |\gamma|^{\frac{|\alpha|+|\beta|+(2-s)(d+1)}{2}}e^{-s\Theta(|\gamma|)} \\ \notag
&\leq \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{s\Theta(|\gamma|)} |\gamma|^{-\frac{(2-s)(d+1)}{2}} \\ \notag
&\leq \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \big\|\big(\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}\big)_{\gamma \in \mathbb{N}^d} \big\|_{l^{\infty}(\mathbb{N}^d)}^{1-s} \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} \Big(|\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{\Theta(|\gamma|)}\Big)^s |\gamma|^{-\frac{(2-s)(d+1)}{2}}.
\end{align}
H\"older's inequality implies that for all $0 < s \leq 1$,
\begin{equation}\label{holder}
\sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} \Big(|\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{\Theta(|\gamma|)}\Big)^s |\gamma|^{-\frac{(2-s)(d+1)}{2}} \leq D_{d,s}\left\|\Big(e^{\Theta(|\gamma|)}\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}\Big)_{\gamma \in \mathbb{N}^d} \right\|_{l^{2}(\mathbb{N}^d)}^{s},
\end{equation}
with
\begin{equation*}
D_{d,s} =\Big(\sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} |\gamma|^{-(d+1)}\Big)^{1-\frac{s}{2}} < +\infty.
\end{equation*}
Since $\Theta(|\gamma|) \geq 0$ for all $\gamma \in \mathbb{N}^d$, it follows that
\begin{equation}\label{infnorm1709}
\big\|\big(\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}\big)_{\gamma \in \mathbb{N}^d} \big\|_{l^{\infty}(\mathbb{N}^d)} \leq \big\|\big(\left\langle f, \Phi_{\gamma} \right\rangle_{L^2} e^{\Theta(|\gamma|)}\big)_{\gamma \in \mathbb{N}^d} \big\|_{l^{\infty}(\mathbb{N}^d)} \leq \| f \|_{GS_{\Theta}}.
\end{equation}
We deduce from \eqref{gse_2}, \eqref{holder} and \eqref{infnorm1709} that
\begin{equation}\label{gse_3}
\sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| |\gamma|^{\frac{|\alpha|+|\beta|}{2}}
\leq D_{d,s} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}.
\end{equation}
On the other hand, the assumption $\text{(H2)}$ implies that there exist $C_{\Theta}\geq 1$ and $L_{\Theta}\geq 1$ such that if $|\alpha|+|\beta| \geq 1$ then
\begin{align}\label{gse_4}
(|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} & = (2s)^{\frac{|\alpha|+|\beta|}{2}} \Big(\frac{|\alpha|+|\beta|}{2s} \Big)^{s \frac{|\alpha|+|\beta|}{2s}} \\ \notag
& \leq (2s)^{\frac{|\alpha|+|\beta|}{2}} \Big(\left\lfloor \frac{|\alpha|+|\beta|}{2s} \right\rfloor +1\Big)^{s \Big(\left\lfloor \frac{|\alpha|+|\beta|}{2s}\right\rfloor +1\Big)} \\ \notag
& \leq (2s)^{\frac{|\alpha|+|\beta|}{2}} C^s_{\Theta}L^s_{\Theta} L_{\Theta}^{\frac{|\alpha|+|\beta|}{2}} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|}{2s} \right\rfloor +1}\Big)^s.
\end{align}
The last inequality holds true as well when $|\alpha| +|\beta|=0$ with the convention $0^0=1$ since $C_{\Theta} M_1 \geq 1$ and $L_{\Theta} \geq 1$.
The logarithmic convexity of the sequence $(M_p)_{p \in \mathbb{N}}$ gives
\begin{equation*}\label{log_conv0}
\forall p \in \mathbb{N}, \quad M_p \leq \frac{M_{0}}{M_1} M_{p+1}
\end{equation*}
and therefore,
\begin{equation*}\label{log_conv}
\forall 0 \leq p \leq q, \quad M_p \leq \Big(\frac{M_{0}}{M_1}\Big)^{q-p} M_{q}.
\end{equation*}
By using this estimate together with the following elementary inequality
\begin{equation*}
\forall x,y \geq 0, \quad \lfloor x +y \rfloor \leq \lfloor x \rfloor + \lfloor y \rfloor +1,
\end{equation*}
we obtain
\begin{equation}\label{log_conv2}
\forall 0 \leq r \leq r', \quad M_{\lfloor r \rfloor} \leq \max\Big(1,\frac{M_{0}}{M_1}\Big)^{\lfloor r'-r \rfloor+1} M_{\lfloor r' \rfloor}.
\end{equation}
It follows from \eqref{gse_4} and \eqref{log_conv2} that
\begin{align}\label{gse_5}
(|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}}
& \leq C_{\Theta}^s L_{\Theta}^s \Big(\sqrt{2s L_{\Theta}} \Big)^{|\alpha|+|\beta|} \max\Big(1,\frac{M_0}{M_1}\Big)^{s (\left\lfloor \frac{(2-s)(d+1)}{2s} \right\rfloor +1)}\Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \\ \notag
& \leq C_{\Theta}^s L_{\Theta}^s \Big(\sqrt{2s L_{\Theta}} \Big)^{|\alpha|+|\beta|} \max\Big(1,\frac{M_0}{M_1}\Big)^{d+2} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s,
\end{align}
since $0< s \leq 1$.
We deduce from \eqref{gse_1709}, \eqref{gse_3} and \eqref{gse_5} that for all $N \geq |\alpha|+|\beta|+1$,
\begin{equation*}
\| x^{\alpha} \partial_x^{\beta} \pi_N f \|_{L^2(\mathbb{R}^d)} \leq K_{\Theta, s} K'^{|\alpha|+|\beta|}_{\Theta,s} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}},
\end{equation*}
with $K_{\Theta,s} = D_{d,s} +C^s_{\Theta} L^s_{\Theta} \max\Big(1,\frac{M_0}{M_1}\Big)^{d+2}\geq 1$ and $K'_{\Theta,s} = 2 \max(1, \sqrt{2sL_{\Theta}}) \geq 1$.
This implies that $f \in \mathscr{S}(\mathbb{R}^d)$ and for all $\alpha, \beta \in \mathbb{N}^d$,
\begin{equation}\label{gse_6}
\| x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq K_{\Theta, s} K'^{|\alpha|+|\beta|}_{\Theta,s} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}.
\end{equation}
By using the multinomial formula, we obtain that for all $k \in \mathbb{N}$,
\begin{multline*}
\|\left\langle x\right\rangle^k \partial_x^\beta f \|_{L^2(\mathbb{R}^d)}^2 = \int_{\mathbb{R}^d} \Big( 1 + \sum \limits_{i=1}^d {x_i^2} \Big)^k |\partial_x^\beta f(x) |^2 dx \\ = \int_{\mathbb{R}^d} \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} x^{2 \tilde{\gamma}} |\partial_x^\beta f(x) |^2 dx
=\sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} \|x^{\tilde{\gamma}} \partial_x^\beta f \|_{L^2(\mathbb{R}^d)}^2 ,
\end{multline*}
where we denote $\tilde{\gamma}=(\gamma_1,...,\gamma_d) \in \mathbb{N}^d$ if $\gamma=(\gamma_1,...,\gamma_{d+1}) \in \mathbb{N}^{d+1}$. It follows from \eqref{log_conv2} and \eqref{gse_6} that for all $k \in \mathbb{N}$ and $\beta \in \mathbb{N}^d$,
\begin{align}\label{gse7}
\|\left\langle x\right\rangle^k \partial_x^\beta f \|_{L^2(\mathbb{R}^d)}^2 \leq & \ \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} K^2_{\Theta,s} K'^{2(|\tilde{\gamma}|+|\beta|)}_{\Theta, s} \Big(M_{\left\lfloor \frac{|\tilde{\gamma}| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{2s} \|f\|^2_{GS_{\Theta}} \\ \nonumber
\leq & \ \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} K^2_{\Theta, s} K'^{2(k+|\beta|)}_{\Theta, s} \max\Big(1,\frac{M_0}{M_1}\Big)^{k-|\tilde{\gamma}|+2} \Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{2s} \|f\|^2_{GS_{\Theta}} \\ \nonumber
\leq & K^2_{\Theta, s} (d+1)^k \max\Big(1,\frac{M_0}{M_1}\Big)^{k+2}K'^{2(k+|\beta|)}_{\Theta, s} \Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{2s} \|f\|^2_{GS_{\Theta}},
\end{align}
since
\begin{equation*}
\sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !}=(d+1)^k,
\end{equation*}
thanks to the multinomial formula.
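For instance, in the simplest case $d=1$ and $k=2$, the multi-indices $\gamma \in \mathbb{N}^{2}$ with $|\gamma|=2$ are $(2,0)$, $(1,1)$ and $(0,2)$, and the identity reads
\begin{equation*}
\sum_{\substack{\gamma \in \mathbb{N}^{2}, \\ |\gamma|=2}} \frac{2!}{\gamma !}=\frac{2!}{2!\,0!}+\frac{2!}{1!\,1!}+\frac{2!}{0!\,2!}=1+2+1=4=(d+1)^k.
\end{equation*}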
Let $r \in \mathbb{R}_+^* \setminus \mathbb{N}$. There exist $0 < \theta < 1$ and $k \in \mathbb{N}$ such that
\begin{equation*}\label{floor}
r= \theta k + (1- \theta)(k+1).
\end{equation*}
By using Hölder's inequality, it follows from \eqref{gse7} that
\begin{multline}\label{holder1709}
\|\left\langle x\right\rangle^r \partial_x^\beta f \|_{L^2(\mathbb{R}^d)} \leq \|\langle x\rangle^k \partial_x^\beta f\|_{L^2(\mathbb{R}^d)}^{\theta}\|\langle x\rangle^{k+1} \partial_x^\beta f\|_{L^2(\mathbb{R}^d)}^{1-\theta} \\
\leq K_{\Theta, s} (d+1)^{\frac{r}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r}{2}+1}K'^{r+|\beta|}_{\Theta, s}\Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s \theta} \Big(M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s(1-\theta)} \|f\|_{GS_{\Theta}}.
\end{multline}
By using anew \eqref{log_conv2}, we have
\begin{equation*}
M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1} \leq \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r+1-k}{2s}+1} M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}
\end{equation*}
and
\begin{equation*}
M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1} \leq \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r-k}{2s}+1} M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1},
\end{equation*}
since $k \leq r$.
We deduce from \eqref{holder1709} that
\begin{align*}
\|\left\langle x\right\rangle^r \partial_x^\beta f \|_{L^2(\mathbb{R}^d)} & \leq K_{\Theta, s} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{2r+\theta-k}{2}+1+s} (d+1)^{\frac{r}{2}} K'^{r+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}} \\
& \leq K_{\Theta, s} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r}{2}+3} (d+1)^{\frac{r}{2}} K'^{r+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}},
\end{align*}
since $0<s\leq 1$, $k \leq r < k+1$ and $0< \theta <1$.
Let us notice that the above inequality also holds for $r \in \mathbb{N}$. Indeed, it follows from \eqref{log_conv2} and \eqref{gse7} that
\begin{align*}
\|\left\langle x\right\rangle^k \partial_x^\beta f \|_{L^2(\mathbb{R}^d)} \leq & K_{\Theta, s} (d+1)^{\frac{k}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{k}{2}+1}K'^{k+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s} \|f\|_{GS_{\Theta}} \\
\leq &K_{\Theta, s} (d+1)^{\frac{k}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{k}{2}+1+\frac{1}{2}+1}K'^{k+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s} \|f\|_{GS_{\Theta}}\\
\leq &K_{\Theta, s} (d+1)^{\frac{k}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{k}{2}+3}K'^{k+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s} \|f\|_{GS_{\Theta}}.
\end{align*}
This ends the proof of Proposition~\ref{bernstein_estim1}.
\subsection{Gelfand-Shilov regularity}\label{gelfand}
We refer the reader to the works~\cite{gelfand_shilov,rodino1,rodino,toft} and the references therein for extensive expositions of the Gelfand-Shilov regularity theory.
The Gelfand-Shilov spaces $S_{\nu}^{\mu}(\mathbb{R}^d)$, with $\mu,\nu>0$, $\mu+\nu\geq 1$, are defined as the spaces of smooth functions $f \in C^{\infty}(\mathbb{R}^d)$ satisfying the estimates
$$\exists A,C>0, \quad |\partial_x^{\alpha}f(x)| \leq C A^{|\alpha|}(\alpha !)^{\mu}e^{-\frac{1}{A}|x|^{1/\nu}}, \quad x \in \mathbb{R}^d, \ \alpha \in \mathbb{N}^d,$$
or, equivalently
$$\exists A,C>0, \quad \sup_{x \in \mathbb{R}^d}|x^{\beta}\partial_x^{\alpha}f(x)| \leq C A^{|\alpha|+|\beta|}(\alpha !)^{\mu}(\beta !)^{\nu}, \quad \alpha, \beta \in \mathbb{N}^d,$$
with $\alpha!=(\alpha_1!)...(\alpha_d!)$ if $\alpha=(\alpha_1,...,\alpha_d) \in \mathbb{N}^d$.
These Gelfand-Shilov spaces $S_{\nu}^{\mu}(\mathbb{R}^d)$ may also be characterized as the spaces of Schwartz functions $f \in \mathscr{S}(\mathbb{R}^d)$ satisfying the estimates
$$\exists C>0, \varepsilon>0, \quad |f(x)| \leq C e^{-\varepsilon|x|^{1/\nu}}, \quad x \in \mathbb{R}^d; \qquad |\widehat{f}(\xi)| \leq C e^{-\varepsilon|\xi|^{1/\mu}}, \quad \xi \in \mathbb{R}^d.$$
In particular, we notice that Hermite functions belong to the symmetric Gelfand-Shilov space $S_{1/2}^{1/2}(\mathbb{R}^d)$. More generally, the symmetric Gelfand-Shilov spaces $S_{\mu}^{\mu}(\mathbb{R}^d)$, with $\mu \geq 1/2$, can be nicely characterized through the decomposition into the Hermite basis $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$, see e.g. \cite[Proposition~1.2]{toft},
\begin{multline*}
f \in S_{\mu}^{\mu}(\mathbb{R}^d) \Leftrightarrow f \in L^2(\mathbb{R}^d), \ \exists t_0>0, \ \big\|\big(\langle f,\Phi_{\alpha}\rangle_{L^2}\exp({t_0|\alpha|^{\frac{1}{2\mu}})}\big)_{\alpha \in \mathbb{N}^d}\big\|_{l^2(\mathbb{N}^d)}<+\infty\\
\Leftrightarrow f \in L^2(\mathbb{R}^d), \ \exists t_0>0, \ \|e^{t_0\mathcal{H}^{\frac{1}{2\mu}}}f\|_{L^2(\mathbb{R}^d)}<+\infty,
\end{multline*}
where $\mathcal{H}=-\Delta_x+|x|^2$ stands for the harmonic oscillator.
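For instance, the Gaussian $f(x)=e^{-\frac{|x|^2}{2}}=\pi^{\frac{d}{4}}\Phi_0(x)$ satisfies $\langle f,\Phi_{\alpha}\rangle_{L^2}=0$ for every $\alpha \neq 0$, so that the above characterization holds for any $t_0>0$ and any $\mu \geq \frac12$, since
\begin{equation*}
\big\|\big(\langle f,\Phi_{\alpha}\rangle_{L^2}e^{t_0|\alpha|^{\frac{1}{2\mu}}}\big)_{\alpha \in \mathbb{N}^d}\big\|_{l^2(\mathbb{N}^d)}=\pi^{\frac d4}<+\infty.
\end{equation*}
In particular $f$ belongs to $S_{1/2}^{1/2}(\mathbb{R}^d)$, in agreement with the fact that Hermite functions do.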
We end this section by proving two technical lemmas:
\begin{lemma}\label{croch}
Let $\mu, \nu >0$ such that $\mu+\nu \geq 1$, $C>0$ and $A \geq 1$. If $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfies
\begin{equation}\label{gs_estim}
\forall \alpha \in \mathbb{N}^d, \forall \beta \in \mathbb{N}^d, \quad \| x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C A^{|\alpha|+|\beta|} (\alpha!)^{\nu} (\beta!)^{\mu},
\end{equation}
then, it satisfies
\begin{equation*}
\forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^{p} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C (d+1)^{\frac{p}{2}}A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfy the estimates \eqref{gs_estim}. By using the multinomial formula, we obtain that for all $p \in \mathbb{N}$, $\beta \in \mathbb{N}^d$,
\begin{multline}\label{croch_estim}
\|\langle x \rangle^p \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)} = \int_{\mathbb{R}^d} \Big(1+\sum_{i=1}^d x_i^2 \Big)^p |\partial_x^{\beta}f(x)|^2 dx \\
= \int_{\mathbb{R}^d} \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} x^{2\tilde{\gamma}} |\partial_x^{\beta}f(x)|^2 dx = \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} \|x^{\tilde{\gamma}} \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)},
\end{multline}
where we denote $\tilde{\gamma}=(\gamma_1,...,\gamma_d) \in \mathbb{N}^d$ if $\gamma=(\gamma_1,...,\gamma_{d+1}) \in \mathbb{N}^{d+1}$. Since for all $\alpha \in \mathbb{N}^d$, $\alpha! \leq (|\alpha|)!$, it follows from \eqref{gs_estim} and \eqref{croch_estim} that
\begin{align*}
\|\langle x \rangle^p \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)} & \leq C^2\sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} A^{2(|\tilde{\gamma}|+|\beta|)} (|\tilde{\gamma}|!)^{2\nu} (|\beta|!)^{2\mu} \\
& \leq C^2 (d+1)^p A^{2(p+|\beta|)} (p!)^{2\nu} (|\beta|!)^{2\mu},
\end{align*}
since $$ \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} = (d+1)^p.$$
\end{proof}
\begin{lemma}\label{interpolation}
Let $\mu, \nu >0$ such that $\mu+\nu \geq 1$, $0 \leq \delta \leq 1$, $C>0$ and $A \geq 1$. If $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfies
\begin{equation}\label{int}
\forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu},
\end{equation}
then, it satisfies
\begin{equation*}
\forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^{\delta p} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C(8^{\nu} e^{\nu}A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfy the estimates \eqref{int}.
It follows from H\"older inequality that for all $r \in (0,+\infty) \setminus \mathbb{N}$ and $\beta \in \mathbb{N}^d$,
\begin{multline}\label{holder0}
\|\langle x \rangle^r \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)} = \int_{\mathbb{R}^d} \big(\langle x \rangle^{2 \lfloor r \rfloor} |\partial^{\beta}_x f(x)|^2\big)^{\lfloor r \rfloor +1-r} \big(\langle x \rangle^{2(\lfloor r \rfloor+1)} |\partial^{\beta}_x f(x)|^2\big)^{r-\lfloor r \rfloor} dx \\
\leq \|\langle x \rangle^{\lfloor r \rfloor} \partial_x^{\beta} f \|^{2(\lfloor r \rfloor +1- r)}_{L^2(\mathbb{R}^d)} \|\langle x \rangle^{\lfloor r \rfloor+1} \partial_x^{\beta} f \|^{2(r-\lfloor r \rfloor)}_{L^2(\mathbb{R}^d)},
\end{multline}
where $\lfloor \cdot \rfloor$ denotes the floor function.
Since the above inequality clearly holds for $r \in \mathbb{N}$, we deduce from \eqref{int} and \eqref{holder0} that for all $r \geq 0$ and $\beta \in \mathbb{N}^d$,
\begin{align}\label{GS_1}
\|\langle x \rangle^r \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} & \leq C A^{r+|\beta|} (\lfloor r \rfloor!)^{(\lfloor r \rfloor +1-r) \nu} \big((\lfloor r \rfloor+1)!\big)^{(r-\lfloor r \rfloor) \nu} (|\beta|!)^{\mu} \\ \nonumber
& \leq C A^{r+|\beta|} \big((\lfloor r \rfloor+1)!\big)^{\nu} (|\beta|!)^{\mu} \\ \nonumber
& \leq C A^{r+|\beta|} (\lfloor r \rfloor+1)^{(\lfloor r \rfloor+1)\nu} (|\beta|!)^{\mu} \\ \nonumber
& \leq C A^{r+|\beta|} (r+1)^{(r+1)\nu} (|\beta|!)^{\mu}.
\end{align}
It follows from \eqref{GS_1} that for all $p \in \mathbb{N}^*$, $\beta \in \mathbb{N}^d$,
\begin{align}\label{puiss_frac}
\|\langle x \rangle^{\delta p} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} & \leq C A^{p+|\beta|} (p+1)^{(\delta p+1)\nu} (|\beta|!)^{\mu} \leq C A^{p+|\beta|} (2p)^{(\delta p+1)\nu} (|\beta|!)^{\mu} \\ \notag
& \leq C (2^{\nu}A)^{p+|\beta|} p^{\nu} (2p)^{\delta \nu p} (|\beta|!)^{\mu} \leq C (8^{\nu}e^{\nu}A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu},
\end{align}
since for all positive integer $p \geq 1$,
\begin{equation*}
p+1 \leq 2p \leq 2^p \quad \text{and} \quad p^p \leq e^p p!.
\end{equation*}
Notice that from \eqref{int}, since $8^{\nu} e^{\nu} \geq 1$, estimates \eqref{puiss_frac} also hold for $p=0$. This ends the proof of Lemma~\ref{interpolation}.
\end{proof}
\subsection{Quasi-analytic sequences}\label{qa_section}
This section is devoted to recalling some properties of quasi-analytic sequences and to stating a multidimensional version of the Nazarov-Sodin-Volberg theorem (Corollary~\ref{NSV}). This theorem plays a key role in the proof of Theorem~\ref{general_uncertaintyprinciple}. We begin with a lemma which provides some quasi-analytic sequences and quantitative estimates on the Bang degree $n_{t, \mathcal{M}, r}$ defined in \eqref{Bang}:
\begin{lemma}\label{ex_qa_sequence}
Let $0<s \leq 1$, $A \geq 1$ and $\mathcal{M}_s= (A^p(p!)^s)_{p \in \mathbb{N}}$. If $0<s<1$, then for all $0<t \leq 1$, $r>0$,
\begin{equation}
n_{t, \mathcal{M}_s, r} \leq 2^{\frac{1}{1-s}}\big(1-\log t+(Ar)^{\frac{1}{1-s}}\big).
\end{equation}
If $s=1$, then for all $0< t \leq 1$, $r>0$,
\begin{equation}\label{cass1}
n_{t, \mathcal{M}_1, r} \leq (1-\log t) e^{Ar} .
\end{equation}
Moreover, $$\forall 0 < s \leq 1, \forall p \in \mathbb{N}^*, \quad 0 \leq \gamma_{\mathcal{M}_s}(p) \leq s.$$
\end{lemma}
\medskip
\begin{proof}
Let $0<s\leq1$ and $0< t \leq 1$. The sequence $\mathcal{M}_s$ is logarithmically convex. By using that the Riemann series $$A^{-1}\sum \frac{1}{p^s} = \sum \frac{A^{p-1}((p-1)!)^s}{A^p(p!)^s}$$ is divergent, we notice that for all $r>0$, $n_{t, \mathcal{M}_s,r} < +\infty$. When $0<s<1$, we have that for all integers $p \geq 1$,
\begin{equation*}
\frac{1}{1-s}\big((p+1)^{1-s}-p^{1-s}\big)=\int_{p}^{p+1} \frac{1}{x^s} dx \leq \frac{1}{p^s}.
\end{equation*}
It follows that for all $N \in \mathbb{N}^*$,
\begin{equation*}
\frac{1}{1-s} \big((N+1)^{1-s}-(-\log t+1)^{1-s}\big) \leq \sum_{-\log t <p \leq N} \frac{1}{p^s}.
\end{equation*}
By taking $N= n_{t, \mathcal{M}_s, r}$ and since $0< 1-s < 1$, it follows that
\begin{equation*}
n_{t, \mathcal{M}_s, r} \leq \Big((1-\log t)^{1-s}+Ar\Big)^{\frac{1}{1-s}}.
\end{equation*}
The result then follows by using the basic estimate
\begin{equation*}
\forall x, y \geq0, \quad (x+y)^{\frac{1}{1-s}} \leq 2^{\frac{1}{1-s}} \max\Big(x^{\frac{1}{1-s}}, y^{\frac{1}{1-s}}\Big) \leq 2^{\frac{1}{1-s}} \Big(x^{\frac{1}{1-s}}+y^{\frac{1}{1-s}}\Big).
\end{equation*}
By proceeding in the same manner in the case when $s=1$, we deduce the upper bound \eqref{cass1} thanks to the formula
\begin{equation*}
\forall p \in \mathbb{N}^*, \quad \log(p+1)-\log p = \int_p^{p+1} \frac{dx}{x} \leq \frac{1}{p}.
\end{equation*}
By noticing that
\begin{equation*}
\forall 0<s \leq 1, \forall j \in \mathbb{N}^*, \quad (j+1)^s-j^s= \int_j^{j+1} \frac{s}{x^{1-s}} dx \leq s \frac{1}{j^{1-s}},
\end{equation*}
we finally obtain that for all $0<s \leq 1$, $p \in \mathbb{N}^*$,
\begin{equation*}
\gamma_{\mathcal{M}_s}(p)= \sup_{1 \leq j \leq p} j \Big(\frac{M_{j+1} M_{j-1}}{M_j^2} -1\Big) = \sup_{1\leq j \leq p} j^{1-s} \big((j+1)^s-j^s\big) \leq s< +\infty.
\end{equation*}
\end{proof}
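For instance, in the Gevrey-type case $s=\frac{1}{2}$, that is $\mathcal{M}_{1/2}=(A^p\sqrt{p!})_{p \in \mathbb{N}}$, the bounds of Lemma~\ref{ex_qa_sequence} specialize to
\begin{equation*}
\forall 0<t \leq 1, \forall r>0, \quad n_{t, \mathcal{M}_{1/2}, r} \leq 4\big(1-\log t+(Ar)^{2}\big), \qquad \forall p \in \mathbb{N}^*, \quad 0 \leq \gamma_{\mathcal{M}_{1/2}}(p) \leq \frac{1}{2}.
\end{equation*}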
Let us now prove Proposition~\ref{ex_qa_bertrand}.
This proof uses the following lemmas established in~\cite{AlphonseMartin}:
\begin{lemma}[{\cite[Lemma~4.4]{AlphonseMartin}}]\label{relation} Let $\mathcal M=(M_p)_{p \in \mathbb{N}}$ and $\mathcal M'=(M'_p)_{p\in\mathbb{N}}$ be two sequences of positive real numbers satisfying $$\forall p \in \mathbb{N}, \quad M_p \le M'_p.$$ If $\mathcal M'$ is a quasi-analytic sequence, so is the sequence $\mathcal M$.
\end{lemma}
\medskip
\begin{lemma}[{\cite[Lemma~4.5]{AlphonseMartin}}]\label{linearcomb} Let $\Theta : [0,+\infty)\rightarrow [0,+\infty)$ be a continuous function. If the associated sequence $\mathcal{M}^{\Theta}$ in \eqref{lc_sequence} is quasi-analytic, so is $\mathcal M^{T\Theta+c}$ for all $c \geq 0$ and $T>0$.
\end{lemma}
\medskip
Let $k \geq 1$ be a positive integer, $\frac{1}{2} \leq s \leq 1$ and $\Theta_{k,s} : [0,+\infty) \longrightarrow [0,+\infty)$ be the non-negative function defined in Proposition~\ref{ex_qa_bertrand}.
We first notice that the assumption $\text{(H1)}$ clearly holds for $\mathcal M^{\Theta_{k,s}}$. Let us check that the assumption $\text{(H2)}$ holds as well. To that end, we notice that
\begin{equation*}
\forall t \geq 0, \quad \Theta_{k,s}(t) \leq t+1
\end{equation*}
and we deduce that
\begin{equation*}
\forall p \in \mathbb{N}, \quad M^{\Theta_{k,s}}_p \geq \sup_{t \geq 0} t^p e^{-(t+1)}= e^{-1} \Big( \frac{p}{e}\Big)^p.
\end{equation*}
It remains to check that $\text{(H3)}_s$ holds. Thanks to the morphism property of the logarithm, it is clear that
\begin{equation*}
\Theta_{k,s}(t) \underset{t \to +\infty}{\sim} s\Theta_{k,1}(t^s)
\end{equation*}
and this readily implies that there exists a positive constant $C_{k,s}>0$ such that
\begin{equation*}
\forall t \geq 0, \quad \Theta_{k,s}(t) + C_{k,s} \geq s\Theta_{k,1}\big(t^s\big).
\end{equation*}
It follows that
\begin{align*}
\forall p \in \mathbb{N}, \quad \Big(M^{\Theta_{k,s}}_p\Big)^s=e^{sC_{k,s}}\Big(M^{\Theta_{k,s}+C_{k,s}}_p\Big)^s & \leq e^{sC_{k,s}} \sup_{t \geq 0} t^{sp}e^{-s^2 \Theta_{k,1}(t^s)} \\
&= e^{sC_{k,s}} \sup_{t \geq 0} t^{p}e^{-s^2 \Theta_{k,1}(t)} \\
& = e^{sC_{k,s}} M_p^{s^2\Theta_{k,1}}.
\end{align*}
By using Proposition~\ref{ex_theta1} together with Lemmas \ref{relation} and \ref{linearcomb}, the quasi-analyticity of the sequence $\big((M^{\Theta_{k,s}}_p)^s\big)_{p \in \mathbb{N}}$ follows from the quasi-analyticity of $\mathcal{M}^{\Theta_{k,1}}$.
The following result by Nazarov, Sodin and Volberg \cite{NSV} provides a uniform control of the supremum norm of quasi-analytic functions in terms of their values on a measurable subset of positive measure. Originally stated in \cite[Theorem~B]{NSV}, it has been used by Jaye and Mitkovski \cite{JayeMitkovski} in the following form:
\begin{theorem}[{\cite[Theorem~2.5]{JayeMitkovski}}]\label{JayeNSV} Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$ and $f \in \mathcal{C}_{\mathcal{M}}([0,1]) \setminus \{0\}$. For any interval $I \subset [0,1]$ and measurable subset $\mathcal{J} \subset I$ with $|\mathcal{J}| >0$,
\begin{equation*}
\sup_{I} |f| \leq \Big(\frac{\Gamma_{\mathcal{M}}(2n_{\|f\|_{L^{\infty}([0,1])}, \mathcal{M},e}) |I|}{|\mathcal{J}|} \Big)^{2n_{\|f\|_{L^{\infty}([0,1])}, \mathcal{M},e}} \sup_{\mathcal{J}} |f|.
\end{equation*}
\end{theorem}
The following corollary is instrumental in this work:
\medskip
\begin{corollary}\label{NSV} Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$ and $0< s, t \leq 1$. There exists a positive constant $C=C(\mathcal{M}) \geq 1$ such that for any interval $I \subset [0,1]$ and measurable subset $\mathcal{J} \subset I$ with $|\mathcal{J}| \geq s >0$,
\begin{equation*}
\forall f \in \mathcal{C}_{\mathcal{M}}([0,1]) \text{ with } \|f\|_{L^{\infty}([0,1])} \geq t, \quad \sup_{I} |f| \leq \Big(\frac{\Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},e}) |I|}{s} \Big)^{2n_{t, \mathcal{M},e}} \sup_{\mathcal{J}} |f|.
\end{equation*}
\end{corollary}
\medskip
Corollary~\ref{NSV} is directly deduced from Theorem~\ref{JayeNSV} by noticing that for all $f \in \mathcal{C}_{\mathcal{M}}([0,1])$ satisfying $\| f \|_{L^{\infty}([0,1])} \geq t$, $$n_{\|f\|_{L^{\infty}([0,1])}, \mathcal{M},e} \leq n_{t, \mathcal{M},e}.$$
In order to use this result in control theory, we need a multidimensional version of Corollary~\ref{NSV}:
\medskip
\begin{proposition}\label{NSV_multid}
Let $d \geq 1$ and $U$ be a non-empty bounded open convex subset of $\mathbb{R}^d$ satisfying $|\partial U|=0$. Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$, $0< \gamma \leq 1$ and $0<t\leq1$. For any measurable subset $E \subset U$ satisfying $|E| \geq \gamma |U|>0$, we have
\begin{multline}\label{NSV_estimate}
\forall f \in \mathcal{C}_{\mathcal{M}}(U) \text{ with } \|f\|_{L^{\infty}(U)} \geq t, \\
\sup_{U} |f| \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{t, \mathcal{M},d \operatorname{diam}(U) e}\big)\Big)^{2n_{t, \mathcal{M},d \operatorname{diam}(U) e}} \sup_{E} |f|.
\end{multline}
\end{proposition}
\medskip
\begin{proof}
Let $0< \gamma \leq 1$ and $0<t \leq 1$. Let $E$ be a measurable subset of $U$ satisfying $|E| \geq \gamma |U|>0$ and $f \in \mathcal{C}_{\mathcal{M}}(U)$ with $\|f\|_{L^{\infty}(U)} \geq t$. Since $\overline{U}$ is compact and $f$ can be extended as a continuous map on $\overline{U}$, there exists $x_0 \in \overline{U}$ such that
\begin{equation}\label{max}
\sup_U |f|= |f(x_0)|.
\end{equation}
By using spherical coordinates, we have
\begin{equation*}
|E|= \int_{\mathbb{R}^d} {\mathrm{1~\hspace{-1.4ex}l}}_{E}(x) dx = \int_{\mathbb{R}^d} {\mathrm{1~\hspace{-1.4ex}l}}_{E}(x_0+x) dx
= \int_0^{+\infty} \int_{\mathbb{S}^{d-1}} {\mathrm{1~\hspace{-1.4ex}l}}_{E} (x_0 + t \sigma) d\sigma t^{d-1}dt.
\end{equation*}
Since $\overline{U}$ is convex, we deduce that
\begin{align*}\label{m1}
0<|E| & =\int_{\mathbb{S}^{d-1}} \int_{0}^{J_{\overline{U}}(\sigma)} {\mathrm{1~\hspace{-1.4ex}l}}_{E} (x_0 + t \sigma) t^{d-1}dt d\sigma \\
& = \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d\int_{0}^{1} {\mathrm{1~\hspace{-1.4ex}l}}_{E} \big(x_0 + J_{\overline{U}}(\sigma) t \sigma\big) t^{d-1}dt d\sigma \\
& \leq \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d\int_{0}^{1} {\mathrm{1~\hspace{-1.4ex}l}}_{E} \big(x_0 + J_{\overline{U}}(\sigma) t \sigma\big)dt d\sigma \leq \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d |I_{\sigma}| d\sigma,
\end{align*}
with
\begin{equation}\label{jauge}
J_{\overline{U}}(\sigma) =\sup\{t\geq0: \, x_0+ t\sigma \in \overline{U} \} \quad \text{and} \quad I_{\sigma} = \Big\{t \in [0,1]: \, x_0 + J_{\overline{U}}(\sigma) t \sigma \in E \Big\},
\end{equation}
when $\sigma \in \mathbb{S}^{d-1}$. Notice that $$\forall \sigma \in \mathbb{S}^{d-1}, \quad J_{\overline{U}}(\sigma) <+\infty,$$
since $\overline{U}$ is bounded.
It follows that there exists $\sigma_0 \in \mathbb{S}^{d-1}$ such that
\begin{equation}\label{m3}
|E| \leq |I_{\sigma_0}| \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d d\sigma .
\end{equation}
By using the assumption that $|\partial U|=0$ and $U$ is an open set, we observe that
\begin{equation*}
|U|=|\overline{U}|= \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d \int_0^1 t^{d-1}dt d\sigma = \frac{1}{d} \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d d\sigma.
\end{equation*}
By using that $|E| \geq \gamma |U|$, the estimate \eqref{m3} and the above formula provide the lower bound
\begin{equation} \label{m4}
|I_{\sigma_0}| \geq \frac{\gamma}{d}>0.
\end{equation}
Setting
\begin{equation}\label{fonct_aux}
\forall t \in [0,1], \quad g(t)=f\big(x_0 + J_{\overline{U}}(\sigma_0) t \sigma_0 \big),
\end{equation}
we notice that this function is well-defined as $x_0 + J_{\overline{U}}(\sigma_0) t \sigma_0 \in \overline{U}$ for all $t \in [0,1]$. We deduce from the fact that $f \in \mathcal{C}_{\mathcal{M}}(U)$, the estimate $$ J_{\overline{U}}(\sigma_0) \leq \operatorname{diam}(\overline{U})=\operatorname{diam}(U),$$
where $\operatorname{diam}(U)$ denotes the Euclidean diameter of $U$, and the multinomial formula that for all $p \in \mathbb{N}$,
\begin{multline*}
\|g^{(p)}\|_{L^{\infty}([0,1])} \leq \sum_{\substack{\beta \in \mathbb{N}^d, \\ |\beta|=p}} \frac{p!}{\beta !} \|\partial^{\beta}_x f \|_{L^{\infty}(\overline{U})} \big(J_{\overline{U}}(\sigma_0)\big)^p \\ \leq \bigg( \sum_{\substack{\beta \in \mathbb{N}^d, \\ |\beta|=p}} \frac{p!}{\beta !}\bigg) \operatorname{diam}(U)^p M_p = \big(d\operatorname{diam}(U)\big)^p M_p.
\end{multline*}
We observe that the new sequence $$\mathcal{M}':= \Big(\big(d\operatorname{diam}(U)\big)^p M_p \Big)_{p \in \mathbb{N}},$$ inherits from $\mathcal{M}$ its logarithmic convexity and its quasi-analyticity, with the following identity for the associated Bang degrees
\begin{equation*}
n_{t,\mathcal{M}',e}= n_{t,\mathcal{M},d\operatorname{diam}(U)e}.
\end{equation*}
The function $g$ belongs to $\mathcal{C}_{\mathcal{M}'}([0,1])$. Since $|I_{\sigma_0}| >0$ by \eqref{m4} and $\|g\|_{L^{\infty}([0,1])} \geq |g(0)|=\|f\|_{L^{\infty}(U)} \geq t$, we can apply Corollary~\ref{NSV} to obtain that
\begin{equation}\label{NSV_1}
\sup_{[0,1]} |g| \leq \Big(\frac{\Gamma_{\mathcal{M}'}(2n_{t, \mathcal{M}',e})}{|I_{\sigma_0}|} \Big)^{2n_{t, \mathcal{M}',e}} \sup_{I_{\sigma_0}} |g|.
\end{equation}
By noticing that $$\Gamma_{\mathcal{M}}=\Gamma_{\mathcal{M}'},$$
we deduce from \eqref{max}, \eqref{jauge}, \eqref{m4}, \eqref{fonct_aux} and \eqref{NSV_1} that
\begin{multline*}
\sup_U |f| = |f(x_0)| =|g(0)| \leq \sup \limits_{[0,1]} |g| \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \sup_{I_{\sigma_0}} |g| \\
\leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \sup_E |f|.
\end{multline*}
This ends the proof of Proposition~\ref{NSV_multid}.
\end{proof}
In order to use estimates such as \eqref{NSV_estimate} to derive the null-controllability of evolution equations posed in $L^2(\mathbb{R}^d)$, we need the following $L^2$-version of the Nazarov-Sodin-Volberg Theorem:
\medskip
\begin{proposition}\label{NSV_multid_L2}
Let $d \geq 1$ and $U$ be a non-empty bounded open convex subset of $\mathbb{R}^d$.
Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$, $0< \gamma \leq 1$ and $0<t \leq 1$. If $E \subset U$ is a measurable subset satisfying $|E| \geq \gamma |U|$, then for all $f \in \mathcal{C}_{\mathcal{M}}(U)$ with $\|f\|_{L^{\infty}(U)} \geq t$,
\begin{equation*}
\int_U |f(x)|^2 dx \leq \frac{2}{\gamma}\Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}\big)\Big)^{4n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \int_E |f(x)|^2 dx.
\end{equation*}
\end{proposition}
\medskip
\begin{proof}
Let $0<t\leq 1$, let $f \in \mathcal{C}_{\mathcal{M}}(U)$ be such that $\|f\|_{L^{\infty}(U)} \geq t$ and let $E$ be a measurable subset of $U$ satisfying $|E| \geq \gamma |U| >0$. Setting
\begin{equation*}
\tilde{E}= \Big\{x \in E: \ |f(x)|^2 \leq \frac{2}{|E|} \int_E |f(y)|^2 dy\Big\},
\end{equation*}
we observe that
\begin{equation}\label{m_20}
\int_E |f(x)|^2dx \geq \int_{ E \setminus \tilde{E}} |f(x)|^2 dx \geq \frac{2|E \setminus \tilde{E}|}{|E|} \int_E |f(x)|^2dx.
\end{equation}
Let us prove by contradiction that
\begin{equation*}
\int_E |f(x)|^2 dx >0.
\end{equation*}
If $$\int_E |f(x)|^2 dx =0,$$ then, $$E_{\mathcal{Z}}=\Big\{ x \in E: \quad f(x)=0 \Big\},$$ satisfies $|E_{\mathcal{Z}}|=|E|>0$. We therefore deduce from Proposition~\ref{NSV_multid}, since $\|f\|_{L^{\infty}(U)} \geq t$ and $|E_{\mathcal{Z}}|>0$, that $f=0$ on $U$. This contradicts the assumption $\| f \|_{L^{\infty}(U)} \geq t>0$ and therefore
\begin{equation*}
\int_E |f(x)|^2 dx >0.
\end{equation*}
We deduce from \eqref{m_20} that
\begin{equation*}\label{m_21}
|\tilde{E}| = |E|-|E\setminus \tilde{E}| \geq \frac{|E|}{2} \geq \frac{\gamma}{2} |U| >0.
\end{equation*}
Applying Proposition~\ref{NSV_multid} provides that
\begin{multline*}
\sup_U |f| \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \sup_{\tilde{E}} |f| \\
\leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e})\Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \frac{\sqrt{2}}{\sqrt{|E|}} \Big(\int_E|f(x)|^2dx\Big)^{\frac{1}{2}}.
\end{multline*}
It follows that
\begin{align*}
\int_U |f(x)|^2dx \leq |U|\big(\sup_U |f|\big)^2 & \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{4n_{t, \mathcal{M},d\operatorname{diam}(U) e}}\frac{2|U|}{|E|} \int_E |f(x)|^2dx \\
& \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{4n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \frac{2}{\gamma}\int_E |f(x)|^2dx.
\end{align*}
This concludes the proof of Proposition~\ref{NSV_multid_L2}.
\end{proof}
In \cite{JayeMitkovski}, the authors also establish a multi-dimensional version and an $L^2$-version of the Nazarov-Sodin-Volberg Theorem (Theorem~\ref{JayeNSV}), but the constants obtained there are less explicit than the ones given in Propositions~\ref{NSV_multid} and \ref{NSV_multid_L2}. Quantitative constants will be essential in Section~\ref{null_controllability_results} to set up an adapted Lebeau-Robbiano method in order to derive null-controllability results.
We end this section by illustrating the above result with an example:
\begin{example}\label{NSV_example}
Let $0< s \leq 1$, $A\geq 1$, $R>0$, $d \geq 1$, $0<t \leq 1$, $0< \gamma \leq 1$ and $\mathcal{M}=(A^p (p!)^s)_{p \in \mathbb{N}}$. Let $E \subset B(0,R)$ be a measurable subset of the Euclidean ball centered at $0$ with radius $R$ such that $|E| \geq \gamma |B(0,R)|$. There exists a constant $K=K(s, d) \geq 1$ such that for all $f \in \mathcal{C}_{\mathcal{M}}(B(0,R))$ with $\|f\|_{L^{\infty}(B(0,R))} \geq t$,
\begin{equation*}
\| f \|_{L^{\infty}(B(0,R))} \leq C_{t, A, s, R, \gamma, d} \|f\|_{L^{\infty}(E)} \quad \text{and} \quad \| f \|_{L^2(B(0,R))} \leq C_{t, A, s, R, \gamma, d} \|f\|_{L^2(E)} ,
\end{equation*}
where when $0<s<1$,
$$0<C_{t, A, s, R, \gamma, d} \leq \Big(\frac{K}{\gamma}\Big)^{K(1-\log t+ (AR)^{\frac{1}{1-s}})}$$
and when $s=1$,
$$0<C_{t, A, 1, R, \gamma, d} \leq \Big(\frac{K}{\gamma}\Big)^{K(1-\log t)e^{KAR}}.$$
\end{example}
Let us check that Example~\ref{NSV_example} is a consequence of Propositions~\ref{NSV_multid} and \ref{NSV_multid_L2}, together with Lemma~\ref{ex_qa_sequence}. We deduce from Propositions~\ref{NSV_multid} and \ref{NSV_multid_L2} that for all $f \in \mathcal{C}_{\mathcal{M}}(B(0,R))$ with $\|f\|_{L^{\infty}(B(0,R))} \geq t$,
\begin{equation*}
\| f \|_{L^{\infty}(B(0,R))} \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},2Rd e}) \Big)^{2n_{t, \mathcal{M},2Rd e}} \|f\|_{L^{\infty}(E)}
\end{equation*}
and
\begin{equation*}
\| f \|_{L^2(B(0,R))} \leq \sqrt{\frac{2}{\gamma}} \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},2Rd e}) \Big)^{2n_{t, \mathcal{M},2Rd e}} \|f\|_{L^2(E)}.
\end{equation*}
Furthermore, Lemma~\ref{ex_qa_sequence} provides that $$\forall n \in \mathbb{N}^*, \quad \Gamma_{\mathcal{M}}(n) \leq e^{4+4s}$$ and if $0<s<1$ then,
\begin{equation*}
n_{t, \mathcal{M}, 2Rde} \leq 2^{\frac{1}{1-s}}\big(1-\log t+(2ARde)^{\frac{1}{1-s}}\big),
\end{equation*}
whereas if $s=1$, then
\begin{equation*}
n_{t, \mathcal{M}, 2Rde} \leq (1-\log t) e^{2ARde}.
\end{equation*}
The result of Example~\ref{NSV_example} therefore follows from the above estimates.
\subsection{Slowly varying metrics}{\label{vsm}}
This section is devoted to recalling basic facts about slowly varying metrics. We refer the reader to~\cite{Hormander} (Section 1.4) for the proofs of the following results.
Let $X$ be an open subset in a finite dimensional $\mathbb{R}$-vector space $V$ and $\|\cdot\|_x$ a norm in $V$ depending on $x \in X$. The family of norms $(\|\cdot\|_x)_{x \in X}$ is said to define a slowly varying metric in $X$ if there exists a positive constant $C \geq 1$ such that for all $x \in X$ and all $y \in V$ satisfying $\|y-x\|_x <1$, one has $y \in X$ and
\begin{equation}{\label{equiv}}
\forall v \in V, \quad \frac{1}{C} \|v \|_x \leq \|v\|_y \leq C \|v \|_x.
\end{equation}
\medskip
\begin{lemma}\label{slowmet}\cite[Example~1.4.8]{Hormander}.
Let $X$ be an open subset in a finite dimensional $\mathbb{R}$-vector space $V$ and $d(x)$ a $\frac{1}{2}$-Lipschitz continuous function, positive in $X$ and zero in $V \setminus X$, satisfying
\begin{equation*}
\forall x,y \in X, \quad |d(x) - d(y) | \leq \frac{1}{2}\|x-y \|,
\end{equation*}
where $\|\cdot\|$ is a fixed norm in $V$. Then, the family of norms $(\|\cdot\|_x)_{x \in X}$ given by
\begin{equation*}\label{family_norms}
\|v\|_x= \frac{ \|v\|}{d(x)}, \quad x \in X, v \in V,
\end{equation*}
defines a slowly varying metric in X.
\end{lemma}
\medskip
The proof given in \cite[Example~1.4.8]{Hormander} shows more generally that the result of Lemma~\ref{slowmet} holds true as well when $d$ is a contraction, that is, when there exists $0 \leq k <1$ such that
\begin{equation*}
\forall x,y \in X, \quad |d(x) - d(y) | \leq k \|x-y \|.
\end{equation*}
Let us consider the case when $X=V=\mathbb{R}^d$ and $\|\cdot\|$ is the Euclidean norm. If $0 < \delta \leq 1$ and $0< R <\frac{1}{\delta}$, then the gradient of the function $\rho_\delta(x)=R\left\langle x\right\rangle^{\delta}$ given by
$$\forall x \in \mathbb{R}^d, \quad \nabla \rho_\delta(x)=R \delta \frac{x}{\left\langle x\right\rangle^{2-\delta}},$$
satisfies $\| \nabla \rho_\delta\|_{L^{\infty}(\mathbb{R}^d)} \leq R \delta <1$. The mapping $\rho_{\delta}$ is then a positive contraction mapping and Lemma~\ref{slowmet} shows that the family of norms $\|\cdot\|_x= \frac{\|\cdot\|}{R \left\langle x\right\rangle^{\delta}}$ defines a slowly varying metric on $\mathbb{R}^d$.
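For completeness, the gradient bound $\| \nabla \rho_\delta\|_{L^{\infty}(\mathbb{R}^d)} \leq R \delta$ used above follows from the elementary estimate
\begin{equation*}
\forall x \in \mathbb{R}^d, \quad |\nabla \rho_\delta(x)|= R \delta \frac{|x|}{\left\langle x\right\rangle^{2-\delta}} \leq R \delta \left\langle x\right\rangle^{\delta-1} \leq R \delta,
\end{equation*}
since $|x| \leq \left\langle x\right\rangle$ and $0<\delta \leq 1$.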
\medskip
\begin{theorem}{\label{slowmetric}}
\cite[Theorem~1.4.10]{Hormander}.
Let $X$ be an open subset of an $\mathbb{R}$-vector space $V$ of finite dimension $d \geq 1$ and let $(\|\cdot\|_x)_{x \in X}$ be a family of norms in $V$ defining a slowly varying metric. Then, there exists a sequence $(x_k)_{k \geq 0} \in X^{\mathbb{N}}$ such that the balls
\begin{equation*}
B_k=\left\{x \in V:\ \|x-x_k \|_{x_k} <1 \right\} \subset X,
\end{equation*}
form a covering of $X$,
$$X = \bigcup \limits_{k=0}^{+\infty} B_k,$$
such that the intersection of more than $N=\big(4 C^3+1 \big)^d$ pairwise distinct balls $B_k$ is always empty, where $C \geq 1$ denotes the positive constant appearing in the slowness condition \emph{(\ref{equiv})}.
\end{theorem}
\section{Introduction}
Elliptic and parabolic problems associated to the degenerate operators
\begin{equation*} \label{defL}
\mathcal L =y^{\alpha_1}\Delta_{x} +y^{\alpha_2}\left(D_{yy}+\frac{c}{y}D_y -\frac{b}{y^2}\right) \quad {\rm and}\quad D_t- \mathcal L
\end{equation*}
in the half-space $\mathbb{R}^{N+1}_+=\{(x,y): x \in \mathbb{R}^N, y>0\}$ or in $(0, \infty) \times \mathbb{R}^{N+1}_+$ lead quite naturally to the introduction of weighted Sobolev spaces which are anisotropic if $\alpha_1 \neq \alpha_2$. The aim of this paper is to provide the functional analytic properties of these Sobolev spaces needed in \cite{MNS-CompleteDegenerate} and in \cite{MNS-PerturbedBessel} in the $1$-d case, where we prove existence, uniqueness and regularity of elliptic and parabolic problems governed by the operators above. We also refer to \cite{met-calv-negro-spina, MNS-Sharp, MNS-Grad, MNS-Grushin, MNS-Max-Reg, Negro-Spina-Asympt} for the analogous results concerning the $N$-d version of $D_{yy}+\frac{c}{y}D_y -\frac{b}{y^2}$.
For $m \in \mathbb{R}$ we consider the measure $y^m dx dy $ in $\mathbb{R}^{N+1}_+$ and write $L^p_m$ for $L^p(\mathbb{R}_+^{N+1}; y^m dx dy)$.
Given $p>1$, $\alpha_1 \in \mathbb{R}$, $\alpha_2<2$, we define the Sobolev space
\begin{align*}
W^{2,p}(\alpha_1,\alpha_2,m)&=\left\{u\in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+):\ u,\ y^{\alpha_1} D_{x_ix_j}u,\ y^\frac{\alpha_1}{2} D_{x_i}u, y^{\alpha_2}D_{yy}u,\ y^{\frac{\alpha_2}{2}}D_{y}u\in L^p_m\right\}
\end{align*}
which is a Banach space equipped with the norm
\begin{align*}
\|u\|_{W^{2,p}(\alpha_1,\alpha_2,m)}=&\|u\|_{L^p_m}+\sum_{i,j=1}^N\|y^{\alpha_1} D_{x_ix_j}u\|_{L^p_m}+\sum_{i=1}^N\|y^{\frac{\alpha_1}2} D_{x_i}u\|_{L^p_m}\\
&+\|y^{\alpha_2}D_{yy}u\|_{L^p_m}+\|y^{\frac{\alpha_2}{2}}D_{y}u\|_{L^p_m}.
\end{align*}
Next we add a Neumann boundary condition for $y=0$ in the form $y^{\alpha_2-1}D_yu\in L^p_m$ and set
\begin{align*}
W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)=\{u \in W^{2,p}(\alpha_1,\alpha_2,m):\ y^{\alpha_2-1}D_yu\ \in L^p_m\}
\end{align*}
with the norm
$$
\|u\|_{W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)}=\|u\|_{W^{2,p}(\alpha_1,\alpha_2,m)}+\|y^{\alpha_2-1}D_yu\|_{ L^p_m}.
$$
We consider also an integral version of the Dirichlet boundary condition, namely a weighted summability requirement for $y^{-2}u$ and introduce
$$
W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)=\{u \in W^{2,p}(\alpha_1, \alpha_2, m): y^{\alpha_2-2}u \in L^p_m\}
$$
with the norm $$\|u\|_{W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)}=\|u\|_{W^{2,p}(\alpha_1, \alpha_2, m)}+\|y^{\alpha_2-2}u\|_{L^p_m}.$$
Note that $\alpha_1, \alpha_2$ are not assumed to be positive. The restriction $\alpha_2<2$ is not really essential since one can deduce from it the case $\alpha_2>2$, using the change of variables described in the next section. However, we keep it both to simplify the exposition and because $\mathcal L$ is mainly considered for $\alpha_2<2$.
In order to simplify some arguments, no requirement is made on the mixed derivatives $D_{x_iy}u$. However, their weighted integrability is automatic under the condition of Proposition \ref{Sec sob derivata mista}.
Sobolev spaces with weights are well-known in the literature, see e.g. \cite{grisvard}, \cite[Chapter 6]{necas}, \cite{Geymonat-Grisvard} and \cite{morel} for the non-anisotropic case. Variants of $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$, usually defined as the closure of compactly supported functions in $W^{2,p}(\alpha_1, \alpha_2, m)$, can be found in the above papers. However, we have not been able to find anything about $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$.
Let us briefly describe the content of the paper. In Section 2 we show that, by a change of variables, the spaces
$W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$ and $W^{2,p}_{\mathcal{N}}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)$, $ \tilde\alpha_1=\frac{\alpha_1}{\beta+1},\quad \tilde\alpha_2=\frac{\alpha_2+2\beta}{\beta+1}, \quad \tilde m=\frac{m-\beta}{\beta+1}$ are isomorphic. This observation simplifies many proofs but requires the full scale of $L^p_m$ spaces, according to the general strategy of \cite{} to study the operator $\mathcal L$. Hardy inequalities and traces for $y=0$ are studied in Section 3. The main properties of the spaces $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$ are proved in Section 4 together with a density result for smooth functions having zero $y$-derivative in a strip around $y=0$, which is crucial in the study of the operator $\mathcal L$. The space $W^{2,p}_{\mathcal{R}}(\alpha_1,\alpha_2,m)$ is studied in Section 5.
\section{A useful change of variables}\label{Section Degenerate}
For $k,\beta \in\mathbb{R}$, $\beta\neq -1$ let
\begin{align}\label{Gen Kelvin def}
T_{k,\beta\,}u(x,y)&:=|\beta+1|^{\frac 1 p}y^ku(x,y^{\beta+1}),\quad (x,y)\in\mathbb{R}^{N+1}_+.
\end{align}
Observe that
$$ T_{k,\beta\,}^{-1}=T_{-\frac{k}{\beta+1},-\frac{\beta}{\beta+1}\,}.$$
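This can be checked directly: with $k'=-\frac{k}{\beta+1}$ and $\beta'=-\frac{\beta}{\beta+1}$, one has $\beta'+1=\frac{1}{\beta+1}$ and $|\beta'+1|^{\frac 1p}|\beta+1|^{\frac 1p}=1$, so that
\begin{equation*}
T_{k',\beta'\,}\big(T_{k,\beta\,}u\big)(x,y)=|\beta'+1|^{\frac 1p}\,y^{k'}\big(T_{k,\beta\,}u\big)\big(x,y^{\beta'+1}\big)= y^{-\frac{k}{\beta+1}}\,y^{\frac{k}{\beta+1}}\,u(x,y)=u(x,y).
\end{equation*}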
\begin{lem}\label{Isometry action der} The following properties hold for $1 \leq p \leq \infty$.
\begin{itemize}
\item[(i)] $T_{k,\beta\,}$ maps isometrically $L^p_{\tilde m}$ onto $L^p_m$ where
$$ \tilde m=\frac{m+kp-\beta}{\beta+1}.$$
\item[(ii)] For every $u\in W^{2,1}_{loc}\left(\mathbb{R}^{N+1}_+\right)$ one has
\begin{itemize}
\item[1.] $y^\alpha T_{k,\beta\,}u=T_{k,\beta\,}(y^{\frac{\alpha}{\beta+1}}u)$, for any $\alpha\in\mathbb{R}$;\medskip
\item [2.] $D_{x_ix_j}(T_{k,\beta\,}u)=T_{k,\beta} \left(D_{x_ix_j} u\right)$, \quad $D_{x_i}(T_{k,\beta\,}u)=T_{k,\beta}\left(D_{x_i} u\right)$;\medskip
\item[3.] $D_y T_{k,\beta\,}u=T_{k,\beta\,}\left(ky^{-\frac 1 {\beta+1}}u+(\beta+1)y^{\frac{\beta}{\beta+1}}D_yu\right)$,
\\[1ex] $D_{yy} (T_{k,\beta\,} u)=T_{k,\beta\,}\Big((\beta+1)^2y^{\frac{2\beta}{\beta+1}}D_{yy}u+(\beta+1)(2k+\beta)y^{\frac{\beta-1}{\beta+1}}D_y u+k(k-1)y^{-\frac{2}{\beta+1}}u\Big)$.\medskip
\item[4.] $D_{xy} T_{k,\beta\,}u=T_{k,\beta\,}\left(ky^{-\frac 1 {\beta+1}}D_xu+(\beta+1)y^{\frac{\beta}{\beta+1}}D_{xy}u\right)$
\end{itemize}
\end{itemize}
\end{lem}{\sc{Proof.}} The proof of (i) follows after observing that the Jacobian of $(x,y)\mapsto (x,y^{\beta+1})$ is $|1+\beta|y^{\beta}$. To prove (ii) one first observes that any $x$-derivative commutes with $T_{k,\beta}$. Then we compute
\begin{align*}
D_y T_{k,\beta\,}u(x,y)=&|\beta+1|^{\frac 1 p}y^{k}\left(k\frac {u(x,y^{\beta+1})} y+(\beta+1)y^\beta D_y u(x,y^{\beta+1})\right)\\[1ex]
=&T_{k,\beta\,}\left(ky^{-\frac 1 {\beta+1}}u+(\beta+1)y^{\frac{\beta}{\beta+1}}D_yu\right)
\end{align*}
and similarly
\begin{align*}
D_{yy} T_{k,\beta\,} u(x,y)=&T_{k,\beta\,}\Big((\beta+1)^2y^{\frac{2\beta}{\beta+1}}D_{yy}u+(\beta+1)(2k+\beta)y^{\frac{\beta-1}{\beta+1}}D_y u+k(k-1)y^{-\frac{2}{\beta+1}}u\Big).
\end{align*}
\qed
Let us specialize the above lemma to
\begin{align*}
T_{0,\beta}&:L^p_{\tilde m}\to L^p_m,\qquad \tilde m=\frac{m-\beta}{\beta+1}
\end{align*}
to transform Sobolev spaces with different exponents.
\begin{prop}
\label{Sobolev eq}
Let $p>1$, $m, \alpha_1,\alpha_2\in \mathbb{R}$ with $\alpha_2< 2$ and let $\beta \in \mathbb{R}$, $\beta \neq -1$.
Then one has
\begin{align*}
W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)=T_{0,\beta}\left(W^{2,p}_{\mathcal{N}}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)\right),\qquad \tilde\alpha_1=\frac{\alpha_1}{\beta+1},\quad \tilde\alpha_2=\frac{\alpha_2+2\beta}{\beta+1},\quad \tilde m=\frac{m-\beta}{\beta+1}.
\end{align*}
In particular, by choosing $\beta=-\frac{\alpha_2}2$ one has
\begin{align*}
W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)=T_{0,-\frac {\alpha_2} 2}\left(W^{2,p}_{\mathcal{N}}(\tilde \alpha,0,\tilde m)\right),\qquad \tilde\alpha=\frac{2\alpha_1}{2-\alpha_2},\quad \tilde m=\frac{m+\frac{\alpha_2} 2}{1-\frac{\alpha_2} 2}.
\end{align*}
\end{prop}
{\sc{Proof.}} Given $\tilde u\in W^{2,p}_{\mathcal{N}}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)$ let us set $ u(x,y)=(T_{0,\beta}\tilde u)(x,y)=|\beta+1|^{1/p}\tilde u(x,y^{\beta+1})$. Everything follows from the equalities of Lemma \ref{Isometry action der},
\begin{itemize}
\item [(i)] $y^{\alpha_1}D_{x_ix_j}u=T_{0,\beta} \left(y^{\tilde \alpha_1}D_{x_ix_j}\tilde u\right)$, \quad $y^{\frac{\alpha_1}{2}}D_{x_i}u=T_{0,\beta}\left(y^{\frac{\tilde \alpha_1}{2}}D_{x_i}\tilde u\right)$;\smallskip
\item [(ii)] $y^{\frac{\alpha_2}2}D_{y}u=(1+\beta)T_{0,\beta}\left(y^{\frac{\tilde\alpha_2}{2}}D_{y}\tilde u\right)$,\quad $y^{\alpha_2-1}D_{y}u=(1+\beta)T_{0,\beta}\left(y^{\tilde \alpha_2-1}D_{y}\tilde u\right)$;\smallskip
\item [(iii)] $y^{\alpha_2}D_{yy}u=(1+\beta)T_{0,\beta}\left[(1+\beta) y^{\tilde \alpha_2}D_{yy}\tilde u+\beta y^{\tilde \alpha_2-1}D_{y}\tilde u\right]$.
\end{itemize}
\qed
\begin{os}
Note that in the above proposition it is essential to deal with $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$. Indeed, in general the isometry $T_{0,\beta}$ does not transform $W^{2,p}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)$ into $W^{2,p}(\alpha_1,\alpha_2,m)$, because of identity (iii) above.
\end{os}
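As a concrete illustration, take for instance $\alpha_1=0$ and $\alpha_2=1$: choosing $\beta=-\frac 12$ in Proposition~\ref{Sobolev eq} gives $\tilde\alpha=0$ and $\tilde m=2m+1$, that is
\begin{equation*}
W^{2,p}_{\mathcal{N}}(0,1,m)=T_{0,-\frac 12}\left(W^{2,p}_{\mathcal{N}}(0,0,2m+1)\right),
\end{equation*}
so that estimates for a space with degenerate weights on the $y$-derivatives reduce to estimates in a space with unweighted derivatives, at the price of changing the power $m$ of the measure.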
\section{Hardy inequalities and traces}
In this section we prove some weighted Hardy inequalities and investigate trace properties of functions $u$ such that $y^\beta D_yu\in L^p_m$.
\medskip
The following result is standard but we give a proof to settle ``almost everywhere'' issues.
\begin{lem} \label{L1}
Let $u\in L^1_{loc}(\mathbb{R}^{N+1}_+)$ be such that $D_yu\in L^1(\mathbb{R}^{N+1}_+)$. Then there exists $v$ such that $v=u$ almost everywhere and $v(\cdot,y)\in L^1_{loc}(\mathbb{R}^N)$ for every $y\geq 0$ and
$$v(x,y_2)-v(x,y_1)=\int_{y_1}^{y_2}D_yu(x,s)\ ds$$ for every $0 \leq y_1< y_2 \leq \infty$ and almost every $x\in \mathbb{R}^N$.
\end{lem}
{\sc Proof.} For a.e. $x \in \mathbb{R}^N$ the function $u(x, \cdot)$ is absolutely continuous and then
$$u(x,y_2)-u(x,y_1)=\int_{y_1}^{y_2}D_yu(x,s)\ ds$$
for a.e. $y_1, y_2$. It is therefore sufficient to define $v(x,y)=\int_c^y D_yu(x,s)\, ds+u(x,c)$, if $c$ is chosen in such a way that $u(\cdot,c) \in L^1_{loc}(\mathbb{R}^N)$.
\qed
Properties of functions $u\in L^p_m$ such that $D_y u \in L^p_m$ have been proved in \cite[Appendix B]{MNS-Caffarelli}. Here we exploit the more general property $y^\beta D_y u \in L^p_m$.
\begin{prop}
\label{Hardy in core}
Let $C:=\left|\frac{m+1}{p}-(1-\beta)\right|^{-1}$. The following properties hold for $ u\in L^1_{loc}(\mathbb{R}^{N+1}_+)$ such that $y^{\beta}D_{y}u\in L^p_m$.
\begin{itemize}
\item[(i)] If $\frac{m+1}p<1-\beta$ then $D_yu\in L^1\left(Q\times [0,1]\right)$ for any cube $Q$ of $\mathbb{R}^N$; in particular $u$ has a trace $u(\cdot,y)\in L^1_{loc}(\mathbb{R}^N)$ for every $0\leq y\leq 1$. Moreover setting $u_{0}(x)=\lim_{y\to0}u(x,y)$ one has
\begin{align*}
\|y^{\beta-1}(u-u_0)\|_{L^p_m}\leq C \|y^{\beta}D_{y}u\|_{L^p_m}.
\end{align*}
If moreover $u\in L^p_m$ then $u(\cdot,y)\in L^p(\mathbb{R}^N)$ for every $0\leq y\leq 1$.
\item[(ii)] If $\frac{m+1}p>1-\beta$ then $D_yu\in L^1\left(Q\times [1,\infty[\right)$ for any cube $Q$ of $\mathbb{R}^N$; in particular $u$ has a finite trace $u_{\infty}(x)=\lim_{y\to\infty}u(x,y)\in L^1_{loc}\left(\mathbb{R}^N\right)$ and
\begin{align}
\|y^{\beta-1}(u-u_{\infty})\|_{L^p_m}\leq C \|y^{\beta}D_{y}u\|_{L^p_m}.
\end{align}
If moreover $u\in L^p_m$ then $u_\infty\in L^p(\mathbb{R}^N)$ and $u_\infty=0$ if $m \geq -1$.
\end{itemize}
\end{prop}
{\sc{Proof.}} To prove (i) let $f(x,y):=y^{\beta}D_{y}u(x,y)$. If $Q$ is a cube of $\mathbb{R}^N$ then since $\frac{m+1}p<1-\beta$ one has
\begin{align*}
\int_{Q\times [0,1]}|D_yu|dxdy&=\int_{Q\times [0,1]}|D_yu|y^{\beta}y^{-\beta-m}y^mdxdy\\[1ex]
&\leq \|y^{\beta}D_yu\|_{L^p_m}\left(\int_{0}^1 y^{-(\beta+m)p'+m}\, dy\right)^{\frac 1{p'}}|Q|^{\frac 1{p'}}
=C(Q,\beta,m,p)\|y^{\beta}D_yu\|_{L^p_m}.
\end{align*}
In particular by Lemma \ref{L1}, $u$ has a finite trace $u(\cdot,y)\in L^1_{loc}\left(\mathbb{R}^N\right)$
for every $0\leq y\leq 1$. Setting
$u_0(x)=u(x,0)=\lim_{y \to 0}u(x,y ) $ we can write
\begin{align*}
y^{\beta-1}\left(u(x,y)-u_{0}(x)\right)=y^{\beta-1}\int_{0}^y f(x,s)s^{-\beta}\,ds:=(H_1f)(y).
\end{align*}
By \cite[Lemma 10.3, (i)]{MNS-Caffarelli}, the operator $H_1$ is bounded on $L^p_m(\mathbb{R}_+)$ when $\frac{m+1}p<1-\beta$, hence
\begin{align*}
\|y^{\beta-1}\left (u(x,\cdot)-u_0 (x) \right)\|_{L^p_m(\mathbb{R}_+)}\leq C \|y^{\beta}D_{y}u(x,\cdot)\|_{L^p_m(\mathbb{R}_+)}.
\end{align*}
Claim (i) then follows by raising to the power $p$ and integrating with respect to $x$. To prove that $u(\cdot,y)\in L^p(\mathbb{R}^N)$ we proceed analogously: since $u\in L^p_m$ then $u(\cdot,y)\in L^p(\mathbb{R}^N)$ for a.e. $y\in [0,1]$. Without any loss of generality we suppose $u(\cdot,1)\in L^p(\mathbb{R}^N)$ and we write for any $y_0\in [0,1]$
\begin{equation*}
u(x,y_0)=u(x,1)-\int_{y_0}^ 1D_y u(x,s)\ ds=u(x,1)-\int_{y_0}^1 s^\beta D_y u(x,s)s^{\frac m p}s^{-\beta-\frac m p}\ ds.
\end{equation*}
Then using H\"older's inequality
\begin{align*}
|u(x,y_0)|&\leq |u(x,1)|+\left(\int_{y_0}^1 \left|s^\beta D_y u(x,s)\right|^p s^{m}\ ds\right)^{\frac 1 p}\left(\int_{y_0}^1 s^{(-\beta-\frac mp)p'}\ ds\right)^{\frac 1 {p'}}\\[1ex]
&\leq |u(x,1)|+C\left(\int_{0}^1 \left|s^\beta D_y u(x,s)\right|^p s^{m}\ ds\right)^{\frac 1 p}.
\end{align*}
Raising to the power $p$ and integrating with respect to $x$ we obtain
\begin{align*}
\|u(\cdot,y_0)\|_{L^{p}(\mathbb{R}^N)}\leq C\left(\|u(\cdot,1)\|_{L^p(\mathbb{R}^N)}+\left\|y^\beta D_yu\right\|_{L^p_m}\right).
\end{align*}
The proof of (ii) is similar writing
\begin{align*}
y^{\beta-1}\left(u(x,y)-u_{\infty}(x)\right)=-y^{\beta-1}\int_{y}^\infty f(x,s)s^{-\beta}\,ds:=-(H_2f)(y)
\end{align*}
and applying \cite[Lemma 10.3, (ii)]{MNS-Caffarelli}. If $u \in L^p_m$ and $m \geq -1$, then $|u(x, \cdot)|^p$ is not summable with respect to $y^m\, dy$ for every $x$ where $u_\infty (x) \neq 0$, hence $u_\infty=0$ a.e.
\qed
\medskip
\medskip
In the next lemma we show that $u$ has at most a logarithmic singularity as $y\to 0$ or $y\to\infty$, when $\frac{m+1}{p}=1-\beta$.
\begin{lem} \label{int-uMaggiore}
If $\frac{m+1}{p}=1-\beta $ and $u, y^{\beta}D_{y}u\in L^p_m$, then
\begin{equation} \label{behaviour}
\left(\int_{\mathbb{R}^N}|u(x,y)|^p\, dx\right)^{\frac 1 p}\leq \|u(\cdot,1)\|_{L^p(\mathbb{R}^N)}+|\log y|^{\frac{1}{p'}}\|y^{\beta} D_yu\|_{L^p_m}.
\end{equation}
\end{lem}
{\sc Proof.}
Let $\frac{m+1}{p}=1-\beta$ and set $f=y^\beta D_yu\in L^p_m$. Then for $y\in (0,1)$ one has
\begin{align*}
u(x,y)&=u(x,1)-\int_y^1 D_y u(x,s)\ ds=u(x,1)-\int_y^1s^{-\beta}f(x,s)\ ds\\[1ex]
&=u(x,1)-\int_y^1s^{-\beta-m} f(x,s)s^m\ ds.
\end{align*}
Therefore, since $(-\beta-m)p'+m=-1$, H\"older inequality yields
\begin{align*}
|u(x,y)|&\leq |u(x,1)|+\left(\int_y^1 s^{(-\beta-m)p'}s^m\ ds\right)^\frac{1}{p'}\left(\int_y^1 |f(x,s)|^ps^m\ ds\right)^\frac{1}{p}
\\[1ex]
&\leq | u(x,1)|+ |\log y|^\frac{1}{p'}\|f(x,\cdot)\|_{L^p\left((0,1),y^mdy\right)}.
\end{align*}
The inequality for $y>1$ is similar.
Since $u\in L^p_m$ then, as in Proposition \ref{Hardy in core}, we can suppose $u(\cdot,1)\in L^p(\mathbb{R}^N)$ and raising to the power $p$ and integrating with respect to $x$ we conclude the proof.
\qed
We also need some elementary interpolative inequalities; the first generalizes \cite[Lemma 4.3]{met-soba-spi-Rellich} (see also \cite{met-negro-soba-spina}).
\begin{lem} \label{inter} For $m, \beta \in \mathbb{R}$, $1<p<\infty$ there exist $C>0, \varepsilon_0>0$ such that for every $u \in W^{2,p}_{loc}((0, \infty))$, $0<\varepsilon <\varepsilon_0$,
$$
\|y^{\beta-1} u'\|_{L^p_m (\mathbb{R}_+)} \leq C \left (\varepsilon \|y^\beta u''\|_{L^p_m(\mathbb{R}_+)} +\frac{1}{\varepsilon} \|y^{\beta-2}u\|_{L^p_m(\mathbb{R}_+)} \right ).
$$
\end{lem}
{\sc Proof. } Changing $\beta$ we may assume that $m=0$. We use the elementary inequality
\begin{equation} \label{i1}
\int_a^b |u'(y)|^p\, dy \leq C\left (\varepsilon^p (b-a)^p \int_a^b |u''(y)|^p\, dy+\frac{1}{\varepsilon^p (b-a)^p}\int_a^b |u(y)|^p\, dy\right )
\end{equation}
for $\varepsilon \leq \varepsilon_0$, where $\varepsilon_0, C$ are the same as for the unit interval (this follows by scaling). We apply this inequality to each interval $I_n=[2^n, 2^{n+1}[$, $n \in \mathbb Z$
and multiply by $2^{n(\beta-1)p}$; since $y \approx 2^n$ in $I_n$, we obtain
$$
\int_{I_n}y^{(\beta-1)p} |u'(y)|^p\, dy \leq \tilde C\left (\varepsilon^p \int_ {I_n}y^{\beta p}|u''(y)|^p\, dy+\frac{1}{\varepsilon^p}\int_{I_n} y^{(\beta-2)p}|u(y)|^p\, dy\right ).
$$
The thesis follows summing over $n$. \qed
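\medskip
For the reader's convenience we make the scaling behind \eqref{i1} explicit. If \eqref{i1} holds on the unit interval with constants $\varepsilon_0, C$, given $a<b$ set $w(t):=u(a+t(b-a))$, so that $w'(t)=(b-a)u'(a+t(b-a))$ and $w''(t)=(b-a)^2u''(a+t(b-a))$. Applying the inequality on $(0,1)$ to $w$ and changing variables we get
$$
(b-a)^{p-1}\int_a^b |u'|^p\, dy \leq C\left(\varepsilon^p (b-a)^{2p-1}\int_a^b |u''|^p\, dy+\frac{1}{\varepsilon^p (b-a)}\int_a^b |u|^p\, dy\right),
$$
and \eqref{i1} on $(a,b)$ follows after dividing by $(b-a)^{p-1}$.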
\begin{lem} \label{inter1} For $m, \beta <2$, $1<p<\infty$ there exist $C>0, \varepsilon_0>0$ such that for every $u \in W^{2,p}_{loc}((1, \infty))$, $0<\varepsilon <\varepsilon_0$,
$$
\|y^{\frac{\beta}{2}} u'\|_{L^p_m((1, \infty))} \leq C \left (\varepsilon \|y^\beta u''\|_{L^p_m((1, \infty))} +\frac{1}{\varepsilon} \|u\|_{L^p_m((1, \infty))} \right ).
$$
\end{lem}
{\sc Proof. }We use \eqref{i1} in $(a_n, a_{n+1})$ where $a_n=n^{1+\frac{\gamma}{2}}$, so that $a_{n+1}-a_n \approx n^{\frac{\gamma}{2}}$. We multiply both sides by $n^{(1+\frac{\gamma}{2})(m+\frac{\beta p}{2})} \approx y^{m+\frac{\beta p}{2}}$ in $(a_n, a_{n+1})$ and sum over $n$. Choosing $\gamma \geq 0$ in such a way that $\beta=\frac{2\gamma}{2+\gamma}$, the thesis follows.
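Explicitly, on $(a_n, a_{n+1})$ one has $y\approx n^{1+\frac{\gamma}{2}}$ and $a_{n+1}-a_n\approx n^{\frac{\gamma}{2}}$; since the choice of $\gamma$ gives $\left(1+\frac{\gamma}{2}\right)\beta=\gamma$, it follows that
$$
(a_{n+1}-a_n)^{p}\, y^{m+\frac{\beta p}{2}}\approx y^{m+\beta p}, \qquad (a_{n+1}-a_n)^{-p}\, y^{m+\frac{\beta p}{2}}\approx y^{m} \qquad \text{in } (a_n, a_{n+1}),
$$
so that, after summing over $n$, the right hand side is controlled by $\varepsilon^p \|y^\beta u''\|^p_{L^p_m((1, \infty))}+\varepsilon^{-p}\|u\|^p_{L^p_m((1, \infty))}$.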
\qed
\section{The space $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$} \label{section sobolev}
Let $p>1$, $m, \alpha_1 \in \mathbb{R}$, $\alpha_2<2$. We recall that
\begin{align*} W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)=\{u \in W^{2,p}(\alpha_1,\alpha_2,m):\ y^{\alpha_2-1}D_yu\ \in L^p_m\}
\end{align*}
with the norm
$$
\|u\|_{W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)}=\|u\|_{W^{2,p}(\alpha_1,\alpha_2,m)}+\|y^{\alpha_2-1}D_yu\|_{ L^p_m}.
$$
We have made the choice not to include the mixed derivatives in the definition of $W^{2,p}_{\mathcal{N}}\left(\alpha_1,\alpha_2,m\right)$ to simplify some arguments.
However the following result holds in a range of parameters which is sufficient for the study of the operator $\mathcal L$.
\begin{prop}\label{Sec sob derivata mista} If $\alpha_2-\alpha_1<2$ and $\alpha_1^{-} <\frac{m+1}p$ then there exists $C>0$ such that
$$\|y^\frac{\alpha_1+\alpha_2}{2} D_{y}\nabla_x u \|_{ L^p_m} \leq C \|u\|_{W^{2,p}_{\cal N}(\alpha_1, \alpha_2, m)}$$ for every $u \in W^{2,p}_{\mathcal{N}}\left(\alpha_1,\alpha_2,m\right)$.
\end{prop}
{\sc Proof.}
This follows from \cite[Theorem 7.1]{MNS-CompleteDegenerate}, choosing $c$ sufficiently large therein, so that $\alpha_1^{-} <\frac{m+1}p<c+1-\alpha_2$.
\qed
\begin{os}\label{Os Sob 1-d}
With obvious changes we may consider also the analogous Sobolev spaces on $\mathbb{R}_+$, $W^{2,p}(\alpha_2,m)$ and $W^{2,p}_{\cal N}(\alpha_2, m)$. For example we have
$$W^{2,p}_{\mathcal N}(\alpha,m)=\left\{u\in W^{2,p}_{loc}(\mathbb{R}_+):\ u,\ y^{\alpha}D_{yy}u,\ y^{\frac{\alpha}{2}}D_{y}u,\ y^{\alpha-1}D_{y}u\in L^p_m\right\}.$$
For the sake of brevity, in what follows we consider only the Sobolev spaces on $\mathbb{R}^{N+1}_+$, but all the results of this section remain valid in $\mathbb{R}_+$, changing the condition $\alpha_1^{-} <\frac{m+1}p$ (which appears in Sections \ref{denso}, \ref{Sec sob min domain}) to $0<\frac{m+1}p$.
\end{os}
We clarify in which sense the condition $y^{\alpha_2-1}D_y u \in L^p_m$ is a Neumann boundary condition.
\begin{prop} \label{neumann} The following assertions hold.
\begin{itemize}
\item[(i)] If $\frac{m+1}{p} >1-\alpha_2$, then $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=W^{2,p}(\alpha_1, \alpha_2, m)$.
\item[(ii)] If $\frac{m+1}{p} <1-\alpha_2$, then $$W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=\{u \in W^{2,p}(\alpha_1, \alpha_2, m): \lim_{y \to 0}D_yu(x,y)=0\ {\rm for\ a.e.\ x \in \mathbb{R}^N }\}.$$
\end{itemize}
In both cases (i) and (ii), the norm of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ is equivalent to that of $W^{2,p}(\alpha_1, \alpha_2, m)$.
\end{prop}
{\sc Proof. } If $\frac{m+1}{p} >1-\alpha_2$ and $u \in W^{2,p}(\alpha_1, \alpha_2, m)$, we apply Proposition \ref{Hardy in core} (ii) to $D_y u$ and obtain that $\lim_{y \to \infty}D_yu(x,y)=g(x)$ exists. At the points where $g(x) \neq 0$, $u(x, \cdot)$ has at least a linear growth with respect to $y$ and hence $\int_0^\infty |u(x,y)|^p y^m\, dy=\infty$ (since $(m+1)/p>1-\alpha_2 >-1$). Then $g=0$ a.e. and Proposition \ref{Hardy in core}(ii) again gives $\|y^{\alpha_2 -1}D_yu\|_{L^p_m} \leq C\|y^{\alpha_2}D_{yy}u\|_{L^p_m}$.
If $\frac{m+1}{p} <1-\alpha_2$ we apply Proposition \ref{Hardy in core} (i) to $D_y u$ to deduce that $\lim_{y \to 0}D_yu(x,y)=h(x)$ exists. If $h=0$, then Hardy inequality yields $y^{\alpha_2-1}D_y u \in L^p_m$. On the other hand, $y^{\alpha_2-1}D_y u \in L^p_m$ implies $h=0$, since $y^{p(\alpha_2-1)}$ is not integrable with respect to the weight $y^m$.
\qed
\subsection{An alternative description of $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$}
We show an alternative description of $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$, adapted to the operator $D_{yy}+cy^{-1}D_y$.
\begin{lem}\label{Lem Trace Dy in W}
Let $c\in\mathbb{R}$ and let us suppose that $\frac{m+1}{p}<c+1-\alpha_2$. If $u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+)$ and $ u,\ y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m$, then the following properties hold.
\begin{itemize}
\item[(i)] The function $v=y^cD_y u$ satisfies $v,D_yv\in L^1_{loc} \left(\mathbb{R}^{N}\times[0,\infty)\right)$ and therefore has a trace $v_0(x):=\lim_{y\to 0}y^c D_yu(x,y)\in L^1_{loc}(\mathbb{R}^N)$ at $y=0$.
\item[(ii)] $v_0=0$ if and only if $y^{\alpha_2-1}D_y u \in L^p_m(\mathbb{R}^N \times [0,1])$. In this case
\begin{align*}
\left\|y^{\alpha_2-1}D_yu\right \|_{L^p_m}\leq C \left\|y^{\alpha_2}\left (D_{yy}u+cy^{-1}D_yu\right)\right\|_{L^p_m}
\end{align*}
with $C=\left(c+1-\alpha_2-\frac{m+1}{p}\right)^{-1}>0$.
\item[(iii)] If the stronger assumption $0<\frac{m+1}p\leq c-1$ holds then $v_0=0$ and $y^{\alpha_2-1}D_y u \in L^p_m(\mathbb{R}^N \times [0,1])$.
\end{itemize}
\end{lem}
{\sc{Proof.}} Let $v:=y^{c}D_yu$ and
$$f:=y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}{y}\right)=y^{\alpha_2-c}D_yv\in L^p_m.$$
Claim (i) is then a consequence of Proposition \ref{Hardy in core} (i) with $\beta=\alpha_2-c$.
To prove (ii) we set $v_0(x)=\left(y^cD_yu\right)(x,0)$. Then one has $g:=y^{\alpha_2-c-1}(v-v_0)\in L^p_m$ by Proposition \ref{Hardy in core} (i) again. Then
$$y^{\alpha_2-1}D_yu=g+y^{\alpha_2-1-c}v_0$$ is $L^p_m$-integrable near $y=0$ if and only if $v_0=0$, since $\frac{m+1}{p} <c+1-\alpha_2$.
Finally, when $v_0=0$, $y^{\alpha_2-1}D_yu=g=y^{\alpha_2-c-1}v$ and we can use Proposition \ref{Hardy in core} (i).
Let us prove (iii). Note that $c-1 <c+1-\alpha_2$, since $\alpha_2<2$. At the points where $v_0(x) \neq 0$, we have for $0<y \leq \delta(x)$, $|D_yu(x, y)|\geq \frac 12|v_0(x)| y^{-c}$ which implies $|u(x,y)|\geq \frac 14|v_0(x)| y^{-c+1}$ for $0<y \leq \delta'(x)$, since $c>1$. This yields $\int_0^\infty |u(x,y)|^p y^m\, dy=\infty$, since $(m+1)/p\leq c-1$, and then $v_0=0$ a.e.
\qed
\medskip
To provide an equivalent description of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ we need the following simple lemma.
\begin{lem} \label{elliptic}
Assume that $u \in L^p(\mathbb{R}^N) \cap W^{2,1}_{loc}(\mathbb{R}^N)$ for some $1<p<\infty$ and that $\Delta u \in L^p(\mathbb{R}^N)$. Then $u \in W^{2,p}(\mathbb{R}^N)$.
\end{lem}
{\sc{Proof.}} Let $v \in W^{2,p}(\mathbb{R}^N)$ be such that $v-\Delta v=u-\Delta u$ and consider $w=u-v \in L^p(\mathbb{R}^N) \cap W^{2,1}_{loc}(\mathbb{R}^N)$. If $\phi \in C_c^\infty (\mathbb{R}^N)$, then
$$
0=\int_{\mathbb{R}^N}(w-\Delta w)\phi=\int_{\mathbb{R}^N}w(\phi-\Delta \phi).
$$
Since $w \in L^p(\mathbb{R}^N)$ the above identity extends by density to all $\phi \in W^{2,p'}(\mathbb{R}^N)$ and then, since $I-\Delta$ is invertible from $W^{2,p'}(\mathbb{R}^N)$ to $L^{p'}(\mathbb{R}^N)$, we have $\int_{\mathbb{R}^N} w g=0$ for every $g \in L^{p'}(\mathbb{R}^N)$, so that $w=0$ and $u=v \in W^{2,p}(\mathbb{R}^N)$.
\qed
We can now show an equivalent description of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$, adapted to the degenerate operator $D_{yy}+cy^{-1}D_y$.
\begin{prop}\label{Trace D_yu in W}
Let $c\in\mathbb{R}$ and $\frac{m+1}{p}<c+1-\alpha_2$. Then
\begin{align*}
W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu\in L^p_m \right. \\[1ex]
&\left.\hspace{10ex} y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\text{\;\;and\;\;}\lim_{y\to 0}y^c D_yu=0\right\}
\end{align*}
and the norms $\|u\|_{W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)}$ and $$\|u\|_{L^p_m}+\|y^{\alpha_1}\Delta_x u\|_{L^p_m}+\|y^{\alpha_2}(D_{yy}u+cy^{-1}D_yu)\|_{L^p_m}$$ are equivalent on $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$.
Finally, when $0<\frac{m+1}p\leq c-1$ then
\begin{align*}
W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu, y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\right\}.
\end{align*}
\end{prop}
{\sc{Proof.}} Let $\mathcal G$ be the space on the right hand side with the canonical norm indicated above. By Lemma \ref{Lem Trace Dy in W} $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m) \subset \mathcal G$ and the embedding is clearly continuous.
Conversely, let $u \in \mathcal G$. The estimate for $y^{\alpha_2-1}D_yu$ follows from Lemma \ref{Lem Trace Dy in W}(ii) and yields, by difference, also that for $y^{\alpha_2}D_{yy}u$. Since for $y\leq 1$ one has $y^{\frac{\alpha_2}2}\leq y^{\alpha_2-1}$ it follows that $y^{\frac{\alpha_2}2}D_yu\in L^p_m(\mathbb{R}^N \times [0,1])$ and $y^{\frac{\alpha_2}2}D_yu\in L^p_m(\mathbb{R}^N \times [1,\infty])$ by Lemma \ref{inter1}.
Finally, we prove the inequality
$$\|y^{\frac{\alpha_1}2}D_xu\|_{L^p_m}+\|y^{\alpha_1}D_{x_ix_j}u\|_{L^p_m}\leq C\left (\|u\|_{L^p_m}+\|y^{\alpha_1}\Delta_x u\|_{L^p_m}\right ). $$
Since $u(\cdot,y) \in L^p(\mathbb{R}^N) \cap W_{loc}^{2,p}(\mathbb{R}^N)$ for a.e. $y>0$, the lemma above and the Calderon-Zygmund inequality yield
\begin{align*}
\int_{\mathbb{R}^N} |D_{x_i x_j}u(x,y)|^p\,dx\leq C\int_{\mathbb{R}^N} |\Delta_xu(x,y)|^p\,dx.
\end{align*}
Multiplying by $y^{p\alpha_1+m}$ and integrating over $\mathbb{R}_+$
we obtain $\sum_{i,j=1}^N\|y^{\alpha_1} D_{x_ix_j}u\|_{L^p_m}\leq C\|y^{\alpha_1} \Delta_x u\|_{L^p_m}$. The estimate
$$\|y^{\frac{\alpha_1}2} \nabla_{x}u\|_{L^p_m}\leq C\left(\|y^{\alpha_1} \Delta_x u\|_{L^p_m}+\|u\|_{L^p_m}\right)$$ can be obtained similarly using the interpolative inequality
$$\|\nabla_x u(\cdot,y)\|_{L^p(\mathbb{R}^N)}\leq \epsilon \|\Delta_x u(\cdot,y)\|_{L^p(\mathbb{R}^N)}+\frac {C(N,p)} \epsilon \| u(\cdot,y)\|_{L^p(\mathbb{R}^N)}$$
with $\epsilon=y^{\frac{\alpha_1}2}$.
The equality for $0<\frac{m+1}p\leq c-1$ follows from Lemma \ref{Lem Trace Dy in W}(iii).
\qed
\medskip
We provide now another equivalent description of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ which involves a Dirichlet, rather than Neumann, boundary condition, in a certain range of parameters.
\begin{prop}\label{trace u in W op}
Let $c\geq 1$ and $\frac{m+1}{p}<c+1-\alpha_2$. The following properties hold.
\begin{itemize}
\item[(i)] If $c>1$ then
\begin{align*}
W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu\in L^p_m,\right. \\[1ex]
&\left.\hspace{11ex}y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\text{\;and\;}\lim_{y\to 0}y^{c-1} u=0\right\}.
\end{align*}
\item[(ii)] If $c=1$ then
\begin{align*}
W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu\in L^p_m,\right. \\[1ex]
&\left.\hspace{11ex}y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\text{\;and\;}\lim_{y\to 0} u(x,y)\in \mathbb{C}\right\}.
\end{align*}
\end{itemize}
\end{prop}
{\sc Proof. } Let us prove (i). By Proposition \ref{Trace D_yu in W} it is sufficient to show that the conditions $\lim_{y \to 0}y^cD_yu=0$ and $\lim_{y \to 0}y^{c-1}u=0$ are equivalent. We proceed as in Lemma \ref{Lem Trace Dy in W} setting $v:=y^{c}D_yu$ and
$$f:=y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}{y}\right)=y^{\alpha_2-c}D_yv\in L^p_m.$$ If $v_0(x)=\left(y^cD_yu\right)(x,0)$, then $g:=y^{\alpha_2-c-1}(v-v_0)\in L^p_m$ by Proposition \ref{Hardy in core} (ii), and
\begin{equation} \label{w1}
D_yu=y^{1-\alpha_2}g+y^{-c}v_0.
\end{equation}
Then, since $c>1$, we can write for $0<y<1$
\begin{align}\label{eq1 trace u in W}
u(x,1)- u(x,y)=\frac{1}{c-1}v_0(x)(y^{1-c}-1)+\int_y^1 s^{1-\alpha_2}g(x,s)\, ds
\end{align}
and
\begin{equation} \label{w2}
\int_y^1 s^{1-\alpha_2}|g(x,s)|\, ds \leq \|g\|_{L^p_m} \left(\int_y^1 s^{(1-\alpha_2 -\frac mp)p'}\,ds \right)^{\frac{1}{p'}} \leq C(1+y^{\gamma})
\end{equation}
with $\gamma=2-\alpha_2-(m+1)/p>1-c$ (when $\gamma=0$ the term $y^\gamma$ is substituted by $|\log y|^{\frac{1}{p'}}$). Since $c>1$, it follows that
\begin{align*}
\lim_{y \to 0} y^{c-1}u(x,y)=\frac{v_0(x)}{1-c}
\end{align*}
and therefore $\displaystyle\lim_{y \to 0} y^{c-1}u(x,y)=0$ if and only if $v_0(x)=0$ or, by Lemma \ref{Lem Trace Dy in W}(ii), if $\displaystyle\lim_{y \to 0}y^c D_yu(x,y)=0$.
To prove (ii) we proceed similarly. From \eqref{w1} with $c=1$ we obtain
\begin{align*}
u(x,1)- u(x,y)=-v_0(x)|\log y|+\int_y^1 s^{1-\alpha_2}g(x,s)\, ds,\qquad 0<y<1.
\end{align*}
The parameter $\gamma$ is positive, since $(m+1)/p<2-\alpha_2$, and the integral on the right hand side above converges as $y\to 0$.
Therefore $\displaystyle\lim_{y \to 0} u(x,y)\in \mathbb{C}$ if and only if $v_0(x)=0$.
\qed
\begin{os}
We point out that the function $v=y^{c-1}u$ above satisfies $D_yv\in L^1 \left(Q\times[0,1]\right)$ for every cube $Q$. In particular
$D_y u\in L^1\left(Q\times [0,1]\right)$, if $\frac{m+1}{p} <2-\alpha_2$, by choosing $c=1$.
Indeed, if $c>1$, using \eqref{eq1 trace u in W} and \eqref{w2} with $v_0=0$ one has $y^{c-2}u\in L^1 \left(Q\times[0,1]\right)$. Moreover, by the equality $$D_{y}v=y^{c-1}D_yu+(c-1)y^{c-2}u=y^{c-\alpha_2}g+(c-1)y^{c-2}u,$$
since $g \in L^p_m$, H\"older's inequality yields $y^{c-\alpha_2}g \in L^1(Q \times [0,1])$, and therefore $D_yv\in L^1\left(Q\times[0,1]\right)$.
When $c=1$ then $v=u$ and we use \eqref{w1} with $v_0=0$ and then \eqref{w2}, since $\gamma>0$, as observed in the above proof.
\end{os}
\subsection{Approximation with smooth functions} \label{denso}
The main result of this section is a density property of smooth functions in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$. We introduce the set
\begin{equation} \label{defC}
\mathcal{C}:=\left \{u \in C_c^\infty \left(\mathbb{R}^N\times[0, \infty)\right), \ D_y u(x,y)=0\ {\rm for} \ y \leq \delta\ {\rm and \ some\ } \delta>0\right \}
\end{equation}
and its one dimensional version
\begin{equation} \label {defD}
\mathcal{D}=\left \{u \in C_c^\infty ([0, \infty)), \ D_y u(y)=0\ {\rm for} \ y \leq \delta\ {\rm and \ some\ } \delta>0\right \}.
\end{equation}
Let $$C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D=\left\{u(x,y)=\sum_i u_i(x)v_i(y), \ u_i \in C_c^\infty (\mathbb{R}^N), \ v_i \in \cal D \right \}$$ (finite sums). Clearly $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D \subset \mathcal C$.
\begin{teo} \label{core gen}
If $\frac{m+1}{p}>\alpha_1^-$
then $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D$ is dense in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$.
\end{teo}
Note that the condition $(m+1)/p>\alpha_1^-$, that is $m+1>0$ and $(m+1)/p+\alpha_1>0$, is necessary for the inclusion $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D \subset W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$.
\medskip
For technical reasons we start from the case $\alpha_2=0$ and write $\alpha$ for $\alpha_1$.
Then
$$W^{2,p}_{\mathcal N}(\alpha,0,m)=\left\{u\in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+):\ u,\, y^\alpha D_{x_ix_j}u,\ y^\frac{\alpha}{2} D_{x_i}u,\ D_{y}u,\ D_{yy}u,\ \frac{D_yu}{y}\in L^p_m\right\}.$$
\medskip
We need some preliminary results which show the density of smooth functions with compact support in
$W^{2,p}_{\mathcal N}(\alpha,0,m)$. In the first no restriction on $\alpha$ is needed.
\begin{lem} \label{supp-comp}
The functions in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$ having support in $\mathbb{R}^N \times [0,b[$ for some $b>0$ are dense in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$.
\end{lem}
{\sc Proof.}
Let $0\leq\phi\leq 1$ be a smooth function depending only on the $y$ variable which is equal to $1$ in $(0,1)$ and to $0$ for $y \ge 2$.
Set $\phi_n(y)=\phi \left(\frac{y}{n}\right)$ and $u_n(x,y)=\phi_n(y)u(x,y)$.
Then $u_n\in W^{2,p}_{\mathcal N}(\alpha,0,m)$ and has compact support in $\mathbb{R}^N \times [0,2n]$. By dominated convergence $u_n \to u$ in $L^p_m$. Since $D_{x_ix_j}u_n=\phi_n D_{x_ix_j}u$, $ D_{x_i}u_n=\phi_nD_{x_i}u$ we have $y^\alpha D_{x_ix_j}u_n\to y^\alpha D_{x_ix_j}u$, $y^\frac{\alpha}{2} D_{x_i}u_n\to y^\frac{\alpha}{2} D_{x_i}u$ , by dominated convergence again.
For the convergence of the $y$-derivatives, we observe that
$|D_y \phi_n|\leq \frac{C}{n}\chi_{[n,2n]}$, $|D_{yy} \phi_n| \leq \frac{C}{n^2}\chi_{[n,2n]}$. Since
$D_y u_n =\phi_n D_y u+ D_y \phi_n u$ and $D_{yy} u_n =\phi_n D_{yy} u+2D_y\phi_n D_y u+ uD_{yy}\phi_n$, we have also $D_y u_n \to D_y u$,
$D_{yy}u_n\to D_{yy}u$ and $\frac{D_yu_n}{y}\to \frac{D_yu}{y}$ in $L^p_m$.
\qed
\begin{lem} \label{supp-comp-x}
Assume that $\frac{m+1}{p}<2$ and $\frac{m+1}{p}+\alpha>0$. Then the functions in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$ with compact support are dense in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$.
\end{lem}
{\sc Proof.} Let $u\in W^{2,p}_{\mathcal N}(\alpha,0,m)$. By Lemma \ref{supp-comp}, we may assume that $u$ has support in $\mathbb{R}^N \times [0,b[$ for some $b>0$.
Let $0\leq\phi\leq 1$ be a smooth function depending only on the $x$ variable which is equal to $1$ if $|x|\leq 1$ and to $0$ for $|x| \ge 2$.
Set $\phi_n(x)=\phi \left(\frac{x}{n}\right)$ and $u_n(x,y)=\phi_n(x)u(x,y)$.
Then $u_n\in W^{2,p}_{\mathcal N}(\alpha,0,m)$ and has compact support. By dominated convergence $u_n \to u$ in $L^p_m$. Moreover, since $D_y u_n =\phi_n D_y u$ and $D_{yy} u_n =\phi_n D_{yy} u$, we have immediately $D_{yy}u_n\to D_{yy}u$, $\frac{D_yu_n}{y}\to \frac{D_yu}{y}$ in $L^p_m$.
Concerning the derivatives with respect to the $x$ variable, we have
$|D_{x_i} \phi_n(x)|\leq \frac{C}{n}\chi_{[n,2n]}(|x|)$, $|D_{x_ix_j } \phi_n(x)|\leq \frac{C}{n^2}\chi_{[n,2n]}(|x|)$ and
\begin{align}\label{eq1 lem supp}
\nonumber D_{x_i}u_n&=\phi_n D_{x_i}u+u D_{x_i}\phi_n, \\
D_{x_jx_i}u_n&=\phi_n D_{x_ix_j}u+D_{x_j}\phi_n D_{x_i}u+D_{x_j}u D_{x_i}\phi_n +u D_{x_ix_j}\phi_n.
\end{align}
Let us show that $y^\alpha u,\ y^{\frac{\alpha}2} u\in L^p_m$. Since $u$ has support in $\mathbb{R}^N \times [0,b[$ this is trivial for $\alpha\geq 0$. When $\alpha<0$ let $f(x,y)=\frac{u(x,y)-u(x,0)}{y^2}$ so that
\begin{align*}
y^\alpha u=y^{\alpha+2}f+y^\alpha u(\cdot,0).
\end{align*}
By Proposition \ref{Hardy in core}, $f\in L^p_m$ and $u(\cdot,0)\in L^p(\mathbb{R}^N)$. Since $u$ and $f$ have support in $\mathbb{R}^N \times [0,b[$, the assumption $-\alpha<\frac{m+1}{p}<2$ then implies that $y^\alpha u\in L^p_m$ and also $y^{\frac{\alpha}2} u\in L^p_m$.
Using the classic interpolative inequality
$$\|\nabla_x u(\cdot,y)\|_{L^p(\mathbb{R}^N)}\leq \epsilon \|\Delta_x u(\cdot,y)\|_{L^p(\mathbb{R}^N)}+\frac {C(N,p)} \epsilon \| u(\cdot,y)\|_{L^p(\mathbb{R}^N)}$$
with $\epsilon=1$ we easily get (after raising to the power $p$, multiplying by $y^{\alpha p+m}$ and integrating in $y$) that $y^\alpha \nabla_x u\in L^p_m$.
Using \eqref{eq1 lem supp} and the fact that $y^\alpha u,\ y^{\frac{\alpha}2} u,\ y^\alpha \nabla_x u\in L^p_m$
we deduce using dominated convergence that
$y^\alpha D_{x_ix_j}u_n\to y^\alpha D_{x_ix_j}u$, $y^\frac{\alpha}{2} D_{x_i}u_n\to y^\frac{\alpha}{2} D_{x_i}u$ in $L^p_m$.
\qed
In the next lemma we add regularity with respect to the $x$-variable.
\begin{lem}
\label{smooth}
Let $\frac{m+1}{p}<2$, $\frac{m+1}{p}+\alpha>0$ and $u\in W^{2,p}_{\mathcal N}(\alpha,0,m)$ with compact support. Then there exists a sequence $(u_n)_{n\in\mathbb{N}}\subset W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ of functions with compact support such that $u_n$ converges to $u$ in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ and, for every $y\geq 0$, $u_n(\cdot,y)$ belongs to $ C^\infty(\mathbb{R}^N)$ and has bounded $x$-derivatives of any order.
\end{lem}
{\sc Proof.} Let $u\in W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ be as above and let us fix a standard sequence of mollifiers in $\mathbb{R}^N$, $\rho_n(x)=n^N\rho(nx)$, where $0\leq \rho\in C_c^\infty(\mathbb{R}^N)$, $\int_{\mathbb{R}^N}\rho(x)\ dx=1$. Let us set $u_n(x,y)=\left(\rho_n\ast u(\cdot,y)\right)(x)$, where $\ast$ denotes convolution with respect to the $x$ variable.
By Lemma \ref{L1} and Proposition \ref{Hardy in core}, $u(\cdot, y)\in L^p(\mathbb{R}^N)$ for every $y\geq 0$ and
therefore, by standard properties, $u_n$ has a compact support and $u_n(\cdot,y)\in C^\infty_b(\mathbb{R}^N)$ for every $y\geq 0$. By Young's inequality
\begin{align*}
\|u_n(\cdot,y)\|_{L^p(\mathbb{R}^N)}\leq \|u(\cdot,y)\|_{L^p(\mathbb{R}^N)},\qquad u_n(\cdot,y)\to u(\cdot,y)\quad\text{in}\quad L^p(\mathbb{R}^N),\qquad \forall y\geq 0.
\end{align*}
Raising to the power $p$, multiplying by $y^m$ and integrating with respect to $y$, we get
\begin{align*}
\|u_n\|_{L^p_m}\leq \|u\|_{L^p_m}
\end{align*}
which, using dominated convergence with respect to $y$, implies $u_n\to u$ in $L^p_m$.
Using the equalities
\begin{align*}
y^\alpha D_{x_ix_j}u_n&=\rho_n\ast (y^\alpha D_{x_ix_j}u),\qquad y^\frac{\alpha}{2} D_{x_i}u_n=\rho_n\ast (y^\frac{\alpha}{2} D_{x_i}u),\\[1ex]
D_{yy}u_n&=\rho_n\ast D_{yy}u,\hspace{14.5ex} y^\gamma D_{y}u_n=\rho_n\ast (y^\gamma D_{y}u),
\end{align*}
for $\gamma=0,1$, a similar argument as before yields $u_n\to u$ in $W^{2,p}_{\mathcal N}(\alpha,0,m)$.\\\qed
We can now prove a weaker version of Theorem \ref{core gen} when $\alpha_2=0$.
\begin{prop} \label{corend}
If $\frac{m+1}{p}>\alpha^-$ then $\mathcal C$, defined in \eqref{defC}, is dense in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$.
\end{prop}
{\sc Proof.} (i) We first consider the case $\frac{m+1}{p}>2$. Let $u\in W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ which, by Lemma \ref{supp-comp}, we may assume to have support in $\mathbb{R}^N\times [0,b]$. Let $\phi$ be a smooth function depending only on $y$, equal to $0$ in $(0,1)$ and to $1$ for $y \ge 2$. Let $\phi_n(y)=\phi (ny)$ and $u_n(x,y)=\phi_n(y)u(x,y)$. Then
\begin{align*}
D_{x_ix_j}u_n &=\phi_n D_{x_ix_j}u,\hspace{12ex}D_{x_i}u_n =\phi_n D_{x_i}u,\\[1ex]
D_y u_n &=\phi_n D_y u+D_y\phi_n u,\qquad D_{yy} u_n =\phi_n D_{yy} u+2D_y\phi_n D_yu+ uD_{yy}\phi_n.
\end{align*}
By dominated convergence $u_n \to u$, $y^\alpha D_{x_ix_j}u_n \to y^\alpha D_{x_ix_j}u$, $y^\frac{\alpha}{2} D_{x_i}u_n \to y^\frac{\alpha}{2} D_{x_i}u$ in $L^p_m$.
Let us consider now the terms containing the $y$ derivatives and observe that
\begin{align}\label{sti cut 2}
|D_{y} \phi_n|\leq Cn\chi_{[\frac 1 n,\frac 2{n}]}\leq \frac{2C}{y}\chi_{[\frac 1 n,\frac 2{n}]},\qquad |D_{yy } \phi_n|\leq C n^2\chi_{[\frac 1 n,\frac 2{n}]}\leq \frac{4C}{y^2}\chi_{[\frac 1 n,\frac 2{n}]}.
\end{align}
Using these estimates and since $y^{-2}u\in L^p_m $ by Proposition \ref{Hardy in core}
$$\frac{D_y u_n}{y} =\phi_n \frac{D_y u}{y}+\frac{u}{y} (D_y\phi_n) \to \frac{D_y u}{y}$$
in $L^p_m$, by dominated convergence.
In a similar way one shows that $D_y u_n \to D_yu$ and $D_{yy}u_n \to D_{yy}u$ in $L^p_m$ and hence functions with compact support in $\mathbb{R}^N\times ]0,\infty[$ are dense in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$. At this point, a standard smoothing by convolutions shows the density of $C_c^\infty (\mathbb{R}^N \times ]0,\infty[)$ in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$.
(ii) Let $\frac{m+1}{p}=2$. We proceed similarly to (i) and fix $u\in W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ with support in $\mathbb{R}^N\times [0,b]$.
Let $\phi$ be a smooth function which is equal to $0$ in $\left(0,\frac{1}{4}\right)$ and to $1$ for $y \ge \frac{1}{2}$.
Let $\phi_n(y)=\phi\left(y^\frac{1}{n}\right)$ and $u_n=\phi_n u$. By dominated convergence it is immediate to see that $u_n \to u$,\; $y^{\alpha}D_{x_ix_j}u_n\to y^{\alpha}D_{x_ix_j}u$,\; $y^\frac{\alpha}{2}D_{x_i}u_n\to y^\frac{\alpha}{2}D_{x_i}u$ in $L^p_m$. To treat the terms concerning the $y$ derivatives we observe that
\begin{align}\label{beh cut}
\nonumber |\phi_n'|&=\left|\frac{1}{n}\phi'\left(y^\frac{1}{n}\right)y^{\frac{1}{n}-1} \right|\leq \frac{C}{ny}\chi_{[(\frac 1 4)^n,(\frac 1{2})^n]}\\[1ex]
| \phi_n''|&=\left|\frac{1}{n^2}\phi''\left(y^\frac{1}{n}\right)y^{\frac{2}{n}-2}+\frac 1 n(\frac 1 n-1)\phi'\left(y^\frac{1}{n}\right)y^{\frac{1}{n}-2}\right|\leq \frac{C}{ny^2}\chi_{[(\frac 1 4)^n,(\frac 1{2})^n]}.
\end{align}
Moreover,
\begin{align*}
D_y u_n =\phi_n D_y u+\phi_n'u,\qquad D_{yy} u_n=\phi_n D_{yy} u+2\phi_n'D_yu+\phi_n''u.
\end{align*}
Then $\frac 1 y D_y u_n\to \frac 1 y D_y u$ in $L^p_m$ since $\phi_n \frac{D_y u}{y}\to \frac 1 y D_y u$ by dominated convergence and $\phi_n'\frac{u}{y}\to 0$. In fact, using \eqref{beh cut} and \eqref{behaviour} of Lemma \ref{int-uMaggiore} we have
\begin{align*}
\left\|\phi_n'\frac{u}{y}\right\|^p_{L^p_m}\leq \frac{C}{n^p}\int_{(\frac 1 4)^n}^{(\frac 1 2)^n} |\log y|^{p-1}y^{m-2p}\,dy=\frac{C}{n^{2p}}\int_{\frac 1 4}^{\frac 1 2} |\log s|^{p-1}s^{-1}\,ds
\end{align*}
which tends to $0$ as $n\to\infty$.
Concerning the second order derivative we have $ D_{yy} u_n\to D_{yy} u$ since $\phi_n D_{yy} u\to D_{yy} u$ by dominated convergence and the other terms tend to $0$. Indeed, proceeding as before, we have $|\phi_n'D_yu|\leq \frac{C}{n}\chi_{[(\frac 1 4)^n,(\frac 1{2})^n]}\frac{|D_y u|}{y}$, which tends to $0$ by dominated convergence. Finally,
\begin{align*}
\|\phi_n''u\|^p_{L^p_m}\leq \frac{C}{n^p}\int_{(\frac 1 4)^n}^{(\frac 1 2)^n} |\log y|^{p-1}y^{m-2p}\,dy=\frac{C}{n^{2p}}\int_{\frac 1 4}^{\frac 1 2} |\log s|^{p-1}s^{-1}\,ds
\end{align*}
which tends to $0$ as $n\to\infty$.
Now the proof is as for (i) and shows that $C_c^\infty (\mathbb{R}^N \times ]0,\infty[)$ is dense in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$.
(iii) Assume finally that $\frac{m+1}{p}<2$. By Lemmas \ref{supp-comp-x} and \ref{smooth} we may assume that $u$ has compact support and that for every $y \geq 0$, $u(\cdot,y) \in C^\infty_b(\mathbb{R}^N)$.
By Proposition \ref{Hardy in core}, $\frac{u-u(\cdot,0)}{y^2}\in L^p_m$. Let $\phi$ be a smooth function equal to $0$ in $(0,1)$ and to $1$ for $y \ge 2$ and $\phi_n(y)=\phi (ny)$. Setting
$$u_n(x,y)=(1-\phi_n(y))u(x,0)+\phi_n(y)u(x,y),$$
then
\begin{align*}
D_{x_i}u_n &=(1-\phi_n)D_{x_i} u(\cdot,0)+ \phi_n D_{x_i}u,\\[1ex]
D_{x_ix_j}u_n &=(1-\phi_n)D_{x_ix_j} u(\cdot,0)+ \phi_nD_{x_ix_j}u,\\[1ex]
D_y u_n &=\phi_n' (u-u(\cdot,0))+\phi_nD_{y}u,\\[1ex]
D_{yy} u_n &=\phi_n'' (u-u(\cdot,0))+2\phi_n'D_yu+\phi_nD_{yy}u.
\end{align*}
It follows that $u_n \to u$,\; $y^\alpha D_{x_ix_j}u_n \to y^\alpha D_{x_ix_j}u$, \; $y^\frac{\alpha}{2} D_{x_i}u_n \to y^\frac{\alpha}{2} D_{x_i }u$ in $L^p_m$. Since the argument is always the same, let us explain it for $u_n$. The term $\phi_n u$ converges to $u$ by dominated convergence and $(1-\phi_n)u(\cdot, 0)$ converges to zero since $u(\cdot,0)$ is bounded with compact support.
Using \eqref{sti cut 2} one has
$$\frac{|\phi_n' (u-u(\cdot,0))|}{y}\leq C \chi_{[\frac{1}{n},\frac{2}{n}]}(y)\frac{|u-u(\cdot,0)|}{y^2}$$
which tends to $0$ in $L^p_m$ by dominated convergence, and then $\frac{D_y u_n}{y}$ converges to $\frac{D_{y}u}{y}$ in $L^p_m$.
Similarly $D_{yy} u_n$ converges to $D_{yy}u$ in $L^p_m$.
Each function $u_n$ has compact support, does not depend on $y$ for small $y$ and is smooth with respect to the $x$ variable for any fixed $y$. Smoothness with respect to $y$ is however not yet guaranteed. This last property can be added by taking appropriate convolutions in $y$ with a compact support mollifier.
\qed
We can now prove the general density result.
\medskip
{\sc{Proof of Theorem \ref{core gen}} }
The density of $\mathcal C$, defined in \eqref{defC}, in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$ follows from Lemma \ref{Sobolev eq} and Proposition \ref{corend}, since the isometry $T_{0,-\frac{\alpha_2}2}$ maps dense subsets of $W^{2,p}_{\mathcal{N}}(\tilde \alpha,0,\tilde m)$ into dense subsets of $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$ and, since $\alpha_2<2$, leaves $\mathcal{C}$ invariant. Note also that the conditions $(m+1)/p>\alpha_1^-$ and $(\tilde m+1)/p>\tilde\alpha^-$ are equivalent, again since $\alpha_2<2$.
In order to prove the density of $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D$, we may therefore assume that $u$ is in $\mathcal C$, that is $u \in C_c^\infty (\mathbb{R}^{N} \times [0, \infty))$ and $ D_y u(x,y)=0$ for $y \leq \delta$ for some $\delta>0$.
Let $\eta$ be a smooth function depending only on the $y$ variable which is equal to $1$ in $[0,\frac{\delta}{2}]$ and to $0$ for $y \ge \delta$. Then, since $D_y u(x,y)=0$ for $y \leq \delta$,
$$u(x,y)=\eta (y)u(x,y)+(1-\eta(y))u(x,y)=\eta (y)w(x)+(1-\eta(y))u(x,y)=u_1(x,y)+u_2(x,y)$$
with $u_1(x,y)=\eta(y)w(x)$, $w(x)$ depending only on the $x$ variable.
Observe now that $u_2(x,y)=(1-\eta(y))u(x,y)=0$ in $[0,\frac{\delta}{2}]$ and outside the support of $u$, therefore it belongs to $C^\infty_c(\mathbb{R}^{N+1}_+)$ and the approximation
with respect to the $W^{2,p}(\mathbb{R}^{N+1}_+)$ norm by functions in $C_c^\infty (\mathbb{R}^{N})\otimes C_c^\infty (]0, \infty[)$ is standard (just use a sequence of polynomials converging uniformly to $u_2$ with all first and second order derivatives on a cube containing the support of $u_2$ and truncate outside the cube by a cut-off of the same type). This proves the result.\qed
\begin{os}
From
the proofs of Proposition \ref{corend} and Theorem \ref{core gen} it follows that if $u\in W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$ has support in $\mathbb{R}^N\times[0,b]$, then there exists a sequence $\left(u_n\right)_{n\in\mathbb{N}}\subset\mathcal C$ such that $ \mbox{supp }u_n\subseteq \mathbb{R}^N\times[0,b]$ and $u_n\to u$ in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$.
\end{os}
\begin{cor}
\label{Core C c infty}
Assume $\frac{m+1}{p}\geq 2-\alpha_2$ and $\frac{m+1}{p}>\alpha_1^-$. Then $ C_c^\infty (\mathbb{R}^{N+1}_+) $ and
$C_c^\infty (\mathbb{R}^{N})\otimes C_c^\infty \left(]0, \infty[\right)$ are dense in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$.
\end{cor}
{\sc Proof. } This follows from the proofs of Proposition \ref{corend} and of Theorem \ref{core gen}.
\qed
\medskip
Specializing Proposition \ref{Hardy in core} to $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ we get the following corollary.
\begin{cor}\label{Hardy Rellich Sob}
Let $\frac{m+1}{p}>\alpha_1^-$. The following properties hold for any $u\in W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$.
\begin{itemize}
\item[(i)] If $\frac{m+1}p>1-\frac{\alpha_2}2$ then
\begin{align*}
\|y^{\frac{\alpha_2}2-1}u\|_{L^p_m}\leq C \|y^{\frac{\alpha_2}2}D_{y}u\|_{L^p_m}.
\end{align*}
\item[(ii)] If $\frac{m+1}p>2-\alpha_2$ then
\begin{align*}
\|y^{\alpha_2-2}u\|_{L^p_m}\leq C \|y^{\alpha_2-1}D_{y}u\|_{L^p_m}.
\end{align*}
\item[(iii)] If $\frac{m+1}p<2-\alpha_2$ then
\begin{align*}
\|y^{\alpha_2-2}(u-u(\cdot,0))\|_{L^p_m}\leq C \|y^{\alpha_2-1}D_{y}u\|_{L^p_m}.
\end{align*}
\item[(iv)] If $\alpha_2-\alpha_1<2$ and $\frac{m+1}p>1-\frac{\alpha_1+\alpha_2}{2}$, $\frac{m+1}p>\alpha_1^-$ then
\begin{align*}
\|y^{\frac{\alpha_1+\alpha_2}{2}-1}\nabla_{x}u\|_{L^p_m}\leq C \|y^\frac{\alpha_1+\alpha_2}{2} D_{y}\nabla_x u\|_{L^p_m}.
\end{align*}
\end{itemize}
\end{cor}
{\sc{Proof.}} By density we may assume that $u \in C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D$. All points follow by applying Proposition \ref{Hardy in core} to $u$ in the cases (i), (ii) and (iii) and to $\nabla_x u$ in the case (iv), recalling Proposition \ref{Sec sob derivata mista}.\qed
\medskip
\section{The space $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$} \label{Sec sob min domain}
We consider an integral version of Dirichlet boundary conditions, namely a weighted summability of $y^{-2}u$ and introduce for $m \in \mathbb{R}$, $\alpha_2<2$
\begin{equation} \label{dominiodirichlet}
W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)=\{u \in W^{2,p}(\alpha_1, \alpha_2, m): y^{\alpha_2-2}u \in L^p_m\}
\end{equation}
with the norm $$\|u\|_{W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)}=\|u\|_{W^{2,p}(\alpha_1, \alpha_2, m)}+\|y^{\alpha_2-2}u\|_{L^p_m}.$$
We remark that $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$ will be considered for every $m \in \mathbb{R}$, whereas $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ only for $m+1>0$. The symbol $\mathcal R$ stands for ``Rellich'', since Rellich inequalities concern the summability of $y^{-2}u$.
\begin{prop} \label{RN} The following properties hold.
\begin{itemize}
\item[(i)] if $u \in W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$ then $y^{\alpha_2-1}D_y u \in L^p_m.$
\item[(ii)] If $\alpha_2-\alpha_1<2$ and $\frac{m+1}{p}>2-\alpha_2$, then $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m) = W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=W^{2,p}(\alpha_1, \alpha_2, m)$, with equivalence of the corresponding norms. In particular, $C_c^\infty (\mathbb{R}^{N+1}_+)$ is dense in $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m) $.
\end{itemize}
\end{prop}
{\sc Proof. } The proof of (i) follows by integrating with respect to $x$ the inequality of Lemma \ref{inter}. The proof of (ii) follows from Proposition \ref{neumann}(i) and Corollary \ref{Hardy Rellich Sob}(ii), after noticing that $\alpha_2-\alpha_1 <2$ and $\frac{m+1}{p}>2-\alpha_2$ yield $\frac{m+1}{p}>\alpha_1^-$. The density of $C_c^\infty (\mathbb{R}^{N+1}_+)$ in $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$ now follows from Corollary \ref{Core C c infty}.
\qed
Finally, we investigate the action of the multiplication operator $T_{k,0}:u\mapsto y^ku$. The following lemma is the companion of Lemma \ref{Sobolev eq} which deals with the transformation $T_{0,\beta}$.
\begin{lem}\label{isometryRN}
\label{y^k W}
Let $\alpha_2-\alpha_1<2$ and $\frac{m+1}{p}>2-\alpha_2$. For every $k\in\mathbb{R}$
\begin{align*}
T_{k,0}: W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m) \to W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)
\end{align*}
is an isomorphism (we shall write $y^k W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)= W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)$).
\end{lem}
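Note that $T_{k,0}$ acts isometrically at the level of the weighted Lebesgue spaces: for every $v\in L^p_m$ one has
$$
\|y^k v\|^p_{L^p_{m-kp}}=\int_{\mathbb{R}^{N+1}_+}|v|^p\, y^{kp}\, y^{m-kp}\, dx\, dy=\|v\|^p_{L^p_m},
$$
which explains the shift of the weight from $m$ to $m-kp$; on the Sobolev spaces the map is only an isomorphism, with equivalent (in general not equal) norms.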
{\sc{Proof.}} Let $u=y^{k}v$ with $v\in W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$. Since all $x$-derivatives commute with $T_{k,0}$ we deal only with the $y$-derivatives. We observe that
\begin{align*}
D_yu=y^k(D_yv+k\frac{v}y),\qquad
D_{yy}u=y^k\left(D_{yy}v+2k \frac{D_y v}{y}+ k(k-1)\frac v{y^2}\right).
\end{align*}
Corollary \ref{Hardy Rellich Sob} yields
\begin{align*}
\|y^{\alpha_2-2}v\|_{L^p_m}+\|y^{\frac{\alpha_2}2-1}v\|_{L^p_m}+\|y^{\alpha_2-1}D_y v\|_{L^p_m}\leq c \|v\|_{W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)}
\end{align*}
and then $u \in W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)$. Conversely, if $u \in W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)$, then $y^{\alpha_2-1}D_y u \in L^p_{m-kp}$ by Proposition \ref{RN}(i) and similar formulas as above show that $y^{\alpha_2-1}D_y v, y^{\alpha_2} D_{yy}v \in L^p_m$. Since $y^{\alpha_2/2-1} \leq 1+y^{\alpha_2-2}$, then $y^{\alpha_2/2-1} u \in L^p_{m-kp}$ and $y^{\alpha_2/2}D_y v \in L^p_m$.
\qed
\section{Introduction}
\begin{figure}[t]
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=0.6\linewidth]{figures/sample_467.png}
\caption{Image with human and object detections.}
\label{fig:teaser-sample}
\end{subfigure}
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/tokens.pdf}
\caption{Unary and pairwise tokens with predicted scores (\emph{riding a motorcycle}).}
\label{fig:teaser-tokens}
\end{subfigure}
\caption{Our Unary--Pairwise Transformer encodes human and object instances individually and in pairs, allowing it to reason about the data in complementary ways. In this example, our network correctly identifies the interactive pairs for the action \textit{riding a motorcycle}, while suppressing the visually-similar non-interactive pairs and those with different associated actions.
}
\label{fig:teaser}
\vspace{-10pt}
\end{figure}
Human--object interaction (HOI) detectors localise interactive human--object pairs in an image and classify the actions. They can be categorised as one- or two-stage, mirroring the grouping of object detectors.
Exemplified by Faster R-CNN~\cite{fasterrcnn}, two-stage object detectors typically include a region proposal network, which explicitly encodes potential regions of interest in the form of bounding boxes. These bounding boxes can then be classified and further refined via regression in a downstream network. In contrast, one-stage detectors, such as RetinaNet~\cite{retinanet}, retain the abstract feature representations of objects throughout the network, and decode them into bounding boxes and classification scores at the end of the pipeline.
In addition to the same categorisation convention, HOI detectors need to localise two bounding boxes per instance instead of one. Early works~\cite{hicodet,gpnn,no-frills,tin} employ a pre-trained object detector to obtain a set of human and object boxes, which are paired up exhaustively and processed by a downstream network for interaction classification. This methodology coincides with that of two-stage detectors and quickly became the mainstream approach due to the accessibility of high-quality pre-trained object detectors.
The first instance of one-stage HOI detectors was introduced by Liao \etal.~\cite{ppdm}. They characterised human--object pairs as interaction points, represented as the midpoint of the human and object box centres. Recently, due to the great success in using learnable queries in transformer decoders for localisation~\cite{detr}, the development of one-stage HOI detectors has been greatly advanced. However, HOI detectors that adapt the DETR model rely heavily on the transformer, which is notoriously difficult to train~\cite{train-xfmer}, to produce discriminative features. In particular, when initialised with DETR's pre-trained weights, the decoder attends to regions of high objectness by default. The heavy-weight decoder stack then has to be adapted to attend to regions of high interactiveness. Consequently, training such one-stage detectors often consumes large amounts of memory and time as shown in \cref{fig:convg-time}.
In contrast, two-stage HOI detectors do not repurpose the backbone network, but maintain it as an object detector. Since the first half of the pipeline already functions as intended at the beginning of training, the second half can be trained quickly for the specific task of HOI detection. Furthermore, since the object detector can be decoupled from the downstream interaction head during training, its weights can be frozen, and a lighter-weight network can be used for interaction detection, saving a substantial amount of memory and computational resources.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/conv_time.png}
\caption{Mean average precision as a function of the number of epochs (left) and training time (right) to convergence. The backbone networks for all methods have been initialised with the same weights and trained on 8 GeForce GTX TITAN X GPUs.}
\label{fig:convg-time}
\end{figure}
\begin{table}[t]\small
\vspace{-4pt}
\caption{The performance discrepancy between existing state-of-the-art one-stage and two-stage HOI detectors is largely attributable to the choice of backbone network. We report the mean average precision ($\times 100$) on the HICO-DET~\cite{hicodet} test set.}
\label{tab:one-vs-two}
\setlength{\tabcolsep}{6pt}
\vspace{-4pt}
\begin{tabularx}{\linewidth}{l l l C}
\toprule
\textbf{Method} & \textbf{Type} & \textbf{Detector Backbone} & \textbf{mAP} \\
\midrule
SCG~\cite{scg} & two-stage & Faster R-CNN R-50-FPN & 24.88 \\
SCG~\cite{scg} & two-stage & DETR R-50 & 28.79 \\
SCG~\cite{scg} & two-stage & DETR R-101 & \textbf{29.26} \\
\midrule
QPIC~\cite{qpic} & one-stage & DETR R-50 & 29.07 \\
QPIC~\cite{qpic} & one-stage & DETR R-101 & \textbf{29.90} \\
\midrule
Ours & two-stage & DETR R-50 & 31.66 \\
Ours & two-stage & DETR R-101 & \textbf{32.31} \\
\bottomrule
\end{tabularx}
\vspace{-6pt}
\end{table}
Despite these advantages, the performance of two-stage detectors has lagged behind their one-stage counterparts. However, most of these two-stage models used Faster R-CNN~\cite{fasterrcnn} rather than more recent object detectors. We found that simply replacing Faster R-CNN with the DETR model in an existing two-stage detector (SCG)~\cite{scg} resulted in a significant improvement, putting it on par with a state-of-the-art one-stage detector (QPIC), as shown in \cref{tab:one-vs-two}. We attribute this performance gain to the representation power of transformers and bipartite matching loss~\cite{detr}. The latter is particularly important because it resolves the misalignment between the training procedure and evaluation protocol. The evaluation protocol dictates that, amongst all detections associated with the same ground truth, the highest scoring one is the true positive while the others are false positives. Without bipartite matching, all such detections will be labelled as positives. The detector then has to resort to heuristics such as non-maximum suppression to mitigate the issue, resulting in procedural misalignment.
We propose a two-stage model that refines the output features from DETR with additional transformer layers for HOI classification. As shown in \cref{fig:teaser}, we encode the instance information in two ways: a unary representation where individual human and object instances are encoded separately, and a pairwise representation where human--object pairs are encoded jointly. These representations provide orthogonal information, and we observe different behaviours in their associated layers. The unary encoder layer preferentially increases the predicted interaction scores for positive examples, while the pairwise encoder layer suppresses the negative examples. As a result, this complementary behaviour widens the gap between scores of positive and negative examples, particularly benefiting ranking metrics such as mean average precision (mAP).
Our primary contribution is a novel and efficient two-stage HOI detector with unary and pairwise encodings. Our secondary contribution is demonstrating how pairwise box positional encodings---critical for HOI detection---can be incorporated into a transformer architecture, enabling it to jointly reason about unary appearance and pairwise spatial information. We further provide a detailed analysis on the behaviour of the two encoder layers, showing that they have complementary properties. Our proposed model not only outperforms state-of-the-art methods, but also consumes much less time and memory to train. The latter allows us to employ more memory-intensive backbone networks, further improving the performance.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/upt.pdf}
\caption{Flowchart for our unary--pairwise transformer. An input image is processed by a backbone CNN to produce image features, which are partitioned into patches of equal size and augmented with sinusoidal positional encodings. These tokens are fed into the DETR~\cite{detr} transformer encoder--decoder stack, generating new features for a fixed number of learnable object queries. These are decoded by an MLP as object classification scores and bounding boxes, and are also passed to the interaction head as unary tokens. The interaction head also receives pairwise positional encodings computed from the predicted bounding box coordinates. A modified transformer encoder layer then refines the unary tokens using the pairwise positional encodings. The output tokens are paired up and fused with the same positional encodings to produce pairwise tokens, which are processed by a standard transformer encoder layer before an MLP decodes the final features as action classification scores.
}
\label{fig:diagram}
\end{figure*}
\section{Related work}
Transformer networks~\cite{xfmer}, initially developed for machine translation, have recently become ubiquitous in computer vision due to their representation power, flexibility, and global receptive field via the attention mechanism. The image transformer ViT~\cite{vit} represented an image as a set of spatial patches, each of which was encoded as a token through simple linear transformations. This approach for tokenising images rapidly gained traction and inspired many subsequent works~\cite{swint}. Another key innovation of transformers is the use of learnable queries in the decoder, which are initialised randomly and updated through alternating self-attention and cross-attention with encoder tokens. Carion \etal~\cite{detr} use these as object queries in place of conventional region proposals for their object detector. Together with a bipartite matching loss, this design gave rise to a new class of one-stage detection models that formulate the detection task as a set prediction problem. It has since inspired numerous works in HOI detection~\cite{qpic, hoitrans, hotr, asnet}.
To adapt the DETR model to HOI detection, Tamura \etal~\cite{qpic} and Zou \etal~\cite{hoitrans} add additional heads to the transformer in order to localise both the human and object, as well as predict the action. As for bipartite matching, additional cost terms are added for action prediction. On the other hand, Kim \etal~\cite{hotr} and Chen \etal~\cite{asnet} propose an interaction decoder to be used alongside the DETR instance decoder. It is specifically responsible for predicting the action while also matching the interactive human--object pairs. These aforementioned one-stage detectors have achieved tremendous success in pushing the state-of-the-art performance. However, they all require significant resources to train the models. In contrast, this work focuses on exploiting novel ideas to produce equally discriminative features while preserving the memory efficiency and low training time of two-stage detectors.
Two-stage HOI detectors have also undergone significant development recently. Li \etal~\cite{idn} studied the integration and decomposition of HOIs in an analogy to the superposition of waves in harmonic analysis. Hou \etal explored few-shot learning by fabricating object representations in feature space~\cite{fcl} and learning to transfer object affordance~\cite{atl}. Finally, Zhang \etal~\cite{scg} proposed to fuse features of different modalities within a graphical model to produce more discriminative features. We make use of this modality fusion in our transformer model and show that it leads to significant improvements.
\section{Unary--pairwise transformers}
To leverage the success of transformer-based detectors, we use DETR~\cite{detr} as our backbone object detector and focus on designing an effective and efficient interaction head for HOI detection, as shown in \cref{fig:diagram}. The interaction head consists of two types of transformer encoder layers, with the first layer modified to accommodate additional pairwise input. The first layer operates on unary tokens, \ie, individual human and object instances, while the second layer operates on pairwise tokens, \ie, human--object pairs. Based on our analysis and experimental observations in \cref{sec:macro} and \cref{sec:micro}, self-attention in the unary layer preferentially increases the interaction scores for positive HOI pairs, whereas self-attention in the pairwise layer decreases the scores for negative pairs. As such, we refer to these layers as \textit{cooperative} and \textit{competitive} layers respectively.
\subsection{Cooperative layer}
\label{sec:coop}
A standard transformer encoder layer takes as input a set of tokens and performs self-attention. Positional encodings are usually indispensable to compensate for the lack of order in the token set. Typically, sinusoidal functions of the position~\cite{xfmer} or learnable embeddings~\cite{detr} are used for this purpose. It is possible to extend sinusoidal encodings to bounding box coordinates, however, our unary tokens already contain positional information, since they were decoded into bounding boxes. Instead, we take this as an opportunity to inject pairwise spatial information into the transformer, something that has been shown to be helpful for the task of HOI detection~\cite{scg}. Specifically, we compute the unary and pairwise spatial features used by Zhang \etal~\cite{scg} from the bounding boxes, including the unary box centre, width and height, and pairwise intersection-over-union, relative area, and direction, and pass this through an MLP to obtain the pairwise positional encodings. We defer the full details to~\cref{app:pe}.
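For illustration, the listing below sketches a simplified version of this computation in PyTorch; the function name, the normalisation and the exact subset of features are our own choices for exposition, and the complete feature set of~\cite{scg} is listed in~\cref{app:pe}.
\begin{verbatim}
import torch

def box_pair_spatial_features(boxes_h, boxes_o, eps=1e-6):
    # boxes_h, boxes_o: (n, 4) tensors of (x1, y1, x2, y2)
    # corner coordinates for the human and object boxes.
    def centre_wh(b):
        w, h = b[:, 2] - b[:, 0], b[:, 3] - b[:, 1]
        return b[:, 0] + w / 2, b[:, 1] + h / 2, w, h

    cxh, cyh, wh, hh = centre_wh(boxes_h)
    cxo, cyo, wo, ho = centre_wh(boxes_o)
    # pairwise intersection over union
    lt = torch.max(boxes_h[:, :2], boxes_o[:, :2])
    rb = torch.min(boxes_h[:, 2:], boxes_o[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    iou = inter / (wh * hh + wo * ho - inter + eps)
    # relative area and direction between box centres
    rel_area = (wo * ho) / (wh * hh + eps)
    dx, dy = (cxo - cxh) / (wh + eps), (cyo - cyh) / (hh + eps)
    # unary centres, widths and heights plus pairwise terms
    # (normalisation by the image size is omitted here)
    return torch.stack([cxh, cyh, wh, hh, cxo, cyo, wo, ho,
                        iou, rel_area, dx, dy], dim=1)
\end{verbatim}
The resulting feature vector is mapped to $\reals^m$ by an MLP to give the pairwise positional encodings.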
We also found that the usual additive approach did not perform as well for our positional encodings, so we slightly modify the attention operation in the transformer encoder layer to inject the pairwise positional encodings directly into the computation of the values and attention weights.
\begin{figure}[t]
\centering
\includegraphics[width=0.89\linewidth]{figures/modified_layer.pdf}
\caption{Architecture of the modified transformer encoder layer (left) and its attention module (right). FFN stands for feedforward network~\cite{xfmer}. ``Pairwise concat.'' refers to the operation of pairing up all tokens and concatenating the features. ``Duplicate'' refers to the operation of repeating the features along a new dimension.}
\label{fig:modified-layer}
\end{figure}
More formally, given the detections returned by DETR, we first apply non-maximum suppression and thresholding. This leaves a smaller set $\{d_i\}_{i=1}^{n}$, where a detection $d_i=(\bb_i, s_i, c_i, \bx_i)$ consists of the box coordinates $\bb_i \in \reals^4$, the confidence score $s_i \in [0, 1]$, the object class $c_i \in \cK$ for a set of object categories $\cK$, and the object query or feature $\bx_i \in \reals^{m}$. We compute the pairwise box positional encodings $\{\by_{i, j} \in \reals^m\}_{i, j=1}^{n}$ as outlined above.
We denote the collection of unary tokens by $X \in \reals^{n \times m}$ and the pairwise positional encodings by $Y \in \reals^{n \times n \times m}$. The complete structure of the modified transformer encoder layer is shown in \cref{fig:modified-layer}. For brevity of exposition, let us assume that the number of heads $h$ is 1, and define
\begin{align}
\dot{X} \in \reals^{n \times n \times m},\: \dot{X}_i & \triangleq X \in \reals^{n \times m}, \\
\ddot{X} \in \reals^{n \times n \times 2m},\: \ddot{\bx}_{i,j} & \triangleq \bx_{i} \oplus \bx_{j} \in \reals^{2m},
\end{align}
where $\oplus$ denotes vector concatenation. That is, the tensors $\dot{X}$ and $\ddot{X}$ are the results of duplication and pairwise concatenation. The equivalent values and attention weights can then be computed as
\begin{align}
V &= \dot{X} \otimes Y, \\
W &= \text{softmax}( (\ddot{X} \oplus Y) \bw + b ),
\end{align}
where $\otimes$ denotes elementwise product and $\bw \in \reals^{3m}$ and $b \in \reals$ are the parameters of the linear layer. The output of the attention layer is then computed as $W \otimes V$.
Additional details can be found in~\cref{app:me}.
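To make the computation above concrete, the following PyTorch-style sketch (our own illustrative code, not the exact implementation) realises the modified attention for a single head; in particular, we read the output $W \otimes V$ as being aggregated over the second index, and the residual connection and feedforward network of~\cref{fig:modified-layer} are omitted. See~\cref{app:me} for the precise formulation.
\begin{verbatim}
import torch

def modified_attention(x, y, w, b):
    # x: unary tokens, shape (n, m)
    # y: pairwise positional encodings, shape (n, n, m)
    # w: linear-layer weights, shape (3m,); b: scalar bias
    n, m = x.shape
    x_dot = x.unsqueeze(0).expand(n, n, m)   # \dot{X}[i, j] = x_j
    x_ddot = torch.cat([                     # \ddot{X}[i, j] = x_i (+) x_j
        x.unsqueeze(1).expand(n, n, m),
        x.unsqueeze(0).expand(n, n, m),
    ], dim=-1)
    values = x_dot * y                       # V = \dot{X} (*) Y
    logits = torch.cat([x_ddot, y], dim=-1) @ w + b
    weights = logits.softmax(dim=-1)         # W, shape (n, n)
    # aggregate the weighted values over the second index
    return (weights.unsqueeze(-1) * values).sum(dim=1)
\end{verbatim}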
\subsection{Competitive layer}
To compute the set of pairwise tokens, we form all pairs of distinct unary tokens and remove those where the first token is not human, as object--object pairs are beyond the scope of HOI detection. We denote the resulting set as $\{p_k = (\bx_i, \bx_j, \by_{i, j}) \mid i \neq j, c_i = ``\text{human}''\}$. We then compute the pairwise tokens from the unary tokens and positional encodings via multi-branch fusion (MBF)~\cite{scg} as
\begin{equation}
\bz_k = \text{MBF}(\bx_i \oplus \bx_j, \by_{i, j}).
\end{equation}
Specifically, the MBF module fuses two modalities in multiple homogeneous branches and return a unified feature representation. For completeness, full details are provided in~\cref{app:mbf}. Last, the set of pairwise tokens are fed into an additional transformer encoder layer, allowing the network to compare the HOI candidates, before an MLP predicts each HOI pair's action classification logits $\widetilde{\bs}$.
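For clarity, the pair enumeration can be sketched as follows (PyTorch-style, our own illustrative code); here \texttt{mbf} denotes the multi-branch fusion module and \texttt{human\_idx} the predicted class index corresponding to the human category, both of which are assumptions of this sketch.
\begin{verbatim}
import torch

def build_pairwise_tokens(x, y, labels, mbf, human_idx):
    # x: (n, m) unary tokens; y: (n, n, m) positional encodings
    # labels: (n,) predicted object classes
    pairs, tokens = [], []
    n = x.shape[0]
    for i in range(n):
        if labels[i] != human_idx:
            continue                  # the first token must be human
        for j in range(n):
            if i == j:
                continue              # only pairs of distinct tokens
            pairs.append((i, j))
            tokens.append(mbf(torch.cat([x[i], x[j]]), y[i, j]))
    # the stacked tokens are fed to the pairwise encoder layer
    return pairs, torch.stack(tokens)
\end{verbatim}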
\subsection{Training and inference}
To make full use of the pre-trained object detector, we incorporate the object confidence scores into the final scores of each human--object pair. Denoting the action logits of the $k^{\text{th}}$ pair $p_k$ as $\widetilde{\bs}_k$, the final scores are computed as
\begin{equation}
\bs_k=(s_i)^\lambda \cdot (s_j)^\lambda \cdot \sigma(\widetilde{\bs}_k),
\label{eq:scores}
\end{equation}
where $\lambda > 1$ is a constant used during inference to suppress overconfident objects~\cite{scg} and $\sigma$ is the sigmoid function. We use focal loss\footnote{Final scores in \cref{eq:scores} are normalised to the interval $[0, 1]$. In training, we instead recover the scale prior to normalisation and use the corresponding loss with logits for numerical stability. See more details in~\cref{app:loss}.}~\cite{retinanet} for action classification to counter the imbalance between positive and negative examples. Following previous practice~\cite{no-frills,scg}, we only compute the loss on valid action classes for each object type, specified by the dataset. During inference, scores for invalid combinations of actions and objects (\eg, \textit{eating a car}) are zeroed out.
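At inference time, the score fusion of \cref{eq:scores} amounts to the following sketch (our own illustrative code; the construction of the validity mask is an assumption of this snippet, and during training the unnormalised logits are used with focal loss instead, as noted above).
\begin{verbatim}
import torch

def fuse_scores(action_logits, score_h, score_o,
                valid_mask, lambda_=2.8):
    # action_logits: (k, num_actions) pairwise logits
    # score_h, score_o: (k,) detection confidence scores
    # valid_mask: (k, num_actions) actions valid for the object type
    s = torch.sigmoid(action_logits)
    s = (score_h ** lambda_).unsqueeze(1) \
        * (score_o ** lambda_).unsqueeze(1) * s
    return s * valid_mask.float()     # zero out invalid combinations
\end{verbatim}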
\section{Experiments}
\begin{table*}[t]\small
\centering
\caption{Comparison of HOI detection performance (mAP$\times100$) on the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} test sets. The highest result in each section is highlighted in bold.}
\label{tab:results}
\begin{tabularx}{\linewidth}{@{\extracolsep{\fill}} l l cccccccc}
\toprule
& & \multicolumn{6}{c}{\textbf{HICO-DET}} & \multicolumn{2}{c}{\textbf{V-COCO}} \\ [4pt]
& & \multicolumn{3}{c}{Default Setting} & \multicolumn{3}{c}{Known Objects Setting} & & \\
\cline{3-5}\cline{6-8}\cline{9-10} \\ [-8pt]
\textbf{Method} & \textbf{Backbone} & Full & Rare & Non-rare & Full & Rare & Non-rare & AP$_{role}^{S1}$ & AP$_{role}^{S2}$ \\
\midrule
HO-RCNN~\cite{hicodet} & CaffeNet & 7.81 & 5.37 & 8.54 & 10.41 & 8.94 & 10.85 & - & - \\
InteractNet~\cite{interactnet} & ResNet-50-FPN & 9.94 & 7.16 & 10.77 & - & - & - & 40.0 & - \\
GPNN~\cite{gpnn} & ResNet-101 & 13.11 & 9.34 & 14.23 & - & - & - & 44.0 & - \\
TIN~\cite{tin} & ResNet-50 & 17.03 & 13.42 & 18.11 & 19.17 & 15.51 & 20.26 & 47.8 & 54.2 \\
Gupta \etal~\cite{no-frills} & ResNet-152 & 17.18 & 12.17 & 18.68 & - & - & - & - & - \\
VSGNet~\cite{vsgnet} & ResNet-152 & 19.80 & 16.05 & 20.91 & - & - & - & 51.8 & 57.0 \\
DJ-RN~\cite{djrn} & ResNet-50 & 21.34 & 18.53 & 22.18 & 23.69 & 20.64 & 24.60 & - & - \\
PPDM~\cite{ppdm} & Hourglass-104 & 21.94 & 13.97 & 24.32 & 24.81 & 17.09 & 27.12 & - & - \\
VCL~\cite{vcl} & ResNet-50 & 23.63 & 17.21 & 25.55 & 25.98 & 19.12 & 28.03 & 48.3 & - \\
ATL~\cite{atl} & ResNet-50 & 23.81 & 17.43 & 27.42 & 27.38 & 22.09 & 28.96 & - & - \\
DRG~\cite{drg} & ResNet-50-FPN & 24.53 & 19.47 & 26.04 & 27.98 & 23.11 & 29.43 & 51.0 & - \\
IDN~\cite{idn} & ResNet-50 & 24.58 & 20.33 & 25.86 & 27.89 & 23.64 & 29.16 & 53.3 & 60.3 \\
HOTR~\cite{hotr} & ResNet-50 & 25.10 & 17.34 & 27.42 & - & - & - & 55.2 & \textbf{64.4} \\
FCL~\cite{fcl} & ResNet-50 & 25.27 & 20.57 & 26.67 & 27.71 & 22.34 & 28.93 & 52.4 & - \\
HOI-Trans~\cite{hoitrans} & ResNet-101 & 26.61 & 19.15 & 28.84 & 29.13 & 20.98 & 31.57 & 52.9 & - \\
AS-Net~\cite{asnet} & ResNet-50 & 28.87 & 24.25 & 30.25 & 31.74 & 27.07 & 33.14 & 53.9 & - \\
SCG~\cite{scg} & ResNet-50-FPN & 29.26 & \textbf{24.61} & 30.65 & \textbf{32.87} & \textbf{27.89} & \textbf{34.35} & 54.2 & 60.9 \\
QPIC~\cite{qpic} & ResNet-101 & \textbf{29.90} & 23.92 & \textbf{31.69} & 32.38 & 26.06 & 34.27 & \textbf{58.8} & 61.0 \\
\midrule
Ours (UPT) & ResNet-50 & 31.66 & 25.94 & 33.36 & 35.05 & 29.27 & 36.77 & 59.0 & 64.5 \\
Ours (UPT) & ResNet-101 & 32.31 & 28.55 & 33.44 & 35.65 & \textbf{31.60} & 36.86 & 60.7 & 66.2 \\
Ours (UPT) & ResNet-101-DC5 & \textbf{32.62} & \textbf{28.62} & \textbf{33.81} & \textbf{36.08} & 31.41 & \textbf{37.47} & \textbf{61.3} & \textbf{67.1} \\
\bottomrule
\end{tabularx}
\end{table*}
In this section, we first demonstrate that the proposed unary--pairwise transformer achieves state-of-the-art performance on both the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} datasets, outperforming the next best method by a significant margin. We then provide a thorough analysis of the effects of the cooperative and competitive layers. In particular, we show that the cooperative layer increases the scores of positive examples while the competitive layer suppresses those of the negative examples. We then visualise the attention weights for specific images, and show how these behaviours are achieved by the attention mechanism.
At inference time, our method with ResNet-50~\cite{resnet} runs at 24 FPS on a single GeForce RTX 3090 device.
\paragraph{Datasets:}
HICO-DET~\cite{hicodet} is a large-scale HOI detection dataset with $37\,633$ training images, $9\,546$ test images, $80$ object types, $117$ actions, and $600$ interaction types. The dataset has $117\,871$ human--object pairs with annotated bounding boxes in the training set and $33\,405$ in the test set.
V-COCO~\cite{vcoco} is much smaller in scale, with $2\,533$ training images, $2\,867$ validation images, $4\,946$ test images, and only $24$ different actions.
\subsection{Implementation details}
We fine-tune the DETR model on the HICO-DET and V-COCO datasets prior to training and then freeze its weights. For HICO-DET, we use the publicly accessible DETR models pre-trained on MS COCO~\cite{coco}. However, for V-COCO, as its test set is contained in the COCO val2017 subset, we first pre-train DETR models from scratch on MS COCO, excluding those images in the V-COCO test set. For the interaction head, we filter out detections with scores lower than $0.2$, and sample at least $3$ and up to $15$ humans and objects each, prioritising high scoring ones. For the hidden dimension of the transformer, we use $m=256$, the same as DETR. Additionally, we set $\lambda$ to $1$ during training and $2.8$ during inference~\cite{scg}. For the hyperparameters used in the focal loss, we use the same values as SCG~\cite{scg}.
We apply a few data augmentation techniques used in other detectors~\cite{detr,qpic}. Input images are scaled such that the shortest side is at least $480$ and at most $800$ pixels. The longest side is limited to $1333$ pixels. Additionally, each image is cropped with a probability of $0.5$ to a random rectangle with each side being at least $384$ pixels and at most $600$ pixels before being scaled. We also apply colour jittering, where the brightness, contrast and saturation values are adjusted by a random factor between $0.6$ and $1.4$. We use AdamW~\cite{adamw} as the optimiser with an initial learning rate of $10^{-4}$. All models are trained for $20$ epochs with a learning rate reduction at the $10^{\text{th}}$ epoch by a factor of $10$. Training is conducted on $8$ GeForce GTX TITAN X devices, with a batch size of $2$ per GPU---an effective batch size of $16$.
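For reference, the recipe above can be summarised in a single (hypothetical) configuration dictionary; the key names are ours.
\begin{verbatim}
train_config = {
    "resize_shortest_side": (480, 800),        # pixels
    "max_longest_side": 1333,
    "random_crop": {"prob": 0.5, "side_range": (384, 600)},
    "colour_jitter_factor_range": (0.6, 1.4),  # brightness/contrast/sat.
    "optimiser": {"name": "AdamW", "lr": 1e-4},
    "epochs": 20,
    "lr_drop": {"epoch": 10, "factor": 10},
    "batch_size": {"per_gpu": 2, "num_gpus": 8},  # effective 16
}
\end{verbatim}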
\subsection{Comparison with state-of-the-art methods}
\begin{table*}[t]\small
\caption{Comparing the effect of the cooperative (coop.) and competitive (comp.) layers on the interaction scores. We report the change in the interaction scores as the layer in the $\Delta$ Architecture column is added to the reference network, for positives, easy negatives and hard negatives, with the number of examples in parentheses. As indicated by the bold numbers, the cooperative layer significantly increases the scores of positive examples while the competitive layer suppresses hard negative examples. Together, these layers widen the gap between scores of positive and negative examples, improving the detection mAP.}
\label{tab:delta}
\setlength{\tabcolsep}{3pt}
\begin{tabularx}{\linewidth}{@{\extracolsep{\fill}} l l c c c c c c}
\toprule
& & \multicolumn{2}{c}{$\Delta$ \textbf{Positives} ($25\,391$)} & \multicolumn{2}{c}{$\Delta$ \textbf{Easy Negatives} ($3\,903\,416$)} & \multicolumn{2}{c}{$\Delta$ \textbf{Hard Negatives} ($510\,991$)}\\ [4pt]
\cline{3-4} \cline{5-6} \cline{7-8} \\ [-8pt]
\textbf{Reference} & $\Delta$ \textbf{Architecture} & Mean & Median & Mean & Median & Mean & Median \\
\midrule
Ours w/o coop. layer & + coop. layer & \textbf{+0.1487} & +0.1078 & +0.0001 & +0.0000 & +0.0071 & +0.0000 \\
Ours w/o comp. layer & + comp. layer & -0.0463 & -0.0310 & -0.0096 & -0.0024 & \textbf{-0.1080} & -0.0922 \\
Ours w/o both layers & + both layers & \textbf{+0.0799} & +0.0390 & -0.0076 & -0.0018 & \textbf{-0.0814} & -0.0748 \\
\bottomrule
\end{tabularx}
\end{table*}
\begin{figure*}[t]
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/add_coop.png}
\caption{\cref{tab:delta} first row}
\label{fig:scatter-left}
\end{subfigure}
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/add_comp.png}
\caption{\cref{tab:delta} second row}
\label{fig:scatter-mid}
\end{subfigure}
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/add_both.png}
\caption{\cref{tab:delta} third row}
\label{fig:scatter-right}
\end{subfigure}
\caption{Change in the interaction score (delta) with respect to the reference score. \subref{fig:scatter-left} The distribution of score deltas when adding the cooperative layer (first row of \cref{tab:delta}). \subref{fig:scatter-mid} Adding the competitive layer to the model (second row). \subref{fig:scatter-right} Adding both layers (last row). For visualisation purposes, only $20\%$ of the negatives are sampled and displayed.
}
\label{fig:scatter}
\end{figure*}
The performance of our model is compared to existing methods on the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} datasets in \cref{tab:results}. There are two different settings for evaluation on HICO-DET. \textit{Default Setting}: A detected human--object pair is considered matched with a ground truth pair if the minimum intersection over union (IoU) between the human boxes and object boxes exceeds $0.5$. Amongst all matched pairs, the one with the highest score is considered the true positive while others are false positives. Pairs without a matched ground truth are also considered false positives. \textit{Known Objects Setting}: Besides the aforementioned criteria, this setting assumes the set of object types in ground truth pairs is known. Therefore, detected pairs with an object type outside the set are removed automatically, thus reducing the difficulty of the problem. For V-COCO, the average precision (AP) is computed under two scenarios, differentiated by the superscripts $S1$ and $S2$. This is to account for missing objects due to occlusion. For scenario $1$, empty object boxes should be predicted in case of occlusion for a detected pair to be considered a match with the corresponding ground truth, while for scenario $2$, object boxes are always assumed to be matched in such cases.
We report our model's performance for three different backbone networks. Notably, our model with the lightest-weight backbone already outperforms the next best method by a significant margin in almost every category. This gap is further widened with more powerful backbone networks. In particular, since the backbone CNN and object detection transformer are detached from the computational graph, our model has a small memory footprint. This allows us to use a higher-resolution feature map by removing the stride in the $5^{\text{th}}$ convolutional block (C5) of ResNet~\cite{resnet}, which has been shown to improve detection performance on small objects~\cite{detr}. We denote this as dilated C5 (DC5).
\subsection{Macroscopic effects of the interaction head}
\label{sec:macro}
In this section, we compare the effects of the unary (cooperative) and pairwise (competitive) layers on the HICO-DET test set, with ResNet-50~\cite{resnet} as the CNN backbone.
Since the parameters in the object detector are kept frozen for our model, the set of detections processed by the downstream network remains the same, regardless of any architectural changes in the interaction head. This allows us to compare how different variants of our model perform on the same human--object pairs. To this end, we collect the predicted interaction scores for all human--object pairs over the test set and compare how adding certain layers influences them. In \cref{tab:delta}, we show some statistics on the change of scores upon an architectural modification. In particular, note that the vast majority of collected pairs are easy negatives with scores close to zero. For analysis, we divide the negative examples into easy and hard, where we define an easy negative as one whose score, as predicted by the ``Ours w/o both layers'' model, is lower than $0.05$; such examples account for $90\%$ of the negative examples. In addition, we also show the distribution of the change in score with respect to the reference score as scatter plots in \cref{fig:scatter}. The points are naturally bounded by the half-spaces $0 \leq x+y \leq 1$.
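The split into easy and hard negatives and the statistics in \cref{tab:delta} can be reproduced along the following lines; this is a hedged NumPy sketch and the array names are ours.
\begin{verbatim}
import numpy as np

def score_delta_stats(ref_scores, new_scores, is_positive,
                      easy_thresh=0.05):
    # ref_scores: "Ours w/o both layers"; new_scores: after adding
    # a layer; is_positive: boolean ground-truth match labels.
    delta = new_scores - ref_scores
    easy = ~is_positive & (ref_scores < easy_thresh)
    hard = ~is_positive & ~easy
    return {name: (delta[m].mean(), np.median(delta[m]))
            for name, m in [("positives", is_positive),
                            ("easy negatives", easy),
                            ("hard negatives", hard)]}
\end{verbatim}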
Notably, adding the cooperative layer results in a significant average increase ($+0.15$) in the scores of positive examples, with little effect on the negative examples. This can be seen in \cref{fig:scatter-left} as well, where the score changes for almost all positive examples are larger than zero.
In contrast, adding the competitive layer leads to a significant average decrease ($-0.11$) in the scores of hard negative examples, albeit with a small decrease in the score of positive examples as well. This minor decrease is compensated by the cooperative layer as shown in the last row of \cref{tab:delta}. Furthermore, looking at \cref{fig:scatter-mid}, we can see a dense mass near the line $y=-x$, which indicates that many negative examples have had their scores suppressed to zero.
\begin{table}[t]\small
\caption{Effect of the cooperative and competitive layers on the HICO-DET test set under the default settings.}
\label{tab:ablation}
\setlength{\tabcolsep}{3pt}
\begin{tabularx}{\linewidth}{l C C C}
\toprule
\textbf{Model} & \textbf{Full} & \textbf{Rare} & \textbf{Non-rare} \\
\midrule
Ours w/o both layers & 29.22 & 23.09 & 31.05 \\
Ours w/o comp. layer & 30.78 & 24.92 & 32.53 \\
Ours w/o coop. layer & 30.68 & 24.69 & 32.47 \\
Ours w/o pairwise pos. enc. & 29.98 & 23.72 & 31.64 \\
\midrule
Ours ($1 \times$ coop., $1 \times$ comp.) & 31.33 & 26.02 & 32.91 \\
Ours ($1 \times$ coop., $2 \times$ comp.) & 31.62 & \textbf{26.18} & 33.24 \\
Ours ($2 \times$ coop., $1 \times$ comp.) & \textbf{31.66} & 25.94 & \textbf{33.36} \\
\bottomrule
\end{tabularx}
\end{table}
\paragraph{Ablation study:}
In \cref{tab:ablation}, we ablate the effect of different design decisions on performance. Adding the cooperative and competitive layers individually improves the performance by around $1.5$~mAP, while adding both layers jointly improves it by over $2$~mAP. We also demonstrate the significance of the pairwise position encodings by removing them from the modified encoder and the multi-branch fusion module. This results in a $1.3$~mAP decrease. Finally, we observe a slight improvement ($0.3$~mAP) when adding an additional cooperative or competitive layer, but no further improvements with more layers. As the competitive layer is more costly, we use two cooperative layers.
\subsection{Microscopic effects of the interaction head}
\label{sec:micro}
\begin{figure}[t]
\centering
\includegraphics[height=0.36\linewidth]{figures/image.png} \hspace{3pt}
\includegraphics[height=0.36\linewidth]{figures/unary_attn.png}
\caption{Detected human and object instances (left) and the unary attention map for these instances (right).}
\label{fig:unary_attn}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/pairwise_attn.png}
\caption{Pairwise attention map for the human and object instances in \cref{fig:unary_attn}.}
\label{fig:pairwise_attn}
\end{figure}
\begin{figure*}[t]
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/stand_on_snowboard_6544.png}
\caption{\textit{standing on a snowboard}}
\label{fig:standing-on-snowboard}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/holding_umbrella_7243.png}
\caption{\textit{holding an umbrella}}
\label{fig:holding-umbrella}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/carrying_suitcase_357.png}
\caption{\textit{carrying a suitcase}}
\label{fig:carrying-suitcase}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/sitting_at_dining_table_8701.png}
\caption{\textit{sitting at a dining table}}
\label{fig:sitting-at-dinning-table}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/sitting_on_bench_934.png}
\caption{\textit{sitting on a bench}}
\label{fig:sitting-on-bench}
\end{subfigure}
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/flying_airplane_573.png}
\caption{\textit{flying an airplane}}
\label{fig:flying-airplane}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/holding_surfboard_1681.png}
\caption{\textit{holding a surfboard}}
\label{fig:holding-surfboard}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/wielding_baseball_bat_1860.png}
\caption{\textit{wielding a baseball bat}}
\label{fig:wielding-baseball-bat}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/riding_bike_998.png}
\caption{\textit{riding a bike}}
\label{fig:riding-bike}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/holding_wine_glasses_2661.png}
\caption{\textit{holding a wineglass}}
\label{fig:holding-wineglass}
\end{subfigure}
\caption{Qualitative results of detected HOIs. Interactive human--object pairs are connected by red lines, with the interaction scores overlaid above the human box. Pairs with scores lower than $0.2$ are filtered out.}
\label{fig:qualitative}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/driving_truck_8578.png}
\caption{\textit{driving a truck}}
\label{fig:driving-truck}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\includegraphics[height=0.67\linewidth]{figures/buying_bananas_2502.png}
\caption{\textit{buying bananas}}
\label{fig:buying-bananas}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/repairing_laptops_607.png}
\caption{\textit{repairing a laptop}}
\label{fig:repairing-laptop}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/washing_bicycle_4213.png}
\caption{\textit{washing a bicycle}}
\label{fig:washing-bike}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/cutting_tie_9522.png}
\caption{\textit{cutting a tie}}
\label{fig:cutting-tie}
\end{subfigure}
\caption{Failure cases often occur when there is ambiguity in the interaction~\subref{fig:driving-truck},~\subref{fig:buying-bananas},~\subref{fig:repairing-laptop} or a lack of training data~\subref{fig:repairing-laptop},~\subref{fig:washing-bike},~\subref{fig:cutting-tie}.}
\label{fig:failure}
\end{figure*}
In this section, we focus on a specific image and visualise the effect of attention in our cooperative and competitive layers. In \cref{fig:unary_attn}, we display a detection-annotated image and its associated attention map from the unary (cooperative) layer. The human--object pairs $(1, 4)$, $(2, 5)$ and $(3, 6)$ are engaged in the interaction \textit{riding a horse}. Excluding attention weights along the diagonal, we see that the corresponding human and horse instances attend to each other.
We hypothesise that attention between pairs of unary tokens (e.g., $1$ and $4$) helps increase the interaction scores for the corresponding pairs. To validate this hypothesis, we manually set the attention logits between the three positive pairs to minus infinity, thus zeroing out the corresponding attention weights. The effect of this was an average decrease of $0.06$ ($8\%$) in the interaction scores for the three pairs, supporting the hypothesis.
In \cref{fig:pairwise_attn}, we visualise the attention map of the pairwise (competitive) layer. Notably, all human--object pairs attend to the interactive pairs $(1, 4)$, $(2, 5)$ and $(3, 6)$, with decreasing weight in that order, except for the interactive pairs themselves. We hypothesise that attention is acting here to have the dominant pairs suppress the other pairs. To investigate, we manually set the weights such that the three interactive pairs all attend to $(1, 4)$ as well, with a weight of $1$. This resulted in a decrease of their interaction scores by $0.08$ ($11\%$). We then instead zeroed out the attention weights between the rest of the pairs and $(1, 4)$, which resulted in a small increase in the scores of negative pairs. These results together suggest that attention in the competitive layer is acting as a soft version of non-maximum suppression, where pairs less likely to foster interactions attend to, and are suppressed by, the most dominant pairs. See~\cref{app:qual} for more examples.
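The interventions above correspond to masking entries of the attention logits before the softmax, roughly as sketched below; masking both directions of a pair is our assumption.
\begin{verbatim}
import torch

def block_attention(logits, pairs):
    # Zero out selected attention weights by setting the
    # corresponding logits to -inf before the softmax.
    masked = logits.clone()
    for i, j in pairs:
        masked[i, j] = masked[j, i] = float("-inf")
    return masked.softmax(dim=-1)
\end{verbatim}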
\subsection{Qualitative results and limitations}
\vspace{-8pt}
In \cref{fig:qualitative}, we present several qualitative examples of successful HOI detections, where our model accurately localises the human and object instances and assigns high scores to the interactive pairs. For example, in \cref{fig:holding-umbrella}, our model correctly identifies the subject of an interaction (the lady in red) despite her proximity to a non-interactive human (the lady in black). We also observe in \cref{fig:standing-on-snowboard} that our model becomes less confident when there is overlap and occlusion. This stems from the use of object detection scores in our model. Confusion in the object detector often translates to confusion in action classification.
We also show five representative failure cases for our model, illustrating its limitations. In \cref{fig:driving-truck}, due to the highly variable position of drivers in the training set (and in real life), the model struggled to identify the driver. For \cref{fig:washing-bike}, the model failed to recognise the interaction due to a lack of training data ($1$ training example), even though the action is well-defined. Overall, ambiguity in the actions and insufficient data are the biggest challenges for our model.
Another limitation, specific to our model, is that the computation and memory requirements of our pairwise layer scale quadratically with the number of unary tokens. For scenes involving many interactive humans and objects, this becomes quite costly.
Moreover, since the datasets we use are limited in size and diversity, we may expect poorer performance on
data in the wild, where image resolution, lighting conditions, etc.\ may be less controlled.
\section{Conclusion}
In this paper, we have proposed a two-stage detector of human--object interactions using a novel transformer architecture that exploits both unary and pairwise representations of the human and object instances. Our model not only outperforms the current state-of-the-art---a one-stage detector---but also consumes much less time and memory to train. Through extensive analysis, we demonstrate that attention between unary tokens acts to increase the scores of positive examples, while attention between pairwise tokens acts like non-maximum suppression, reducing the scores of negative examples. We show that these two effects are complementary, and together boost performance significantly.
\vspace{-10pt}
\paragraph{Potential negative societal impact:}
Transformer models are large and computationally expensive, and so have a significant negative environmental impact. To mitigate this, we use pre-trained models and a two-stage architecture, since fine-tuning an existing model requires fewer resources, as does training a single stage with the other stage fixed. There is also the potential for HOI detection models to be misused, such as for unauthorised surveillance, which disproportionately affects minority and marginalised communities.
\vspace{-10pt}
\paragraph{Acknowledgments:}
We are grateful for support from Continental AG (D.C.). We would also like to thank Jia-Bin Huang and Yuliang Zou for their help with the reproduction of some experiment results.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The numerical analysis of elastic shells is a
vast field with important applications in physics and engineering. In most
cases, it is carried out via the finite element method. In the physics and computer
graphics literature, there have been suggestions to use simpler methods based
on discrete differential geometry \cite{meyer2003discrete,bobenko2008discrete}. Discrete differential geometry of surfaces is the study of triangulated polyhedral surfaces. (The epithet ``simpler'' has to be understood as ``easier to implement''.) We mention in passing that models based on triangulated polyhedral surfaces have applications in materials science beyond the elasticity of thin shells. E.g., recently these models have been used to describe defects in nematic liquids on thin shells \cite{canevari2018defects}. This amounts to a generalization to arbitrary surfaces of the discrete-to-continuum analysis for the XY model in two dimensions that leads to Ginzburg-Landau type models in the continuum limit \cite{MR2505362,alicandro2014metastability}.
\medskip
Let us describe some of the methods mentioned above in more detail.
Firstly, there are the so-called
\emph{polyhedral membrane models} which in fact can be used for a whole
array of physical and engineering problems (see e.g.~the review
\cite{davini1998relaxed}). In the context of plates and shells, the
so-called Seung-Nelson model \cite{PhysRevA.38.1005} is widely used.
This associates a membrane and a bending energy with a piecewise affine map $y:\R^2\supset U\to \R^3$, where the pieces are determined by a triangulation $\mathcal T$ of the polyhedral domain $U$. The bending energy is given by
\begin{equation}
E^{\mathrm{SN}}(y)= \sum_{K,L} |n(K)-n(L)|^2\,,\label{eq:1}
\end{equation}
where the sum runs over those unordered pairs of triangles $K,L$ in $\mathcal T$ that share an edge, and $n(K)$ is the surface normal on the triangle $K$. In \cite{PhysRevA.38.1005}, it has been argued that for a fixed
limit deformation $y$, the energy \eqref{eq:1} should approximate the Willmore energy
\begin{equation}
E^{\mathrm{W}}(y)=\int_{y(U)} |Dn|^2\; \mathrm{d}{\mathscr H}^2\label{eq:2}
\end{equation}
when the grid size of the triangulation $\mathcal T$ is sent to 0, and the argument of the discrete energy \eqref{eq:1} approximates the (smooth) map $y$. In \eqref{eq:2} above, $n$ denotes the surface normal and $\H^2$ the two-dimensional Hausdorff measure.
These statements have been made more precise in \cite{schmidt2012universal}, where it has been shown that the result of the limiting process depends on the triangulations used. In particular, the following has been shown in this reference: For $j\in\N$, let $\mathcal T_j$ be a triangulation of $U$ consisting of equilateral triangles such that one of the sides of each triangle is parallel to the $x_1$-direction, and such that the triangle size tends to $0$ as $j\to\infty$. Then the limit energy reads
\[
\begin{split}
E^{\mathrm{FS}}(y)=\frac{2}{\sqrt{3}}\int_U &\big(g_{11}(h_{11}^2+2h_{12}^2-2h_{11}h_{22}+3h_{22}^2)\\
&-8g_{12}h_{11}h_{12}+2 g_{22}(h_{11}^2+3h_{12}^2)\big)(\det g_{ij})^{-1}\; \mathrm{d} x\,,
\end{split}
\]
where
\[
\begin{split}
g_{ij}&=\partial_i y\cdot\partial_j y\\
h_{ij}&=n\cdot \partial_{ij} y \,.
\end{split}
\]
More precisely, if $y\in C^2(U)$ is given, then the sequence of maps $y_j$ obtained by piecewise affine interpolation of the values of $y$ on the vertices of the triangulations $\mathcal T_j$ satisfies
\[
\lim_{j\to \infty}E^{\mathrm{SN}}(y_j)=E^{\mathrm{FS}}(y)\,.
\]
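For concreteness, the following is a minimal NumPy sketch of the discrete energy \eqref{eq:1} for a triangulated surface given by an array \texttt{V} of vertex positions and an array \texttt{T} of triangle index triples; the names are ours, and a consistent orientation of the triangles is assumed so that the normals are like-oriented.
\begin{verbatim}
import numpy as np
from itertools import combinations

def triangle_normals(V, T):
    # unit normal of each triangle (consistent orientation assumed)
    N = np.cross(V[T[:, 1]] - V[T[:, 0]], V[T[:, 2]] - V[T[:, 0]])
    return N / np.linalg.norm(N, axis=1, keepdims=True)

def adjacent_pairs(T):
    # unordered pairs of triangles sharing an edge
    return [(K, L) for K, L in combinations(range(len(T)), 2)
            if len(set(T[K]) & set(T[L])) == 2]

def seung_nelson_energy(V, T):
    N = triangle_normals(V, T)
    return sum(np.sum((N[K] - N[L]) ** 2)
               for K, L in adjacent_pairs(T))
\end{verbatim}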
Secondly, there is the more recent approach to using discrete differential
geometry for shells pioneered by Grinspun et al.~\cite{grinspun2003discrete}.
Their energy does not depend on an immersion $y$ as above, but is defined directly on triangulated surfaces. Given such a surface $\mathcal T$, the energy is given by
\begin{equation}
E^{\mathrm{GHDS}}(\mathcal T)=\sum_{K,L} \frac{l_{KL}}{d_{KL}} \alpha_{KL}^2\label{eq:3}
\end{equation}
where the sum runs over unordered pairs of neighboring triangles $K,L\in\mathcal T$, $l_{KL}$ is the length of the interface between $K,L$, $d_{KL}$ is the distance between the centers of the circumcircles of $K,L$, and $\alpha_{KL}$ is the difference of the angle between $K,L$ and $\pi$, or alternatively the angle between the like-oriented normals $n(K)$ and $n(L)$, i.e. the \emph{dihedral angle}.
In \cite{bobenko2005conformal}, Bobenko has defined an energy for piecewise affine surfaces $\mathcal T$ that is invariant under conformal transformations. It is defined via the circumcircles of triangles in $\mathcal T$, and the external intersection angles of circumcircles of neighboring triangles. Denoting this intersection angle for neighboring triangles $K,L$ by $\beta_{KL}$, the energy reads
\begin{equation}\label{eq:4}
E^\mathrm{B} (\mathcal T) = \sum_{K,L}\beta_{KL}-\pi\, \#\,\text{Vertices}(\mathcal T)\,.
\end{equation}
Here $\text{Vertices}(\mathcal T)$ denotes the vertices of the triangulation $\mathcal T$, the sum is again over nearest neighbors.
It has been shown in \cite{bobenko2008surfaces} that this energy is the same as
\eqref{eq:3} up to terms that vanish as the size of triangles is sent to zero
(assuming sufficient smoothness of the limiting surface). The reference
\cite{bobenko2008surfaces} also contains an analysis of the energy for this
limit. If the limit surface is smooth, and it is approximated by triangulated
surfaces $\mathcal T_\varepsilon$ with maximal triangle size $\varepsilon$ that satisfy
a number of technical assumptions, then the Willmore energy of the limit surface
is smaller than or equal to the limit of the energies \eqref{eq:3} for the approximating surfaces, see Theorem 2.12 in \cite{bobenko2008surfaces}. The technical assumptions are
\begin{itemize}
\item each vertex in the triangulation $\mathcal T_\varepsilon$ is connected to six other vertices by edges,
\item the lengths of the sides of the hexagon formed by six triangles that share one vertex differ by at most $O(\varepsilon^4)$,
\item neighboring triangles are congruent up to $O(\varepsilon^3)$.
\end{itemize}
Furthermore, it is stated that the limit is achieved if additionally the triangulation approximates a ``curvature line net''.
\medskip
The purpose of the present paper is to generalize this convergence result and put it into the framework of $\Gamma$-convergence \cite{MR1968440,MR1201152}. Instead of fixing the vertices of the polyhedral surfaces to lie on the limiting surfaces, we are going to assume that the convergence is weak-$*$ in $W^{1,\infty}$ as graphs. This approach allows us to completely drop the assumptions on the connectivity of vertices in the triangulations, and the assumptions of congruence -- we only need to require a certain type of regularity of the triangulations that prevents the formation of small angles.
\medskip
We are going to work with the energy
\begin{equation}\label{eq:5}
E(\mathcal T)=\sum_{K,L} \frac{l_{KL}}{d_{KL}} |n(K)-n(L)|^2\,,
\end{equation}
which in a certain sense is equivalent to \eqref{eq:3} and \eqref{eq:4} in the limit of vanishing triangle size, see the arguments from \cite{bobenko2008surfaces} and Remark \ref{rem:main} (ii) below.
\medskip
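The following sketch evaluates \eqref{eq:5} numerically, reusing \texttt{triangle\_normals} and \texttt{adjacent\_pairs} from the sketch above; the explicit formula for the circumcircle center is a standard one and is stated here only for illustration.
\begin{verbatim}
import numpy as np

def circumcenter(a, b, c):
    # center of the circumcircle of [a, b, c]; lies in its plane
    u, v = b - a, c - a
    A, B, C = u @ u, u @ v, v @ v
    d = 2.0 * (A * C - B * B)
    return a + (C * (A - B) * u + A * (C - B) * v) / d

def bending_energy(V, T):
    # energy (5); cospherical neighbours (d_KL = 0) not handled
    N = triangle_normals(V, T)
    E = 0.0
    for K, L in adjacent_pairs(T):
        i, j = set(T[K]) & set(T[L])
        l_KL = np.linalg.norm(V[i] - V[j])
        qK = circumcenter(*V[T[K]])
        qL = circumcenter(*V[T[L]])
        d_KL = np.linalg.norm(qK - qL)
        E += l_KL / d_KL * np.sum((N[K] - N[L]) ** 2)
    return E
\end{verbatim}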
To put this approach into its context in the mathematical literature, we point out that it is another instance of a discrete-to-continuum limit, which has been a popular topic in mathematical analysis over the last few decades. We mention the seminal papers \cite{MR1933632,alicandro2004general} and the fact that a variety of physical settings have been approached in this vein, such as spin and lattice systems \cite{MR1900933,MR2505362}, bulk elasticity \cite{MR2796134,MR3180690}, thin films \cite{MR2429532,MR2434899}, magnetism \cite{MR2186037,MR2505364}, and many more.
\medskip
The topology that we are going to use in our $\Gamma$-convergence statement is much coarser than the one that corresponds to Bobenko's convergence result; however, it is not the ``natural'' one that would yield compactness from finiteness of the energy \eqref{eq:5} alone. For a discussion of why we do not choose the latter, see Remark \ref{rem:main} (i) below. Our topology is instead defined as follows:
Let $M$ be some fixed compact oriented two-dimensional $C^\infty$ submanifold of
$\R^3$ with normal $n_M:M\to S^2$. Let $h_j\in W^{1,\infty}(M)$, $j=1,2,\dots$,
with $\|h_j\|_{W^{1,\infty}}<C$ and $\|h_j\|_{\infty}<\delta(M)/2$ (where $\delta(M)$ is the \emph{radius of injectivity} of $M$, see Definition \ref{def:radius_injectivity} below), such that $
\mathcal T_j:= \{x+h_j(x)n_M(x):x\in M\}$ are triangulated surfaces (see Definition \ref{def:triangular_surface} below). We say
$\mathcal T_j\to \mathcal S:=\{x+h(x)n_M(x):x\in M\}$ if $h_j\to h$ in
$W^{1,p}(M)$ for all $1\leq p<\infty$. Our main theorem, Theorem \ref{thm:main}
below, is a $\Gamma$-convergence result in this topology.
The regularity assumptions that we impose on the triangulated surfaces under
considerations are ``$\zeta$-regularity'' and the ``Delaunay property''. The
definition of these concepts can be found in Definition
\ref{def:triangular_surface} below.
\begin{thm}
\label{thm:main}
\begin{itemize}
\item[(o)] Compactness: Let $\zeta>0$, and let $h_j$ be a bounded sequence in
$W^{1,\infty}(M)$ such that $\mathcal T_j=\{x+h_j(x)n_M(x):x\in M\}$ is a
$\zeta$-regular triangulated surface and $\|h_j\|_\infty\leq\delta(M)/2$ for $j\in \N$ with $\limsup_{j\to\infty}E(\mathcal T_j)<\infty$. Then there exists a subsequence $h_{j_k}$ and $h\in W^{2,2}(M)$ such that $h_{j_k}\to h $ in $W^{1,p}(M)$ for every $1\leq p < \infty$.
\item[(i)] Lower bound: Let $\zeta>0$. Assume that for $j\in\N$, $h_j\in W^{1,\infty}(M)$ with $\|h_j\|_\infty\leq \delta(M)/2$, $\mathcal T_j:=\{x+h_j(x)n_M(x):x\in M\}$ is a
$\zeta$-regular triangulated surface fulfilling the Delaunay
property, and that $\mathcal T_j\to S=\{x+h(x)n_M(x):x\in M\}$ for $j\to\infty$. Then
\[
\liminf_{j\to\infty} E(\mathcal T_j)\geq \int_{S} |Dn_S|^2\; \mathrm{d}\H^2\,.
\]
\item[(ii)] Upper bound: Let $h\in W^{1,\infty}(M)$ with $\|h\|_\infty\leq \delta(M)/2$ and
$S=\{x+h(x)n_M(x):x\in M\}$. Then there exists $\zeta>0$ and a sequence $(h_j)_{j\in\N}\subset W^{1,\infty}(M)$ such that $\mathcal T_j:=\{x+h_j(x)n_M(x):x\in M\}$ is a
$\zeta$-regular triangulated surface satisfying the
Delaunay property for each $j\in \N$, and we have $\mathcal T_j\to S$ for $j\to \infty$ and
\[
\lim_{j\to\infty} E(\mathcal T_j)= \int_{S} |Dn_S|^2\; \mathrm{d}\H^2\,.
\]
\end{itemize}
\end{thm}
\begin{rem}\label{rem:main}
\begin{itemize}
\item[(i)] We are not able to derive a convergence result in a topology that
yields convergence from boundedness of the energy \eqref{eq:5} alone. Such an
approach would necessitate the interpretation of the surfaces as varifolds or
currents. To the best of our knowledge, the theory of
integral functionals on varifolds (see e.g.~\cite{menne2014weakly,hutchinson1986second,MR1412686}) is not
developed to the point of allowing for a treatment of this question. In particular, there does not exist a sufficiently
general theory of lower semicontinuity of integral functionals for varifold-function pairs.
\item[(ii)] We can state
analogous results based on the energy functionals \eqref{eq:3},
\eqref{eq:4}. To do so, our proofs only need to be modified slightly: As soon as we have reduced the situation to the graph case
(which we do by assumption), the upper bound construction can be carried out
as here; the smallness of the involved dihedral angles assures that the
arguments from \cite{bobenko2005conformal} suffice to carry through the proof.
Concerning the lower bound, we also reduce to the case of small dihedral angles by a blow-up procedure around Lebesgue points of the derivative of the surface normal of the limit surface. (Additionally, one can show smallness of the contribution of a few pairs of triangles whose dihedral angle is not small.) Again, the considerations from \cite{bobenko2005conformal} allow for a translation of our proof to the case of the energy functionals \eqref{eq:3},
\eqref{eq:4}.
\item[(iii)] As we will show in Section \ref{sec:necess-dela-prop}, we need to require the Delaunay property in order to obtain the lower bound statement: without this requirement, a hollow cylinder can be approximated by triangulated surfaces with arbitrarily low energy, see Proposition~\ref{prop: optimal grid}.
\item[(iv)] Much more general approximations of surfaces by discrete geometrical objects have recently been proposed in \cite{buet2017varifold,buet2018discretization,buet2019weak}, based on tools from the theory of varifolds.
\end{itemize}
\end{rem}
\subsection*{Plan of the paper}
In Section \ref{sec:defin-prel}, we will fix definitions and make some preliminary observations on triangulated surfaces. The proofs of the compactness and lower bound part will be developed in parallel in Section \ref{sec:proof-comp-lower}. The upper bound construction is carried out in Section \ref{sec:surf-triang-upper}, and in Section \ref{sec:necess-dela-prop} we demonstrate that the requirement of the Delaunay property is necessary in order to obtain the lower bound statement.
\section{Definitions and preliminaries}
\label{sec:defin-prel}
\subsection{Some general notation}
\begin{notation}
For a two-dimensional submanifold $M\subset\R^3$, the tangent space of $M$ in $x\in M$ is
denoted by $T_{x}M$. For functions $f:M\to\R$, we denote their gradient by $\nabla f\in T_xM$; the norm $|\cdot|$ on $T_xM\subset\R^3$ is the Euclidean norm inherited from $\R^3$. For $1\leq p\leq \infty$, we denote by $W^{1,p}(M)$ the space of functions $f\in L^p(M)$ such that $\nabla f\in L^p(M;\R^3)$, with norm
\[
\|f\|_{W^{1,p}(M)}=\|f\|_{L^p(M)}+\|\nabla f\|_{L^p(M)}\,.
\]
For $U\subset\R^n$ and a function
$f:U\to\R$, we denote the graph of $f$ by
\[
\mathrm{Gr}\, f=\{(x,f(x)):x\in U\}\subset\R^{n+1}\,.
\]
For $x_1,\dots,x_m\in \R^k$, the convex hull of $\{x_1,\dots,x_m\}$ is
denoted by
\[
[x_1,\dots,x_m]=\left\{\sum_{i=1}^m \lambda_ix_i:\lambda_i\in [0,1] \text{
for } i=1,\dots,m, \, \sum_{i=1}^m\lambda_i=1\right\}\,.
\]
We will identify $\R^2$ with the subspace $\R^2\times\{0\}$ of $\R^3$. The $d$-dimensional Hausdorff measure is denoted by $\H^d$, the $k$-dimensional Lebesgue measure by $\L^k$. The
symbol ``$C$'' will be used as follows: A statement such as
``$f\leq C(\alpha)g$'' is shorthand for ``there exists a constant $C>0$ that
only depends on $\alpha$ such that $f\leq Cg$''. The value of $C$ may change
within the same line. For $f\leq C g$, we also write
$f\lesssim g$.
\end{notation}
\subsection{Triangulated surfaces: Definitions}
\begin{defi}
\label{def:triangular_surface}
\begin{itemize}
\item [(i)] A \textbf{triangle} is the convex hull $[x,y,z]\subset \R^3$ of three points $x,y,z \in \R^3$. A \textbf{regular} triangle is one where $x,y,z$ are not collinear, or equivalently ${\mathscr H}^2([x,y,z])>0$.
\item[(ii)]
A \textbf{triangulated surface} is a finite collection
${\mathcal T} = \{K_i\,:\,i = 1,\ldots, N\}$ of regular triangles
$K_i = [x_i,y_i,z_i] \subset \R^3$ so that
$\bigcup_{i=1}^N K_i \subset \R^3$ is a topological two-dimensional manifold
with boundary; and the intersection of two different triangles $K,L\in {\mathcal T}$ is either empty, a common vertex, or a common edge.
We identify ${\mathcal T}$ with its induced topological manifold
$\bigcup_{i=1}^N K_i \subset \R^3$ whenever convenient. We say that ${\mathcal T}$ is \textbf{flat} if
there exists an affine subplane of $\R^3$ that contains ${\mathcal T}$.
\item[(iii)] The \textbf{size} of the triangulated surface, denoted $\size({\mathcal T})$, is the
maximum diameter of all its triangles.
\item[(iv)] The triangulated surface ${\mathcal T}$ is called $\zeta$\textbf{-regular}, with
$\zeta > 0$, if the minimum angle in all triangles is at least $\zeta$ and
$\min_{K\in {\mathcal T}} \diam(K) \geq \zeta \size({\mathcal T})$.
\item[(v)] The triangulated surface satisfies the \textbf{Delaunay}
property if for every triangle
$K = [x,y,z] \in {\mathcal T}$ the following property holds: Let $B(q,r)\subset \R^3$ be the
smallest ball such that $\{x,y,z\}\subset \partial{B(q,r)}$. Then $B(q,r)$ contains
no vertex of any triangle in ${\mathcal T}$. The point $q = q(K)\in \R^3$ is called
the \textbf{circumcenter} of $K$, $\overline{B(q,r)}$ its
\textbf{circumball} with circumradius $r(K)$, and $\partial B(q,r)$ its \textbf{circumsphere}.
\end{itemize}
\end{defi}
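As an illustration, the Delaunay property in (v) can be checked numerically along the following lines, reusing the \texttt{circumcenter} helper from the sketch in the introduction; the tolerance and the brute-force loop over all vertices are our choices.
\begin{verbatim}
import numpy as np

def satisfies_delaunay(V, T, tol=1e-9):
    # no vertex may lie strictly inside the circumball of a triangle
    vertices = np.unique(T)
    for tri in T:
        q = circumcenter(*V[tri])
        r = np.linalg.norm(V[tri[0]] - q)
        for i in vertices:
            if i in tri:
                continue
            if np.linalg.norm(V[i] - q) < r - tol:
                return False
    return True
\end{verbatim}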
Note that triangulated surfaces have normals defined on all triangles and are compact
and rectifiable. For the argument of the circumcenter map
$q$, we do not distinguish between triples of points $(a,b,c)\in \R^{3\times
3}$ and the triangle $[a,b,c]$ (presuming $[a,b,c]$ is a regular triangle).
\begin{notation}
If ${\mathcal T}=\{K_i:i=1,\dots,N\}$ is a triangulated surface, and
$g:{\mathcal T}\to \R$,
then we identify $g$ with the function
$\cup_{i=1}^N K_i\to \R$ that is
constant on the (relative) interior of each triangle $K$, and equal to
$0$ on $K\cap L$ for $K\neq L\in {\mathcal T}$. In particular we may write in this case
$g(x)=g(K)$ for $x\in \mathrm{int}\, K$.
\end{notation}
\begin{defi}
Let ${\mathcal T}$ be a triangulated surface and $K,L \in {\mathcal T}$. We set
\[
\begin{split}
\l{K}{L} &:= \H^1(K\cap L)\\
d_{KL} &:= |q(K) - q(L)|
\end{split}
\]
If $K,L$ are \textbf{adjacent}, i.e. if $\l{K}{L} > 0$, we may define $|n(K) - n(L)|\in \R$ as the norm of the difference of the normals $n(K),n(L)\in S^2$ which share an orientation, i.e. $2\sin \frac{\alpha_{KL}}{2}$, where $\alpha_{KL}$ is the dihedral angle between the triangles, see Figure \ref{fig:dihedral}. The discrete bending energy is then defined as
\[
E({\mathcal T}) = \sum_{K,L\in {\mathcal T}} \frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2.
\]
Here, the sum runs over all unordered pairs of triangles. If $|n(K) - n(L)| = 0$ or $\l{K}{L} = 0$, the energy density is defined to be $0$ even if $d_{KL}=0$. If $|n(K) - n(L)| > 0$, $\l{K}{L} > 0$ and $d_{KL} = 0$, the energy is defined to be infinite.
\end{defi}
\begin{figure}[h]
\begin{subfigure}{.45\textwidth}
\begin{center}
\includegraphics[height=5cm]{dihedral_triangles_v2.pdf}
\end{center}
\caption{ \label{fig:dihedral}}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}{.45\textwidth}
\includegraphics[height=5cm]{d_KL_l_KL.pdf}
\caption{\label{fig:dkllkl}}
\end{subfigure}
\caption{($\mathrm{A}$) The dihedral angle $\alpha_{KL}$ for triangles $K,L$. It is related to the norm of the difference between the normals via $|n(K)-n(L)|=2\sin\frac{\alpha_{KL}}{2}$. ($\mathrm{B}$) Definitions of $d_{KL}$, $l_{KL}$.}
\end{figure}
\begin{notation}
\label{not:thetaKL}
Let $H$ be an affine subplane of $\R^3$.
For triangles $K,L\subset H$ that share an edge and $v\in\R^3$ parallel to
$H$, we define the function
$\mathds{1}^v_{KL}:H \to \{0,1\}$ as $\mathds{1}_{KL}^v(x) = 1$ if and
only if
$[x,x+v]\cap (K\cap L) \neq \emptyset$. If the intersection $K\cap L$ does
not consist of a single edge, then $\mathds{1}_{KL}^v\equiv
0$. Furthermore, we let $\nu_{KL}\in \R^3$ denote the unit vector parallel to $H$
orthogonal to the shared edge of $K,L$ pointing from $K$ to $L$ and
\[
\theta_{KL}^v=\frac{|\nu_{KL}\cdot v|}{|v|}\,.
\]
\end{notation}
See Figure \ref{fig:parallelogram} for an illustration of Notation \ref{not:thetaKL}.
\begin{figure}
\includegraphics[height=5cm]{char_fun_1.pdf}
\caption{Definition of $\theta_{KL}^v$: The parallelogram spanned by $v$ and the shared side $K\cap L$ has area $\theta^v_{KL}l_{KL}|v|$. This parallelogram translated by $-v$ is the support of $\mathds{1}_{KL}^v$. \label{fig:parallelogram}}
\end{figure}
\medskip
We collect the notation that we have introduced for triangles and triangulated
surfaces for the reader's convenience in abbreviated form: Assume that $K=[a,b,c]$ and $L=[b,c,d]$
are two regular triangles in $\R^3$. Then we have the following notation:
\begin{equation*}
\boxed{
\begin{split}
q(K)&: \text{ center of the smallest circumball for $K$}\\
r(K)& :\text{ radius of the smallest circumball for $K$}\\
d_{KL}&=|q(K)-q(L)|\\
l_{KL}&:\text{ length of the shared edge of $K,L$}\\
n(K)&: \text{ unit vector
normal to $K$ }
\end{split}
}
\end{equation*}
The following are defined if $K,L$
are contained in an affine subspace $H$ of $\R^3$, and $v$ is a vector
parallel to $H$:
\begin{equation*}
\boxed{
\begin{split}
\nu_{KL}&:\text{ unit vector parallel to $H$
orthogonal to}\\&\quad\text{ the shared edge of $K,L$ pointing from $K$ to $L$}\\
\theta_{KL}^v&=\frac{|\nu_{KL}\cdot v|}{|v|}\\
\mathds{1}_{KL}^v&: \text{ function defined on $H$, with value one if}\\
&\quad\text{ $[x,x+v]\cap (K\cap L)\neq \emptyset$, zero otherwise}
\end{split}
}
\end{equation*}
\subsection{Triangulated surfaces: Some preliminary observations}
For two adjacent triangles $K,L\in {\mathcal T}$, we have $d_{KL} = 0$ if and only if the vertices of $K$ and $L$ have the same circumsphere. The following lemma states that for noncospherical configurations, $d_{KL}$ grows linearly with the distance between the circumsphere of $K$ and the opposite vertex in $L$.
\begin{lma}\label{lma: circumcenter regularity}
The circumcenter map $q:\R^{3\times 3} \to \R^3$ is $C^1$ and Lipschitz when
restricted to $\zeta$-regular triangles. For two adjacent triangles $K =
[x,y,z]$, $L = [x,y,p]$, we have that
\[d_{KL} \geq \frac12 \big| |q(K)-p| -r(K) \big|\,.
\]
\end{lma}
\begin{proof}
The circumcenter $q = q(K)\in \R^3$ of the triangle $K = [x,y,z]$ is the solution to the linear system
\begin{equation}
\begin{cases}
(q - x)\cdot (y-x) = \frac12 |y-x|^2\\
(q - x)\cdot (z-x) = \frac12 |z-x|^2\\
(q - x)\cdot ((z-x)\times (y-x)) = 0.
\end{cases}
\end{equation}
Thus, the circumcenter map $(x,y,z)\mapsto q$ is $C^1$ when restricted to $\zeta$-regular $K$. To see that the map is globally Lipschitz, it suffices to note that it is $1$-homogeneous in $(x,y,z)$.
For the second point, let $s=q(L)\in \R^3$ be the circumcenter of $L$. Then by the triangle inequality, we have
\begin{equation}
\begin{aligned}
|p-q|\leq |p-s| + |s-q| = |x-s| + |s-q| \leq |x-q| + 2|s-q| = r + 2d_{KL},\\
|p-q| \geq |p-s| - |s-q| = |x-s| - |s-q| \geq |x-q| - 2 |s-q| = r - 2d_{KL}.
\end{aligned}
\end{equation}
This completes the proof.
\end{proof}
\begin{lma}
\label{lem:char_func}
Let $\zeta>0$, and $a,b,c,d\in \R^2$ such that $K=[a,b,c]$ and $L=[b,c,d]$ are $\zeta$-regular.
\begin{itemize}\item[(i)]
We have that
\begin{equation*}
\int_{\R^2} \mathds{1}_{KL}^v(x)\d x = |v|l_{KL}\theta_{KL}^v\,.
\end{equation*}
\item[(ii)] Let $\delta>0$, $v,w\in\R^2$, $\bar v=(v,v\cdot w)\in \R^3$,
$\bar a=(a,a\cdot w)\in \R^3$ and $\bar b, \bar c,\bar d\in \R^3$ defined
analogously. Let
$\bar K=[\bar a,\bar b,\bar c]$, $\bar L=[\bar b,\bar c,\bar d]$.
Then
\[
\int_{\R^2} \mathds{1}_{KL}^v(x)\, \d x = \frac{|\bar v|}{\sqrt{1+|w|^2}}
\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}\,.
\]
\end{itemize}
\end{lma}
\begin{proof}
The equation (i) follows from the fact that $\mathds{1}_{KL}^v$ is the
characteristic function of a parallelogram, see Figure \ref{fig:parallelogram}.
To prove (ii) it suffices to observe that
$\int_{\R^2} \mathds{1}_{KL}^v(x)\sqrt{1+|w|^2}\d x$ is the area of the
parallelogram from (i) pushed forward by the map $\tilde h(x)= (x,x\cdot
w)$, see Figure \ref{fig:char_fun_2}.
\end{proof}
\begin{figure}[h]
\includegraphics[height=5cm]{char_fun_2.pdf}
\caption{The parallelogram pushed forward by an affine map $x\mapsto (x,x\cdot w)$. \label{fig:char_fun_2}}
\end{figure}
\subsection{Graphs over manifolds}
\begin{assump}
\label{ass:Mprop}
We assume $M\subset\R^3$
is
an oriented compact two-dimensional $C^\infty$-submanifold of $\R^3$.
\end{assump}
This manifold will be fixed in the following. We denote the normal of $M$ by $n_M:M\to S^2$, and the second fundamental form at $x_0\in M$ is denoted by $S_M(x_0):T_{x_0}M\to T_{x_0}M$.
\medskip
\begin{defi}
\label{def:radius_injectivity}
The \emph{radius of injectivity} $\delta(M)>0$ of $M$ is the largest number such that the map $\phi:M\times (-\delta(M),\delta(M))\to \R^3$, $(x,h) \mapsto x + h n_M(x)$ is injective and the operator norm of $\delta(M)S_M(x)\in\mathcal{L}(T_xM)$ is at most $1$ at every $x\in M$.
\end{defi}
We define a graph over $M$ as follows:
\begin{defi}
\label{def:Mgraph}
\begin{itemize}
\item[(i)] A set $M_h = \{x+ h(x)n_M(x)\,:\,x\in M\}$ is called a \emph{graph} over $M$ whenever $h:M\to \R$ is a continuous function with $\|h\|_\infty \leq \delta(M)/2$.
\item[(ii)] The graph $M_h$ is called a ($Z$-)Lipschitz graph (for $Z > 0$) whenever $h$ is ($Z$-)Lipschitz, and a smooth graph whenever $h$ is smooth.
\item[(iii)] A set $N\subset B(M,\delta(M)/2)$ is said to be locally a tangent Lipschitz
graph over $M$ if for every $x_0\in M$ there exists $r>0$ and a Lipschitz
function $h:(x_0 +T_{x_0}M)\cap B(x_0,r)\to \R$ such that the intersection of $N$
with the cylinder $C(x_0,r,\frac{\delta(M)}{2})$ over $(x_0 +T_{x_0}M)\cap B(x_0,r)$, which extends a height of $\frac{\delta(M)}{2}$ in both
directions of $n_M(x_0)$, where
\[
C(x_0,r,s) := \left\{x + tn_M(x_0)\,:\,x\in (x_0 + T_{x_0}M) \cap B(x_0,r), t\in [-s,s] \right\},
\]
is equal to the graph of $h$ over $T_{x_0}M\cap B(x_0,r)$,
\[
N \cap C\left(x_0,r,\frac{\delta(M)}{2}\right) =\{x+h(x)n_M(x_0):x\in (x_0+T_{x_0}M)\cap B(x_0,r)\}\,.
\]
\end{itemize}
\end{defi}
\begin{lma}\label{lma: graph property}
Let $N\subset B(M,\delta(M)/2)$ be locally a tangent Lipschitz graph over $M$. Then $N$ is a Lipschitz graph over $M$.
\end{lma}
\begin{proof}
By Definition \ref{def:Mgraph} (iii), we have that for every $x\in M$, there
exists exactly one element
\[
x'\in
N\cap \left( x+n_M(x)[-\delta(M),\delta(M)]\right)\,.
\]
We write $h(x):=(x'-x)\cdot n_M(x)$, which obviously
implies $N=M_h$. For every $x_0\in
M$ there exists a neighborhood of $x_0$ such that $h$ is Lipschitz continuous in
this neighborhood by
the locally tangent Lipschitz property and the regularity of $M$. The global
Lipschitz property for $h$ follows from the local one by a standard covering argument.
\end{proof}
\begin{lma}
\label{lem:graph_rep}
Let $h_j\in W^{1,\infty}(M)$ with $\|h_j\|_{\infty}\leq\delta(M)/2$ and $h_j\weakstar h\in W^{1,\infty}(M)$ for $j\to \infty$. Then
for every point $x\in M$, there exists a neighborhood $V\subset x+T_xM$,
a Euclidean motion $R$ with $U:=R(V)\subset \R^2$, functions $\tilde
h_j:U\to\R$ and $\tilde h:U\to \R$ such that $\tilde h_j\weakstar \tilde h$
in $W^{1,\infty}(U)$ and
\[
\begin{split}
R^{-1}\mathrm{Gr}\, \tilde h_j&\subset M_{h_j} \\
R^{-1}\mathrm{Gr}\, \tilde h&\subset M_{h} \,.
\end{split}
\]
\end{lma}
\begin{proof}
This follows immediately from our assumption that $M$ is $C^2$ and the
boundedness of $\|\nabla h_j\|_{L^\infty}$.
\end{proof}
\section{Proof of compactness and lower bound}
\label{sec:proof-comp-lower}
\begin{notation}
\label{not:push_gen}
If $U\subset\R^2$, ${\mathcal T}$ is a flat triangulated surface ${\mathcal T}\subset U$, $h:U\to\R$ is Lipschitz, and
$K=[a,b,c]\in{\mathcal T}$, then we write
\[
h_*K=[(a,h(a)),(b,h(b)),(c,h(c))]\,.
\]
We denote by $h_*{\mathcal T}$ for the triangulated surface defined by
\[
K\in{\mathcal T}\quad\Leftrightarrow \quad h_*K\in
h_*{\mathcal T}\,.
\]
\end{notation}
For an illustration of Notation \ref{not:push_gen}, see Figure \ref{fig:push_gen}.
\begin{figure}[h]
\includegraphics[height=5cm]{pushforward_general.pdf}
\caption{Definition of the push forward of a triangulation $\mathcal T\subset \R^2$ by a map $h:\R^2 \to \R$. \label{fig:push_gen}}
\end{figure}
\begin{lma}
\label{lem:CS_trick}
Let $U\subset\R^2$, let ${\mathcal T}$ be a flat triangulated surface with $U\subset
{\mathcal T}\subset\R^2$, let $h$ be a Lipschitz function $U\to \R$ that is
affine on each triangle of ${\mathcal T}$, ${\mathcal T}^*=h_*{\mathcal T}$, let $g$ be a function that is constant on each
triangle of ${\mathcal T}$, $v\in \R^2$, $U^v=\{x\in\R^2:[x,x+v]\subset U\}$, and $W\subset U^v$.
\begin{itemize}\item[(i)]
Then
\[
\begin{split}
\int_{W}& |g(x+v)-g(x)|^2\d x\\
&\leq | v| \left(\sum_{K,L\in{\mathcal T}}
\frac{l_{K^*L^*}}{d_{K^*L^*}} |g(K)-g(L)|^2\right) \max_{x\in W}
\sum_{K,L\in{\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta_{KL}^vl_{KL}d_{K^*L^*}}{l_{K^*L^*}}
\,,
\end{split}
\]
where we have written $K^*=h_*K$, $L^*=h_*L$ for $K,L\in {\mathcal T}$.
\item[(ii)]
Let $w\in\R^2$, and denote by
$\bar K$, $\bar L$ the triangles $K,L$ pushed forward by the map
$x\mapsto (x,x\cdot w)$.
Then
\[
\begin{split}
\int_{W}& |g(x+v)-g(x)|^2\d x\\
&\leq \frac{|\bar v|}{\sqrt{1+|w|^2}} \left(\sum_{K,L\in{\mathcal T}}
\frac{l_{K^*L^*}}{d_{K^*L^*}} |g(K)-g(L)|^2\right) \max_{x\in W}
\sum_{K,L\in{\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K\bar
L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}\,.
\end{split}
\]
\end{itemize}
\end{lma}
\begin{proof}
By the Cauchy-Schwarz inequality, for $x\in W$, we have that
\[
\begin{split}
| g(x+v)- g(x)|^2&\leq \left(\sum_{K,L\in {\mathcal T}} \mathds{1}_{K
L}^v(x)| g(K)- g(L)|\right)^2\\
&\leq \left(\sum_{K,L\in {\mathcal T}}
\frac{l_{K^*L^*}}{\theta_{KL}^vl_{KL}d_{K^*L^*}}\mathds{1}_{K
L}^v(x)| g(K)- g(L)|^2\right)\\
&\qquad \times\left(\sum_{K,L\in {\mathcal T}}
\mathds{1}^v_{KL}(x)\frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}\right)\,.
\end{split}
\]
Using these estimates and Lemma \ref{lem:char_func} (i), we obtain
\begin{equation}
\begin{aligned}
&\int_{U^v} | g(x+v) - g(x)|^2\,\d x\\
\leq & \int_{U^v} \left( \sum_{K,L\in {\mathcal T}}
\mathds{1}^v_{KL}(x)\frac{l_{K^*L^*}}{\theta^v_{KL}l_{KL}d_{K^*L^*}} | g(K) - g(L)|^2 \right)\\
&\quad \times
\left( \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}} \right) \,\d x\\
\leq & |v|\left( \sum_{K,L\in {\mathcal T}}
\frac{l_{K^*L^*}}{d_{K^*L^*}} | g(K) - g(L)|^2 \right) \max_{x\in U^v} \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}\,.
\end{aligned}
\end{equation}
This proves (i). The claim (ii) is proved analogously, using $
\frac{\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}$ instead of $
\frac{\theta_{ K L}^{v}l_{ K L}d_{K^*L^*}}{l_{K^*L^*}}$ in the
Cauchy-Schwarz inequality, and then Lemma \ref{lem:char_func} (ii).
\end{proof}
In the following proposition, we will consider sequences of flat triangulated surfaces
${\mathcal T}_j$ with $U\subset{\mathcal T}_j\subset\R^2$ and sequences of Lipschitz functions
$h_j:U\to \R$. We write ${\mathcal T}_j^*=(h_j)_*{\mathcal T}_j$, and for $K\in {\mathcal T}_j$, we write
\[
K^*=(h_j)_*K\,.
\]
\begin{prop}\label{prop:lower_blowup}
Let $U,U'\subset\R^2$ be open, $\zeta>0$, $({\mathcal T}_j)_{j\in \N}$
a sequence of flat $\zeta$-regular triangulated surfaces with $U\subset{\mathcal T}_j\subset
U'$ and $\mathrm{size} ({\mathcal T}_j) \to
0$. Let $(h_j)_{j\in\N}$ be a sequence of Lipschitz functions $U'\to \R$ with
uniformly bounded gradients such that $h_j$ is affine
on each triangle of ${\mathcal T}_j$ and the triangulated surfaces
${\mathcal T}_j^*=(h_j)_*{\mathcal T}_j$ satisfy the Delaunay property.
\begin{itemize}
\item[(i)] Assume that
\[
\begin{split}
h_j&\weakstar h\quad \text{ in } W^{1,\infty}(U')\,,
\end{split}
\]
and $\liminf_{j\to \infty} \sum_{K,L\in{\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |n(K^*) - n(L^*)|^2<\infty$. Then $h\in W^{2,2}(U)$.
\item[(ii)] Let $U=Q=(0,1)^2$, and let $(g_j)_{j\in\N}$ be a sequence of functions $U'\to\R$ such that $g_j$ is constant on
each triangle in ${\mathcal T}_j$. Assume that
\[
\begin{split}
h_j&\to h\quad \text{ in } W^{1,2}(U')\,,\\
g_j&\to g \quad\text{ in }
L^2(U')\,,
\end{split}
\]
where $h(x)=w\cdot x$ and $g(x)=u\cdot x$ for some $u,w\in \R^2$.
Then we have
\[
u^T\left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u \sqrt{1+|w|^2}\leq \liminf_{j\to \infty} \sum_{K,L\in{\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g_j(K) - g_j(L)|^2\,.
\]
\end{itemize}
\end{prop}
\begin{proof}[Proof of (i)]
We write
\[
E_j:= \sum_{K,L\in {\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |n(K^*)
- n(L^*)|^2 \,.
\]
Fix $v\in B(0,1)\subset\R^2$, write $U^v=\{x\in\R^2:[x,x+v]\subset U\}$,
and
fix $k\in \{1,2,3\}$.
Define the function $N_j^k:U\to \R$ by requiring $N_j^k(x)=n(K^*)\cdot e_k$ for $x\in K\in{\mathcal T}_j$.
By Lemma \ref{lem:CS_trick} applied with $g=N_j^k$, we
have that
\begin{equation}
\label{eq:11}
\int_{U^v} |N_{j}^k(x+v) - N_j^k(x)|^2\,\d x
\leq |v|
\left(\max_{x\in U^v} \sum_{K,L\in{\mathcal T}_j}\mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}
\right) E_j\,.
\end{equation}
Since
$h_j$ is uniformly Lipschitz, there exists a constant $C>0$ such that
\[
\frac{l_{KL}}{l_{K^*L^*}}
d_{K^*L^*}<C d_{KL}\,.
\]
We claim that
\begin{equation}\label{eq:15}
\begin{split}
\max_{x\in U^v} \sum_{K,L\in {\mathcal T}_j} \mathds{1}_{KL}^v(x) \theta_{KL}^v d_{KL}
&\lesssim |v|+C\size({\mathcal T}_{j})\,.
\end{split}
\end{equation}
Indeed, let $K_0,\ldots,K_N\in {\mathcal T}_{j}$ be the sequence of triangles so that there is $i:[0,1]\to \{0,\ldots,N\}$ non-decreasing with $x+tv\in K_{i(t)}$.
We have that for all pairs $K_i,K_{i+1}\in {\mathcal T}_{j}$,
\begin{equation}
\label{eq:12}
\theta_{K_iK_{i+1}}^v d_{K_iK_{i+1}} = \left|(q(K_{i+1})-q(K_i)) \cdot \frac{v}{|v|}\right| \,,
\end{equation}
which yields the estimate \eqref{eq:15}.
Inserting in \eqref{eq:11} yields
\begin{equation}
\int_{U^v} |N_{j}^k(x+v) - N_j^k(x)|^2\,\d x
\leq
C|v|(|v|+C\size({\mathcal T}_{j})) E_j\,.
\end{equation}
By passing to the limit $j\to\infty$ and standard difference quotient arguments,
it then follows that the limit $N^k=\lim_{j\to\infty} N_j^k$ is in
$W^{1,2}(U)$. Since $h$ is also in $W^{1,\infty}(U)$ and $(N^k)_{k=1,2,3}=(\nabla
h,-1)/\sqrt{1+|\nabla h|^2}$ is the normal to the graph of $h$, it follows that $h\in W^{2,2}(U)$.
\end{proof}
\bigskip
\begin{proof}[Proof of (ii)]
We write
\[
E_j:= \sum_{K,L\in {\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g_j(K)
- g_j(L)|^2
\]
and may assume without loss of generality that $\liminf_{j\to \infty}
E_j<\infty$.
Fix $\delta > 0$. Define the set of bad triangles as
\[
{\mathcal B}_j^\delta := \{K \in {\mathcal T}_{j}\,:\,\left|\nabla h_j(K)- w\right| > \delta\}.
\]
Fix $v\in B(0,1)$, and write $Q^v=\{x\in \R^2:[x,x+v]\subset Q\}$. Define the set of good points as
\[
A_j^{\delta,v} := \left\{x\in Q^v: \#\{K\in {\mathcal B}_j^\delta\,:
\,K \cap [x,x+v] \neq \emptyset\} \leq
\frac{\delta|v|}{\size({\mathcal T}_{j})}\right\}.
\]
We claim that
\begin{equation}\label{eq:17}
\L^2(Q^v \setminus A_j^{\delta,v}) \to 0\qquad\text{ for } j\to\infty\,.
\end{equation}
Indeed,
let $v^\bot=(-v_2,v_1)$, and let $P_{v^\bot}:\R^2\to v^\bot \R$ denote the projection onto the linear subspace parallel to $v^\bot$. Now by the definition of $A_j^{\delta,v}$, we may estimate
\[
\begin{split}
\int_{Q^v}|\nabla h_j-w|^2\d x \gtrsim & \# \mathcal B_j^{\delta} \left(\size {\mathcal T}_j \right)^2 \delta\\
\gtrsim & \frac{\L^2(Q\setminus A_j^{\delta,v})}{|v|\size{\mathcal T}_j}\frac{\delta|v|}{\size {\mathcal T}_j} \left(\size {\mathcal T}_j \right)^2 \delta\\
\gtrsim &\L^2(Q^v \setminus A_j^{\delta,v})\delta^2|v|\,,
\end{split}
\]
and hence \eqref{eq:17} follows by
$h_j\to h$ in $W^{1,2}(Q)$.
For the push-forward of $v$ under the affine map $x\mapsto (x,h(x))$,
we write
\[
\bar v= (v,v\cdot w)\in\R^3\,.
\]
Also, for $K=[a,b,c]\in {\mathcal T}_j$, we write
\[
\bar K=[(a,a\cdot w),(b,b\cdot w),(c,c\cdot w)]=h_*K\,.
\]
By Lemma \ref{lem:CS_trick}, we have that
\begin{equation}
\label{eq: difference quotient estimate}
\begin{split}
\int_{A_j^{\delta, v}} &| g_{j}(x+v) - g_j(x)|^2\d x \\
& \leq \frac{|\bar v|}{\sqrt{1+|w|^2}} \left(\max_{x\in A_j^{\delta, v}}
\sum_{K,L\in {\mathcal T}_j} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K\bar L}^{\bar
v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}\right) E_j\,.
\end{split}
\end{equation}
We claim that
\begin{equation}
\max_{x\in A_j^{\delta, v}} \sum_{K,L\in {\mathcal T}_j} \mathds{1}^v_{KL}(x)
\frac{\theta_{\bar K \bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}} \leq
(1+C\delta)\left(|\bar v|+C\size({\mathcal T}_j)\right)\,.\label{eq:16}
\end{equation}
Indeed,
let $K_0,\ldots,K_N\in {\mathcal T}_{j}$ be the sequence of triangles so that there is $i:[0,1]\to \{0,\ldots,N\}$ non-decreasing with $x+tv\in K_{i(t)}$.
For all pairs $K_i,K_{i+1}\in {\mathcal T}_{j} $ we have
\begin{equation}
\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}d_{\bar K_i\bar K_{i+1}} = (q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|} \,.
\end{equation}
Also, we have that for $K_i,K_{i+1}\in {\mathcal T}_{j} \setminus
{\mathcal B}_j^\delta$,
\begin{equation*}
\begin{split}
\frac{l_{K_i^*K_{i+1}^*}d_{\bar K_i\bar K_{i+1}}}{l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}&\leq 1+C\delta\,.
\end{split}
\end{equation*}
Hence
\begin{equation}\label{eq: good triangles}
\begin{split}
\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_j^\delta = \emptyset}&
\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}\\
& \leq (1+C\delta)\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_j^\delta = \emptyset}
\left(\left(q(\bar K_{i+1})-q(\bar K_i)\right)\cdot
\frac{\bar v}{|\bar v|}\right)\,.
\end{split}
\end{equation}
If one of the triangles $K_i,K_{i+1}$ is in ${\mathcal B}_j^\delta$, then we may estimate
\[
\left|\left(q(\bar K_{i+1})-q(\bar K_i)\right) \cdot \frac{\bar v}{|\bar v|}\right|\leq C\size{\mathcal T}_j\,.
\]
Since there are few bad triangles along $[x,x+v]$, we have, using $x\in A_j^{\delta,v}$,
\begin{equation}\label{eq: bad triangles}
\begin{split}
\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_j^\delta \neq \emptyset}&
\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}-(q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|}\\
&\leq C\#\{K\in
{\mathcal B}_j^\delta\,:\,K \cap [x,x+v] \neq \emptyset\} \size({\mathcal T}_j)
\\
&\leq C\delta|\bar v|\,.
\end{split}
\end{equation}
Combining \eqref{eq: good triangles} and \eqref{eq: bad triangles} yields
\begin{equation*}
\begin{split}
\sum_{i = 0}^{N-1}\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}&
\leq (1+C\delta)\sum_{i = 0}^{N-1}(q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|}+C\delta|\bar v|\\
&= (1+C\delta)(q(\bar K_N) -
q(\bar K_0)) \cdot \frac{\bar v}{|\bar v|} + C \delta |\bar v| \\
&\leq (1+C\delta)\left(|\bar v|
+ C\size({\mathcal T}_{j})\right).
\end{split}
\end{equation*}
This proves \eqref{eq:16}.
\medskip
Inserting \eqref{eq:16} in \eqref{eq: difference quotient estimate}, and
passing to the limits $j\to\infty$ and $\delta\to 0$, we obtain
\[|v\cdot u |^2
\leq \frac{|\bar v|^2}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\,.
\]
Now let
\[
\underline{u}:=\left(\mathds{1}_{2\times 2},w\right)^T \left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u\,.
\]
Writing $A=(\mathds{1}_{2\times 2}+w\otimes w)^{-1}$, we have $\underline{u}\cdot \bar v=(Au)\cdot v+(w\cdot Au)(w\cdot v)=\left((\mathds{1}_{2\times 2}+w\otimes w)Au\right)\cdot v=u\cdot v$, hence $|\underline{u}\cdot \bar v|=|u\cdot v|$ and therefore
\[
\begin{split}
|\underline{u}|^2&=\sup_{v\in \R^2\setminus \{0\}}\frac{|\underline{u}\cdot \bar v|^2}{|\bar v|^2}\\
&\leq \frac{1}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\,.
\end{split}
\]
This proves the proposition.
\end{proof}
\subsection{Proof of compactness and lower bound in Theorem \ref{thm:main}}
\begin{proof}[Proof of Theorem \ref{thm:main} (o)]
For a subsequence (no relabeling), we have that $h_j\weakstar h$ in
$W^{1,\infty}(M)$. By Lemma \ref{lem:graph_rep}, ${\mathcal T}_j$ may be locally
represented as the graph of a Lipschitz function $\tilde h_j:U\to \R$, and $M_h$
as the graph of a Lipschitz function $\tilde h:U\to \R$, where $U\subset\R^2$
and $\tilde h_j\weakstar \tilde h$ in $W^{1,\infty}(U)$.
\medskip
It remains to prove that $\tilde h\in W^{2,2}(U)$. Since the gradients are uniformly
bounded,
$\|\nabla \tilde h_j\|_{L^\infty(U)}<C$, we have that the projections of ${\mathcal T}_j$ to
$U$ are (uniformly) regular flat triangulated surfaces. Hence by Proposition
\ref{prop:lower_blowup} (i), we have that $\tilde h\in W^{2,2}(U)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main} (i)]
Let $\mu_j = \sum_{K,L\in {\mathcal T}_j} \frac{1}{d_{KL}} |n(K) - n(L)|^2
\H^1|_{K\cap L}\in {\mathcal M}_+(\R^3)$. Note that either a subsequence of $\mu_j$ converges
narrowly to some $\mu \in {\mathcal M}_+(M_h)$ or there is nothing to show. We will show in
the first case that
\begin{equation}
\frac{d\mu}{d\H^2}(z) \geq |Dn_{M_h}|^2(z)\label{eq:7}
\end{equation}
at $\H^2$-almost every point $z\in M_h$ which implies in particular the lower bound.
By Lemma \ref{lem:graph_rep}, we may reduce the proof to the situation that $M_{h_j}$, $M_h$ are given as
graphs of Lipschitz functions $\tilde h_j:U\to \R$, $\tilde h:U\to \R$
respectively, where $U\subset \R^2$ is some open bounded set.
We have that $\tilde h_j$ is piecewise
affine on some (uniformly in $j$) regular triangulated surface $\tilde {\mathcal T}_j$ that satisfies
\[
(\tilde h_j)_*\tilde {\mathcal T}_j={\mathcal T}_j\,.
\]
Writing down the surface normal to $M_h$ in the coordinates of $U$,
\[N(x)=\frac{(-\nabla \tilde h, 1)}{\sqrt{1+|\nabla \tilde h|^2}}\,,
\]
we have that almost every $x\in U$ is a Lebesgue point of $\nabla N$.
We write $N^k=N\cdot e_k$ and note that \eqref{eq:7} is equivalent to
\begin{equation}
\label{eq:8}
\frac{\d\mu}{\d\H^2}(z)\geq \sum_{k=1}^3\nabla N^k(x)\cdot \left(\mathds{1}_{2\times
2}+\nabla \tilde h(x)\otimes\nabla \tilde h(x)\right)^{-1}\nabla
N^k(x)\,,
\end{equation}
where $z=(x,\tilde h(x))$.
Also, we define $N_j^k:U\to\R^3$ by letting $N_j^k(x)=n((\tilde h_j)_*K)\cdot
e_k$ for $x\in K\in
\tilde {\mathcal T_j}$. (We recall that $n((\tilde h_j)_*K)$ denotes the
normal of the triangle $(\tilde h_j)_*K$.)
\medskip
Let now $x_0\in U$ be a Lebesgue point of $\nabla \tilde h$ and $\nabla N$.
We write $z_0=(x_0,\tilde h(x_0))$.
Combining the narrow convergence $\mu_j\to\mu$ with the Radon-Nikodym differentiation Theorem, we may choose a sequence $r_j\downarrow 0$ such that
\[
\begin{split}
r_j^{-1}\size{{\mathcal T}_j}&\to 0\\
\liminf_{j\to\infty}\frac{\mu_j(Q^{(3)}(x_0,r_j))}{r_j^2}&= \frac{\d\mu}{\d\H^2}(z_0)\sqrt{1+|\nabla \tilde h(x_0)|^2}\,,
\end{split}
\]
where $Q^{(3)}(x_0,r_j)=x_0+[-r_j/2,r_j/2]^2\times \R$ is the cylinder over $Q(x_0,r_j)$.
Furthermore, let $\bar N_j^k,\bar h_j,\bar N^k,\bar h: Q\to \R$ be defined by
\[
\begin{split}
\bar N_j^k(x)&=\frac{N_j^k(x_0+r_j x)-N_j^k(x_0)}{r_j}\\
\bar N^k(x)&=\nabla N^k(x_0)\cdot x\\
\bar h_j(x)&=\frac{\tilde h_j(x_0+r_j x)-\tilde h_j(x_0)}{r_j}\\
\bar h(x)&=\nabla \tilde h(x_0)\cdot x\,.
\end{split}
\]
We recall that by assumption we have that $N^k\in W^{1,2}(U)$. This
implies in particular that (unless $x_0$ is contained in a certain set of
measure zero, which we discard), we have that
\begin{equation}\label{eq:9}
\bar N_j^k\to \bar N^k\quad\text{ in } L^2(Q)\,.
\end{equation}
Also, let $T_j$ be the blowup map
\[
T_j(x)=\frac{x-x_0}{r_j}
\]
and let ${\mathcal T}_j'$ be the triangulated surface one obtains by blowing up $\tilde{\mathcal T}_j$,
defined by
\[
\tilde K\in \tilde {\mathcal T}_j\quad \Leftrightarrow \quad T_j\tilde K \in {\mathcal T}_j'\,.
\]
Now let $\mathcal S_j$ be the smallest subset of ${\mathcal T}_j'$ (as sets of
triangles) such that
$Q\subset\mathcal S_j$ (as subsets of $\R^2$).
Note that $\size\mathcal S_j\to 0$, $\bar N_j^k$ is constant and $\bar h_j$ is
affine on each $K\in \mathcal S_j$. Furthermore, for $x\in K\in \tilde {\mathcal T}_j$, we have that
\[
\nabla \tilde h_j(x)=\nabla \bar h_j(T_jx)\,.
\]
This implies in particular
\begin{equation}
\bar h_j\to \bar h\quad \text{ in } W^{1,2}(Q)\,.\label{eq:6}
\end{equation}
Concerning the discrete energy functionals, we have for the rescaled
triangulated surfaces $({\mathcal T}_j')^*=(\bar h_j)_* {\mathcal T}_j'$, with $K^*=(\bar h_j)_*K$
for $K\in {\mathcal T}_j'$,
\begin{equation}\label{eq:10}
\liminf_{j\to\infty} \sum_{K,L\in
{\mathcal T}_j'}\frac{l_{K^*L^*}}{d_{K^*L^*}} |\bar N_j(K)-\bar N_j(L)|^2\leq \liminf_{j\to\infty}r_j^{-2}\mu_j(Q^{(3)}(x_0,r_j)) \,.
\end{equation}
Thanks to \eqref{eq:9}, \eqref{eq:6}, we may apply Proposition
\ref{prop:lower_blowup} (ii) to the sequences of functions $(\bar h_j)_{j\in\N}$,
$(\bar N_j^k)_{j\in\N}$. This yields (after summing over $k\in\{1,2,3\}$)
\[
\begin{aligned}
|Dn_{M_h}|^2(z_0)&\sqrt{1+|\nabla \tilde
h(x_0)|^2}\\
& = \nabla N(x_0)\cdot \left(\mathds{1}_{2\times 2} +\nabla \tilde h(x_0)\otimes
\nabla \tilde h(x_0)\right)^{-1}\nabla N(x_0)\sqrt{1+|\nabla \tilde
h(x_0)|^2} \\
& \leq \liminf_{j\to\infty} \sum_{K, L\in
{\mathcal T}_j'}\frac{l_{K^*L^*}}{d_{K^*L^*}} |\bar N_j(K)-\bar N_j(L)|^2\,,
\end{aligned}
\]
which in combination with \eqref{eq:10} yields \eqref{eq:8} for $x=x_0$, $z=z_0$
and completes the proof
of the lower bound.
\end{proof}
\section{Surface triangulations and upper bound}
\label{sec:surf-triang-upper}
Our plan for the construction of a recovery sequence is as follows: We shall construct optimal sequences of triangulated surfaces first locally around a point $x\in M_h$. It turns out that the optimal triangulation must be aligned with the principal curvature directions at $x$. By a suitable covering of $M_h$, this allows for an approximation of the latter in these charts (Proposition \ref{prop: local triangulation}). We will then formulate sufficient conditions for a vertex set to supply a global approximation (Proposition \ref{prop: Delaunay existence}). The main work that remains to be done at that point to obtain a proof of Theorem \ref{thm:main} (ii) is to add vertices to the local approximations obtained from Proposition \ref{prop: local triangulation} such that the conditions of Proposition \ref{prop: Delaunay existence} are fulfilled.
\subsection{Local optimal triangulations}
\begin{prop}\label{prop: local triangulation}
There are constants $\delta_0, C>0$ such that for all $U \subset \R^2$ open, convex, and bounded; and $h\in C^3(U)$ with $\|\nabla h\|_\infty \eqqcolon \delta \leq \delta_0$, the following holds:
Let $\varepsilon > 0$, $ C\delta^2 < |\theta| \leq \frac12$, and define $X \coloneqq \{(\varepsilon k + \theta \varepsilon l , \varepsilon l, h(\varepsilon k + \theta \varepsilon l, \varepsilon l))\in U\times \R\,:\,k,l\in \Z\}$. Then any Delaunay triangulated surface ${\mathcal T}$ with vertex set $X$ and maximum circumradius $\max_{K\in {\mathcal T}} r(K) \leq \varepsilon$ has
\begin{equation}\label{eq: local error}
\begin{aligned}
\sum_{K,L\in {\mathcal T}}& \frac{\l{K}{L}}{d_{KL}}|n(K) - n(L)|^2\\
\leq &\left(1+ C(|\theta|+\delta+\varepsilon)\right) \L^2(U) \times\\
&\times\left(\max_{x\in U} |\partial_{11} h(x)|^2 + \max_{x\in U} |\partial_{22} h(x)|^2 + \frac{1}{|\theta|} \max_{x\in U} |\partial_{12} h(x)|^2 \right)+C\varepsilon\,.
\end{aligned}
\end{equation}
\end{prop}
\begin{proof}
We assume without loss of generality that $\theta > 0$.
We consider the projection of $X$ to the plane,
\[
\bar X:=\{(\varepsilon k + \theta \varepsilon l , \varepsilon l)\in U:k,l\in\Z\}\,.
\]
Let $\bar{\mathcal T}$ be the flat triangulated surface that consists of the triangles of the form
\[
\begin{split}
\varepsilon[ ke_1+l(\theta e_1+e_2),(k+1)e_1+l(\theta e_1+e_2),ke_1+(l+1)(\theta e_1+e_2)]\\
\text{ or } \quad \varepsilon[ ke_1+l(\theta e_1+e_2),(k+1)e_1+l(\theta e_1+e_2),(k+1)e_1+(l-1)(\theta e_1+e_2)]\,,
\end{split}
\]
with $k,l\in \Z$ such that the triangles are contained in $U$, see Figure \ref{fig:upper2d_barT}.
\begin{figure}[h]
\centering
\includegraphics[height=5cm]{upper2d_barT.pdf}
\caption{The flat triangulated surface $\bar {\mathcal T}$. \label{fig:upper2d_barT}}
\end{figure}
Obviously the flat triangulated surface $\bar{\mathcal T}$
has vertex set $\bar X$. Also, we have that
\begin{equation}\label{eq:19}
|x-y|\leq |(x,h(x))-(y,h(y))|\leq (1+C\delta)|x-y|
\end{equation}
for all $x,y \in \bar X$. We claim that for $\delta$ chosen small enough, we have the implication
\begin{equation}\label{eq:18}
h_*K=[(x,h(x)),(y,h(y)),(z,h(z))]\in {\mathcal T}\quad \Rightarrow \quad K= [x,y,z]\in \bar{\mathcal T} \,.
\end{equation}
Indeed, if $K\not\in \bar {\mathcal T}$, then either $r(K)>\frac32\varepsilon$ or there exists $w\in \bar X$ with $|w-q(K)|<(1 -C\theta)r(K)$. In the first case, $r(h_*K)>(1-C\delta)\frac32\varepsilon$ by \eqref{eq:19} and hence $h_*K\not\in {\mathcal T}$ for $\delta$ small enough. In the second case, we have by \eqref{eq:19} and Lemma \ref{lma: circumcenter regularity} that
\[
|(w,h(w))-q(h_*K)|<(1+C\delta)(1 -C\theta)r(h_*K)\,,
\]
and hence $h_*K$ does not satisfy the Delaunay property for $\delta$ small enough. This proves \eqref{eq:18}.
Let $[x,y]$ be an edge with either $x,y \in X$ or $x,y \in \bar X$. We call this edge \emph{horizontal} if $(y-x) \cdot e_2 = 0$, \emph{vertical} if $(y-x) \cdot (e_1 - \theta e_2)= 0$, and \emph{diagonal} if $(y-x) \cdot (e_1 + (1-\theta) e_2) = 0$.
By its definition, $\bar {\mathcal T}$ consists only of triangles with exactly one horizontal, vertical, and diagonal edge each. By what we have just proved,
the same is true for ${\mathcal T}$.
\medskip
To calculate the differences between normals of adjacent triangles, let us consider one fixed triangle $K\in {\mathcal T}$ and its neighbors $K_1,K_2,K_3$, with which $K$ shares a horizontal, diagonal and vertical edge respectively, see Figure \ref{fig:upper2d}.
\begin{figure}[h]
\includegraphics[height=5cm]{upper2d.pdf}
\caption{Top view of a triangle $K\in{\mathcal T}$ with its horizontal, diagonal and vertical neighbors $K_1,K_2,K_3$. \label{fig:upper2d}}
\end{figure}
We assume without loss of generality that one of the vertices of $K$ is the origin. We write
$x_0=(0,0)$, $x_1=\varepsilon(1-\theta,-1)$, $x_2=\varepsilon(1,0)$, $x_3=\varepsilon(1+\theta,1)$, $x_4=\varepsilon(\theta,1)$, $x_5=\varepsilon(\theta-1,1)$, and $y_i=(x_i, h(x_i))$ for $i=0,\dots,5$. With this notation we have $K=[y_0,y_2,y_4]$, $K_1=[y_0,y_1,y_2]$, $K_2=[y_2,y_3,y_4]$ and $K_3=[y_4,y_5,y_0]$. See Figure \ref{fig:upper2d_barT}. As approximations of the normals, we define
\[
\begin{split}
v(K)&=\varepsilon^{-2}y_2\wedge y_4\\
v(K_1)&=\varepsilon^{-2} y_1\wedge y_2\\
v(K_2)&= \varepsilon^{-2}(y_3-y_2)\wedge(y_4-y_2)\\
v(K_3)&= \varepsilon^{-2} y_4\wedge y_5\,.
\end{split}
\]
Note that $v(L)$ is parallel to $n(L)$ and $|v(L)|\geq 1$ for $L\in \{K,K_1,K_2,K_3\}$.
Hence for $i=1,2,3$, we have that
\[
|n(K)-n(K_i)|^2\leq |v(K)-v(K_i)|^2\,.
\]
For each $x_i$, we write
\[
h(x_i)= x_i \cdot \nabla h(0) + \frac12 x_i \nabla^2 h(0) x_i^T+O(\varepsilon^3)\,,
\]
where $O(\varepsilon^3)$ denotes terms $f(\varepsilon)$ that satisfy
$\limsup_{\varepsilon\to 0}\varepsilon^{-3}|f(\varepsilon)|<\infty$.
By an explicit computation we obtain that
\[
\begin{split}
\left|v(K)-v(K_1)\right|^2&= \varepsilon^2\left|(\theta-1)\theta \partial_{11} h+2(\theta-1)\partial_{12}h+\partial_{22}h\right|^2+O(\varepsilon^3)\\
\left|v(K)-v(K_2)\right|^2&= \varepsilon^2\left(\left|\theta\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\theta\partial_{11} h+(\theta-1)\partial_{12}h\right|^2\right)+O(\varepsilon^3)\\
\left|v(K)-v(K_3)\right|^2&=\varepsilon^2\left( \theta^2\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2\right)+O(\varepsilon^3)\,,
\end{split}
\]
where all derivatives of $h$ are taken at $0$.
Using the Cauchy-Schwarz inequality and $|1-\theta|\leq 1$, we may estimate the term on the right hand side in the first line above,
\[
\left|(\theta-1)\theta\partial_{11} h+2(\theta-1)\partial_{12}h+\partial_{22}h\right|^2
\leq (1+\theta) |\partial_{22}h|^2+ \left(1+\frac{C}{\theta}\right)\left(\theta^2 |\partial_{11} h|^2+|\partial_{12}h|^2\right)\,.
\]
In a similar way, we have
\[
\begin{split}
\left|\theta\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\theta\partial_{11} h+(\theta-1)\partial_{12}h\right|^2&\leq C(|\partial_{12}h|^2
+\theta^2|\partial_{11} h|^2)\\
\theta^2\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2&\leq (1+\theta)|\partial_{11} h|^2+\frac{C}{\theta}|\partial_{12}h|^2\,,
\end{split}
\]
so that
\[
\begin{split}
\left|n(K)-n(K_1)\right|^2&\leq \varepsilon^2(1+\theta) |\partial_{22}h|^2+ C\varepsilon^2 \left(\theta |\partial_{11} h|^2+ \frac1\theta |\partial_{12}h|^2\right)+O(\varepsilon^3)\\
\left|n(K)-n(K_2)\right|^2&\leq C\varepsilon^2 (|\partial_{12}h|^2
+\theta^2|\partial_{11} h|^2)+O(\varepsilon^3)\\
\left|n(K)-n(K_3)\right|^2&\leq \varepsilon^2(1+\theta)|\partial_{11} h|^2+\frac{C}{\theta}\varepsilon^2|\partial_{12}h|^2+O(\varepsilon^3)\,.
\end{split}
\]
Also, we have by Lemma \ref{lma: circumcenter regularity} that
\[
\begin{split}
\frac{l_{KK_1}}{d_{KK_1}}&\leq 1+C(\delta+\varepsilon+\theta)\\
\frac{l_{KK_2}}{d_{KK_2}}&\leq \left(1+C(\delta+\varepsilon+\theta)\right) \frac{C}{\theta}\\
\frac{l_{KK_3}}{d_{KK_3}}&\leq 1+C(\delta+\varepsilon+\theta)\,.
\end{split}
\]
Combining all of the above, and summing up over all triangles in ${\mathcal T}$, we obtain the statement of the proposition.
\end{proof}
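For a numerical illustration of this construction (not needed for the proof), one can generate the sheared lattice and the lift of $\bar{\mathcal T}$ explicitly and evaluate its discrete energy with the routine sketched after Proposition \ref{prop:lower_blowup}; the function names and the choice of $h$ below are placeholders.
\begin{verbatim}
import numpy as np

def sheared_graph_triangulation(h, eps, theta, kmax, lmax):
    """Vertices (eps*(k + theta*l), eps*l, h(.)) together with the two
    triangle types of the flat triangulation, lifted to the graph of h."""
    index, verts = {}, []
    for k in range(kmax + 1):
        for l in range(lmax + 1):
            x, y = eps * (k + theta * l), eps * l
            index[(k, l)] = len(verts)
            verts.append((x, y, h(x, y)))
    tris = []
    for k in range(kmax):
        for l in range(lmax):
            tris.append((index[(k, l)], index[(k + 1, l)], index[(k, l + 1)]))
            tris.append((index[(k + 1, l)], index[(k + 1, l + 1)], index[(k, l + 1)]))
    return np.array(verts), np.array(tris)

# Example (placeholder data): a small quadratic bump.
# verts, tris = sheared_graph_triangulation(lambda x, y: 0.05 * x * y,
#                                           eps=0.05, theta=0.25, kmax=40, lmax=40)
# discrete_energy(verts, tris)   # compare with the right-hand side of the local estimate
\end{verbatim}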
\begin{comment}
\begin{proof}[Old proof]
Consider the points $a,b,c,d,e,f \in U$ defined, after translation, as $a \coloneqq (0,0)$, $b \coloneqq (\varepsilon,0)$, $c \coloneqq (\theta \varepsilon, \varepsilon)$, $d \coloneqq ((1+\theta)\varepsilon, \varepsilon)$, $e \coloneqq ((1-\theta)\varepsilon, -\varepsilon)$, and $f \coloneqq ((\theta-1)\varepsilon,\varepsilon)$. We consider the lifted points $A = (a,h(a))\in X$, and likewise $B,C,D,E,F\in X$.
First, we note that ${\mathcal T}$ only contains translated versions of the triangles $K \coloneqq [A,B,C]$, $L \coloneqq [B,C,D]$, $P \coloneqq [A,B,D]$, and $S \coloneqq [A,C,D]$, as all other circumballs $\overline{B(q,r)}$ with $r\leq \varepsilon$ contain some fourth point in $X$.
We note that ${\mathcal T}$ contains the triangles $K \coloneqq [A,B,C]$, $L \coloneqq [B,C,D]$, $N \coloneqq [A,B,E]$, and $O \coloneqq [A,C,F]$. We then define
\begin{equation}
\begin{aligned}
v(K) \coloneqq \frac1{\varepsilon^2}(B-A)\times(C-A), v(L) \coloneqq \frac1{\varepsilon^2}(D-C)\times (D-B),\\
v(N) \coloneqq \frac1{\varepsilon^2}(B-A) \times (B-E), v(O) \coloneqq \frac1{\varepsilon^2}(C-F)\times(C-A).
\end{aligned}
\end{equation}
Then by the convex projection theorem
\begin{equation}
\begin{aligned}
|n(K) - n(L)| \leq &|v(K) - v(L)| \leq \frac{1}{\varepsilon} (2+\theta) |h(d)-h(c)-h(b)+h(a)|,\\
|n(K) - n(N)| \leq &|v(K) - v(N)| = \frac{1}{\varepsilon} |h(c)-h(a)-h(b)+h(e)|,\\
|n(K) - n(O)| \leq &|v(K) - v(O)| \leq \frac{1}{\varepsilon} (1+\theta) |h(b)-h(a)-h(c)+h(f)|.
\end{aligned}
\end{equation}
By the fundamental theorem of calculus, we may rewrite these second differences of the parallelogram by e.g.
\begin{equation}
h(d)-h(c)-h(b)+h(a) = \frac{1}{|(b-a)\wedge (c-a)|}\int_{[a,b,c,d]} (b-a) \cdot D^2 h(x) (c-a)\,dx
\end{equation}
and thus estimate
\begin{equation}
\begin{aligned}
|h(d)-h(c)-h(b)+h(a)| \leq &(1+C\theta) \int_{[a,b,c,d]} |\partial_{12} h(x)| + \theta |\partial_{11} h(x)|\,dx\\
|h(c)-h(a)-h(b)+h(e)| \leq &(1+C\theta) \int_{[a,b,c,e]}|\partial_{22} h(x)| + |\partial_{12} h(x)| + \theta |\partial_{11} h(x)|\,dx\\
|h(b)-h(a)-h(c)+h(f)| \leq &(1+C\theta)\int_{[a,b,c,f]} |\partial_{12} h(x)| + |\partial_{11} h(x)|\,dx.
\end{aligned}
\end{equation}
We note that all interactions appearing in the left-hand side of \eqref{eq: local error} are translations of the diagonal $(K,L)$, vertical $(K,N)$, and horizontal $(K,O)$ cases above. We estimate the prefactors using Lemma \ref{lma: circumcenter regularity}:
\begin{equation}
\frac{\l{K}{L}}{d_{KL}} \leq \frac{C}{\theta},\,\frac{\l{K}{N}}{d_{KN}} \leq 1 + C\theta + C\delta,\,\frac{\l{K}{O}}{d_{KO}} \leq 1 + C\theta + C\delta.
\end{equation}
This allows us to bound all diagonal interactions, using H\"older's inequality, by
\begin{equation}
\frac12 \sum_{\tilde K, \tilde L\text{ diagonal}} \frac{\l{\tilde K}{\tilde L}}{d_{\tilde K \tilde L}} |n(\tilde K) - n(\tilde L)|^2 \leq \frac{C}{\theta}\left(C\delta^2 + C\theta^2\int_U |\partial_{11} h(x)|^2\,dx\right).
\end{equation}
On the other hand, we may bound the vertical interactions by
\begin{equation}
\begin{aligned}
&\frac12 \sum_{\tilde K, \tilde N\text{ vertical}} \frac{\l{\tilde K}{\tilde N}}{d_{\tilde K \tilde N}} |n(\tilde K) - n(\tilde N)|^2\\
\leq &(1+C\theta+C\delta) \left((1+\theta)\int_U |\partial_{22} h(x)|^2\,dx + \frac{C}{\theta} \int_U|\partial_{12} h(x)|^2 + \theta^2 |\partial_{11}h(x)|^2\,dx \right),
\end{aligned}
\end{equation}
and similarly all horizontal interactions by
\begin{equation}
\begin{aligned}
&\frac12 \sum_{\tilde K, \tilde O\text{ horizontal}} \frac{\l{\tilde K}{\tilde O}}{d_{\tilde K \tilde O}} |n(\tilde K) - n(\tilde O)|^2\\
\leq &(1+C\theta+C\delta) \left((1+\theta)\int_U |\partial_{11} h(x)|^2\,dx + \frac{C}{\theta} \int_U|\partial_{12} h(x)|^2\,dx \right),
\end{aligned}
\end{equation}
Combining these three estimates yields the result.
\end{proof}
\end{comment}
\subsection{Global triangulations}
We are going to use a known fact about triangulations of point sets in $\R^2$, and transfer it to $\R^3$. We first cite a result for planar Delaunay triangulations, Theorem \ref{thm: planar Delaunay} below, which can be found in e.g. \cite[Chapter 9.2]{berg2008computational}. This theorem states the existence of a Delaunay triangulated surface associated to a \emph{protected} set of points.
\begin{defi}
Let $N\subset\R^3$ be compact, $X\subset N$ a finite set of points and
\[
D(X,N)=\max_{x\in N}\min_{y\in X}|x-y|\,.
\]
We say that $X$ is $\bar \delta$-protected if whenever $x,y,z \in X$ form a regular triangle $[x,y,z]$ with circumball $\overline{B(q,r)}$ satisfying $r \leq D(X,N)$, then $\left| |p-q| - r \right| \geq \bar\delta$ for any $p\in X \setminus \{x,y,z\}$.
\end{defi}
\begin{thm}[\cite{berg2008computational}]\label{thm: planar Delaunay}
Let $\alpha > 0$. Let $X\subset \R^2$ be finite and not collinear. Define $\Omega := \conv(X)$. Assume that
\[\min_{x\neq y \in X} |x-y| \geq \alpha D(X,\Omega)\,,
\]
and that $X$ is $\delta D(X,\Omega)$-protected for some $\delta>0$. Then there exists a unique maximal Delaunay triangulated surface ${\mathcal T}$ with vertex set $X$, given by all regular triangles $[x,y,z]$, $x,y,z\in X$, with circumdisc $\overline{B(q,r)}$ such that $B(q,r) \cap X = \emptyset$.
The triangulated surface ${\mathcal T}$ forms a partition of $\Omega$, in the sense that
\[
\sum_{K\in {\mathcal T}} \mathds{1}_K = \mathds{1}_\Omega\quad {\mathscr H}^2\text{-almost everywhere}\,,
\]
where $\mathds{1}_A$ denotes the characteristic function of $A\subset \R^2$.
Further, any triangle $K\in {\mathcal T}$ with $\dist(K,\partial \Omega) \geq 4D(X,\Omega)$ is $c(\alpha)$-regular, and $d_{KL} \geq \frac{\delta}{2} D(X,\Omega)$ for all pairs of triangles $K \neq L \in {\mathcal T}$.
\end{thm}
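As an illustration, the triangulated surface of Theorem \ref{thm: planar Delaunay} can be computed with standard software, and the protection of a planar point set can be estimated numerically. In the following Python sketch we only test the circumdiscs of the Delaunay triangles themselves, which is a simplification of the definition above, and the protection is measured in absolute units rather than relative to $D(X,\Omega)$.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def circumcircle(a, b, c):
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    q = np.array([ux, uy])
    return q, np.linalg.norm(q - a)

def protection_margin(points):
    """Smallest distance | |p - q| - r | between a point p of X and the
    circumcircle of a Delaunay triangle not containing p."""
    tri = Delaunay(points)
    margin = np.inf
    for simplex in tri.simplices:
        q, r = circumcircle(*points[simplex])
        others = np.delete(np.arange(len(points)), simplex)
        dists = np.abs(np.linalg.norm(points[others] - q, axis=1) - r)
        margin = min(margin, dists.min())
    return margin
\end{verbatim}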
We are now in position to formulate sufficient conditions for a vertex set to yield a triangulated surface that serves our purpose.
\begin{prop}\label{prop: Delaunay existence}
Let $N\subset\R^3$ be a 2-dimensional compact smooth manifold, and let $\alpha, \delta > 0$. Then there is $\varepsilon = \varepsilon(N,\alpha,\delta)>0$ such that whenever $X\subset N$ satisfies
\begin{itemize}
\item [(a)]$D(X,N) \leq \varepsilon$,
\item [(b)] $\min_{x\neq y\in X} |x-y| \geq \alpha D(X,N)$,
\item [(c)] $X$ is $\delta D(X,N)$-protected
\end{itemize}
then there exists a triangulated surface ${\mathcal T}(X,N)$ with the following properties:
\begin{itemize}
\item [(i)] $\size({\mathcal T}(X,N)) \leq 2D(X,N)$.
\item [(ii)] ${\mathcal T}(X,N)$ is $c(\alpha)$-regular.
\item [(iii)] ${\mathcal T}(X,N)$ is Delaunay.
\item [(iv)] Whenever $K\neq L \in {\mathcal T}(X,N)$, we have $d_{KL} \geq \frac{\delta}{2} D(X,N)$.
\item [(v)] The vertex set of ${\mathcal T}(X,N)$ is $X$.
\item [(vi)] ${\mathcal T}(X,N)$ is a $C(\alpha, N)D(X,N)$-Lipschitz graph over $N$. In particular, ${\mathcal T}(X,N)$ is homeomorphic to $N$.
\end{itemize}
\end{prop}
The surface case we treat here can be viewed as a perturbation of Theorem \ref{thm: planar Delaunay}. We note that the protection property (c) is vital to the argument. A very similar result to Proposition \ref{prop: Delaunay existence} was proved in \cite{boissonnat2013constructing}, but we present a self-contained proof here.
\begin{proof}[Proof of Proposition \ref{prop: Delaunay existence}]
We construct the triangulated surface ${\mathcal T}(X,N)$ as follows: Consider all regular triangles $K=[x,y,z]$ with $x,y,z\in X$ such that the Euclidean Voronoi cells $V_x,V_y,V_z$ intersect in $N$, i.e. there is $\tilde q \in N$ such that $|\tilde q - x| = |\tilde q - y| = |\tilde q - z| \leq |\tilde q - p|$ for any $p\in X\setminus \{x,y,z\}$. The triangulated surface ${\mathcal T}(X,N)$ consists of all such triangles.
\emph{Proof of (i):} Let $[x,y,z]\in {\mathcal T}(X,N)$. Let $\tilde q \in V_x \cap V_y \cap V_z \cap N$, set $\tilde r := |\tilde q - x|$. Then $\tilde r = \min_{p\in X} |\tilde q - p| \leq D(X,N)$, and because $[x,y,z]\subset \overline{B(\tilde q, \tilde r)}$ we have $\diam([x,y,z])\leq 2 \tilde r \leq 2D(X,N)$.
\emph{Proof of (ii):} Let $\overline{B(q,r)}$ denote the Euclidean circumball of $[x,y,z]$. Then $r\leq \tilde r$ by the definition of the circumball. Thus $\min(|x-y|,|x-z|,|y-z|) \geq \alpha r$, and $[x,y,z]$ is $c(\alpha)$-regular by the following argument: Rescaling such that $r = 1$, consider the class of all triangles $[x,y,z]$ with $x,y,z \in S^1$, $\min(|x-y|,|x-z|,|y-z|) \geq \alpha$. All these triangles are $\zeta$-regular for some $\zeta>0$, and by compactness there is a least regular triangle in this class. That triangle's regularity is $c(\alpha)$.
\emph{Proof of (iii):} Because of (ii), $N\cap \overline{B(q,r)}$ is a $C(\alpha, N)\varepsilon$-Lipschitz graph over a convex subset $U$ of the plane $ x + \R(y-x) + \R(z-x)$, say $N\cap \overline{B(q,r)} = U_h$. It follows that $\tilde q - q = h(\tilde q) n_U$. Because $h(x)= 0$, it follows that $|\tilde q - q| = |h(\tilde q)| \leq C(\alpha, N) D(X,N)^2$.
Thus, for $D(X,N) \leq \delta(2C(\alpha,N))^{-1}$, we have that $|\tilde q - q| \leq \frac{\delta}{2}D(X,N)$. This together with (c) suffices to show the Delaunay property of ${\mathcal T}(X,N)$: Assume there exists $p\in (X \setminus \{x,y,z\}) \cap B(q,r)$. Then by (c) we have $|p-q| \leq r - \delta D(X,N)$, and by the triangle inequality $|p-\tilde q| \leq |p- q| + \frac{\delta}{2}D(X,N) < \tilde r$, a contradiction.
\emph{Proof of (iv):}
It follows also from (c) and Lemma \ref{lma: circumcenter regularity} that for all adjacent $K,L\in {\mathcal T}(X,N)$ we have $d_{KL} \geq \frac{\delta}{2} D(X,N)$.
\emph{Proof of (v) and (vi):} Let $\eta>0$, to be fixed later. There is $s>0$ such that for every $x_0\in N$, the orthogonal projection $\pi:\R^3 \to x_0 + T_{x_0}N$ is an $\eta$-isometry when restricted to $N\cap B(x_0,s)$, in the sense that $|D\pi - \mathrm{id}_{TN}|\leq \eta$.
Let us write $X_\pi=\pi(X\cap B(x_0,s))$. This point set fulfills all the requirements of Theorem \ref{thm: planar Delaunay} (identifying $x_0+T_{x_0}N$ with $\R^2$), except for possibly protection.
We will prove below that
\begin{equation}\label{eq:23}
X_\pi\text{ is } \frac{\delta}{4}D(X,N) \text{-protected}.
\end{equation}
We will then consider the planar Delaunay triangulated surface ${\mathcal T}' \coloneqq {\mathcal T}(X_\pi, x_0 + T_{x_0}N)$, and show that for $x,y,z\in B(x_0,s/2)$ we have
\begin{equation}\label{eq:22}
K:=[x,y,z]\in {\mathcal T}(X,N)\quad \Leftrightarrow \quad K_\pi:=[\pi(x),\pi(y),\pi(z)]\in {\mathcal T}'\,
\end{equation}
If we prove these claims, then (v) follows from Theorem \ref{thm: planar Delaunay}, while (vi) follows from Theorem \ref{thm: planar Delaunay} and Lemma \ref{lma: graph property}.
\medskip
We first prove \eqref{eq:23}: Let $\pi(x),\pi(y),\pi(z)\in X_\pi$, write $K_\pi= [\pi(x),\pi(y),\pi(z)]$, and assume $r(K_\pi)\leq D(X_\pi,\mathrm{conv}(X_{\pi}))$. For a contradiction, assume that there is $\pi(p)\in X_\pi\setminus \{\pi(x),\pi(y),\pi(z)\}$ such that
\[
\left||q(K_\pi)-\pi(p)|-r(K_\pi)\right|<\frac{\delta}{4}D(X,N)\,.
\]
Using again $|D\pi-\mathrm{id}_{TN}|<\eta$ and Lemma \ref{lma: circumcenter regularity},
we obtain, with $K=[x,y,z]$,
\[
\left||q(K)-p|-r(K)\right|<(1+C\eta)\frac{\delta}{4}D(X,N)\,.
\]
Choosing $\eta$ small enough, we obtain a contradiction to (c). This completes the proof of \eqref{eq:23}.
\medskip
Next we show the implication $K\in {\mathcal T}\Rightarrow K_\pi\in {\mathcal T}'$: Let $p\in X\cap B(x_0,s) \setminus \{x,y,z\}$.
Assume for a contradiction that $\pi(p)$ is contained in the circumball of $K_\pi$,
\[
|\pi(p) - q(K_\pi)|\leq r(K_\pi)\,.
\] Then by $|D\pi-\mathrm{id}_{TN}|<\eta$ and Lemma \ref{lma: circumcenter regularity},
\[
|p-q(K)|\leq r(K) + C(\alpha)\eta D(X,N)\,.
\]
Choosing $\eta<\delta/(2C(\alpha))$, we have by (c) that
\[|p-q(K)| \leq r(K) - \delta D(X,N)\,,
\]
which in turn implies $|p-\tilde q| < \tilde r$.
This is a contradiction to $\tilde q \in V_x \cap V_y \cap V_z$, since $p$ is closer to $\tilde q$ than any of $x,y,z$. This shows $K_\pi\in {\mathcal T}'$.
Now we show the implication $K_\pi\in {\mathcal T}'\Rightarrow K\in {\mathcal T}$: Let $x,y,z\in X\cap B(x_0,s/2)$ with $[\pi(x),\pi(y),\pi(z)]\in {\mathcal T}'$. Let $p\in X\cap B(x_0,s) \setminus \{x,y,z\}$. Assume for a contradiction that $|p-\tilde q| \leq \tilde r$. Then again by Lemma \ref{lma: circumcenter regularity} we have
\[
|p - \tilde q| < \tilde r \Rightarrow |p-q| < r + \delta D(X,N) \Rightarrow |p-q| \leq r - \delta D(X,N) \Rightarrow |\pi(p) - q'| < r'.
\]
Here $q'$ and $r'$ denote the circumcenter and circumradius of $K_\pi$, and we have again used (c) and the fact that $D(X,N)$ is small enough. The last inequality contradicts the Delaunay property of ${\mathcal T}'$, completing the proof of \eqref{eq:22}, and hence the proof of the present proposition.
\end{proof}
\begin{rem}
A much shorter proof exists for the case of the two-sphere, $N = \mathcal{S}^2$. Here, any finite set $X\subset \mathcal{S}^2$ such that no four points of $X$ are coplanar and every open hemisphere contains a point of $X$ admits a Delaunay triangulation homeomorphic to $\mathcal{S}^2$, namely $\partial \conv(X)$.
Because no four points are coplanar, every face of $\partial \conv(X)$ is a regular triangle $K = [x,y,z]$. The circumcircle of $K$ then lies on $\mathcal{S}^2$ and $q(K) = n(K)|q(K)|$, where $n(K)\in \mathcal{S}^2$ is the outer normal. (The case $q(K)=-|q(K)|n(K)$ is forbidden because the hemisphere $\{x\in \mathcal{S}^2\,:\,x \cdot n(K)>0\}$ contains a point in $X$.) To see that the circumball contains no other point $p\in X\setminus \{x,y,z\}$, we note that since $K\subset \partial \conv(X)$ we have $(p-x)\cdot n(K)< 0$, and thus $|p-q(K)|^2 = 1 + 1 - 2p \cdot q(K) > 1 + 1 - 2x \cdot q(K) = |x-q(K)|^2$.
Finally, $\partial \conv(X)$ is homeomorphic to $\mathcal{S}^2$ since $\conv(X)$ contains a regular tetrahedron.
\end{rem}
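The construction of the remark is also easy to carry out numerically; the following sketch (illustrative only) computes the triangulation as the boundary of the convex hull of random points on the unit sphere.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # random points on the unit sphere

hull = ConvexHull(X)
triangles = hull.simplices        # (m, 3) vertex indices of the Delaunay triangles
# For points in general position one expects 2*200 - 4 = 396 triangles (Euler's formula).
print(triangles.shape[0])
\end{verbatim}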
We are now in a position to prove the upper bound of our main theorem, Theorem \ref{thm:main} (ii).
\begin{figure}
\includegraphics[height=5cm]{upper_global.pdf}
\caption{The global triangulation of a smooth surface is achieved by first covering a significant portion of the surface with the locally optimal triangulation, then adding additional points in between the regions, and finally finding a global Delaunay triangulation. \label{fig:upper bound}}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm:main} (ii)]
We first note that it suffices to show the result for $h\in C^3(M)$ with $\|h\|_\infty < \frac{\delta(M)}{2}$. To see this, in the general case $h\in W^{2,2}(M)\cap W^{1,\infty}(M)$, $\|h\|_\infty \leq \frac{\delta(M)}{2}$, we approximate $h$ by smooth functions $h_{\beta} := H_\beta h$, where $(H_\beta)_{\beta >0 }$ is the heat semigroup. Clearly $H_\beta h \in C^\infty(M)$, and $\nabla H_\beta h \to \nabla h$ uniformly, so that $\|h_\beta\|_{\infty}\leq \frac{\delta(M)}{2}$ and $\|\nabla h_{\beta}\|_\infty <\|\nabla h\|_{\infty}+1$ for $\beta$ small enough.
Then
\[
\int_M f(x,h_\beta(x),\nabla h_\beta(x), \nabla^2 h_\beta) \,\d{\mathscr H}^2 \to \int_M f(x,h(x),\nabla h(x), \nabla^2 h) \,\d{\mathscr H}^2
\]
for $\beta\to 0$ whenever
\[f:M \times [-\delta(M)/2, \delta(M)/2] \times B(0,\|\nabla h\|_{\infty}+1) \times (TM \times TM) \to \R
\]
is continuous with quadratic growth in $\nabla^2 h$. The Willmore functional
\[
h\mapsto \int_{M_h} |Dn_{M_h}|^2\d{\mathscr H}^2\,,
\]
which is our limit functional, may be written in this way. This proves our claim that we may reduce our argument to the case $h\in C^3(M)$, since the above approximation allows for the construction of suitable diagonal sequences in the strong $W^{1,p}$ topology, for every $p<\infty$.
\medskip
For the rest of the proof we fix $h\in C^3(M)$. We choose a parameter $\delta>0$. By compactness of $M_h$, there is a finite family of pairwise disjoint closed
sets $(Z_i)_{i\in I}$ such that
\[
{\mathscr H}^2\left(M_h \setminus \bigcup_{i\in I} Z_i\right) \leq \delta
\]
and such that, after applying a rigid motion $R_i:\R^3\to \R^3$, the surface $R_i(M_h \cap Z_i)$ is the graph of a function $h_i\in C^2(U_i)$ for some open sets $(U_i)_{i\in I}$ with $\|\nabla h_i\|_\infty \leq \delta$ and $\|\nabla^2 h_i - \diag(\alpha_i,\beta_i)\|_\infty \leq \delta$.
\medskip
We can apply Proposition \ref{prop: local triangulation} to $R_i(M_h \cap Z_i)$ with global parameters $\theta := \delta$ and $\varepsilon>0$ such that $\dist(Z_i,Z_j)>2\varepsilon$ for $i\neq j$, yielding point sets $X_{i,\varepsilon}\subset M_h \cap Z_i$. The associated triangulated surfaces ${\mathcal T}_{i,\varepsilon}$ (see Figure \ref{fig:upper bound}) have the Delaunay property, have vertices $X_{i,\varepsilon}$ and maximum circumball radius at most $\varepsilon$. Furthermore, we have that
\begin{equation}\label{eq: sum local interactions}
\begin{aligned}
\sum_{i\in I} &\sum_{K,L\in {\mathcal T}_{i,\varepsilon}} \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2\\
& \leq (1+C(\delta+\varepsilon)) \sum_{i\in I} \L^2(U_i)\times\\
&\quad \times \left(\max_{x\in U_i}|\partial_{11}h_i(x)|^2+\max_{x\in U_i}|\partial_{22}h_i(x)|^2+\delta^{-1}\max_{x\in U_i}|\partial_{12}h_i(x)|^2\right)+C\varepsilon\\
&\leq (1+C(\delta+\varepsilon)) \sum_{i\in I} \int_{M_h \cap Z_i} |Dn_{M_h}|^2 \,\d{\mathscr H}^2+C(\varepsilon+\delta)\,,
\end{aligned}
\end{equation}
where in the last line we have used $\|\nabla h_i\|_{\infty}\leq \delta$, $\|\nabla^2h_i-\diag(\alpha_i,\beta_i)\|_{\infty}\leq \delta$, and the identity
\[
\begin{split}
\int_{M_h \cap Z_i} |Dn_{M_h}|^2 \,\d{\mathscr H}^2&=
\int_{(U_i)_{h_i}}|Dn_{(U_i)_{h_i}}|^2\d\H^2\\
&=\int_{U_i}\left|(\mathbf{1}_{2\times 2}+\nabla h_i\otimes \nabla h_i)^{-1}\nabla^2 h_i\right|^2(1+|\nabla h_i|^2)^{-1/2}\d x\,.
\end{split}
\]
We shall use the point set $Y_{0,\varepsilon} := \bigcup_{i\in I} X_{i,\varepsilon}$ as a basis for a global triangulated surface. We shall successively augment the set by a single point $Y_{n+1,\varepsilon} := Y_{n,\varepsilon} \cup \{p_{n,\varepsilon}\}$ until the construction below terminates after finitely many steps. We claim that we can choose the points $p_{n,\varepsilon}$ in such a way that for every $n\in\N$ we have
\begin{itemize}
\item [(a)] $\min_{x,y\in Y_{n,\varepsilon}, x\neq y} |x-y| \geq \frac{\varepsilon}{2}$.
\item [(b)] Whenever $x,y,z,p\in Y_{n,\varepsilon}$ are four distinct points such that the circumball $\overline{B(q,r)}$ of $[x,y,z]$ exists and has $r\leq \varepsilon$, then
\[
\left| |p-q| - r \right| \geq \frac{\delta}{2} \varepsilon.
\]
If at least one of the four points $x,y,z,p$ is not in $Y_{0,\varepsilon}$, then
\begin{equation}\label{eq:21}
\left| |p-q| - r \right| \geq c \varepsilon,
\end{equation}
where $c>0$ is a universal constant.
\end{itemize}
First, we note that both (a) and (b) are true for $Y_{0,\varepsilon}$.
Now, assume we have constructed $Y_{n,\varepsilon}$. If there exists a point $x\in M_h$ such that $B(x,\varepsilon) \cap Y_{n,\varepsilon} = \emptyset$, we consider the set $A_{n,\varepsilon}\subset M_h \cap B(x,\frac{\varepsilon}{2})$ consisting of all points $p\in M_h \cap B(x,\frac{\varepsilon}{2})$ such that for all regular triangles $[x,y,z]$ with $x,y,z\in Y_{n,\varepsilon}$ and circumball $\overline{B(q,r)}$ satisfying $r\leq 2\varepsilon$, we have $\left||p-q| - r\right| \geq c \varepsilon$.
Seeing as how $Y_{n,\varepsilon}$ satisfies (a), the set $A_{n, \varepsilon}$ is nonempty if $c>0$ is chosen small enough, since for all triangles $[x,y,z]$ as above we have
\[
{\mathscr H}^2\left(\left\{ p\in B(x,\frac{\varepsilon}{2})\cap M_h\,:\,\left||p-q| - r\right| < c \varepsilon \right\}\right) \leq 4c \varepsilon^2,
\]
and the total number of regular triangles $[x,y,z]$ with $r\leq 2\varepsilon$ and $\overline{B(q,r)}\cap B(x,\varepsilon)\neq \emptyset$ is universally bounded as long as $Y_{n,\varepsilon}$ satisfies (a).
We simply pick $p_{n,\varepsilon}\in A_{n,\varepsilon}$, then clearly $Y_{n+1,\varepsilon} \coloneqq Y_{n,\varepsilon} \cup \{p_{n,\varepsilon}\}$ satisfies (a) by the triangle inequality. We now have to show that $Y_{n+1,\varepsilon}$ still satisfies (b).
This is obvious whenever $p = p_{n,\varepsilon}$ by the definition of $A_{n,\varepsilon}$. If $p_{n,\varepsilon}$ is none of the points $x,y,z,p$, then (b) is inherited from $Y_{n,\varepsilon}$. It remains to consider the case $p_{n,\varepsilon} = x$. Then $x$ has distance at least $c\varepsilon$ to all circumspheres of nearby triples with radius at most $2\varepsilon$. We now assume that the circumball $\overline{B(q,r)}$ of $[x,y,z]$ has radius $r \leq \varepsilon$ and that some point $p\in Y_{n,\varepsilon}$ is close to $\partial B(q,r)$. To this end, define
\[
\eta \coloneqq \frac{\left||p-q| - r \right|}{\varepsilon}\,.
\]
We show that $\eta$ is bounded from below by a universal constant. To this end, we set
\[
p_t \coloneqq (1-t)p + t\left(q+r\frac{p-q}{|p-q|}\right)
\]
(see Figure \ref{fig:pt}) and note that if $\eta\leq \eta_0$, all triangles $[y,z,p_t]$ are uniformly regular.
\begin{figure}[h]
\centering
\includegraphics[height=5cm]{pt.pdf}
\caption{The definition of $p_t$ as linear interpolation between $p_0$ and $p_1$. \label{fig:pt}}
\end{figure}
Define the circumcenters $q_t \coloneqq q(y,z,p_t)$, and note that $q_1 = q$. By Lemma \ref{lma: circumcenter regularity}, we have $|q_1 - q_0| \leq C|p_1 - p_0| = C\eta \varepsilon$ if $\eta\leq \eta_0$. Thus the circumradius of $[y,z,p_0]$ is bounded by
\[
|y-q_0| \leq |y-q| + |q-q_0| \leq (1+C\eta)\varepsilon \leq 2\varepsilon
\]
if $\eta\leq \eta_0$. Because $x\in Y_{n+1,\varepsilon} \setminus Y_{n,\varepsilon} \subset A_{n,\varepsilon}$, we have, using \eqref{eq:21},
\[
c\varepsilon \leq \left| |x-q_0| - |p-q_0|\right| \leq \left| |x-q| - |p-q| \right| + 2 |q - q_0| \leq (1+2C)\eta\varepsilon,
\]
i.e. that $\eta \geq \frac{c}{1+2C}$. This shows (b).
\begin{comment}
We note that by (a) we have $r\geq \frac{\varepsilon}{4}$. We set $p_t \coloneqq (1-t) p + t\left((q + r\frac{p-q}{|p-q|}\right)$ for $t\in[0,1]$. If $\eta<\eta_0$, then the triangles $[y,z,p_t]$ are all $\zeta_0$-regular triangles for some universal constants $\zeta_0,\eta_0>0$. By Lemma \ref{lma: circumcenter regularity}, then $|q(y,z,p_0) - q(y,z,p)|\leq C \eta \varepsilon$. However, $q(y,z,p_0) = q$, and $|p-q(y,z,p)| \leq 2\varepsilon$ for $\eta<\eta_0$. By the choice $x\in A_{n,\varepsilon}$ then
\[
\left| |x-q(y,z,p)| - |p-q(y,z,p)| \right| \geq c\varepsilon,
\]
which implies that
\[
\left||x-q| - |p-q| \right| \geq c \varepsilon - C \eta \varepsilon,
\]
i.e. that $\eta \geq \min\left( \frac{1}{1+C}, \eta_0, \frac14 \right)$, which is a universal constant. This shows (b).
\end{comment}
Since $M_h$ is compact, this construction eventually terminates, resulting in a set $X_\varepsilon := Y_{N(\varepsilon),\varepsilon} \subset M_h$ with the properties (a), (b), and $D(X_\varepsilon,M_h) \leq \varepsilon$.
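(As an aside, the choice of the points $p_{n,\varepsilon}$ above is a simple admissibility test against all nearby circumspheres. A rough numerical sketch of this test, with a heuristic search radius and placeholder names, could look as follows; it is not part of the argument.)
\begin{verbatim}
import numpy as np
from itertools import combinations

def admissible(p, Y, eps, c):
    """Is p at distance >= c*eps from every circumsphere of a regular triple
    of points of Y with circumradius <= 2*eps (cf. the set A_{n,eps})?"""
    near = [y for y in Y if np.linalg.norm(y - p) < 5 * eps]   # heuristic cut-off
    for a, b, c3 in combinations(near, 3):
        ab, ac = b - a, c3 - a
        w = np.cross(ab, ac)
        if np.linalg.norm(w) < 1e-12 * eps ** 2:
            continue                                   # skip degenerate triples
        q = a + (np.dot(ac, ac) * np.cross(w, ab)
                 + np.dot(ab, ab) * np.cross(ac, w)) / (2.0 * np.dot(w, w))
        r = np.linalg.norm(q - a)
        if r <= 2 * eps and abs(np.linalg.norm(p - q) - r) < c * eps:
            return False
    return True

# One augmentation step then scans candidate points of M_h in B(x, eps/2)
# until an admissible one is found and adds it to Y.
\end{verbatim}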
\medskip
Consider a Lipschitz function $g:M_h\to \R$. Since $M_h$ is a $C^2$ surface, we have that for $\|g\|_{W^{1,\infty}}$ small enough, $(M_h)_g$ is locally a tangent Lipschitz graph over $M$, see Definition \ref{def:Mgraph} (iii). By Lemma \ref{lma: graph property}, this implies that $(M_h)_g$ is a graph over $M$.
Invoking Proposition \ref{prop: Delaunay existence} yields a Delaunay triangulated surface ${\mathcal T}_\varepsilon \coloneqq {\mathcal T}(X_\varepsilon, M_h)$ with vertex set $X_\varepsilon$ that is $\zeta_0$-regular for some $\zeta_0>0$, and $\bigcup_{K\in {\mathcal T}_\varepsilon} K = (M_h)_{g_\varepsilon}$ with $\|g_\varepsilon\|_{W^{1,\infty}}\leq C(\delta)\varepsilon$.
By the above, there exist Lipschitz functions $h_\varepsilon:M\to \R$ such that
$(M_h)_{g_\varepsilon} = M_{h_\varepsilon}$, with $h_\varepsilon \to h$ in $W^{1,\infty}$, $\|h_\varepsilon\|_\infty \leq \frac{\delta(M)}{2}$ and $\|\nabla h_\varepsilon\|\leq \|\nabla h\|_{\infty}+1$.
\medskip
It remains to estimate the energy. To do so, we look at the two types of interfaces appearing in the sum
\[
\sum_{K,L\in {\mathcal T}_\varepsilon} \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2.
\]
First, we look at pairwise interactions where $K,L\in {\mathcal T}(X_{i,\varepsilon})$ for some $i$. These are bounded by \eqref{eq: sum local interactions}.
Next, we note that if $\varepsilon < \min_{i\neq j \in I} \dist(B_i,B_j)$, it is impossible for $X_{i,\varepsilon}$ and $X_{j,\varepsilon}$, $i\neq j$, to interact.
Finally, we consider all interactions of neighboring triangles $K,L\in {\mathcal T}_\varepsilon$ where at least one vertex is not in $Y_{0,\varepsilon}$. By \eqref{eq:21}, these pairs all satisfy $\frac{l_{KL}}{d_{KL}} \leq C$ for some universal constant $C$ independent of $\varepsilon,\delta$, and $|n(K) - n(L)|\leq C\varepsilon$ because ${\mathcal T}_\varepsilon$ is $\zeta_0$-regular and $M_h$ is $C^2$. Further, no points were added inside any $Z_i$. Thus
\[
\begin{split}
\sum_{\substack{K,L\in {\mathcal T}_\varepsilon\,:\,\text{at least}\\\text{ one vertex is not in }Y_{0,\varepsilon}}}& \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2 \\
&\leq C{\mathscr H}^2\left(M_h \setminus \bigcup_{i\in I}\{x\in Z_i\,:\,\dist(x,\partial Z_i)\geq 2\varepsilon\}\right)\\
&\leq C \delta + C(\delta)\varepsilon.
\end{split}
\]
Choosing an appropriate diagonal sequence $\delta(\varepsilon) \to 0$ yields a sequence ${\mathcal T}_\varepsilon = M_{h_\varepsilon}$ with $h_\varepsilon\to h$ in $W^{1,\infty}(M)$ such that
\[
\limsup_{\varepsilon \to 0} \sum_{K,L\in {\mathcal T}_\varepsilon} \frac{l_{KL}}{d_{KL}} |n(K) -n(L)|^2 \leq \int_{M_h} |Dn_{M_h}|^2\,d{\mathscr H}^2.
\]
\end{proof}
\section{Necessity of the Delaunay property}
\label{sec:necess-dela-prop}
We now show that without the Delaunay condition, it is possible to achieve a lower energy. In contrast to the preceding sections, we are going to choose an underlying manifold $M$ with boundary (the ``hollow cylinder'' $S^1\times[-1,1]$). By ``capping off'' the hollow cylinder one can construct a counterexample to the lower bound in Theorem \ref{thm:main}, where it is assumed that $M$ is compact without boundary.
\begin{prop}\label{prop: optimal grid}
Let $M =S^1\times[-1,1] \subset \R^3$ be a
hollow cylinder and $\zeta>0$. Then there are $\zeta$-regular triangulated
surfaces ${\mathcal T}_j\subset \R^3$ with $\size({\mathcal T}_j) \to 0$ and ${\mathcal T}_j \to M$ for
$j\to\infty$ with
\[
\limsup_{j\to\infty} \sum_{K,L\in {\mathcal T}_j} \frac{\l{K}{L}}{d_{KL}}
|n(K)-n(L)|^2 < c(\zeta) \int_M |Dn_M|^2\,d\H^2\,,
\]
where the positive constant $c(\zeta)$ satisfies
\[
c(\zeta)\to 0 \quad \text{ for } \quad\zeta\to 0\,.
\]
\end{prop}
\begin{figure}
\includegraphics[height=5cm]{cylinder2.pdf}
\caption{A non-Delaunay triangulated cylinder achieving a low energy. \label{fig:cylinder}}
\end{figure}
\begin{proof}
For every $\varepsilon = 2^{-j}$ and $s\in\{2\pi m^{-1}:m=3,4,5,\dots\}$, we
define a flat triangulated surface ${\mathcal T}_j\subset \R^2$ with
$\size({\mathcal T}_j) \leq \varepsilon$ as follows: As manifolds with boundary, ${\mathcal T}_j=[0,2\pi]\times
[-1,1]$ for all $j$; all triangles are isosceles,
with one side a translation of $[0,\varepsilon]e_2$ and height $s\varepsilon$ in
$e_1$-direction. We neglect the triangles close to the boundary
$[0,2\pi]\times\{\pm 1\}$, and leave it to
the reader to verify that their contribution will be negligible in the end.
\medskip
We then wrap this triangulated surface around the cylinder, mapping the
corners of triangles onto the surface of the cylinder via $(\theta,t) \mapsto
(\cos\theta, \sin\theta, t)$, to obtain a triangulated surface $\tilde {\mathcal T}_j$.
Obviously, the topology of $\tilde {\mathcal T}_j$ is $S^1\times[-1,1]$.
Then we may estimate all terms $\frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2$. We
first find the normal of the reference triangle $K\in \tilde {\mathcal T}_j$ spanned by the points $x = (1,0,0)$, $y = (1,0,\varepsilon)$, and $z = (\cos(s\varepsilon),\sin(s\varepsilon),\varepsilon/2)$. We note that
\[
n(K) = \frac{(z-x) \times (y-x)}{|(z-x) \times (y-x)|} = \frac{(\varepsilon\sin(s\varepsilon),\, \varepsilon(1-\cos(s\varepsilon)),\,0)}{\varepsilon\sqrt{2-2\cos(s\varepsilon)}} = (1,0,0) + O(s\varepsilon).
\]
We note that the normal is the same for all translations $K+te_3$ and for all triangles bordering $K$ diagonally. The horizontal neighbor $L$ also has $n(L) = (1,0,0) + O(s\varepsilon)$. However, the dimensionless prefactor satisfies $\frac{\l{K}{L}}{d_{KL}} \leq Cs$, since the shared edge has length $\varepsilon$ while the circumcenters of the two thin triangles are at distance of order $\varepsilon/s$. Summing up the $O(s^{-1}\varepsilon^{-2})$ contributions yields
\[
\sum_{K,L\in \tilde{\mathcal T}_j} \frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2 \leq C \frac{s^3\varepsilon^2}{s\varepsilon^2} = Cs^2.
\]
This holds provided that $\varepsilon$ is small enough. Letting $s\to 0$, we see that this energy is arbitrarily small.
\end{proof}
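The scaling $Cs^2$ can also be observed numerically. The following sketch builds the wrapped triangulation, leaving the seam and the boundary rows open as in the proof, and evaluates its discrete energy with the routine sketched after Proposition \ref{prop:lower_blowup}; parameters and names are placeholders.
\begin{verbatim}
import numpy as np

def thin_cylinder_triangulation(s, eps):
    """Isosceles triangles with vertical base eps and horizontal height s*eps,
    wrapped onto the cylinder via (theta, t) -> (cos theta, sin theta, t)."""
    ncol = int(2 * np.pi / (s * eps))      # columns around the cylinder (seam left open)
    nrow = int(2.0 / eps)                  # rows along the axis, t in [-1, 1]
    index, verts = {}, []
    for i in range(ncol + 1):
        for j in range(nrow + 1):
            theta = i * s * eps
            t = -1.0 + (j + 0.5 * (i % 2)) * eps
            index[(i, j)] = len(verts)
            verts.append((np.cos(theta), np.sin(theta), t))
    tris = []
    for i in range(ncol):
        for j in range(nrow - 1):
            tris.append((index[(i, j)], index[(i + 1, j)], index[(i, j + 1)]))
            tris.append((index[(i + 1, j)], index[(i + 1, j + 1)], index[(i, j + 1)]))
    return np.array(verts), np.array(tris)

# discrete_energy(*thin_cylinder_triangulation(s=0.5, eps=2 ** -4)) is expected
# to behave like C*s^2 as s decreases (for eps small), in line with the proof.
\end{verbatim}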
\bibliographystyle{alpha}
\section{Introduction}
The crater chronology expresses the crater production rate (number of craters per unit time per surface area)
as a function of time. It encapsulates our understanding of the observed crater record on surfaces of different
bodies. If known, it can be used to estimate the surface age, identify the dominant populations of impactors,
and infer interesting things about the dynamical and collisional evolution of the solar system. Unfortunately,
it is quite difficult to determine an accurate crater chronology from data alone. This is because the ages of
different craters are often unknown and must be inferred by indirect means. The only crater chronology that
is directly derived from observational data is the lunar chronology (e.g. \citealp{2001SSRv...96...55N};
\citealt{2009AJ....137.4936M}; \citealp{2014E&PSL.403..188R}). The Moon has a well preserved crater record
and the soil samples returned by lunar missions can be used to infer accurate absolute ages of at least some
lunar craters and basins. This provides time anchors from which the lunar chronology can be reconstructed.
For most other solar system bodies, for which the crater record is not well preserved and/or
the absolute crater ages are unknown, the crater chronology must be inferred by different means (e.g. \citealp{2010P&SS...58.1116M};
\citealp{2012P&SS...66...87M}; \citealp{2014P&SS..103..131O}; \citealp{2016NatCo...712257M}). For example,
some researchers have re-scaled the lunar chronology to other bodies (\citealp{2009AJ....137.4936M};
\citealp{2014P&SS..103..104S}), including the main belt asteroids, even if this method may be difficult to
justify (\citealp{2014P&SS..103..131O}). Another approach, which we pursue here, is to model the evolution
of impactors and their impacts on target bodies, and use the scaling laws (\citealp{2007Icar..187..345H};
\citealp{2016JGRE..121.1695M}; \citealp{2016Icar..271..350J}) to determine the expected crater distributions.
The results are then compared to observations.
Before 2011 our knowledge of the asteroid crater records was based on spacecraft images of $\sim$10 of these bodies,
all of them smaller than 100 km in diameter. The arrival of the \textit{Dawn} spacecraft to Vesta in 2011 and
Ceres in 2015 opened a new window into studies of impact cratering in the asteroid belt. A large basin
on Vesta's surface has been suggested to explain Vesta's collisional family \citep{1993Sci...260..186B}. It was later
imaged by the Hubble Space Telescope \citep{1997Icar..128...88T} and \textit{Dawn}, and found to be $\simeq 500$
km in diameter (named Rheasilvia). \textit{Dawn} has also discovered another basin on Vesta, now called Veneneia,
roughly 400 km in diameter \citep{2012Sci...336..690M}. In contrast, Ceres's surface does not show any obvious basins
and the largest craters, Kerwan and Yalode, have diameters of only 280 km and 260 km, respectively
\citep{2016NatCo...712257M}.
This is puzzling because Ceres has a collisional cross-section that is $\sim 4$ times larger than that of Vesta.
For the two basins on Vesta, there should thus be $\sim 8$ basins on Ceres, but there are none.
Previous attempts to derive a crater chronology for Vesta have been carried out by \citet{2014P&SS..103..104S}
and \citet{2014P&SS..103..131O}. The former work used the lunar chronology and rescaled it, by simply multiplying
the crater production rate by a fixed factor, to Vesta. They estimated the Rheasilvia and Veneneia age to be
$\sim3.5$ Gy. This is a significantly older age of Rheasilvia than the one ($\sim1$ Gy) suggested in
\citet{2012Sci...336..690M}. At least part of this difference is due to different crater counting strategies
adopted by different research teams.
The young age of Rheasilvia would be more in line with the age of the Vesta family,
thought to have formed in the aftermath of the Rheasilvia impact, which was estimated from arguments based on the collisional
grinding of family members \citep{1996A&A...316..248M}. Dynamical modeling of the Vesta family does not constrain
the family age well and admits ages $\geq1$ Gy (\citealt{2005A&A...441..819C}; \citealp{2008Icar..193...85N}),
which are compatible with either age estimate mentioned above.
\citet{2014P&SS..103..131O} developed a new chronology for Vesta based on a synthesis of previous results. Their
chronology accounts for the long-term dynamical depletion of the asteroid belt \citep{2010Icar..207..744M},
effects of planetary migration/instability and scattering by planetary embryos that may have resided in
the belt during the earliest stages (\citealp{2001Icar..153..338P}; \citealp{2007Icar..191..434O};
\citealp{2010AJ....140.1391M}).
Their chronology implies the Rheasilvia age to be $\sim1$ Gy and
creates some tension with the low probability of forming Rheasilvia this late ($\sim4$\% according
to \citealp{2014P&SS..103..131O}). They also pointed out a significant difference between the lunar and Vesta chronologies,
suggesting that the flux of impactors on Vesta was {\it not} orders of magnitude higher during the lunar
Late Heavy Bombardment (LHB).
A similar analysis was published for Ceres in \citet{2016Sci...353.4759H} and \citet{2016NatCo...712257M}. The
former work applied both the lunar and O'Brien chronologies to Ceres and determined a relatively young age
of the Kerwan crater (550-720 My). The absence of (obvious) large basins on Ceres is puzzling. \citet{2016NatCo...712257M}
proposed that some large depressions observed on Ceres's surface, referred to as \textit{planitia}, could be strongly relaxed basins. They identified at least two of these topographic features, Vendimia planitia with
a $\sim830$ km diameter and another planitia with a $\sim570$ km diameter. Various geological mechanisms related to crustal relaxation, including potentially recent geologic activity, could be responsible for nearly complete basin erasure.
Here we determine the crater chronologies of Ceres, Vesta and the Moon using a dynamical model of the asteroid belt
from \citet{2017AJ....153..103N}. See that work for a complete description of the model. In brief,
the model accounts for the early dynamical evolution of the belt due to migration/instability of the outer planets
and tracks asteroid orbits to the present epoch. The main asteroid belt, well characterized by modern surveys,
is then used to calibrate the number and orbits of asteroids at any given time throughout the solar system history.
The model does not account for other effects, such as scattering by planetary
embryos, or other impactor populations, such as comets, leftovers of the terrestrial planet accretion, etc.
In Sect. \ref{sec:The-model}, we describe the model in more detail and explain the method that we used to
determine the crater chronology and size distribution. The results for Vesta and Ceres are discussed in
Sect. \ref{sec:Results}. Sect. \ref{sec:Conclusions} summarizes our main conclusions.
\section{Model\label{sec:The-model}}
\subsection{Dynamical model\label{sec:Dynamical-model}}
We use the dynamical model of \citet{2017AJ....153..103N} to determine the crater chronologies of Ceres and Vesta.
In that work, we performed a numerical simulation --labeled as CASE1B-- of 50,000 test asteroids over the age of
the solar system. The simulation starts at the time of the solar nebula dispersal (it does not account for gas drag).
The adopted physical model takes into account gravitational perturbations of all planets from Venus to Neptune (Mercury is
included for $t \leq t_{\mathrm{inst}}$, where $t_{\mathrm{inst}}$ is the time of dynamical instability; see below).
During the early stages, the giant planets are assumed to evolve by planetesimal-driven migration and dynamical
instability (the so-called jumping-Jupiter model; \citealp{2009A&A...507.1041M}; \citealp{2012Natur.485...78B};
\citealp{2012AJ....144..117N}). See \citet{2018ARA&A..56..137N} for a review. The simulations span 4.56 Gy and the time of
the instability time $t_{\mathrm{inst}}$ is considered to be a free parameter. The Yarkovsky effect and collisional
evolution of the main belt is not modeled in \citet{2017AJ....153..103N}. This limits the reliability of the model
to large asteroids for which these effects are not overly significant \citep{2018AJ....155...42N}. Comets and other
impactor populations are not considered. This is equivalent to assuming that Ceres and Vesta crater records are
dominated by asteroid impactors.
The dynamical model of \citet{2017AJ....153..103N} employed a flexible scheme to test any initial orbital distribution
of asteroids. By propagating this distribution to the present time and comparing it with the observed distribution
of main belt asteroids, we were able to reject models with too little or too much initial excitation (also see
\citealp{2015AJ....150..186R}). From the models that passed this test we select the one that has the Gaussian distributions in
$e$ and $i$ with $\sigma_e=0.1$ and $\sigma_i=10^\circ$, and a power-law radial surface density $\Sigma(a)=1/a$.
We also tested other initial distributions, such as the one produced by the Grand Tack model \citep{2011Natur.475..206W},
and will briefly comment on them in Sect. 3. The Grand Tack distribution is wider in eccentricity (approximately
Gaussian with $\sigma_e \simeq 0.2$ and Rayleigh in $i$ with $\sigma_i \simeq 10^\circ$; see \citealp{2015AJ....150..186R} for explicit definitions of these distributions).
The impact probability and velocity of a test asteroid on a target world is computed by the \"Opik algorithm
\citep{1994Icar..107..255B}. This allows us to account for impacts on bodies that were not explicitly included in the
simulation, such as Ceres, Vesta or the Moon. Ceres and Vesta are placed on their current orbits since time zero
(corresponding to the dispersal of the protosolar nebula). This is only an approximation because in reality both
these asteroids must have experienced orbital changes during the planetary migration/instability. Establishing how
these changes may have affected their crater records is left for future work. See \citet{2017AJ....153..103N} for the
method used for the Moon. The impact probabilities are initially normalized \textit{to 1 test particle surviving
at the end of the simulation}. In other words, the impact probabilities directly provided by a given simulation are divided by the total number of test particles that survived at the end of that simulation. This normalization is necessary, because the final state of the simulation resembles well the present asteroid belt only in terms of orbital distribution, but not in absolute numbers. The actual impact flux is obtained by multiplying these normalized impact probabilities by the number of asteroids larger than a given size in the present asteroid belt (see Eq. (\ref{eq:ntd}) below).
\subsection{Crater chronology\label{subsec:chrono}}
The usual approach to modeling crater records of planetary and minor bodies consists of two steps. In the first step,
scientists define the chronology function, $f(t)$, which gives the crater production rate as a function of
time $t$. In the second step, the model production function (MPF), $n(D_{\mathrm{crat}})$, is synthesized from available
constraints to compute the crater production rate as a function of crater diameter, $D_{\mathrm{crat}}$. The number of craters
is then computed as $n(t,D_{\mathrm{crat}})=f(t)\,n(D_{\mathrm{crat}})\,{\rm d}t\,{\rm d}D_{\mathrm{crat}}$. Integrating this relationship over $t$ and/or
$D_{\mathrm{crat}}$ leads to cumulative distributions (e.g., the number of craters larger than diameter $D_{\mathrm{crat}}$ produced since time $t$).
This approach implicitly assumes that the MPF is constant in time, which may not be accurate if size-dependent
processes such as the Yarkovsky effect \citep{2015aste.book..509V} influence the impactor population. We do not
investigate such processes here.
Here we use a notation where $t$ measures time from time zero, corresponding to the dispersal of the protosolar gas
nebula, to the present epoch ($t=0$ to 4.56 Gy) and $T$ measures
time backward from the present epoch to time zero; thus $T=4.56\,{\rm Gy}-t$. We first define the chronology function and MPF
in terms of the {\it impactor} flux and diameters (the conversion method from impactor properties to craters is
described in Sect. \ref{subsec:mpf}). The cumulative number of impacts, $n(T,D_{\mathrm{ast}})$, of asteroids larger than the diameter $D_{\mathrm{ast}}$ in the
last $T$, is
\begin{equation}
n(T,D_{\mathrm{ast}})=F(T) \, \mathcal{N}(>\!\! D_{\mathrm{ast}})\label{eq:ntd}
\end{equation}
where $\mathcal{N}(>\!\! D_{\mathrm{ast}})$ is the current number of main belt asteroids larger than $D_{\mathrm{ast}}$ and $F(T)$ is the cumulative
chronology function obtained from the dynamical model (here normalized to one asteroid larger than $D_{\mathrm{ast}}$ at $T=0$).
Eq. (\ref{eq:ntd}) represents a forward-modeling approach that is independent of any crater data; instead, it relies
on the accuracy of numerical simulations to reproduce the main belt evolution and our understanding of the
main belt size distribution (see Sect. \ref{subsec:mpf}).
Having the chronology function, the intrinsic impact probability (actually, the expected value of a Poisson distribution) on the target world, $P_{i}$, can be obtained as:
\begin{equation}
P_{i}(T)=\frac{4\pi}{S}\frac{{\rm d}F(T)}{{\rm d}T}\label{eq:pit}
\end{equation}
where $S$ is the surface area of the target and the factor $4\pi$ accounts for the difference between the surface area
and the cross section. With this definition of $P_{i}$, the total number of impacts is given as $P_{i}\,R^{2}\,n\, \Delta t$,
where $R$ is the target radius, $n$ is the number of impactors and $\Delta t$ is the time interval.
The model gives $P_{i}(0) \simeq 4.1\times10^{-18}\:\mathrm{km}^{-2}\mathrm{y}^{-1}$ for both Ceres and Vesta. This is
somewhat higher than the mean value $P_i = 2.85\times10^{-18}\:\mathrm{km}^{-2}\mathrm{y}^{-1}$ usually considered for
the whole asteroid belt \citep{1992Icar...97..111F}. For Ceres, \citet{2016NatCo...712257M} found $P_i=3.55\times10^{-18}\:\mathrm{km}^{-2}\mathrm{y}^{-1}$,
which is more consistent with our $P_{i}(0)$. The small difference can be related to the fact that our model distribution
of main belt asteroids is more concentrated towards smaller semimajor axes, because the model does not account for the
presence of large collisional families at $a\gtrsim3$ au (mainly the Themis, Hygiea and Eos families). The mean impact
velocities computed from our model are in the range of 4.6 to 7 km~$\mathrm{s}^{-1}$ for the whole simulated time
interval. They show a slightly decreasing trend with $t$ during the earliest stages as asteroid impactors on high-$e$
orbits are removed. The mean velocity at $T=0$ is in good agreement with the current value \citep{1994Icar..107..255B}.
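For concreteness, the following minimal Python sketch illustrates how a tabulated, normalized chronology function $F(T)$ converts into an impact count via Eq. (\ref{eq:ntd}) and an intrinsic impact probability via Eq. (\ref{eq:pit}). The constant-rate $F(T)$, the asteroid count $\mathcal{N}(>\!D_{\mathrm{ast}})$ and the target radius used below are placeholders chosen only to roughly match the present-day $P_i(0)$ quoted above; they are not the actual model output.
\begin{verbatim}
import numpy as np

# Toy, constant-rate chronology F(T): cumulative impacts per surviving asteroid
# over the last T (the actual F(T) is tabulated from the dynamical model).
T_gy = np.linspace(0.0, 4.56, 457)              # time before present (Gy)
F = 9.0e-13 * (T_gy * 1.0e9)                    # placeholder slope ~ present-day rate

# Cumulative impacts n(T, D_ast) = F(T) * N(>D_ast) on the target.
N_gt_D = 1.0e4                                  # assumed current N(>D_ast), illustrative
n_T = F * N_gt_D

# Intrinsic impact probability P_i(T) = (4*pi/S) dF/dT with S = 4*pi*R^2.
R_km = 470.0                                    # assumed Ceres mean radius (km)
dF_dT = np.gradient(F, T_gy * 1.0e9)            # per year
P_i = 4.0 * np.pi * dF_dT / (4.0 * np.pi * R_km**2)

# Total impacts over a span dt would follow P_i * R^2 * N_gt_D * dt.
print(P_i[0], n_T[-1])                          # ~4e-18 km^-2 yr^-1, ~41 impacts
\end{verbatim}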
\subsection{Size distribution\label{subsec:mpf}}
A general procedure to analytically estimate the MPF has been outlined in \citet{2009AJ....137.4936M}. A limitation
of this procedure arises from uncertainties in modeling the processes of crater erasure such as, in particular,
the obliteration of older and smaller craters by newer and larger ones. The crater erasure can be included in the
MPF through a weight function, as explained in \citet{2006Icar..183...79O} and \citet{2009AJ....137.4936M}.
Here we instead develop a Monte Carlo approach to forward model the crater size distribution
\citep[also see][]{2016NatCo...712257M}.
To simulate the formation of craters we combine the observed size distribution of the main belt asteroids
with the chronology functions obtained from our dynamical model. The size distribution is constructed from
the WISE/NEOWISE observations \citep{2011ApJ...741...68M, 2016PDSS..247.....M}\footnote{Available
at the NASA PDS Small Bodies Node, \url{https://sbn.psi.edu/pds/resource/neowisediam.html}},
which is practically complete down to $D_{\mathrm{ast}}\simeq9$-10 km. For diameters slightly smaller than that,
we adopt an extrapolation $\mathcal{N}=10^{\,\alpha}D_{\mathrm{ast}}^{\,\,\,\,-\gamma}$, where $\alpha=6.5,\,\gamma=-2.6$ for the
distribution of the whole main belt, and $\alpha=6.23,\,\gamma=-2.54$ for the main belt background, i.e. subtracting
the members of known asteroid families. These extrapolations were obtained by fitting the size distribution
of asteroids slightly larger than 10 km by a power law and extending the power law below 10 km.
Our model consists of the following steps:
\begin{enumerate}
\item We define the minimum impactor diameter, $D_{\mathrm{ast},0}$, that needs to be accounted for to match the smallest craters
that we want to model.
\item We use Eq. (\ref{eq:ntd}) to determine the average number of impacts $\overline{n}_{\mathrm{imp}} = n(T,D_{\mathrm{ast},0})$ over the desired time span $T$.
\item We draw the actual number of impacts $n_{\mathrm{imp}}$ over the desired time span from a Poisson distribution with mean $\overline{n}_{\mathrm{imp}}$.
\item We generate $n_{\mathrm{imp}}$ craters from main belt impactors larger than $D_{\mathrm{ast},0}$ using the following procedure:
\begin{enumerate}
\item From the main belt size distribution, we draw the size $D_{\mathrm{ast}}$ of the impactor (in m).
\item From the chronology function, we draw the time $T$ that will represent the crater age.
\item We obtain the velocity $v$ of the impact (in $\mathrm{m\,s}{}^{-1}$) at the time $T$.
Note that this is more accurate than just drawing a value from the overall impact velocity distribution,
because velocities are slightly higher at earlier times.
\item We set the impact angle $\theta=45^{\circ}$ \citep{1962pam..book.....K}.
\item We compute the crater diameter $D_{\mathrm{crat}}$ (in m) using the scaling law from \citet{2016Icar..271..350J}
for non-porous targets:
\begin{equation}
D_{\mathrm{crat}}=1.52\,D_{\mathrm{ast}}^{\,\,\,0.88}\,v^{0.5}\,\left(\sin\theta\right)^{0.38}\left(\frac{\delta}{\rho}\right)^{0.38}g^{-0.25}D_{\mathrm{sc}}^{\,\,-0.13}\ .\label{eq:scal}
\end{equation}
Here, $\delta$ is the impactor's density, $\rho$ is the target's density, $g$ is the target's surface gravity
(in $\mathrm{m\,s}{}^{-2}$), and $D_{\mathrm{sc}}$ is the simple-to-complex transition diameter (i.e., the diameter
for which the crater starts to develop complex structures, such as multiple ridges, concentric rings, etc.). The
values of these parameters adopted here for Ceres and Vesta are given in Table \ref{params}.
\end{enumerate}
\item We assign to each crater the initial weight $W=1$.
\item To account for crater erasure, we consider, one by one, the model-generated craters with size $D_{\mathrm{crat}}$ and age $T$.
We then select all craters with sizes $<D_{\mathrm{crat}}$ and ages $>T$, and subtract from their weights an amount $\pi D_{\mathrm{crat}}^{2}/(4S)$, which is the ratio of the crater surface area to the body surface area.
When $W$ becomes 0, the corresponding crater is assumed to be totally obliterated. This recipe is designed to model the crater overlap only, i.e. a ``cookie cutting'' approach.
\item The final size distribution of craters is obtained by adding all weights of craters with diameter $D_{\mathrm{crat}}$
together.
\item The steps (3) to (7) are repeated 1000 times to build up statistics. We compute the mean and the $1\sigma$
uncertainty of crater size distributions and compare them to observations.
\item Optionally, we can include formation of a large basin at a specific time. This is done, for example, to test
the erasure of older and smaller craters by the Rheasilvia basin formation.
\end{enumerate}
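A minimal Python sketch of steps (3) to (7) above is given below. The impactor size and age samplers, as well as the target and impactor parameters, are placeholders used only for illustration; the values actually adopted for Ceres and Vesta are those of Table \ref{params}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def crater_diameter(D_ast, v, theta, delta, rho, g, D_sc):
    """Crater diameter (m) from the non-porous scaling law of Eq. (3)."""
    return (1.52 * D_ast**0.88 * v**0.5 * np.sin(theta)**0.38
            * (delta / rho)**0.38 * g**(-0.25) * D_sc**(-0.13))

# Placeholder parameters (SI units); the adopted values are in Table 1.
delta, rho, g, D_sc = 2600.0, 2160.0, 0.28, 1.0e4
S_body = 4.0 * np.pi * (470.0e3)**2                    # target surface area (m^2)

# Step 3: draw the actual number of impacts from a Poisson distribution.
n_imp = rng.poisson(50)

# Step 4: draw impactor sizes, ages and speeds (toy samplers, not the model's).
D_ast = 4.0e3 * (1.0 - rng.random(n_imp))**(-1.0 / 2.5)  # power-law sizes > 4 km (m)
T_age = 4.56 * rng.random(n_imp)                         # crater ages (Gy)
v_imp = rng.uniform(4.6e3, 7.0e3, n_imp)                 # impact speeds (m/s)
theta = np.deg2rad(45.0)
D_crat = crater_diameter(D_ast, v_imp, theta, delta, rho, g, D_sc)

# Steps 5-6: initial weights W = 1, then cookie-cutting erasure of smaller,
# older craters by each crater i.
W = np.ones(n_imp)
for i in range(n_imp):
    erased = (D_crat < D_crat[i]) & (T_age > T_age[i])
    W[erased] -= np.pi * D_crat[i]**2 / (4.0 * S_body)
W = np.clip(W, 0.0, None)                              # W = 0: totally obliterated

# Step 7: weighted cumulative crater size distribution.
order = np.argsort(D_crat)[::-1]
N_cumulative = np.cumsum(W[order])
\end{verbatim}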
The crater erasure algorithm in step (6) is a simple method that only accounts for crater overlap. It does not take
into account, for example, that the material ejected from large craters may degrade/bury small craters at considerable distance
from the impact site. It is therefore expected that our method could underestimate the erasure of small craters. Here, however, we restrict our analysis to $D_{\mathrm{crat}}>50$ km craters for which this effect may not be important.
\subsection{Caveats\label{sec:Caveats}}
As a by-product of the procedure outlined above we obtain a set of $D_{\mathrm{crat}}$ \textsl{vs.} $D_{\mathrm{ast}}$ values indicating that the
scaling law in Eq. (3) approximately follows a linear dependence $D_{\mathrm{crat}}\simeq f_{\mathrm{sl}}\times D_{\mathrm{ast}}$, where $f_{\mathrm{sl}}$
is a constant factor, at least in the
size range considered here. The typical values of $f_{\mathrm{sl}}$ are in the range $\sim11$-13 for Ceres and
$\sim8$-10 for Vesta. Therefore, if we want to fit the size distribution of craters with $D_{\mathrm{crat}} > 60$ km, we have
to set $D_{\mathrm{ast},0}\sim6$ km in the case of Vesta and $D_{\mathrm{ast},0}\sim4$ km in the case of Ceres. This creates a problem because the dynamical model used here
is strictly reliable only for $D_{\mathrm{ast}}\gtrsim 10$ km (because it does not account for the size-dependent processes such
as the Yarkovsky effect or collisional fragmentation).
The Yarkovsky drift of a $D_{\mathrm{ast}} = 4$ km asteroid is expected to be $\sim0.04$ au~Gy$^{-1}$. The drift may be directed
inward or outward, depending on asteroid's spin axis orientation. The intrinsic collision probability of the target is not
expected to be significantly affected by this, because the inward and outward drifts would average out (assuming a random
orientation of spin axes). The main effect of the Yarkovsky drift should consist in larger depletion of small
main belt asteroids relative to our size-independent model where asteroid orbits are expected to be more stable. This could potentially
mean that the chronology function would have a somewhat steeper dependence with time for $D_{\mathrm{ast}} < 10$~km than for
$D_{\mathrm{ast}}>10$~km impactors. The investigation of this effect is left for future work.
The effects of collisional grinding are difficult to estimate. The collisional grinding removes mass over time and thus
reduces the population of small asteroids. This happens on top of the dynamical depletion. The general expectation
is that the belt should evolve faster initially when it is still massive \citep{2005Icar..175..111B}. Recall that we anchor the
results of our dynamical model to the {\it current} population of small asteroids. Thus, running the clock back in time,
our model must underestimate the actual number of impacts (because it does not account for impactors that were
collisionally eliminated).
The formation of asteroid families over the age of the solar system enhances the two effects discussed above, but it also has another consequence. There are several large collisional families in the outer asteroid belt
(Themis, Hygiea and Eos families) and these families have many $D_{\mathrm{ast}} \sim 10$ km members (\citealp{2015aste.book..297N}; \citealp{2015PDSS..234.....N}). Including
these bodies in our calibration effectively means that we assume that all these families existed for the whole duration
of our simulation (i.e., formed 4.56 Ga), which is clearly not the case because, for example, the Eos family formed
only $\sim1.3$ Ga \citep{2006Icar..182...92V}. To test how this approximation affects our results, we can
remove asteroid families from the main belt and calibrate our chronology on the current main belt background. These
tests show a variation in the number of impacts by a factor of $\sim2$. The uncertainty of our results, described below,
cannot be better than that.
Finally, another possible source of uncertainty is the contribution of the Hungaria asteroid population to the collisional rates in the main belt; this population may have constituted a significant early population depending on the eccentricity history of Mars \citep{2018Icar..304....9C}. Model CASE1B from \citet{2017AJ....153..103N} does account for a primordial population of asteroids in the range $1.6<a<2.1$ au, the so-called E-belt \citep{2012Natur.485...78B}. Therefore, the derived production functions and chronologies used here include the effects of this population. However, model CASE1B did not reproduce well the currently observed population of Hungarias, because the E-belt became more depleted than it should, especially at later times \citep{2015AJ....150..186R}. In any case, the uncertainty introduced by this effect is small and would be within the factor of 2 discussed above.
\section{Results\label{sec:Results}}
\subsection{Comparison of lunar and asteroid chronologies}
The chronology functions obtained in our model for Vesta, Ceres and the Moon are compared in Fig. \ref{crono}.
The lunar chronology shows a vast number of impacts during the early epochs when the impactor flux is at least
$\sim2$ orders of magnitude higher than at the present time \citep{2017AJ....153..103N}. This happens because many
main belt asteroids become destabilized during the planetary migration/instability and evolve into the terrestrial
planet region, which leads to a strong asteroid bombardment of the Moon and terrestrial planets. In contrast,
the impact flux on Vesta and Ceres is nearly constant with time. This happens because Vesta and Ceres orbit within the main belt and are continuously impacted by asteroids.
For them, the early bombardment is not as dramatic
as for the Moon. This means that the lunar chronology does not apply to Vesta or Ceres. These considerations
also imply that Vesta's and Ceres's craters should be on average younger than the lunar craters.
\citet{2014P&SS..103..131O} reached similar conclusions. To illustrate this, we show the Vesta chronology
from \citet{2014P&SS..103..131O} in Fig. \ref{crono}b. We used Eqs. (16) and (18) in their paper and scaled
their MPF (their figure 1) assuming a linear scaling law with $8\leq f_{\mathrm{sl}}\leq20$. Note that
$f_{\mathrm{sl}}\sim9$ reproduces well the scaling law of \citet{2016Icar..271..350J} for Vesta. We would
therefore expect that our results for Vesta should plot near the upper limit of their chronology function
range, and this is indeed the case. In \citet{2014P&SS..103..131O}, Vesta's chronology was pieced together
from several publications and was compared with the lunar chronology of \citet{2001SSRv...96...55N} (which
was obtained by yet another method). The advantage of our approach is that all chronologies are derived from
a single, self-consistent physical model.
\subsection{Impact flux for early and late instabilities}
The time of planetary migration/instability is a crucial parameter for the Moon as it substantially changes
the lunar impact flux during early stages and the overall number of impacts (Fig. \ref{crono}a). Vesta's
and Ceres's impact records are much less sensitive to this parameter. Indeed, Fig. \ref{crono}a shows that
the records are nearly identical for $T_{\mathrm{inst}}=4.5$ Ga and $T_{\mathrm{inst}}=3.9$ Ga. We therefore do not
expect to find many clues about the LHB or the early evolution of the giant planets by analyzing the
crater record of these asteroids. Given that other available constraints indicate that the instability
happened early \citep{2018NatAs...2..878N}, we give preference to the early instability case in the rest of the paper.
We find no appreciable difference for the Gaussian and Grand Tack initial distributions. The Gaussian
initial distribution, as described in Sect. \ref{sec:Dynamical-model}, is used in the following analysis.
The early instability model suggests that the Moon should have
registered $\sim27$ impacts from $D_{\mathrm{ast}}>9$ km asteroids over the age of the solar system (see also \citealp{2017AJ....153..103N}),
while Ceres and Vesta registered $\sim51$ and $\sim16$ such impacts, respectively (Fig. \ref{crono}b). According to
\citet{2014P&SS..103..131O}, Vesta would have registered between 10 and 75 impacts of $D_{\mathrm{ast}}>9$ km asteroids, but $\sim70$\% of these
impacts would have occurred during the first 50 My of evolution. In general, O'Brien et al.'s chronology produces
$\sim1.5$ times fewer impacts per Gy during the last $\sim4$ Gy than our chronology (assuming $f_{\mathrm{sl}}\sim9$).
This discrepancy is, at least in part, related to the fact that O'Brien et al.'s chronology shows a drop at the
very beginning, reflecting their attempt to account for strong depletion of the main asteroid belt by processes
not modeled here (e.g., planetary embryos, Grand
Tack).\footnote{The strong
depletion of the asteroid belt was thought to be needed, because the formation models based on the
minimum mass solar nebula suggested that the primordial mass of the main belt was 100-1000 times larger than
the present one \citep{1977Ap&SS..51..153W}. Also, the classical model of asteroid accretion by collisional
coagulation required a large initial mass to produce 100-km class objects. The formation paradigm has shifted,
however, with more recent models favoring a low initial mass \citep{2015aste.book..493M}.}
\citet{2016NatCo...712257M} derived a chronology function for Ceres that has a very similar shape to
O'Brien et al.'s chronology for Vesta. It also shows a drop during the first 50 My of evolution due
to a presumably strong primordial depletion of the main belt. Using this chronology, they predicted 180 and
90 impacts from impactors with $D_{\mathrm{ast}}>10$ km and $D_{\mathrm{ast}}>13$ km, respectively. According to their scaling laws,
these impactors produce craters with $D_{\mathrm{crat}}\sim100$ km. About 70\% of these impacts happen during the first
400 My of evolution (i.e. before the dynamical instability that they place at 4.1 Ga). Compared to that,
our model implies $\sim$4 times fewer impacts and we do not find any significant difference between the
number of impacts for the early and late instability cases. The number of craters of various sizes expected
from our model is reported in Table \ref{tab-counts}. For Vesta, these numbers are in a good agreement
with observations, especially if we account for modest crater erasure (see Sect. \ref{subsec:mpf}). For Ceres, strong crater erasure by viscous relaxation may be required (Sect. \ref{sec:Ceres-craters}).
\subsection{Vesta's craters}
Figure \ref{distvesta} compares our model size distributions of Vesta's craters to observations.
To introduce this comparison, recall that we have blindly taken a dynamical model of the
asteroid belt evolution (i.e., without any a priori knowledge of what implications the model will have for
Vesta's crater record) and used a standard scaling law to produce the crater record. There is not much
freedom in this procedure. If the dynamical model were not accurate, for example, we could have obtained
orders of magnitude more or less craters than what the {\it Dawn} mission found. But this is not the case.
In fact, there is a very good general agreement between the model results and observations. This also
shows that the caveats discussed in Sect. \ref{sec:Caveats} do not (strongly) influence the results.
In more detail, in a model where no crater erasure is taken into account (left panel of Fig.
\ref{distvesta}), the agreement is excellent for craters with $D_{\mathrm{crat}}>100$ km. There is a small difference
for $D_{\mathrm{crat}}\lesssim100$ km, where the model distribution rises steeply and slightly overestimates the
number of craters. A similar problem was identified in \citet{2014P&SS..103..131O}. We tested whether
this issue may be a result of crater erasure. Indeed, when crater erasure is included in the model
(the middle panel of Fig. \ref{distvesta}), the size distribution shifts down and becomes slightly
shallower. It now better fits the data in the whole range modeled here. The results do not change
much when we include the presumed Rheasilvia basin formation at $\sim1$ Gy ago (right panel of Fig.
\ref{distvesta}).\footnote{If the dynamical model is calibrated on the main belt background (i.e.,
asteroid families removed; Sect. \ref{sec:Caveats}), we obtain $\sim2$ times fewer craters. This does not make
much of a difference on the logarithmic scale in Fig. \ref{distvesta}, but the overall fit without
crater erasure becomes slightly better.}
In summary, our model reproduces Vesta's crater record very well, and a modest amount of crater
erasure may be needed to better fit the number of $D_{\mathrm{crat}} \lesssim 100$ km craters.
\subsection{Ceres's craters\label{sec:Ceres-craters}}
Figure \ref{distceres} shows a similar comparison for Ceres. In this case, the model without crater
erasure predicts nearly an order of magnitude more craters on Ceres's surface than the number of
actual craters. A similar problem was noted in \citet{2016NatCo...712257M}. The situation improves
when the crater erasure is included in the model (middle panel of Fig. \ref{distceres}), but the problem
is not entirely resolved. We could have tried to erase craters more aggressively, for example, by assuming that small craters are degraded by distal ejecta from large craters \citep{2019Icar..326...63M}. But this would create problems for
Vesta where the model with our conservative erasure method (craters must overlap to be erased) worked quite well.
Actually, \citet{2019Icar..326...63M} showed that crater degradation by energetic deposition of ejecta (e.g. secondary cratering/ballistic sedimentation) on the Moon works differently for the larger craters comparable to the crater sizes considered here \citep{2019EPSC...13.1065M}, so that mechanism would probably not be applicable in the cases of Ceres and Vesta.
Following \citet{2016NatCo...712257M}, we therefore investigate the effects of
viscous relaxation (which are specific to ice-rich Ceres).
To empirically incorporate the effects of viscous relaxation in our model, we assume that the model
weight of each crater diminishes according to the following prescription:
\begin{equation}
W=\exp\left(-T/\tau\right)\label{eq:ww}
\end{equation}
where the e-folding timescale is a function of the crater diameter,
\begin{equation}
\tau = C/ D_{\mathrm{crat}} \label{eq:tau}
\end{equation}
as supported by classical models of relaxation on icy surfaces \citep[e.g.][]{1973Icar...18..612J,2012GeoRL..3917204B,2013Icar..226..510B}. Here, $C = 4\pi\eta/(\rho g)$ is a constant that depends on the viscosity $\eta$ of the surface layer.
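A minimal sketch of this prescription, and of the conversion from $C$ to an implied surface viscosity, is given below; the crustal density and surface gravity used here are assumed round values, not those of Table \ref{params}.
\begin{verbatim}
import numpy as np

def relaxed_weight(T_gy, D_crat_km, C_km_gy=200.0):
    """Residual crater weight W = exp(-T/tau) with tau = C / D_crat."""
    tau = C_km_gy / D_crat_km          # e-folding timescale (Gy)
    return np.exp(-T_gy / tau)

# Implied surface viscosity from C = 4*pi*eta / (rho * g), with assumed values
# for the density and surface gravity of Ceres's outer layer.
C_si = 200.0 * 1.0e3 * 3.156e16        # 200 km Gy expressed in m s
rho, g = 2160.0, 0.28                  # kg m^-3, m s^-2 (assumed)
eta = C_si * rho * g / (4.0 * np.pi)   # ~3e23 Pa s
print(eta, relaxed_weight(1.0, 100.0)) # tau(100 km) = 2 Gy -> W ~ 0.61 after 1 Gy
\end{verbatim}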
The right panel of Fig. \ref{distceres} shows the model results for Ceres considering crater erasure together with viscous relaxation. In this case, we are able to fit the observed crater record assuming a value of $C\simeq 200$~km~Gy, which would imply a surface viscosity of $\sim 3\times 10^{23}$~Pa~s. This is about three orders of magnitude larger than the viscosity of pure ice at 180 K (the approximate temperature of Ceres's surface), meaning that the volumetric particulate content of the icy surface layer needs to be significant. In fact, viscous relaxation of a purely icy surface is expected to be an aggressive process, with a typical e-folding timescale of only 1 My for the erasure of topographic wavelengths as short as 100 km. Our result is in line with more rigorous studies of the Ceres internal structure \citep{2017E&PSL.476..153F}, which infer a mechanically strong crust, with maximum effective viscosity $\sim 10^{25}$~Pa~s.
This gives some support to the viscous relaxation prescription discussed above. We caution,
however, that the results are likely not unique and different combinations of crater erasure and viscous
relaxation prescriptions (e.g., more aggressive crater erasure and longer viscous relaxation timescale)
could produce similarly good fits.
In summary, we find that both erasure processes should be important for Ceres, and that $D_{\mathrm{crat}}\sim100$ km craters on Ceres should viscously relax on an e-folding timescale of $\sim 1$-2 Gy. This represents an interesting constraint on geophysical models of viscous relaxation and Ceres's composition.
\subsection{Basin formation}
Here, we discuss the probability of forming large craters or basins ($D_{\mathrm{crat}}>400$~km) on Vesta and Ceres at different
times in the past.
One possible approach to this problem consists in computing the so-called isochrones for each body, i.e. the crater production function at different times $T$. For a given diameter $D_{\mathrm{crat}}$, each isochrone gives the expected number of craters $\mu(>D_{\mathrm{crat}},\,<T)$, and the probability of forming exactly $N$ (and only $N$) craters $>D_{\mathrm{crat}}$ in a time $<T$ is obtained from a Poisson distribution:
\begin{equation}
p_{\mu}(N)=\frac{\mu^N\,e^{-\mu}}{N!}\label{eq:pn}
\end{equation}
Figure \ref{isocro} shows the isochrones for Ceres and Vesta, as determined from our model, without considering any crater erasure. If we take the case of a 500 km basin on Vesta, we find that the expected value for the $T=1$~Ga isochrone is $\mu =0.10$, and from Eq. (\ref{eq:pn}) the probability of forming one basin is 9\%, while the probability of forming two basins is much smaller, 0.5\%. However, if we consider the $T=4.56$~Ga isochrone, the probability of forming two basins increases to 4.6\%. We recall that the probability of forming at least one 500 km basin in the last 1 Gy can be obtained as $1-p_{\mu}(0)$, which in this case would give a value of 9.5\%. Table \ref{tab-poison} summarizes the results for $D_{\mathrm{crat}}>400$ km.
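These numbers follow directly from Eq. (\ref{eq:pn}); a minimal Python sketch using the quoted expectation $\mu=0.10$ for the $T=1$~Ga isochrone is:
\begin{verbatim}
from math import exp, factorial

def p_mu(N, mu):
    """Poisson probability of forming exactly N craters given the expectation mu."""
    return mu**N * exp(-mu) / factorial(N)

mu = 0.10                        # expected number of 500 km basins on Vesta, T = 1 Ga
print(p_mu(1, mu))               # ~0.090  -> 9% for exactly one basin
print(p_mu(2, mu))               # ~0.0045 -> 0.5% for exactly two basins
print(1.0 - p_mu(0, mu))         # ~0.095  -> 9.5% for at least one basin
\end{verbatim}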
Another possible approach consists in using our model to directly determine the probability of producing at least $N$ craters larger than a given size over a certain time span. This approach differs from
the previous one in that it does not rely on the Poisson statistics, but on the output of the Monte Carlo simulations. Figure \ref{probab} shows the probability of creating at least one crater (panel a)
and at least two craters (panel b) larger than a cutoff diameter on Vesta. Again, no crater erasure is
considered here. We find that the probability of creating the Rheasilvia basin with $D_{\mathrm{crat}}\simeq500$~km
(the cyan line in panel a) in the last 1 Gy (or 2 Gy) is 10\% (or 18\%). This is about 2.5 times more likely than
the probability reported in \citet{2014P&SS..103..131O}. This happens because our chronology function
leaves more space for a relatively late formation of craters/basins. \citet{2014P&SS..103..131O},
instead, adopted a strong primordial depletion and had more basins forming early on (e.g.
\citealp{2007Icar..191..434O}). If we consider $D_{\mathrm{crat}}>400$~km (blue line in panel a), the probabilities of forming at least one crater become 14\% in the last 1 Gy, and 25\% in the last 2 Gy. These values are slightly larger than those reported in Table \ref{tab-poison}, because the Poisson statistics constrains the formation of exactly $N$ craters.
The probability of forming both the Rheasilvia and Veneneia basins (the blue line in Fig. \ref{probab}b corresponding to $D_{\mathrm{crat}}=400$ km) is 15\% over the age of the solar system, and 6\% in the last 3 Ga. Again, these values are slightly larger than those reported in Table \ref{tab-poison}.
Table \ref{tab-bene} reports different probabilities assuming that the age of Rheasilvia is $\leq1$
Gy (note that we do not claim that this is an accurate age of the Rheasilvia basin; we merely test this
assumption) and Veneneia is $>1$ Gy, for the models with early and late instabilities. The probabilities
are slightly higher in the early instability model simply because, in this model, the rate of impacts is slightly
higher in the past 1 Gy. Thus, a young age for Rheasilvia could potentially be more consistent with
an early instability model. In any case, our chronology still implies that most Vesta's craters/basins
should have preferentially formed early in the solar system history.
Figure \ref{probceres} shows the results for Ceres. In this case, the probability of {\it not} creating any
basin with $D_{\mathrm{crat}}>400$ km over the age of the solar system is only 1\% (the red line in Fig. \ref{probceres}).
Combining this result with the one for Vesta (see above) we estimate that the joint probability of creating
two $D_{\mathrm{crat}} > 400$~km basins on Vesta younger than 3 Gy and no $D_{\mathrm{crat}} > 400$ km basin on Ceres is only $<0.1$\%.
Figure \ref{proyect} shows, at the top, the one exceptional case we found over 1000 realizations that fulfills the above condition. For comparison, an example of the typical outcome of our Monte Carlo model is shown at the bottom. This result emphasizes the need for efficient removal of Ceres's basins by viscous relaxation (or some other process).
\section{Conclusions\label{sec:Conclusions}}
Our findings can be summarized as follows:
\begin{itemize}
\item The crater chronologies of Ceres and Vesta are very different from that of the Moon. This is a consequence
of the fact that both Vesta and Ceres spent their whole lifetimes in the asteroid belt and are impacted
all the time, whereas the Moon experienced a more intense bombardment during the first $\sim1$ Gy.
This means that using the lunar chronology for Ceres and Vesta is incorrect. The scaled lunar chronology would
imply that Vesta's basins must have formed very early in the solar system history, which may not necessarily
be the case.
\item Our crater chronologies of Ceres and Vesta are similar to those obtained in some previous studies
(\citealp{2014P&SS..103..131O}; \citealp{2016NatCo...712257M}). In our chronology, however, the crater ages are
not as concentrated toward the early times as in these works, allowing more impacts in the past 3~Gy.
\item The model crater record of Vesta matches observations (e.g., 10 known craters with $D_{\mathrm{crat}}>90$ km).
The model with crater erasure overpredicts, by a factor of $\sim3$, the number of $D_{\mathrm{crat}}>90$ km craters observed on Ceres's surface. An additional erasure process, such as the size-dependent viscous relaxation of craters (with a $\sim 2$ Gy timescale for $D_{\mathrm{crat}}=100$ km craters), may account for this discrepancy.
\item We estimate that the probability of creating the Rheasilvia and Veneneia basins ($D_{\mathrm{crat}} >400$ km) on Vesta during the last
3 Gy is $\simeq 6$\%, somewhat larger than found in the previous studies. A recent formation of the Rheasilvia basin can be more easily accepted in a dynamical model with the early instability, where the impact probabilities in the last 1 Gy are higher.
\item The probability of producing two large basins ($D_{\mathrm{crat}} > 400$ km) on Vesta and simultaneously not producing
any basin on Ceres is interestingly small ($<0.1$\%). The relative paucity of large craters/basins on Ceres may
be explained in a model with crater erasure and viscous relaxation.
\end{itemize}
\acknowledgments
The authors wish to thank David Minton for helpful comments and suggestions during the revision of this paper. FR's work was supported by the Brazilian National Council of Research (CNPq). DN's work was supported by the NASA SSERVI and SSW programs.
The simulations were performed on the SDumont computer cluster of the Brazilian System of High Performance Processing (SINAPAD).
\section{Tension between the distance anchors}
\label{sec:anchors}
R11 and R16 use the LMC, NGC 4258 and Milky Way (MW) Cepheid parallaxes as the primary distance anchors.
In Sect. \ref{subsec:LMCanchor}, we discuss the LMC and NGC 4258 anchors and show that their geometrical distances are in
tension with the R16 photometry. The MW distance anchor is discussed in Sect. \ref{subsec:MWanchor}.
\subsection{The LMC and NGC 4258 anchors}
\label{subsec:LMCanchor}
Recently, there have been substantial improvements in the accuracies of the distance anchors.
Using 20 detached eclipsing binary (DEB) systems, \cite{Pietrzynski:2019} determine a distance modulus for
the Large Magellanic Cloud (LMC) of
\begin{equation}
\mu_{\rm LMC} = 18.477 \pm 0.026 \ {\rm mag}. \label{equ:mu1}
\end{equation}
A reanalysis of VLBI observations \cite{Humphreys:2013} of water masers in NGC 4258 by \cite{Reid:2019}
gives a distance modulus of
\begin{equation}
\mu_{\rm N4258} = 29.397 \pm 0.033 \ {\rm mag}, \label{equ:mu2}
\end{equation}
substantially reducing the systematic error in the geometric distance to this galaxy compared to the distance
estimate used in R16.
In addition, R19 have published HST photometry for
LMC Cepheids using the same photometric system as that used for the
cosmological analysis of $H_0$ in R16. With
these observations, calibration errors associated with ground based photometry of LMC Cepheids
are effectively eliminated as a significant source of systematic error.
I use the data for the 70 LMC Cepheids listed in R19
and the data for the 139 Cepheids in NGC 4258 from R16 to perform a joint $\chi^2$ fit:
\begin{subequations}
\begin{eqnarray}
(m^W_H)_{\rm LMC} & = & \mu_{\rm LMC} + c + b \ {\rm log}_{10} \left ({ P \over 10 \ {\rm days} } \right), \label{equ:M3a} \\
(m^W_H)_{\rm N4258} & = & \mu^P_{\rm N4258} + c + b \ {\rm log}_{10} \left ({ P \over 10 \ {\rm days} } \right), \label{equ:M3b}
\end{eqnarray}
\end{subequations}
where $\mu_{\rm LMC}$ is fixed at the value of Eq. (\ref{equ:mu1}) and $\mu^P_{\rm N4258}$, $c$, $b$
are parameters to be determined from the fit\footnote{For the LMC Cepheids an intrinsic scatter
of $0.08$ mag is added to the error estimates given in R19 which is necessary to produce a reduced $\chi^2$ of unity.}. The results are as follows:
\begin{subequations}
\begin{eqnarray}
\mu^P_{\rm N4258} & = & 29.220 \pm 0.029, \label{equ:M4a} \\
c & = & -5.816 \pm 0.011, \label{equ:M4b}\\
b & = & -3.29 \pm 0.04. \label{equ:M4c}
\end{eqnarray}
\end{subequations}
The difference between (\ref{equ:M4a}) and (\ref{equ:mu2}) is
\begin{equation}
\Delta \mu_{\rm N4258} = (\mu_{\rm N4258} - \mu^P_{\rm N4258}) = 0.177 \pm 0.051, \label{equ:M4}
\end{equation}
and differs from zero by nearly $3.5 \sigma$. In other
words, the DEB LMC distance together with SH0ES Cepheids is placing NGC 4258 at a distance of
$6.98 \ {\rm Mpc}$ if metallicity effects are ignored, whereas the maser distance is $7.58 \ {\rm Mpc}$.
The best fit PL relation to the combined LMC and NGC 4258 data is shown in Fig. \ref{fig:PLanchor}.
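The joint fit of Eqs. (\ref{equ:M3a})--(\ref{equ:M3b}) is a weighted linear least-squares problem. A minimal sketch is given below; the data files are hypothetical stand-ins for the R19 LMC and R16 NGC 4258 photometry, the LMC modulus is fixed at the value of Eq. (\ref{equ:mu1}), and the $0.08$ mag intrinsic scatter is added to the LMC errors as described in the footnote above.
\begin{verbatim}
import numpy as np

# Placeholder arrays standing in for the R19 LMC and R16 NGC 4258 Cepheid data:
# Wesenheit magnitudes m, periods P (days), and magnitude errors sigma.
m_lmc, P_lmc, sig_lmc = np.loadtxt("lmc_r19.dat", unpack=True)        # hypothetical file
m_n42, P_n42, sig_n42 = np.loadtxt("n4258_r16.dat", unpack=True)      # hypothetical file

mu_lmc = 18.477                                   # DEB distance modulus of the LMC
sig_lmc = np.sqrt(sig_lmc**2 + 0.08**2)           # add the intrinsic PL scatter (LMC only)

# Joint model: m - mu_LMC = c + b*log10(P/10) for the LMC,
#              m = mu_N4258 + c + b*log10(P/10) for NGC 4258.
y = np.concatenate([m_lmc - mu_lmc, m_n42])
x = np.concatenate([np.log10(P_lmc / 10.0), np.log10(P_n42 / 10.0)])
flag = np.concatenate([np.zeros_like(m_lmc), np.ones_like(m_n42)])
w = 1.0 / np.concatenate([sig_lmc, sig_n42])

A = np.column_stack([np.ones_like(y), x, flag])   # columns: c, b, mu^P_N4258
c_fit, b_fit, mu_n4258 = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]

# Convert distance moduli to distances in Mpc: d = 10**(mu/5 - 5).
print(10**(mu_n4258 / 5.0 - 5.0))                 # ~7.0 Mpc for mu ~ 29.22
print(10**(29.397 / 5.0 - 5.0))                   # ~7.6 Mpc for the maser modulus
\end{verbatim}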
\begin{figure}
\centering
\includegraphics[width=150mm, angle=0]{figures/pgPLanchor.pdf}
\caption{The joint PL relation for the LMC and NGC 4258 Cepheids.
The line shows the best fit from
Eqs. (\ref{equ:M4a}) -- (\ref{equ:M4c}).}
\label{fig:PLanchor}
\end{figure}
There are a number of possible explanations of this result:
\smallskip
\noindent
(i) There may be unidentified systematic errors in the geometric distance estimates of Eqs. (\ref{equ:mu1})
and (\ref{equ:mu2}).
\smallskip
\noindent
(ii) Part of the discrepancy may be attributable to a metallicity dependence
of the PL relation.
\smallskip
\noindent
(iii) There may be a photometric offset between the R16 and LMC Cepheid magnitudes.
\smallskip
Since this discrepancy involves two of the three primary distance anchors used by the
SH0ES team, it needs to be considered extremely seriously.
Point (i) can be tested by using independent distance indicators. For example, a recent
paper \cite{Huang:2018} used the near-infrared PL relation of Mira variables in the LMC and NGC 4258 to
determine a relative distance modulus of
\begin{equation}
\mu_{\rm N4258} - \mu_{\rm LMC} = 10.95 \pm 0.06, \qquad {\rm Miras}, \qquad \qquad \label{equ:mu3}
\end{equation}
(where the error is dominated by systematic errors). This estimate is in good agreement with
the geometric estimates of Eqs. \ref{equ:mu1} and \ref{equ:mu2}:
\begin{equation}
\mu_{\rm N4258} - \mu_{\rm LMC} = 10.92 \pm 0.04, \qquad {\rm DEB+maser}. \label{equ:mu4}
\end{equation}
\subsection{Milky Way parallaxes and NGC 4258}
\label{subsec:MWanchor}
\begin{figure}
\centering
\includegraphics[width=150mm, angle=0]{figures/pgMWPL.pdf}
\caption{The PL relations for the MW Cepheids with parallaxes and NGC 4258 Cepheids.
The solid line shows the best fit to the joint data.}
\label{fig:MWPL}
\end{figure}
Parallaxes of Milky Way (MW) Cepheids have been used by the SH0ES team
as a third distance anchor. Following the publication of R16,
\cite{Riess:2018} have measured parallaxes for an additional 7 MW
Cepheids with periods greater than 10 days, supplementing the parallax
measurements of 13 MW Cepheids from \cite{Benedict:2007,
vanLeeuwen:2007}. Figure \ref{fig:MWPL} shows the equivalent of
Fig. \ref{fig:PLanchor} but now using the 20 MW Cepheids as an anchor
in place of the LMC\footnote{Note that I repeated the GAIA DR2 analysis
of \cite{Riess:2018b} finding identical results for the DR2 parallax
zero point offset. I therefore do not consider GAIA parallaxes any
further.}. The best fit gives
\begin{subequations}
\begin{eqnarray}
\mu^P_{\rm N4258} & = & 29.242 \pm 0.052, \label{equ:M5a} \\
c & = & -5.865 \pm 0.033, \label{equ:M5b}\\
b & = & -3.21 \pm 0.08. \label{equ:M5c}
\end{eqnarray}
\end{subequations}
As with the LMC, this comparison disfavours the maser distance, though because the error bar is larger this is significant
at only $2.2\sigma$.
However, since the metallicities of the MW and NGC 4258 Cepheids are very similar, this comparison
suggests that metallicity differences of the PL relation cannot explain the shift reported in Eq. (\ref{equ:M4}).
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES team:} The full analysis of the SH0ES Cepheid data includes an empirically determined correction
for Cepheid metallicity as shown in equation 4.1 below. The difference stated here of 3.5$\sigma$ between the LMC
and NGC 4258 appears so only when ignoring the metallicity term. Including it brings the LMC and NGC 4258 closer
together by $\sim$ 0.06 mag and reduces the significance of the difference in these anchors to slightly over 2$\sigma$.
Because the metallicity term is measured internal to the data and included as part of the $H_0$ analysis, we think the 2$\sigma$ number is the fair statement of the significance.
We note there are now 5 other geometric anchors from different methods of Milky Way parallax (HST FGS, HST spatial scan, Gaia DR2 Cepheids, Gaia DR2 Cepheid companions, and Gaia DR2 cluster hosts), which yield $H_0$=76.2 (2.2\%), 75.7 (3.3\%), 73.7 (3.3\%), 72.7 (3.8\%) and 73.3 (3.5\%) (see Breuval et al. 2020), which makes it appear that the difference between 72.0 (NGC 4258, 1.5\%) and 74.2 (LMC, 1.3\%) is not significant. However, this point, which G.E. shared with us, motivated us to acquire new Cepheid observations of the outer regions of NGC 4258 with HST to measure the anchor distance at the same (i.e., low) metallicity, so we can revisit this issue with an improved characterization of metallicity. We appreciate G.E.'s suggestions and we expect an update on this in the next SH0ES paper.}
\ \par
\end{adjustwidth}
\section{Object-by-object comparison of the R11 and R16 Cepheid photometry}
\label{sec:appendix}
\begin{table}[h]
\begin{center}
\begin{tabular}{llllllll} \hline
& & & & \multicolumn{2}{c}{all Cepheids} & \multicolumn{2}{c}{outliers removed} \\
galaxy & $N_{\rm R11}$ & $N_{\rm R16}$ & $N_{\rm match}$ & $\qquad \langle \Delta m \rangle$& $\langle \Delta C \rangle$ & $ \qquad \langle \Delta m \rangle$ & $\langle \Delta C \rangle$ \\ \hline
N4536 & 69 & 33 & 28 & $-0.114 \pm 0.057$ & $0.153$ & $-0.069 \pm 0.062$ & $0.153$\\
N4639 & 32 & 25 & 17 & $-0.071 \pm 0.100$ & $0.091$ & $-0.071 \pm 0.100$ & $0.091$ \\
N3370 & 79 & 63 & 51 & $-0.105 \pm 0.055$ & $0.146$ & $-0.090 \pm 0.055$ & $0.145$\\
N3982 & 29 & 16 & 12 & $-0.178 \pm 0.090$ & $0.092$ & $-0.081 \pm 0.094$ & $0.092$\\
N3021 & 29 & 18 & 13 & $+0.120 \pm 0.146$ & $0.196$ & $+0.120 \pm 0.146$ & $0.196$ \\
N1309 & 36 & 44 & 16 & $-0.087 \pm 0.091$ & $0.330$ & $-0.087 \pm 0.091$ & $0.330$ \\
N5584 & 95 & 83 & 65 & $-0.028 \pm 0.049$ & $0.039$ & $+0.001 \pm 0.051$ & $0.038$ \\
N4038 & 39 & 13 & 11 & $-0.239 \pm 0.153$ & $0.109$ & $-0.239 \pm 0.153$ & $0.109$ \\
N4258 & 165 & 139 & 73 & $-0.217 \pm 0.055$ & $0.145$ & $-0.020 \pm 0.062$ & $0.143$
\cr \hline
\end{tabular}
\caption{Offsets to the magnitudes and colours. $N_{\rm R11}$ is the number of Cepheids in Table 2 of R11. $N_{\rm R16}$ is the number of Cepheids in Table 4 of R16. $N_{\rm match}$ is the number of Cepheids common to both tables.}
\label{table:fits}
\end{center}
\end{table}
This analysis is based on matching Cepheids from Table 2 of R11 and Table 4 of R16.
Note the following:
\smallskip
\noindent
(i) The R11 table contains Cepheids with a rejection flag. Cepheids with IFLAG = 0 were accepted
by R11 and those with IFLAG = 1 were rejected.
\smallskip
\noindent
(ii) The data in the R16 table has been `pre-clipped' by the authors and does not list
data for Cepheids that were rejected by R16. The R16 table contains Cepheids
that do not appear in the R11 table.
\smallskip
\noindent
(iii) The R16 magnitudes have been corrected for bias and blending errors from scene reconstruction.
Each Wesenheit F160W magnitude has an error estimate:
\begin{equation}
\sigma_{\rm tot} = \left(\sigma^2_{\rm sky} + \sigma^2_{\rm ct} + \sigma^2_{\rm int} + (f_{\rm ph} \sigma_{\rm ph})^2\right)^{1/2},
\end{equation}
where $\sigma_{\rm sky}$ is the dominant error and comes from contamination of the sky background by blended images,
$\sigma_{\rm ct}$ is the error in the colour term $R(V-I)$, which is small and of order $0.07$ mag;
$\sigma_{\rm int}$ is the internal scatter from the width of the instability strip, which is known
from the LMC and M31 to be small ($\approx 0.08$ mag); $f_{\rm ph} \sigma_{\rm ph}$ is the error in the phase correction
of the Cepheid light curves.
\smallskip
\noindent
(iv) The positions in R11 are not listed to high enough precision to uniquely identify a Cepheid in the R16 table.
There are ID numbers listed by R11 and R16, but for three galaxies (NGCs 3370, 3021, 1309) these numbers do not match.
Where possible, we have matched Cepheids using their ID numbers. For the remaining three galaxies, we
have used a combination of positional coincidence and agreement in periods to match Cepheids. (This gives
perfect agreement for the six galaxies with matching ID numbers.)
Outliers can have a significant effect on fits to the magnitude and colour differences. We fit:
\begin{subequations}
\begin{eqnarray}
(m_H^W)_{\rm R16} &= & (m_H^W)_{\rm R11} + \langle \Delta m_H^W \rangle, \\
(V-I)_{\rm R16} &= & (V-I)_{\rm R11} + \langle \Delta C \rangle,
\end{eqnarray}
\end{subequations}
with and without outliers, where outliers are defined as having
\begin{equation}
\left \vert {((m_H^W)_{\rm R11} - (m_H^W)_{\rm R16}) \over (\sigma_{\rm tot})_{\rm R16}} \right \vert > 2.5.
\end{equation}
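A minimal sketch of these offset fits with the $2.5\sigma$ outlier rejection is given below; the matched magnitude arrays are placeholders, and the simple inverse-variance weighting is an assumption (the exact weighting scheme used for the table is not spelled out here).
\begin{verbatim}
import numpy as np

def magnitude_offset(m_r11, m_r16, sig_r16, clip=2.5):
    """Mean offset <Delta m> = <m_R16 - m_R11> for matched Cepheids, with and
    without |Delta m / sigma_R16| > clip outliers (inverse-variance weighted)."""
    dm = m_r16 - m_r11
    w = 1.0 / sig_r16**2
    mean_all = np.sum(w * dm) / np.sum(w)
    err_all = np.sqrt(1.0 / np.sum(w))
    keep = np.abs(dm / sig_r16) <= clip
    mean_clipped = np.sum(w[keep] * dm[keep]) / np.sum(w[keep])
    return mean_all, err_all, mean_clipped

# m_r11, m_r16 and sig_r16 would be the matched Wesenheit magnitudes and R16
# errors for one host galaxy (matched by ID number or by position and period,
# as described above); the same function applies to the (V-I) colour offsets.
\end{verbatim}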
The results are given in Table \ref{table:fits}. The rest of this appendix shows the equivalent plots for each of the nine galaxies.
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A4536.pdf}
\caption
{The plot to the left shows R11 Wesenheit $H$ magnitudes plotted against R16 Wesenheit $H$ magnitudes. Central plot shows
R11 (V-I) colours plotted against R16 (V-I) colours. The dashed line in the central
panel shows the best fit offset. Plot to the right shows the difference in $H$-band magnitudes
in units of the R16 error, $((m_H^W)_{\rm R11} - (m_H^W)_{\rm R16})/(\sigma_{\rm tot})_{\rm R16}$, plotted against R16 $(V-I)$ colour.
Blue points show Cepheids with IFLAG=1 in R11 (i.e. these
were rejected by R11 but accepted by R16). Red points show Cepheids with IFLAG=0 in R11. }
\label{fig:4536}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R4536R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4536R16.pdf}
\caption
{Left plot shows the PL relation for R11 Cepheids. Blue points show Cepheids rejected by R11 (IFLAG =1);
red points show Cepheids accepted by R11 (IFLAG = 0). The solid line shows the best fit linear
relation fitted to the red points. The dashed line shows the best fit with the slope constrained to $b=-3.3$.
Right plot shows the PL relation for R16 Cepheids. The solid line shows the best fit linear
relation fitted to the red points. The dashed line shows the best fit with the slope constrained to $b=-3.3$. The parameters
of these fits are given in Table \ref{table:PL}.}
\label{fig:R4536}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A4639.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\label{fig:4639}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R4639R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4639R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R4639}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A3370.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\label{fig:3370}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R3370R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R3370R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R3370}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A3982.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\label{fig:3982}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R3982R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R3982R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R3982}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A3021.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\label{fig:3021}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R3021R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R3021R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R3021}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A1309.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\label{fig:1309}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R1309R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R1309R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R1309}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A5584.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\label{fig:5584}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R5584R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R5584R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R5584}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A4038.pdf}
\caption
{As Fig. \ref{fig:4536}. There are 39 Cepheids listed in R11 and 13 in R16 with 11 matches.}
\label{fig:4038}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R4038R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4038R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R4038}
\end{figure}
\begin{figure}
\includegraphics[width=150mm, angle=0]{figures/A4258.pdf}
\caption
{As Fig. \ref{fig:4536}.}
\end{figure}
\begin{figure}
\includegraphics[width=75mm, angle=0]{figures/R4258R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4258R16.pdf}
\caption
{As Fig. \ref{fig:R4536}}
\label{fig:R4258}
\end{figure}
\newpage
\section{Conclusions}
\label{sec:conclusions}
In the abstract, I asked the question `what would it take to make
SH0ES compatible with early time measurements?'. The right hand panel
of Fig. \ref{fig:H0_contour1} provides an answer. A bias in the
intercept of the PL relation of the crowded field SH0ES photometry
common to all SH0ES galaxies of about 0.1 - 0.14 mag, which I have
termed the SH0ES degeneracy, resolves the tension between late and
early time measurements of $H_0$ without the need for new
physics. Such a bias also resolves the tension between the geometric
distance of the LMC, MW parallaxes and the maser distance to NGC 4258,
and resolves the tension between the TRGB distance scale of F19 and
the Cepheid scale of the SH0ES team (which is based on combining the LMC, MW and NGC 4258
anchors). To my knowledge, there is no convincing way at present of
ruling out such a common mode systematic in the SH0ES data. {\it
Excluding the SH0ES degeneracy as a possibility should be a priority
for future research.}
Fortunately, this can be done by concentrating on {\it calibrations},
since the Hubble tension is a near 10\% effect. There is really no
need to focus on acquiring data for more SN host galaxies (which may
cause $\sim 2$\% changes to the value of $H_0$, see
Fig. \ref{fig:H0}). Here are some possible ways forward:
\smallskip
\noindent
(i) The CCHP and SH0ES teams disagree on the calibration of the TRGB \cite{Yuan:2019, Freedman:2020}. The main issue
concerns corrections for reddening and extinction to the LMC. As discussed by \cite{Freedman:2020}
Gaia parallaxes of MW globular clusters should improve the Galactic calibration of the TRGB and JWST should
allow precision measurements of additional calibrators. Calibrating the TRGB ladder using NGC 4258 would provide
an important consistency check of the LMC calibration.
\smallskip
\noindent
(ii) The discrepancy between the LMC and NGC 4258 anchors identified
in Sect. \ref{subsec:LMCanchor} needs to be resolved. Even if this turns
out to be a consequence of a rare statistical fluctuation (which I doubt), one needs to
establish which of these anchors is closer to the truth. In my view,
the DEB LMC distance appears to be reliable and so it would be worth rechecking the
NGC 4258 maser VLBI analysis \cite{Reid:2019}. Independent checks of
the maser distance, for example, refining the accuracy of the TRGB
distance to NGC 4258 \cite{Mager:2008}, would be particularly
valuable. Another interesting test would be to obtain HST data on
Cepheids in the outskirts of NGC 4258, reducing the impact of crowding
corrections and metallicity difference with the LMC (Riess, private
communication). If the distance to NGC 4258 can be shown to be lower
than the $7.58 \ {\rm Mpc}$ found by \cite{Reid:2019}, this would
strengthen the case for the Hubble tension.
\smallskip
\noindent
(iii) I have included the Appendix on the R11 and R16 photometric
comparison because I am far from convinced that the crowded field
photometry is unbiased and returns realistic errors. It should be
possible to apply MCMC techniques for scene reconstruction and to
calculate posterior distributions on the magnitude errors (which will
almost certainly be asymmetric). It would also be helpful if researchers
published as much data as possible, including data on outliers, to allow
independent checks.
\smallskip
\noindent
(iv) In the longer term, other techniques such as strong lensing
time delays, distant masers, and gravitational wave sirens will hopefully become
competitive with the traditional distance ladder approaches\footnote{As this article
was nearing completion \cite{Birrer:2020} presented a new analysis of strong gravitational
lensing time delays analysing the mass-sheet degeneracy and sensitivity to galaxy mass profiles.
These authors find $H_0=67.4^{+4.1}_{-3.2} ~\text{km}~\text{s}^{-1} \Mpc^{-1}$, lower than the value derived by \cite{Shajib:2020} but with a larger error. Also,
the Atacama Cosmology Telescope collaboration released the DR4 maps and cosmological parameters \cite{Naess:2020, Aiola:2020}.
Their results are in very good agreement with the results from \Planck. Although this was not a surprise to me, it
surely lays to rest any remaining concerns of even the most hardened of skeptics that the \Planck\ results are affected by systematic errors.}.
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES team: }
The case that Vanilla $\Lambda$CDM calibrated in the Early Universe predicts a value of $H_0$
of 67-68 appears very sound. It is a mystery to us why the distance ladder methods consistently find a
higher value, but one we feel worth paying attention to.
It is worth highlighting a recent result that is independent of all of G.E.'s considerations
in this talk and all other rungs, Cepheids, TRGB, SNe Ia, etc
and which has not received much attention. That is the final result from the Maser Cosmology
Project, Pesce et al. 2020, which measures geometric distances to 6 masers in the Hubble flow and finds $73.9 \pm 3.0$,
which corroborates prior indications that the local value of $H_0$ exceeds the predicted value with $\sim$ 98\% confidence
if G.E. sees no problems with it (tongue back in cheek).
}
\ \par
\end{adjustwidth}
\medskip
\medskip
\section{The SH0ES degeneracy}
\label{sec:degeneracy}
One interpretation of the results of the previous section is that
\cite{Reid:2019} have overestimated the maser distance to NGC 4258
and/or underestimated the error. The maser analysis is considerably
more complicated than the DEB distance estimates, and so
it is extremely important that the maser analysis is revisited and, if
possible, checked against independent techniques (such as TRGB
\cite{Mager:2008, Jang:2017} and Mira \cite{Huang:2018} distance
measurements). In this Section, I want to discuss another possibility, which I will call
the `SH0ES degeneracy'.
\subsection{Global fits and values of $H_0$}
\label{subsec:global_fits}
I will begin by analysing global fits to determine $H_0$ in (almost) the same way as in the SH0ES papers.
A metallicity dependence of the PL relation is included by adding an extra term to Eq. (\ref{equ:dataP}) so that the
magnitude of Cepheid $j$ in host galaxy $i$ is
\begin{equation}
(m_H^W)_{i, j} = a_i + b \log_{10} \left [ { P_{i,j} \over 10 {\rm days}} \right ] + Z_w \Delta \log_{10} (O/H)_{i, j},
\end{equation}
where $Z = 12 + \log_{10} (O/H)_{i,j}$ is the metallicity listed in Table 4 of R16, $\Delta \log_{10} (O/H)_{i,j}$ is the difference
relative to Solar metallicity, for which I adopt $Z_\odot = 8.9$. For the LMC I adopt a uniform metallicity of $Z_{\rm LMC} = 8.65$.
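For reference, a minimal sketch of how a single Cepheid magnitude is modeled in these global fits is given below; the slope, metallicity coefficient and host intercept used here are illustrative values, not the fitted parameters.
\begin{verbatim}
import numpy as np

def cepheid_mag(P_days, Z, a_host, b=-3.3, Zw=-0.2, Z_sun=8.9):
    """Model Wesenheit magnitude: host intercept a_host, PL slope b, and a
    metallicity term Zw * (Z - Z_sun); parameter values are illustrative."""
    return a_host + b * np.log10(P_days / 10.0) + Zw * (Z - Z_sun)

# Example: a 30-day Cepheid with LMC-like metallicity (Z = 8.65) in a host with
# a hypothetical intercept a_host = 25.0 mag.
print(cepheid_mag(30.0, 8.65, 25.0))   # ~23.5 mag
\end{verbatim}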
First I list the results\footnote{The fits discussed here were computed using the {\tt MULTINEST} sampling algorithm \cite{Feroz:2009, Feroz:2011}. }
using NGC 4258 as a distance anchor, adopting the distance modulus of Eq. (\ref{equ:mu2}).
In these fits, I use all Cepheids with periods in the
range $10-200 \ {\rm days}$. In the first solution, labelled `no priors', no slope or metallicity priors are imposed.
In this solution, the best fit slope is
much shallower than the slope of the M31 PL relation (as discussed in Sect. \ref{subsec:photometry}). For the solution
labelled `slope prior', I impose a tight prior on the slope of $b = -3.300 \pm 0.002$ to counteract the HST photometry
and force the slope to match the M31 and LMC slopes.
\begin{subequations}
\begin{eqnarray}
{\rm NGC} \ {\rm 4258}\ {\rm anchor}, \ {\rm no} \ {\rm priors:} & & H_0 = 72.0 \pm 1.9 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \qquad \qquad \label{equ:deg2a}\\
& & b = -3.06 \pm 0.05, \nonumber \\
& & Z_w = -0.17 \pm 0.08, \nonumber \\
{\rm NGC} \ {\rm 4258}\ {\rm anchor}, \ {\rm slope} \ {\rm prior:} & & H_0 = 70.3 \pm 1.8 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \label{equ:deg2b}\\
& & b = -3.30 \pm 0.002, \nonumber \\
& & Z_w = -0.15 \pm 0.08. \nonumber
\end{eqnarray}
\end{subequations}
These solutions are not strongly discrepant with the \Planck\ value of $H_0$; the `no prior' solution for $H_0$ is high by $2.3\sigma$ and the `slope prior' solution is high by $1.5\sigma$ (see also E14).
Using the LMC as a distance anchor, the LMC photometry from R19, and adopting the distance modulus of Eq. (\ref{equ:mu1}) I find
\begin{subequations}
\begin{eqnarray}
{\rm LMC} \ {\rm anchor}, \ {\rm no} \ {\rm priors:} & & H_0 = 76.5 \pm 1.7 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \qquad \qquad \label{equ:deg3a}\\
& & b = -3.17 \pm 0.04, \nonumber \\
& & Z_w = -0.18 \pm 0.08, \nonumber \\
{\rm LMC} \ {\rm anchor}, \ {\rm slope} \ {\rm prior:} & & H_0 = 74.8 \pm 1.6 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \label{equ:deg3b}\\
& & b = -3.30 \pm 0.002, \nonumber \\
& & Z_w = -0.18 \pm 0.08, \nonumber
\end{eqnarray}
\end{subequations}
and using both anchors:
\begin{subequations}
\begin{eqnarray}
{\rm NGC}\ 4258 + {\rm LMC} \ {\rm anchors}, \ {\rm no} \ {\rm priors:} & & H_0 = 74.6 \pm 1.5 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \qquad \qquad \label{equ:deg4a}\\
& & b = -3.18 \pm 0.04, \nonumber \\
& & Z_w = -0.22 \pm 0.07, \nonumber \\
{\rm NGC} \ 4258 + {\rm LMC} \ {\rm anchors}, \ {\rm slope} \ {\rm prior:} & & H_0 = 73.5 \pm 1.3 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \label{equ:deg4b}\\
& & b = -3.30 \pm 0.002, \nonumber \\
& & Z_w = -0.21 \pm 0.07. \nonumber
\end{eqnarray}
\end{subequations}
These fits differ slightly from those given in R16 and R19 because the
SH0ES team include M31 Cepheids in the fits. The only effect of adding
the M31 data is to pull the best fit slope closer to $b=-3.3$; as a
consequence their best fits are intermediate between the `no prior'
and `slope prior' results
\footnote{There are other minor differences, for example in R19, the
SH0ES team include the ground based LMC data to better constrain the
slope of the PL relation; they also include the NGC 4258 photometry
when using the LMC or MW parallaxes as distance anchors, which only
affects the slope of the PL relation. These differences are
unimportant for our purposes.}.
The joint solution of Eq. (\ref{equ:deg4a}) is actually quite close to the R19 solution of Eq. (\ref{equ:H0_1}) and is
higher than the \Planck\ value by $4.5 \sigma$.
\begin{figure}
\centering
\includegraphics[width=75mm, angle=0]{figures/4258N_LMCN_joint.pdf} \includegraphics[width=75mm, angle=0]{figures/4258N_LMCN_mu.pdf}
\caption{The panel to the left shows 68 and 95\% contours in the $H_0-b$ plane for the solutions of Eqs. (\ref{equ:deg2a}) (red contours)
and (\ref{equ:deg3a}) (blue contours). The joint solution for the NGC 4258 and LMC anchors ( Eq. (\ref{equ:deg4a})) is
shown by the grey contours. The horizontal and vertical bands show $1\sigma$ and $2\sigma$ ranges of the \Planck\ value of $H_0$ (Eq.
(\ref{equ:H0_2}) ) and M31 PL slope (Eq. (\ref{equ:data4})). The plot to the right shows the 68\% and 95\% constraints on the NGC 4258 and LMC distance moduli from the joint fit. The geometric distance moduli of Eqs. (\ref{equ:mu1}) and
(\ref{equ:mu2}) are shown by the dotted lines. }
\label{fig:H0_contour}
\end{figure}
The `no prior' solutions are shown by the left hand panel of Fig. \ref{fig:H0_contour}. One might think that the
blue and red contours are consistent with each other, but in fact {\it all} of the SN data and {\it almost all} of
the Cepheid data are common to both analyses. The difference between these two solutions reflects the tension between
the LMC and NGC 4258 anchor distances discussed in Sect. \ref{subsec:LMCanchor}. The joint fit (grey contours) tries
to share this tension, but lies closer to the LMC fit because the LMC anchor carries more weight than NGC 4258.
The joint fit then leads to a value of $H_0$ that is strongly in tension with \Planck. However, there is a statistical inconsistency in the joint fit. This is illustrated by the right hand plot in Fig. \ref{fig:H0_contour} which shows constraints on the
LMC and NGC 4258 distance moduli from the joint fit. These parameters are, of course, highly correlated, but one can see
that the geometrical best fit values (shown by the intersection of the dotted lines) sit well outside the 95\% contours. This is the statistically
more rigorous way of quantifying the discrepancy discussed in Sect. \ref{subsec:LMCanchor}, including metallicity effects.
\subsection{The SH0ES degeneracy and values of $H_0$}
\label{subsec:SH0ES degeneracy}
At this point, one might follow R16 and argue that the safest way to proceed is to average over distance anchors.
However, this is extremely dangerous if one or more of the distance anchors is affected by systematic errors, or
if there are systematics in the photometry that affect some distance anchors but not others.
\begin{figure}
\centering
\includegraphics[width=75mm, angle=0]{figures/4258_LMC_MW_jointDP.pdf}
\includegraphics[width=75mm, angle=0]{figures/4258_LMC_MWF_jointDP.pdf}
\caption{The plot to the left shows 68 and 95\% contours in the $H_0-b$ plane for different combinations of distance
anchors. In this analysis, I have added M31 Cepheids which shifts the contours towards $b=-3.3$ and slightly lower
values of $H_0$. The plot to the right shows the effect of subtracting a constant offset $\delta a$ from the distant Cepheid
magnitudes, illustrating the SH0ES degeneracy.}
\label{fig:H0_contour1}
\end{figure}
The SH0ES degeneracy is an example of the latter type of systematic. Imagine that there is a constant
offset between the SH0ES PL intercepts, and the intercepts of the
MW and LMC distance anchors. Write the true intercept of the PL relation as
\begin{equation}
a^T = a_{\rm SH0ES} - \delta a. \label{equ:deg5}
\end{equation}
There are a
number of possible sources of error that might produce such a shift;
for example, systematic errors in crowding corrections (though see
\cite{Riess:2020}),
asymmetries in the distribution of outliers, asymmetries in the
magnitude errors, scene reconstruction errors, or selection biases such as the short period incompleteness
discussed in Sect. \ref{subsec:slopes}.
The assumption here is that the shift
$\delta a$ is the same for all R16 galaxies, whereas for well resolved nearby bright Cepheids
such as those in the LMC and MW, there is no such shift. This model should not be taken literally because the data
quality in the SH0ES sample varies (for example, although NGC 4258 is nearer than most of the SN host galaxies,
the Cepheids are more crowded in this galaxy). We will approximate the model of Eq. (\ref{equ:deg5}) by subtracting a
constant offset from all SH0ES H-band magnitudes
\begin{equation}
m_H = m_{H,{\rm SH0ES}} - \delta a. \label{equ:deg6}
\end{equation}
Since $\delta a$ is assumed to be a constant, it will cancel if we use NGC 4258 as a distance anchor. However, a
constant offset will lead to
a bias in the value of $H_0$ if the LMC, MW or M31 are used as distance anchors. This is the SH0ES degeneracy\footnote{The
possible cancellation of systematic errors if NGC 4258 is used as a distance anchor was recognised in the early SH0ES papers
\cite{Riess:2009, Riess:2011}.}.
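A toy calculation makes the degeneracy explicit. In the sketch below every number is purely illustrative (none is taken from the SH0ES catalogue); the point is only arithmetic: an offset applied to all of the HST (SH0ES-system) intercepts cancels exactly when NGC 4258 is the anchor, because the anchor intercept carries the same offset, but feeds straight into $H_0$ when the LMC (or MW) is the anchor.
\begin{verbatim}
import numpy as np

mu_geom = {'N4258': 29.40, 'LMC': 18.48}   # geometric distance moduli
a_PL    = {'N4258': 23.40, 'LMC': 12.55}   # PL intercepts (illustrative)
a_host, mB_5aB = 25.90, 16.10              # SN host intercept, m_B,SN + 5a_B

def H0(anchor, da):
    # da is subtracted from every SH0ES/HST intercept (the SN host and
    # NGC 4258) but not from the LMC intercept.
    a_h = a_host - da
    a_a = a_PL[anchor] - da if anchor == 'N4258' else a_PL[anchor]
    mu_host = mu_geom[anchor] + a_h - a_a
    return 10 ** (0.2 * (mB_5aB - mu_host) + 5)

for anchor in ('N4258', 'LMC'):
    print(anchor, round(H0(anchor, 0.0), 1), round(H0(anchor, -0.10), 1))
# The NGC 4258-anchored H0 is unchanged; the LMC-anchored value drops by ~4.5%.
\end{verbatim}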
\begin{wrapfigure}[]{l}{3.0in}
\vspace{-0.14in}
\includegraphics[width=0.4\textwidth, angle=0]{figures/4258_LMCF_1d.pdf}
\caption
{Posterior distributions for the offsets $\delta a$ for the 4258+LMC and 4258+LMC+MW solutions
shown in the right hand panel of Fig. \ref{fig:H0_contour1}.}
\label{fig:offset}
\end{wrapfigure}
The effect of this degeneracy is illustrated in Fig. \ref{fig:H0_contour1}. The panel to the left shows the progressive combination of NGC 4258, LMC and MW anchors in the usual way. Note that for this comparison,
I have added Cepheids from M31, which pushes the contours closer to $b=-3.3$. The panel to the right shows what happens if we
add a constant offset $\delta a$ as a free parameter. Clearly, this has no impact on the results using NGC 4258 as an anchor. However, if
we combine the 4258 anchor with any combination of LMC, MW anchors, all that happens is that $\delta a$ adjusts to bring the
combined constraints close to those from the NGC 4258 anchor. All of these contours have substantial overlap with the
\Planck\ value of $H_0$.
The posterior distributions of $\delta a$ for these solutions are shown in Fig. \ref{fig:offset}. The offsets are as follows:
\begin{subequations}
\begin{eqnarray}
{\rm NGC}\ 4258 + {\rm LMC} \ {\rm anchors}, & & \delta a = -0.14 \pm 0.05, \\
{\rm NGC}\ 4258 + {\rm LMC} + {\rm MW}\ {\rm anchors}, & & \delta a = -0.10 \pm 0.05.
\end{eqnarray}
\end{subequations}
The first of these constraints is another way of expressing the nearly $3 \sigma$ tension between the LMC and NGC 4258 anchors.
As expected from the discussion in Sect. \ref{sec:SH0ES_data}, we see that an offset of $\approx 0.1 \ {\rm mag}$ (in the sense that the R16 photometry is too bright) can largely resolve the Hubble tension. {\it Increasing the precision of the MW and LMC distance anchors
(as has been done in R19 and R20) does not strengthen the case for the Hubble tension, unless one can rule out
the SH0ES degeneracy convincingly}. This can be done: (a) by independently determining the distance modulus of NGC 4258, and/or
(b) comparing Cepheid distance moduli for SN hosts with distance moduli determined from other techniques, to which we turn next.
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES team: } We agree that use of NGC 4258 as the only anchor, excluding the LMC and the 5 sets of Milky Way parallaxes reduces the tension. Further fixing the slope to be steeper than these fits provide by what is more than a few $\sigma$ as G.E. does in this example reduces the tension further. We don't think this approach is reasonable (not to mention the danger of CMB-confirmation bias) but we have made all the photometry for these exercises available to the Community for their own analyses and the Community has reanalyzed the data, consistently concluding that the data are not easily moved to a place of diminished tension. See Follin and Knox (2018), Cardona et al. (2017) or Burns et al. (2018) for examples.
On the second point G.E. hypothesizes a ``common mode'' error where measurements of nearby, bright Cepheids such as those in the LMC or MW are measured differently than other, extragalactic Cepheids in SN hosts or NGC 4258. This is a concern that keeps us up at night! The full test of this as G.E. notes is comparing $H_0$ using only the anchor NGC 4258 where we get $H_0=72.0$ and as discussed above only 2 $\sigma$ different than the LMC, hence no real evidence of an issue. We expect this test to improve with the new NGC 4258 Cepheid data now in hand. The most likely way such a common mode error would arise is by ``crowding''. We just published a new paper, Riess et al. (2020) that used the amplitudes of Cepheid light curves to measure such an unrecognized, crowding-induced, common mode error and constrained it to 0.029 $\pm$ 0.037 mag, ruling out crowding as the source of such an error. Count-rate non-linearity is another potential source of common-mode error but was recently calibrated to 0.3\% precision in $H_0$ making it an unsuitable source. We welcome any specific hypotheses for another mechanism for such an error so we can test for it. We also note that we neglect the possibility that Cepheids in nearby hosts (Milky Way, LMC, NGC 4258 and M31) are actually different than those in SN hosts as it would seem to violate the principle that we should not live in a special region of space where Cepheids are fainter.}
\ \par
\end{adjustwidth}
\section{Introduction}
\label{sec:introduction}
We are experiencing very strange times. One consequence of prolonged enforced isolation
has been to fuel my obsession with `cosmic tensions'. Without the normal constraints of a `day job',
and the restraining influences of close colleagues, this obsession has led to the work described here.
By now, the `Hubble tension' has become well known to the astronomical community and beyond, and so needs
very little introduction. The latest analysis of the Cepheid-based distance scale measurement of $H_0$ from the SH0ES collaboration \citep{Riess:2011, Riess:2016,
Riess:2019} gives
\begin{equation}
H_0 = 74.03 \pm 1.42 ~\text{km}~\text{s}^{-1} \Mpc^{-1}, \label{equ:H0_1}
\end{equation}
whereas the value inferred from \Planck\ assuming the standard six parameter \LCDM\ model\footnote{I will refer to this model as base \LCDM.} is
\begin{equation}
H_0 = 67.44 \pm 0.58 ~\text{km}~\text{s}^{-1} \Mpc^{-1}, \label{equ:H0_2}
\end{equation}
\cite{PCP18, Efstathiou:2019}. The difference of $6.9 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$
between (\ref{equ:H0_1}) and (\ref{equ:H0_2}) is, apparently, a $4.3\sigma$ discrepancy. Another way of expressing the
tension is to note that the difference between (\ref{equ:H0_1}) and (\ref{equ:H0_2}) is nearly $10\%$ and is much larger
than the $2\%$ error of the SH0ES estimate, {\it which includes estimates of systematic errors}. This is, therefore, an
intriguing tension which has stimulated a large (almost daily) literature on extensions to the
base \LCDM\ model, focussing mainly on mechanisms to reduce the sound horizon \citep[for a review see][]{Knox:2020}.
Although the Hubble tension first became apparent following the
publication of the first cosmological results from
\Planck\ \cite{PCP13}, the low value of $H_0$ in the base
\LCDM\ cosmology can be inferred in many other ways. For example,
applications of the inverse distance ladder assuming either the
\Planck\ or WMAP \cite{Bennett:2013} values of the sound horizon,
$r_d$, or a CMB-free value of $r_d$ inferred from primordial
nucleosynthesis, consistently yield a low value of $H_0$
\citep[e.g.][]{Aubourg:2015, Verde:2017, Addison:2018, Abbott:2018,
Macaulay:2019}. Furthermore, using BAO and SN data, it is possible
to reconstruct $H(z)$, independently of the late time behaviour of
dark matter and dark energy \cite{Bernal:2016, Lemos:2019}. The
resulting low value of $H_0$ strongly suggests that any departure from
the base \LCDM\ model must involve changes to the physics at early
times. This new physics must mimic the exquisite agreement of the base
\LCDM\ model with the \Planck\ temperature and polarization power
spectra and hopefully preserve the consistency of observed light element
abundances and primordial nucleosynthesis. Models invoking a transient
scalar field contributing to the energy density just prior to
recombination (early dark energy) have been suggested as a possible
solution to the Hubble tension \cite{Poulin:2019, Agrawal:2019,
Smith:2020}, but these models fail to match the shape of the
galaxy power spectrum \cite{Hill:2020, Ivanov:2020, D'Amico:2020}. The
shape of the galaxy power spectrum itself provides tight constraints
on the Hubble constant assuming the base \LCDM\ cosmology, consistent
with Eq. (\ref{equ:H0_2}) \cite{ D'Amico:2020b, Philcox:2020}. I think it
is fair to say that despite many papers, no compelling theoretical
solution to the Hubble tension has yet emerged.
An alternative explanation\footnote{With all due respect to the SH0ES
team.} is that the SH0ES result is biased by systematic errors that
are not included in the error estimate of Eq. (\ref{equ:H0_1}). As is
well known, distance ladder measurements of $H_0$ are extremely
challenging and have a long and chequered history. However, before
progressing further it is important to note that at the time of
writing the balance of evidence seems to be tilted in favour of the
SH0ES result. Gravitational lensing time delays \citep{Bonvin:2017,
Wong:2019, Shajib:2020}, distant maser galaxies \citep{Pesce:2020}
and Mira variables \cite{Huang:2020} all yield high values of the
Hubble constant compared to the base \LCDM\ value of (\ref{equ:H0_2})
though with larger errors than the SH0ES estimate. However, the recent
TRGB measurements \citep{Freedman:2019, Freedman:2020} give
\begin{equation}
H_0 = 69.6 \pm 1.9 ~\text{km}~\text{s}^{-1} \Mpc^{-1}, \label{equ:H0_3}
\end{equation}
apparently intermediate between the \Planck\ and SH0ES values (but see Sect. \ref{sec:TRGB}).
In this article, I address the following questions:
\smallskip
\noindent
(i) Can we imagine systematics that would bias the SH0ES analysis?
\smallskip
\noindent
(ii) Is there any evidence for such systematics?
\smallskip
\noindent
(iii) Are the TRGB results of \citep{Freedman:2019, Freedman:2020} compatible with SH0ES?
\smallskip
\noindent
(iv) What new observations are needed to resolve the discrepancies identified in this article?
\section{Comments on the SH0ES data}
\label{sec:SH0ES_data}
I will refer to the sequence of SH0ES papers \cite{Riess:2011, Riess:2016, Riess:2019} as R11, R16, and R19
respectively and I will begin with some representative numbers.
Suppose we wanted to explain the Hubble tension by invoking an additive error $\delta m$
in the magnitudes of SH0ES Cepheids. The shift in the value of $H_0$ would be
\begin{equation}
{\delta H_0 \over H_0} = 0.2\ln 10 \ \delta m. \label{equ:data1}
\end{equation}
The $6.9 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$ difference between (\ref{equ:H0_1}) and
(\ref{equ:H0_2}) requires $\delta m = 0.2 \ {\rm mag} $ (in the sense
that the SH0ES Cepheids would have to be too
bright)\footnote{Note that a shift of $\delta m = 0.1 \ {\rm mag.}$
would reduce the Hubble tension to about $2\sigma$.}. However, the
errors on individual distance moduli of SN host galaxies (e.g. from
Table 5 of R16) are typically quoted as $\delta \mu \sim 0.06 \ {\rm
mag}$. Averaging over $19$ SN host galaxies, the error in the
Cepheid magnitude scale is of order $0.06 /\sqrt{19} \sim 0.014 \ {\rm
mag.}$, {\it i.e. } about an order of magnitude smaller than the
required offset. At first sight it seems implausible that a magnitude
offset could have any bearing on the Hubble tension.
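These representative numbers are easy to check; the short calculation below reproduces both the $\approx 0.2$ mag offset needed to absorb the full difference and the naive $0.014$ mag error on the mean magnitude scale.
\begin{verbatim}
import numpy as np

H0_shoes, H0_planck = 74.03, 67.44
# delta_H0/H0 = 0.2 ln(10) delta_m, so the required magnitude offset is
delta_m = (H0_shoes - H0_planck) / H0_shoes / (0.2 * np.log(10))
print(round(delta_m, 2))                 # 0.19 mag, i.e. ~0.2 mag

# naive error on the mean from 19 hosts with ~0.06 mag errors each
print(round(0.06 / np.sqrt(19), 3))      # 0.014 mag
\end{verbatim}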
\begin{figure}
\centering
\includegraphics[width=150mm,angle=0]{figures/HR11residual.pdf}
\caption {R11 PL magnitude residuals relative to the global fit 5 of Table 2 in E14.
Red points show residuals for Cepheids that are accepted by R11 while
blue points are rejected by R11.}
\label{fig:HR11residual}
\end{figure}
\subsection{Outliers}
\label{subsec:outliers}
However, consider Fig. \ref{fig:HR11residual}. This is a combination of the two panels of Fig. 2 from \cite{Efstathiou:2014}
(hereafter E14). This shows residuals of the Wesenheit H-band ($m^W_H$) period luminosity (PL) relation from a global fit to the 9 galaxies in the R11 sample. As in R11,
\begin{equation}
m_H^W = m_H - R(V-I), \label{equ:data2}
\end{equation}
where $H=F160W$, $V=F555W$ and $I=F814W$ in the HST photometric system. In Fig. 2, I used $R=0.41$ (consistent with R11), though for the rest of this article I use $R = 0.386$ to be consistent with R16 and later SH0ES papers. One can see from Fig. \ref{fig:HR11residual} that there are many `outliers', with residuals that differ by 3 mag or more from the best fit. R11 identified outliers by fitting the PL relation for each galaxy
separately (rather than identifying outliers with respect to the global fit).
The R11 rejection algorithm works as follows (a schematic sketch of the procedure is given after the list):
\noindent
$\bullet$ The $m_H$ PL relations are fitted galaxy-by-galaxy, weighted by the magnitude errors in Table 2 of R11,
to a power law with slope fixed at $b=-3.1$ in the first iteration.
Cepheids with periods $> 205$ days are excluded.
\noindent
$\bullet$ Cepheids are rejected if they deviate from the best-fit relation by $\ge 0.75 \ {\rm mag}$, or by more than
2.5 times the magnitude error.
\noindent
$\bullet$ The fitting and rejection is repeated iteratively 6 times.
\smallskip
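The sketch below is one way to implement these steps in code. It follows my reading of the R11 description rather than the SH0ES pipeline itself; in particular, I have assumed that the slope is left free after the first iteration, which the published description does not fully specify.
\begin{verbatim}
import numpy as np

def r11_clip(P, mH, sigma, n_iter=6):
    # Schematic galaxy-by-galaxy rejection following the steps above.
    x = np.log10(P / 10.0)
    keep = P <= 205.0                      # long-period cut
    b = -3.1                               # slope fixed on the first pass
    for it in range(n_iter):
        w = 1.0 / sigma[keep] ** 2         # weight by magnitude errors
        if it == 0:
            a = np.sum(w * (mH[keep] - b * x[keep])) / np.sum(w)
        else:                              # slope free thereafter (assumed)
            A = np.stack([np.ones(keep.sum()), x[keep]], axis=1)
            Aw = A * np.sqrt(w)[:, None]
            a, b = np.linalg.lstsq(Aw, mH[keep] * np.sqrt(w), rcond=None)[0]
        resid = np.abs(mH - (a + b * x))
        # reject if off by >= 0.75 mag or by more than 2.5 times the error
        keep = (P <= 205.0) & (resid < 0.75) & (resid <= 2.5 * sigma)
    return keep
\end{verbatim}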
The Cepheids rejected by R11 are shown by the blue points in Fig. \ref{fig:HR11residual} and those accepted by R11
are shown by the red points. For the red points, the dispersion around the mean is $0.47$ mag and it is $0.60$ mag for
all points. {\it To avoid a bias in the measurement of $H_0$, the mean must be determined to an accuracy much better
than $0.1$ mag (or a quarter of a tick mark on the y-axis in Fig. \ref{fig:HR11residual})}. The finite width of the
instability strip will produce an `intrinsic' scatter in the PL relation. In the H-band, the intrinsic scatter is
about $0.08$ mag and can be measured accurately from photometry of LMC and M31 Cepheids. The much higher dispersion
apparent in Fig. \ref{fig:HR11residual} is a consequence of photometric errors and misidentification of Cepheids (type II
Cepheids misclassified as classical Cepheids) and is larger than the error estimates given in R11. In the H-band,
Cepheid photometry of SN host galaxies is challenging even with HST because of crowding. Photometry
requires deblending of the images, as illustrated in Fig. 5 of R16, and also crowding corrections to the local sky background. In addition,
one can see from
Fig. \ref{fig:HR11residual} that the blue points are distributed asymmetrically around the mean, and so the question that
has worried me is whether the PL relations are really unbiased at the $0.1$ mag level, given that the underlying data
are so noisy and contain so many outliers\footnote{E14 experimented with different ways of rejecting outliers relative to
{\it global} fits.}. The SH0ES methodology requires the absence of any systematic biases, relying on large numbers of Cepheids
to reduce the errors on distance moduli well below the $\sim 0.5$ mag dispersion seen in Fig. \ref{fig:HR11residual}.
\begin{figure}
\centering
\includegraphics[width=150mm,angle=0]{figures/HR16residual.pdf}
\caption {R16 magnitude residuals relative to a global fit of the PL relation for
the 19 SN host galaxies and NGC 4258.}
\label{fig:HR16residual}
\end{figure}
If we now fast forward to R16, the equivalent plot to
Fig. \ref{fig:HR11residual} is shown in Fig. \ref{fig:HR16residual}.
The number of SN host galaxies with Cepheid photometry was increased
from 8 in R11 to 19 in R16. The optical data (in the F350LP, F555W and
F814W filters) for the R16 sample is described in detail in
\cite{Hoffmann:2016}\footnote{In the optical,
the images are largely the same, but with WFC3 data for three galaxies (NGC 1309, 3021, 3370) supplementing earlier ACS and WFPC2 images.}. For the 9 galaxies in common between R11
and R16, the F160W images are identical in the two analyses,
but were reprocessed for R16. There
are now no obvious outliers in Fig. \ref{fig:HR16residual}, and the dispersion around the mean is
$0.35 \ {\rm mag}$ with $\chi^2=1114$ for 1037 Cepheids. What happened to the outliers?
The outliers in R16 were removed in two stages: first for each
individual galaxy, Cepheids were rejected if their F814W-F160W colours
lay outside a $1.2 \ {\rm mag}$ colour range centred on the median
colour. An additional $\approx 5\%$ of the Cepheids were rejected if
their magnitudes differed by more than $2.7\sigma$ from a global
fit. Since colour selection removes a large fraction of the outliers,
R16 argue that outlier rejection is not a serious issue. However, R16
did not publish magnitudes for the rejected Cepheids and so it is not
possible to check the impact of colour selection and outlier rejection.
It is not obvious to this author that applying
symmetric rejection criteria to exclude outliers will produce unbiased
results. The R16 photometric data has been investigated by
\cite{Follin:2018} and (extensively) by myself and appears to be
self-consistent.
\vfill\pagebreak\newpage
\begin{adjustwidth}{0.25in}{0.25in}
\noindent{\narrower{\underline{\bf Response from SH0ES Team:} The discovery of fundamental-mode Cepheid variables from time-series imaging requires a number
of winnowing steps from non-variables which outnumber them more than 1000-fold and other variable types which rival their frequency. Colors are useful to distinguish Cepheids (which have spectral types F-K, a modest color range) from other variables like Miras and Semi-regular variables which are redder (spectral type M) or RR Lyraes which are bluer (spectral types A-F). Colors may also remove strong blends of a Cepheid with a red or blue Giant. R11 did not use $I-H$ colors in this way and subsequently rejected many ``outliers'' off of the period-luminosity relation though these were published for further analyses, e.g., by G.E. R16 added $I-H$ color selection and at the same rejection threshold of the PL relation found very few PL relation outliers, about 2\% of the sample and their inclusion or exclusion has little impact on $H_0$. So most of the R11 outliers were too red or blue to be Cepheids and are not outliers in R16 because they never make it to the PL relation. (We consider outliers only as objects that pass selection criteria.)
We think the R16 selection is a clear and defined improvement in method and supersedes the R11 results which are no longer relevant.}}
\ \par
\ \par
\end{adjustwidth}
Nevertheless, we can test the R16 analysis by invoking additional data. Here I describe four tests:
\noindent
(i) Consistency of the slope of the SH0ES PL relation with that of nearby galaxies.
\noindent
(ii) Consistency of the R11 and R16 photometry.
\noindent
(iii) Consistency of the distance anchors.
\noindent
(iv) Consistency of Cepheid and TRGB distance moduli.
\begin{figure}
\centering
\includegraphics[width=130mm,angle=0]{figures/slopes.pdf}
\caption {PL relations for M31 (upper plot) and M101 (lower plot). The best fit slope and its $1\sigma$ error are
listed in each plot. }
\label{fig:slopes}
\end{figure}
\subsection{The slope of the PL relation}
\label{subsec:slopes}
The upper panel
of Fig. \ref{fig:slopes} shows the PL relation for M31\footnote{The M31 photometry is
listed in R16 and is based on the following sources \cite{Dalcanton:2012, Riess:2012, Kodric:2015, Wagner-Kaiser:2015}.}. The solid line shows
a fit
\begin{equation}
m_H^W = a + b \log_{10} \left [ { P \over 10 \ {\rm days}} \right ], \label{equ:dataP}
\end{equation}
where $P$ is the Cepheid period. For M31, the slope is\footnote{There has been some discussion in the literature, e.g.
\cite{Kodric:2015} of a break in the slope of the PL relation at $P \sim \ 10 \ {\rm days}$. There is no evidence of such a break
at H-band. Fitting the M31 PL relation to Cepheids with $P \ge 10 \ {\rm days}$ gives $b = -3.30 \pm 0.06$. Note that for the LMC, using the 70 Cepheids with HST photometry listed in R19, I find $b=-3.32 \pm 0.04$.}
\begin{equation}
b = -3.30 \pm 0.03. \label{equ:data4}
\end{equation}
The lower panel in Fig. \ref{fig:slopes} shows the PL relation for
M101 using the photometry from R16. The slope for M101 is much shallower
than that for M31, differing by $3.9 \sigma$, strongly suggesting a
bias. One finds similar results for other R19 galaxies, including NGC
4258. Global fits to the PL relation using either the R11 or R16 data
therefore give a slope that is shallower than (\ref{equ:data4}). The
reason for this is clear from Fig. 15 of \cite{Hoffmann:2016} which shows that
the Cepheid sample is incomplete at short periods. The
short period Cepheids are faint and close to the photometric limits of the R16
observations. As a consequence, the optical selection is biased
towards brighter Cepheids. \cite{Hoffmann:2016} impose lower limits on the periods
for the final Cepheid sample, but it is clear from their Fig. 15 that these limits
are not sufficiently conservative. If we assume that the Cepheid PL relation is universal\footnote{Clearly a necessary assumption if we are to use Cepheids in distance ladder measurement.} then we can assess the impact of this incompleteness bias
by comparing values of $H_0$ with and without imposing a prior on the slope (see E14). We will show below
that imposing a strong slope prior of $b=-3.300 \pm 0.002$\footnote{The width of this prior is not a typo. To force $b=-3.3$, the prior must be tight enough
to counteract the shallow slope of the R16 PL relation.} {\it lowers} $H_0$ by $1.7 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$.
This is a systematic shift, which is larger than the $1\sigma$ error
quoted in Eq. (\ref{equ:H0_1}).
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES Team:} We find the statistical uncertainty in individual slopes is often an underestimate because individual slopes are strongly influenced by the presence or absence of rare very long period Cepheids ($P>70$ days).
For example, in the LMC we measured the H-band slope from 43 Cepheids with $P>10$ days (Riess et al. 2019) to be $-3.38 \pm 0.07$ mag while Persson et al. (2004) measured it to be $-3.16 \pm 0.1$ from 39 $P>10$ day Cepheids, a highly significant difference considering this is the same galaxy and mostly same objects. The difference comes from the inclusion of 2 rare ones at $P>50 $ days, $P=99$ and $P=134$ days, which were too bright for us to observe with HST. At long periods Cepheids are too rare to well populate
the instability strip and may even depart somewhat in slope. We also note that the two G.E. compared, M101 and M31 lie at opposite extremes among the sample of 20 plus hosts. A fair test of the uniformity of slopes should be made in a narrower period range where all hosts are well-populated.}
\ \par
\end{adjustwidth}
\subsection{Comparison of the R11 and R16 photometry}
\label{subsec:photometry}
R11 and R16 contain 9 galaxies in common. As a consistency check, it is interesting to compare the R11 and R16 photometry on an object-by-object basis. It is not possible to do this for all Cepheids within a particular galaxy,
because the R16 data have been pre-clipped: data for Cepheids rejected as outliers are not presented. Also, there are
Cepheids listed in R16 that are not listed in R11. The overlap between the two samples
is high enough, however, that one can draw reliable conclusions. The details of this comparison are given in Appendix \ref{sec:appendix}. I summarize the results in this subsection.
\begin{figure}
\centering
\includegraphics[width=74mm,angle=0]{figures/R4536R11.pdf} \includegraphics[width=74mm,angle=0]{figures/R4536R16.pdf}
\caption {Left plot shows the PL relation for R11 Cepheids in the SN host galaxy NGC 4536. Blue points show Cepheids rejected by R11 (IFLAG =1);
red points show Cepheids accepted by R11 (IFLAG = 0). The solid line shows the best fit linear
relation fitted to the red points. The dashed line shows the best fit with the slope constrained to $b=-3.3$.
Right plot shows the PL relation for R16 Cepheids. The solid line shows the best fit linear
relation and the dashed line shows the best fit with the slope constrained to $b=-3.3$.}
\label{fig:R4536a}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{lllllll} \hline
& \multicolumn{3}{c}{R11} & \multicolumn{3}{c}{R16} \\
galaxy & \qquad $b$ & \qquad $a$ & \qquad $a_{-3.3}$ & \qquad $b$ & \qquad $a$ & \qquad $a_{-3.3}$ \\ \hline
N4536 & $-2.95 \pm 0.18$ & $24.88 \pm 0.10$ & $25.05 \pm 0.05$ & $-3.27 \pm 0.11$ & $24.99 \pm 0.11$ & $25.01 \pm 0.04$ \\
N4639 & $-2.68 \pm 0.50$ & $25.41 \pm 0.30$ & $25.78 \pm 0.08$ & $-2.33 \pm 0.47$ & $25.03 \pm 0.29$ & $25.61 \pm 0.06$ \\
N3370 & $-3.17 \pm 0.25$ & $26.18 \pm 0.17$ & $26.28 \pm 0.04$ & $-3.34 \pm 0.21$ & $26.21 \pm 0.14$ & $26.18 \pm 0.04$ \\
N3982 & $-3.77 \pm 0.56$ & $26.18 \pm 0.32$ & $25.91 \pm 0.08$ & $-2.54 \pm 0.21$ & $25.40 \pm 0.29$ & $25.85 \pm 0.06$ \\
N3021 & $-2.86 \pm 0.49$ & $26.08 \pm 0.28$ & $26.31 \pm 0.10$ & $-2.29 \pm 0.60$ & $26.04 \pm 0.34$ & $26.58 \pm 0.09$ \\
N1309 & $-2.35 \pm 0.60$ & $26.08 \pm 0.45$ & $26.78 \pm 0.07$ & $-3.09 \pm 0.38$ & $26.47 \pm 0.29$ & $26.63 \pm 0.04$ \\
N5584 & $-2.87 \pm 0.23$ & $25.57 \pm 0.16$ & $25.85 \pm 0.04$ & $-3.07 \pm 0.18$ & $25.72 \pm 0.12$ & $25.88 \pm 0.03$ \\
N4038 & $-2.84 \pm 0.29$ & $25.45 \pm 0.26$ & $25.84 \pm 0.08$ & $-3.81 \pm 0.75$ & $25.79 \pm 0.63$ & $25.37 \pm 0.11$ \\
N4258 & $-3.22 \pm 0.15$ & $23.39 \pm 0.06$ & $23.43 \pm 0.04$ & $-3.15 \pm 0.10$ & $23.35 \pm 0.04$ & $23.40 \pm 0.03$
\cr \hline
\end{tabular}
\caption{Fits to the PL relation for data given in R11 and R16. }
\label{table:PL}
\end{center}
\end{table}
To give an idea of the differences between R11 and R16, Fig. \ref{fig:R4536a}\footnote{
This figure is reproduced from Appendix \ref{sec:appendix}, which includes
equivalent plots for all 9 galaxies in common between R11 and R16.}
shows the PL relations for NGC 4536.
The solid lines show fits to Eq. (\ref{equ:dataP}) (fitted only to the IFLAG = 0 Cepheids for the R11 data).
The dashed lines show fits with slope constrained to $b=-3.3$;
the intercept of these fits is denoted as $a_{-3.3}$. The parameters for these fits are listed in Table \ref{table:PL}.
The error weighted averages of the slopes are:
\begin{subequations}
\begin{eqnarray}
\langle b \rangle & = & -2.95 \pm 0.10, \quad {\rm R11 \ excluding \ NGC \ 4258}, \\
\langle b \rangle & = & -3.04 \pm 0.08, \quad {\rm R11 \ including \ NGC \ 4258}, \\
\langle b \rangle & = & -3.09 \pm 0.09, \quad {\rm R16 \ excluding \ NGC \ 4258}, \\
\langle b \rangle & = & -3.11 \pm 0.07, \quad {\rm R16 \ including \ NGC \ 4258}.
\end{eqnarray}
\end{subequations}
These averages are consistent with the shallow slopes determined from global fits.
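Each of these averages is an inverse-variance weighted mean of the galaxy-by-galaxy slopes in Table \ref{table:PL}; the first, for example, is reproduced by the short calculation below.
\begin{verbatim}
import numpy as np

# R11 slopes and errors from the table above, excluding NGC 4258.
b   = np.array([-2.95, -2.68, -3.17, -3.77, -2.86, -2.35, -2.87, -2.84])
sig = np.array([ 0.18,  0.50,  0.25,  0.56,  0.49,  0.60,  0.23,  0.29])

w = 1.0 / sig**2
print(np.sum(w * b) / np.sum(w), 1.0 / np.sqrt(np.sum(w)))   # -2.95, 0.10
\end{verbatim}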
\begin{wrapfigure}[]{l}{3.0in}
\vspace{-0.00in}
\includegraphics[width=0.4\textwidth, angle=0]{figures/pg_intercept.pdf}
\caption
{The intercept $(a_{-3.3})_{R11}$ plotted against $(a_{-3.3})_{R16}$. The dashed line shows the best fit relation with a slope of unity. Solid
line shows $(a_{-3.3})_{R11} = (a_{-3.3})_{R16}$. }
\label{fig:intercept}
\vspace{0.09in}
\end{wrapfigure}
The uncertainties in the intercepts are very large if the slopes are allowed to vary and so we focus on the constrained intercepts $a_{-3.3}$. These are plotted in Fig. \ref{fig:intercept} for the 8 SN host galaxies. The dashed line shows the `best fit'
relation with a slope of unity and an offset,
\begin{subequations}
\begin{equation}
(a_{-3.3})_{R16} = (a_{-3.3})_{R11} + \Delta a, \label{equ:data3a}
\end{equation}
where
\begin{equation}
\Delta a = -0.062 \pm 0.027. \label{equ:data5}
\end{equation}
\end{subequations}
The fit Eq. (\ref{equ:data5}) assumes that the errors on $(a_{-3.3})_{R11}$ and $(a_{-3.3})_{R16}$
are independent, which is clearly not true since the imaging data are largely common to both samples. In reality the
offset is much more significant than the $2.3 \sigma$ suggested by Eq. (\ref{equ:data5}).
It is also clear from Fig. \ref{fig:intercept} that the errors are underestimated\footnote{Assuming independent errors, the
dashed line gives $\chi^2= 21.27$ for 8 degrees of freedom, which is high by $\sim 3.3\sigma$.}. The offset of Eq. (\ref{equ:data5}) translates to a systematic shift in $H_0$ of about $2 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$, in the sense that the R16 data gives a higher $H_0$, for all
distance anchors\footnote{This can easily be verified by repeating the $H_0$ analysis for the 9 galaxies in R11 using the R16 photometry.}. Again, this systematic shift is larger than the error given in Eq. (\ref{equ:H0_1}).
The origin of this offset is surprising and is discussed in detail in Appendix \ref{sec:appendix}. The object-by-object
comparison of the photometry between R11 and R16 yields mean offsets (after removal of a few outliers) of
\begin{subequations}
\begin{eqnarray}
(m_H^W)_{\rm R16} &= & (m_H^W)_{\rm R11} + \langle \Delta m_H^W \rangle, \qquad \langle \Delta m_H^W \rangle = -0.051 \pm 0.025, \label{equ:data6a}\\
(V-I)_{\rm R16} &= & (V-I)_{\rm R11} + \langle \Delta C \rangle, \qquad \langle \Delta C \rangle = 0.14 \pm 0.03. \label{equ:data6b}
\end{eqnarray}
\end{subequations}
The offset in Eq. (\ref{equ:data5}) comes almost entirely from the difference in colours. As discussed in Appendix \ref{sec:appendix}, the colour offset is very significant: the errors in photometric scene reconstruction are correlated between different
passbands (and, in any case, are smaller in $V$ and $I$ than in $H$ band), so the errors in $V-I$ colours are much smaller than the errors for individual passbands. This large systematic difference in colours is not discussed by R16.
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES Team:} There have been many updates to the official STScI pipeline between R11 in 2011 (produced shortly after WFC3 was installed) and R16 in 2016 (when WFC3 was mature) including changes in zeropoints, geometric distortion files, flat fields, CTE correction procedures, dark frames, hot pixel maps, bias frames, count linearity calibration, and count-rate non-linearity corrections which change photometry. We believe the sum of these improved accuracy and the comparison to the early knowledge of WFC3 (or earlier methods of SH0ES) is not informative on present accuracy (or else we are all doomed by the past!). We note with levity better communicated as a verbal touche that the Planck calibration of the overall power spectrum, indicated by the parameter $A_s e^{-2 \tau}$ changed by 3.5 $\sigma$ between the 2013 and 2015 Planck release as documented in Table 1 of the 2015 paper. We celebrate the improved accuracy of Planck and promise we do not see this as any indication that improved accuracy cannot be established (stated with tongue in cheek). We also note an important improvement to the SH0ES data since 2016 (e.g., Riess et al. 2018,2019) is to have now measured Cepheids in the LMC and the Milky Way on the same WFC3 system as extragalactic Cepheids which makes $H_0$ insensitive to future changes in the calibration of WFC3 since $H_0$ is only sensitive to differences in Cepheid measurements.}
\ \par
\end{adjustwidth}
\section{Comparison of Cepheid and TRGB distance moduli}
\label{sec:TRGB}
As discussed in Sect. \ref{sec:introduction}, \cite{Freedman:2019} (hereafter F19) have recently determined a value for
$H_0$ using TRGB as a standard candle. There are $10$ SN host galaxies in common between R16 and F19, which are listed in
Table \ref{table:distance_moduli}. Five of these have TRGB distance measurements as part of the Carnegie-Chicago Hubble Program (CCHP); the remaining
five galaxies labelled `JL' (which are more distant) have TRGB distances from data analysed by \cite{Jang:2017a} but reanalysed by F19.
These distance moduli are listed in Table \ref{table:distance_moduli} together with the distance moduli for the solution
of Eq. (\ref{equ:deg2a}) using NGC 4258 as a distance anchor and the solution of Eq. (\ref{equ:deg3a}) using the LMC as an anchor.
\begin{figure}
\centering
\includegraphics[width=120mm,angle=0]{figures/distance_mod_residual.pdf}
\caption {Differences in the TRGB and Cepheid distance moduli from Table \ref{table:distance_moduli} plotted
against $\mu_{\rm TRGB}$. Filled and open symbols are for the galaxies labelled `CCHP' and `JL' respectively in Table
\ref{table:distance_moduli}. The red and blue dotted lines show the best-fit offsets to the red points and blue points
respectively.}
\label{fig:distance_modulus}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{lllll} \hline
& & {\rm LMC} \ {\rm anchor} & {\rm N4258} \ {\rm anchor} & {\rm LMC} \ {\rm anchor} \\
galaxy & {\rm TRGB} & $\mu_{\rm TRGB}$ & $\mu_{\rm Cepheid}$ & $\mu_{\rm Cepheid}$ \\ \hline
N4536 & CCHP & $30.96 \pm 0.05$ & $30.92 \pm 0.05$ & $30.80 \pm 0.05$ \\
N3370 & JL & $32.27 \pm 0.05$ & $32.09 \pm 0.05$ & $31.97 \pm 0.04$ \\
N3021 & JL & $32.22 \pm 0.05$ & $32.43 \pm 0.08$ & $32.30 \pm 0.07$ \\
N1448 & CCHP & $31.32 \pm 0.06$ & $31.35 \pm 0.04$ & $31.22 \pm 0.04$ \\
N1309 & JL & $32.50 \pm 0.07$ & $32.51 \pm 0.05$ & $32.40 \pm 0.05$ \\
N5584 & JL & $31.82 \pm 0.10$ & $31.80 \pm 0.04$ & $31.68 \pm 0.04$ \\
N4038 & JL & $31.68 \pm 0.05$ & $31.39 \pm 0.09$ & $31.27 \pm 0.09$ \\
M101 & CCHP &$29.08 \pm 0.04$ & $29.22 \pm 0.04$ & $29.07 \pm 0.04$ \\
N4424 & CCHP & $31.00 \pm 0.06$ & $30.86 \pm 0.11$ & $30.73 \pm 0.11$ \\
N1365 & CCHP & $31.36 \pm 0.05$ & $31.32 \pm 0.06$ & $31.19 \pm 0.05$
\cr \hline
\end{tabular}
\caption{Galaxies with F19 TRGB and SH0ES Cepheid distance moduli. The second column denotes
the source of the TRGB data (see text). Third column lists the TRGB distance modulus and error from
Table 3 of F19, calibrated with the LMC DEB distance. The fourth and fifth columns list the
distance moduli for the solutions of Eqs. (\ref{equ:deg2a}) and (\ref{equ:deg3a}). The errors on these
estimates reflect errors arising from the PL fits only. They do not include errors in the anchor distances,
which would shift all distance moduli up or down by the same number.}
\label{table:distance_moduli}
\end{center}
\end{table}
The dotted lines in Fig. \ref{fig:distance_modulus} show least squares fits of a constant offset. For the red points,
the offset is close to zero. However, for the blue points, there is an offset
\begin{equation}
\mu_{\rm Cepheid} - \mu_{\rm TRGB} = -0.139 \pm 0.024 \ {\rm mag} , \label{equ:dist}
\end{equation}
and since both sets of distance moduli are based on the geometric
distance to the LMC, the error in the DEB distance cancels. If the
calibration of the TRGB is correct (a topic of some controversy
\cite{Yuan:2019, Freedman:2019}), this comparison reveals a
statistically significant ($\approx 6 \sigma$) offset compared to the
LMC calibration of the Cepheid distances. The TRGB distance scale is,
however, compatible with the NGC 4258 calibration of the Cepheid
distances\footnote{Interestingly, \cite{Reid:2019} reached a similar
conclusion.}. The tension between these two calibrations leads to
the offset of Eq. (\ref{equ:dist}) which, of course, is almost
identical to the offset $\delta a$ found in Sect. \ref{subsec:SH0ES
degeneracy}. As a consequence of these offsets, the TRGB value of
$H_0$ must be close to the value inferred from the NGC 4258 calibration of
the Cepheid distance scale and should strongly disfavour the value derived
from the LMC calibration.
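The offset of Eq. (\ref{equ:dist}) can be reproduced directly from Table \ref{table:distance_moduli}: it is an inverse-variance weighted mean of the differences between the LMC-anchored Cepheid moduli and the TRGB moduli (the blue points in Fig. \ref{fig:distance_modulus}), and agrees with the quoted value to within the rounding of the table entries. The last line of the sketch converts the offset into the implied fractional shift in $H_0$.
\begin{verbatim}
import numpy as np

# TRGB and LMC-anchored Cepheid distance moduli from the table above.
mu_T  = np.array([30.96, 32.27, 32.22, 31.32, 32.50,
                  31.82, 31.68, 29.08, 31.00, 31.36])
sig_T = np.array([0.05, 0.05, 0.05, 0.06, 0.07, 0.10, 0.05, 0.04, 0.06, 0.05])
mu_C  = np.array([30.80, 31.97, 32.30, 31.22, 32.40,
                  31.68, 31.27, 29.07, 30.73, 31.19])
sig_C = np.array([0.05, 0.04, 0.07, 0.04, 0.05, 0.04, 0.09, 0.04, 0.11, 0.05])

d = mu_C - mu_T
w = 1.0 / (sig_C**2 + sig_T**2)
off, err = np.sum(w * d) / np.sum(w), 1.0 / np.sqrt(np.sum(w))
print(off, err)                          # about -0.14 +/- 0.025
print(100 * (10 ** (-0.2 * off) - 1))    # ~6.6% upward shift in H0
\end{verbatim}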
\begin{figure}
\centering
\includegraphics[width=48mm,angle=0]{figures/pgH0F.pdf} \includegraphics[width=48mm,angle=0]{figures/pgH0R_4258.pdf} \includegraphics[width=48mm,angle=0]{figures/pgH0R_LMC.pdf}
\caption {Distance moduli from Table \ref{table:distance_moduli} plotted against {\tt Supercal} Type Ia SN B-band peak magnitudes from Table 5 of R16. The solid lines show the best fit linear relation of Eq. (\ref{equ:H0a}). The best fit values for $H_0$
from Eq. (\ref{equ:H0b}) for each relation are given in each panel. Note that the quoted errors on $H_0$ includes only the
photometric errors on $\mu$ and $m_{\rm B, SN} +5a_{\rm B}$.}
\label{fig:H0}
\end{figure}
This is illustrated in Fig. \ref{fig:H0}. Here values and errors on $m_{\rm B, SN} + 5a_{\rm B}$ are from Table 5
of R16, where $m_{\rm B, SN}$ is the {\tt Supercal} B-band peak magnitude and $a_{\rm B}$ is the intercept of the SN Ia
magnitude-redshift relation. These are plotted against the SN host distance moduli from Table
\ref{table:distance_moduli}. We perform a least squares fit to determine the offset $\alpha$,
\begin{equation}
\mu = m_{\rm B, SN} + 5a_{\rm B} - \alpha, \label{equ:H0a}
\end{equation}
which gives a value for $H_0$,
\begin{equation}
H_0 = 10^{(0.2\alpha + 5)}, \quad \delta H_0 = 0.2 \ln 10 \, H_0 \, \delta\alpha. \label{equ:H0b}
\end{equation}
Only the errors in $m_{\rm B, SN} +5a_{\rm B}$ and the errors in the distance moduli listed in Table \ref{table:distance_moduli}
are included in the fits. The best fit values of $H_0$ and $1\sigma$ error are listed in each panel of Fig. \ref{fig:H0}.
Note that these error estimates do not include the error in the anchor distance. The resulting values of $H_0$ can be easily understood.
The value of $H_0$ in panel (a) for the TRGB distance moduli is consistent with the F19 value of Eq. (\ref{equ:H0_3}), showing that the subsample
of galaxies and SN data used in Fig. \ref{fig:H0} gives similar results to the full sample analysed by F19. Likewise, the
fit to panel (b) agrees well with the solution Eq. (\ref{equ:deg2a}) and the fit to panel (c) agrees with
Eq. (\ref{equ:deg3a}). Since these results use the same SN data, the low value of $H_0$ derived from the TRGB distance moduli is
caused almost entirely by a {\it calibration} difference. The TRGB calibration
strongly disfavours the LMC calibration of the R16 Cepheids,
as quantified by Eq. (\ref{equ:dist}).
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES team:} In our own analyses we find that the {\it relative} distances measured by Cepheids and TRGB agree well, matching G.E. here, thus the difference in derived $H_0$ (about 1.5 $\sigma$ as stated in Freedman et al. 2020) must occur from a disparity in their absolute calibration. We think the best way to compare their calibrations is by using the same geometric distance calibration for each. Doing so using NGC 4258 we see no difference as Cepheids give $H_0$=72.0 and TRGB gives 71-72 (see Jang and Lee 2017 and Reid et al. 2019) and if we use the same SN Ia measurements we find that they are spot on. The disparity then arises only for the LMC where TRGB with the F20 calibration gives $H_0$=70 and Cepheids give 74. We think it likely that this difference may be traced to the extinction of TRGB in the LMC that is used in the F20 analysis. F20 finds $A_I=0.16 \pm 0.02$ mag and $H_0=70$ while Yuan et al. (2019) finds $A_I=0.10$, as did Jang and Lee (2017) and $H_0=72.7$. (Cepheids use NIR Wesenheit magnitudes so determination of absolute extinction is not an issue for Cepheids.) The determination of extinction of TRGB in the LMC is challenging and its resolution is required to conclude whether TRGB and Cepheid agree when using the LMC as an anchor. The comparison using NGC 4258 is cleaner for TRGB since the dust is negligible, the measurement is thus more similar to TRGB in SN hosts, and suggests excellent agreement with Cepheids. It is also worth noting that Freedman et al. (2012) recalibrated the HST KP Cepheid distance ladder with the revised LMC distance, fortuitously the same distance value used today, and found $H_0=74.2 \pm 2.2$ but using different Cepheids, a different HST instrument, optical data, software, etc indicating Cepheids scales have been consistent.}
\ \par
\end{adjustwidth}
\section{Introduction}
To estimate a regression when the
errors have a non-identity covariance matrix, we
usually turn first to generalized least squares (GLS). Somewhat
surprisingly, GLS proves to be computationally challenging
in the very simple setting of the unbalanced crossed random
effects models that we study here.
For that problem, the cost to compute the GLS estimate on $N$
data points grows at best like $O(N^{3/2})$ under the usual algorithms.
If we additionally assume Gaussian errors, then
\cite{crelin} show that even evaluating
the likelihood one time costs at least a multiple of $N^{3/2}$.
These costs make the usual algorithms for GLS
infeasible for large data sets such as
those arising in electronic commerce.
In this paper, we present an iterative algorithm based
on a backfitting approach from \cite{buja:hast:tibs:1989}.
This algorithm is known to converge to the
GLS solution. The cost of each iteration is $O(N)$
and so we also study how the number of iterations grows
with~$N$.
The crossed random effects model we consider has
\begin{equation}\label{eq:refmodel}
Y_{ij} =x_{ij}^\mathsf{T}\beta+a_i+b_j+e_{ij},\quad 1\le i\le R,\quad
1\le j\le C
\end{equation}
for random effects $a_i$ and $b_{j}$ and an error $e_{ij}$
with a fixed effects regression parameter $\beta\in\mathbb{R}^p$ for the covariates $x_{ij}\in\mathbb{R}^p$.
We assume that
$a_i\stackrel{\mathrm{iid}}{\sim} (0,\sigma^2_A)$, $b_j\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_B)$, and $e_{ij}\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_E)$
are all independent. It is thus a mixed effects model in which the
random portion has a crossed structure.
The GLS estimate is also the maximum likelihood
estimate (MLE), when $a_i$, $b_{j}$ and $e_{ij}$ are Gaussian.
Because we assume that $p$ is fixed as $N$ grows, we often
leave $p$ out of our cost estimates, giving instead the complexity in $N$.
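Throughout the paper it is helpful to keep a concrete instance of model~(\ref{eq:refmodel}) in mind. The following Python sketch simulates a small unbalanced data set of this form; the dimensions, variance components and sampling probability are arbitrary choices made for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R, C, p = 100, 120, 2                      # illustrative sizes
sigA, sigB, sigE = 0.7, 0.5, 1.0           # illustrative variance components
beta = np.array([1.0, -2.0])

# Haphazard observation pattern: each (i, j) pair observed with prob. 0.05.
Z = rng.random((R, C)) < 0.05
rows, cols = np.nonzero(Z)
N = rows.size

a = rng.normal(0.0, sigA, R)
b = rng.normal(0.0, sigB, C)
X = rng.normal(size=(N, p))
Y = X @ beta + a[rows] + b[cols] + rng.normal(0.0, sigE, N)
\end{verbatim}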
The GLS estimate $\hat\beta_\mathrm{GLS}$ for crossed random effects can be efficiently
computed if all $R\times C$ values are available.
Our motivating examples involve ratings data where $R$ people
rate $C$ items and then it is usual that the data are
very unbalanced with a haphazard observational pattern
in which only $N\ll R\times C$ of the $(x_{ij},Y_{ij})$ pairs are observed.
The crossed random effects setting is significantly more difficult
than a hierarchical model with just $a_i+e_{ij}$ but no $b_{j}$
term. Then the observations for index $j$ are `nested within' those
for each level of index $i$. The result is that the covariance matrix
of all observed $Y_{ij}$ values has a block diagonal structure
allowing GLS to be computed in $O(N)$ time.
Hierarchical models are very well suited to Bayesian
computation \citep{gelm:hill:2006}.
Crossed random effects are a much greater challenge.
\cite{GO17} find that the Gibbs sampler can take $O(N^{1/2})$
iterations to converge to stationarity, with each iteration
costing $O(N)$ leading once again to $O(N^{3/2})$ cost.
For more examples where
the costs of solving equations versus sampling from a
covariance attain the same rate see
\cite{good:soka:1989} and \cite{RS97}.
As further evidence of the difficulty of this problem,
the Gibbs sampler was one of nine MCMC algorithms that
\cite{GO17} found to be unsatisfactory.
Furthermore, \cite{lme4} removed the {\tt mcmcsamp} function from the R package lme4
because it was considered unreliable even for the problem
of sampling the posterior distribution of the parameters
from previously fitted models, and even for those with random effects variances
near zero.
\cite{papa:robe:zane:2020} present
an exception to the high cost of a Bayesian approach
for crossed random effects. They propose a collapsed Gibbs
sampler that can potentially mix in $O(1)$ iterations.
To prove this rate, they make an extremely stringent
assumption that every index $i=1,\dots,R$ appears in the
same number $N/C$ of observed data points and similarly
every $j=1,\dots,C$ appears in $N/R$ data points.
Such a condition is tantamount to requiring a designed
experiment for the data and it is much stronger than
what their algorithm seems to need in practice.
Under that condition their mixing rate asymptotes
to a quantity $\rho_{\mathrm{aux}}$, described in our discussion section,
that in favorable circumstances is $O(1)$.
They find empirically that their sampler has a cost that
scales well in many data sets where their balance condition
does not hold.
In this paper we study an iterative linear operation,
known as backfitting, for GLS.
Each iteration costs $O(N)$.
The speed of convergence depends on a certain
matrix norm of that iteration, which we exhibit below.
If the norm remains bounded strictly below $1$
as $N\to\infty$, then
the number of iterations to convergence is $O(1)$.
We are able to show that the matrix norm is $O(1)$
with probability tending to one, under conditions where
the number of observations per row (or per column) is random
and even the expected row or column counts may vary,
though in a narrow range.
While this is a substantial weakening of the conditions in
\cite{papa:robe:zane:2020}, it still fails to cover many
interesting cases. Like them, we find empirically that our
algorithm scales much more broadly than under the
conditions for which scaling is proved.
We suspect that the computational infeasibility of GLS leads
many users to use ordinary least squares (OLS) instead.
OLS has two severe problems.
First, it is \myemph{inefficient} with
$\var(\hat\beta_\mathrm{OLS})$ larger than $\var(\hat\beta_\mathrm{GLS})$.
This is equivalent to OLS ignoring some possibly large
fraction of the information in the data.
Perhaps more seriously, OLS is \myemph{naive}.
It produces an estimate of $\var(\hat\beta_\mathrm{OLS})$ that
can be too small by a large factor. That amounts
to overestimating the quantity of information behind $\hat\beta_\mathrm{OLS}$,
also by a potentially large factor.
The naivete of OLS can be countered by using better variance estimates.
One can bootstrap it by resampling the row and column entities as in \cite{pbs}.
There is also a version of Huber-White variance estimation
for this case in econometrics. See for instance \cite{came:gelb:mill:2011}.
While these methods counter the naivete of OLS, the inefficiency of OLS remains.
The method of moments algorithm in \cite{crelin}
gets consistent asymptotically normal estimates
of $\beta$, $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$.
It produces a GLS estimate $\hat\beta$ that is more
efficient than OLS but still not fully efficient
because it accounts for correlations due to only one of the
two crossed random effects. While inefficient, it is not naive
because its estimate of $\var(\hat\beta)$
properly accounts for variance due to $a_i$, $b_{j}$ and $e_{ij}$.
In this paper we get a GLS estimate $\hat\beta$
that takes account of all three variance components,
making it efficient.
We also provide an estimate of $\var(\hat\beta)$ that accounts
for all three components, so our estimate is not naive.
Our algorithm requires consistent estimates of the variance components
$\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ in computing $\hat\beta$ and $\widehat\var(\hat\beta)$.
We use the method of moments estimators from \cite{GO17} that can
be computed in $O(N)$ work.
By \citet[Theorem 4.2]{GO17}, these estimates of $\sigma^2_A$, $\sigma^2_B$ and
$\sigma^2_E$ are asymptotically uncorrelated and each of them has the same
asymptotic variance it would have had were the other two variance components equal to zero.
It is not known whether they are optimally estimated, much less optimal
subject to an $O(N)$ cost constraint.
The variance component estimates are known to be
asymptotically normal \citep{gao:thesis}.
The rest of this paper is organized as follows.
Section~\ref{sec:missing} introduces our notation and assumptions
for missing data.
Section~\ref{sec:backfitting} presents the backfitting algorithm
from \cite{buja:hast:tibs:1989}. That algorithm was defined for
smoothers, but we are able to cast the estimation of random effect
parameters as a special kind of smoother.
Section~\ref{sec:normconvergence} proves our result about
backfitting being convergent with a probability tending to one
as the problem size increases.
Section~\ref{sec:empiricalnorms} shows numerical measures
of the matrix norm of the backfitting operator. It remains
bounded below and away from one under more conditions than our theory shows.
We find that even one iteration of
the lmer function in the lme4 package \cite{lme4} has a cost that grows like $N^{3/2}$
in one setting and like $N^{2.1}$ in another, sparser one.
The backfitting algorithm has cost $O(N)$ in both of these cases.
Section~\ref{sec:stitch} illustrates our GLS algorithm
on some data provided to us by Stitch Fix. These are customer
ratings of items of clothing on a ten point scale.
Section~\ref{sec:discussion} has a discussion of these results.
An appendix contains some regression output for the
Stitch Fix data.
\section{Missingness}\label{sec:missing}
We adopt the notation from \cite{crelin}.
We let $Z_{ij}\in\{0,1\}$ take the value $1$
if $(x_{ij},Y_{ij})$ is observed and $0$ otherwise,
for $i=1,\dots,R$ and $j=1,\dots,C$.
In many of the contexts we consider, the missingness
is not at random and is potentially informative.
Handling such problems is outside the scope of
this paper, apart from a brief discussion in Section~\ref{sec:discussion}.
It is already a sufficient challenge to work without
informative missingness.
The matrix $Z\in\{0,1\}^{R\times C}$, with elements $Z_{ij}$
has $N_{i\sumdot} =\sum_{j=1}^CZ_{ij}$ observations
in `row $i$' and $N_{\sumdot j}=\sum_{i=1}^RZ_{ij}$ observations
in `column $j$'.
We often drop the limits of summation so that $i$
is always summed over $1,\dots,R$ and $j$ over $1,\dots,C$.
When we need additional symbols for row and column indices we
use $r$ for rows and $s$ for columns.
The total sample size is $N=\sum_i\sum_jZ_{ij}
=\sum_iN_{i\sumdot} = \sum_jN_{\sumdot j}$.
There are two co-observation matrices, $Z^\mathsf{T} Z$ and $ZZ^\mathsf{T}$.
Here $(Z^\tran Z)_{js}=\sum_iZ_{ij}Z_{is}$ gives the number of rows in which
data from both columns $j$ and $s$ were observed,
while $(ZZ^\tran)_{ir}=\sum_jZ_{ij}Z_{rj}$ gives the number of
columns in which data from both rows $i$ and $r$ were observed.
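For illustration, these quantities are cheap to form from the observed pairs.
A minimal R sketch (the factor vectors {\tt fA} and {\tt fB} of length $N$, holding the row and column of each observation, are our own notation; it assumes each $(i,j)$ pair is observed at most once) is:
\begin{verbatim}
library(Matrix)
# sparse observation matrix Z in {0,1}^{R x C}
Z   <- sparseMatrix(i = as.integer(fA), j = as.integer(fB), x = 1,
                    dims = c(nlevels(fA), nlevels(fB)))
Ni  <- rowSums(Z)      # N_{i.}, observations in row i
Nj  <- colSums(Z)      # N_{.j}, observations in column j
ZtZ <- crossprod(Z)    # (Z^T Z)_{js}, co-observation of columns j and s
ZZt <- tcrossprod(Z)   # (Z Z^T)_{ir}, co-observation of rows i and r
\end{verbatim}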
In our regression models, we treat $Z_{ij}$ as nonrandom. We are conditioning
on the actual pattern of observations in our data.
When we study the rate at which our backfitting algorithm converges, we
consider $Z_{ij}$ drawn at random. That is, the analyst is solving a GLS
conditionally on the pattern of observations and missingness, while
we study the convergence rates that analyst will see for data
drawn from a missingness mechanism defined in Section~\ref{sec:modelz}.
If we place all of the $Y_{ij}$ into a vector $\mathcal{Y}\in\mathbb{R}^N$ and $x_{ij}$
compatibly into a matrix $\mathcal{X}\in\mathbb{R}^{N\times p}$, then
the naive and inefficient OLS estimator is
\begin{align}\label{eq:bhatols}
\hat\beta_\mathrm{OLS} = (\mathcal{X}^\mathsf{T} \mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{Y}.
\end{align}
This can be computed in $O(Np^2)$ work. We prefer to use
the GLS estimator
\begin{align}\label{eq:bhatgls}\hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{V}^{-1}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{V}^{-1}\mathcal{Y},
\end{align}
where $\mathcal{V}\in\mathbb{R}^{N\times N}$ contains all of the $\cov(Y_{ij},Y_{rs})$ in
an ordering compatible with $\mathcal{X}$ and $\mathcal{Y}$. A naive algorithm costs $O(N^3)$
to solve for $\hat\beta_\mathrm{GLS}$.
It can actually be solved through a Cholesky decomposition of an $(R+C)\times (R+C)$ matrix
\citep{sear:case:mccu:1992}.
That has cost $O(R^3+C^3)$.
Now $N\le RC$, with equality only for completely observed data.
Therefore $\max(R,C)\ge \sqrt{N}$, and so $R^3+C^3\ge N^{3/2}$.
When the data are sparsely enough observed it is possible
that $\min(R,C)$ grows more rapidly than $N^{1/2}$.
In a numerical example in Section~\ref{sec:empiricalnorms} we have $\min(R,C)$
growing like $N^{0.70}$.
In a hierarchical model, with $a_i$ but no $b_{j}$ we would find
$\mathcal{V}$ to be block diagonal and then
$\hat\beta_\mathrm{GLS}$ could be computed in $O(N)$ work.
A reviewer reminds us that it has been known since \cite{stra:1969} that
systems of equations can be solved more quickly than cubic time.
Despite that, current software is still dominated by cubic time algorithms.
Also, none of the known methods attains quadratic time,
and so in our setting the cost would be at least a multiple
of $(R+C)^{2+\gamma}$ for some $\gamma>0$ and hence not $O(N)$.
We can write our crossed effects model as
\begin{align}\label{eq:cemodelviaz}
\mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} + \mathcal{Z}_B\boldsymbol{b} + \boldsymbol{e}
\end{align}
for matrices $\mathcal{Z}_A\in\{0,1\}^{N\times R}$ and $\mathcal{Z}_B\in\{0,1\}^{N\times C}$.
The $i$'th column of $\mathcal{Z}_A$ has ones for all of the $N$ observations that
come from row $i$ and zeroes elsewhere. The definition of $\mathcal{Z}_B$ is analogous.
The observation matrix can be written $Z = \mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B$.
The vector $\boldsymbol{e}$ has all $N$ values of $e_{ij}$ in compatible order.
Vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ contain the row and column random effects
$a_i$ and $b_{j}$.
In this notation
\begin{equation}
\label{eq:Vee}
\mathcal{V} = \mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\sigma^2_A + \mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}\sigma^2_B + I_N\sigma^2_E,
\end{equation}
where $I_N$ is the $N \times N$ identity matrix.
Our main computational problem is to get
a value for $\mathcal{U}=\mathcal{V}^{-1}\mathcal{X}\in\mathbb{R}^{N\times p}$.
To do that we iterate towards a solution $\boldsymbol{u}\in\mathbb{R}^N$ of $\mathcal{V} \boldsymbol{u}=\boldsymbol{x}$,
where $\boldsymbol{x}\in\mathbb{R}^N$ is one of the $p$ columns of $\mathcal{X}$.
After that, finding
\begin{equation}
\label{eq:betahat}
\hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{U})^{-1}(\mathcal{Y}^\mathsf{T}\mathcal{U})^\mathsf{T}
\end{equation}
is not expensive, because $\mathcal{X}^\mathsf{T}\mathcal{U}\in\mathbb{R}^{p\times p}$ and we suppose that $p$ is not large.
If the data ordering in $\mathcal{Y}$ and elsewhere sorts by index $i$, breaking ties by index $j$,
then $\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\in\{0,1\}^{N\times N}$ is
a block matrix with $R$ blocks of ones
of size $N_{i\sumdot}\times N_{i\sumdot}$ along the diagonal and zeroes elsewhere.
The matrix $\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}$ will not be block diagonal in that ordering.
Instead $P\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T} P^\mathsf{T}$ will be block diagonal with
$N_{\sumdot j}\times N_{\sumdot j}$ blocks of ones on the diagonal,
for a suitable $N\times N$ permutation matrix $P$.
\section{Backfitting algorithms}\label{sec:backfitting}
Our first goal is to develop computationally efficient ways to
solve the GLS problem \eqref{eq:betahat} for the linear mixed model~\eqref{eq:cemodelviaz}.
We use the backfitting algorithm that
\cite{hast:tibs:1990} and \cite{buja:hast:tibs:1989}
use to fit additive models.
We write $\mathcal{V}$ in (\ref{eq:Vee}) as $\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B
+I_N\right)$ with $\lambda_A=\sigma^2_E/\sigma^2_A$ and
$\lambda_B=\sigma^2_E/\sigma^2_B$,
and define $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$.
Then the GLS estimate of $\beta$ is
\begin{align}
\hat\beta_{\mathrm{GLS}}&=\arg\min_\beta (\mathcal{Y}-\mathcal{X}\beta)^\mathsf{T}\mathcal{W}(\mathcal{Y}-\mathcal{X}\beta)
= (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{Y}\label{eq:betahatw}
\end{align}
and $\cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}$.
It is well known (e.g., \cite{robinson91:_that_blup}) that we can obtain
$\hat\beta_{\mathrm{GLS}}$ by solving the
following penalized least-squares problem
\begin{align}\label{eq:minboth}
\min_{\beta,\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2
+\lambda_A\Vert\boldsymbol{a}\Vert^2 +\lambda_B\Vert\boldsymbol{b}\Vert^2.
\end{align}
Then $\hat\beta=\hat\beta_{\mathrm{GLS}}$ and $\hat \boldsymbol{a}$ and $\hat \boldsymbol{b}$ are the
best linear unbiased prediction (BLUP) estimates
of the random effects.
This derivation works for any number of factors, but it is
instructive to carry it through initially for one.
\subsection{One factor}\label{sec:one-factor}
For a single factor,
we simply drop the $\mathcal{Z}_B\boldsymbol{b}$ term from \eqref{eq:cemodelviaz} to get
\begin{equation*}
\mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} +\boldsymbol{e}.
\end{equation*}
Then
$\mathcal{V}=\cov(\mathcal{Z}_A\boldsymbol{a}+\boldsymbol{e})= \sigma^2_A\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T} +\sigma^2_E I_N$, and $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$ as before.
The penalized least squares problem is to solve
\begin{align}\label{eq:equivmina}
\min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2.
\end{align}
We show the details as we need them for a later derivation.
The normal equations from~\eqref{eq:equivmina} yield
\begin{align}
\boldsymbol{0} & = \mathcal{X}^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a}),\quad\text{and}\label{eq:normbeta}\\
\boldsymbol{0} & = \mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a})
-\lambda_A\hat\boldsymbol{a}.\label{eq:normbsa}
\end{align}
Solving~\eqref{eq:normbsa} for $\hat\boldsymbol{a}$ and multiplying the solution by $\mathcal{Z}_A$ yields
$$
\mathcal{Z}_A\hat\boldsymbol{a} = \mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta)
\equiv \mathcal{S}_A(\mathcal{Y}-\mathcal{X}\hat\beta),
$$
for an $N\times N$ ridge regression ``smoother matrix'' $\mathcal{S}_A$.
As we explain below this smoother matrix implements shrunken within-group means.
Then substituting $\mathcal{Z}_A\hat\boldsymbol{a}$ into equation~\eqref{eq:normbeta}
yields
\begin{equation}
\label{eq:onefactor}
\hat\beta = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{Y}.
\end{equation}
Using the Sherman-Morrison-Woodbury (SMW) identity, one can show that $\mathcal{W}=I_N-\mathcal{S}_A$ and
hence $\hat\beta$ above equals $\hat\beta_\mathrm{GLS}$
from~\eqref{eq:betahatw}. This is not in itself a new discovery; see
for example \cite{robinson91:_that_blup} or \cite{hast:tibs:1990}
(Section 5.3.3).
To compute the solution in (\ref{eq:onefactor}), we need to compute
$\mathcal{S}_A \mathcal{Y}$ and $\mathcal{S}_A\mathcal{X}$. The heart of the computation in
$\mathcal{S}_A \mathcal{Y}$
is $(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}\mathcal{Y}$.
But $\mathcal{Z}_A^\mathsf{T}
\mathcal{Z}_A=\mathrm{diag}(N_{1\text{\tiny$\bullet$}},N_{2\text{\tiny$\bullet$}},\ldots,N_{R\text{\tiny$\bullet$}})$ and we
see that all we are doing is computing an $R$-vector of shrunken means of the elements
of $\mathcal{Y}$ at each level of the factor $A$; the $i$th element is $\sum_jZ_{ij} Y_{ij}/(N_{i\text{\tiny$\bullet$}}+\lambda_A)$.
This involves a single pass through the $N$ elements of $Y$,
accumulating the sums into $R$ registers, followed by an elementwise
scaling of the $R$ components. Then pre-multiplication by $\mathcal{Z}_A$ simply puts these
$R$ shrunken means back into an
$N$-vector in the appropriate positions. The total cost is $O(N)$.
Likewise $\mathcal{S}_A\mathcal{X}$ does the
same separately for each of the columns of $\mathcal{X}$.
Hence the entire computational cost for \eqref{eq:onefactor} is $O(Np^2)$, the same order as regression on $\mathcal{X}$.
What is also clear is that the indicator matrix
$\mathcal{Z}_A$ is not actually needed here; instead all we need to carry out
these computations is the
factor vector $f_A$ that records the level of factor $A$ for each
of the $N$ observations. In the R language \citep{R:lang:2015} the following pair of operations does
the computation:
\begin{verbatim}
# R shrunken means of y, one per level of fA
hat_a = tapply(y, fA, sum) / (table(fA) + lambdaA)
# expand back to an N-vector: the fit Z_A hat_a
hat_y = hat_a[fA]
\end{verbatim}
where {\tt fA} is a categorical variable (factor) $f_A$ of length $N$ containing the row indices $i$ in an order compatible with $Y\in\mathbb{R}^N$ (represented as {\tt y})
and {\tt lambdaA} is $\lambda_A=\sigma^2_E/\sigma^2_A$.
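Putting the pieces together, a minimal and purely illustrative R sketch of the one-factor GLS fit \eqref{eq:onefactor} might look as follows; the function names are ours, the symmetry of $\mathcal{S}_A$ is used to form $\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{Y}$ from the residualized $\mathcal{X}$, and every level of {\tt fA} is assumed to appear in the data.
\begin{verbatim}
SA_apply <- function(v, fA, lambdaA) {
  # R shrunken means, expanded back to an N-vector
  shrunk <- as.numeric(tapply(v, fA, sum)) / (as.numeric(table(fA)) + lambdaA)
  shrunk[fA]
}
one_factor_gls <- function(y, X, fA, lambdaA) {
  Xt <- X - apply(X, 2, SA_apply, fA = fA, lambdaA = lambdaA)  # (I_N - S_A) X
  drop(solve(t(X) %*% Xt, t(Xt) %*% y))                        # the GLS estimate
}
\end{verbatim}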
\subsection{Two factors}\label{sec:two-factors}
With two factors we face the problem of incompatible block diagonal
matrices discussed in Section~\ref{sec:missing}.
Define $\mathcal{Z}_G=(\mathcal{Z}_A\!:\!\mathcal{Z}_B)$ ($R+C$ columns),
$\mathcal{D}_\lambda=\mathrm{diag}(\lambda_AI_R,\lambda_BI_C)$,
and $\boldsymbol{g}^\mathsf{T}=(\boldsymbol{a}^\mathsf{T},\boldsymbol{b}^\mathsf{T})$.
Then solving \eqref{eq:minboth} is equivalent to
\begin{align}\label{eq:ming}
\min_{\beta,\boldsymbol{g}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_G\boldsymbol{g}\Vert^2
+\boldsymbol{g}^\mathsf{T}\mathcal{D}_\lambda\boldsymbol{g}.
\end{align}
A derivation similar to that used in the one-factor case gives
\begin{equation}
\label{eq:gfactor}
\hat\beta =
H_\mathrm{GLS}\mathcal{Y}\quad\text{for}\quad
H_\mathrm{GLS} = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G),
\end{equation}
where the hat matrix $H_\mathrm{GLS}$ is written in terms of
a smoother matrix
\begin{equation}
\label{eq:defcsg}
\mathcal{S}_G=\mathcal{Z}_G(\mathcal{Z}_G^\mathsf{T} \mathcal{Z}_G + \mathcal{D}_\lambda)^{-1}\mathcal{Z}_G^\mathsf{T}.
\end{equation}
We can again use SMW to show that $I_N-\mathcal{S}_G=\mathcal{W}$ and hence the
solution $\hat\beta=\hat\beta_{\mathrm{GLS}}$ in \eqref{eq:betahatw}.
But in applying $\mathcal{S}_G$ we do not enjoy the computational
simplifications that occurred in the one factor case, because
\begin{equation*}
\mathcal{Z}_G^\mathsf{T}\mathcal{Z}_G=
\left(
\begin{array}{cc}
\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B\\[0.25ex]
\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B
\end{array}
\right)
=\begin{pmatrix} \mathrm{diag}(N_{i\sumdot}) & Z\\
Z^\mathsf{T} & \mathrm{diag}(N_{\sumdot j})
\end{pmatrix},
\end{equation*}
where $Z\in\{0,1\}^{R\times C}$ is the observation matrix
which has no special structure.
Therefore we need to invert an $(R+C)\times (R+C)$ matrix to apply
$\mathcal{S}_G$ and hence to solve
\eqref{eq:gfactor}, at a cost of at least $O(N^{3/2})$ (see Section~\ref{sec:missing}).
Rather than group $\mathcal{Z}_A$ and $\mathcal{Z}_B$, we keep them separate, and
develop an algorithm to apply the operator $\mathcal{S}_G$ efficiently.
Consider a generic response vector $\mathcal{R}$ (such as $\mathcal{Y}$ or a column of $\mathcal{X}$) and the optimization problem
\begin{align}\label{eq:minab}
\min_{\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{R}-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2
+\lambda_A\|\boldsymbol{a}\|^2+\lambda_B\|\boldsymbol{b}\|^2.
\end{align}
Using $\mathcal{S}_G$ defined at~\eqref{eq:defcsg}
in terms of the indicator variables $\mathcal{Z}_G\in\{0,1\}^{N\times (R+C)}$
it is clear that the fitted values are given by
$\widehat\mathcal{R}=\mathcal{S}_G\mathcal{R}$.
Solving (\ref{eq:minab}) would result in two blocks of estimating
equations similar to equations \eqref{eq:normbeta} and \eqref{eq:normbsa}.
These can be written
\begin{align}\label{eq:backfit}
\begin{split}
\mathcal{Z}_A\hat\boldsymbol{a} & = \mathcal{S}_A(\mathcal{R}-\mathcal{Z}_B\hat\boldsymbol{b}),\quad\text{and}\\
\mathcal{Z}_B\hat\boldsymbol{b} & = \mathcal{S}_B(\mathcal{R}-\mathcal{Z}_A\hat\boldsymbol{a}),
\end{split}
\end{align}
where
$\mathcal{S}_A=\mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$ is again
the ridge regression smoothing matrix for row effects and similarly
$\mathcal{S}_B=\mathcal{Z}_B(\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B + \lambda_BI_C)^{-1}\mathcal{Z}_B^\mathsf{T}$ the
smoothing matrix for column effects.
We solve these equations iteratively by block coordinate descent,
also known as backfitting.
The iterations converge to the solution
of~\eqref{eq:minab} \citep{buja:hast:tibs:1989, hast:tibs:1990}.
It is evident that $\mathcal{S}_A,\mathcal{S}_B\in\mathbb{R}^{N\times N}$
are both symmetric matrices. It follows that the limiting smoother
$\mathcal{S}_G$ formed by combining them is also symmetric. See \citet[page 120]{hast:tibs:1990}.
We will need this result later for an important computational shortcut.
Here the simplifications we enjoyed in the one-factor case once again
apply. Each step applies its operator to a vector
(the terms in parentheses on the right hand side in
(\ref{eq:backfit})). For both $\mathcal{S}_A$ and $\mathcal{S}_B$ these are
simply the shrunken-mean operations described for the one-factor case,
separately for factor $A$ and $B$ each time. As before, we do not need to
actually construct $\mathcal{Z}_B$, but simply use a factor $f_B$
that records the level of factor $B$ for each of the $N$ observations.
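A minimal R sketch of this block coordinate descent \eqref{eq:backfit} for a generic response (names ours; it uses the uncentered shrunken-mean updates and assumes every level of {\tt fA} and {\tt fB} is observed) is:
\begin{verbatim}
backfit_ab <- function(r, fA, fB, lambdaA, lambdaB, maxit = 100, tol = 1e-8) {
  denomA <- as.numeric(table(fA)) + lambdaA   # N_{i.} + lambda_A
  denomB <- as.numeric(table(fB)) + lambdaB   # N_{.j} + lambda_B
  a <- numeric(nlevels(fA)); b <- numeric(nlevels(fB))
  for (k in seq_len(maxit)) {
    a    <- as.numeric(tapply(r - b[fB], fA, sum)) / denomA  # update a given b
    bnew <- as.numeric(tapply(r - a[fA], fB, sum)) / denomB  # update b given a
    if (sum((bnew - b)^2) <= tol * (sum(b^2) + 1)) { b <- bnew; break }
    b <- bnew
  }
  list(a = a, b = b, fit = a[fA] + b[fB])
}
\end{verbatim}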
The above description holds for a generic response $\mathcal{R}$; we apply that algorithm (in
parallel) to $\mathcal{Y}$ and each column of $\mathcal{X}$ to obtain
the quantities $\mathcal{S}_G\mathcal{X}$ and $\mathcal{S}_G\mathcal{Y}$
that we need to compute $H_{\mathrm{GLS}}\mathcal{Y}$ in \eqref{eq:gfactor}.
Now solving (\ref{eq:gfactor}) is $O(Np^2)$ plus a negligible $O(p^3)$ cost.
These computations deliver $\hat\beta_{\mathrm{GLS}}$; if the BLUP
estimates $\hat\boldsymbol{a}$ and $\hat{\boldsymbol{b}}$ are also required, the same algorithm
can be applied to the response $\mathcal{Y}-\mathcal{X}\hat\beta_{\mathrm{GLS}}$, retaining the $\boldsymbol{a}$ and
$\boldsymbol{b}$ at the final iteration.
We can also write
\begin{equation}\label{eq:covbhat}
\cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E(\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}.
\end{equation}
It is also clear that we can trivially extend this approach to
accommodate any number of factors.
\subsection{Centered operators}
\label{sec:centered-operators}
The matrices $\mathcal{Z}_A$ and $\mathcal{Z}_B$ are factor indicator matrices
(``one-hot encoders''), so every row of each sums to one.
As a result their column spaces intersect nontrivially, both with each other
and with that of $\mathcal{X}$, which always includes an intercept.
This overlap can cause backfitting to converge more slowly.
In this section we show how to counter this intersection of column spaces
and thereby speed convergence.
We work with this two-factor model
\begin{align}\label{eq:equivmina1}
\min_{\beta,\boldsymbol{a},\boldsymbol{b}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2+\lambda_B\Vert\boldsymbol{b}\Vert^2.
\end{align}
\begin{lemma}
If $\mathcal{X}$ in model~\eqref{eq:equivmina1}
includes a column of ones (intercept), and $\lambda_A>0$
and $\lambda_B>0$, then the solutions for $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfy
$\sum_{i=1}^R a_i=0$ and $\sum_{j=1}^C b_j=0$.
\end{lemma}
\begin{proof}
It suffices to show this for one factor and with $\mathcal{X}=\mathbf{1}$. The
objective is now
\begin{align}\label{eq:equivsimp}
\min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathbf{1}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2.
\end{align}
Notice that for any candidate solution $(\beta,\{a_i\}_1^R)$, the alternative
solution $(\beta+c,\{a_i-c\}_1^R)$ leaves the loss part of
\eqref{eq:equivsimp} unchanged, since the row sums of $\mathcal{Z}_A$ are all
one. Hence if $\lambda_A>0$, any candidate with $\sum_{i=1}^Ra_i\ne0$ can be improved by picking
$c$ to minimize the penalty term
$\sum_{i=1}^R(a_i-c)^2$, namely $c=(1/R)\sum_{i=1}^Ra_i$.
At the minimizer no such improvement is possible, and so $\sum_{i=1}^R\hat a_i=0$.
\end{proof}
It is natural then to solve for $\boldsymbol{a}$ and $\boldsymbol{b}$ with these
constraints enforced, instead of waiting for them
to simply emerge in the process of iteration.
\begin{theorem}\label{thm:smartcenter}
Consider the generic optimization problem
\begin{align}\label{eq:equivsimp2}
\min_{\boldsymbol{a}} \Vert \mathcal{R} -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A
\Vert\boldsymbol{a}\Vert^2\quad \mbox{subject to } \sum_{i=1}^Ra_i=0.
\end{align}
Define the partial sum vector $\mathcal{R}^+ = \mathcal{Z}_A^\mathsf{T}\mathcal{R}$
with components $\mathcal{R}^+_{i} = \sum_jZ_{ij}\mathcal{R}_{ij}$,
and let
$$w_i=\frac{(N_{i\sumdot}+\lambda_A)^{-1}}{\sum_{r}(N_{r\sumdot}+\lambda_A)^{-1}}.$$
Then the solution $\hat \boldsymbol{a}$ is given by
\begin{align}\label{eq:ahatsoln}
\hat
a_i=\frac{\mathcal{R}^+_{i}-\sum_{r}w_r\mathcal{R}^+_{r}}{N_{i\text{\tiny$\bullet$}}+\lambda_A},
\quad i=1,\ldots,R.
\end{align}
Moreover, the fit is given by
$$\mathcal{Z}_A\hat\boldsymbol{a}=\tilde\mathcal{S}_A\mathcal{R},$$ where $\tilde \mathcal{S}_A$ is a
symmetric operator.
\end{theorem}
The computations are a simple modification of the non-centered case.
\begin{proof}
Let $M$ be an $R\times R$ orthogonal matrix with first column
$\mathbf{1}/\sqrt{R}$. Then $\mathcal{Z}_A\boldsymbol{a}=\mathcal{Z}_AMM^\mathsf{T}\boldsymbol{a}=\tilde
\mathcal{G}\tilde\boldsymbol{\gamma}$ for $\tilde\mathcal{G}=\mathcal{Z}_AM$ and
$\tilde\boldsymbol{\gamma}=M^\mathsf{T}\boldsymbol{a}$.
Reparametrizing in this way leads to
the equivalent problem
\begin{align}\label{eq:equivsimp2repar}
\min_{\tilde\boldsymbol{\gamma}} \Vert \mathcal{R} -\tilde\mathcal{G}\tilde\boldsymbol{\gamma}\Vert^2 + \lambda_A
\Vert\tilde\boldsymbol{\gamma}\Vert^2,\quad \mbox{subject to } \tilde\gamma_1=0.
\end{align}
To solve (\ref{eq:equivsimp2repar}), we simply drop the first column of
$\tilde \mathcal{G}$. Let $\mathcal{G}=\mathcal{Z}_AQ$ where $Q$ is the matrix $M$ omitting
the first column, and $\boldsymbol{\gamma}$ the corresponding subvector of
$\tilde\boldsymbol{\gamma}$ having $R-1$ components. We now solve
\begin{align}\label{eq:equivsimp3}
\min_{\boldsymbol{\gamma}} \Vert \mathcal{R} -\mathcal{G}\boldsymbol{\gamma}\Vert^2 + \lambda_A
\Vert\boldsymbol{\gamma}\Vert^2
\end{align}
with no constraints, and the solution is $\hat\boldsymbol{\gamma}=(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}$.
The fit is given by $\mathcal{G}\hat\boldsymbol{\gamma}=\mathcal{G}(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A
I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}=\tilde \mathcal{S}_A\mathcal{R}$, and $\tilde \mathcal{S}_A$ is
clearly a symmetric operator.
To obtain the simplified expression for $\hat\boldsymbol{a}$, we write
\begin{align}
\mathcal{G}\hat\boldsymbol{\gamma}&=\mathcal{Z}_AQ(Q^\mathsf{T}\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A Q+\lambda_A
I_{R-1})^{-1}Q^\mathsf{T}
\mathcal{Z}_A^\mathsf{T}\mathcal{R}\nonumber\\
&=\mathcal{Z}_AQ(Q^\mathsf{T} D Q+\lambda_A
I_{R-1})^{-1}Q^\mathsf{T}
\mathcal{R}^+\label{eq:tosimplify}\\
&=\mathcal{Z}_A\hat\boldsymbol{a},\nonumber
\end{align}
with $D=\mathrm{diag}(N_{i\sumdot})$.
We write $H=Q(Q^\mathsf{T} D Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T}$
and $\tilde
Q=(D+\lambda_A I_R)^{\frac12}Q$, and let
\begin{align}
\tilde H&= (D+\lambda_A I_R)^{\frac12} H (D+\lambda_A
I_R)^{\frac12}
= \tilde Q(\tilde Q^\mathsf{T}\tilde Q)^{-1}\tilde
Q^\mathsf{T}.\label{eq:Qproj}
\end{align}
Now (\ref{eq:Qproj}) is a projection matrix in $\mathbb{R}^R$ onto an
$(R-1)$-dimensional subspace. Let $\tilde q = (D+\lambda_A
I_R)^{-\frac12}\mathbf{1}.$ Then $\tilde q^\mathsf{T} \tilde Q={\boldsymbol{0}}$, and so
$$\tilde H=I_R-\frac{\tilde q\tilde q^\mathsf{T}}{\Vert \tilde
q\Vert^2}.$$
Unraveling this expression we get
$$ H=(D+\lambda_AI_R)^{-1}
-(D+\lambda_AI_R)^{-1}\frac{\mathbf{1}\bone^\mathsf{T}}{\mathbf{1}^\mathsf{T}(D+\lambda_AI_R)^{-1}\mathbf{1}}(D+\lambda_AI_R)^{-1}.$$
With $\hat\boldsymbol{a}=H\mathcal{R}^+$ in (\ref{eq:tosimplify}), this gives the
expressions for each $\hat a_i$ in~\eqref{eq:ahatsoln}.
Finally, $\tilde \mathcal{S}_A = \mathcal{Z}_A H\mathcal{Z}_A^\mathsf{T}$ is symmetric.
\end{proof}
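In code, the centered fit of Theorem~\ref{thm:smartcenter} is only a small change to the shrunken means. A minimal R sketch (function name ours, all levels of {\tt fA} assumed observed) is:
\begin{verbatim}
centered_SA_apply <- function(r, fA, lambdaA) {
  Rplus <- as.numeric(tapply(r, fA, sum))              # partial sums R_i^+
  Ni    <- as.numeric(table(fA))                       # N_{i.}
  w     <- (1 / (Ni + lambdaA)) / sum(1 / (Ni + lambdaA))
  ahat  <- (Rplus - sum(w * Rplus)) / (Ni + lambdaA)   # hat a_i from the theorem
  ahat[fA]                                             # the fit Z_A hat a
}
\end{verbatim}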
\subsection{Covariance matrix for $\hat\beta_{\mathrm{GLS}}$ with centered operators}
\label{sec:covar-matr-hatb}
In Section~\ref{sec:two-factors} we saw in (\ref{eq:covbhat}) that we
get a simple expression for
$\cov(\hat\beta_{\mathrm{GLS}})$. This simplicity relies on the fact that
$I_N-\mathcal{S}_G=\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$, and the usual cancelation occurs when we
use the sandwich formula to compute this covariance.
When we backfit with our centered smoothers we get a modified residual
operator
$I_N-\widetilde \mathcal{S}_G$ such that the analog of (\ref{eq:gfactor})
still gives us the required coefficient estimate:
\begin{equation}
\label{eq:gfactorc}
\hat\beta_{\mathrm{GLS}} = (\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{Y}.
\end{equation}
However, $I_N-\widetilde\mathcal{S}_G\neq \sigma^2_E\mathcal{V}^{-1}$, and so now we need to
resort to the sandwich formula
$ \cov(\hat\beta_{\mathrm{GLS}})=H_\mathrm{GLS} \mathcal{V} H_\mathrm{GLS}^\mathsf{T}$,
with $H_\mathrm{GLS}$ defined as in \eqref{eq:gfactor} but using $\widetilde\mathcal{S}_G$ in place of $\mathcal{S}_G$, as in \eqref{eq:gfactorc}.
Expanding this we find that
$\cov(\hat\beta_{\mathrm{GLS}})$ equals
\begin{align*}
(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)
\cdot \mathcal{V}\cdot (I_N-\widetilde\mathcal{S}_G)\mathcal{X}(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}.
\end{align*}
While this expression might appear daunting, the computations are simple.
Note first that while $\hat\beta_{\mathrm{GLS}}$ can be computed via
$\tilde\mathcal{S}_G\mathcal{X}$ and $\tilde\mathcal{S}_G\mathcal{Y}$ this expression for $\cov(\hat\beta_{\mathrm{GLS}})$
also involves $\mathcal{X}^\mathsf{T} \tilde\mathcal{S}_G$. When we use the centered operator
from Theorem~\ref{thm:smartcenter} we get a symmetric matrix $\tilde \mathcal{S}_G$.
Let $\widetilde \mathcal{X}=(I_N-\widetilde\mathcal{S}_G)\mathcal{X}$, the residual
matrix after backfitting each column of $\mathcal{X}$ using these centered operators. Then because
$\widetilde\mathcal{S}_G$ is symmetric, we have
\begin{align}
\hat\beta_{\mathrm{GLS}}&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\mathcal{Y},\quad\text{and} \notag\\
\cov(\hat\beta_{\mathrm{GLS}})&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\cdot\mathcal{V}\cdot\widetilde\mathcal{X}(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}.\label{eq:covbhatgls}
\end{align}
Since $\mathcal{V}=\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B
+I_N\right)$ (two low-rank matrices plus the identity), we can
compute $\mathcal{V}\cdot \widetilde\mathcal{X}$ very efficiently, and hence also the
covariance matrix in~\eqref{eq:covbhatgls}.
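Concretely, $\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\boldsymbol{v}$ just replicates row sums of $\boldsymbol{v}$ over the observations in each row, and similarly for $\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}\boldsymbol{v}$, so each column of $\mathcal{V}\widetilde\mathcal{X}$ takes $O(N)$ work. A minimal R sketch (function name ours):
\begin{verbatim}
V_times <- function(v, fA, fB, sigma2A, sigma2B, sigma2E) {
  # ave(v, f, FUN = sum) replicates each group sum to every member of the group
  sigma2A * ave(v, fA, FUN = sum) +
  sigma2B * ave(v, fB, FUN = sum) + sigma2E * v
}
\end{verbatim}
Applying this to each column of $\widetilde\mathcal{X}$ gives $\mathcal{V}\widetilde\mathcal{X}$, and hence the covariance in~\eqref{eq:covbhatgls}, in $O(Np)$ work.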
The entire algorithm is summarized in Section~\ref{sec:wholeshebang}.
\section{Convergence of the matrix norm}\label{sec:normconvergence}
In this section we prove a bound on the norm of the matrix
that implements backfitting for our random effects $\boldsymbol{a}$ and $\boldsymbol{b}$
and show how this controls the number of iterations required.
In our algorithm, backfitting is applied to $\mathcal{Y}$ as well as to each non-intercept column of $\mathcal{X}$
so we do not need to consider the updates for $\mathcal{X}\hat\beta$.
It is useful to take account of intercept adjustments in backfitting,
via the centerings described in Section~\ref{sec:backfitting},
because the space spanned by the columns of $\mathcal{Z}_A$
intersects the space spanned by the columns of $\mathcal{Z}_B$:
both contain the intercept column of ones.
In backfitting we alternate between adjusting $\boldsymbol{a}$ given $\boldsymbol{b}$ and
$\boldsymbol{b}$ given $\boldsymbol{a}$. At any iteration, the new $\boldsymbol{a}$ is an affine function of
the previous $\boldsymbol{b}$
and then the new $\boldsymbol{b}$ is an affine function of the new $\boldsymbol{a}$.
This makes the new $\boldsymbol{b}$ an affine function of the previous $\boldsymbol{b}$.
We will study that affine function to find conditions where
the updates converge. If the $\boldsymbol{b}$ updates converge, then so must the $\boldsymbol{a}$
updates.
Because the updates are affine they can be written in the form
$$
\boldsymbol{b} \gets M\boldsymbol{b} + \eta
$$
for $M\in\mathbb{R}^{C\times C}$ and $\eta\in\mathbb{R}^C$.
We iterate this update and
it is convenient to start with $\boldsymbol{b} = \boldsymbol{0}$.
We already know from \cite{buja:hast:tibs:1989} that this backfitting
will converge. However, we want more. We want to avoid
having the number of iterations required grow with $N$.
We can write the solution $\boldsymbol{b}$ as
$$
\boldsymbol{b} = \eta +\sum_{k=1}^\infty M^k\eta,
$$
and in computations we truncate this sum after $K$ steps
producing an error $\sum_{k>K}M^k\eta$. We want
$\sup_{\eta\ne0}\Vert \sum_{k>K}M^k\eta\Vert/\Vert\eta\Vert<\epsilon$
to hold with probability tending to one as the sample size
increases for any $\epsilon$, given sufficiently large $K$.
For this it suffices to have the spectral
radius $\lambda_{\max}(M)<1-\delta$ hold with probability
tending to one for some $\delta>0$.
Now for any $1\le p\le\infty$ we have
$$
\lambda_{\max}(M)\le \Vert M\Vert_{p}
\equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}}
\frac{\Vert M\boldsymbol{x}\Vert_p}{\Vert \boldsymbol{x}\Vert_p}.
$$
The explicit formula
$$
\Vert M\Vert_{1}
\equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}}
\frac{\Vert M\boldsymbol{x}\Vert_1}{\Vert \boldsymbol{x}\Vert_1}
= \max_{1\le s\le C}\sum_{j=1}^C | M_{js}|
$$
makes the matrix $L_1$ norm very tractable theoretically,
and so that is the one we study. We look at this and some
other measures numerically in Section~\ref{sec:empiricalnorms}.
\subsection{Updates}
Recall that $Z\in\{0,1\}^{R\times C}$ describes the pattern of observations.
In a model with no intercept, centering the responses and
then taking shrunken means as in \eqref{eq:backfit} would yield
these updates
\begin{align*}
a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A}\quad\text{and}\quad
b_j \gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}.
\end{align*}
The update from the old $\boldsymbol{b}$ to the new $\boldsymbol{a}$ and
then to the new $\boldsymbol{b}$
takes the form $\boldsymbol{b}\gets M\boldsymbol{b}+\eta$ for
$M=M^{(0)}$ where
$$
M^{(0)}_{js} =
\frac1{N_{\sumdot j}+\lambda_B}\sum_i \frac{Z_{is}Z_{ij}}{N_{i\sumdot}+\lambda_A}.$$
This update $M^{(0)}$ alternates shrinkage estimates for $\boldsymbol{a}$
and $\boldsymbol{b}$ but does no centering.
We don't exhibit $\eta$ because it does not affect the
convergence speed.
In the presence of an intercept, we know that $\sum_ia_i=0$
should hold at the solution and we can impose this simply
and very directly by centering the $a_i$, taking
\begin{align*}
a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A}
-\frac1R\sum_{r=1}^R\frac{\sum_s Z_{rs}(Y_{rs}-b_s)}{N_{r\sumdot}+\lambda_A},
\quad\text{and}\\
b_j &\gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}.
\end{align*}
The intercept estimate will then be $\hat\beta_0=(1/C)\sum_jb_j$ which
we can subtract from $b_j$ upon convergence.
This iteration has the update matrix $M^{(1)}$ with
\begin{align}\label{eq:monejs}
M^{(1)}_{js}
&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rs}(Z_{rj}-N_{\sumdot j}/R)}{N_{r\sumdot}+\lambda_A}
\end{align}
after replacing a sum over $i$ by an equivalent one over $r$.
In practice, we prefer to use the weighted centering from
Section~\ref{sec:centered-operators} to center the $a_i$
because it provides a symmetric smoother $\tilde\mathcal{S}_G$
that supports computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
While it is more complicated to analyze, it is easily computable
and it satisfies the optimality condition in Theorem~\ref{thm:smartcenter}.
The algorithm is for a generic response $\mathcal{R}\in\mathbb{R}^N$ such as $\mathcal{Y}$
or a column of $\mathcal{X}$.
Let us illustrate it for the case $\mathcal{R}=\mathcal{Y}$.
We begin with the vector of $N$ values $Y_{ij}-b_{j}$
and so $Y^+_i = \sum_sZ_{is}(Y_{is}-b_s).$
Then
$w_i = (N_{i\sumdot}+\lambda_A)^{-1}/\sum_r(N_{r\sumdot}+\lambda_A)^{-1}$
and the updated $a_r$ is
\begin{align*}
\frac{Y^+_r-\sum_iw_i Y^+_i}{N_{r\sumdot}+\lambda_A}
&=
\frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i
\sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A}.
\end{align*}
Using shrunken averages of $Y_{ij}-a_i$, the new $b_{j}$ are
\begin{align*}
b_{j} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_rZ_{rj}
\biggl(Y_{rj}-
\frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i
\sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A}
\biggr).
\end{align*}
Now $\boldsymbol{b} \gets M\boldsymbol{b}+\eta$ for $M=M^{(2)}$, where
\begin{align}\label{eq:mtwojs}
M^{(2)}_{js}
&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
\biggl(Z_{rs} - \frac{\sum_{i}\frac{Z_{is}}{N_{i\sumdot}+\lambda_{A}}}{\sum_i{\frac{1}{N_{i\sumdot}+\lambda_{A}}}}\biggr).
\end{align}
Our preferred algorithm applies the optimal update
from Theorem~\ref{thm:smartcenter}
to both $\boldsymbol{a}$ and $\boldsymbol{b}$ updates. With that choice we do
not need to decide beforehand which random effects to center
and which to leave uncentered to contain the intercept.
We call the corresponding matrix $M^{(3)}$.
Our theory below analyzes $\Vert M^{(1)}\Vert_1$
and $\Vert M^{(2)}\Vert_1$,
which have simpler expressions than
$\Vert M^{(3)}\Vert_1$.
Update $M^{(0)}$ uses symmetric smoothers
for $A$ and $B$. Both are shrunken
averages. The naive centering update $M^{(1)}$ uses
a non-symmetric smoother
$\mathcal{Z}_A(I_R-\mathbf{1}_R\mathbf{1}_R^\mathsf{T}/R)(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A+\lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$
on the $a_i$ with a symmetric smoother on $b_{j}$
and hence it does not generally produce a symmetric
smoother needed for efficient computation
of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
The update $M^{(2)}$ uses two symmetric
smoothers, one optimal and one a simple shrunken mean.
The update $M^{(3)}$ takes the optimal
smoother for both $A$ and $B$.
Thus both $M^{(2)}$ and $M^{(3)}$
support efficient computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
A subtle point is that these symmetric smoothers are
matrices in $\mathbb{R}^{N\times N}$ while the matrices $M^{(k)}\in\mathbb{R}^{C\times C}$
are not symmetric.
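For the numerical work in Section~\ref{sec:empiricalnorms} it is convenient to form $M^{(2)}$ explicitly from $Z$. A dense R sketch of \eqref{eq:mtwojs} and of the matrix $L_1$ norm, suitable only for moderate $R$ and $C$ (names ours), is:
\begin{verbatim}
M2_matrix <- function(Z, lambdaA = 0, lambdaB = 0) {
  dA   <- 1 / (rowSums(Z) + lambdaA)         # 1 / (N_{i.} + lambda_A)
  dB   <- 1 / (colSums(Z) + lambdaB)         # 1 / (N_{.j} + lambda_B)
  zbar <- colSums(dA * Z) / sum(dA)          # weighted column means of Z
  dB * (t(Z) %*% (dA * sweep(Z, 2, zbar)))   # entries M^{(2)}_{js}
}
norm1 <- function(M) max(colSums(abs(M)))    # max_s sum_j |M_{js}|
\end{verbatim}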
\subsection{Model for $Z_{ij}$}\label{sec:modelz}
We will state conditions on $Z_{ij}$ under which
both $\Vert M^{(1)}\Vert_1$ and $\Vert M^{(2)}\Vert_1$
are bounded
below $1$ with probability tending to one, as the problem size grows.
We need the following exponential inequalities.
\begin{lemma}\label{lem:hoeff}
If $X\sim\mathrm{Bin}(n,p)$, then for any $t\ge0$,
\begin{align*}
\Pr( X\ge np+t ) &\le \exp( -2t^2/n ),\quad\text{and}\\
\Pr( X\le np-t ) &\le \exp( -2t^2/n )
\end{align*}
\end{lemma}
\begin{proof}
This follows from Hoeffding's theorem.
\end{proof}
\begin{lemma}\label{lem:binounionbound}
Let $X_i\sim\mathrm{Bin}(n,p)$ for $i=1,\dots,m$, not necessarily independent.
Then for any $t\ge0$,
\begin{align*}
\Pr\Bigl( \max_{1\le i\le m} X_{i} \ge np+t \Bigr) &\le m\exp( -2t^2/n ) ,\quad\text{and}\\
\Pr\Bigl( \min_{1\le i\le m} X_{i} \le np-t \Bigr) &\le m\exp( -2t^2/n ).
\end{align*}
\end{lemma}
\begin{proof}
This is from the union bound applied
to Lemma~\ref{lem:hoeff}.
\end{proof}
Here is our sampling model.
We index the size of our problem by $S\to\infty$.
The sample size $N$ will satisfy $\mathbb{E}(N)\ge S$.
The number of rows and columns in the data set are
$$R = S^\rho\quad\text{and}\quad C=S^\kappa$$
respectively, for positive numbers $\rho$ and $\kappa$.
Because our application domain has $N\ll RC$, we
assume that $\rho+\kappa>1$.
We ignore that $R$ and $C$ above are not necessarily integers.
In our model, $Z_{ij}\sim\mathrm{Bern}(p_{ij})$ independently with
\begin{align}\label{eq:defab}
\frac{S}{RC} \le p_{ij} \le \Upsilon\frac{S}{RC}
\quad\text{for}\quad 1\le\Upsilon<\infty.
\end{align}
That is $1\le p_{ij} S^{\rho+\kappa-1}\le\Upsilon$.
Letting $p_{ij}$ depend on $i$ and $j$
allows the probability model to capture
stylistic preferences affecting the missingness
pattern in the ratings data.
\subsection{Bounds for row and column size}
Letting $X \preccurlyeq Y$ mean that $X$ is stochastically smaller than $Y$, we know that
\begin{align*}
\mathrm{Bin}(R, S^{1-\rho-\kappa}) &\preccurlyeq N_{\sumdot j} \preccurlyeq \mathrm{Bin}( R, \Upsilon S^{1-\rho-\kappa}),\quad\text{and}\\
\mathrm{Bin}(C,S^{1-\rho-\kappa}) &\preccurlyeq N_{i\sumdot} \preccurlyeq \mathrm{Bin}( C, \Upsilon S^{1-\rho-\kappa}).
\end{align*}
By Lemma \ref{lem:hoeff}, if $t\ge0$, then
\begin{align*}
\Pr( N_{i\sumdot} \ge S^{1-\rho}(\Upsilon+t))
&\le \Pr\bigl( \mathrm{Bin}(C,\Upsilon S^{1-\rho-\kappa}) \ge S^{1-\rho}(\Upsilon+t)\bigr)\\
&\le \exp(-2(S^{1-\rho}t)^2/C)\\
&= \exp(-2S^{2-\kappa-2\rho}t^2).
\end{align*}
Therefore if $2\rho+\kappa<2$, we find
using Lemma~\ref{lem:binounionbound} that
\begin{align*}
&\Pr\bigl( \max_iN_{i\sumdot} \ge S^{1-\rho}(\Upsilon+\epsilon)\bigr)
\le S^\rho\exp(-2S^{2-\kappa-2\rho}\epsilon^2)\to0
\end{align*}
for any $\epsilon>0$.
Combining this with an analogous lower bound,
\begin{align}\label{eq:boundnid}
\lim_{S\to\infty}\Pr\bigl( (1-\epsilon) S^{1-\rho}\le \min_i N_{i\sumdot} \le \max_i N_{i\sumdot} \le (\Upsilon+\epsilon) S^{1-\rho}\bigr)=1.
\end{align}
Likewise, if $\rho+2\kappa<2$, then for any $\epsilon>0$,
\begin{align}\label{eq:boundndj}
\lim_{S\to\infty}\Pr\bigl( (1-\epsilon)S^{1-\kappa}\le \min_j N_{\sumdot j} \le \max_j N_{\sumdot j} \le (\Upsilon+\epsilon) S^{1-\kappa}\bigr)=1.
\end{align}
\subsection{Interval arithmetic}
We will replace $N_{i\sumdot}$ and other quantities
by intervals that contain them with probability tending to one, and then use interval arithmetic to streamline
some of the steps in our proofs.
For instance,
$$N_{i\sumdot}\in [(1-\epsilon)S^{1-\rho},(\Upsilon+\epsilon)S^{1-\rho}]
= [1-\epsilon,\Upsilon+\epsilon]\times S^{1-\rho}
= [1-\epsilon,\Upsilon+\epsilon]\times \frac{S}{R}$$
holds simultaneously for all $1\le i\le R$ with probability
tending to one as $S\to\infty$.
In interval arithmetic,
$$[A,B]+[a,b]=[a+A,b+B]\quad\text{and}\quad [A,B]-[a,b]=[A-b,B-a].$$
If $0<a\le b<\infty$ and $0<A\le B<\infty$, then
$$[A,B]\times[a,b] = [Aa,Bb]\quad\text{and}\quad [A,B]/[a,b] = [A/b,B/a].$$
Similarly, if $a<0<b$ and $X\in[a,b]$, then
$|X|\in[0,\max(|a|,|b|)]$.
Our arithmetic operations on intervals yield
new intervals guaranteed to contain the results
obtained using any members of the original intervals.
We do not necessarily use the smallest such interval.
\subsection{Co-observation}
Recall that the co-observation matrices are $Z^\mathsf{T} Z\in\{0,1\}^{C\times C}$
and $ZZ^\mathsf{T}\in\{0,1\}^{R\times R}$.
If $s\ne j$, then
$$
\mathrm{Bin}\Bigl( R,\frac{S^2}{R^2C^2}\Bigr)
\preccurlyeq (Z^\tran Z)_{sj}\preccurlyeq
\mathrm{Bin}\Bigl( R,\frac{\Upsilon^2S^2}{R^2C^2}\Bigr).
$$
That is
$\mathrm{Bin}(S^\rho, S^{2-2\rho-2\kappa})
\preccurlyeq
(Z^\tran Z)_{sj}
\preccurlyeq
\mathrm{Bin}(S^\rho, \Upsilon^2S^{2-2\rho-2\kappa}).
$
For $t\ge0$,
\begin{align*}
\Pr\Bigl( \max_s\max_{j\ne s}(Z^\tran Z)_{sj}\ge (\Upsilon^2+t)S^{2-\rho-2\kappa}\Bigr)
&\le \frac{C^2}2\exp( -(tS^{2-\rho-2\kappa})^2/R)\\
&= \frac{C^2}2\exp( -t^2 S^{4-3\rho-4\kappa}).
\end{align*}
If $3\rho+4\kappa<4$
then
\begin{align*}
&\Pr\Bigl( \max_s\max_{j\ne s} \,(Z^\tran Z)_{sj} \ge (\Upsilon^2+\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0,
\quad\text{and}\\
&\Pr\Bigl( \min_s\min_{j\ne s} \,(Z^\tran Z)_{sj} \le (1-\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0,
\end{align*}
for any $\epsilon>0$.
\subsection{Asymptotic bounds for $\Vert M\Vert_1$}
Here we prove upper bounds for $\Vert M^{(k)}\Vert_1$ for $k=1,2$
of equations~\eqref{eq:monejs} and~\eqref{eq:mtwojs}, respectively.
The bounds depend on $\Upsilon$ and there are values of $\Upsilon>1$
for which these norms are bounded strictly below one,
with probability tending to one.
\begin{theorem}\label{thm:m1norm1}
Let $Z_{ij}$ follow the model from Section~\ref{sec:modelz}
with $\rho,\kappa\in(0,1)$, that satisfy $\rho+\kappa>1$,
$2\rho+\kappa<2$ and $3\rho+4\kappa<4$.
Then for any $\epsilon>0$,
\begin{align}\label{eq:claim1}
&
\Pr\bigl( \Vert M^{(1)} \Vert_1\le
\Upsilon^2-\Upsilon^{-2}+\epsilon
\bigr)\to1
,\quad\text{and}\\
&\Pr\bigl( \Vert M^{(2)}\Vert_1\le
\Upsilon^2-\Upsilon^{-2}+\epsilon\bigr)\to1 \label{eq:claim2}
\end{align}
as $S\to\infty$.
\end{theorem}
\begin{figure}[t!]
\centering
\includegraphics[width=.8\hsize]{figdomain2}
\caption{
\label{fig:domainofinterest}
The large shaded triangle is the domain of interest $\mathcal{D}$ for
Theorem~\ref{thm:m1norm1}.
The smaller shaded triangle shows a region where the analogous update
to $\boldsymbol{a}$ would have acceptable norm. The points marked are the ones we look at numerically,
including $(0.88,0.57)$ which corresponds to the Stitch Fix data in
Section~\ref{sec:stitch}.
}
\end{figure}
\begin{proof}
Without loss of generality we assume that $\epsilon<1$.
We begin with~\eqref{eq:claim2}.
Let $M=M^{(2)}$.
When $j\ne s$,
\begin{align*}
M_{js}&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
(Z_{rs} -\bar Z_{\text{\tiny$\bullet$} s}),\quad\text{for}\\
\bar Z_{\text{\tiny$\bullet$} s}&=
\sum_i
\frac{Z_{is}}{N_{i\sumdot}+\lambda_A}
\Bigm/
{\sum_{i}\frac{1}{N_{i\sumdot}+\lambda_{A}}}.
\end{align*}
Although $|Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s}|\le1$, replacing
$Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s}$ by one does not prove to be
sharp enough for our purposes.
Every $N_{r\sumdot}+\lambda_A\in S^{1-\rho} [1-\epsilon, \Upsilon+\epsilon]$
with probability tending to one and so
\begin{align*}
\frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
&\in
\frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{[1-\epsilon,\Upsilon+\epsilon]S^{1-\rho}}\\
&\subseteq [1-\epsilon,\Upsilon+\epsilon]^{-1}\bar Z_{\text{\tiny$\bullet$} s} S^{\rho-1}.
\end{align*}
Similarly
\begin{align*}
\bar Z_{\text{\tiny$\bullet$} s} &\in
\frac{\sum_iZ_{is}[1-\epsilon,\Upsilon+\epsilon]^{-1}}
{R[1-\epsilon,\Upsilon+\epsilon]^{-1}}
\subseteq\frac{N_{\sumdot s}}{R}[1-\epsilon,\Upsilon+\epsilon][1-\epsilon,\Upsilon+\epsilon]^{-1}\\
&\subseteq S^{1-\rho-\kappa}
[1-\epsilon,\Upsilon+\epsilon]^2[1-\epsilon,\Upsilon+\epsilon]^{-1}
\end{align*}
and so
\begin{align}\label{eq:zrsbarpart}
\frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
\in S^{-\kappa}
\frac{[1-\epsilon,\Upsilon+\epsilon]^2}{[1-\epsilon,\Upsilon+\epsilon]^2}
\subseteq \frac1C
\Bigl[
\Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2
, \Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2
\Bigr].
\end{align}
Next using bounds on the co-observation counts,
\begin{align}\label{eq:zrspart}
\frac1{N_{\sumdot j}+\lambda_B}\sum_r\frac{Z_{rj}Z_{rs}}{N_{r\sumdot}+\lambda_A}
\in \frac{S^{\rho+\kappa-2}(Z^\tran Z)_{sj}}{[1-\epsilon,\Upsilon+\epsilon]^2}
\subseteq
\frac1C
\frac{[1-\epsilon,\Upsilon^2+\epsilon]}{[1-\epsilon,\Upsilon+\epsilon]^2}.
\end{align}
Combining~\eqref{eq:zrsbarpart} and~\eqref{eq:zrspart},
\begin{align*}
M_{js} \in &
\frac1C
\Bigl[
\frac{1-\epsilon}{(\Upsilon+\epsilon)^2}-
\Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2
,
\frac{\Upsilon^2+\epsilon}{(1-\epsilon)^2}
-\Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2
\Bigr].
\end{align*}
For any $\epsilon'>0$ we can choose $\epsilon$ small enough that
$$M_{js} \in C^{-1}[\Upsilon^{-2}-\Upsilon^2-\epsilon',
\Upsilon^2-\Upsilon^{-2}+{\epsilon'}]
$$
and then $|M_{js}|\le (\Upsilon^2-\Upsilon^{-2}+\epsilon')/C$.
Next, arguments like the preceding
give $|M_{jj}|\le (1-\epsilon')^{-2}(\Upsilon+\epsilon')S^{\rho-1}\to0$.
Then with probability tending to one,
$$
\sum_j|M_{js}|
\le\Upsilon^2-\Upsilon^{-2}
+2\epsilon'.
$$
This bound holds for all $s\in\{1,2,\dots,C\}$, establishing~\eqref{eq:claim2}.
The proof of~\eqref{eq:claim1} is similar.
The quantity $\bar Z_{\text{\tiny$\bullet$} s}$
is replaced by $(1/R)\sum_iZ_{is}/(N_{i\sumdot}+\lambda_A)$.
\end{proof}
It is interesting to find the largest $\Upsilon$ with
$\Upsilon^2-\Upsilon^{-2}\le1$.
It is
$((1+5^{1/2})/2)^{1/2}\doteq 1.27$.
\section{Convergence and computation}\label{sec:empiricalnorms}
In this section we make some computations on synthetic data
following the probability model from Section~\ref{sec:normconvergence}.
First we study the norms of our update matrix $M^{(2)}$
which affects the number of iterations to convergence.
In addition to $\Vert\cdot\Vert_1$ covered in Theorem~\ref{thm:m1norm1}
we also consider $\Vert\cdot\Vert_2$, $\Vert\cdot\Vert_\infty$ and $\lambda_{\max}(\cdot)$.
Then we compare the cost to compute $\hat\beta_\mathrm{GLS}$ by
our backfitting method with that of lmer \citep{lme4}.
The problem size is indexed by $S$.
Indices $i$ go from $1$ to $R=\lceil S^\rho\rceil$
and indices $j$ go from $1$ to $C=\lceil S^\kappa\rceil$.
Reasonable parameter values have $\rho,\kappa\in(0,1)$
with $\rho+\kappa>1$.
Theorem~\ref{thm:m1norm1} applies when
$2\rho+\kappa<2$ and $3\rho+4\kappa<4$.
Figure~\ref{fig:domainofinterest} depicts this
triangular domain of interest $\mathcal{D}$.
There is another triangle $\mathcal{D}'$ where a corresponding update
for $\boldsymbol{a}$ would satisfy the conditions of Theorem~\ref{thm:m1norm1}.
Then $\mathcal{D}\cup\mathcal{D}'$ is a non-convex polygon of five sides.
Figure~\ref{fig:domainofinterest}
also shows $\mathcal{D}'\setminus\mathcal{D}$ as a second triangular region.
For points $(\rho,\kappa)$ near the line $\rho+\kappa=1$, the matrix $Z$
will be mostly ones unless $S$ is very large. For points $(\rho,\kappa)$
near the upper corner $(1,1)$, the matrix $Z$ will be extremely sparse
with each $N_{i\sumdot}$ and $N_{\sumdot j}$ having nearly a
Poisson distribution with mean between $1$ and $\Upsilon$.
The fraction of potential values that have been observed
is $O(S^{1-\rho-\kappa})$.
Given $p_{ij}$, we generate our observation matrix via $Z_{ij} \stackrel{\mathrm{ind}}{\sim}\mathrm{Bern}(p_{ij})$.
These probabilities are first generated via
${p_{ij}}= U_{ij}S^{1-\rho-\kappa}$ where
$U_{ij}\stackrel{\mathrm{iid}}{\sim}\mathbb{U}[1,\Upsilon]$ and $\Upsilon$ is
the largest value for which $\Upsilon^2-\Upsilon^{-2}\le1$.
For small $S$ and $\rho+\kappa$ near $1$ we can get
some values ${p_{ij}>1}$ and in that case we take ${p_{ij}=1}$.
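A minimal R sketch of this generation step (function name ours) is:
\begin{verbatim}
simulate_Z <- function(S, rho, kappa) {
  Upsilon <- sqrt((1 + sqrt(5)) / 2)  # largest Upsilon with Upsilon^2 - Upsilon^-2 <= 1
  R <- ceiling(S^rho); C <- ceiling(S^kappa)
  p <- pmin(matrix(runif(R * C, 1, Upsilon) * S^(1 - rho - kappa), R, C), 1)
  matrix(rbinom(R * C, 1, p), R, C)
}
\end{verbatim}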
The following $(\rho,\kappa)$ combinations are of interest.
First, $(4/5,2/5)$ is the closest vertex of the domain of interest to the point $(1,1)$.
Second, $(2/5,4/5)$ is outside the domain of interest for the $\boldsymbol{b}$
but within the domain for the analogous $\boldsymbol{a}$ update.
Third, among points with $\rho=\kappa$, the value $(4/7,4/7)$
is the farthest one from the origin that is in the domain of interest.
We also look at some points on the $45$ degree line that are outside
the domain of interest because the sufficient conditions in
Theorem~\ref{thm:m1norm1}
might not be necessary.
In our matrix norm computations we took $\lambda_A=\lambda_B=0$.
This completely removes shrinkage and will make it harder for the algorithm to converge
than would be the case for the positive $\lambda_A$ and $\lambda_B$ that hold
in real data. The values of $\lambda_A$ and $\lambda_B$
appear in expressions $N_{i\sumdot}+\lambda_A$ and $N_{\sumdot j}+\lambda_B$ where their
contribution is asymptotically negligible, so conservatively setting them to zero
will nonetheless be realistic for large data sets.
\begin{figure}
\centering
\includegraphics[width=.8\hsize]{norm_n_log_xy_with_lines_revised}
\caption{\label{fig:1normvsn}
Norm
$\Vert M^{(2)}\Vert_1$ of centered update matrix
versus problem size $S$ for different $(\rho, \kappa)$.
}
\end{figure}
\noindent
We sample from the model multiple times at various values of $S$
and plot $\Vert M^{(2)}\Vert_1$ versus $S$ on a logarithmic scale.
Figure~\ref{fig:1normvsn} shows the results.
We observe that $\Vert M^{(2)}\Vert_1$ is below $1$ and decreasing
with $S$ for all the examples $(\rho,\kappa)\in\mathcal{D}$.
This holds also for $(\rho,\kappa)=(0.60,0.60)\not\in\mathcal{D}$.
We chose that point because it is on the convex hull of $\mathcal{D}\cup\mathcal{D}'$.
The point $(\rho,\kappa)=(0.40,0.80)\not\in\mathcal{D}$.
Figure~\ref{fig:1normvsn} shows large values of $\Vert M^{(2)}\Vert_1$ for this
case. Those values increase with $S$, but remain below $1$ in the range considered.
This is a case where the update from $\boldsymbol{a}$ to $\boldsymbol{a}$ would have norm well below $1$
and decreasing with $S$, so backfitting would converge.
We do not know whether $\Vert M^{(2)}\Vert_1>1$ will occur for larger $S$.
The point $(\rho,\kappa)=(0.70,0.70)$ is not in the domain $\mathcal{D}$
covered by Theorem~\ref{thm:m1norm1}
and we see that $\Vert M^{(2)}\Vert_1>1$ and generally increases with $S$
as shown in Figure~\ref{fig:7070norms}.
This does not mean that backfitting must fail to converge.
Here we find that $\Vert M^{(2)}\Vert_2<1$ and generally decreases as $S$
increases. This is a strong indication that
the number of backfitting iterations required
will not grow with $S$ for this $(\rho,\kappa)$ combination.
We cannot tell whether $\Vert M^{(2)}\Vert_2$ will decrease to zero
but that is what appears to happen.
We consistently find in our computations
that $\lambda_{\max}(M^{(2)})\le \Vert M^{(2)}\Vert_2\le\Vert M^{(2)}\Vert_1$.
The first of these inequalities must necessarily hold.
For a symmetric matrix $M$ we know that $\lambda_{\max}(M)=\Vert M\Vert_2$
which is then necessarily no larger than $\Vert M\Vert_1$.
Our update matrices are nearly symmetric but not perfectly so.
We believe that explains why their $L_2$ norms are
close to their spectral radius and also smaller than their $L_1$ norms.
While the $L_2$ norms are empirically more favorable than the $L_1$
norms, they are not amenable to our theoretical treatment.
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.4]{norm_vs_S_with_lines_70_L1_written_norm_logxy}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.4]{norm_vs_S_with_lines_70_L2_written_norm_logxy_main_correct}
\end{subfigure}
\caption{\label{fig:7070norms}
The left panel shows $\Vert M^{(2)}\Vert_1$ versus $S$.
The right panel shows $\Vert M^{(2)}\Vert_2$ versus $S$
with a logarithmic vertical scale.
Both have $(\rho,\kappa)=(0.7,0.7)$.
}
\end{figure}
We believe that backfitting will have a spectral radius well below $1$
for more cases than we can as yet prove.
In addition to the previous figures showing matrix norms
as $S$ increases for certain special values of $(\rho,\kappa)$ we
have computed contour maps of those norms over
$(\rho,\kappa)\in[0,1]$ for $S=10{,}000$.
See Figure~\ref{fig:contours}.
To compare the computation times for algorithms we
generated $Z_{ij}$ as above and also took
$x_{ij}\stackrel{\mathrm{iid}}{\sim}\mathcal{N}(0,I_7)$ plus an intercept, making $p=8$
fixed effect parameters.
Although backfitting can run with $\lambda_A=\lambda_B=0$,
lmer cannot do so for numerical reasons. So we took $\sigma^2_A=\sigma^2_B=1$
and $\sigma^2_E=1$ corresponding to $\lambda_A=\lambda_B=1$.
The cost per iteration does not depend on $Y_{ij}$ and hence not
on $\beta$ either. We used $\beta=0$.
Figure~\ref{fig:comptimes} shows computation times
for one single iteration when $(\rho,\kappa)=(0.52,0.52)$ and when $(\rho,\kappa)=(0.70,0.70)$.
The time to do one iteration in lmer grows roughly like $N^{3/2}$
in the first case. For the second case, it appears to grow at
the even faster rate of $N^{2.1}$.
Solving a system of $S^\kappa\times S^\kappa$ equations would cost
$S^{3\kappa} = S^{2.1} = O(N^{2.1})$, which explains the observed rate.
This analysis would predict $O(N^{1.56})$ for $\rho=\kappa=0.52$
but that is only minimally different from $O(N^{3/2})$.
These experiments were carried out in R on a computer
with the macOS operating system, 16 GB of memory and an Intel i7 processor. Each backfitting iteration entails solving \eqref{eq:backfit} along with the fixed effects.
The cost per iteration for backfitting follows closely to the $O(N)$
rate predicted by the theory.
OLS only takes one iteration and it is also of
$O(N)$ cost. In both of these cases $\Vert M^{(2)}\Vert_2$ is bounded away
from one so the number of backfitting iterations does not grow with $S$.
For $\rho=\kappa=0.52$,
backfitting took $4$ iterations to converge for the smaller values of $S$
and $3$ iterations for the larger ones.
For $\rho=\kappa=0.70$,
backfitting took $6$ iterations for smaller $S$ and $4$ or $5$ iterations
for larger $S$.
In each case our convergence criterion was a relative
change of $10^{-8}$
as described in Section~\ref{sec:wholeshebang}.
Further backfitting to compute BLUPs $\hat\boldsymbol{a}$ and $\hat\boldsymbol{b}$
given $\hat\beta_{\mathrm{GLS}}$
took at most $5$ iterations for $\rho=\kappa=0.52$
and at most $10$ iterations for $\rho=\kappa=0.7$.
In the second example, lme4 did not reach convergence in
our time window so we ran it for just $4$ iterations to measure its cost per iteration.
\begin{figure}[!t]
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.28]{one_norm_reshaped.png}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.28]{infinity_norm_reshaped.png}
\end{subfigure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[height = 5.2cm, width = 5.5cm]{two_norm_reshaped.png}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[height = 5.2cm, width = 5.44cm]{spectral_radius_reshaped.png}
\end{subfigure}
\caption{\label{fig:contours}
Numerically computed matrix norms
for $M^{(2)}$ using $S=10{,}000$.
The color code varies with the subfigures.
}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{time_per_iter_vs_n_last_point_1_point_2716_reference_slope_at_end_52_52_review.pdf}
\caption{$(\rho, \kappa) = (0.52,0.52)$}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{backfitting_lmer_time_total}
\caption{$(\rho, \kappa) = (0.70,0.70)$}
\end{subfigure}
\caption{\label{fig:comptimes}
Time for one iteration versus the number of observations, $N$ at two points $(\rho,\kappa)$.
The cost for lmer is roughly $O(N^{3/2})$ in the top panel
and $O(N^{2.1})$ in the bottom panel. The costs for OLS and backfitting
are $O(N)$.
}
\end{figure}
\section{Example: ratings from Stitch Fix}\label{sec:stitch}
We illustrate backfitting for GLS on some data from Stitch Fix.
Stitch Fix sells clothing. They mail their customers a sample of items.
The customers may keep and purchase any of those items that they
want, while returning the others. It is valuable to predict
the extent to which a customer will like an item, not just whether they will purchase it.
Stitch Fix has provided us with some of their client ratings
data. It was anonymized, void of personally identifying
information, and as a sample it does not reflect their
total numbers of clients or items at the time they
provided it. It is also from 2015. While
it does not describe their current business, it is a valuable
data set for illustrative purposes.
The sample sizes for this data are as follows.
We received $N=5{,}000{,}000$ ratings
by $R=762{,}752$ customers on $C=6{,}318$ items.
These values of $R$ and $C$ correspond to the point $(0.88,0.57)$ in Figure~\ref{fig:domainofinterest}.
Thus $C/N\doteq 0.00126$ and $R/N\doteq 0.153$.
The data are not dominated by a single row or column because
$\max_iN_{i\sumdot}/N\doteq 9\times 10^{-6}$ and $\max_jN_{\sumdot j}/N\doteq 0.0143$.
The data are sparse because $N/(RC)\doteq 0.001$.
\subsection{An illustrative linear model}
The response $Y_{ij}$ is a rating on a ten point scale of
the satisfaction of customer $i$ with item $j$.
The data come with features about the clients and
items. In a business setting one would fit and compare
possibly dozens of different regression models to understand the data.
Our purpose here is to study large scale GLS and compare
it to ordinary least squares (OLS) and so we use just one model, not necessarily
one that we would have settled on.
For that purpose we use the same model that was
used in \cite{crelin}. It is not chosen to make OLS look as bad as
possible. Instead it is potentially the first model one might look at in
a data analysis.
For client $i$ and item $j$,
\begin{align}
Y_{ij}& = \beta_0+\beta_1\mathrm{match}_{ij}+\beta_2\mathbb{I}\{\mathrm{client\ edgy}\}_i+\beta_3\mathbb{I}\{\mathrm{item\ edgy}\}_j \notag \\
&\phe + \beta_4\mathbb{I}\{\mathrm{client\ edgy}\}_i*\mathbb{I}\{\mathrm{item\ edgy}\}_j+\beta_5\mathbb{I}\{\mathrm{client\ boho}\}_i \notag \\
&\phe + \beta_6\mathbb{I}\{\mathrm{item\ boho}\}_j+\beta_7\mathbb{I}\{\mathrm{client\ boho}\}_i*\mathbb{I}\{\mathrm{item\ boho}\}_j \notag \\
&\phe + \beta_8\mathrm{material}_{ij}+a_i+b_j+e_{ij}. \notag
\end{align}
Here $\mathrm{material}_{ij}$ is a categorical variable that is implemented via indicator variables for each type of material other than the baseline. Following \cite{crelin}, we chose `Polyester', the most common material, as the baseline.
Some customers and some items were given the adjective `edgy' in the data set. Another adjective was `boho', short for `Bohemian'.
The variable match$_{ij}\in[0,1]$ is an estimate of the probability that the customer keeps the item, made before the item was sent.
The match score is a prediction from a baseline model and is not representative of all algorithms used at Stitch Fix.
All told, the model has $p=30$ parameters.
\subsection{Estimating the variance parameters}\label{sec:estim-vari-param}
We use the method of moments approach from \cite{crelin}
to estimate $\theta^\mathsf{T}=(\sigma^2_A, \sigma^2_B, \sigma^2_E)$ in $O(N)$ computation.
That is in turn based on the method that
\cite{GO17} use in the intercept only model where
$Y_{ij} = \mu+a_i+b_{j}+e_{ij}$.
For that model they set
\begin{align*}
U_{A} &= \sum_{i} \sum_{j} Z_{ij}
\Bigl( Y_{ij}-\frac{1}{N_{i\sumdot}}\sum_{j^{\prime}}Z_{ij'}
Y_{ij^{\prime}}\Bigr)^{2}, \\
U_{B} &= \sum_{j}\sum_{i} Z_{ij}
\Bigl(Y_{ij}-\frac{1}{N_{\sumdot j}}\sum_{i^{\prime}}Z_{i'j}
Y_{i^{\prime}j}\Bigr)^{2}, \quad\text{and}\\
U_{E} &= N\sum_{i j} Z_{i j} \Bigl(Y_{i j}-\frac{1}{N}\sum_{i^{\prime} j^{\prime}}Z_{i'j'} Y_{i^{\prime} j^{\prime}}\Bigr)^{2}.
\end{align*}
These are, respectively, sums of within row sums of squares,
sums of within column sums of squares
and a scaled overall sum of squares.
Straightforward calculations
show that
\begin{align*}
\mathbb{E}(U_{A})&=\bigl(\sigma^2_B+\sigma^2_E\bigr)(N-R), \\
\mathbb{E}(U_{B})&=\bigl(\sigma^2_A+\sigma^2_E \bigr)(N-C), \quad\text{and}\\
\mathbb{E}(U_{E})&=\sigma^2_A\Bigl(N^{2}-\sum_{i} N_{i\sumdot}^{2}\Bigr)+\sigma^2_B\Bigl(N^{2}-\sum_{j} N_{\sumdot j}^{2}\Bigr)+\sigma^2_E(N^{2}-N).
\end{align*}
By matching moments, we can estimate $\theta$ by solving the $3 \times 3$ linear system
$$\begin{pmatrix}
0& N-R & N-R \\[.25ex]
N-C & 0 & N-C \\[.25ex]
N^{2}-\sum_i N_{i\sumdot}^{2} & N^{2}-\sum_j N_{\sumdot j}^{2} & N^{2}-N
\end{pmatrix}
\begin{pmatrix}
\sigma^2_A \\[.25ex] \sigma^2_B \\[.25ex] \sigma^2_E\end{pmatrix}
=\begin{pmatrix}
U_{A}\\[.25ex] U_{B} \\[.25ex] U_{E}\end{pmatrix}
$$
for $\theta$.
Following \cite{GO17} we note that
$\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\beta = a_i+b_{j}+e_{ij}$
has the same variance parameter $\theta$ as $Y_{ij}$ does.
We then take $\hat\beta_{\mathrm{OLS}}$,
which \cite{GO17} show is consistent for $\beta$,
and define $\hat\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\hat\beta_\mathrm{OLS}$.
We then estimate $\theta$ by the above method
after replacing $Y_{ij}$ by $\hat\eta_{ij}$.
For the Stitch Fix data we obtained
$\hat{\sigma}_{A}^{2} = 1.14$ (customers),
$\hat{\sigma}^{2}_{B} = 0.11$ (items)
and $\hat{\sigma}^{2}_{E} = 4.47$.
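The moment estimates above can be computed in a single $O(N)$ pass over the data. The following Python sketch is our own illustration, not the authors' code; it assumes the row and column labels have been coded as $0$-based integers and that the OLS residuals are available as an array. It forms $U_A$, $U_B$, $U_E$ and solves the $3\times 3$ system.
\begin{verbatim}
import numpy as np

def moment_variance_estimates(rows, cols, resid):
    # rows, cols: 0-based integer codes; resid: OLS residuals eta_hat.
    rows = np.asarray(rows)
    cols = np.asarray(cols)
    resid = np.asarray(resid, dtype=float)
    N = resid.size
    Ni = np.bincount(rows)            # N_{i.}: ratings per customer
    Nj = np.bincount(cols)            # N_{.j}: ratings per item
    R = int((Ni > 0).sum())
    C = int((Nj > 0).sum())

    row_mean = np.bincount(rows, weights=resid) / np.maximum(Ni, 1)
    col_mean = np.bincount(cols, weights=resid) / np.maximum(Nj, 1)
    grand_mean = resid.mean()

    U_A = np.sum((resid - row_mean[rows]) ** 2)   # within-row sum of squares
    U_B = np.sum((resid - col_mean[cols]) ** 2)   # within-column sum of squares
    U_E = N * np.sum((resid - grand_mean) ** 2)   # scaled overall sum of squares

    M = np.array([
        [0.0,                   N - R,                 N - R],
        [N - C,                 0.0,                   N - C],
        [N**2 - np.sum(Ni**2),  N**2 - np.sum(Nj**2),  N**2 - N],
    ])
    sigma2_A, sigma2_B, sigma2_E = np.linalg.solve(M, [U_A, U_B, U_E])
    return sigma2_A, sigma2_B, sigma2_E
\end{verbatim}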
\subsection{Computing $\hat\beta_\mathrm{GLS}$}\label{sec:wholeshebang}
The estimated coefficients $\hat\beta_\mathrm{GLS}$ and their standard errors are presented in a table in the appendix.
Open-source R code at
\url{https://github.com/G28Sw/backfit_code}
does these computations.
Here is a concise description of the algorithm we used:
\begin{compactenum}[\quad 1)]
\item Compute $\hat\beta_\mathrm{OLS}$ via \eqref{eq:bhatols}.
\item Get residuals $\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{OLS}}$.
\item Compute $\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ by the method of moments on $\hat\eta_{ij}$.
\item Compute $\widetilde{\mathcal{X}}=(I_N-\widetilde{\mathcal{S}}_G)\mathcal{X}$ using doubly centered backfitting $M^{(3)}$.
\item Compute $\hat\beta_{\mathrm{GLS}}$ by~\eqref{eq:covbhatgls}.
\item If we want the BLUPs $\hat{\boldsymbol{a}}$ and $\hat{\boldsymbol{b}}$, backfit
$\mathcal{Y} -\mathcal{X}\hat\beta_{\mathrm{GLS}}$ to get them.
\item Compute $\widehat\cov(\hat\beta_{\mathrm{GLS}})$ by plugging
$\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ into $\mathcal{V}$ at~\eqref{eq:covbhatgls}.
\end{compactenum}
\smallskip
Stage $k$ of backfitting provides $(\tilde\mathcal{S}_G\mathcal{X})^{(k)}$.
We iterate until
$$
\frac{\Vert (\tilde\mathcal{S}_G\mathcal{X})^{(k+1)}-(\tilde\mathcal{S}_G\mathcal{X})^{(k)}\Vert^2_F}{\Vert (\tilde\mathcal{S}_G\mathcal{X})^{(k)}\Vert^2_F}
< \epsilon
$$
where $\Vert \cdot \Vert_F$ is the Frobenius norm
(root mean square of all elements).
Our numerical results use $\epsilon =10^{-8}$.
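In code, this stopping rule is a one-line check. The sketch below is ours; it treats the current and previous iterates as dense arrays.
\begin{verbatim}
import numpy as np

def has_converged(SX_new, SX_old, eps=1e-8):
    # Relative squared Frobenius change between successive iterates.
    num = np.linalg.norm(SX_new - SX_old, "fro") ** 2
    den = np.linalg.norm(SX_old, "fro") ** 2
    return num < eps * den
\end{verbatim}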
When we want $\widehat\cov(\hat\beta_{\mathrm{GLS}})$, we need
to use a backfitting strategy with a symmetric smoother
$\tilde\mathcal{S}_G$. This holds for $M^{(0)}$, $M^{(2)}$ and $M^{(3)}$
but not $M^{(1)}$.
After computing $\hat\beta_{\mathrm{GLS}}$ one can return to step 2,
form new residuals
$\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{GLS}}$
and continue through steps 3--7.
In our experience, iterating in this way produces only small differences.
\subsection{Quantifying inefficiency and naivete of OLS}
In the introduction we mentioned two serious problems with the use of OLS on crossed
random effects data. The first is that OLS is naive about correlations in the
data and this can lead it to severely underestimate the variance of $\hat\beta$.
The second is that OLS is inefficient compared to GLS by the Gauss-Markov theorem.
Let $\hat\beta_\mathrm{OLS}$ and $\hat\beta_\mathrm{GLS}$ be the OLS and GLS
estimates of $\beta$, respectively. We can compute their
corresponding variance estimates
$\widehat\cov_\mathrm{OLS}(\hat\beta_\mathrm{OLS})$ and $\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{GLS})$.
We can also find
$\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{OLS})$, the variance under our GLS model of the
linear combination of $Y_{ij}$ values that OLS uses.
This section explores them graphically.
We can quantify the naivete of OLS
via the ratios
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$
for $j=1,\dots,p$.
Figure~\ref{fig:OLSisnaive} plots these values. They range from $1.75$
to $345.28$ and can be interpreted as factors by which OLS naively overestimates
its effective sample size.
The largest and second largest ratios are for material indicators
corresponding to `Modal' and `Tencel', respectively. These appear
to be two names for the same product with Tencel being a trademarked name
for Modal fibers (made from wood).
We can also identify the linear combination of $\hat\beta_\mathrm{OLS}$
for which $\mathrm{OLS}$ is most naive. We maximize
the ratio
$x^\mathsf{T}\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})x/x^\mathsf{T}\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}})x$
over $x\ne0$.
The resulting maximal ratio is the largest eigenvalue of
$$\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}}) ^{-1}
\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})$$
and it is about $361$ for the Stitch Fix data.
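This quantity can be computed as a generalized eigenvalue problem without explicitly inverting $\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}})$. A sketch (ours), using SciPy's symmetric generalized eigensolver:
\begin{verbatim}
from scipy.linalg import eigh

def max_naivete_ratio(cov_gls_of_ols, cov_ols_of_ols):
    # Largest lambda with C_GLS v = lambda * C_OLS v, which equals the
    # maximum of x' C_GLS x / x' C_OLS x over nonzero x.
    eigenvalues = eigh(cov_gls_of_ols, cov_ols_of_ols, eigvals_only=True)
    return float(eigenvalues.max())
\end{verbatim}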
\begin{figure}
\centering
\includegraphics[width=.9\hsize]{figOLSisnaive_katelyn_interaction_polyester_reference}
\caption{\label{fig:OLSisnaive}
OLS naivete
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$
for coefficients $\beta_j$ in the Stitch Fix data.
}
\end{figure}
We can quantify the inefficiency of OLS
via the ratios
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$
for $j=1,\dots,p$.
Figure~\ref{fig:OLSisinefficient} plots these values. They range from just over $1$
to $50.6$ and can be interpreted as factors by which using
OLS reduces the effective sample size. There is a clear outlier: the coefficient of the match
variable is very inefficiently estimated by OLS. The second largest inefficiency
factor is for the intercept term.
The most inefficient linear combination of $\hat\beta$ reaches a
variance ratio of $52.6$, only slightly more inefficient than the match coefficient alone.
\begin{figure}
\centering
\includegraphics[width=.9\hsize]{figOLSisinefficient_katelyn_interaction_polyester_reference}
\caption{\label{fig:OLSisinefficient}
OLS inefficiency
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$
for coefficients $\beta_j$ in the Stitch Fix data.
}
\end{figure}
The variables for which OLS is more naive tend to also be the variables for
which it is most inefficient. Figure~\ref{fig:naivevsinefficient} plots these
quantities against each other for the $30$ coefficients in our model.
\begin{figure}[t]
\centering
\includegraphics[width=.8\hsize]{fignaivevsinefficient_katelyn_interaction_polyester_reference}
\caption{\label{fig:naivevsinefficient}
Inefficiency vs naivete for OLS coefficients in the Stitch Fix data.
}
\end{figure}
\subsection{Convergence speed of backfitting}
The Stitch Fix data have row and column sample sizes
that are much more uneven than our sampling model for $Z$ allows.
Accordingly we cannot rely on Theorem~\ref{thm:m1norm1} to show that
backfitting must converge rapidly for it.
The sufficient conditions in that theorem may not be necessary, however,
and we can compute
our norms and the spectral radius of
the update matrices for the Stitch Fix data directly, using sparse matrix computations.
Here $Z\in\{0,1\}^{762{,}752\times 6{,}318}$,
so $M^{(k)}\in\mathbb{R}^{6318\times 6318}$ for $k \in \lbrace0,1,2,3\rbrace$.
The results are
$$
\begin{pmatrix}
\Vert M^{(0)}\Vert_1 \ & \ \Vert M^{(0)}\Vert_2 \ & \ |\lambda_{\max}(M^{(0)})|\\[.25ex]
\Vert M^{(1)}\Vert_1 \ & \ \Vert M^{(1)}\Vert_2 \ & \ |\lambda_{\max}(M^{(1)})|\\[.25ex]
\Vert M^{(2)}\Vert_1 \ & \ \Vert M^{(2)}\Vert_2 \ & \ |\lambda_{\max}(M^{(2)})|\\[.25ex]
\Vert M^{(3)}\Vert_1 \ & \ \Vert M^{(3)}\Vert_2 \ & \ |\lambda_{\max}(M^{(3)})|
\end{pmatrix}
=\begin{pmatrix}
31.9525 \ & \ 1.4051 \ & \ 0.64027 \\[.75ex]
11.2191 \ & \ 0.4512 \ & \ 0.33386\\[.75ex]
\phz8.9178 \ & \ 0.4541 \ & \ 0.33407\\[.75ex]
\phz9.2143\ & \ 0.4546 & \ 0.33377\\
\end{pmatrix}.
$$
All the updates have spectral radius comfortably below one.
The centered updates have $L_2$ norm below one
but the uncentered update does not.
Their $L_2$ norms are somewhat larger than their spectral
radii because those matrices are not quite symmetric.
The two largest eigenvalue moduli for $M^{(0)}$ are $0.6403$ and $0.3337$
and the centered updates have spectral radii close to the second
largest eigenvalue of $M^{(0)}$.
This is consistent with an intuitive explanation that the space spanned
by a column of $N$ ones, which is common to the column spaces
of $\mathcal{Z}_A$ and $\mathcal{Z}_B$, is the biggest impediment to $M^{(0)}$ and that
all three centering strategies essentially remove it.
The best spectral radius is for $M^{(3)}$, which employs two principled
centerings, although in this data set it made little difference.
Our backfitting algorithm took $8$ iterations when applied to $\mathcal{X}$
and $12$ more to compute the BLUPs.
We used a convergence threshold of $10^{-8}.$
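The quantities in the display above can be obtained with standard sparse linear algebra. A sketch (ours), assuming the update matrix has already been assembled as a SciPy sparse matrix:
\begin{verbatim}
import scipy.sparse.linalg as spla

def update_matrix_diagnostics(M):
    # M is a scipy.sparse matrix (6318 x 6318 for the Stitch Fix data).
    l1 = abs(M).sum(axis=0).max()                     # maximum absolute column sum
    l2 = spla.svds(M.astype(float), k=1,
                   return_singular_vectors=False)[0]  # largest singular value
    lam = spla.eigs(M.astype(float), k=1, which="LM",
                    return_eigenvectors=False)[0]     # largest-magnitude eigenvalue
    return float(l1), float(l2), float(abs(lam))
\end{verbatim}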
\section{Discussion}\label{sec:discussion}
We have shown that the cost of our backfitting algorithm
is $O(N)$ under strict conditions that are nonetheless
much more general than having $N_{i\sumdot} = N/R$
for all $i=1,\dots,R$ and $N_{\sumdot j} = N/C$ for all $j=1,\dots,C$
as in \cite{papa:robe:zane:2020}.
As in their setting, the backfitting algorithm scales empirically to
much more general problems than those for which
rapid convergence can be proved.
Our contour map of the spectral radius of the update
matrix $M$ shows that it is well below $1$
over many more $(\rho,\kappa)$ pairs than our
theorem covers. The difficulty in extending our
approach to those settings is that the spectral radius
is a much more complicated function of the observation
matrix $Z$ than the $L_1$ norm is.
Theorem 4 of \cite{papa:robe:zane:2020}
has the rate of convergence for their collapsed Gibbs
sampler for balanced data.
It involves an auxiliary convergence rate $\rho_{\mathrm{aux}}$
defined as follows.
Consider the Gibbs sampler on $(i,j)$ pairs where
given $i$ a random $j$ is chosen with probability $Z_{ij}/N_{i\sumdot}$
and given $j$ a random $i$ is chosen with probability
$Z_{ij}/N_{\sumdot j}$. That Markov chain has invariant distribution $Z_{ij}/N$
on $(i,j)$ pairs and $\rho_{\mathrm{aux}}$ is the rate at which the chain converges.
In our notation
$$
\rho_{\mathrm{PRZ}} = \frac{N\sigma^2_A}{N\sigma^2_A+R\sigma^2_E}\times\frac{N\sigma^2_B}{N\sigma^2_B+C\sigma^2_E}\times\rho_{\mathrm{aux}}.
$$
In sparse data $\rho_{\mathrm{PRZ}}\approx\rho_{\mathrm{aux}}$ and under our asymptotic
setting $|\rho_{\mathrm{aux}}-\rho_{\mathrm{PRZ}}|\to0$.
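As a small arithmetic illustration of ours, the two prefactors in this expression can be evaluated with the Stitch Fix variance estimates reported earlier:
\begin{verbatim}
N, R, C = 5_000_000, 762_752, 6_318
sigma2_A, sigma2_B, sigma2_E = 1.14, 0.11, 4.47

factor_A = N * sigma2_A / (N * sigma2_A + R * sigma2_E)   # about 0.63
factor_B = N * sigma2_B / (N * sigma2_B + C * sigma2_E)   # about 0.95
# rho_PRZ = factor_A * factor_B * rho_aux
\end{verbatim}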
\cite{papa:robe:zane:2020} remark that $\rho_{\mathrm{aux}}$ tends to decrease
as the amount of data increases. When it does, then their algorithm
takes $O(1)$ iterations and costs $O(N)$.
They explain that $\rho_{\mathrm{aux}}$ should decrease as the data set
grows because the auxiliary process then gets greater connectivity.
That connectivity increases for bounded $R$ and $C$ with increasing $N$,
and since their notation allows multiple observations
per $(i,j)$ pair, it seems that they have this sort of infill
asymptote in mind.
For sparse data from electronic commerce we think that
an asymptote like the one we study where $R$, $C$ and $N$
all grow is a better description.
It would be interesting to see how $\rho_{\mathrm{aux}}$ develops under such a model.
In Section 5.3 \cite{papa:robe:zane:2020}
state that the convergence rate of the collapsed Gibbs sampler
is $O(1)$ regardless of the asymptotic regime. That section is about
a more stringent `balanced cells' condition where every $(i,j)$ combination
is observed the same number of times, so it does not describe
the `balanced levels' setting where $N_{i\sumdot}=N/R$ and $N_{\sumdot j}=N/C$.
Indeed they provide a counterexample in which there are two
disjoint communities of users and two disjoint sets of items
and each user in the first community has rated every item
in the first item set (and no others) while each user in the
second community has rated every item in the second item
set (and no others). That configuration leads to an unbounded mixing time
for collapsed Gibbs. It is also one where backfitting takes
an increasing number of iterations as the sample size grows.
There are interesting parallels between methods to sample a high
dimensional Gaussian distribution with covariance matrix $\Sigma$
and iterative solvers for the system $\Sigma \boldsymbol{x} = \boldsymbol{b}$.
See \cite{good:soka:1989} and \cite{RS97}
for more on how the convergence rates
for these two problems coincide.
We found that backfitting with one or both updates centered
worked much better than uncentered backfitting.
\cite{papa:robe:zane:2020} used a collapsed sampler
that analytically integrated out the global mean of their model in each update
of a block of random effects.
Our approach treats $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ as nuisance parameters.
We plug in a consistent method of moments based estimator of them
in order to focus on the backfitting iterations.
In Bayesian computations, maximum a posteriori estimators of
variance components under non-informative priors can be
problematic for hierarchical models \cite{gelm:2006},
and so perhaps maximum likelihood estimation of these
variance components would also have been challenging.
Whether one prefers a GLS estimate or a Bayesian one
depends on context and goals. We believe that there is a strong
computational advantage to GLS for large data sets.
The cost of one backfitting iteration is comparable to the cost to generate
one more sample in the MCMC. We may well find that only a dozen
or so iterations are required for convergence of the GLS. A Bayesian
analysis requires a much larger number of draws from the posterior
distribution than that.
For instance, \cite{gelm:shir:2011} recommend an effective sample size of about $100$
posterior draws, with autocorrelations requiring a larger actual sample size.
\cite{vats:fleg:jone:2019} advocate even greater effective sample sizes.
It is usually reasonable to assume that there is a selection
bias underlying which data points are observed.
Accounting for any such selection bias must necessarily
involve using information or assumptions from outside the data set at
hand. We expect that any approach to take proper account of
informative missingness must also make use of solutions to
GLS perhaps after reweighting the observations.
Before one develops any such methods, it is necessary
to first be able to solve GLS without regard to missingness.
Many of the problems in electronic commerce involve categorical outcomes,
especially binary ones, such as whether an item was purchased or not.
Generalized linear mixed models are then appropriate ways to handle
crossed random effects, and we expect that the progress made here
will be useful for those problems.
\section*{Acknowledgements}
This work was supported by the U.S.\ National Science Foundation under grant IIS-1837931.
We are grateful to Brad Klingenberg and Stitch Fix for sharing some test data with us.
We thank the reviewers for remarks that have helped us improve the paper.
\bibliographystyle{imsart-nameyear}
\section{Related Work}\label{sec:related-work}
\paragraph*{Global Termination}
\emph{Global} termination detection (GTD) is used to determine when
\emph{all} processes have terminated
\cite{matternAlgorithmsDistributedTermination1987,matochaTaxonomyDistributedTermination1998}.
For GTD, it suffices to obtain global message send and receive counts.
Most GTD algorithms also assume a fixed process topology. However, Lai
gives an algorithm in \cite{laiTerminationDetectionDynamically1986}
that supports dynamic topologies such as in the actor model. Lai's
algorithm performs termination detection in ``waves'', disseminating
control messages along a spanning tree (such as an actor supervisor
hierarchy) so as to obtain consistent global message send and receive
counts. Venkatasubramanian et al.~take a similar approach to obtain a
consistent global snapshot of actor states in a distributed
system~\cite{venkatasubramanianScalableDistributedGarbage1992}. However,
such an approach does not scale well because it is not incremental:
garbage cannot be detected until all nodes in the system have
responded. In contrast, DRL does not require a global snapshot, does not
require actors to coordinate their local snapshots, and does not
require waiting for all nodes before detecting local terminated
actors.
\paragraph*{Reference Tracking} We say that an idle actor is \emph{simple garbage} if it has no undelivered messages and no other actor has a reference to it.
Such actors can be detected with distributed reference counting
\cite{watsonEfficientGarbageCollection1987,bevanDistributedGarbageCollection1987,piquerIndirectReferenceCounting1991}
or with reference listing
\cite{DBLP:conf/iwmm/PlainfosseS95,wangDistributedGarbageCollection2006}
techniques. In reference listing algorithms, each actor maintains a
partial list of actors that may have references to it. Whenever $A$ sends $B$ a
reference to $C$, it also sends an $\InfoMsg$ message informing $C$
about $B$'s reference. Once $B$ no longer needs a reference to $C$, it
informs $C$ by sending a $\ReleaseMsg$ message; this message should not be processed by $C$ until all preceding messages from $B$ to $C$ have been delivered. Thus an actor is
simple garbage when its reference listing is
empty.
Our technique uses a form of \emph{deferred reference listing}, in which $A$ may also defer sending $\InfoMsg$
messages to $C$ until it releases its references to $C$. This allows
$\InfoMsg$ and $\ReleaseMsg$ messages to be batched together, reducing communication
overhead.
\paragraph*{Cyclic Garbage}
Actors that are transitively acquainted with one another are said to
form cycles. Cycles of terminated actors are called \emph{cyclic
garbage} and cannot be detected with reference listing alone. Since
actors are hosted on nodes and cycles may span across multiple nodes,
detecting cyclic garbage requires sharing information between nodes to
obtain a consistent view of the global topology. One approach is to
compute a global snapshot of the distributed system
\cite{kafuraConcurrentDistributedGarbage1995} using the Chandy-Lamport
algorithm \cite{chandyDistributedSnapshotsDetermining1985}; but this
requires pausing execution of all actors on a node to compute its
local snapshot.
Another approach is to add edges to the actor reference graph so
that actor garbage coincides with passive object garbage
\cite{vardhanUsingPassiveObject2003,wangActorGarbageCollection2010}. This
is convenient because it allows existing algorithms for distributed
passive object GC, such as
\cite{schelvisIncrementalDistributionTimestamp1989}, to be reused in
actor systems. However, such transformations require that actors know
when they have undelivered messages, which requires some form of
synchronization.
To avoid pausing executions, Wang and Varela proposed a reference
listing based technique called the \emph{pseudo-root} algorithm. The
algorithm computes \emph{approximate} global snapshots and is
implemented in the SALSA runtime
\cite{wangDistributedGarbageCollection2006,wangConservativeSnapshotbasedActor2011}.
The pseudo-root algorithm requires a high number of additional control
messages and requires actors to write to shared memory if they migrate
or release references during snapshot collection. Our protocol
requires fewer control messages and no additional actions between
local actor snapshots. Wang and Varela also explicitly address migration of actors,
a concern orthogonal to our algorithm.
Our technique is inspired by \emph{MAC}, a termination detection
algorithm implemented in the Pony runtime
\cite{clebschFullyConcurrentGarbage2013}. In MAC, actors send a local
snapshot to a designated cycle detector whenever their message queue
becomes empty, and send another notification whenever it becomes non-empty. Clebsch and Drossopoulou prove that for systems with
causal message delivery, a simple request-reply protocol is sufficient
to confirm that the cycle detector's view of the topology is
consistent. However, enforcing causal delivery in a distributed
system imposes additional space and networking costs
\cite{fidge1987timestamps,blessingTreeTopologiesCausal2017}. DRL is
similar to MAC, but does not require causal message delivery, supports
decentralized termination detection, and actors need not take
snapshots each time their message queues become empty. The key insight is that these limitations can be removed by tracking additional information at the actor level.
An earlier version of DRL appeared in
\cite{plyukhinConcurrentGarbageCollection2018}. In this paper, we
formalize the description of the algorithm and prove its safety and
liveness. In the process, we discovered that release acknowledgment
messages are unnecessary and that termination detection is more
flexible than we first thought: it is not necessary for GC to be
performed in distinct ``phases'' where every actor takes a snapshot in
each phase. In particular, once an idle actor takes a snapshot, it
need not take another snapshot until it receives a fresh message.
\section{Preliminaries}
\label{sec:background}
An actor can only receive a message when it is \emph{idle}. Upon
receiving a message, it becomes \emph{busy}. A busy actor can perform
an unbounded sequence of \emph{actions} before becoming idle. In~\cite{aghaFoundationActorComputation1997}, an action may be to spawn an actor, send a message, or perform a (local) computation. We will also assume that actors can perform effects, such as file I/O. The actions an actor performs in response to a message are dictated by its application-level code, called a \emph{behavior}.
Actors can also receive messages from \emph{external} actors (such as the user) by
becoming \emph{receptionists}. An actor $A$ becomes a receptionist when its address is exposed to an external actor. Subsequently, any external actor can potentially obtain $A$'s address and send it a message. It is not possible for an actor system to determine when all external actors have ``forgotten'' a receptionist's address. We will therefore assume that an actor can never cease to be a receptionist once its address has been exposed.
\begin{figure} \centering \tikzfig{contents/diagrams/actor-graph-v2}
\caption{A simple actor system. The first configuration leads to the second after $C$ receives the message $m$, which contains a reference to $E$. Notice that an actor can send a message and ``forget'' its reference to the recipient before the message is delivered, as is the case for actor $F$. In both configurations, $E$ is a potential acquaintance of $C$, and $D$ is potentially reachable from $C$. The only terminated actor is $F$ because all other actors are potentially reachable from unblocked actors.}
\label{fig:actor-graph-example}
\end{figure}
An actor is said to be garbage if it can be destroyed without affecting the system's observable behavior. However, without analyzing an actor’s code, it is not possible to know whether it will have an effect when it receives a message. We will therefore restrict our attention to actors that can be guaranteed to be garbage without inspecting their behavior. According to this more conservative definition, any actor that might receive a message in the future should not be garbage collected because it could, for instance, write to a log file when it becomes busy. Conversely, any actor that is guaranteed to remain idle indefinitely can safely be garbage collected because it will never have any effects; such an actor is said to be \emph{terminated}.
Hence, garbage actors coincide with terminated actors in our model.
Terminated actors can be detected by looking at the global state of the system. We say that an actor $B$ is a \emph{potential acquaintance} of $A$ (and $A$ is a \emph{potential inverse acquaintance} of $B$) if $A$ has a reference to $B$ or if there is an undelivered message to $A$ that contains a reference to $B$. We define \emph{potential reachability} to be the reflexive transitive closure of the potential acquaintance relation. If an actor is idle and has no undelivered messages, then it is \emph{blocked}; otherwise it is \emph{unblocked}. We then observe that an actor is terminated when it is only potentially reachable by blocked actors: Such an actor is idle, blocked, and can only potentially be sent a message by other idle blocked actors. Conversely, without analyzing actor code we cannot safely conclude that an actor is terminated if it is potentially reachable by an unblocked actor. Hence, we say that an actor is terminated if and only if it is blocked and all of its potential inverse acquaintances are terminated.
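To make this characterization concrete, the sketch below (ours, and purely illustrative) computes the terminated actors from a global view of the system: it marks everything potentially reachable from an unblocked actor and keeps the blocked actors that were never marked.
\begin{verbatim}
def terminated_actors(potential_acquaintances, blocked):
    # potential_acquaintances: dict mapping each actor to the set of its
    # potential acquaintances; blocked: set of blocked actors.
    # An actor is terminated iff it is blocked and is never potentially
    # reached from an unblocked actor.
    reached = set()
    stack = [a for a in potential_acquaintances if a not in blocked]
    while stack:
        actor = stack.pop()
        if actor in reached:
            continue
        reached.add(actor)
        stack.extend(potential_acquaintances.get(actor, ()))
    return {a for a in potential_acquaintances
            if a in blocked and a not in reached}
\end{verbatim}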
\section{Conclusion and Future Work}\label{sec:conclusion}\label{sec:future-work}
We have shown how deferred reference listing and message counts can be used to detect termination in actor systems. The technique is provably safe (Theorem~\ref{thm:safety}) and eventually live (Theorem~\ref{thm:liveness}). An implementation in Akka is presently underway.
We believe that DRL satisfies our three initial goals:
\begin{enumerate}
\item \emph{Termination detection does not restrict concurrency in the application.} Actors do not need to coordinate their snapshots or pause execution during garbage collection.
\item \emph{Termination detection does not impose high overhead.} The amortized memory overhead of our technique is linear in the number of unreleased refobs. Besides application messages, the only additional control messages required by the DRL communication protocol are $\InfoMsg$ and $\ReleaseMsg$ messages. These control messages can be batched together and deferred, at the cost of worse termination detection time.
\item \emph{Termination detection scales with the number of nodes in the system.} Our algorithm is incremental, decentralized, and does not require synchronization between nodes.
\end{enumerate}
Since it does not matter what order snapshots are collected in, DRL can be used as a ``building block'' for more sophisticated garbage collection algorithms. One promising direction is to take a \emph{generational} approach \cite{DBLP:journals/cacm/LiebermanH83}, in which long-lived actors take snapshots less frequently than short-lived actors. Different types of actors could also take snapshots at different rates. In another approach, snapshot aggregators could \emph{request} snapshots instead of waiting to receive them.
In the presence of faults, DRL remains safe but its liveness properties are affected. If an actor $A$ crashes and its state cannot be recovered, then none of its refobs can be released and the aggregator will never receive its snapshot. Consequently, all actors potentially reachable from $A$ can no longer be garbage collected. However, $A$'s failure does not affect the garbage collection of actors it cannot reach. In particular, network partitions between nodes will not delay node-local garbage collection.
Choosing an adequate fault-recovery protocol will likely vary depending on the target actor framework. One option is to use checkpointing or event-sourcing to persist GC state; the resulting overhead may be acceptable in applications that do not frequently spawn actors or create refobs. Another option is to monitor actors for failure and infer which refobs are no longer active; this is a subject for future work.
Another issue that can affect liveness is message loss: If any messages along a refob $\Refob x A B$ are dropped, then $B$ can never be garbage collected because it will always appear unblocked. This is, in fact, the desired behavior if one cannot guarantee that the message will not be delivered at some later point. In practice, this problem might be addressed with watermarking.
\section{Introduction}
The actor model~\cite{books/daglib/0066897,journals/cacm/Agha90} is a
foundational model of concurrency that has been widely adopted for its
scalability: for example, actor languages have been used to implement services at
PayPal~\cite{PayPalBlowsBillion}, Discord~\cite{vishnevskiyHowDiscordScaled2017}, and in the United Kingdom's National Health Service
database~\cite{NHSDeployRiak2013}. In the actor model, stateful processes known as \emph{actors}
execute concurrently and communicate by
sending asynchronous messages to other actors, provided they have a \emph{reference} (also called a \emph{mail address} or \emph{address} in the literature) to the recipient. Actors can also spawn new actors. An actor is said to be \emph{garbage} if it can be destroyed without affecting the system's observable behavior.
Although a number of algorithms for automatic actor GC have been proposed \cite{ clebschFullyConcurrentGarbage2013,
kafuraConcurrentDistributedGarbage1995,
vardhanUsingPassiveObject2003,
venkatasubramanianScalableDistributedGarbage1992,
wangConservativeSnapshotbasedActor2011,
wangDistributedGarbageCollection2006}, actor languages and
frameworks currently popular in industry (such as Akka \cite{Akka},
Erlang \cite{armstrongConcurrentProgrammingERLANG1996}, and Orleans
\cite{bykovOrleansCloudComputing2011}) require that programmers
garbage collect actors manually. We believe this is because the algorithms
proposed thus far are too expensive to implement in distributed
systems. In order to find applicability in real-world actor runtimes, we argue that a GC algorithm should satisfy the following properties:
\begin{enumerate}
\item (\emph{Low latency}) GC should not restrict concurrency in the
application.
\item (\emph{High throughput}) GC should not impose significant space
or message overhead.
\item (\emph{Scalability}) GC should scale with the number of actors and nodes in the system.
\end{enumerate}
To the best of our knowledge, no previous algorithm satisfies all
three constraints. The first requirement precludes any global
synchronization between actors, a ``stop-the-world'' step, or a
requirement for causal order delivery of all messages. The second
requirement means that the number of additional ``control'' messages imposed by the
algorithm should be minimal. The third requirement precludes
algorithms based on global snapshots, since taking a
global snapshot of a system with a large number of nodes is
infeasible.
To address these goals, we have developed a garbage collection
technique called \emph{DRL} for \emph{Deferred Reference Listing}. The primary advantage of DRL is that it is decentralized and incremental: local garbage can be collected at one node without communicating with other nodes. Garbage collection can be performed concurrently with the application and imposes no message ordering constraints. We also expect DRL to be reasonably efficient in practice, since it does not require many additional messages or significant actor-local computation.
DRL works as follows. The \emph{communication protocol} (\cref{sec:model}) tracks information, such as references and message counts, and stores it in each actor's state. Actors periodically send out copies of their local state (called \emph{snapshots}) to be stored at one or more designated \emph{snapshot aggregator} actors. Each aggregator periodically searches its local store to find a subset of snapshots representing terminated actors (\cref{sec:termination-detection}). Once an actor is determined to have terminated, it can be garbage collected by, for example, sending it a \emph{self-destruct} message. Note that our termination detection algorithm itself is \textit{location transparent}.
Since DRL is defined on top of the actor model, it is oblivious to details of a particular implementation (such as how sequential computations are represented). Our technique is therefore applicable to any actor framework and can be
implemented as a library. Moreover, it can also be applied to open
systems, allowing a garbage-collected actor subsystem to interoperate with an external actor
system.
The outline of the paper is as follows. We provide a characterization of actor garbage in Section~\ref{sec:background} and discuss related work in Section~\ref{sec:related-work}. We then provide a
specification of the DRL protocol in Section~\ref{sec:model}.
In Section~\ref{sec:chain-lemma}, we describe a key property of DRL called the \emph{Chain Lemma}. This lemma allows us to prove the safety and liveness properties, which are stated in
Section~\ref{sec:termination-detection}. We then conclude in Section~\ref{sec:future-work} with some discussion of future work and how DRL may be used in practice. To conserve space, all proofs have been relegated to the Appendix.
\section{Termination Detection}\label{sec:termination-detection}
In order to detect non-simple terminated garbage, actors periodically send a snapshot of their knowledge set to a snapshot aggregator actor. An aggregator in turn may disseminate the snapshots it has to other aggregators. Each aggregator maintains a map data structure, associating an actor's address with its most recent snapshot; in effect, snapshot aggregators maintain an eventually consistent key-value store with addresses as keys and snapshots as values. At any time, an aggregator can scan its local store to find terminated actors and send them a request to self-destruct.
Given an arbitrary set of snapshots $Q$, we characterize the \emph{finalized subsets} of $Q$ in this section. We show that the actors that took these finalized snapshots must be terminated. Conversely, the snapshots of any closed set of terminated actors are guaranteed to be finalized. (Recall that the closure of a set of terminated actors is also a terminated set of actors.) Thus, snapshot aggregators can eventually detect all terminated actors by periodically searching their local stores for finalized subsets. Finally, we give an algorithm for obtaining the maximum finalized subset of a set $Q$ by ``pruning away'' the snapshots of actors that appear not to have terminated.
Recall that when we speak of a set of snapshots $Q$, we assume each snapshot was taken by a different actor. We will write $\Phi_A \in Q$ to denote $A$'s snapshot in $Q$; we will also write $A \in Q$ if $A$ has a snapshot in $Q$. We will also write $Q \vdash \phi$ if $\Phi \vdash \phi$ for some $\Phi \in Q$.
\begin{definition}
A set of snapshots $Q$ is \emph{closed} if, whenever $Q \vdash \Unreleased(\Refob x A B)$ and $B \in Q$, then also $A\in Q$ and $\Phi_A \vdash \Activated(\Refob x A B)$.
\end{definition}
\begin{definition}
An actor $B \in Q$ \emph{appears blocked} if, whenever $Q \vdash \Unreleased(\Refob x A B)$, then $\Phi_A,\Phi_B \in Q$ and $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$.
\end{definition}
\begin{definition}
A set of snapshots $Q$ is \emph{finalized} if it is closed and every actor in $Q$ appears blocked.
\end{definition}
This definition corresponds to our characterization in Section~\ref{sec:garbage-defn}: An actor is terminated precisely when it is in a closed set of blocked actors.
\begin{restatable}[Safety]{theorem}{Safety}\label{thm:safety}
If $Q$ is a finalized set of snapshots at time $t_f$ then the actors in $Q$ are all terminated at $t_f$.
\end{restatable}
We say that the \emph{final action} of a terminated actor is the last non-snapshot event it performs before becoming terminated. Notice that an actor's final action can only be an \textsc{Idle}, \textsc{Info}, or \textsc{Release} event. Note also that the final action may come \emph{strictly before} an actor becomes terminated, since a blocked actor may only terminate after all of its potential inverse acquaintances become blocked.
The following lemma allows us to prove that DRL is eventually live. It also shows that a non-finalized set of snapshots must have an unblocked actor.
\begin{restatable}{lemma}{Completeness}\label{lem:terminated-is-complete}
Let $S$ be a closed set of terminated actors at time $t_f$. If every actor in $S$ took a snapshot sometime after its final action, then the resulting set of snapshots is finalized.
\end{restatable}
\begin{theorem}[Liveness]\label{thm:liveness}
If every actor eventually takes a snapshot after performing an \textsc{Idle}, \textsc{Info}, or \textsc{Release} event, then every terminated actor is eventually part of a finalized set of snapshots.
\end{theorem}
\begin{proof}
If an actor $A$ is terminated, then the closure $S$ of $\{A\}$ is a terminated set of actors. Since every actor eventually takes a snapshot after taking its final action, \cref{lem:terminated-is-complete} implies that the resulting snapshots of $S$ are finalized.
\end{proof}
We say that a refob $\Refob x A B$ is \emph{unreleased} in $Q$ if $Q \vdash \Unreleased(x)$. Such a refob is said to be \emph{relevant} when $B \in Q$ implies $A \in Q$ and $\Phi_A \vdash \Activated(x)$ and $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$; intuitively, this indicates that $B$ has no undelivered messages along $x$. Notice that a set $Q$ is finalized if and only if all unreleased refobs in $Q$ are relevant.
Observe that if $\Refob x A B$ is unreleased and irrelevant in $Q$, then $B$ cannot be in any finalized subset of $Q$. We can therefore employ a simple iterative algorithm to find the maximum finalized subset of $Q$: for each irrelevant unreleased refob $\Refob x A B$ in $Q$, remove the target $B$ from $Q$. Since this can make another unreleased refob $\Refob y B C$ irrelevant, we must repeat this process until a fixed point is reached. In the resulting subset $Q'$, all unreleased refobs are relevant. Since all actors in $Q \setminus Q'$ are not members of any finalized subset of $Q$, it must be that $Q'$ is the maximum finalized subset of $Q$.
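A direct transcription of this pruning procedure is sketched below. It is our own illustration; the snapshot interface is a simplification, and the method names are placeholders for whatever representation an implementation actually uses.
\begin{verbatim}
def maximum_finalized_subset(snapshots):
    # snapshots: dict mapping each actor to its snapshot.  Each snapshot is
    # assumed (our simplification) to expose:
    #   unreleased_refobs() -> iterable of (token, owner, target) triples,
    #   activated(token), sent_count(token), recv_count(token).
    Q = dict(snapshots)
    changed = True
    while changed:
        changed = False
        for actor, snap in list(Q.items()):
            if actor not in Q:
                continue                  # already pruned in this pass
            for (x, owner, target) in snap.unreleased_refobs():
                if target not in Q:
                    continue
                owner_snap = Q.get(owner)
                relevant = (
                    owner_snap is not None
                    and owner_snap.activated(x)
                    and owner_snap.sent_count(x) == Q[target].recv_count(x)
                )
                if not relevant:
                    del Q[target]         # target may have undelivered messages
                    changed = True
    return Q
\end{verbatim}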
\section{Chain Lemma}\label{sec:chain-lemma}
To determine if an actor has terminated, one must show that all of its potential inverse acquaintances have terminated. This appears to pose a problem for termination detection, since actors cannot have a complete listing of all their potential inverse acquaintances without some synchronization: actors would need to consult their acquaintances before creating new references to them. In this section, we show that the DRL protocol provides a weaker guarantee that will nevertheless prove sufficient: knowledge about an actor's refobs is \emph{distributed} across the system and there is always a ``path'' from the actor to any of its potential inverse acquaintances.
\begin{figure}
\centering
\tikzfig{contents/diagrams/chain-lemma}
\caption{An example of a chain from $B$ to $x_3$.}
\label{fig:chain-example}
\end{figure}
Let us construct a concrete example of such a path, depicted by Fig.~\ref{fig:chain-example}. Suppose that $A_1$ spawns $B$, gaining a refob $\Refob{x_1}{A_1}{B}$. Then $A_1$ may use $x_1$ to create $\Refob{x_2}{A_2}{B}$, which $A_2$ may receive and then use $x_2$ to create $\Refob{x_3}{A_3}{B}$.
At this point, there are unreleased refobs owned by $A_2$ and $A_3$ that are not included in $B$'s knowledge set. However, Fig.~\ref{fig:chain-example} shows that the distributed knowledge of $B,A_1,A_2$ creates a ``path'' to all of $B$'s potential inverse acquaintances. Since $A_1$ spawned $B$, $B$ knows the fact $\Created(x_1)$. Then when $A_1$ created $x_2$, it added the fact $\CreatedUsing(x_1, x_2)$ to its knowledge set, and likewise $A_2$ added the fact $\CreatedUsing(x_2, x_3)$; each fact points to another actor that owns an unreleased refob to $B$ (Fig.~\ref{fig:chain-example} (1)).
Since actors can remove $\CreatedUsing$ facts by sending $\InfoMsg$ messages, we also consider (Fig.~\ref{fig:chain-example} (2)) to be a ``path'' from $B$ to $A_3$. But notice that, once $B$ receives the $\InfoMsg$ message, the fact $\Created(x_3)$ will be added to its knowledge set and so there will be a ``direct path'' from $B$ to $A_3$. We formalize this intuition with the notion of a \emph{chain} in a given configuration $\Config{\alpha}{\mu}{\rho}{\chi}$:
\begin{definition}
A \emph{chain to $\Refob x A B$} is a sequence of unreleased refobs $(\Refob{x_1}{A_1}{B}),\allowbreak \dots,\allowbreak (\Refob{x_n}{A_n}{B})$ such that:
\begin{itemize}
\item $\alpha(B) \vdash \Created(\Refob{x_1}{A_1}{B})$;
\item For all $i < n$, either $\alpha(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$ or the message $\Msg{B}{\InfoMsg(x_i,x_{i+1})}$ is in transit; and
\item $A_n = A$ and $x_n = x$.
\end{itemize}
\end{definition}
We say that an actor $B$ is \emph{in the root set} if it is a receptionist or if there is an application message $\AppMsg(x,R)$ in transit to an external actor with $B \in \text{targets}(R)$. Since external actors never release refobs, actors in the root set must never terminate.
\begin{restatable}[Chain Lemma]{lemma}{ChainLemma}
\label{lem:chain-lemma}
Let $B$ be an internal actor in $\kappa$. If $B$ is not in the root set, then there is a chain to every unreleased refob $\Refob x A B$. Otherwise, there is a chain to some refob $\Refob y C B$ where $C$ is an external actor.
\end{restatable}
\begin{remark*}
When $B$ is in the root set, not all of its unreleased refobs are guaranteed to have chains. This is because an external actor may send $B$'s address to other receptionists without sending an $\InfoMsg$ message to $B$.
\end{remark*}
An immediate application of the Chain Lemma is to allow actors to detect when they are simple garbage. If any actor besides $B$ owns an unreleased refob to $B$, then $B$ must have a fact $\Created(\Refob x A B)$ in its knowledge set where $A \ne B$. Hence, if $B$ has no such facts, then it must have no nontrivial potential inverse acquaintances. Moreover, since actors can only have undelivered messages along unreleased refobs, $B$ also has no undelivered messages from any other actor; it can only have undelivered messages that it sent to itself. This gives us the following result:
\begin{theorem}
Suppose $B$ is idle with knowledge set $\Phi$, such that:
\begin{itemize}
\item $\Phi$ does not contain any facts of the form $\Created(\Refob x A B)$ where $A \ne B$; and
\item for all facts $\Created(\Refob x B B) \in \Phi$, also $\Phi \vdash \SentCount(x,n) \land \RecvCount(x,n)$ for some $n$.
\end{itemize}
Then $B$ is simple garbage.
\end{theorem}
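In an implementation this is a purely local test on an idle actor's knowledge set. A sketch (ours; the method names are placeholders):
\begin{verbatim}
def is_simple_garbage(actor, knowledge):
    # Assumes `actor` is currently idle.  knowledge.created_refobs() yields
    # (token, owner, target) triples for the Created facts in its knowledge
    # set; sent_count/recv_count give the message counts (0 when absent).
    for (x, owner, target) in knowledge.created_refobs():
        if owner != actor:
            return False   # some other actor may own an unreleased refob to us
        if knowledge.sent_count(x) != knowledge.recv_count(x):
            return False   # undelivered messages the actor sent to itself
    return True
\end{verbatim}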
\section{A Two-Level Semantic Model}\label{sec:model}
Our computation model is based on the two
level approach to actor semantics
\cite{venkatasubramanianReasoningMetaLevel1995}, in which a lower \emph{system-level} transition system interprets the operations performed by a higher, user-facing
\emph{application-level} transition system. In this section, we define the DRL communication protocol at the system level. We do not provide a
transition system for the application level computation model, since it is
not relevant to garbage collection (see
\cite{aghaFoundationActorComputation1997} for how it can be
done). What is relevant to us is that corresponding to each
application-level action is a system-level transition that tracks
references.
We will therefore define \emph{system-level configurations} and
\emph{transitions on system-level configurations}. We will refer to
these, respectively, as configurations and transitions in the rest of
the paper.
\subsection{Overview}
\label{sec:overview}
Actors in DRL use \emph{reference objects} (abbreviated \emph{refobs}) to send messages, instead of using plain actor addresses. Refobs are similar to unidirectional channels and can only be used by their designated \emph{owner} to send messages to their \emph{target}; thus in order for $A$ to give $B$ a reference to $C$, it must explicitly create a new refob owned by $B$. Once a refob is no longer needed, it should be \emph{deactivated} by its owner and removed from local state.
The DRL communication protocol enriches each actor's state with a list of refobs that it currently owns and associated message counts representing the number of messages sent using each refob. Each actor also maintains a subset of the refobs of which it is the target, together with associated message receive counts. Lastly, actors perform a form of ``contact tracing'' by maintaining a subset of the refobs that they have created for other actors; we provide details about the bookkeeping later in this section.
The additional information above allows us to detect termination by inspecting actor snapshots. If a set of snapshots is consistent (in the sense of \cite{chandyDistributedSnapshotsDetermining1985}) then we can use the ``contact tracing'' information to determine whether the set is \emph{closed} under the potential inverse acquaintance relation (see \cref{sec:chain-lemma}). Then, given a consistent and closed set of snapshots, we can use the message counts to determine whether an actor is blocked. We can therefore find all the terminated actors within a consistent set of snapshots.
In fact, DRL satisfies a stronger property: any set of snapshots that ``appears terminated'' in the sense above is guaranteed to be consistent. Hence, given an arbitrary closed set of snapshots, it is possible to determine which of the corresponding actors have terminated. This allows a great deal of freedom in how snapshots are aggregated. For instance, actors could place their snapshots in a global eventually consistent store, with a garbage collection thread at each node periodically inspecting the store for local terminated actors.
\paragraph*{Reference Objects}
\begin{figure} \centering \tikzfig{contents/diagrams/references}
\caption{An example showing how refobs are created and
destroyed. Below each actor we list all the ``facts'' related to $z$ that are stored in its local state. Although not pictured in the figure, $A$ also obtains facts $\Activated(x)$ and $\Activated(y)$ after spawning actors $B$ and $C$, respectively. Likewise, actors $B,C$ obtain facts $\Created(x),\Created(y)$, respectively, upon being spawned.}
\label{fig:refob-example}
\end{figure}
A refob is a triple $(x,A,B)$, where $A$ is the owner actor's address, $B$ is the target actor's address, and $x$ is a globally unique token. An actor can cheaply generate such a token by combining its address with a local sequence number, since actor systems already guarantee that each address is unique. We will stylize a triple $(x,A,B)$ as $\Refob x A B$. We will also sometimes refer to such a refob as simply $x$, since tokens act as unique identifiers.
When an actor $A$ spawns an actor $B$ (Fig.~\ref{fig:refob-example}
(1, 2)) the DRL protocol creates a new refob
$\Refob x A B$ that is stored in both $A$ and $B$'s system-level
state, and a refob $\Refob y B B$ in $B$'s state. The refob $x$ allows $A$ to send application-level messages to
$B$. These messages are denoted $\AppMsg(x,R)$, where $R$ is the set of refobs contained in the message that $A$ has created for $B$. The refob $y$ corresponds to the \texttt{self} variable present in some actor languages.
If $A$ has active refobs $\Refob x A B$ and $\Refob y A C$, then it can
create a new refob $\Refob z B C$ by generating a token $z$. In
addition to being sent to $B$, this refob must also temporarily be
stored in $A$'s system-level state and marked as ``created using $y$''
(Fig.~\ref{fig:refob-example} (3)). Once $B$ receives $z$, it must add
the refob to its system-level state and mark it as ``active''
(Fig.~\ref{fig:refob-example} (4)). Note that $B$ can have multiple
distinct refobs that reference the same actor in its state; this
can be the result of, for example, several actors concurrently sending
refobs to $B$. Transition rules for spawning actors and sending
messages are given in Section~\ref{sec:standard-actor-operations}.
Actor $A$ may remove $z$ from its state once it has sent a
(system-level) $\InfoMsg$ message informing $C$ about $z$
(Fig.~\ref{fig:refob-example} (4)). Similarly, when $B$ no longer
needs its refob for $C$, it can ``deactivate'' $z$ by removing it
from local state and sending $C$ a (system-level) $\ReleaseMsg$
message (Fig.~\ref{fig:refob-example} (5)). Note that if $B$ already
has a refob $\Refob z B C$ and then receives another $\Refob {z'} B C$,
then it can be more efficient to defer deactivating the extraneous
$z'$ until $z$ is also no longer needed; this way, the $\ReleaseMsg$
messages can be batched together.
When $C$ receives an $\InfoMsg$ message, it records that the refob
has been created, and when $C$ receives a $\ReleaseMsg$ message, it
records that the refob has been released
(Fig.~\ref{fig:refob-example} (6)). Note that these messages may
arrive in any order. Once $C$ has received both, it is permitted to
remove all facts about the refob from its local state. Transition
rules for these reference listing actions are given in
Section~\ref{sec:release-protocol}.
Once a refob has been created, it passes through four states:
pending, active, inactive, and released. A refob $\Refob z B C$ is
said to be \emph{pending} until it is received by its owner $B$. Once
received, the refob is \emph{active} until it is \emph{deactivated}
by its owner, at which point it becomes \emph{inactive}. Finally,
once $C$ learns that $z$ has been deactivated, the refob is said to
be \emph{released}. A refob that has not yet been released is
\emph{unreleased}.
Slightly amending the definition we gave in \cref{sec:background}, we say that $B$ is a \emph{potential acquaintance} of $A$
(and $A$ is a \emph{potential inverse acquaintance} of $B$) when there
exists an unreleased refob $\Refob x A B$. Thus, $B$ becomes a potential acquaintance of $A$ as soon as $x$ is created, and only ceases to be an acquaintance once it has received a $\ReleaseMsg$ message for every refob $\Refob y A B$ that has been created so far.
\begin{figure} \centering \tikzfig{contents/diagrams/message-counts-timelines-simpler}
\caption{A time diagram for actors $A,B,C$, demonstrating message counts and consistent snapshots. Dashed arrows represent messages and dotted lines represent consistent cuts. In each cut above, $B$'s message send count agrees with $C$'s message receive count.}
\label{fig:message-counts}
\end{figure}
\paragraph*{Message Counts and Snapshots}
For each refob $\Refob x A B$, the owner $A$ counts the
number of $\AppMsg$ and $\InfoMsg$ messages sent along $x$; this count can be deleted when $A$
deactivates $x$. Each message is annotated with the refob used to
send it. Whenever $B$ receives an $\AppMsg$ or $\InfoMsg$ message along $x$, it
correspondingly increments a receive count for $x$; this count can be deleted once $x$
has been released. Thus the memory overhead of message counts is linear in
the number of unreleased refobs.
A snapshot is a copy of all the facts in an actor's system-level state at some point in time. We will assume throughout the paper that in every set of snapshots $Q$, each snapshot was taken by a different actor. Such a set is also said to form a \emph{cut}. Recall that a cut is consistent if no snapshot in the cut causally precedes any other \cite{chandyDistributedSnapshotsDetermining1985}. Let us also say that $Q$ is a set of \emph{mutually quiescent} snapshots if there are no undelivered messages between actors in the cut. That is, if $A \in Q$ sent a message to $B \in Q$ before taking a snapshot, then the message must have been delivered before $B$ took its snapshot. Notice that if all snapshots in $Q$ are mutually quiescent, then $Q$ is consistent.
Notice also that in Fig.~\ref{fig:message-counts}, the snapshots of $B$ and $C$ are mutually quiescent when their send and receive counts agree. This is ensured in part because each refob has a unique token: If actors associated message counts with actor names instead of tokens, then $B$'s snapshots at $t_0$ and $t_3$ would both contain $\SentCount(C,1)$. Thus, $B$'s snapshot at $t_3$ and $C$'s snapshot at $t_0$ would appear mutually quiescent, despite having undelivered messages in the cut.
We would like to conclude that snapshots from two actors $A,B$ are mutually quiescent if and only if their send and receive counts agree for every refob $\Refob x A B$ or $\Refob y B A$. Unfortunately, this fails to hold in general for systems with unordered message delivery. It also fails to hold when, for instance, the owner actor takes a snapshot before the refob is activated and the target actor takes a snapshot after the refob is released. In such a case, neither knowledge set includes a message count for the refob and they therefore appear to agree. However, we show that the message counts can nevertheless be used to bound the number of undelivered messages for purposes of our algorithm (\cref{lem:msg-counts}).
\paragraph*{Definitions}
We use the capital letters $A,B,C,D,E$ to denote actor addresses.
Tokens are denoted $x,y,z$, with a special reserved token $\NullToken$
for messages from external actors.
A \emph{fact} is a value that takes one of the following forms:
$\Created(x)$, $\Released(x)$, $\CreatedUsing(x,y)$, $\Activated(x)$, $\Unreleased(x)$,
$\SentCount(x,n)$, or $\RecvCount(x,n)$ for some refobs $x,y$ and
natural number $n$. Each actor's state holds a set of facts about
refobs and message counts called its \emph{knowledge set}. We use
$\phi,\psi$ to denote facts and $\Phi,\Psi$ to denote finite sets of
facts. Each fact may be interpreted as a \emph{predicate} that
indicates the occurrence of some past event. Interpreting a set of
facts $\Phi$ as a set of axioms, we write $\Phi \vdash \phi$ when
$\phi$ is derivable by first-order logic from $\Phi$ with the
following additional rules:
\begin{itemize}
\item If $(\not\exists n \in \mathbb N,\ \SentCount(x,n) \in
\Phi)$ then $\Phi \vdash \SentCount(x,0)$
\item If $(\not\exists n \in \mathbb N,\ \RecvCount(x,n) \in
\Phi)$ then $\Phi \vdash \RecvCount(x,0)$
\item If $\Phi \vdash \Created(x) \land \lnot \Released(x)$ then
$\Phi \vdash \Unreleased(x)$
\item If $\Phi \vdash \CreatedUsing(x,y)$ then $\Phi \vdash
\Created(y)$
\end{itemize}
For convenience, we define a pair of functions
$\IncSent(x,\Phi),\IncRecv(x,\Phi)$ for incrementing message
send/receive counts, as follows: If $\SentCount(x,n) \in \Phi$ for some
$n$, then
$\IncSent(x,\Phi) = (\Phi \setminus \{\SentCount(x,n)\}) \cup
\{\SentCount(x,n+1)\}$; otherwise,
$\IncSent(x,\Phi) = \Phi \cup \{\SentCount(x,1)\}$. Likewise for
$\IncRecv$ and $\RecvCount$.
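The following Python sketch (ours; one possible in-memory representation, not mandated by the protocol) transcribes these derivation shortcuts and the $\IncSent$/$\IncRecv$ operations:
\begin{verbatim}
class KnowledgeSet:
    # One possible representation of an actor's knowledge set.
    def __init__(self):
        self.created = set()        # tokens x with Created(x)
        self.released = set()       # tokens x with Released(x)
        self.created_using = set()  # pairs (x, y) with CreatedUsing(x, y)
        self.activated = set()      # tokens x with Activated(x)
        self.sent = {}              # token -> n for SentCount(x, n)
        self.recv = {}              # token -> n for RecvCount(x, n)

    def sent_count(self, x):
        return self.sent.get(x, 0)  # a missing count means SentCount(x, 0)

    def recv_count(self, x):
        return self.recv.get(x, 0)  # a missing count means RecvCount(x, 0)

    def unreleased(self, x):
        # Unreleased(x) follows from Created(x) and not Released(x);
        # CreatedUsing(_, x) also implies Created(x).
        created = self.created | {y for (_, y) in self.created_using}
        return x in created and x not in self.released

    def inc_sent(self, x):          # IncSent(x, Phi)
        self.sent[x] = self.sent.get(x, 0) + 1

    def inc_recv(self, x):          # IncRecv(x, Phi)
        self.recv[x] = self.recv.get(x, 0) + 1
\end{verbatim}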
Recall that an actor is either \emph{busy} (processing a message) or
\emph{idle} (waiting for a message). An actor with knowledge set
$\Phi$ is denoted $[\Phi]$ if it is busy and $(\Phi)$ if it is idle.
Our specification includes both \emph{system messages} (also called
\emph{control messages}) and \emph{application messages}. The former
are automatically generated by the DRL protocol and handled at the system
level, whereas the latter are explicitly created and consumed by
user-defined behaviors. Application-level messages are denoted
$\AppMsg(x,R)$. The argument $x$ is the refob used to send the
message. The second argument $R$ is a set of refobs created by the
sender to be used by the destination actor. Any remaining application-specific data in the message is omitted in our notation.
The DRL communication protocol uses two kinds of system messages. $\InfoMsg(y, z, B)$ is a message sent from an actor $A$ to an actor $C$, informing it that a new refob $\Refob z B C$ was created using $\Refob y A C$. $\ReleaseMsg(x,n)$ is a message sent from an actor $A$ to an actor $B$, informing it that the refob $\Refob x A B$ has been deactivated and should be released.
A \emph{configuration} $\Config{\alpha}{\mu}{\rho}{\chi}$ is a
quadruple $(\alpha,\mu,\rho,\chi)$ where: $\alpha$ is a mapping from actor addresses to knowledge sets; $\mu$ is a mapping from actor addresses to multisets of messages; and $\rho,\chi$ are sets of actor addresses. Actors in $\dom(\alpha)$ are \emph{internal actors} and actors in $\chi$ are
\emph{external actors}; the two sets may not intersect. The mapping $\mu$ associates each actor with undelivered messages to that actor. Actors in
$\rho$ are \emph{receptionists}. We will ensure $\rho \subseteq \dom(\alpha)$ remains
valid in any configuration that is derived from a configuration where
the property holds (referred to as the locality laws in
\cite{Baker-Hewitt-laws77}).
Configurations are denoted by $\kappa$, $\kappa'$, $\kappa_0$,
etc. If an actor address $A$ (resp. a token $x$), does not occur in
$\kappa$, then the address (resp. the token) is said to be
\emph{fresh}. We assume a facility for generating fresh addresses and
tokens.
In order to express our transition rules in a pattern-matching style, we will employ the following shorthand. Let $\alpha,[\Phi]_A$ refer to a
mapping $\alpha'$ where $\alpha'(A) = [\Phi]$ and $\alpha =
\alpha'|_{\dom(\alpha') \setminus \{A\}}$. Similarly, let
$\mu,\Msg{A}{m}$ refer to a mapping $\mu'$ where $m \in \mu'(A)$ and
$\mu = \mu'|_{\dom(\mu') \setminus \{A\}} \cup \{A \mapsto \mu'(A)
\setminus \{m\}\}$. Informally, the expression $\alpha,[\Phi]_A$ refers to a set of actors containing both $\alpha$ and the busy actor $A$ (with knowledge set $\Phi$); the expression $\mu, \Msg{A}{m}$ refers to the set of messages containing both $\mu$ and the message $m$ (sent to actor $A$).
The rules of our transition system define atomic transitions from one configuration
to another. Each transition rule has a label $l$, parameterized by some
variables $\vec x$ that occur in the left- and right-hand
configurations. Given a configuration $\kappa$, these parameters
functionally determine the next configuration $\kappa'$. Given
arguments $\vec v$, we write $\kappa \Step{l(\vec v)} \kappa'$ to denote a semantic step from $\kappa$ to $\kappa'$ using rule $l(\vec v)$.
We refer to a label with arguments $l(\vec v)$ as an \emph{event},
denoted $e$. A sequence of events is denoted $\pi$. If $\pi =
e_1,\dots,e_n$ then we write $\kappa \Step \pi \kappa'$ when $\kappa
\Step{e_1} \kappa_1 \Step{e_2} \dots \Step{e_n} \kappa'$. If there
exists $\pi$ such that $\kappa \Step \pi \kappa'$, then $\kappa'$ is
\emph{derivable} from $\kappa$. An \emph{execution} is a sequence of events $e_1,\dots,e_n$ such that
$\kappa_0 \Step{e_1} \kappa_1 \Step{e_2} \dots \Step{e_n} \kappa_n$,
where $\kappa_0$ is the initial configuration
(Section~\ref{sec:initial-configuration}). We say that a property holds \emph{at time $t$} if it holds in $\kappa_t$.
\subsection{Initial Configuration}\label{sec:initial-configuration}
The initial configuration $\kappa_0$ consists of a single actor in a
busy state:
$$\Config{[\Phi]_A}{\emptyset}{\emptyset}{\{E\}},$$
where
$\Phi = \{\Activated(\Refob x A E),\ \Created(\Refob y A A),\
\Activated(\Refob y A A)\}$. The actor's knowledge set includes a
refob to itself and a refob to an external actor $E$. $A$ can
become a receptionist by sending $E$ a refob to itself.
Henceforth, we will only consider configurations that are derivable
from an initial configuration.
\subsection{Standard Actor Operations}\label{sec:standard-actor-operations}
\begin{figure}[t]
$\textsc{Spawn}(x, A, B)$
$$\Config{\alpha, [\Phi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\Phi \cup \{ \Activated(\Refob x A B) \}]_A, [\Psi]_B}{\mu}{\rho}{\chi}$$
\begin{tabular}{ll}
where & $x,y,B$ fresh\\
and & $\Psi = \{ \Created(\Refob x A B),\ \Created(\Refob {y} B B),\ \Activated(\Refob y B B) \}$
\end{tabular}
\vspace{0.5cm}
$\textsc{Send}(x,\vec y, \vec z, A, B,\vec C)$
$$\Config{\alpha, [\Phi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\IncSent(x,\Phi) \cup \Psi]_A}{\mu, \Msg{B}{\AppMsg(x,R)}}{\rho}{\chi}$$
\begin{tabular}{ll}
where & $\vec y$ and $\vec z$ fresh and $n = |\vec y| = |\vec z| = |\vec C|$\\
and & $\Phi \vdash \Activated(\Refob x A B)$ and $\forall i \le n,\ \Phi \vdash \Activated(\Refob{y_i}{A}{C_i})$\\
and & $R = \{\Refob{z_i}{B}{C_i}\ |\ i \le n \}$ and $\Psi = \{\CreatedUsing(y_i,z_i)\ |\ i \le n \}$
\end{tabular}
\vspace{0.5cm}
$\textsc{Receive}(x,B,R)$
$$\Config{\alpha, (\Phi)_B}{\mu, \Msg{B}{\AppMsg(x,R)}}{\rho}{\chi} \InternalStep \Config{\alpha, [\IncRecv(x,\Phi) \cup \Psi]_B}{\mu}{\rho}{\chi}$$
\begin{tabular}{ll}
where $\Psi = \{\Activated(z)\ |\ z \in R\}$
\end{tabular}
\vspace{0.5cm}
$\textsc{Idle}(A)$
$$\Config{\alpha, [\Phi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi)_A}{\mu}{\rho}{\chi}$$
\caption{Rules for standard actor interactions.}
\label{rules:actors}
\end{figure}
Fig.~\ref{rules:actors} gives transition rules for standard actor operations, such as spawning actors and sending messages. Each of these rules corresponds to a rule in the standard operational semantics of actors~\cite{aghaFoundationActorComputation1997}. Note that each rule is atomic, but can just as well be implemented as a sequence of several smaller steps without loss of generality because actors do not share state -- see \cite{aghaFoundationActorComputation1997} for a formal proof.
The \textsc{Spawn} event allows a busy actor $A$ to spawn a new actor $B$ and creates two refobs $\Refob x A B,\ \Refob y B B$. $B$ is initialized with knowledge about $x$ and $y$ via the facts $\Created(x),\Created(y)$. The facts $\Activated(x), \Activated(y)$ allow $A$ and $B$ to immediately begin sending messages to $B$. Note that implementing \textsc{Spawn} does not require a synchronization protocol between $A$ and $B$ to construct $\Refob x A B$. The parent $A$ can pass both its address and the freshly generated token $x$ to the constructor for $B$. Since actors typically know their own addresses, this allows $B$ to construct the triple $(x,A,B)$. Since the \texttt{spawn} call typically returns the address of the spawned actor, $A$ can also create the same triple.
The \textsc{Send} event allows a busy actor $A$ to send an application-level message to $B$ containing a set of refobs $z_1,\dots,z_n$ to actors $\vec C = C_1,\dots,C_n$ -- it is possible that $B = A$ or $C_i = A$ for some $i$. For each new refob $z_i$, we say that the message \emph{contains $z_i$}. Any other data in the message besides these refobs is irrelevant to termination detection and therefore omitted. To send the message, $A$ must have active refobs to both the target actor $B$ and to every actor $C_1,\dots,C_n$ referenced in the message. For each target $C_i$, $A$ adds a fact $\CreatedUsing(y_i,z_i)$ to its knowledge set; we say that $A$ \emph{created $z_i$ using $y_i$}. Finally, $A$ must increment its $\SentCount$ count for the refob $x$ used to send the message; we say that the message is sent \emph{along $x$}.
The \textsc{Receive} event allows an idle actor $B$ to become busy by consuming an application message sent to $B$. Before performing subsequent actions, $B$ increments the receive count for $x$ and adds all refobs in the message to its knowledge set.
Finally, the \textsc{Idle} event puts a busy actor into the idle state, enabling it to consume another message.
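The count bookkeeping in \textsc{Send} and \textsc{Receive} is the part of these rules that our algorithm relies on, and it can be summarized by the following self-contained Python sketch (ours, purely illustrative); refob creation, mailboxes and the busy/idle distinction are elided.
\begin{verbatim}
# Illustrative sketch of the Send/Receive count bookkeeping only.
from collections import Counter

class ActorCounts:
    def __init__(self):
        self.sent = Counter()   # SentCount facts, defaulting to 0
        self.recv = Counter()   # RecvCount facts, defaulting to 0

def send(sender, x):
    sender.sent[x] += 1         # Send(x, ...) applies IncSent(x, Phi_A)

def receive(receiver, x):
    receiver.recv[x] += 1       # Receive(x, B, R) applies IncRecv(x, Phi_B)

A, B = ActorCounts(), ActorCounts()
send(A, "x"); send(A, "x"); receive(B, "x")
# One message along x is still undelivered: max(n - m, 0) = 1
# (cf. the message-count lemma), which is how undelivered messages
# are detected from snapshots.
assert max(A.sent["x"] - B.recv["x"], 0) == 1
\end{verbatim}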
\subsection{Release Protocol}\label{sec:release-protocol}
\begin{figure}[t!]
$\textsc{SendInfo}(y,z,A,B,C)$
$$\Config{\alpha, [\Phi \cup \Psi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\IncSent(y,\Phi)]_A}{\mu,\Msg{C}{\InfoMsg(y,z,B)}}{\rho}{\chi}$$
\begin{tabular}{ll}
where $\Psi = \{\CreatedUsing(\Refob y A C,\Refob z B C)\}$
\end{tabular}
\vspace{0.5cm}
$\textsc{Info}(y,z,B,C)$
$$\Config{\alpha, (\Phi)_C}{\mu, \Msg{C}{\InfoMsg(y,z,B)}}{\rho}{\chi} \InternalStep \Config{\alpha, (\IncRecv(y,\Phi) \cup \Psi)_C}{\mu}{\rho}{\chi}$$
\begin{tabular}{ll}
where $\Psi = \{\Created(\Refob z B C)\}$
\end{tabular}
\vspace{0.5cm}
$\textsc{SendRelease}(x,A,B)$
$$\Config{\alpha, [\Phi \cup \Psi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\Phi]_A}{\mu, \Msg{B}{\ReleaseMsg(x,n)}}{\rho}{\chi}$$
\begin{tabular}{ll}
where &$\Psi = \{\Activated(\Refob x A B), \SentCount(x,n)\}$\\
and & $\not\exists y,\ \CreatedUsing(x,y) \in \Phi$
\end{tabular}
\vspace{0.5cm}
$\textsc{Release}(x,A,B)$
$$\Config{\alpha, (\Phi)_B}{\mu, \Msg{B}{\ReleaseMsg(x,n)}}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi \cup \{\Released(x)\})_B}{\mu}{\rho}{\chi}$$
\begin{tabular}{l}
only if $\Phi \vdash \RecvCount(x,n)$
\end{tabular}
\vspace{0.5cm}
$\textsc{Compaction}(x,B,C)$
$$\Config{\alpha, (\Phi \cup \Psi)_C}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi)_C}{\mu}{\rho}{\chi}$$
\begin{tabular}{ll}
where & $\Psi = \{\Created(\Refob x B C), \Released(\Refob x B C), \RecvCount(x,n)\}$ for some $n \in \mathbb N$\\
or & $\Psi = \{\Created(\Refob x B C), \Released(\Refob x B C)\}$ and $\forall n \in \mathbb N,\ \RecvCount(x,n) \not\in \Phi$
\end{tabular}
\vspace{0.5cm}
$\textsc{Snapshot}(A, \Phi)$
$$\Config{\alpha, (\Phi)_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi)_A}{\mu}{\rho}{\chi}$$
\caption{Rules for performing the release protocol.}
\label{rules:release}
\end{figure}
Whenever an actor creates or receives a refob, it adds facts to its knowledge set. To remove these facts when they are no longer needed, actors can perform the \emph{release protocol} defined in Fig.~\ref{rules:release}. None of these rules is present in the standard operational semantics of actors.
The \textsc{SendInfo} event allows a busy actor $A$ to inform $C$ about a refob $\Refob z B C$ that it created using $y$; we say that the $\InfoMsg$ message is sent \emph{along $y$} and \emph{contains $z$}. This event allows $A$ to remove the fact $\CreatedUsing(y,z)$ from its knowledge set. It is crucial that $A$ also increments its $\SentCount$ count for $y$ to indicate an undelivered $\InfoMsg$ message sent to $C$: it allows the snapshot aggregator to detect when there are undelivered $\InfoMsg$ messages, which contain refobs. This message is delivered with the \textsc{Info} event, which adds the fact $\Created(\Refob z B C)$ to $C$'s knowledge set and correspondingly increments $C$'s $\RecvCount$ count for $y$.
When an actor $A$ no longer needs $\Refob x A B$ for sending messages, $A$ can deactivate $x$ with the \textsc{SendRelease} event; we say that the $\ReleaseMsg$ is sent \emph{along $x$}. A precondition of this event is that $A$ has already sent messages to inform $B$ about all the refobs it has created using $x$. In practice, an implementation may defer sending any $\InfoMsg$ or $\ReleaseMsg$ messages to a target $B$ until all $A$'s refobs to $B$ are deactivated. This introduces a trade-off between the number of control messages and the rate of simple garbage detection (Section~\ref{sec:chain-lemma}).
Each $\ReleaseMsg$ message for a refob $x$ includes a count $n$ of the number of messages sent using $x$. This ensures that $\ReleaseMsg(x,n)$ is only delivered after all the preceding messages sent along $x$ have been delivered. Once the \textsc{Release} event can be executed, it adds the fact that $x$ has been released to $B$'s knowledge set. Once $C$ has received both an $\InfoMsg$ and $\ReleaseMsg$ message for a refob $x$, it may remove facts about $x$ from its knowledge set using the \textsc{Compaction} event.
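A minimal sketch of this delivery condition, assuming the default-zero convention for missing counts, is given below; it is illustrative only and not part of the formal semantics.
\begin{verbatim}
# Illustrative check for the Release rule: ReleaseMsg(x, n) may only be
# consumed once the target's RecvCount for x has reached n, so every
# message previously sent along x has already been delivered.
def can_deliver_release(recv_counts, x, n):
    # recv_counts maps refob tokens to the target's RecvCount;
    # a missing entry means RecvCount(x, 0) by the default-zero rule.
    return recv_counts.get(x, 0) == n

assert can_deliver_release({"x": 3}, "x", 3)        # all 3 messages delivered
assert not can_deliver_release({"x": 2}, "x", 3)    # one message in flight
\end{verbatim}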
Finally, the \textsc{Snapshot} event captures an idle actor's knowledge set. For simplicity, we have omitted the process of disseminating snapshots to an aggregator. Although this event does not change the configuration, it allows us to prove properties about snapshot events at different points in time.
\subsection{Composition and Effects}\label{sec:actor-composition}
\begin{figure}
$\textsc{In}(A,R)$
$$\Config{\alpha}{\mu}{\rho}{\chi} \ExternalStep \Config{\alpha}{\mu, \Msg{A}{\AppMsg(\NullToken, R)}}{\rho}{\chi \cup \chi'}$$
\begin{tabular}{ll}
where & $A \in \rho$ and $R = \{ \Refob{x_1}{A}{B_1}, \dots, \Refob{x_n}{A}{B_n} \}$ and $x_1,\dots,x_n$ fresh\\
and & $\{B_1,\dots,B_n\} \cap \dom(\alpha) \subseteq \rho$ and $\chi' = \{B_1,\dots,B_n\} \setminus \dom(\alpha)$ \\
\end{tabular}
\vspace{0.5cm}
$\textsc{Out}(x,B,R)$
$$\Config{\alpha}{\mu,\Msg{B}{\AppMsg(x, R)}}{\rho}{\chi} \ExternalStep \Config{\alpha}{\mu}{\rho \cup \rho'}{\chi}$$
\begin{tabular}{ll}
where $B \in \chi$ and $R = \{ \Refob{x_1}{B}{C_1}, \dots, \Refob{x_n}{B}{C_n} \}$ and $\rho' = \{C_1,\dots,C_n\} \cap \dom(\alpha)$
\end{tabular}
\vspace{0.5cm}
$\textsc{ReleaseOut}(x,B)$
$$\Config{\alpha}{\mu,\Msg{B}{\ReleaseMsg(x,n)}}{\rho}{\chi \cup \{B\}} \ExternalStep \Config{\alpha}{\mu}{\rho}{\chi \cup \{B\}}$$
\vspace{0.2cm}
$\textsc{InfoOut}(y,z,A,B,C)$
$$\Config{\alpha}{\mu,\Msg{C}{\InfoMsg(y,z,B)}}{\rho}{\chi \cup \{C\}} \ExternalStep \Config{\alpha}{\mu}{\rho}{\chi \cup \{C\}}$$
\caption{Rules for interacting with the outside world.}
\label{rules:composition}
\end{figure}
We give rules to dictate how internal actors interact with external actors in
Fig.~\ref{rules:composition}. The \textsc{In} and \textsc{Out} rules correspond to similar rules in the standard operational semantics of actors.
Since internal garbage collection protocols are not exposed to the outside world, all $\ReleaseMsg$ and $\InfoMsg$ messages sent to external actors are simply dropped by the \textsc{ReleaseOut} and \textsc{InfoOut} events. Likewise, only $\AppMsg$ messages can enter the system. Since we cannot statically determine when a receptionist's address has been forgotten by all external actors, we assume that receptionists are never terminated. The resulting ``black box'' behavior of our system is the same as the actor systems in \cite{aghaFoundationActorComputation1997}. Hence, in principle DRL can be gradually integrated into a codebase by creating a subsystem for garbage-collected actors.
The \textsc{In} event allows an external actor to send an application-level message to a receptionist $A$ containing a set of refobs $R$, all owned by $A$. Since external actors do not use refobs, the message is sent using the special $\NullToken$ token. All targets in $R$ that are not internal actors are added to the set of external actors.
The \textsc{Out} event delivers an application-level message to an external actor with a set of refobs $R$. All internal actors referenced in $R$ become receptionists because their addresses have been exposed to the outside world.
\subsection{Garbage}\label{sec:garbage-defn}
We can now operationally characterize actor garbage in our model. An actor $A$ can \emph{potentially receive a message} in $\kappa$ if there is a sequence of events (possibly of length zero) leading from $\kappa$ to a configuration $\kappa'$ in which $A$ has an undelivered message. We say that an actor is \emph{terminated} if it is idle and cannot potentially receive a message.
An actor is \emph{blocked} if it satisfies three conditions: (1) it is idle, (2) it is not a receptionist, and (3) it has no undelivered messages; otherwise, it is \emph{unblocked}. We define \emph{potential reachability} as the reflexive transitive closure of the potential acquaintance relation. That is, $A_1$ can potentially reach $A_n$ if and only if there is a (possibly empty) sequence of unreleased refobs $(\Refob {x_1} {A_1} {A_2}), \dots, (\Refob {x_{n-1}} {A_{n-1}} {A_n})$; recall that a refob $\Refob x A B$ is unreleased if its target $B$ has not yet received a $\ReleaseMsg$ message for $x$.
Notice that an actor can potentially receive a message if and only if it is potentially reachable from an unblocked actor. Hence an actor is terminated if and only if it is only potentially reachable by blocked actors. A special case of this is \emph{simple garbage}, in which an actor is blocked and has no potential inverse acquaintances besides itself.
We say that a set of actors $S$ is \emph{closed} (with respect to the potential inverse acquaintance relation) if, whenever $B \in S$ and there is an unreleased refob $\Refob x A B$, then also $A \in S$. Notice that the closure of a set of terminated actors is also a set of terminated actors.
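Although our termination detection works from local snapshots, the operational characterization above can be stated as a simple reachability computation; the following Python sketch (with illustrative inputs of our own choosing) makes this concrete.
\begin{verbatim}
# Illustrative sketch: an idle actor is terminated iff it is not potentially
# reachable from any unblocked actor via unreleased refobs.
def terminated_actors(actors, unblocked, refobs):
    # refobs maps each owner to the targets of its unreleased refobs.
    reachable = set(unblocked)
    frontier = list(unblocked)
    while frontier:
        a = frontier.pop()
        for b in refobs.get(a, ()):
            if b not in reachable:
                reachable.add(b)
                frontier.append(b)
    return set(actors) - reachable   # only reachable from blocked actors

# Example: B and C are blocked and only reachable from each other, so both
# are terminated even though B still holds an unreleased refob to C.
assert terminated_actors({"A", "B", "C"}, {"A"}, {"B": ["C"]}) == {"B", "C"}
\end{verbatim}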
\section{Appendix}
\subsection{Basic Properties}
\begin{lemma}\label{lem:release-is-final}
If $B$ has undelivered messages along $\Refob x A B$, then $x$ is an unreleased refob.
\end{lemma}
\begin{proof}
There are three types of messages: $\AppMsg, \InfoMsg,$ and $\ReleaseMsg$. All three messages can only be sent when $x$ is active. Moreover, the \textsc{Release} rule ensures that they must all be delivered before $x$ can be released.
\end{proof}
\begin{lemma}\label{lem:facts-remain-until-cancelled}
$\ $
\begin{itemize}
\item Once $\CreatedUsing(\Refob y A C, \Refob z B C)$ is added to $A$'s knowledge set, it will not be removed until after $A$ has sent an $\InfoMsg$ message containing $z$ to $C$.
\item Once $\Created(\Refob z B C)$ is added to $C$'s knowledge set, it will not be removed until after $C$ has received the (unique) $\ReleaseMsg$ message along $z$.
\item Once $\Released(\Refob z B C)$ is added to $C$'s knowledge set, it will not be removed until after $C$ has received the (unique) $\InfoMsg$ message containing $z$.
\end{itemize}
\end{lemma}
\begin{proof}
Immediate from the transition rules.
\end{proof}
\begin{lemma}\label{lem:msg-counts}
Consider a refob $\Refob x A B$. Let $t_1, t_2$ be times such that $x$ has not yet been deactivated at $t_1$ and $x$ has not yet been released at $t_2$. In particular, $t_1$ and $t_2$ may be before the creation time of $x$.
Suppose that $\alpha_{t_1}(A) \vdash \SentCount(x,n)$ and $\alpha_{t_2}(B) \vdash \RecvCount(x,m)$ and, if $t_1 < t_2$, that $A$ does not send any messages along $x$ during the interval $[t_1,t_2]$. Then the difference $\max(n - m,0)$ is the number of messages sent along $x$ before $t_1$ that were not received before $t_2$.
\end{lemma}
\begin{proof}
Since $x$ is not deactivated at time $t_1$ and unreleased at time $t_2$, the message counts were never reset by the \textsc{SendRelease} or \textsc{Compaction} rules. Hence $n$ is the number of messages $A$ sent along $x$ before $t_1$ and $m$ is the number of messages $B$ received along $x$ before $t_2$. Hence $\max(n - m, 0)$ is the number of messages sent before $t_1$ and \emph{not} received before $t_2$.
\end{proof}
\subsection{Chain Lemma}
\ChainLemma*
\begin{proof}
We prove that the invariant holds in the initial configuration and at all subsequent times by induction on events $\kappa \Step e \kappa'$, omitting events that do not affect chains. Let $\kappa = \Config{\alpha}{\mu}{\rho}{\chi}$ and $\kappa' = \Config{\alpha'}{\mu'}{\rho'}{\chi'}$.
In the initial configuration, the only refob to an internal actor is $\Refob y A A$. Since $A$ knows $\Created(\Refob{y}{A}{A})$, the invariant is satisfied.
In the cases below, let $x,y,z,A,B,C$ be free variables, not referencing the variables used in the statement of the lemma.
\begin{itemize}
\item $\textsc{Spawn}(x,A,B)$ creates a new unreleased refob $\Refob x A B$, which satisfies the invariant because $\alpha'(B) \vdash \Created(\Refob x A B)$.
\item $\textsc{Send}(x,\vec y, \vec z, A,B,\vec C)$ creates a set of refobs $R$. Let $(\Refob z B C) \in R$, created using $\Refob y A C$.
If $C$ is already in the root set, then the invariant is trivially preserved. Otherwise, there must be a chain $(\Refob{x_1}{A_1}{C}), \dots, (\Refob{x_n}{A_n}{C})$ where $x_n = y$ and $A_n = A$. Then $x_1,\dots,x_n,z$ is a chain in $\kappa'$, since $\alpha'(A_n) \vdash \CreatedUsing(x_n,z)$.
If $B$ is an internal actor, then this shows that every unreleased refob to $C$ has a chain in $\kappa'$. Otherwise, $C$ is in the root set in $\kappa'$. To see that the invariant still holds, notice that $\Refob z B C$ is a witness of the desired chain.
\item $\textsc{SendInfo}(y,z,A,B,C)$ removes the $\CreatedUsing(y,z)$ fact but also sends $\InfoMsg(y,z,B)$, so chains are unaffected.
\item $\textsc{Info}(y,z,B,C)$ delivers $\InfoMsg(y,z,B)$ to $C$ and adds $\Created(\Refob z B C)$ to its knowledge set.
Suppose $\Refob z B C$ is part of a chain $(\Refob{x_1}{A_1}{C}), \dots, (\Refob{x_n}{A_n}{C})$, i.e. $x_i = y$ and $x_{i+1} = z$ and $A_{i+1} = B$ for some $i < n$. Since $\alpha'(C) \vdash \Created(\Refob{x_{i+1}}{A_{i+1}}{C})$, we still have a chain $x_{i+1},\dots,x_n$ in $\kappa'$.
\item $\textsc{Release}(x,A,B)$ releases the refob $\Refob x A B$. Since external actors never release their refobs, both $A$ and $B$ must be internal actors.
Suppose the released refob was part of a chain $(\Refob{x_1}{A_1}{B}), \dots, (\Refob{x_n}{A_n}{B})$, i.e. $x_i = x$ and $A_i = A$ for some $i < n$. We will show that $x_{i+1},\dots,x_n$ is a chain in $\kappa'$.
Before performing $\textsc{SendRelease}(x_i,A_i,B)$, $A_i$ must have performed the $\textsc{SendInfo}(x_i,x_{i+1},\allowbreak A_i,A_{i+1},B)$ event. Since the $\InfoMsg$ message was sent along $x_i$, Lemma~\ref{lem:release-is-final} ensures that the message must have been delivered before the present \textsc{Release} event. Furthermore, since $x_{i+1}$ is an unreleased refob in $\kappa'$, Lemma~\ref{lem:facts-remain-until-cancelled} ensures that $\alpha'(B) \vdash \Created(\Refob{x_{i+1}}{A_{i+1}}{B})$.
\item $\textsc{In}(A,R)$ adds a message from an external actor to the internal actor $A$. This event can only create new refobs that point to receptionists, so it preserves the invariant.
\item $\textsc{Out}(x,B,R)$ emits a message $\AppMsg(x,R)$ to the external actor $B$. Since all targets in $R$ are already in the root set, the invariant is preserved.
\end{itemize}
\end{proof}
\subsection{Termination Detection}
Given a set of snapshots $Q$ taken before some time $t_f$, we write $Q_t$ to denote those snapshots in $Q$ that were taken before time $t < t_f$. If $\Phi_A \in Q$, we denote the time of $A$'s snapshot as $t_A$.
\Completeness*
Call this set of snapshots $Q$. First, we prove the following lemma.
\begin{lemma}\label{lem:completeness-helper}
If $Q \vdash \Unreleased(\Refob x A B)$ and $B \in Q$, then $x$ is unreleased at $t_B$.
\end{lemma}
\begin{proof}
By definition, $Q \vdash \Unreleased(\Refob x A B)$ only if $Q \vdash \Created(x) \land \lnot \Released(x)$. Since $Q \not\vdash \Released(x)$, we must also have $\Phi_B \not\vdash \Released(x)$. For $Q \vdash \Created(x)$, there are two cases.
Case 1: $\Phi_B \vdash \Created(x)$. Since $\Phi_B \not\vdash \Released(x)$, \cref{lem:facts-remain-until-cancelled} implies that $x$ is unreleased at time $t_B$.
Case 2: For some $C \in Q$ and some $y$, $\Phi_C \vdash \CreatedUsing(y,x)$. Since $C$ performed its final action before taking its snapshot, this implies that $C$ will never send the $\InfoMsg$ message containing $x$ to $B$.
Suppose then for a contradiction that $x$ is released at time $t_B$. Since $\Phi_B \not\vdash \Released(x)$, \cref{lem:facts-remain-until-cancelled} implies that $B$ received an $\InfoMsg$ message containing $x$ before its snapshot. But this is impossible because $C$ never sends this message.
\end{proof}
\begin{proof}[Proof (\cref{lem:terminated-is-complete})]
By strong induction on time $t$, we show that $Q$ is closed and that every actor appears blocked.
\textbf{Induction hypothesis:} For all times $t' < t$, if $B \in Q_{t'}$ and $Q \vdash \Unreleased(\Refob x A B)$, then $A \in Q$, $Q \vdash \Activated(x)$, and $Q \vdash \SentCount(x,n)$ and $Q \vdash \RecvCount(x,n)$ for some $n$.
Since $Q_0 = \emptyset$, the induction hypothesis holds trivially in the initial configuration.
Now assume the induction hypothesis. Suppose that $B \in Q$ takes its snapshot at time $t$ with $Q \vdash \Unreleased(\Refob x A B)$, which implies $Q \vdash \Created(x) \land \lnot\Released(x)$.
$Q \vdash \Created(x)$ implies that $x$ was created before $t_f$. \cref{lem:completeness-helper} implies that $x$ is also unreleased at time $t_f$, since $B$ cannot perform a \textsc{Release} event after its final action. Hence $A$ is in the closure of $\{B\}$ at time $t_f$, so $A \in Q$.
Now suppose $\Phi_A \not\vdash \Activated(x)$. Then either $x$ will be activated after $t_A$ or $x$ was deactivated before $t_A$. The former is impossible because $A$ would need to become unblocked to receive $x$. Since $x$ is unreleased at time $t_f$ and $t_A < t_f$, the latter implies that there is an undelivered $\ReleaseMsg$ message for $x$ at time $t_f$. But this is impossible as well, since $B$ is blocked at $t_f$.
Finally, let $n$ be such that $\Phi_B \vdash \RecvCount(x,n)$; we must show that $\Phi_A \vdash \SentCount(x,n)$. By the above arguments, $x$ is active at time $t_A$ and unreleased at time $t_B$. Since both actors performed their final action before their snapshots, all messages sent before $t_A$ must have been delivered before $t_B$. By Lemma~\ref{lem:msg-counts}, this implies $\Phi_A \vdash \SentCount(x,n)$.
\end{proof}
We now prove the safety theorem, which states that if $Q$ is a finalized set of snapshots, then the corresponding actors of $Q$ are terminated. We do this by showing that at each time $t$, all actors in $Q_t$ are blocked and all of their potential inverse acquaintances are in $Q$.
Consider the first actor $B$ in $Q$ to take a snapshot. We show, using the Chain Lemma, that the closure of this actor is in $Q$. Then, since all potential inverse acquaintances of $B$ take snapshots strictly after $t_B$, it is impossible for $B$ to have any undelivered messages without appearing unblocked.
For every subsequent actor $B$ to take a snapshot, we make a similar argument with an additional step: If $B$ has any potential inverse acquaintances in $Q_{t_B}$, then they could not have sent $B$ a message without first becoming unblocked.
\Safety*
\begin{proof}
Proof by induction on events. The induction hypothesis consists of two clauses that must both be satisfied at all times $t \le t_f$.
\begin{itemize}
\item \textbf{IH 1:} If $B \in Q_t$ and $\Refob x A B$ is unreleased, then $Q \vdash \Unreleased(x)$.
\item \textbf{IH 2:} The actors of $Q_t$ are all blocked.
\end{itemize}
\paragraph*{Initial configuration} Since $Q_0 = \emptyset$, the invariant trivially holds.
\paragraph*{$\textsc{Snapshot}(B, \Phi_B)$}
Suppose \(B \in Q\) takes a snapshot at time \(t\). We show that if $\Refob x A B$ is unreleased at time $t$, then $Q \vdash \Unreleased(x)$ and there are no undelivered messages along $x$ from $A$ to $B$. We do this with the help of two lemmas.
\begin{lemma}\label{lem:complete-ref}
If $Q \vdash \Unreleased(\Refob x A B)$, then $x$ is unreleased at time $t$ and there are no undelivered messages along $x$ at time $t$. Moreover, if $t_A > t$, then there are no undelivered messages along $x$ throughout the interval $[t,t_A]$.
\end{lemma}
\begin{proof}[Proof (Lemma)]
Since $Q$ is closed, we have $A \in Q$ and $\Phi_A \vdash \Activated(x)$. Since $B$ appears blocked, we must have $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$.
Suppose $t_A > t$. Since $\Phi_A \vdash \Activated(x)$, $x$ is not deactivated and not released at $t_A$ or $t$. Hence, by Lemma~\ref{lem:msg-counts}, every message sent along $x$ before $t_A$ was received before $t$. Since message sends precede receipts, each of those messages was sent before $t$. Hence there are no undelivered messages along $x$ throughout $[t,t_A]$.
Now suppose $t_A < t$. Since $\Phi_A \vdash \Activated(x)$, $x$ is not deactivated and not released at $t_A$. By IH 2, $A$ was blocked throughout the interval $[t_A,t]$, so it could not have sent a $\ReleaseMsg$ message. Hence $x$ is not released at $t$. By Lemma~\ref{lem:msg-counts}, all messages sent along $x$ before $t_A$ must have been delivered before $t$. Hence, there are no undelivered messages along $x$ at time $t$.
\end{proof}
\begin{lemma}\label{lem:complete-chains}
Let $\Refob{x_1}{A_1}{B}, \dots, \Refob{x_n}{A_n}{B}$ be a chain to $\Refob x A B$ at time $t$. Then $Q \vdash \Unreleased(x)$.
\end{lemma}
\begin{proof}[Proof (Lemma)]
Since all refobs in a chain are unreleased, we know $\forall i \le n,\ \Phi_B \not\vdash \Released(x_i)$ and so $Q \not\vdash \Released(x_i)$. It therefore suffices to prove, by induction on the length of the chain, that $\forall i \le n,\ Q \vdash \Created(x_i)$.
\textbf{Base case:} By the definition of a chain, $\alpha_t(B) \vdash \Created(x_1)$, so $\Created(x_1) \in \Phi_B$.
\textbf{Induction step:} Assume $Q \vdash \Unreleased(x_i)$, which implies $A_i \in Q$. Let $t_i$ be the time of $A_i$'s snapshot.
By the definition of a chain, either the message $\Msg{B}{\InfoMsg(x_i,x_{i+1},A_{i+1})}$ is in transit at time $t$, or $\alpha_t(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$. But the first case is impossible by Lemma~\ref{lem:complete-ref}, so we only need to consider the latter.
Suppose $t_i > t$. Lemma~\ref{lem:complete-ref} implies that $A_i$ cannot perform the $\textsc{SendInfo}(x_i,x_{i+1},A_{i+1},B)$ event during $[t,t_i]$. Hence $\alpha_{t_i}(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$, so $Q \vdash \Created(x_{i+1})$.
Now suppose $t_i < t$. By IH 2, $A_i$ must have been blocked throughout the interval $[t_i,t]$. Hence $A_i$ could not have created any refobs during this interval, so $x_{i+1}$ must have been created before $t_i$. This implies $\alpha_{t_i}(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$ and therefore $Q \vdash \Created(x_{i+1})$.
\end{proof}
Lemma~\ref{lem:complete-chains} implies that $B$ cannot be in the root set. If it were, then by the Chain Lemma there would be a refob $\Refob y C B$ with a chain where $C$ is an external actor. Since $Q \vdash \Unreleased(y)$, there would need to be a snapshot from $C$ in $Q$ -- but external actors do not take snapshots, so this is impossible.
Since $B$ is not in the root set, there must be a chain to every unreleased refob $\Refob x A B$. By Lemma~\ref{lem:complete-chains}, $Q \vdash \Unreleased(x)$. By Lemma~\ref{lem:complete-ref}, there are no undelivered messages to $B$ along $x$ at time $t$. Since $B$ can only have undelivered messages along unreleased refobs (Lemma~\ref{lem:release-is-final}), the actor is indeed blocked.
\paragraph*{$\textsc{Send}(x,\vec y, \vec z, A,B,\vec C)$}
In order to maintain IH 2, we must show that if $B \in Q_t$ then this event cannot occur. So suppose $B \in Q_t$. By IH 1, we must have $Q \vdash \Unreleased(\Refob x A B)$, so $A \in Q$. By IH 2, we moreover have $A \not\in Q_t$ -- otherwise $A$ would be blocked and unable to send this message. Since $B$ appears blocked in $Q$, we must have $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$. Since $x$ is not deactivated at $t_A$ and unreleased at $t_B$, \cref{lem:msg-counts} implies that every message sent before $t_A$ is received before $t_B$. Hence $A$ cannot send this message to $B$ because $t_A > t > t_B$.
In order to maintain IH 1, suppose that one of the refobs sent to $B$ in this step is $\Refob z B C$, where $C \in Q_t$. Then in the next configuration, $\CreatedUsing(y,z)$ occurs in $A$'s knowledge set. By the same argument as above, $A \in Q \setminus Q_t$ and $\Phi_A \vdash \SentCount(y,n)$ and $\Phi_C \vdash \RecvCount(y,n)$ for some $n$. Hence $A$ cannot perform the $\textsc{SendInfo}(y,z,A,B,C)$ event before $t_A$, so $\Phi_A \vdash \CreatedUsing(y,z)$ and $Q \vdash \Created(z)$.
\paragraph*{$\textsc{SendInfo}(y,z,A,B,C)$}
By the same argument as above, $A \not\in Q_t$ cannot send an $\InfoMsg$ message to $C \in Q_t$ without violating message counts, so IH 2 is preserved.
\paragraph*{$\textsc{SendRelease}(x,A,B)$}
Suppose that $A \not\in Q_t$ and $B \in Q_t$. By IH 1, $\Refob x A B$ is unreleased at time $t$. Since $Q$ is finalized, $\Phi_A \vdash \Activated(x)$. Hence $A$ cannot deactivate $x$ and IH 2 is preserved.
\paragraph*{$\textsc{In}(A,R)$}
Since every potential inverse acquaintance of an actor in $Q_t$ is also in $Q$, none of the actors in $Q_t$ is a receptionist. Hence this rule does not affect the invariants.
\paragraph*{$\textsc{Out}(x,B,R)$}
Suppose $(\Refob y B C) \in R$ where $C \in Q_t$. Then $y$ is unreleased and $Q \vdash \Unreleased(y)$ and $B \in Q$. But this is impossible because external actors do not take snapshots.
\end{proof}
\section{Introduction}
The bent-core liquid crystals (BLCs) are a novel class of liquid crystal (LC) mesogens that manifest various unique and exciting properties such as chirality, ferroelectricity and biaxiality \cite{Photinos_biaxial_JMC,Takezoe_BLC_JJAP,Jakli_BLC_LCR,Francescangeli_cybo_SM,Punjani_Golam,Keith_NBLC_SM}. They are known to form several exotic mesophases such as the twist-bend nematic (N$_{tb}$) phase, the blue phase (BP) and the banana (B1-B7) phases \cite{Takezoe_BLC_JJAP,Cestari_Ntb_PRE,V_Borshch_Ntb_NatCom,Jakli_BLC_doped_JMC}. The nematic (N) phase of BLCs itself manifests a few of the aforementioned distinct features, such as ferroelectric response, fast switching and macroscopic biaxiality \cite{Shankar_Cybo_CPC,Shankar_Cybo_AFM,Ghosh_BLC_Ferro_JMCC,Francescangeli_Ferro_AFM,Photinos_biaxial_JMC,Francescangeli_cybo_SM}. The main reason behind these extraordinary features is the locally polar cybotactic clusters formed by BLC molecules in their N phase \cite{Francescangeli_cybo_SM,Punjani_Golam,Keith_NBLC_SM, Shankar_Cybo_CPC,Shankar_Cybo_AFM,Ghosh_BLC_Ferro_JMCC,Francescangeli_Ferro_AFM}. Due to bent molecular shape and the lack of translational symmetry, the BLC molecules in their N phase experience steric hindrance. This causes stacking of the BLC molecules in smectic layers (clusters) \cite{Scaramuzza_BLC_JAP,Jakli_Rheological_SM}. These stacks of molecules are termed as `cybotactic' clusters because they are more ordered compared to the surrounding molecules. The clusters and the other BLC molecules together constitute the macroscopic N phase \cite{Francescangeli_cybo_SM}. Recent reports, aided by various experimental techniques, have established the existence of cybotactic clusters in the nematic, smectic and even in the isotropic phases \cite{Kashima_PolarBLC_JMCC,Alaasar_Cluster_JMCC,Ghosh_BLC_Ferro_JMCC,Jakli_BLC_mixture_PRE,Domenici_NMR_SM,Goodby_Unusual_SM}. Although studied extensively, the origins of cluster formation and the effects of external factors (e.g. nanoparticle doping, electric field) on these clusters remain an open problem. Further studies are required for the manipulation and successful tailoring of cybotactic clusters for applications in science and technology, including novel BLC-driven devices. \\
Suspension of nanoparticles (NPs) in the LC matrix to improve or to selectively modify the physical properties of LCs is a widely used technique in today’s liquid crystal science. Studies have shown that the dispersion of nanoparticles in LCs can improve the electro-optic properties, modify the elastic anisotropy and the dielectric constants, and reduce the transition temperatures \cite{Takatoh_LCNP_JJAP, NAClark_LCCNT_APL, WLee_TNLC_APL, Ghosh_BLCCNT_JML, JitendraK_QDBLC_JML}. The incorporation of NPs can also affect the orientation of LCs and induce a homeotropic alignment \cite{Hegmann_NPLC_LC}. Varying the size and shape of the dopant NPs also has a profound effect on the physical properties of LCs \cite{Orlandi_LCNP_PCCP, Mirzaei_LCQD_composite_JMC, Kinkead_QDLC_JMC}. Recently, a new class of semiconductor NPs, called quantum dots (QDs), has been discovered. Incorporation of these QDs in the LC matrix may also affect or alter the physical properties of LCs, such as a reduction in the dielectric anisotropy, faster response times, changes in the phase transition temperatures and altered boundary conditions \cite{Mirzaei_LCQD_composite_JMC, Kinkead_QDLC_JMC,Mirzaei_QDLC_dopant_JMC,Zhang_LCQD_JJAP,Urbanski_NPLC_Bulk_CPC,JitendraK_QDBLC_JML}. Changes in the dielectric anisotropy ($\Delta\epsilon$) provide an indirect measure of changes in the order parameter ($S$), because $\Delta\epsilon \propto S$ \cite{JitendraK_QDBLC_JML, maier_orderparameter}. The QDs are usually capped with functionalized ligands that prevent aggregation. In particular, this makes QDs good candidates for stabilising dilute suspensions in doping or dispersion LC experiments. To date, there has been work on QDs dispersed in calamitic nematic LCs (NLCs), while their effect on bent-core NLCs remains relatively unexplored \cite{JitendraK_QDBLC_JML}. In particular, little is known about the effect of QDs, or doping in general, on the cybotactic clusters in bent-core NLCs, and in the absence of systematic experimental and theoretical studies along these lines, doped bent-core NLC systems cannot meet their full potential. \\
We study a dilute homogeneous suspension of a QD-doped thermotropic BLC (details in the next section), confined in a planar cell with fixed boundary conditions on both cell surfaces. In particular, the undoped counterpart exhibits cybotactic clusters. Our primary investigations concern comparisons between the doped and undoped systems, which give quantitative and qualitative insight into the effects of doping, the interplay between doping and cluster formation, and how these effects can be tailored by temperature and external stimuli. This paper builds on our first paper \cite{patranabish2019one} wherein we focussed on a one-dimensional theoretical study of the N phase of a BLC, confined in a planar cell, within a phenomenological Landau-de Gennes (LdG) framework inspired by previous insightful modelling in \cite{madhusudana2017two}. This model is based on the premise that the N phase of BLC is characterized by two order parameters: $S_g$ that measures the ordering of the ground-state molecules (outside the clusters) and $S_c$ that measures the ordering within the smectic-like cybotactic clusters, with coupling between the two effects captured by an empirical parameter $\gamma$. In \cite{patranabish2019one}, we theoretically studied the effects of spatial inhomogeneities, confinement and the coupling parameter, $\gamma$, on $S_g$ and $S_c$. Little is known about the material-dependent values of $\gamma$ or indeed how it could be experimentally controlled. Our theoretical studies showed that larger values of $\gamma$ substantially increase the interior values of $S_g$ and $S_c$, i.e. $\gamma$ promotes order both outside and within the clusters of the N phase of the BLC, the effects being more pronounced at lower temperatures. However, the coupling also enhances the values of $S_g$ for temperatures above the nematic-isotropic transition temperature, i.e. the bent-core NLC can exhibit nematic order at temperatures where the calamitic N phase does not. The model in \cite{patranabish2019one} is simplified in many ways, but nevertheless sheds qualitative insight into the powerful prospects offered by cybotactic clusters in BLCs and how they can be used to manipulate nematic order and phase transitions for tailor-made applications.\\
In this paper, we report a combined experimental and theoretical analysis of a QDs dispersed bent-core nematic LC 14-2M-CH$_3$, in the dilute regime. The dilute regime applies to systems of nano-scale QDs (much smaller than the system size) with a low concentration of QDs, and the QDs are uniformly dispersed without any aggregation effects. We perform optical texture observations, dielectric measurements, optical birefringence measurements and the orientational order parameter calculations on the pristine BLC and its QDs-dispersed counterpart. The N phase of 14-2M-CH$_3$ contains cybotactic clusters, as already reported in our earlier work \cite{Patranabish_JML}. We find that the N phase of the QDs-dispersed counterpart also contains cybotactic clusters, albeit with modified properties. We report a number of interesting experimental results for the QDs-dispersed BLC system - the optical birefringence ($\Delta n$) is lowered and the macroscopic order parameter ($S$) is reduced compared to the undoped counterpart for a given temperature, the activation energy ($E_a$) increases compared to the undoped counterpart and based on the measurements of the relaxation frequencies ($f_R$) and activation energies, we deduce that the size of the cybotactic clusters decreases with QDs doping. We complement our experiments with a theoretical LdG-type model for the N phase of the QD-doped BLC, using the framework developed in \cite{canevari2019design}. This framework is not specific to QDs or to BLCs but to generic dilute doped LC systems and effectively captures the effects of the homogeneously suspended inclusions (in this case QDs) in terms of an additional contribution to the free energy. Hence, we apply this approach to the LdG free energy of a BLC system proposed in \cite{patranabish2019one} and \cite{madhusudana2017two} and qualitatively capture the effects of the QDs by means of suitable novel additional energetic terms. These additional terms, in principle, depend on the properties of the QDs e.g. size, shape, anchoring and preferred order etc. We introduce a weighted mean scalar order parameter, $S_m$, the theoretical analogue of the experimentally measured scalar order parameter. This simplistic approach does capture the doping-induced reduction in the mean order parameter $S_m$, which in turn qualitatively explains the reduction in birefringence, dielectric anisotropy. We present our experimental results in three parts below, followed by the mathematical model, numerical results and perspectives for future work.
\section{Experimental}
\begin{table}[b]
\caption{Phase sequence and transition temperatures observed in this study (using POM) during slow cooling.}
\begin{ruledtabular}
\begin{tabular}{lc}
Compound & \makecell{Phase sequence and transition \\ temperatures ($^\circ$C)}\\
\hline
14-2M-CH$_3$ & Iso 134 N$_{Cyb}$ 106 Cryst. \\
14-2M-CH$_3$ + 0.5 wt\% QDs & Iso 134 N$_{Cyb}$ 104 Cryst. \\
\end{tabular}
\end{ruledtabular}
\end{table}
A thermotropic bent-core nematic liquid crystal (LC) 14-2M-CH$_3$ was used for the experimental study and also as the host for the studied LC nanocomposite. The LC material was obtained from Prof. N.V.S. Rao's group at the Department of Chemistry, Assam University, Silchar, Assam, India. The molecular formula of 14-2M-CH$_3$, synthetic scheme details, etc., are available in our earlier paper \cite{Patranabish_JML}. The CdSe/ZnS core-shell type quantum dots (QDs) of diameter 5.6 nm (core diameter: 2.8 nm + shell thickness: 1.4 nm) were procured from Sigma-Aldrich, Merck (USA) for preparing the LC nanocomposites. The spherical QDs were stabilized by encapsulation with octadecylamine ligands; they have absorption maxima in the range of 510 to 540 nm and emission wavelengths in the range of 530 to 540 nm, as specified by the manufacturer. The sequence of experimental steps performed is as follows: preparation of the QDs dispersed LC nanocomposite, optical texture observation and evaluation of transition temperatures, orientational order parameter determination $via$ optical birefringence measurements, and dielectric characterization. All the experimental measurements were carried out while slowly cooling the sample from the isotropic liquid. \\
\begin{figure}[t]
\centering
\includegraphics[width = 0.9 \linewidth]{"Fig_1_LCQD_colloid".pdf}
\caption{Visibly homogeneous solutions of (a) 14-2M-CH$_3$ (b) CdSe/ZnS QDs and (c) nanocomposite (14-2M-CH$_3$ + 0.5 wt\% CdSe/ZnS QD) in Chloroform (CHCl$_3$).}
\label{Fig1}
\end{figure}
To prepare the LC nanocomposite, CdSe/ZnS QDs were taken at 0.5 wt\% concentration and mixed with the LC compound 14-2M-CH$_3$. To obtain a homogeneous dispersion of the quantum dots in the LC matrix, chloroform was added to the mixture, and the mixture was ultrasonicated until a visibly homogeneous dispersion was achieved (Figure 1). The mixture was kept at $\sim$ 60 $^\circ$C for 2-3 hours and it was then left overnight at room temperature for the slow evaporation of chloroform \cite{pradeepkumar}. Once the chloroform had completely evaporated, the 0.5 wt\% QDs dispersed LC nanocomposites were obtained. They were checked visually through a polarizing optical microscope several times, but no aggregation of QDs was noticed.\\
\begin{figure*}
\centering
\includegraphics[width = \linewidth]{"Fig_2_textures".pdf}
\caption{Birefringent textural colour variation with temperature of (a-e) the bent-core LC 14-2M-CH$_3$ and (f-j) the 0.5wt\% CdSe/ZnS QDs dispersed 14-2M-CH$_3$, during cooling, respectively. In each image, \textbf{\textit{r}} indicates the rubbing direction and the scale-bar indicates 100 $\mu \mathrm{m}$. The periodic white spots in the background of the images are features of the LC cell (Instec Inc., USA) caused by PI printing fabric in the production line.}
\label{Fig2}
\end{figure*}
Indium Tin Oxide (ITO) coated 5 $\mu$m planar (homogeneous alignment) cells (Instec Inc., USA) were used for the experiments. Two different cells, of this type, were used for the pristine LC and the LC nanocomposite, respectively. The LCs were filled in the cells $via$ capillary action around 10 $^\circ$C above the clearing temperature. During measurements, the cells were kept inside an Instec HCS302 hot-stage and the temperature was maintained using an Instec MK1000 temperature controller with an accuracy of $\pm$ 0.01 $^{\circ}$C. The liquid crystalline textures were recorded using an OLYMPUS BX-51P polarizing optical microscope (POM) attached to a computer, with the sample placed between two crossed polarizers. \\
The phase behaviour and transition temperatures of the LC 14-2M-CH$_3$ and its nanocomposite were determined using the POM while slowly cooling from the isotropic liquid (0.5 $^\circ$C/min) \cite{JitendraK_QDBLC_JML, pradeepkumar}. The transition temperatures of the pristine bent-core LC 14-2M-CH$_3$ were also determined previously using differential scanning calorimetry (DSC) at a scan rate of 5 $^\circ$C/min (reported elsewhere) \cite{Patranabish_JML}. The transition temperatures of the pristine LC and its nanocomposite, as obtained from the POM observations, are summarized in Table 1. The dielectric measurements were carried out in the frequency range of 20 Hz - 2 MHz using an Agilent E4980A precision LCR meter. The measuring voltage was kept at $V_{rms} = 0.2$ V. For transmission dependent birefringence measurements and the related order parameter calculations, the sample was placed between two crossed Glan-Thompson polarizers (GTH10M, Thorlabs, Inc.) and perpendicularly illuminated with a He-Ne Laser ($\sim$ 633 nm) \cite{susantaPRE, susantaJMC}. The rubbing direction $\vec{r}$ (\textit{i.e.} the LC director $\widehat{n}$) of the planar LC cell was kept at 45$^\circ$ with respect to the polarizer (P)/analyzer (A) pass-axes. Transmitted power at the output end was measured using a Gentec PH100-Si-HA-OD1 photo-detector attached to a Gentec Maestro power meter. \\
\section{Result and discussion}
\subsection{Polarizing optical microscopy}
The LC material is introduced in a 5 $\mu$m planar LC cell $via$ capillary action around 10 $^\circ$C above the isotropic-nematic transition temperature, and the textures were recorded between crossed polarizers. The textures recorded for the LC 14-2M-CH$_3$ and its 0.5 wt$\%$ QDs dispersed nanocomposite, during slow cooling from the isotropic liquid, are shown in Figure 2. The textures of the LC nanocomposite exhibited fairly homogeneous colours (and hence, alignment) similar to that of the pristine LC. This indicates a good, homogeneous dispersion of QDs in the LC matrix without any aggregation \cite{JitendraK_QDBLC_JML}. Close to the isotropic-nematic (Iso-N) transition temperature, we observe a sharp colour change owing to the development of nematic order in these systems (see Figures 2(a) and 2(f)). As the temperature is further lowered, uniform marble textures, typical of the nematic phase, appear with colours varying with temperature \cite{Patranabish_JML}. The isotropic-nematic transition temperature remains nearly unaltered after the incorporation of QDs. In the N phase, the emergent colours change with decreasing temperature, which indicates that the birefringence ($\Delta n$) also changes with temperature. A qualitative measurement of this change in birefringence can be made by matching the colours with the Michel-Levy chart for a given thickness \cite{Michel_Levy}. We deduce that $\Delta n$ increases with decreasing temperature from this mapping. Also, the change in $\Delta n$ with temperature is found to be quite high ($\sim$ 0.06). This is suggestive of highly ordered microstructures in the N phase of the BLC compound \cite{Keith_NBLC_SM,Nafees_RSCAdv_2015}. Also, from Figure 2 we can clearly see that the temperature dependent textural colour sequence changes/shifts after incorporation of the QDs. With the help of Michel-Levy chart, we qualitatively deduce, that the $\Delta n$ values, for a fixed temperature, are lowered on incorporation of the QDs, implying a reduction in the corresponding nematic order parameter $S$, since, $\Delta n \propto S$ \cite{pradeepkumar}. Experimentally, $\Delta n$ measurements and the associated order parameter ($S$) calculations have also been performed and they are discussed in detail in the sub-section III-B.
\subsection{Optical birefringence measurements and orientational order parameter calculations}
The birefringence ($\Delta n$) measurements of the LC sample and its nanocomposite, as a function of temperature, have been performed with the optical transmission technique. The planar LC sample is perpendicularly illuminated with a He-Ne laser ($\lambda$ $\sim$ 633 nm) and placed between two crossed Glan-Thompson polarizers such that the optic axis makes an angle $\varphi = 45^{\circ}$ with the polarizer/analyzer pass axis. The power at the output end is measured with a photodetector. The transmitted light intensity is then given in terms of the phase retardation ($\delta$) as \cite{dierking_LCtextures},
\begin{equation}
I = I_0 \sin^2 2 \varphi \sin^2 \frac{\delta}{2} = \frac{I_0}{2} (1 - \cos \delta),
\end{equation}
\begin{figure}[b]
\centering
\includegraphics[width = 0.8\linewidth]{"Fig_3_birefringence_fit".pdf}
\caption{Experimental values of birefringence ($\Delta n$) for the LC 14-2M-CH$_3$ (half-filled squares) and its nanocomposite (half-filled circles); the solid lines (pure LC: red, LC nanocomposite: blue) represent the four-parameter fit to the experimental data using Equation (3). The related fitting parameter values are shown in the figure. The fitting parameters $\chi^2$ and $R^2$ are generated by the fitting algorithm, so that $\chi^2 \sim 0$ and $R^2 \sim 1$ describe good fits.}
\label{Fig3}
\end{figure}
Here, $\delta = \frac{2 \pi }{\lambda} \Delta n d$ is the phase retardation,
$I$ is the transmitted light intensity, $I_0$ is the incident light intensity, $\varphi$ (= 45$^\circ$) is the azimuthal angle, i.e., the angle made by optic axis with the polarizer/analyzer pass axis, $\lambda$ is the incident light wavelength, $\Delta n = n_e - n_o$ is the birefringence, $n_e$ and $n_o$ are the extraordinary and ordinary refractive indices of the LC, respectively, and $d$ is the thickness of the LC cell. The birefringence, $\Delta n$, is measured directly from the experimental results using equation (1), as a function of temperature. In Figure 3, we plot the experimentally measured birefringence ($\Delta n$) values for pure 14-2M-CH$_3$ (half-filled squares) and its nanocomposite (half-filled circles), at different temperatures. For both cases, on cooling from the isotropic liquid, $\Delta n$ manifests a sharp increase following the Isotropic-N phase transition, basically due to an enhancement in the nematic order. On further cooling, $\Delta n$ retains the trend but now the increase is relatively slow. It is to be noted that the birefringence values decrease appreciably due to the incorporation of QDs in the entire mesophase range.\\
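For illustration, the following Python sketch (ours, not the analysis code used for Figure 3) evaluates equation (1) for a 5 $\mu$m cell at $\lambda \sim$ 633 nm and inverts it for $\Delta n$ within the principal branch; resolving the $2\pi k$ branch ambiguity in $\delta$ requires tracking the fringe order and is omitted here.
\begin{verbatim}
# Illustrative sketch of Eq. (1) and its inversion; parameter values assumed.
import numpy as np

LAMBDA = 633e-9   # He-Ne wavelength (m)
D = 5e-6          # cell thickness (m)

def transmission(delta_n):
    delta = 2 * np.pi * delta_n * D / LAMBDA
    return 0.5 * (1 - np.cos(delta))          # I / I0 at phi = 45 degrees

def delta_n_principal(i_over_i0):
    delta = 2 * np.arcsin(np.sqrt(np.clip(i_over_i0, 0, 1)))
    return delta * LAMBDA / (2 * np.pi * D)   # valid only for delta in [0, pi]

dn = 0.01                                     # assumed birefringence value
assert np.isclose(delta_n_principal(transmission(dn)), dn)
\end{verbatim}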
\begin{figure}[t]
\centering
\includegraphics[width = 0.8\linewidth]{"Fig_4_OOP".pdf}
\caption{Orientational order parameter ($S$) as a function of temperature for the bent-core LC 14-2M-CH$_3$ and its 0.5wt\% CdSe/ZnS QDs incorporated nanocomposite.}
\label{Fig4}
\end{figure}
For precise determination of the temperature dependence of the nematic order parameter (\textit{S}), we resort to the four-parameter power-law expression, which is in agreement with the mean-field theory of weakly first-order transitions \cite{four_parameter,susantaPRE},
\begin{equation}
S(T) = S^{**} + A \left\lvert\left(1 - \frac{T}{T^{**}}\right)\right\rvert^\beta,
\end{equation}
Here, $T$ is the absolute temperature, $T^{**}$ is the absolute superheating limit of the nematic phase; at $T=T^{**}$, $S(T^{**})=S^{**}$, $\beta$ is the critical exponent and $A$ is a constant. At $T=0$, $S(0)=1$, which implies $1 = S^{**}+A$. The birefringence, ($\Delta n$), can then be expressed as \cite{susantaPRE},
\begin{equation}
\Delta n = \xi\left[S^{**} + (1-S^{**}) \left\lvert\left(1 - \frac{T}{T^{**}}\right)\right\rvert^\beta\right],
\end{equation}
where $\xi=(\Delta\alpha/\langle\alpha\rangle)[(n_I^2-1)/2n_I]$, $\Delta\alpha$ is the molecular polarizability anisotropy, $\langle\alpha\rangle$ is the mean polarizability and $n_I$ is the refractive index in the isotropic phase just above the Isotropic-N transition temperature. The experimental birefringence ($\Delta n$) data have been well fitted with equation (3), which involves the four fit parameters $\xi$, $S^{**}$, $\beta$ and $T^{**}$. The obtained fitting plots (pure LC: red solid line, LC nanocomposite: blue solid line), along with the fit parameter values, are shown in Figure 3. The four-parameter fitting is considered superior to Haller's method, which involves a smaller number of fit parameters \cite{Haller_HallerFit,susantaPRE,four_parameter}. We obtain $\xi = 0.324$, $S^{**} = 0.109$, $T^{**} = 134.232 ^{\circ}C$ and $\beta = 0.251$ for the pure LC. For the LC nanocomposite, we obtain $\xi = 0.317$, $S^{**} = 0.059$, $T^{**} = 134.199 ^{\circ}C$ and $\beta = 0.253$. The fit parameter values remain almost unaltered after the incorporation of QDs, except that of $S^{**}$, which is reduced by almost a factor of $\frac{1}{2}$. This indicates that the QDs have a significant effect on the nematic order in the LC mesophase. The value of the critical exponent $\beta$ is around 0.25 in both cases, which is in excellent agreement with the theoretically predicted values for the nematic phase \cite{susantaPRE,four_parameter}. The temperature-dependent macroscopic orientational order parameter ($S$) is calculated using equation (2) with the parameter values obtained from the fittings. The obtained temperature-dependent profiles of $S$, for both cases, are shown in Figure 4. The order parameter $S$ decreases appreciably after the incorporation of QDs. The decrease in order parameter can be ascribed to the reduction of cybotactic cluster size after QDs incorporation, as will be discussed in the dielectric studies section. The nematic phase range, as observed from the birefringence measurements, was found to be around 134-106 $^\circ$C for the pure LC and around 134-104 $^\circ$C for the QDs incorporated LC, consistent with the POM observations.\\
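As a sketch of how such a four-parameter fit can be carried out numerically, the following Python snippet fits equation (3) with \texttt{scipy.optimize.curve\_fit}; the synthetic data and initial guesses are placeholders of our own, and the fitted values quoted above come from the measured birefringence, not from this toy example.
\begin{verbatim}
# Illustrative four-parameter fit of Eq. (3); data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def delta_n_model(T, xi, S_ss, T_ss, beta):
    # Eq. (3): Delta n = xi * [S** + (1 - S**) * |1 - T/T**|^beta]
    return xi * (S_ss + (1 - S_ss) * np.abs(1 - T / T_ss) ** beta)

T_data = np.linspace(379.0, 406.0, 40)            # absolute temperatures (K)
true_params = (0.32, 0.11, 407.4, 0.25)           # placeholder "true" values
dn_data = delta_n_model(T_data, *true_params) \
          + np.random.normal(0.0, 1e-3, T_data.size)

popt, pcov = curve_fit(delta_n_model, T_data, dn_data,
                       p0=(0.3, 0.1, 408.0, 0.25))
xi, S_ss, T_ss, beta = popt
S_T = S_ss + (1 - S_ss) * np.abs(1 - T_data / T_ss) ** beta   # Eq. (2)
\end{verbatim}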
\subsection{Dielectric Studies}
\begin{figure}[b]
\centering
\includegraphics[width = \linewidth]{Fig_5_dielectric_spectroscopy.pdf}
\caption{Frequency-dependent real ($\epsilon'$) and imaginary ($\epsilon ''$) parts of dielectric permittivity of (a-b) pristine LC (14-2M-CH$_3$) and (c-d) QDs dispersed LC nanocomposite (14-2M-CH$_{3}$ + 0.5wt\% CdSe/ZnS QDs), at different temperatures, during cooling. ($f$ in Hz)}
\label{Fig5}
\end{figure}
Dielectric measurements have been carried out in a frequency range of 20 Hz $-$ 2 MHz (measuring voltage V$_{rms}$ = 0.2 V) and at different temperatures during the cooling cycle. The complex dielectric permittivity ($\epsilon^*$) of LCs, in the frequency domain, is expressed as, $\epsilon^*(f) = \epsilon'(f) - i\epsilon''(f)$ \cite{Haase_relaxation}. Here, $\epsilon'$ and $\epsilon''$ are the real and the imaginary parts of the complex dielectric permittivity, respectively. The dielectric spectra of $\epsilon'$ and $\epsilon''$, obtained from experiments, for the LC 14-2M-CH$_3$ and its QDs dispersed nanocomposite are shown in Figure 5. The maximum experimental error for the dielectric measurements lie within $\pm 1\%$. From Figure 5(a), we can see that the value of $\epsilon'$ at lower frequencies is $\sim$ 110, for the LC 14-2M-CH$_3$. Such high values of $\epsilon'$ have been recently observed in bent-core LCs containing cybotactic clusters \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC}. The dielectric absorption spectra unveils the associated relaxation processes in the medium. The absorption spectra of 14-2M-CH$_3$ is depicted in Figure 5(b). At any temperature, two distinct relaxation peaks (or modes) can be identified: a low-frequency mode (M$_1$) and a high-frequency mode (M$_2$). The two modes represent different relaxation processes present in the LC medium. Collective relaxation processes (due to cybotactic clusters) are known to give rise to low-frequency absorption peaks similar to M$_1$, and they are widely encountered in the N phases of bent-core LCs \cite{Haase_relaxation, Ghosh_BLC_Ferro_JMCC, Shankar_Cybo_AFM, Shankar_Cybo_CPC, Scaramuzza_BLC_JAP, Jakli_Cybo_DirectObs_PRL}. The relaxation frequencies ($f_R$) associated with cybotactic clusters can vary in the range of few tens of Hz to a few hundred Hz \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC, Ghosh_BLC_Ferro_JMCC, Scaramuzza_BLC_JAP}. Therefore, the mode M$_1$ is attributed to collective relaxation processes originating from cybotactic clusters present in the N phase of the LC. These clusters only occupy a fraction of the volume, and not all molecules form these clusters \cite{madhusudana2017two,patranabish2019one}. Experiments show that the clusters can also exist in the isotropic phase, and their size does not change significantly across the I-N transition - a unique property of BLCs that warrants further investigation \cite{Ghosh_BLC_Ferro_JMCC, Patranabish_JML, madhusudana2017two, Panarin_Vij_Cybo_ratio_BJN, Wiant_BLC_PRE}. As reported in \cite{Patranabish_JML}, through detailed small-angle X-ray scattering (SAXS) and dielectric measurements, the N phase of the pristine LC 14-2M-CH$_3$ is cybotactic in nature, \textit{i.e.}, it contains smectic-like cybotactic clusters. Also, the mode M$_1$ is not associated with ionic impurities because no polarization response (ionic) could be detected for both the pure and the doped LCs (applied voltage up to 80 $V_{pp}$, frequencies between mHz to kHz range) \cite{Jakli_BLC_LCR, Patranabish_JML}. The high-frequency mode M$_2$ represents reorientation of the LC molecular short-axis (around the long molecular axis), subject to planar anchoring conditions \cite{Ghosh_BLC_Ferro_JMCC, Patranabish_JML}. On entering the crystalline phase, M$_2$ was no more visible, which signifies that M$_2$ is a feature of the LC phase itself and it is not related to the cell's ITO electrodes. Further, with increasing temperature, the strength of M$_2$ is decreasing. 
This suggests that in the isotropic phase, at temperatures much higher than the isotropic-nematic transition, the mode M$_2$ will be completely absent. Therefore, we attribute M$_2$ to the reorientation of the LC molecular short-axis. Similar high-frequency modes were observed in the N phase of a 5-ring bent-core LC CNRbis12OBB and were attributed to the independent rotation of dipolar groups (around the long axis) \cite{tadapatri2010permittivity}.\\
\begin{figure}[b]
\centering
\includegraphics[width = \linewidth]{"Fig_6_DC_bias".pdf}
\caption{DC bias suppression of the low-frequency relaxation mode (M$_1$) in (a) pure 14-2M-CH$_3$ at 129 $^\circ$C and (b) 0.5 wt\% QDs incorporated 14-2M-CH$_3$ at 121 $^\circ$C. ($f$ in Hz)}
\label{Fig6}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\linewidth]{"Fig_7_Static".pdf}
\caption{Dielectric permittivity of 14-2M-CH$_3$ and its nanocomposite at 10 kHz, as a function of temperature.}
\label{Fig7}
\end{figure}
For the 0.5 wt\% QDs incorporated 14-2M-CH$_3$, the dispersion curve is shown in Figure 5(c). Similar to the pristine LC 14-2M-CH$_3$, the value of $\epsilon'$ at lower frequencies is large ($\sim$ 110). The absorption spectrum of the LC nanocomposite is depicted in Figure 5(d). At any temperature, two distinct relaxation peaks (or modes) can be identified: a low-frequency mode (M$_1$) and a high-frequency mode (M$_2$). After the incorporation of QDs, a relative change in the associated relaxation frequencies ($f_R$) of the modes can be observed, compared to the pristine LC. The values of $f_R$ have been evaluated from the experimental data and are discussed in detail later in this section. The $f_R$ associated with M$_1$ is denoted by $f_{R1}$ and that of M$_2$ by $f_{R2}$. By comparison with the results obtained for the pristine LC 14-2M-CH$_3$, it is evident that the collective processes (and hence the cybotactic clusters) survive in the QDs dispersed LC nanocomposite. However, to establish this firmly, additional DC bias measurements have been performed. For collective processes, when a DC bias voltage is applied across an LC cell, the relaxation process ceases to exist. As a result, the dielectric relaxation modes get suppressed and at high voltages they become extinct \cite{douali_dielectric, Ghosh_BLC_Ferro_JMCC, Haase_relaxation}. A DC bias voltage of amplitude up to 20 V was applied across the LC cell and the dielectric measurements were performed (Figure 6). For the pure LC 14-2M-CH$_3$, a continuous and gradual suppression of mode M$_1$ with the applied DC bias voltage is observed (Figure 6(a)). This is confirmatory proof of collective relaxations, and hence of the presence of cybotactic clusters in the N phase of the LC \cite{Ghosh_BLC_Ferro_JMCC,Patranabish_JML}. As for the pristine LC, we observe that the mode M$_1$ of the LC nanocomposite becomes suppressed (Figure 6(b)), and then completely absent at higher voltages ($\sim$ 20 V). This observation further confirms the collective behaviour of M$_1$ \cite{Ghosh_BLC_Ferro_JMCC,Patranabish_JML}, and hence corroborates retention of the cybotactic nematic phase (N$_{Cyb}$) in the QDs dispersed LC nanocomposite. The high-frequency mode M$_2$, however, does not show any change on DC bias application and hence represents reorientation of the LC molecular short axis. Moreover, we note that in the doped LC, similar to the pristine LC, M$_2$ is absent in the crystalline state and its strength decreases with increasing temperature.\\
The permittivity ($\epsilon'$) values at $f =$ 10 kHz have been evaluated as a function of temperature (Figure 7). Figure 7 shows that, on incorporation of QDs, the permittivity ($\epsilon'$) increases appreciably. In a planar configuration, $\epsilon'$ represents $\epsilon_{\perp}$. The dielectric anisotropy ($\Delta \epsilon$) is defined as $\Delta \epsilon = \epsilon_{||} - \epsilon_{\perp}$. Therefore, an increase in $\epsilon'$ (i.e., in $\epsilon_{\perp}$) implies a decrease in $\Delta \epsilon$. Further, a reduction in the dielectric anisotropy is indicative of a decreasing macroscopic order parameter (since $\Delta \epsilon \propto S$) \cite{JitendraK_QDBLC_JML, maier_orderparameter}. This agrees well with the observations made in sections III-A and III-B.\\
To analyze the dielectric modes and the effects of incorporation of QDs, the associated dielectric parameters (e.g. dielectric strength ($\delta \epsilon$), relaxation frequency ($f_R$)) have been evaluated by fitting the experimental dielectric data (both $\epsilon'$ and $\epsilon''$, simultaneously), using the well-known Havriliak-Negami fit function. The frequency-dependent complex dielectric permittivity, $\epsilon^{*}(f) $, can be described by the modified Havriliak-Negami (H-N) equation, \cite{Havriliak_1966, Havriliak_1967,Ghosh_HNFit_JML,Ghosh_HNFit_LC,susantaJMC} which also includes contributions from the dc conductivity ($\sigma_0$):
\begin{equation}
\epsilon^*(f) = \epsilon_{\infty} + \sum_{k=1}^N \frac{\delta \epsilon_k}{[1 + (i 2\pi f\tau_k)^{\alpha_k}]^{\beta_k}} - \frac{i \sigma_0}{\epsilon_0 (2 \pi f)^s}
\end{equation}
\begin{figure}[b]
\centering
\includegraphics[width = \linewidth]{"Fig_8_HN_fit".pdf}
\caption{Simultaneous fitting of the real ($\epsilon'$) and the imaginary ($\epsilon''$) parts of complex dielectric permittivity (in \textit{log} scale) using the Havriliak-Negami (H-N) equations in - (a) pure 14-2M-CH$_3$ and (b) 0.5 wt\% QDs incorporated 14-2M-CH$_3$ ($f$ in Hz). Experimental data - The green squares represent $\epsilon'$ and the hollow circles represent $\epsilon''$. Fit data - The red solid line represents fit to $\epsilon'$ and the blue solid line represents fit to $\epsilon''$. The fitting parameters $\chi^2$ and $R^2$ are generated by the fitting algorithm, so that $\chi^2 \sim 0$ and $R^2 \sim 1$ describe good fits.}
\label{Fig8}
\end{figure}
The last term on the right-hand side of equation (4) describes the motion of free-charge carriers in the sample. The characteristic dielectric parameters such as the relaxation frequency ($f_R$) and the dielectric strength ($\delta \epsilon$) are obtained by fitting the experimental dielectric permittivity ($\epsilon'$) and dielectric loss ($\epsilon''$) data simultaneously to the real and the imaginary parts of equation (4) given by \cite{Ghosh_BLC_Ferro_JMCC,Ghosh_HNFit_JML,Ghosh_HNFit_LC,Golam_Hockey_BLC_ACSOmega,susantaJMC},
\small{
\begin{equation}
\begin{aligned}
& \epsilon' = \epsilon_{\infty} + \\
& \sum_{k=1}^N \frac{\delta \epsilon_k \cos (\beta_k \theta)}{[1+(2\pi f \tau_k)^{2\alpha_k} + 2(2\pi f\tau_k)^{\alpha_k} \cos (\alpha_k \pi / 2)]^{\beta_k/2}} \\
\end{aligned}
\end{equation}
}
\small{
\begin{equation}
\begin{aligned}
& \epsilon'' = \frac{\sigma_0}{\epsilon_0 (2 \pi f)^s} + \\
& \sum_{k=1}^N \frac{\delta \epsilon_k \sin (\beta_k \theta)}{[1+(2\pi f \tau_k)^{2\alpha_k} + 2(2\pi f\tau_k)^{\alpha_k} \cos (\alpha_k \pi / 2)]^{\beta_k/2}} \\
\end{aligned}
\end{equation}
}
Here,
\begin{equation}
\theta= \tan^{-1} \left[ \frac{(2 \pi f \tau_k)^{\alpha_k}\sin(\alpha_k \pi/2)}{1+ (2 \pi f \tau_k)^{\alpha_k} \cos(\alpha_k \pi/2)} \right]
\end{equation}
\begin{figure}[b]
\centering
\includegraphics[width = \linewidth]{"Fig_9_strength_fR".pdf}
\caption{Temperature-dependent variation of the relaxation frequency ($f_R$) and the dielectric strength ($\delta \epsilon$) corresponding to M$_1$ and M$_2$ of (a-b) the pristine LC and (c-d) 0.5 wt $\%$ QDs incorporated LC.}
\label{Fig9}
\end{figure}
Here, $f$ is the frequency, $\epsilon_{\infty}$ is the high-frequency limit of the permittivity, $\delta \epsilon_k$ is the dielectric strength of the $k$-th relaxation process, $\sigma_0$ is the dc conductivity, $\epsilon_0$ is the free-space permittivity ($8.854\times10^{-12}$ F/m), $s$ is a fitting parameter responsible for the nonlinearity in the dc conductivity part (for ohmic behaviour, $s$ = 1), $N$ is the number of relaxation processes, $\tau_k$ ($= 1/(2\pi f_k)$) is the relaxation time of the $k$-th relaxation process, and $\alpha_k$ and $\beta_k$ are the empirical fit parameters that describe the symmetric and non-symmetric broadening, respectively, of the $k$-th relaxation peak. In our case, the absorption curve contains two different relaxation peaks and hence $k =$ 1 and 2. Representative Havriliak-Negami (H-N) fits are shown in Figure 8. The values of $\alpha_1$ and $\alpha_2$ lie in the range of 0.97 $-$ 1, while the values of $\beta_1$ and $\beta_2$ lie in the range of 0.93 $-$ 1 (the fitting is performed over the temperature range $106^{\circ}$C $-$ $134^{\circ}$C, and the quoted ranges of $\alpha_1,\ldots,\beta_2$ refer to this interval). In the study of a 5-ring bent-core LC C1Pbis10BB and its mixtures with a calamitic nematic LC 6OO8, the authors reported a Debye-type low-frequency relaxation mode $B_{||1}$ \cite{salamon2010dielectric}. They also report that smectic-like clusters can induce a Debye-type relaxation in the low-frequency region of the dielectric spectrum. Our dielectric results likewise indicate that M$_1$ is a near Debye-like relaxation process, and the associated relaxation frequencies overlap with those of the mode $B_{||1}$ reported in \cite{salamon2010dielectric}. The variations of the relaxation frequency ($f_R$) and the dielectric strength ($\delta \epsilon$) of modes M$_1$ and M$_2$ with temperature, as obtained from the fitting, are shown in Figure 9. The results show that $\delta \epsilon_1$ ($i.e.$ corresponding to M$_1$) for 14-2M-CH$_3$ increases slightly, from $\sim$ 98 to $\sim$ 104, with increasing temperature. Similarly, $\delta \epsilon_1$ for the QDs dispersed nanocomposite increases slightly, from $\sim$ 100 to $\sim$ 105, with increasing temperature. Thus, the dielectric strength $\delta \epsilon_1$ is largely unaffected on doping. Again, the value of $\delta \epsilon_1$ is quite large, similar to other bent-core LCs with cybotactic clusters \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC, Ghosh_BLC_Ferro_JMCC}. The dielectric strength $\delta \epsilon_2$ associated with M$_2$ is found to be very small and increases with decreasing temperature - from $\sim$ 4.5 to 5.5 for both 14-2M-CH$_3$ and its nanocomposite.\\
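As an illustration of this simultaneous-fit procedure, a minimal Python sketch is given below. It is only a sketch: it assumes two H-N processes plus the dc-conductivity term of equation (4), uses synthetic spectra in place of the measured data, and all numerical values (including the starting guesses) are placeholders rather than the fitted parameters reported here.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

EPS0 = 8.854e-12  # free-space permittivity (F/m)

def hn_model(p, f):
    # Two Havriliak-Negami processes (M1, M2) plus a dc-conductivity
    # loss term, i.e. the real and imaginary parts of equation (4).
    eps_inf, de1, tau1, a1, b1, de2, tau2, a2, b2, sig0, s = p
    w = 2.0 * np.pi * f
    x = (eps_inf
         + de1 / (1.0 + (1j * w * tau1) ** a1) ** b1
         + de2 / (1.0 + (1j * w * tau2) ** a2) ** b2)
    eps_re = x.real                            # epsilon'
    eps_im = -x.imag + sig0 / (EPS0 * w ** s)  # epsilon''
    return eps_re, eps_im

def residuals(p, f, re_data, im_data):
    re_fit, im_fit = hn_model(p, f)
    # epsilon' and epsilon'' are fitted simultaneously.
    return np.concatenate([re_fit - re_data, im_fit - im_data])

# Synthetic spectra standing in for the measured data (placeholders).
f = np.logspace(np.log10(20.0), np.log10(2e6), 200)
p_true = [4.0, 100.0, 1/(2*np.pi*250.0), 0.98, 0.95,
          5.0, 1/(2*np.pi*2.0e5), 1.0, 1.0, 1e-9, 1.0]
re_d, im_d = hn_model(p_true, f)

p0 = [3.0, 80.0, 1e-3, 0.9, 0.9, 4.0, 1e-6, 0.9, 0.9, 1e-9, 1.0]
fit = least_squares(residuals, p0, args=(f, re_d, im_d),
                    bounds=(0.0, np.inf))
f_R1 = 1.0 / (2.0 * np.pi * fit.x[2])
print("fitted f_R1 = %.0f Hz" % f_R1)
\end{verbatim}
Goodness-of-fit measures such as the $\chi^2$ and $R^2$ values quoted in Figure 8 can then be computed from the residuals returned by the optimizer.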
The relaxation frequency ($f_{R1}$) associated with M$_1$ lies in the range of $\sim$ 170 Hz to $\sim$ 320 Hz for the pure LC, similar to several other bent-core LCs with cybotactic clusters \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC, Ghosh_BLC_Ferro_JMCC}. The incorporation of QDs causes $f_{R1}$ to shift to higher frequencies ($\sim$ 220 Hz to $\sim$ 430 Hz). This indicates an apparent reduction in the size of the smectic-like cybotactic clusters \cite{Panarin_Vij_Cybo_ratio_BJN}. This reduction can be estimated qualitatively by taking the ratio of the relaxation frequencies $f_{R1}$ of the pristine LC and its doped counterpart. The ratio has an average value of $\sim$ 0.67, where the average is taken over two sets of measurements and a range of temperatures. This ratio signifies a relative change in the average number of molecules ($N_c$) present in each cluster (in accordance with our earlier theoretical model and the experiments) \cite{patranabish2019one,Panarin_Vij_Cybo_ratio_BJN}. The decrease in the measured order parameter ($S$) on doping can be ascribed to reduced cluster sizes on QDs incorporation. In our earlier theoretical work on bent-core nematic LCs, we took $N_c$ = 50 \cite{patranabish2019one}. Recent observations have shown that the typical size of smectic-like cybotactic clusters lies in the range of a few tens of nanometres to around a hundred nanometres \cite{Jakli_Cybo_DirectObs_PRL}. Again, the typical dimension of a bent-core LC molecule is around 2$-$3 nanometres. Therefore, the number $N_c$ = 50 is justified in the case of pure (undoped) bent-core LCs with cybotactic clusters. For the QDs dispersed bent-core nematic LCs, we can take $N_c$ $\sim$ 33 ($\approx 50 \times 0.67$) as a reasonable value. The relaxation frequency $f_{R1}$ manifests a gradual decrease with decreasing temperature, revealing an Arrhenius-type behaviour ($f_R=f_0 \exp(-E_a/k_B T)$; $f_0$ is a temperature-independent constant, $E_a$ is the activation energy, $k_B$ is the Boltzmann constant and $T$ is the absolute temperature). $f_{R2}$ also demonstrates an Arrhenius-like behaviour. \\
\begin{figure}[t]
\centering
\includegraphics[width =\linewidth] {"Fig_10_E_a".pdf}
\caption{Arrhenius plot of the M$_1$ relaxation frequency ($f_{R1}$) in the nematic (N) phase of (a) pristine and (b) 0.5wt\% CdSe/ZnS QDs incorporated 14-2M-CH$_3$. The activation energy ($E_a$) is calculated from the slope of the linear fit, represented by the solid red line.}
\label{Fig10}
\end{figure}
The activation energy ($E_a$) associated with a relaxation process encodes the minimum amount of energy required for that process to take place \cite{Haase_relaxation}. The value of $E_a$ associated with the relaxation processes can be obtained by plotting $f_R$ as a function of $1/T$, using the relation $f_R=f_0 \exp(-E_a/k_B T)$. The Arrhenius plots of $ln$($f_{R1}$) vs. $1/T$ (for M$_1$) for the two compounds are shown in Figure 10. The activation energy ($E_a$), associated with M$_1$, is
evaluated from the slope of the linear fit, as shown in Figure 10. The value of $E_a$ associated with M$_1$ increases significantly, from $\sim$ 29.12 kJ/mol for the pristine LC to $\sim$ 37.68 kJ/mol after the incorporation of QDs. For a smaller cluster, the cluster's dipole moment ($\mu$) is also smaller, and hence more energy is required for it to respond to an external electric field. Therefore, an increased value of $E_a$ for M$_1$, after the incorporation of CdSe/ZnS QDs, implies a decrease in the size of the cybotactic clusters. This also concurs with our earlier observations. The activation energy associated with M$_2$ has rather small values ($\sim$ 8 kJ/mol) and does not change significantly after the incorporation of QDs.
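As a simple illustration of this estimate, the Arrhenius slope can be extracted with a few lines of Python; the $(T, f_{R1})$ pairs below are placeholders chosen only to mimic the order of magnitude quoted above, not the actual fitted values.
\begin{verbatim}
import numpy as np

R_GAS = 8.314  # gas constant (J/(mol K))

# Placeholder (T, f_R1) pairs; replace with the values obtained
# from the H-N fits at each temperature.
T   = np.array([379.0, 386.0, 393.0, 400.0, 407.0])   # K
fR1 = np.array([170.0, 201.0, 236.0, 275.0, 320.0])   # Hz

# ln f_R = ln f_0 - (E_a/R)(1/T): the slope of ln f_R vs 1/T gives E_a.
slope, intercept = np.polyfit(1.0 / T, np.log(fR1), 1)
E_a = -slope * R_GAS / 1000.0                          # kJ/mol
print("E_a ~ %.1f kJ/mol" % E_a)
\end{verbatim}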
\iffalse
\begin{figure*}[t]
\includegraphics[width = \linewidth]{Res_1.png}
\caption{$\gamma = 5$ (in $10^7/4$ cgs units) with different $W$ and $N_c$}\label{Res_1}
\end{figure*}
\fi
\section{Mathematical Model}
In this section, we propose a simple mathematical model that describes the QD doping-induced reduction of the nematic scalar order parameter for dilute suspensions, which could subsequently be improved to describe novel features such as ferroelectricity, chirality, biaxiality and transition pathways between multiple stable states. Since the experimental domain is a simple planar cell, denoted by $\Omega \subset \mathbb{R}^3$, with height about 5 microns ($5 \times 10^{-6} \mathrm{m}$), we assume a characteristic length of the system
\begin{equation}
x_s = 5 \times 10^{-8} \mathrm{m},
\end{equation}
as in \cite{patranabish2019one}, so that the cell thickness is $100$ units of $x_s$. The cross-sectional dimensions of the cell are much larger than the cell height, so it is reasonable to assume that structural variations only occur across the cell height i.e. this is a one-dimensional problem. We assume that the QDs are spherical in shape, with an average radius of $2.8$ nanometres; the size of the QDs is much smaller than the typical separation between them and the total volume occupied by the QDs is small. Let $R$ denote the radius of a QD ($2.8$ nanometres as reported in these experiments) and we define a small parameter $\epsilon$ so that
$$ \epsilon^{\alpha} = \frac{R}{x_s} = R_0 = 0.056$$ for some $1<\alpha < \frac{3}{2}$. The definition of $\epsilon$ is not unique, provided it is a small parameter, relevant for \emph{dilute uniform suspensions of QDs} \cite{canevari2019design}. In particular, our mathematical model is \emph{restricted} to dilute suspensions and will need to be modified for non-dilute systems.
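For instance, with the illustrative (and otherwise arbitrary) choice $\alpha = 5/4$, this gives $\epsilon = (0.056)^{4/5} \approx 0.1$; for any admissible $\alpha \in (1, \frac{3}{2})$, $\epsilon$ lies between $0.056$ and roughly $0.15$, and is therefore indeed a small parameter.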
In \cite{madhusudana2017two}, N. V. Madhusudana proposes a Landau-de Gennes (LdG) type two-state model for the N phase of BLCs, accounting for cybotactic clusters. This two-state model is a phenomenological model based on the premise that the N phase of the BLC comprises two different types of molecules: ground state (GS) molecules and excited state (ES) molecules. The ES molecules define the smectic-like cybotactic clusters and the GS molecules are located outside the clusters. The generic LdG theory models a single component system e.g. the GS molecules, typically assumed to be rod-like molecules that tend to align with each other, yielding long-range orientational order \cite{ravnik2009landau}. Madhusudana's model is a two component model, the GS and ES molecules, with additional coupling effects and in \cite{patranabish2019one}, we describe the N phase of the BLC by two macroscopic tensor order parameters (with a number of simplifying assumptions)
\begin{equation}
\begin{aligned}
\Q_g = \sqrt{\frac{3}{2} } S_g \left( \n_g \otimes \n_g - \frac{1}{3} \mathrm{I} \right)\\
\quad \Q_c = \sqrt{\frac{3}{2} } S_c \left( \n_c \otimes \n_c - \frac{1}{3} \mathrm{I} \right)
\end{aligned}
\end{equation}
respectively, where $\Q_g$ is the LdG order parameter for the GS molecules, and $\Q_c$ is the LdG order parameter associated with the ES molecules. In both cases, we assume that $\Q_g$ and $\Q_c$ have uniaxial symmetry i.e. the GS (ES) molecules align along a single averaged distinguished direction $\n_g$ (respectively $\n_c$), and assume that $\n_g$ and $\n_c$ are constant unit-vectors or directions, \textbf{so that there are no director distortions or defects}. There are two scalar order parameters, $S_c$ and $S_g$, corresponding to ES (excited state) and GS (ground state) molecules respectively. As is standard with variational approaches, the experimentally observed configurations are described by local or global minimizers of a suitably defined LdG-type energy in terms of $S_c$, $S_g$ and the coupling between them.
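It may be useful to note that, with this normalization, a direct calculation gives $\mathrm{tr}(\Q_g^2) = S_g^2$ and $\mathrm{tr}(\Q_c^2) = S_c^2$, so the scalar order parameters directly measure the magnitudes of the corresponding order parameter tensors.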
In \citeauthor{patranabish2019one}, the authors theoretically study stable $(S_g, S_c)$ profiles in a simple planar cell geometry (as experimentally studied in this manuscript), as a function of the temperature, in terms of minimizers of the following LdG-type energy (heavily inspired by the work in \cite{madhusudana2017two} with additional elastic effects that account for spatial inhomogeneities):
\begin{equation}\label{Energy1}
\begin{aligned}
\mathcal{F} & = \int_{\Omega} (1 - a_x) \left( \frac{a_g}{2}(T - T^{*})S_g^2 - \frac{B_g}{3} S_g^3 + \frac{C_g}{4} S_g^4 - E_{el} S_g \right) \\
& + \frac{a_x}{N_c} \left( - (1 - a_x) \gamma S_g S_c + \frac{\alpha_c}{2} S_c^2 + \frac{\beta_c}{4} S_c^4 \right) \\
&- a_xJ E_{el} S_c + K_g |\nabla S_g|^2 + K_c |\nabla S_c|^2 \dd \x \\
\end{aligned}
\end{equation}
Here, the subscripts $g$ and $c$ denote the GS and the
ES molecules, respectively, and the clusters are essentially
formed by the ES molecules. They work with a one-constant approximation and $K_g$ and $K_c$ are the elastic
constants of the GS and ES molecules, respectively. $a_g$, $B_g$, $C_g$ are the material-dependent parameters in the LdG free energy, and $T^*$ is the nematic supercooling temperature such that the isotropic phase of the GS molecules is unstable for $T<T^*$.
The parameter $\gamma$ is the coupling parameter between the GS molecules and the
clusters \cite{madhusudana2017two}. The coefficients $\alpha_c$ and $\beta_c$ are saturation parameters that ensure the absolute value of $S_c < 1$ for physically relevant parameter regimes, $N_c$ is the number of ES molecules in each
cluster, $a_x$ is the mole fraction of the ES
molecules, and $J$ accounts for the shape anisotropy of the ES molecules.
$E_{el}$ is the electric field energy ($\frac{1}{2}\epsilon_0 \Delta \epsilon E^2$) where $\epsilon_0$ is the
free-space permittivity, $\Delta \epsilon$ is the dielectric anisotropy, $E$ is
the applied electric field.
\iffalse
1.2,. This yields A = 0.045 × 10$^7$,B = 0.3825 × 10$^7$,C =
1.0125 × 10$^7$,D = 0.00225 × 10$^7$, E = 1800,M = 0.0001
× 10$^7$,N = 0.002 × 10$^7$, and P = 240 (in respective cgs
units). Therefore we have C1 = 0.31142,C2 = 0.05 (for γ =
5), C3 = 0.034,C4 = 0.00222,C5 = 0.000615, and C6 =
0.00453
\fi
The mathematically and physically pertinent question is - how is the energy (\ref{Energy1}) modified by the uniformly suspended QDs in the dilute limit? Following the elegant homogenisation framework developed in \cite{canevari2019design}, the additional doping can be described by an effective field in the dilute limit, and \textbf{this effective field strongly depends on the shape and anchoring conditions on the QDs, but not the size of the QDs in the dilute limit (as will be made clearer below and the size will matter in the non-dilute limit).}
We assume the QDs are spherical in shape, as stated above, and impose preferred alignment tensors on the QD surfaces, $\Q_v^g$ ($\Q_v^c$) for QDs outside (inside) the clusters, respectively.
We assume that $\Q_v^g$ and $\Q_v^c$ are constant tensors, given by,
\begin{equation}
\Q_v^g = \sqrt{\frac{3}{2} } S_g^b \left( \n_g \otimes \n_g - \frac{1}{3} \mathrm{I} \right).
\end{equation}
and
\begin{equation}
\Q_v^c = \sqrt{\frac{3}{2} } S_c^b \left( \n_c \otimes \n_c - \frac{1}{3} \mathrm{I} \right).
\end{equation}
for some fixed $S_g^b, S_c^b > 0$. There is no clear argument for the choice of $S_g^b$ and $S_c^b$ in this model but we make reasonable choices below. \textbf{Further, we assume that $\n_g$ ($\n_c$) is the same for $\Q_g$ and $\Q_v^g$ (likewise for $\Q_c$ and $\Q_v^c$), so that there is no director distortion at the QD interfaces.} Assuming a Rapini-Papoular surface anchoring energy on the QD surfaces, the QD surface energy density is given by
\begin{equation}\label{surface_reduced}
f_s^g(S_g, S_c, S_g^b, S_c^b) = W_g^0 |S_g - S_g^b|^2 + W_c^0 |S_c - S_c^b|^2,
\end{equation}
where $W_g^0$ and $W_c^0$ are the anchoring coefficients on the QD-GS and QD-ES interfaces, respectively.
For relatively strong anchoring on the QD interfaces, $W_c^0$ and $W_g^0$ can be taken to be approximately $1 \times 10^{-2} \mathrm{J/m}^2$ \cite{ravnik2009landau}. In particular, $W_c^0$ is zero for QDs outside the clusters and $W_g^0$ is zero for QDs inside the clusters. Next, we follow the paradigm in \cite{canevari2019design} to describe the collective effects of a uniform dilute suspension of QDs, with surface energies as in (\ref{surface_reduced}), in terms of an additional \emph{homogenised} effective term in the free energy (\ref{Energy1}).
As in \cite{patranabish2019one}, we let $A = (1 - a_x) a_g (T - T^{*}),~~ B = (1 - a_x) B_g,~~C = (1 - a_{x}) C_g, D = a_x(1 - a_x) \gamma / N_c, E = (1 - a_x) E_{el}, \quad M = \alpha_c a_{x} / N_c, N = \beta_c a_x / N_c, \quad P = J E_{el} a_x, \quad W_g = (1 - a_x) W_g^0$ and $W_c = \frac{a_x}{N_c} W_c^0$,
where $a_x$ is the fixed mole fraction of the ES molecules. Moreover, we assume that $E_{el} = 0$ throughout this paper. In agreement with the parameter values used in \cite{patranabish2019one}, we use the fixed values $K_g =
K_c = K = 15$ pN; $a_g = 0.04, B_g = 1.7, C_g = 4.5, \alpha_c = 0.22, \beta_c = 4.0$ ($a_g, B_g, C_g, \alpha_c$
and $\beta_c$ in $10^6/4$ SI units).
Following the methods in \cite{canevari2019design}, we describe the dilute QD-doped BLC system by means of the total free energy below, without rigorous justification but rather as a phenomenological model to describe salient features of novel nanocomposites.
\begin{equation}\label{Energy2}
\begin{aligned}
\mathcal{F} & = \int \left( \frac{A}{2}S_g^2 - \frac{B}{3} S_g^3 + \frac{C}{4} S_g^4 \right) \\
& + \left( - D S_g S_c + \frac{M}{2} S_c^2 + \frac{N}{4} S_c^4 \right) + K_g |\nabla S_g|^2 + K_c |\nabla S_c|^2 \dd \x \\
& + \epsilon^{3 - 2\alpha}\int_{\pp \mathcal{P}} W_g |S_g - S_g^b|^2 \dd S + \epsilon^{3 - 2\alpha}\int_{\pp \mathcal{P}} W_c |S_c - S_c^b|^2 \dd S, \\
\end{aligned}
\end{equation}
\\ where $\mathcal{P}$ is the collection of the QDs in the suspension and $1 < \alpha< \frac{3}{2}$, so that $\epsilon^{ 3- 2\alpha} \to 0$ as $\epsilon \to 0$. The pre-factor of $\epsilon^{3 - 2\alpha}$ is specific to dilute systems. \textbf{The main novelty is the surface energy term originating from the QD-GS and QD-ES interfaces, and the homogenized effective field is derived in the $\epsilon \to 0$ limit, as will be discussed below.}
We non-dimensionalize the free-energy (\ref{Energy2}) by letting
$\bar{\x} = \x/x_s, \quad \bar{S_g} = \sqrt{\frac{27C^2}{12 B^2}}S_g, \quad \bar{S_c} = \sqrt{\frac{27 C^2}{12 B^2}}S_c, \quad \bar{\mathcal{F}} = \frac{27^2 C^3}{72 B^4 x_s^3}\mathcal{F}$.
Dropping all \emph{bars} for convenience (so that $S_g$ and $S_c$ denote the scaled order parameters), the dimensionless energy is (we take $E_{el} = 0$)
\begin{equation}
\begin{aligned} \label{EnergyH}
\mathcal{F} & = \int_{\Omega_\epsilon} \left( \frac{t}{2}S_g^2 - S_g^3 + \frac{1}{2} S_g^4 \right) + \left( - C_1 S_g S_c + C_2 S_c^2 + C_3 S_c^4 \right) \\
& + \kappa_g \left(\frac{d S_g}{dx} \right)^2 + \kappa_c \left(\frac{d S_c}{dx} \right)^2 \dd \x \\
& + \epsilon^{3 - 2\alpha}\int_{\pp \mathcal{P}} w_g |S_g - S_g^b|^2 \dd S + \epsilon^{3 - 2 \alpha} \int_{\pp \mathcal{P}} w_c |S_c - S_c^b|^2 \dd S, \\
\end{aligned}
\end{equation}
where $\Omega_\epsilon$ is the three-dimensional planar cell with the QDs removed, $\mathcal{P}$ is the collection of the three-dimensional spherical QDs with re-scaled radius $\epsilon^{\alpha}$ for $1 < \alpha < \frac{3}{2}$ (see the definition of $\epsilon$ above), and
\begin{equation}
\begin{aligned}
& t = \frac{27 AC}{6 B^2}, \quad C_1 = \frac{27 CD}{6 B^2}, \quad C_2 = \frac{27 C M}{12 B^2}, \quad C_3 = \frac{N}{2C},\\
& \kappa_g = \frac{27 C K_g }{6 B^2 x_s^2}, \quad \kappa_c = \frac{27 C K_c}{6 B^2 x_s^2} \\
& w_g = \frac{27 C W_g}{6 B^2 x_s} \quad w_c = \frac{27 C W_c}{6 B^2 x_s} . \\
\end{aligned}
\end{equation} Note that $|\nabla S_g|^2 = \left(\frac{d S_g}{dx} \right)^2$ since we assume that structural variations in $S_g$ and $S_c$ only occur across the cell height, $0\leq x \leq 100$ (recall the choice of $x_s$ above).
In \cite{canevari2019design}, the authors study minimizers of free energies of the form,
\begin{equation}
\label{eq:homnew}\int\int\int_{\Omega_\epsilon} f_{el}(\nabla \Q ) + f_b (\Q) dV +
\epsilon^{3 - 2\alpha} \int\int_{\pp \mathcal{P}} f_s\left(\Q, \nu \right) dA, \end{equation} with $1 < \alpha < \frac{3}{2}$, where $f_{el}(\nabla \Q )$ is a general convex function of the gradient of an arbitrary order parameter $\Q$, $f_b$ is a polynomial function of the scalar invariants of $\Q$, and $f_s$ are arbitrary surface energies on the QD interfaces. The dilute limit is described by the $\epsilon \to 0$ limit, and minimizers of (\ref{eq:homnew}) converge to minimizers of the following homogenized energy, as $\epsilon \to 0$,
\begin{equation}
\label{eq:homnew2}
\mathcal{F}_h (\Q) = \int_{\Omega} f_{el}(\nabla \Q) + f_b(\Q) + f_{hom}(\Q) dV,
\end{equation}
where $f_{hom} = \int_{\partial \omega} f_s\left(\Q, \nu \right) dS$, $\omega$ is a representative QD and $\nu$ is the outward normal to $\partial \omega$. \textbf{In particular, the shape, anchoring conditions, material properties including encapsulation properties of the QD inclusions are absorbed in the definition of $f_{hom}$. The distortion effects around the QDs are also described by $f_{hom}$ for dilute systems.} In our case, the QDs are spherical inclusions and applying the results in \cite{canevari2019design}, we have $f_{hom} = \int_{\pp B(\mathbf{0}, 1)} f_s(\Q, \nu) dA$, $B(\mathbf{0}, 1) \subset \mathbb{R}^3$ is a generic three-dimensional unit ball and $f_s$ is the surface energy (\ref{surface_reduced}).
We apply this result to calculate the homogenized potential corresponding to (\ref{surface_reduced}). Since the surface energy density (\ref{surface_reduced}) does not depend on the outward normal $\nu$ for our constant preferred tensors, the integral over the unit sphere simply multiplies it by the surface area $4\pi$; expanding the squares and dropping the additive constants, we obtain
\begin{equation}
\label{eq:fhom}
f_{hom}(S_g, S_c) = w_{g}^{(1)} S_g^2 - w_{g}^{(2)} S_g + w_{c}^{(1)} S_c^2 - w_{c}^{(2)} S_c,
\end{equation}
where
\begin{equation}
w_{g}^{(1)} = 4 \pi w_g, \quad w_{g}^{(2)} = 8 \pi S_g^b w_g
\end{equation}
and
\begin{equation}
w_{c}^{(1)} = 4 \pi w_c, \quad w_{c}^{(2)} = 8 \pi S_c^b w_c.
\end{equation}
Hence, the total non-dimensionalized \emph{homogenized} free energy is given by
\begin{equation}\label{bulk_energy}
\begin{aligned}
\mathcal{F} = \int_{\Omega} & \left( \left( \frac{t}{2} + w_g^{(1)} \right)S_g^2 - \sqrt{6} S_g^3 + \frac{1}{2} S_g^4 \right) \\
& + \left( - C_1 S_g S_c + (C_2 + w_{c}^{(1)} ) S_c^2 + C_3 S_c^4 \right) \\
& + \kappa_g \left(\frac{d S_g}{dx}\right)^2 + \kappa_c \left( \frac{d S_c}{dx} \right)^2 - w_{g}^{(2)} S_g - w_{c}^{(2)} S_c \dd \x. \\
\end{aligned}
\end{equation}
For the parameter values as stated before, we have $C_1 = 0.0700692$, $C_2 = 0.0017$ and $C_3 = 0.0040$.
\iffalse
\begin{figure}[!h]
\centering
\begin{overpic}
[width = 0.9 \linewidth]{gamma_fixed_Nc_5_W.eps}
\put(-5, 72){\large (a)}
\end{overpic}
\vspace{2em}
\begin{overpic}[width = 0.9\linewidth]{Nc_fixed_gamma_5_W.eps}
\put(-5, 72){\large (b)}
\end{overpic}
\caption{(a) Bulk mean order parameter as a function of $\gamma$ for fixed $N_c = 50$ and $T = 379$ with $W = 0.01$ and $W = 0$. (b) Bulk mean order parameter as a function of $N_c$ for fixed $\gamma = 5$ and $T = 379$ with $W = 0.01$ and $W = 0$.}\label{Nc}
\end{figure}
\fi
Then the equilibrium/ physically observable $(S_g, S_c)$ profiles are solutions of the Euler-Lagrange equations corresponding to (\ref{bulk_energy}).
\begin{equation}\label{EL_bulk}
\begin{cases}
& \kappa_g \frac{d^2 S_g}{dx^2} = 2 S_g^3 - 3 \sqrt{6} S_g^2 + (t + 2 w_g^{(1)}) S_g - C_1 S_c - w_g^{(2)} \\
& \kappa_c \frac{d^2 S_c}{dx^2} = 4 C_3 S_c^3 + (2 C_2 + 2 w_c^{(1)}) S_c - C_1 S_g - w_c^{(2)} .\\
\end{cases}
\end{equation} These equations need to be complemented by boundary conditions for $S_g$ and $S_c$, we fix Dirichlet boundary conditions for the scalar order parameters on the bottom ($x=0$) and top ($x=100$) of the planar cell i.e.
\begin{equation} \label{dirichletbcs}
S_g = \frac{3 + \sqrt{9 - 8t}}{4}, \quad S_c = 0 ~ \textrm{on $x=0$ and $x=100$,}
\end{equation} which corresponds to the absence of clusters on the planar cell boundaries. The boundary conditions (\ref{dirichletbcs}) are not special and we believe that our qualitative conclusions would hold for other choices of the Dirichlet boundary conditions too. We assume that $\mathbf{n}_g$ and $\mathbf{n}_c$ are constant unit vectors, and our analysis is independent of the choice of $\mathbf{n}_g$ and $\mathbf{n}_c$, provided they are constant vectors. We also need to specify $S_g^b$ and $S_c^b$ to determine $w_g^{(2)}$ and $w_c^{(2)}$ above, and we choose
\begin{equation}
S_g^b = \frac{3 + \sqrt{9 - 8t}}{4}, \quad S_c^b = 0,
\end{equation}
with $W_g = W_c = W$.
Next, we numerically solve the coupled equations in (\ref{EL_bulk}) to compute the equilibrium profiles of $(S_g, S_c)$ as a function of temperature, for different values of $W$. The parameters $N_c$ and $\gamma$ are coupled, i.e., larger clusters are likely to have larger values of $N_c$ and $\gamma$, and we expect $N_c$ and $\gamma$ to be smaller for the doped system compared to its undoped counterpart, based on the experimental results that suggest smaller cybotactic clusters in QD-doped BLCs compared to their undoped counterparts. We define the bulk mean order parameter $S_m$, a weighted scalar order parameter, as shown below
\begin{equation}
S_m = (1 - a_x) S_g + a_x S_c.
\end{equation}
\textbf{The weighted scalar order parameter, $S_m$, is the theoretical analogue of the measured order parameters from experimental birefringence measurements.} We use the value of $S_m$ at room temperature (293 K) with $(N_c, W, \gamma) = (50, 0, 5)$ to normalize $S_m$. Recall that $N_c=50$ and $\gamma = 5$ have been used to study the undoped BLC system in \cite{patranabish2019one}.
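For concreteness, a minimal Python sketch of this numerical procedure is given below. It is only a sketch: it integrates the Euler-Lagrange system (\ref{EL_bulk}) with the Dirichlet conditions (\ref{dirichletbcs}) using a standard boundary-value solver, and the dimensionless values of $t$, $\kappa_g$, $\kappa_c$, $a_x$ and the homogenized couplings are placeholders that must be computed from the temperature, $N_c$, $\gamma$ and the anchoring strength $W$ as described above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

# Placeholder dimensionless parameters (temperature- and W-dependent).
t, C1, C2, C3 = -0.5, 0.0700692, 0.0017, 0.0040
kg = kc = 0.1                                  # kappa_g, kappa_c
wg1, wg2, wc1, wc2 = 0.01, 0.02, 0.01, 0.0     # homogenized couplings
a_x = 0.5                                      # ES mole fraction

Sg_b = (3.0 + np.sqrt(9.0 - 8.0 * t)) / 4.0    # boundary value of S_g

def rhs(x, y):
    # y = [S_g, S_g', S_c, S_c']; Euler-Lagrange equations of the text.
    Sg, dSg, Sc, dSc = y
    d2Sg = (2*Sg**3 - 3*np.sqrt(6)*Sg**2
            + (t + 2*wg1)*Sg - C1*Sc - wg2) / kg
    d2Sc = (4*C3*Sc**3 + (2*C2 + 2*wc1)*Sc - C1*Sg - wc2) / kc
    return np.vstack([dSg, d2Sg, dSc, d2Sc])

def bc(ya, yb):
    # S_g = S_g^b and S_c = 0 on both plates (x = 0 and x = 100).
    return np.array([ya[0] - Sg_b, yb[0] - Sg_b, ya[2], yb[2]])

x = np.linspace(0.0, 100.0, 201)
y0 = np.zeros((4, x.size))
y0[0] = Sg_b                                   # uniform initial guess
sol = solve_bvp(rhs, bc, x, y0, max_nodes=10000)

# Spatial average of the weighted order parameter across the cell.
S_m = np.mean((1.0 - a_x) * sol.y[0] + a_x * sol.y[2])
print("converged:", sol.success, " S_m =", S_m)
\end{verbatim}
The resulting $S_m$ is then normalized by its undoped room-temperature value, as described above, before being compared across temperatures and doping levels.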
In Figure \ref{Sm_T}, we plot $S_m$ as a function of temperature for the undoped and doped systems, for different values of $W$. For the undoped system, $(N_c, \gamma, W) = (50, 5, 0)$ by analogy with the values used in \cite{patranabish2019one}. \textbf{For QD-doped systems, the experimental results suggest that the clusters are shrunk by a factor of $0.67$ (qualitatively deduced from the ratio of the relaxation frequencies) and hence we take $N_c = 50 \times 0.67 = 33.5$ and $\gamma = 5 \times 0.67 = 3.35$ for doped systems.} We plot the resulting $S_m$-profiles for three doped systems, $(N_c, \gamma, W) = (33.5, 3.35, 0.01)$, $(N_c, \gamma, W) = (33.5, 3.35, 0.001)$, and $(N_c, \gamma, W) = (33.5, 3.35, 0.0001)$, in Figure \ref{Sm_T}.
\begin{figure}[!h]
\centering
\includegraphics[width = \linewidth]{dope_un_dope.eps}
\caption{Bulk mean order parameter as a function of temperature for the undoped (green circle) and the doped (red square, blue triangle, purple diamond) cases (Temperature `$T$' is in K).}\label{Sm_T}
\end{figure}
This simple model clearly captures the doping-induced reduction in the values of $S_m$, consistent with the experimental results in Figure~$4$. $S_m$ also decreases with increasing $T$, as expected. The numerically computed values of $S_m$ for the doped systems in Figure~\ref{Sm_T} are lower than the experimentally reported values in Figure~$4$, and we think an exact fitting between the numerical and the experimental results is unlikely at this stage. Further, the numerical results are very sensitive to the values of the anchoring coefficient, and we simply do not have reliable estimates of the anchoring coefficients for QDs. In fact, the numerically computed $S_m$'s, if fitted to the experimental data, could provide an ingenious method for estimating the surface energy coefficients for QD-doped BLC systems. It is clear that the predictions of the doped model approach the predictions of the undoped model as $W \to 0$, as expected. We do not pursue this further in this manuscript.
This simple model provides a clear explanation of why the QD-doping reduces $S_m$ in experiments - the interaction/anchoring between the QDs and the host BLC matrix increases the effective temperature (the coefficient of $S_c^2$ and $S_g^2$ in (\ref{bulk_energy})) of both the GS state and the cybotactic clusters (ES state), and hence at a given temperature, the QD-doped system experiences a shifted higher temperature and the shift is directly determined by the homogenized potential, $f_{hom}$, which in turn is determined by the anchoring and shape of the dispersed QDs. This QD-induced effective temperature shift necessarily reduces $S_m$ compared to its undoped counterpart. A doping-induced reduced $S_m$ qualitatively explains the experimentally observed reduced dielectric anisotropy and birefringence in doped systems, compared to undoped systems. \textbf{However, several open questions remain, particularly as to how we ascribe values to the cluster parameters, $N_c$ and $\gamma$, and how to describe the effects of QD-doping on these cluster parameters. Nevertheless, this is the first attempt to mathematically model dilute suspensions of QDs in the nematic phase of BLC materials, with cybotactic clusters, providing qualitative agreement with experiments.}
\section{Conclusion}
We perform experimental and theoretical studies of a QDs-dispersed BLC 14-2M-CH$_{3}$ inside a planar cell. We believe that QDs are attractive nano-inclusions for stable suspensions, without aggregation effects. We present experimental optical profiles for the pristine LC and the QDs incorporated LC nanocomposite systems, tracking the textural colour changes with temperature. We perform experimental measurements of the dielectric permittivity, including the dielectric dispersion and absorption spectra, and use fitting algorithms to calculate relaxation frequencies and dielectric strengths, which are used to validate the existence of cybotactic clusters in the doped and undoped systems, the reduction of cluster sizes in doped systems and the corresponding increase in activation energies. We also present experimental measurements of the optical birefringence and the orientational order parameters of the doped and undoped systems. All the experiments demonstrate a doping-induced reduction of orientational order and cluster sizes, which manifests in a doping-induced reduced birefringence and a (qualitatively) reduced dielectric anisotropy at a fixed temperature. In terms of future experiments, we would like to investigate biaxiality in these QDs-dispersed BLC systems, chirality and prototype devices based on such simple planar cell geometries. For example, we could treat the planar cell to have conflicting boundary conditions on the two cell surfaces, naturally inducing inhomogeneous director profiles.
We support some of our experimental findings with a homogenized Landau-de Gennes type model for a doped BLC system, with two scalar order parameters, $S_g$ and $S_c$, and constant director profiles. In particular, we capture the doping-induced reduction in the mean scalar order parameter which is an informative and illuminating first step. The theory can be embellished in many ways, to make it physically realistic e.g. elastic anisotropy involving additional terms in the elastic energy density, non-constant director profiles captured by non-constant $\n_g$ and $\n_c$ and understanding how the QDs affect the cybotactic clusters. This could be done by using a general two-tensor model, $\Q_g$ and $\Q_c$, without making any additional assumptions about uniaxial symmetry or constant directors, as in our mathematical model in this paper. However, it will be a challenge to describe the cybotactic cluster-mediated coupling between $\Q_g$ and $\Q_c$ without these restrictive assumptions, and some of these directions will be pursued in future work.
\section*{Credit taxonomy}
S. Patranabish and A. Sinha conducted the experiments and analysed the experimental results. Y. Wang and A. Majumdar performed the modelling and comparisons between experiments and modelling.
\section*{Acknowledgement}
The authors would like to thank Prof. N.V.S. Rao, Department of Chemistry, Assam University, Assam, India and Dr. Golam Mohiuddin, Xi'an Jiaotong University, Xi'an, China for providing the liquid crystal samples. The authors thank Dr. Giacomo Canevari for helpful discussions on the homogenised potentials. The authors also thank Anjali Sharma and Ingo Dierking for useful feedback on the experimental sections. S.P. thanks Dr. Susanta Chakraborty for useful discussions on fitting of the experimental data. S.P. acknowledges IIT Delhi for financial support under Full-time Institute Assistantship. The authors would like to thank DST-UKIERI for generous funding to support the 3-year collaborative
project.
\baselineskip=20pt
\newfont{\elevenmib}{cmmib10 scaled\magstep1}
\newcommand{\preprint}{
\begin{flushleft}
\end{flushleft}\vspace{-1.3cm}
\begin{flushright}\normalsize
\end{flushright}}
\newcommand{\Title}[1]{{\baselineskip=26pt
\begin{center} \Large \bf #1 \\ \ \\ \end{center}}}
\newcommand{\Author}{\begin{center}
\large \bf
Xiaotian Xu${}^{a}$, Junpeng Cao${}^{a,b,c,d}$, Yi Qiao${}^{a,e}$, Wen-Li
Yang${}^{d,e,f,g}\footnote{Corresponding author: wlyang@nwu.edu.cn}$, Kangjie Shi${}^e$ and Yupeng
Wang${}^{a,d,h}\footnote{Corresponding author: yupeng@iphy.ac.cn}$
\end{center}}
\newcommand{\Address}{\begin{center}
${}^a$ Beijing National Laboratory for Condensed Matter
Physics, Institute of Physics, Chinese Academy of Sciences, Beijing
100190, China\\
${}^b$ Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China \\
${}^c$ School of Physical Sciences, University of Chinese Academy of
Sciences, Beijing, China\\
${}^d$ Peng Huanwu Center for Fundamental Theory, Xian 710127, China\\
${}^e$ Institute of Modern Physics, Northwest University,
Xian 710127, China\\
${}^f$ Shaanxi Key Laboratory for Theoretical Physics Frontiers, Xian 710127, China\\
${}^g$ Physics school, Northwest University, Xian 710127, China\\
${}^h$ The Yangtze River Delta Physics Research Center, Liyang, Jiangsu, China
\end{center}}
\newcommand{\Accepted}[1]{\begin{center}
{\large \sf #1}\\ \vspace{1mm}{\small \sf Accepted for Publication}
\end{center}}
\preprint
\thispagestyle{empty}
\bigskip\bigskip\bigskip
\Title{Graded off-diagonal Bethe ansatz solution of the $SU(2|2)$ spin chain model with generic integrable boundaries} \Author
\Address
\vspace{1cm}
\begin{abstract}
\bigskip
The graded off-diagonal Bethe ansatz method is proposed to study supersymmetric quantum integrable models (i.e., quantum integrable models associated with superalgebras). As an example,
the exact solutions of the $SU(2|2)$ vertex model with both periodic and generic open boundary conditions are constructed.
By generalizing the fusion techniques to the supersymmetric case, a closed set of operator product identities about the transfer matrices are derived, which allows us to give the eigenvalues in terms of
homogeneous or inhomogeneous $T-Q$ relations. The method and results provided in this paper can be generalized to other high rank supersymmetric quantum integrable models.
\vspace{1truecm} \noindent {\it PACS:} 75.10.Pq, 02.30.Ik, 71.10.Pm
\noindent {\it Keywords}: Bethe Ansatz; Lattice Integrable Models; $T-Q$ Relation
\end{abstract}
\newpage
\section{Introduction}
\label{intro} \setcounter{equation}{0}
Quantum integrable models \cite{Bax82} play important roles in fields of theoretical physics, condensed matter physics, field theory and
mathematical physics, since exact solutions of those models may provide useful benchmarks to understand a variety of many-body problems.
During the past several decades, much attention has been paid to obtain exact solutions of integrable systems with unusual boundary conditions.
With the development of topological physics and string theory, study on off-diagonal boundaries becomes an interesting issue.
Many interesting phenomena such as edge states, Majorana zero modes, and topological excitations have been found.
Due to the off-diagonal elements contained in the boundaries, the numbers of particles with different intrinsic degrees of freedom are
no longer conserved and the usual $U(1)$ symmetry is broken. This leads to the absence of a proper reference state, which is crucial in the conventional Bethe ansatz scheme. To overcome this problem,
several interesting methods \cite{Alc87,Skl88, Nep02, Cao03, Yan04, Gie05, Yan07, Bas06, Bas07, Bas10, Bas13,Fra08, Fra11, Nic13,cao13, wang15, Bel13, Bel15, Pim15, Ava15} have been proposed. A remarkable one is the off-diagonal Bethe ansatz (ODBA) \cite{cao13, wang15}, which allows us to construct the exact spectrum systematically. The nested ODBA has also been developed to deal with models based on different Lie algebras such as $A_n$ \cite{Cao14, Cao15}, $A_2^{(2)}$ \cite{Hao14}, $B_2$ \cite{Li_119}, $C_2$ \cite{Li_219} and $D_3^{(1)}$ \cite{Li_319}.
Nevertheless, there exists another kind of high-rank integrable models, related to superalgebras \cite{Fra00}, such as the
$SU(m|n)$ model, the Hubbard model, and the supersymmetric $t-J$ model.
The $SU(m|n)$ model has many applications in the AdS/CFT correspondence \cite{Mal99,Bei12}, while the Hubbard and $t-J$ models have many applications in the theory of strongly correlated
electrons. These models with $U(1)$ symmetry have been studied extensively \cite{yb, Ess05,Perk81,Vega91,Yue96,Yue1,Yue2,Yue3,Yue4,Yue5,Mar97}.
A general method to approach
such models with off-diagonal boundaries is still missing.
In this paper, we develop a graded version of nested ODBA to study supersymmetric integrable models (integrable models associated with superalgebras). As an example, the $SU(2|2)$ model with both periodic and off-diagonal boundaries is studied.
The structure of the paper is as follows. In section 2, we study the $SU(2|2)$ model with periodic boundary condition.
A closed set of operator identities is constructed by using the fusion procedure. These identities allow us to characterize the eigenvalues of the transfer matrices in terms of
homogeneous $T-Q$ relation. In section 3,
we study the model with generic open boundary conditions. It is demonstrated that similar identities can be constructed and the spectrum can be expressed in terms of an inhomogeneous $T-Q$ relation. Section 4 is devoted to concluding remarks. Some technical details can be found in the appendices.
\section{$SU(2|2)$ model with periodic boundary condition}
\label{c2} \setcounter{equation}{0}
\subsection{The system}
Let ${V}$ denote a $4$-dimensional graded linear space with a basis $\{|i\rangle|i=1,\cdots,4\}$, where the Grassmann parities
are $p(1)=0$, $p(2)=0$, $p(3)=1$ and $p(4)=1$, which endows the fundamental
representation of the $SU(2|2)$ Lie superalgebra. The dual space is spanned by the dual basis $\{\langle i|\,\,|i=1,\cdots,4\}$ with an inner product: $\langle i|j\rangle=\delta_{ij}$.
Let us further introduce the ${Z_{2}}$-graded $N$-tensor space ${V}\otimes { V}\otimes\cdots{V}$ which has a basis $\{|i_1,i_2,\cdots,i_N\rangle=|i_N\rangle_N\,\cdots|i_2\rangle_2\,|i_1\rangle_1\,|\,i_l=1,\cdots,4;\,l=1,\cdots,N\}$,
and its dual with a basis $\{\langle i_1,i_2,\cdots,i_N|=\langle i_1|_1\,\langle i_2|_2\,\cdots\langle i_N|_N\,|\,i_l=1,\cdots,4;\,l=1,\cdots,N\}$.
For the matrix $A_j\in {\rm End}({ V_j})$, $A_j$ is a super
embedding operator in the ${Z_{2}}$-graded $N$-tensor space ${V}\otimes { V}\otimes\cdots{V}$, which acts as $A$ on the $j$-th
space and as identity on the other factor spaces. For the matrix $R_{ij}\in {\rm
End}({ V_i}\otimes { V_j})$, $R_{ij}$ is a super embedding
operator in the ${Z_{2}}$ graded tensor space, which acts as
identity on the factor spaces except for the $i$-th and $j$-th ones.
The super tensor product of two operators
is the graded one satisfying the rule\footnote{For $A=\sum_{\alpha,\,\beta}A_{\beta}^{\alpha}{|\beta\rangle}{\langle\alpha|}$ and $B=\sum_{\delta,\,\gamma}B_{\delta}^{\gamma}{|\delta\rangle}{\langle\gamma|}$, the super tensor product $A\otimes B=\sum_{\a,\b,\gamma,\delta}(A_{\beta}^{\alpha}{|\beta\rangle}_1{\langle\alpha|}_{1})\,\, (B_{\delta}^{\gamma}{|\delta\rangle}_2{\langle\gamma|}_2)=\sum_{\a,\b,\gamma,\delta}(-1)^{p(\delta)[p(\alpha)+p(\beta)]}A_{\beta}^{\alpha}B_{\delta}^{\gamma}{|\delta\rangle}_2
{|\beta\rangle}_1\,{\langle\alpha|}_{1}{\langle\gamma|}_2$.} $(A\otimes B)_{\beta \delta}^{\alpha
\gamma}=(-1)^{[p(\alpha)+p(\beta)]p(\delta)}A^{\alpha}_{\beta}B^{\gamma}_{\delta}$
\cite{Gra13}.
The supersymmetric $SU(2|2)$ model is described by the $16\times 16$ $R$-matrix
\begin{equation}
\label{rm}
R_{12}(u)
=\left(
\begin{array}{cccc|cccc|cccc|cccc}
u+\eta & & & & & & & & & & & & & & & \\
& u & & & \eta & & & & & & & & & & & \\
& & u & & & & & & \eta & & & & & & & \\
& & & u & & & & & & & & & \eta & & & \\
\hline
& \eta & & & u & & & & & & & & & & & \\
& & & & & u+\eta & & & & & & & & & & \\
& & & & & & u & & & \eta & & & & & & \\
& & & & & & & u & & & & & & \eta & & \\
\hline
& & \eta & & & & & & u & & & & & & & \\
& & & & & & \eta & & & u & & & & & & \\
& & & & & & & & & & u-\eta & & & & & \\
& & & & & & & & & & & u & & & -\eta & \\
\hline
& & & \eta & & & & & & & & & u & & & \\
& & & & & & & \eta & & & & & & u & & \\
& & & & & & & & & & & -\eta & & & u & \\
& & & & & & & & & & & & & & & u-\eta \\
\end{array}
\right),
\end{equation}
where $u$ is the spectral parameter and $\eta$ is the crossing parameter.
The $R$-matrix (\ref{rm}) enjoys the following properties
\begin{eqnarray}
{\rm regularity}&:&R _{12}(0)=\eta P_{12},\nonumber\\[4pt]
{\rm unitarity}&:&R_{12}(u)R_{21}(-u) = \rho_1(u)\times {\rm id},\nonumber\\[4pt]
{\rm crossing-unitarity}&:&R_{12}^{st_1}(-u)R_{21}^{st_1}(u)=\rho_2(u)\times {\rm id},\nonumber
\end{eqnarray}
where $P_{12}$ is the $Z_2$-graded permutation operator with the definition
\begin{eqnarray}
P_{\beta_{1}\beta_{2}}^{\alpha_{1}\alpha_{2}}=(-1)^{p(\alpha_{1})p(\alpha_{2})} \delta_{\alpha_{1}\beta_{2}}
\delta_{\beta_{1}\alpha_{2}},
\end{eqnarray}
$R_{21}(u)=P_{12}R_{12}(u)P_{12}$,
$st_i$ denotes the super transposition in the $i$-th space
$(A^{st_i})_{ij}=A_{ji}(-1)^{p(i)[p(i)+p(j)]}$, and the functions $\rho_1(u)$ and
$\rho_2(u)$ are given by
\begin{eqnarray}
\rho_1(u)=-({u}-\eta)({u}+\eta), \quad
\rho_2(u)=-u^2.
\end{eqnarray}
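It is perhaps worth noting that, in terms of the graded permutation operator $P_{12}$ defined above, the $R$-matrix (\ref{rm}) can be written compactly (as may be checked entry by entry) as
\begin{eqnarray}
R_{12}(u)=u\,{\rm id}+\eta\, P_{12},\nonumber
\end{eqnarray}
from which the regularity and unitarity properties listed above follow immediately, since $P_{12}^2={\rm id}$.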
The $R$-matrix (\ref{rm}) satisfies the graded Yang-Baxter equation (GYBE) \cite{Kul1, Kul86}
\begin{eqnarray}
R_{12}(u-v)R_{13}(u)R_{23}(v)=R_{23}(v)R_{13}(u)R_{12}(u-v)\label{YBE}.
\end{eqnarray}
In terms of the matrix entries, GYBE (\ref{YBE}) reads
\bea
&&\sum_{\beta_1,\beta_2,\beta_3}R(u-v)_{\beta_1\beta_2}^{\alpha_1\alpha_2}R(u)_{\gamma_1\beta_3}^{\beta_1\alpha_3}
R(v)_{\gamma_2\gamma_3}^{\beta_2\beta_3}(-1)^{(p(\beta_1)+p(\gamma_1))p(\beta_2)}\no\\[4pt]
&&=\sum_{\beta_1,\beta_2,\beta_3}R(v)_{\beta_2\beta_3}^{\alpha_2\alpha_3}R(u)_{\beta_1\gamma_3}^{\alpha_1\beta_3}
R(u-v)_{\gamma_1\gamma_2}^{\beta_1\beta_2}(-1)^{(p(\alpha_1)+p(\beta_1))p(\beta_2)}.
\eea
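As a quick numerical sanity check of these properties (not part of the analytical derivation), the following small numpy sketch builds the graded permutation operator from the parities $p(1)=p(2)=0$, $p(3)=p(4)=1$, reconstructs the $R$-matrix, and verifies the regularity and unitarity relations for arbitrary illustrative values of $u$ and $\eta$. A direct check of the GYBE itself would additionally require implementing the graded tensor-product sign factors of the component form above and is omitted here.
\begin{verbatim}
import numpy as np

eta = 0.7                      # arbitrary illustrative value
p = np.array([0, 0, 1, 1])     # Grassmann parities of the basis

# Graded permutation P: |a1 a2> -> (-1)^{p(a1)p(a2)} |a2 a1>
P = np.zeros((16, 16))
for a1 in range(4):
    for a2 in range(4):
        P[4 * a2 + a1, 4 * a1 + a2] = (-1.0) ** (p[a1] * p[a2])

def R(u):
    # Entry by entry this reproduces the explicit 16 x 16 matrix above.
    return u * np.eye(16) + eta * P

u = 0.31
R21 = P @ R(-u) @ P
rho1 = -(u - eta) * (u + eta)
assert np.allclose(R(0.0), eta * P)                  # regularity
assert np.allclose(R(u) @ R21, rho1 * np.eye(16))    # unitarity
print("regularity and unitarity hold")
\end{verbatim}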
For the periodic boundary condition, we introduce the ``row-to-row" (or one-row) monodromy matrix
$T_0(u)$
\begin{eqnarray}
T_0 (u)=R _{01}(u-\theta_1)R _{02}(u-\theta_2)\cdots R _{0N}(u-\theta_N),\label{T1}
\end{eqnarray}
where the subscript $0$ denotes the auxiliary space $V_0$, while
the other tensor space $V^{\otimes N}$ is the physical or quantum space, $N$ is the number of sites and $\{\theta_j|j=1,\cdots,N\}$ are the inhomogeneous parameters.
In the auxiliary space, the monodromy matrix (\ref{T1}) can be written as a $4\times 4$ matrix with operator-valued elements
acting on ${\rm V}^{\otimes N}$.
The explicit forms of the elements of monodromy matrix (\ref{T1}) are
\bea
\Big\{[T_0(u)]^{a}_b\Big\}_{\beta_1\cdots\beta_N}^{\alpha_1\cdots\alpha_N}&=&\sum_{c_2,\cdots,c_N}R_{0N}(u)_{c_N\beta_N}^{a\alpha_N}\cdots R_{0j}(u)_{c_j\beta_j}^{c_{j+1}\alpha_j}\cdots R_{01}(u)_{b\beta_1}^{c_2\alpha_1}\no\\[4pt]
&&\times (-1)^{\sum_{j=2}^{N}(p(\alpha_j)+p(\beta_j))\sum_{i=1}^{j-1}p(\alpha_i)}.
\eea
The monodromy matrix $T_0(u)$ satisfies the graded Yang-Baxter relation
\begin{eqnarray}
R _{12}(u-v)T_1 (u) T_2 (v) =T_2 (v)T_1 (u)R_{12}(u-v).
\label{ybe}
\end{eqnarray}
The transfer matrix $t_p(u)$ of the system is defined as the super partial trace of the monodromy matrix in the auxiliary space
\begin{eqnarray}
t_p(u)=str_0\{T_0 (u)\}=\sum_{\alpha=1}^{4}(-1)^{p(\alpha)}[T_0(u)]_{\alpha}^{\alpha}.
\end{eqnarray}
From the graded Yang-Baxter relation (\ref{ybe}), one can prove that the transfer matrices with different spectral parameters
commute with each other, $[t_p(u), t_p(v)]=0$. Thus $t_p(u)$ serves as the
generating functional of all the conserved quantities, which ensures the
integrability of the system. The model Hamiltonian is constructed by \cite{Yue1}
\bea
H_p=\frac{\partial \ln t_p(u)}{\partial u}|_{u=0,\{\theta_j\}=0}.\label{peri-Ham}
\eea
\subsection{Fusion}
One of the remarkable properties of the $R$-matrix is that it may degenerate into projection
operators at some special points, which makes it possible to carry out the fusion procedure
\cite{Kul81, Kul82, Kar79, Kir86, Kir87, Tsu97}. It is easy to check that the $R$-matrix (\ref{rm}) has two degenerate points.
The first one is $u=\eta$, at which we have
\bea
R _{12}(\eta)= 2\eta P_{12}^{(8)},\label{Int-R1}\eea
where $P_{12}^{(8)}$ is an 8-dimensional supersymmetric projector
\bea
P_{12}^{(8)}=\sum_{i=1}^{8}|f_i\rangle \langle f_i|, \label{1-project}\eea
and the corresponding basis vectors are
\bea
&&|f_1\rangle= |11\rangle, \quad |f_2\rangle =\frac{1}{\sqrt{2}}(|12\rangle +|21\rangle ), \quad|f_3\rangle =|22\rangle,\nonumber\\
&&|f_4\rangle=\frac{1}{\sqrt{2}}(|34\rangle -|43\rangle ),\quad |f_5\rangle=\frac{1}{\sqrt{2}}(|13\rangle +|31\rangle ),\quad|f_6\rangle=\frac{1}{\sqrt{2}}(|14\rangle +|41\rangle ),\nonumber\\
&&|f_7\rangle=\frac{1}{\sqrt{2}}(|23\rangle +|32\rangle ),\quad|f_8\rangle= \frac{1}{\sqrt{2}}(|24\rangle +|42\rangle ),\no
\eea
with the corresponding parities
\bea
p(f_1)=p(f_2)=p(f_3)=p(f_4)=0, \quad p(f_5)=p(f_6)=p(f_7)=p(f_8)=1. \no
\eea
The operator $P_{12}^{(8)}$ projects the original 16-dimensional tensor space $V_1\otimes V_2$ into
a new 8-dimensional projected space
spanned by $\{|f_i\rangle|i=1,\cdots,8\}$.
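Equivalently, since the $R$-matrix (\ref{rm}) coincides entry by entry with $u\,{\rm id}+\eta P_{12}$, the degeneration $R_{12}(\eta)=\eta({\rm id}+P_{12})$ shows that $P_{12}^{(8)}=\frac{1}{2}({\rm id}+P_{12})$ is simply the projector onto the $+1$ eigenspace of the graded permutation operator, which is exactly the 8-dimensional space spanned by $\{|f_i\rangle\}$.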
Taking the fusion by the operator (\ref{1-project}), we construct the fused $R$-matrices
\bea
&&R_{\langle 12\rangle 3}(u)=(u+\frac{1}{2}\eta)^{-1}P^{ (8) }_{12}R _{23}(u-\frac{1}{2}\eta)R _{13}(u+\frac{1}{2}\eta)P^{ (8) }_{12}\equiv R_{\bar 1 3}(u), \label{hhgg-1}\\[4pt]
&&R_{3 \langle 21\rangle}(u)=(u+\frac{1}{2}\eta)^{-1}P^{ (8) }_{21}R _{32}(u-\frac{1}{2}\eta)R _{31}(u+\frac{1}{2}\eta)P^{ (8) }_{21}\equiv R_{3\bar 1}(u), \label{hhgg-2}
\eea
where $P^{ (8) }_{21}$ can be obtained from $P^{ (8) }_{12}$ by exchanging $V_1$ and $V_2$. For simplicity, we denote the projected space as
$V_{\bar 1}=V_{\langle12\rangle}=V_{\langle21\rangle}$.
The fused $R$-matrix ${R}_{\bar{1}2}(u)$ is a $32\times 32$ matrix defined in the tensor space $V_{\bar 1}\otimes V_2$ and has the properties
\bea
&&R_{\bar{1}2}(u)R_{2\bar{1}}(-u)=\rho_3(u)\times {\rm id}, \no\\[4pt]
&&R_{\bar{1}2}(u)^{st_{\bar 1}} R_{2\bar{1}}(-u)^{st_{\bar 1}}=\rho_4(u)\times {\rm id},
\eea
where
\bea
\rho_3(u)=-(u+\frac{3}{2}\eta)(u-\frac{3}{2}\eta),\quad \rho_4(u)=-(u+\frac{1}{2}\eta)(u-\frac{1}{2}\eta).
\eea
From GYBE (\ref{YBE}), one can prove that the following fused graded Yang-Baxter equations hold
\bea
R_{\bar{1}2}(u-v) R_{\bar{1}3}(u) R_{23}(v)
=R_{23}(v)R_{\bar{1}3}(u)R_{\bar{1}2}(u-v).\label{fuse-qybe1}
\eea
It is easy to check that the elements of the fused $R$-matrices $R_{\bar{1}2}(u)$ and $R_{2\bar{1}}(u)$
are degree-one polynomials in $u$.
At the point of $u=-\frac{3}{2}\eta$, the fused $R$-matrix $R_{\bar{1}2}(u)$ can also be written as a projector
\bea R_{\bar{1}2}(-\frac{3}{2}\eta)= -3\eta P^{(20) }_{{\bar 1}2},\label{Fusion-5-4}
\eea
where $P^{(20) }_{\bar{1} 2}$ is a 20-dimensional supersymmetric projector
\bea
P^{(20) }_{\bar{1}2}=\sum_{i=1}^{20} |\phi_i\rangle \langle \phi_i|, \label{cjjc}\eea
with the basis vectors
\bea
&&|\phi_1\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|f_1\rangle\otimes|2\rangle -|f_2\rangle\otimes|1\rangle),\quad |\phi_2\rangle=\frac{1}{\sqrt{3}}( |f_2\rangle\otimes|2\rangle -\sqrt{2}|f_3\rangle\otimes|1\rangle),\nonumber\\[4pt]
&&|\phi_{3}\rangle=\frac{1}{\sqrt{6}}(2|f_6\rangle\otimes|3\rangle+|f_5\rangle\otimes|4\rangle +|f_4\rangle\otimes|1\rangle ),\quad|\phi_{4}\rangle=\frac{1}{\sqrt{2}}(|f_5\rangle\otimes|4\rangle -|f_4\rangle\otimes|1\rangle ),\nonumber\\[4pt]
&&|\phi_{5}\rangle =\frac{1}{\sqrt{6}}(|f_8\rangle\otimes|3\rangle+2|f_4\rangle\otimes|2\rangle -|f_7\rangle\otimes|4\rangle ),\quad |\phi_{6}\rangle=\frac{1}{\sqrt{2}}(|f_7\rangle\otimes|4\rangle +|f_8\rangle\otimes|3\rangle ),\nonumber\\[4pt]
&&|\phi_{7}\rangle =|f_5\rangle\otimes|3\rangle ,\quad |\phi_{8}\rangle=|f_7\rangle\otimes|3\rangle,\quad |\phi_{9}\rangle=|f_6\rangle\otimes|4\rangle ,\quad|\phi_{10}\rangle =|f_8\rangle\otimes|4\rangle,\nonumber\\[4pt]
&&|\phi_{11}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|f_1\rangle\otimes|3\rangle -|f_5\rangle\otimes|1\rangle ),\quad |\phi_{12}\rangle =\frac{1}{\sqrt{3}}( \sqrt{2}|f_1\rangle\otimes|4\rangle -|f_6\rangle\otimes|1\rangle),\nonumber\\[4pt]
&&|\phi_{13}\rangle =\frac{1}{\sqrt{6}}(|f_7\rangle\otimes|1\rangle+|f_2\rangle\otimes|3\rangle -2|f_5\rangle\otimes|2\rangle ),\quad |\phi_{14}\rangle=\frac{1}{\sqrt{2}}(|f_2\rangle\otimes|3\rangle -|f_7\rangle\otimes|1\rangle )\nonumber\\[4pt]
&&|\phi_{15}\rangle =\frac{1}{\sqrt{6}}(|f_8\rangle\otimes|1\rangle+|f_2\rangle\otimes|4\rangle -2|f_6\rangle\otimes|2\rangle ),\quad |\phi_{16}\rangle=\frac{1}{\sqrt{2}}(|f_2\rangle\otimes|4\rangle -|f_8\rangle\otimes|1\rangle ),\nonumber\\[4pt]
&&|\phi_{17}\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|f_3\rangle\otimes|3\rangle -|f_7\rangle\otimes|2\rangle ),\quad|\phi_{18}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|f_3\rangle\otimes|4\rangle -|f_8\rangle\otimes|2\rangle ),\nonumber\\[4pt]
&&|\phi_{19}\rangle =|f_4\rangle\otimes|3\rangle ,\quad |\phi_{20}\rangle=|f_4\rangle\otimes|4\rangle.\no
\eea
The corresponding parities of the basis vectors are
\bea
p(\phi_1)=p(\phi_2)=\cdots =p(\phi_{10})=0,\quad p(\phi_{11})=p(\phi_{12})=\cdots =p(\phi_{20})=1. \no
\eea
The operator $P^{(20)}_{{\bar 1}2}$ is a projector on the 32-dimensional product space $V_{\bar 1}\otimes V_2$ which projects $V_{\bar 1}\otimes V_2$ into
its 20-dimensional subspace
spanned by $\{|\phi_i\rangle, i=1,\cdots,20\}$.
Taking the fusion by the projector $P^{(20)}_{{\bar 1}2}$, we obtain another new fused $R$-matrix
\bea &&
{R}_{\langle {\bar 1}2\rangle 3}(u)
=(u-\eta)^{-1}P^{(20) }_{2\bar{1}}R_{\bar{1}3}(u+\frac{1}{2}\eta) R_{23}(u-\eta)P^{(20)}_{2\bar{1} }\equiv {R}_{\tilde 1 3}(u), \label{fu-2}\\[4pt]
&&{R}_{3\langle 2{\bar 1}\rangle}(u)
=(u-\eta)^{-1}P^{(20) }_{{\bar 1}2}R_{3\bar{1}}(u+\frac{1}{2}\eta) R_{32}(u-\eta)P^{(20)}_{{\bar 1}2}\equiv {R}_{3\tilde 1}(u), \label{fu-22}
\eea
where $P^{(20)}_{2{\bar 1}}$ can be obtained from $P^{(20)}_{{\bar 1}2}$ by exchanging $V_{\bar 1}$ and $V_2$.
For simplicity, we denote the projected subspace as $V_{\tilde 1}=V_{\langle\bar 12\rangle}=V_{\langle2\bar 1\rangle}$.
The fused $R$-matrix $R_{\tilde{1}2}(u)$ is an $80\times 80$ matrix defined in the tensor space $V_{\tilde 1}\otimes V_2$ and
satisfies the following graded Yang-Baxter equation
\bea
R_{{\tilde 1}2}(u-v) R_{{\tilde 1}3}(u) R_{{2}3}(v)= R_{{2}3}(v) R_{{\tilde 1}3}(u)R_{{\tilde 1}2}(u-v).\label{sdfu-22}
\eea
The elements of the fused $R$-matrix $R_{\tilde{1} 2}(u)$ are also degree-one polynomials of $u$.
The second degenerate point of the $R$-matrix (\ref{rm}) is $u=-\eta$, at which we have
\bea
R_{12}(-\eta)= -2\eta \bar P_{12}^{(8)}=-2\eta(1-P_{12}^{(8)}),\label{2-pewrerroject}
\eea
where $\bar P_{12}^{(8)}$ is an 8-dimensional supersymmetric projector in terms of
\bea
\bar P_{12}^{(8)}=\sum_{i=1}^{8}|g_i\rangle \langle g_i|, \label{2-project}\eea
with
\bea
&&|g_1\rangle= \frac{1}{\sqrt{2}}(|12\rangle -|21\rangle ), \quad |g_2\rangle=|33\rangle, \quad |g_3\rangle= \frac{1}{\sqrt{2}}(|34\rangle +|43\rangle ),\nonumber\\
&&|g_4\rangle=|44\rangle,\quad |g_5\rangle =\frac{1}{\sqrt{2}}(|13\rangle -|31\rangle ), \quad |g_6\rangle=\frac{1}{\sqrt{2}}(|14\rangle -|41\rangle ) \nonumber\\
&&|g_7\rangle=\frac{1}{\sqrt{2}}(|23\rangle -|32\rangle ),\quad |g_8\rangle =\frac{1}{\sqrt{2}}(|24\rangle -|42\rangle ). \label{fuse-q1ybe2}
\eea
The corresponding parities are
\bea
p(g_1)=p(g_2)=p(g_3)=p(g_4)=0,\quad p(g_5)=p(g_6)=p(g_7)=p(g_8)=1. \no
\eea
The operator $\bar P_{12}^{(8)}$ projects the 16-dimensional product space $V_1\otimes V_2$ into
a new 8-dimensional projected space spanned by $\{|g_i\rangle|i=1,\cdots,8\}$.
Taking the fusion by the projector $\bar P_{12}^{(8)}$, we obtain the fused $R$-matrices
\bea
&&R_{\langle 12\rangle^\prime 3}(u)=(u-\frac{1}{2}\eta)^{-1}\bar P^{ (8) }_{12}R _{23}(u+\frac{1}{2}\eta)R _{13}(u-\frac{1}{2}\eta)\bar P^{ (8) }_{12}\equiv R_{\bar{1}^\prime 3}(u), \label{hhgg-3}\\[4pt]
&&R_{3 \langle 21\rangle^\prime}(u)=(u-\frac{1}{2}\eta)^{-1}\bar P^{ (8) }_{21}R _{32}(u+\frac{1}{2}\eta)R _{31}(u-\frac{1}{2}\eta)\bar P^{ (8) }_{21}\equiv R_{3\bar{1}^\prime }(u).\label{hhgg-4}
\eea
For simplicity, we denote the projected space as
$V_{\bar 1^\prime}=V_{\langle12\rangle^\prime}=V_{\langle21\rangle^\prime}$.
The fused $R$-matrix $R_{\bar{1}^\prime 2}(u)$ is a $32\times 32$ matrix defined in the product space $V_{\bar{1}^\prime }\otimes V_2$ and possesses the properties
\bea
&&R_{\bar{1}^\prime 2}(u) R_{2\bar{1}^\prime }(-u)=\rho_5(u)\times {\rm id}, \no\\[4pt]
&&R_{\bar{1}^\prime 2}(u)^{st_{{\bar 1}^\prime }} R_{2\bar{1}^\prime }(-u)^{st_{{\bar 1}^\prime }}=\rho_6(u)\times {\rm id}, \no\\[6pt]
&&R_{\bar{1}^\prime 2}(u-v) R_{\bar{1}^\prime 3}(u) R_{23}(v)
=R_{23}(v)R_{\bar{1}^\prime 3}(u)R_{\bar{1}^\prime 2}(u-v),\label{fuse-qybe2}
\eea
where
\bea
\rho_5(u)=-(u-\frac{3}{2}\eta)(u+\frac{3}{2}\eta),\quad \rho_6(u)=-(u-\frac{1}{2}\eta)(u+\frac{1}{2}\eta).
\eea
Now we consider the fusions of $R_{\bar{1}^\prime 2}(u)$, which fall into two different cases: the fusion in the auxiliary space $V_{\bar 1^\prime}$ and
the fusion in the quantum space $V_2$. Both are necessary to close the fusion procedure.
We first consider the fusion in the auxiliary space. At the point $u=\frac{3}{2}\eta$, we have
\bea R_{\bar{1}^\prime 2}(\frac{3}{2}\eta)= 3\eta P^{(20) }_{\bar{1}^\prime 2},\label{Fusion-20-1}\eea
where $P^{(20) }_{\bar{1}^\prime 2}$ is a 20-dimensional supersymmetric projector with the form of
\bea
P^{(20) }_{\bar{1}^\prime 2}=\sum_{i=1}^{20} |\tilde{\phi}_i\rangle \langle \tilde{\phi}_i|, \label{xiao1}\eea
and the corresponding vectors are
\bea
&&|\tilde{\phi}_1\rangle =|g_1\rangle\otimes|1\rangle,\quad |\tilde{\phi}_2\rangle=|g_1\rangle\otimes|2\rangle,\nonumber\\[4pt]
&&|\tilde{\phi}_3\rangle=\frac{1}{\sqrt{2}}(|g_3\rangle\otimes|1\rangle -|g_5\rangle\otimes|4\rangle ),\quad |\tilde{\phi}_{4}\rangle=\frac{1}{\sqrt{6}}( |g_5\rangle\otimes|4\rangle+|g_3\rangle\otimes|1\rangle-2|g_6\rangle\otimes|3\rangle ),\nonumber\\[4pt]
&&|\tilde{\phi}_{5}\rangle=\frac{1}{\sqrt{2}}(|g_8\rangle\otimes|3\rangle -|g_7\rangle\otimes|4\rangle ) ,\quad |\tilde{\phi}_{6}\rangle=\frac{1}{\sqrt{6}}(2|g_3\rangle\otimes|2\rangle-|g_7\rangle\otimes|4\rangle -|g_8\rangle\otimes|3\rangle ),\nonumber\\[4pt]
&&|\tilde{\phi}_{7}\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|g_2\rangle\otimes|1\rangle -|g_5\rangle\otimes|3\rangle ),\quad |\tilde{\phi}_{8}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|g_2\rangle\otimes|2\rangle -|g_7\rangle\otimes|3\rangle ),\nonumber\\[4pt]
&&|\tilde{\phi}_{9}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|g_4\rangle\otimes|1\rangle -|g_6\rangle\otimes|4\rangle ),\quad|\tilde{\phi}_{10}\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|g_4\rangle\otimes|2\rangle -|g_8\rangle\otimes|4\rangle),\nonumber\\[4pt]
&&|\tilde{\phi}_{11}\rangle=|g_5\rangle\otimes|1\rangle,\quad|\tilde{\phi}_{12}\rangle =|g_6\rangle\otimes|1\rangle,\nonumber\\[4pt]
&&|\tilde{\phi}_{13}\rangle =\frac{1}{\sqrt{2}}(|g_7\rangle\otimes|1\rangle-|g_1\rangle\otimes|3\rangle ),\quad |\tilde{\phi}_{14}\rangle=\frac{1}{\sqrt{6}}(|g_7\rangle\otimes|1\rangle+2|g_5\rangle\otimes|2\rangle +|g_1\rangle\otimes|3\rangle )\nonumber\\[4pt]
&&|\tilde{\phi}_{15}\rangle=\frac{1}{\sqrt{2}}(|g_8\rangle\otimes|1\rangle -|g_1\rangle\otimes|4\rangle ),\quad |\tilde{\phi}_{16}\rangle=\frac{1}{\sqrt{6}}(|g_6\rangle\otimes|2\rangle +2|g_8\rangle\otimes|1\rangle +|g_1\rangle\otimes|4\rangle ),\nonumber\\[4pt]
&&|\tilde{\phi}_{17}\rangle=|g_7\rangle\otimes|2\rangle,\quad |\tilde{\phi}_{18}\rangle=|g_8\rangle\otimes|2\rangle,\nonumber\\[4pt]
&&|\tilde{\phi}_{19}\rangle=\frac{1}{\sqrt{3}}(|g_3\rangle\otimes|3\rangle-\sqrt{2}|g_2\rangle\otimes|4\rangle ),\quad|\tilde{\phi}_{20}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|g_4\rangle\otimes|3\rangle -|g_3\rangle\otimes|4\rangle).\no
\eea
The parities read
\bea
p(\tilde\phi_1)=p(\tilde\phi_2)=\cdots=p(\tilde\phi_{10})=0,\quad p(\tilde\phi_{11})=p(\tilde\phi_{12})=\cdots=p(\tilde\phi_{20})=1.\no
\eea
The operator $P^{(20)}_{{\bar 1}^\prime 2}$ projects the 32-dimensional product space $V_{{\bar 1}^\prime} \otimes V_2$ into
a 20-dimensional projected space spanned by $\{|\tilde \phi_i\rangle, i=1,\cdots,20\}$.
Taking the fusion by the projector $P^{(20)}_{{\bar 1}^\prime 2}$, we obtain the following fused $R$-matrices
\bea
&&{R}_{\langle {\bar 1}^\prime 2\rangle 3}(u)=
(u+\eta)^{-1}P^{(20) }_{2\bar{1}^\prime }R_{\bar{1}^\prime 3}(u-\frac{1}{2}\eta) R_{23}(u+\eta)P^{(20)}_{2\bar{1}^\prime }\equiv R_{{\tilde 1}^\prime 3}(u), \label{fu-4}\\[4pt]
&&{R}_{3\langle 2{\bar 1}^\prime\rangle}(u)=
(u+\eta)^{-1}P^{(20) }_{{\bar 1}^\prime 2}R_{3\bar{1}^\prime }(u-\frac{1}{2}\eta) R_{32}(u+\eta)P^{(20)}_{{\bar 1}^\prime 2}\equiv R_{3{\tilde 1}^\prime}(u). \label{fu-44}
\eea
For simplicity, we denote the projected space as
$V_{\tilde 1^\prime}=V_{\langle \bar 1^\prime 2\rangle}=V_{\langle 2 \bar 1^\prime \rangle}$.
The fused $R$-matrix $R_{\tilde{1}^\prime 2}(u)$ is an $80\times 80$ matrix defined in the product space $V_{{\tilde 1}^\prime}\otimes V_2$ and
satisfies the following graded Yang-Baxter equation
\bea
R_{{\tilde 1}^\prime 2}(u-v) R_{{\tilde 1}^\prime 3}(u) R_{{2}3}(v)
= R_{{2}3}(v) R_{{\tilde 1}^\prime 3}(u)R_{{\tilde 1}^\prime 2}(u-v).\label{fwusdfwa-44}
\eea
A remarkable fact is that after taking the correspondences
\bea
|\phi_i\rangle\longrightarrow|\psi_i\rangle,\quad |\tilde\phi_i\rangle\longrightarrow|\tilde\psi_i\rangle, \quad i=1,\cdots,20,\label{vec-corresp}
\eea
the two fused $R$-matrices $R_{\tilde 1 2}(u)$ given by (\ref{fu-2}) and $R_{{\tilde 1}^\prime 2}(u)$ given by (\ref{fu-4}) are identical,
\bea
R_{\tilde 1 2}(u)=R_{{\tilde 1}^\prime 2}(u),\label{peri-iden}
\eea
which allows us to close the recursive fusion processes.
The fusion of $R_{\bar{1}^\prime 2}(u)$ in the quantum space is carried out by the projector $P_{23}^{(8)}$, and the resulting fused $R$-matrix is
\bea
R_{{\bar 1}^\prime \langle 23\rangle}(u)= (u+\eta)^{-1}P_{23}^{(8)}R_{{\bar 1}^\prime 3}(u-\frac{1}{2}\eta)R_{{\bar 1}^\prime 2}(u+\frac{1}{2}\eta)P_{23}^{(8)}\equiv R_{{\bar 1}^\prime \bar 2}(u), \eea
which is a $64\times 64$ matrix defined in the space $V_{\bar{1}^\prime }\otimes V_{\bar 2}$ and satisfies the graded Yang-Baxter equation
\bea
R_{{\bar 1}^\prime\bar 2}(u-v)R_{{\bar 1}^\prime 3}(u)R_{\bar 2 3}(v)=R_{\bar 2 3}(v)R_{{\bar 1}^\prime 3}(u)R_{{\bar 1}^\prime\bar 2}(u-v),\label{sdfusdsd-22}
\eea
which will help us to find the complete set of conserved quantities.
\subsection{Operator product identities}
Now, we are ready to extend the fusion from one site to the whole system.
From the fused $R$-matrices given by (\ref{hhgg-1}), (\ref{fu-2}), (\ref{hhgg-3}) and (\ref{fu-4}), we construct the fused monodromy matrices as
\begin{eqnarray}
&&T_{\bar 0}(u)=R_{\bar 01}(u-\theta_1)R_{\bar 02}(u-\theta_2)\cdots R_{\bar 0N}(u-\theta_N), \no \\
&&T_{\bar 0^\prime}(u)=R_{\bar 0^\prime 1}(u-\theta_1)R_{\bar 0^\prime 2}(u-\theta_2)\cdots R_{\bar 0^\prime N}(u-\theta_N), \no \\
&&T_{\tilde 0}(u)=R_{\tilde 01}(u-\theta_1)R_{\tilde 02}(u-\theta_2)\cdots R_{\tilde 0N}(u-\theta_N), \no \\
&&T_{\tilde 0^\prime}(u)=R_{\tilde 0^\prime1}(u-\theta_1)R_{\tilde 0^\prime2}(u-\theta_2)\cdots R_{\tilde 0^\prime N}(u-\theta_N),\label{T6}
\end{eqnarray}
where the subscripts $\bar 0$, $\bar 0^\prime$, $\tilde 0$ and $\tilde 0^\prime $ mean the auxiliary spaces, and
the quantum spaces in all the monodromy matrices are the same. By using the graded Yang-Baxter equations (\ref{fuse-qybe1}), (\ref{sdfu-22}),
(\ref{fuse-qybe2}), (\ref{fwusdfwa-44}) and (\ref{sdfusdsd-22}), one can prove that the monodromy matrices
satisfy the graded Yang-Baxter relations
\begin{eqnarray}
&&R_{\bar 12} (u-v) T_{\bar 1}(u) T_2(v)= T_2(v) T_{\bar 1}(u) R_{\bar 12}(u-v), \no \\
&&R_{\bar 1^\prime 2} (u-v) T_{\bar 1^\prime }(u) T_2(v)= T_2(v) T_{\bar 1^\prime }(u) R_{\bar 1^\prime 2} (u-v), \no \\
&&R_{\bar 1^\prime \bar 2} (u-v) T_{\bar 1^\prime }(u) T_{\bar 2}(v)= T_{\bar 2}(v) T_{\bar 1^\prime }(u) R_{\bar 1^\prime \bar 2} (u-v), \no \\
&&R_{\tilde 12} (u-v) T_{\tilde 1}(u) T_2(v)= T_2(v) T_{\tilde 1}(u) R_{\tilde 12}(u-v), \no \\
&&R_{\tilde 1^\prime 2} (u-v) T_{\tilde 1^\prime }(u) T_2(v)= T_2(v) T_{\tilde 1^\prime }(u) R_{\tilde 1^\prime 2} (u-v). \label{yybb4}
\end{eqnarray}
Using the property that the $R$-matrices in the above equations can degenerate into
the projectors $P^{(8)}_{12}$, $\bar P^{(8)}_{12}$, $P^{(20)}_{\bar{1}2}$, $P^{(20)}_{\bar{1}^\prime 2}$, together with
the definitions (\ref{T6}),
we obtain the following fusion relations among the monodromy matrices
\bea
&&P^{ (8) }_{12}T_2 (u)T_1 (u+\eta)P^{(8) }_{12}=\prod_{l=1}^N
(u-\theta_l+\eta)T_{\bar 1}(u+\frac{1}{2}\eta), \no\\[4pt]
&&\bar P^{ (8) }_{12}T_2 (u)T_1 (u-\eta)\bar P^{(8) }_{12}
=\prod_{l=1}^N
(u-\theta_l-\eta)T_{\bar 1^\prime}(u-\frac{1}{2}\eta), \no\\[4pt]
&&P^{(20) }_{2\bar{1}} T_{\bar{1}} (u+\frac{1}{2}\eta) T_2(u-\eta)P^{(20)}_{2\bar{1}}
=\prod_{l=1}^N
(u-\theta_l-\eta){T}_{\tilde 1}(u),\no\\[4pt]
&&P^{(20) }_{2\bar{1}^\prime } T_{\bar{1}^\prime } (u-\frac{1}{2}\eta)T_2(u+\eta)P^{(20)
}_{2\bar{1}^\prime }=\prod_{l=1}^N
(u-\theta_l+\eta){T}_{\tilde 1^\prime }(u).\label{fut-6}
\eea
The fused transfer matrices are defined as the super partial traces of fused monodromy matrices in the auxiliary space
\bea
{t}^{(1)}_p(u)=str_{\bar 0} T_{\bar 0}(u), \; {t}^{(2)}_p(u)=str_{\bar 0^\prime} T_{\bar 0^\prime}(u), \;
\tilde{t}^{(1)}_p(u)=str_{\tilde 0} T_{\tilde 0}(u), \; \tilde{t}^{(2)}_p(u)=str_{\tilde 0^\prime} T_{\tilde 0^\prime }(u).\no
\eea
From Eq.(\ref{fut-6}), we know that these fused transfer matrices with certain spectral difference must satisfy some
intrinsic relations. We first consider the quantity
\bea
\hspace{-1.2truecm}&&\hspace{-1.2truecm}t_p(u)t_p(u+\eta)=str_{12}\{T_1(u)T_2(u+\eta)\}\no\\[4pt]
&&\hspace{8mm}=str_{12}\{(P_{12}^{(8)}+\bar P_{12}^{(8)})T_1(u)T_2(u+\eta)(P_{12}^{(8)}+\bar P_{12}^{(8)})\}\no\\[4pt]
&&\hspace{8mm}=str_{12}\{P_{12}^{(8)}T_1(u)T_2(u+\eta)P_{12}^{(8)}\}+str_{12}\{\bar P_{12}^{(8)}\bar P_{12}^{(8)}T_1(u)T_2(u+\eta)\bar P_{12}^{(8)}\}\no\\[4pt]
&&\hspace{8mm}=str_{12}\{P_{12}^{(8)}T_1(u)T_2(u+\eta)P_{12}^{(8)}\}+str_{12}\{\bar P_{12}^{(8)}T_2(u+\eta)T_1(u)\bar P_{12}^{(8)}\bar P_{12}^{(8)}\}\no\\[4pt]
&&\hspace{8mm}=\prod_{j=1}^{N}(u-\theta_j+\eta) t_p^{(1)}(u+\frac{1}{2}\eta)+\prod_{j=1}^{N}(u-\theta_j) t_p^{(2)}(u+\frac{1}{2}\eta).\label{fui-3tan}
\eea
Here we give some remarks. Both $V_1$ and $V_2$ are the 4-dimensional auxiliary spaces.
From Eq.(\ref{fui-3tan}), we see that the 16-dimensional auxiliary space $V_{1}\otimes V_2$
can be projected into two 8-dimensional subspaces, $V_{1}\otimes V_2=V_{\langle12\rangle}\oplus V_{\langle12\rangle^\prime}$.
One is achieved by the 8-dimensional projector $P_{12}^{(8)}$ defined in the subspace $V_{\langle12\rangle}\equiv V_{\bar 1}$,
and the other is achieved by the 8-dimensional projector $\bar P_{12}^{(8)}$ defined in the subspace $V_{\langle 12\rangle^\prime}\equiv V_{\bar 1^\prime}$.
The vectors in $P_{12}^{(8)}$ and those in $\bar P_{12}^{(8)}$ constitute the complete basis of $V_{1}\otimes V_2$, and all the vectors are orthogonal,
\bea
P_{12}^{(8)}+\bar P_{12}^{(8)}=1,~~P_{12}^{(8)}\bar P_{12}^{(8)}=0.\no
\eea
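Note that the dimension counting of this decomposition is consistent,
\bea
\dim(V_1\otimes V_2)=4\times 4=16=\dim V_{\bar 1}+\dim V_{\bar 1^\prime}=8+8.\no
\eea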
From Eq.(\ref{fui-3tan}), we also know that the product of two transfer matrices with fixed spectral difference can be written as the summation of
two fused transfer matrices $t_p^{(1)}(u)$ and $ t_p^{(2)}(u)$.
At the point $u=\theta_j-\eta$, the coefficient of the fused transfer matrix $ t_p^{(1)}(u)$ vanishes, while at
the point $u=\theta_j$, the coefficient of the fused transfer matrix $ t_p^{(2)}(u)$ vanishes.
Therefore, at these points only one of them contributes.
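For instance, evaluating Eq.(\ref{fui-3tan}) at $u=\theta_j$ (where the second product vanishes) and at $u=\theta_j-\eta$ (where the first product vanishes) gives, respectively,
\bea
&&t_p(\theta_j)t_p(\theta_j+\eta)=\prod_{l=1}^N(\theta_j-\theta_l+\eta)\, t_p^{(1)}(\theta_j+\frac{1}{2}\eta),\no\\[4pt]
&&t_p(\theta_j)t_p(\theta_j-\eta)=\prod_{l=1}^N(\theta_j-\theta_l-\eta)\, t_p^{(2)}(\theta_j-\frac{1}{2}\eta),\no
\eea
which are exactly the identities (\ref{futp-4-1}) and (\ref{futp-4-2}) given below.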
Motivated by Eq.(\ref{fut-6}), we also consider the quantities
\bea
\hspace{-0.8truecm}&&\hspace{-0.8truecm} t_p^{(1)}(u+\frac{1}{2}\eta)t_p(u-\eta)=str_{\bar 12}\{(P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)})T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)(P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)})\}\no\\[4pt]
&&=str_{\bar 12}\{P_{2\bar 1}^{(20)}T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)P_{2\bar 1}^{(20)}\}+str_{\bar 12}\{\tilde P_{2\bar 1}^{(12)}T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)\tilde P_{2\bar 1}^{(12)}\}\no\\[4pt]
&&=\prod_{j=1}^{N}(u-\theta_j-\eta)\tilde t_p^{(1)}(u)+\prod_{j=1}^{N}(u-\theta_j)\bar{t}_p^{(1)}(u), \label{fui-3tan-1}\\
\hspace{-0.8truecm}&&\hspace{-0.8truecm} t_p^{(2)}(u-\frac{1}{2}\eta)t_p(u+\eta)=str_{\bar 1^\prime 2}\{(P_{2\bar 1^\prime }^{(20)}
+\tilde P_{2\bar 1^\prime }^{(12)})T_{\bar 1^\prime }(u-\frac{1}{2}\eta)T_2(u+\eta)(P_{2\bar 1^\prime }^{(20)}+\tilde P_{2\bar 1^\prime }^{(12)})\}\no\\[4pt]
&&=str_{\bar 1^\prime 2}\{P_{2\bar 1^\prime }^{(20)}T_{\bar 1^\prime }(u-\frac{1}{2}\eta)T_2(u+\eta)P_{2\bar 1^\prime }^{(20)}\}
+str_{\bar 1^\prime 2}\{\tilde P_{2\bar 1^\prime }^{(12)}T_{\bar 1^\prime }(u-\frac{1}{2}\eta)T_2(u+\eta)\tilde P_{2\bar 1^\prime }^{(12)}\}\no\\[4pt]
&&=\prod_{j=1}^{N}(u-\theta_j+\eta)\tilde t_p^{(2)}(u)+\prod_{j=1}^{N}(u-\theta_j)\bar{t}_p^{(2)}(u).\label{fui-3tan-2}
\eea
During the derivation, we have used the relations
\bea
P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)}=1,~~P_{2\bar 1}^{(20)}\tilde P_{2\bar 1}^{(12)}=0, ~~
P_{2\bar 1^\prime}^{(20)}+\tilde P_{2\bar 1^\prime}^{(12)}=1,~~P_{2\bar 1^\prime}^{(20)}\tilde P_{2\bar 1^\prime}^{(12)}=0.\no
\eea
From Eq.(\ref{fui-3tan-1}), we see that the 32-dimensional auxiliary space $V_{\bar 1}\otimes V_2$
can be projected into a 20-dimensional subspace $V_{\langle \bar 12\rangle}\equiv V_{\tilde 1}$ by the projector $P_{\bar 12}^{(20)}$
and a 12-dimensional subspace $V_{\overline{\langle \bar 12\rangle}}$ by the projector $\tilde P_{\bar 12}^{(12)}$, $V_{\bar 1}\otimes V_2=V_{\langle \bar 12\rangle} \oplus
V_{\overline{\langle \bar 12\rangle}}$.
The vectors in $P_{\bar 12}^{(20)}$ and $\tilde P_{\bar 12}^{(12)}$ form a complete and orthogonal
basis.
Eq.(\ref{fui-3tan-1}) also gives that the quantity $t_p^{(1)}(u+\frac{1}{2}\eta)t_p(u-\eta)$
is the summation of two new fused transfer matrices $\tilde t_p^{(1)}(u)$ and $\bar{t}_p^{(1)}(u)$ with some coefficients.
In Eq.(\ref{fui-3tan-2}), the 32-dimensional auxiliary space $V_{\bar 1^\prime}\otimes V_2$ is
projected into a 20-dimensional and a 12-dimensional subspaces by the operators $P_{\bar 1^\prime 2}^{(20)}$ and $\tilde P_{\bar 1^\prime 2}^{(12)}$, respectively.
Thus the quantity $t_p^{(2)}(u-\frac{1}{2}\eta)t_p(u+\eta)$ is the summation of two fused transfer matrices $\tilde t_p^{(2)}(u)$ and $\bar{t}_p^{(2)}(u)$ with some coefficients.
The coefficient of $\tilde t_p^{(1)}(u)$ in Eq.(\ref{fui-3tan-1}) vanishes at the point $u=\theta_j+\eta$,
and that of $\tilde t_p^{(2)}(u)$ in Eq.(\ref{fui-3tan-2}) vanishes at $u=\theta_j-\eta$, while at the point $u=\theta_j$, the coefficients of $\bar{t}_p^{(1)}(u)$ in Eq.(\ref{fui-3tan-1})
and of $\bar{t}_p^{(2)}(u)$ in Eq.(\ref{fui-3tan-2}) are both zero.
Here, the explicit forms of $\tilde P_{\bar 12}^{(12)}$, $\tilde P_{\bar 1^\prime 2}^{(12)}$, $\bar{t}_p^{(1)}(u)$ and $\bar{t}_p^{(2)}(u)$ are omitted
because we do not use them.
Combining the above analysis, we obtain the operator product identities of the transfer matrices at the fixed points as
\bea && t_p(\theta_j)t_p (\theta_j+\eta)=\prod_{l=1}^N
(\theta_j-\theta_l+\eta) t^{(1)}_p(\theta_j+\frac{1}{2}\eta),\label{futp-4-1} \\[4pt]
&& t_p(\theta_j)t_p (\theta_j-\eta)=\prod_{l=1}^N
(\theta_j-\theta_l-\eta) t^{(2)}_p(\theta_j-\frac{1}{2}\eta),\label{futp-4-2} \\[4pt]
&& t^{(1)}_p(\theta_j+\frac{1}{2}\eta)t_p (\theta_j-\eta)=\prod_{l=1}^N
(\theta_j-\theta_l-\eta)\tilde t_{p}^{(1)}(\theta_j),\label{futp-4-3}\\[4pt]
&& t^{(2)}_p(\theta_j-\frac{1}{2}\eta)t_p (\theta_j+\eta)=\prod_{l=1}^N
(\theta_j-\theta_l+\eta)\tilde t_{p}^{(2)}(\theta_j), \quad j=1, \cdots, N.\label{futp-4-4}
\eea
From the property (\ref{peri-iden}), we obtain that the fused transfer matrices $\tilde{t}^{(1)}_p(u)$ and $\tilde{t}^{(2)}_p(u)$
are equal
\bea
\tilde{t}^{(1)}_p(u)=\tilde{t}^{(2)}_p(u). \label{futp-6}
\eea
With the help of Eqs. (\ref{futp-6}), (\ref{futp-4-3}) and (\ref{futp-4-4}),
we can obtain the constraint among $t_p(u)$, $ t^{(1)}_p(u)$ and $ t^{(2)}_p(u)$,
\bea
t^{(1)}_p (\theta_j+\frac{1}{2}\eta) t_p(\theta_j-\eta)
=\prod_{l=1}^N\frac{\theta_j-\theta_l-\eta}{\theta_j-\theta_l+\eta} t^{(2)}_p (\theta_j-\frac{1}{2}\eta) t_p(\theta_j+\eta).\label{peri-ope-3}
\eea
Then Eqs.(\ref{futp-4-1}), (\ref{futp-4-2}) and (\ref{peri-ope-3}) constitute the closed recursive fusion relations.
From the definitions, we know that the transfer matrices $t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$
are the operator polynomials of $u$ with degree $N-1$. Then, the $3N$ conditions (\ref{futp-4-1}), (\ref{futp-4-2}) and (\ref{peri-ope-3})
are sufficient to solve them.
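The counting is straightforward: each operator polynomial of degree $N-1$ contains $N$ unknown operator-valued coefficients, so the total number of unknowns is
\bea
\underbrace{N}_{t_p(u)}+\underbrace{N}_{t^{(1)}_p(u)}+\underbrace{N}_{t^{(2)}_p(u)}=3N,\no
\eea
which matches the number of relations in (\ref{futp-4-1}), (\ref{futp-4-2}) and (\ref{peri-ope-3}).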
From the graded Yang-Baxter relations (\ref{yybb4}), the transfer matrices $t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$
commute with each other, namely,
\bea
[t_p(u),{t}^{(1)}_p(u)]=[t_p(u),{t}^{(2)}_p(u)]=[{t}^{(1)}_p(u),{t}^{(2)}_p(u)]=0.
\eea
Therefore, they have common eigenstates and can be diagonalized simultaneously.
Let $|\Phi\rangle$ be a common eigenstate. Acting the transfer matrices on this eigenstate, we have
\bea
t_p(u)|\Phi\rangle=\Lambda_p(u)|\Phi\rangle,\quad
t_p^{(1)}(u)|\Phi\rangle= \Lambda_p^{(1)}(u)|\Phi\rangle,\quad
t_p^{(2)}(u)|\Phi\rangle=\Lambda_p^{(2)}(u)|\Phi\rangle,\no
\eea
where $\Lambda_p(u)$, ${\Lambda}^{(1)}_p(u)$ and ${\Lambda}^{(2)}_p(u)$ are the eigenvalues of
$t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$, respectively. Meanwhile, acting the operator product identities (\ref{futp-4-1}),
(\ref{futp-4-2}) and (\ref{peri-ope-3}) on the state $|\Phi\rangle$, we have the functional relations among these eigenvalues
\bea && \Lambda_p(\theta_j)\Lambda_p (\theta_j+\eta)=\prod_{l=1}^N
(\theta_j-\theta_l+\eta){\Lambda}^{(1)}_p(\theta_j+\frac{1}{2}\eta),\no \\[4pt]
&& \Lambda_p(\theta_j)\Lambda_p (\theta_j-\eta)=\prod_{l=1}^N
(\theta_j-\theta_l-\eta){\Lambda}^{(2)}_p(\theta_j-\frac{1}{2}\eta),\no \\[4pt]
&& \Lambda^{(1)}_p (\theta_j+\frac{1}{2}\eta) \Lambda_p(\theta_j-\eta)=\prod_{l=1}^N\frac{\theta_j-\theta_l-\eta}{\theta_j-\theta_l+\eta}
\Lambda^{(2)}_p (\theta_j-\frac{1}{2}\eta) \Lambda_p(\theta_j+\eta),\label{futpl-3}
\eea
where $j=1,2,\cdots,N$. Because the eigenvalues $\Lambda_p(u)$, ${\Lambda}^{(1)}_p(u)$ and ${\Lambda}^{(2)}_p(u)$
are the polynomials of $u$ with degree $N-1$, the above $3N$ conditions (\ref{futpl-3}) can determine these eigenvalues completely.
\subsection{$T-Q$ relations}
Let us introduce the $z$-functions
\begin{eqnarray}
z_p^{(l)}(u)=\left\{
\begin{array}{ll}
\displaystyle(-1)^{p(l)}Q^{(0)}_p(u)\frac{Q_p^{(l-1)}(u+\eta)Q_p^{(l)}(u-\eta)}{Q_p^{(l)}(u)Q_p^{(l-1)}(u)}, &l=1,2,\\[6mm]
\displaystyle(-1)^{p(l)}Q^{(0)}_p(u)\frac{Q_p^{(l-1)}(u-\eta)Q_p^{(l)}(u+\eta)}{Q_p^{(l)}(u)Q_p^{(l-1)}(u)}, &l=3,4,\end{array}
\right.
\end{eqnarray}
where the $Q$-functions are
\bea
&&Q_p^{(0)}(u)=\prod_{j=1}^{N}(u-\theta_j),\quad
Q^{(m)}_p(u)=\prod_{j=1}^{L_m}(u-\lambda_j^{(m)}), \quad m=1, 2, 3,\quad Q_p^{(4)}(u)=1,\no
\eea
and $\{L_m|m=1,2,3\}$ are the numbers of the Bethe roots $\{\lambda_j^{(m)}\}$.
According to the closed functional relations (\ref{futpl-3}), we construct the eigenvalues of the transfer matrices in terms of the homogeneous $T-Q$ relations
\bea &&\Lambda_p (u)=\sum_{l=1}^{4}z_p^{(l)}(u)
\no\\[4pt]
&&\Lambda_p^{(1)}(u)=\Big[Q_p^{(0)}(u+\frac{1}{2}\eta)\Big]^{-1}\Big\{\sum_{l=1}^{2}z_p^{(l)}(u+\frac{1}{2}\eta)z_p^{(l)}(u-\frac{1}{2}\eta)\no\\
&&~~~~~~~~~~~~~~~+\sum_{l=2}^{4}
\sum_{m=1}^{l-1}z_p^{(l)}(u+\frac{1}{2}\eta)z_p^{(m)}(u-\frac{1}{2}\eta)\Big\}
\no\\[4pt]
&&\Lambda_p^{(2)}(u)=\Big[Q_p^{(0)}(u-\frac{1}{2}\eta)\Big]^{-1}\Big\{\sum_{l=3}^{4}z_p^{(l)}(u+\frac{1}{2}\eta)z_p^{(l)}(u-\frac{1}{2}\eta)\no\\[4pt]
&&~~~~~~~~~~~~~~~+\sum_{l=2}^{4}
\sum_{m=1}^{l-1}z_p^{(l)}(u-\frac{1}{2}\eta)z_p^{(m)}(u+\frac{1}{2}\eta)\Big\}.\label{ep-3}
\eea
The regularity of the eigenvalues $\Lambda_p(u)$, $\Lambda_p^{(1)}(u)$ and $\Lambda_p^{(2)}(u)$ requires that the Bethe roots
$\{\lambda_j^{(m)}\}$ satisfy the Bethe ansatz equations (BAEs)
\bea &&\frac{Q_p^{(0)}(\lambda_j^{(1)}+\eta)}{Q_p^{(0)}(\lambda_j^{(1)})}=-\frac{Q_p^{(1)}(\lambda_j^{(1)}+\eta)Q_p^{(2)}(\lambda_j^{(1)}-\eta)}
{Q_p^{(2)}(\lambda_j^{(1)})Q_p^{(1)}(\lambda_j^{(1)}-\eta)},~~j=1,\cdots,L_1
\no\\
&&\frac{Q_p^{(1)}(\lambda_j^{(2)}+\eta)}{Q_p^{(1)}(\lambda_j^{(2)})}=\frac{Q_p^{(3)}(\lambda_j^{(2)}+\eta)}{Q_p^{(3)}(\lambda_j^{(2)})},~~j=1,\cdots,L_2
\no\\
&&\frac{Q_p^{(2)}(\lambda_j^{(3)}-\eta)}{Q_p^{(2)}(\lambda_j^{(3)})}=-\frac{Q_p^{(3)}(\lambda_j^{(3)}-\eta)}{Q_p^{(3)}(\lambda_j^{(3)}+\eta)},~~j=1,\cdots,L_3. \label{BAE-period-3}\eea
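For example, the second set of equations follows from requiring that the residue of $\Lambda_p(u)$ at $u=\lambda_j^{(2)}$ vanishes: only $z_p^{(2)}(u)$ and $z_p^{(3)}(u)$ have a pole there and, since $(-1)^{p(2)}$ and $(-1)^{p(3)}$ have opposite signs, the cancellation condition reads
\bea
{\rm Res}_{u=\lambda_j^{(2)}}\left[z_p^{(2)}(u)+z_p^{(3)}(u)\right]=0\;\Longrightarrow\;
\frac{Q_p^{(1)}(\lambda_j^{(2)}+\eta)}{Q_p^{(1)}(\lambda_j^{(2)})}=\frac{Q_p^{(3)}(\lambda_j^{(2)}+\eta)}{Q_p^{(3)}(\lambda_j^{(2)})}.\no
\eea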
We have verified that the above BAEs indeed guarantee that all the $T-Q$ relations (\ref{ep-3}) are polynomials and
satisfy the functional relations (\ref{futpl-3}). Therefore, we arrive at the conclusion
that $\Lambda_p(u)$, $\Lambda_p^{(1)}(u)$ and $\Lambda_p^{(2)}(u)$ given by (\ref{ep-3}) are indeed the eigenvalues of
the transfer matrices $t_p(u)$, ${t}^{(1)}_p(u)$, ${t}^{(2)}_p(u)$, respectively.
The eigenvalues of the Hamiltonian (\ref{peri-Ham}) are
\begin{eqnarray}
E_p= \frac{\partial \ln \Lambda_p(u)}{\partial
u}|_{u=0,\{\theta_j\}=0}.
\end{eqnarray}
\section{$SU(2|2)$ model with off-diagonal boundary reflections}
\setcounter{equation}{0}
\subsection{Boundary integrability}
In this section, we consider the system with open boundary conditions.
The boundary reflections are characterized by the reflection matrix $K^-(u)$ at one side and $K^+(u)$ at the other side.
The integrability requires that $K^-(u)$ satisfies the graded reflection equation (RE) \cite{Che84, Bra98}
\begin{equation}
R _{12}(u-v){K^{-}_{ 1}}(u)R _{21}(u+v) {K^{-}_{2}}(v)=
{K^{-}_{2}}(v)R _{12}(u+v){K^{-}_{1}}(u)R _{21}(u-v),
\label{r1}
\end{equation}
while $K^+(u)$ satisfies the graded dual RE
\begin{eqnarray}
R_{12}(v-u)K_1^+(u)R_{21}(-u-v)K_2^+(v)=K_2^+(v)R_{12}(-u-v)K_1^+(u)R_{21}(v-u).
\label{r2}
\end{eqnarray}
The general solution of reflection matrix $K_0^{-}(u)$ defined in the space $V_0$ satisfying the graded RE (\ref{r1}) is
\bea
K_0^{-}(u)=\xi+uM,\quad M=\left(\begin{array}{cccc}1 &c_1&0&0\\[6pt]
c_2&-1 &0&0\\[6pt]
0&0 &-1 &c_3\\[6pt]
0&0&c_4&1 \end{array}\right), \label{K-matrix-1}\eea
and the dual reflection matrix $K^+(u)$ can be obtained by the mapping
\begin{equation}
K_0^{ +}(u)=K_0^{ -}(-u)|_{\xi,c_i\rightarrow
\tilde{\xi},\tilde{c}_i }, \label{K-matrix-2}
\end{equation}
where the $\xi$, $\tilde{\xi}$ and $\{c_i, \tilde{c}_i |i=1,\cdots,4\}$
are the boundary parameters which describe
the boundary interactions, and the integrability requires
\bea
c_1c_2=c_3c_4,\quad
\tilde{c}_1\tilde{c}_2=\tilde{c}_3\tilde{c}_4.\no
\eea
The reflection matrices (\ref{K-matrix-1}) and (\ref{K-matrix-2}) have off-diagonal elements,
thus the numbers of ``quasi-particles'' with different intrinsic degrees of freedom are not conserved during the reflection processes.
Meanwhile, the $K^-(u)$ and $K^+(u)$ are not commutative,
$[K^-(u),K^+(v)]$ $\neq 0$, which means that they cannot be diagonalized simultaneously.
Thus it is quite hard to derive the exact solutions of the system via the conventional Bethe ansatz because of the
absence of a proper reference state. We will develop the graded nested ODBA to solve the system exactly.
For the open case, besides the standard ``row-to-row'' monodromy matrix $T_0(u)$ specified by (\ref{T1}), one needs to
consider the reflecting monodromy matrix
\begin{eqnarray}
\hat{T}_0 (u)=R_{N0}(u+\theta_N)\cdots R_{20}(u+\theta_{2}) R_{10}(u+\theta_1),\label{Tt11}
\end{eqnarray}
which satisfies the graded Yang-Baxter relation
\begin{eqnarray}
R_{ 12} (u-v) \hat T_{1}(u) \hat T_2(v)=\hat T_2(v) \hat T_{ 1}(u) R_{12} (u-v)\label{haishi0}.
\end{eqnarray}
The transfer matrix $t(u)$ is defined as
\begin{equation}
t(u)= str_0 \{K_0^{ +}(u)T_0 (u) K^{ -}_0(u)\hat{T}_0 (u)\}\label{tru}.
\end{equation}
The graded Yang-Baxter relations (\ref{ybe}), (\ref{haishi0}) and reflection equations (\ref{r1}), (\ref{r2})
lead to the fact that the transfer matrices with different spectral parameters commute with each other, $[t(u), t(v)]=0$. Therefore, $t(u)$ serves
as the generating function of all the conserved quantities and the system is integrable.
The model Hamiltonian with open boundary condition can be written out in terms of transfer matrix (\ref{tru}) as
\begin{eqnarray}
H=\frac{1}{2}\frac{\partial \ln t(u)}{\partial
u}|_{u=0,\{\theta_j\}=0}. \label{hh}
\end{eqnarray}
The hermiticity of Hamiltonian (\ref{hh}) further requires $c_1=c_2^{*}$ and $c_3=c_4^{*}$.
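We note that, combined with the integrability constraint $c_1c_2=c_3c_4$, these hermiticity conditions imply
\bea
c_1c_2=|c_1|^2=c_3c_4=|c_3|^2,\quad {\rm i.e.,}\quad |c_1|=|c_3|.\no
\eea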
\subsection{Fused reflection matrices}
In order to solve the eigenvalue problem of the transfer matrix (\ref{tru}), we should study the fusion of boundary reflection matrices \cite{Mez92, Zho96}. The main idea of the fusion for reflection matrices associated with a supersymmetric model is explained in Appendix A. Focusing on the supersymmetric $SU(2|2)$ model with the boundary reflection matrices (\ref{K-matrix-1}) and (\ref{K-matrix-2}), we can perform the fusion according to Eqs.(\ref{oled-3})-(\ref{oled-4}) or
(\ref{oled-13})-(\ref{oled-14}).
The two 8-dimensional fusions associated with the super projectors $P_{12}^{(8)}$ (\ref{1-project}) and $\bar P_{12}^{(8)}$ (\ref{2-project}) give
\bea
&&
{K}^{-}_{\bar 1}(u)=(u+\frac{1}{2}\eta)^{-1}P_{21}^{(8)}K_1^{-}(u-\frac{1}{2}\eta)R_{21}(2u)K_2^{-}(u+\frac{1}{2}\eta)P_{12}^{(8)},\no\\[4pt]
&&
{K}^{+}_{\bar 1}(u)=(u-\frac{1}{2}\eta)^{-1}P_{12}^{(8)}K_2^+(u+\frac{1}{2}\eta)R_{12}(-2u)K_1^+(u-\frac{1}{2}\eta)P_{21}^{(8)},\no\\[4pt]
&& {K}^{-}_{\bar 1^\prime }(u)=(u-\frac{1}{2}\eta)^{-1}\bar P_{21}^{(8)}K_1^{-}(u+\frac{1}{2}\eta)R_{21}(2u)K_2^{-}
(u-\frac{1}{2}\eta)\bar P_{12}^{(8)},\no\\[4pt]
&& K^{+}_{\bar 1^\prime}(u)=(u+\frac{1}{2}\eta)^{-1}\bar P_{12}^{(8)}K_2^{+}(u-\frac{1}{2}\eta)
R_{12}(-2u)K_1^{+}(u+\frac{1}{2}\eta)\bar P_{21}^{(8)}.\label{open-k4}
\eea
By direct calculation, we find that all the fused $K$-matrices are $8\times8$ matrices and their matrix elements are polynomials of $u$ with degree at most two.
The fused reflection $K$-matrices (\ref{open-k4}) satisfy the corresponding graded reflection equations. We can further fuse
the reflection matrices $K_{\bar 1}^{\pm}(u)$ [or $K_{\bar 1^\prime }^{\pm}(u)$] and $K_2^{\pm}(u)$
by means of the $20$-dimensional projector $ P_{{\bar 1}2}^{(20)}$ (\ref{cjjc}) [or $P_{{\bar 1}^\prime 2}^{(20)}$ (\ref{xiao1})]. The
resulting fused reflection matrices are
\bea && {K}^{-}_{\tilde 1}(u)=(u-
\eta)^{-1}
P_{2{\bar1}}^{(20)} K_{\bar{1}}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{{2}}\eta)K_{2}^{-}(u-\eta)P_{{\bar 1}2}^{(20)}, \no \\[4pt]
&& {K}^{+}_{\tilde 1}(u)=(2u+\eta)^{-1}
P_{{\bar 1}2}^{(20)} K_{2}^{+}(u-\eta)R_{{\bar 1}2}(-2u+\frac{1}{{2}}\eta) K_{\bar{1}}^{+}(u+\frac{1}{2}\eta)P_{2{\bar 1}}^{(20)}, \no \\[4pt]
&& {K}^{-}_{\tilde 1^\prime }(u)=(u+
\eta)^{-1}
P_{2{\bar1^\prime }}^{(20)} K_{\bar{1}^\prime }^{-}(u-\frac{1}{2}\eta)R_{2\bar 1^\prime}(2u+\frac{1}{{2}}\eta) K_{2}^{-}(u+\eta)P_{{\bar 1}^\prime 2}^{(20)},\no \\[4pt]
&& {K}^{+}_{\tilde 1^\prime }(u)=(2u-\eta)^{-1}
P_{{\bar 1}^\prime 2}^{(20)} K_{2}^{+}(u+\eta)R_{{\bar 1}^\prime 2}(-2u-\frac{1}{{2}}\eta) K_{\bar{1}^\prime }^{+}(u-\frac{1}{2}\eta)P_{2{\bar 1^\prime }}^{(20)}.\label{fuseref4} \eea
It is easy to check that the fused reflection matrices (\ref{fuseref4}) are $20\times 20$ matrices whose
matrix elements are polynomials of $u$ with degree at most three.
Moreover, keeping the correspondences (\ref{vec-corresp}) in mind, we have the important relations that the fused
reflection matrices defined in the projected subspace $V_{ \tilde 1}$ and that defined in the projected subspace $V_{ \tilde 1^\prime}$ are equal
\bea
{K}^{-}_{\tilde 1}(u)={K}^{-}_{\tilde 1^\prime }(u), \quad {K}^{+}_{\tilde 1}(u)={K}^{+}_{\tilde 1^\prime }(u), \label{k-iden}
\eea
which will be used to close the fusion processes with boundary reflections.
\subsection{Operator product identities}
For the model with open boundary condition, besides the fused monodromy matrices (\ref{T6}), we also need the fused reflecting monodromy matrices, which are
constructed as
\begin{eqnarray}
&&\hat{T}_{\bar 0}(u)=R_{ N\bar 0}(u+\theta_N)\cdots R_{2\bar 0}(u+\theta_2)R_{1\bar 0}(u+\theta_1), \no \\[4pt]
&&\hat{T}_{\bar 0^\prime}(u)=R_{N\bar 0^\prime}(u+\theta_N)\cdots R_{2\bar 0^\prime}(u+\theta_2)R_{1\bar 0^\prime}(u+\theta_1).\label{openT6}
\end{eqnarray}
The fused reflecting monodromy matrices satisfy the graded Yang-Baxter relations
\begin{eqnarray}
&&R_{1\bar 2} (u-v) \hat{T}_1(u) \hat{ T}_{\bar 2}(v) = \hat{ T}_{\bar 2}(v) \hat{T}_1(u) R_{1\bar 2} (u-v), \no \\[4pt]
&&R_{1\bar 2^\prime} (u-v) \hat{T}_1(u) \hat{T}_{\bar 2^\prime}(v) = \hat{ T}_{\bar 2^\prime}(v) \hat{T}_1(u) R_{1\bar 2^\prime} (u-v), \no \\[4pt]
&&R_{\bar 1\bar 2^\prime} (u-v) \hat{T}_{\bar 1}(u) \hat{T}_{\bar 2^\prime}(v) = \hat{ T}_{\bar 2^\prime}(v) \hat{T}_{\bar 1}(u) R_{\bar 1\bar 2^\prime} (u-v).\label{yyBB222}
\end{eqnarray}
The fused transfer matrices are defined as
\bea
&&t^{(1)}(u)= str_{\bar 0}\{K^{+}_{\bar{0}}(u) T_{\bar 0}(u) K^{-}_{\bar{0}}(u) \hat{T}_{\bar 0}(u)\},\no \\[4pt]
&&t^{(2)}(u)= str_{\bar 0^\prime}\{K^{+}_{\bar{0}^\prime}(u) T_{\bar 0^\prime}(u) K^{-}_{\bar{0}^\prime}(u) \hat{ T}_{\bar 0^\prime}(u)\}.\label{openTransfer-5}\eea
Using the same method as in the periodic case, we obtain the operator product identities among the fused transfer matrices as
\bea && t (\pm\theta_j)t (\pm\theta_j+\eta)=-\frac{1}{
4} \frac{(\pm\theta_j)(\pm\theta_j+\eta)
}{(\pm\theta_j+\frac{1}{{2}}\eta)^2}\nonumber\\[4pt]
&&\hspace{20mm}\times\prod_{l=1}^N
(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta) t^{(1)}(\pm\theta_j+\frac{1}{2}\eta),\label{openident1} \\[4pt]
&& t (\pm\theta_j)t (\pm\theta_j-\eta)=-\frac{1}{
4} \frac{(\pm\theta_j)(\pm\theta_j-\eta)
}{(\pm\theta_j-\frac{1}{{2}}\eta)^2}\nonumber\\[4pt]
&&\hspace{20mm}\times\prod_{l=1}^N
(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta) t^{(2)}(\pm\theta_j-\frac{1}{2}\eta),\label{openident2}\\[4pt]
&&
t (\pm\theta_j-\eta){ t}^{(1)}(\pm\theta_j+\frac{1}{{2}}\eta)=\frac{(\pm\theta_j+\frac{1}{2}\eta)^2(\pm\theta_j-\eta)}{(\pm\theta_j+\eta)
(\pm\theta_j-\frac{1}{{2}}\eta)^2}\no\\[4pt]&&~~~~~\times
\prod_{l=1}^N \frac{(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta) }{(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta)} t (\pm\theta_j+\eta){t}^{(2)}(\pm\theta_j-\frac{1}{{2}}\eta).\label{openident3}
\eea
The proof of the above operator identities is given in Appendix B.
From the definitions, we know that the transfer matrix $t(u)$ is an operator polynomial of $u$ with degree $2N+2$, while
the fused ones ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$ are operator polynomials of $u$ with degree $2N+4$.
Thus they can be completely determined by $6N+13$ independent conditions.
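Indeed, an operator polynomial of degree $2N+2$ has $2N+3$ unknown coefficients and each one of degree $2N+4$ has $2N+5$, so the total number of unknowns is
\bea
(2N+3)+2\,(2N+5)=6N+13.\no
\eea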
The recursive fusion relations (\ref{openident1}), (\ref{openident2}) and (\ref{openident3}) give $6N$ constraints, and the remaining $13$ can be
obtained by analyzing the values of the transfer matrices at some special points. After some direct calculation, we have
\bea
&& t(0)=0,\quad {t}^{(1)}(0)=0,\quad {t}^{(2)}(0)=0, \quad {t}^{(1)}(\frac{\eta}{2})=-2\xi \tilde{\xi} t(\eta), \no \\[4pt]
&& {t}^{(1)}(-\frac{\eta}{2})=-2\xi \tilde{\xi} t(-\eta), \quad
{t}^{(2)}(\frac{\eta}{2})=2\xi \tilde{\xi} t(\eta), \quad {t}^{(2)}(-\frac{\eta}{2})=2\xi \tilde{\xi} t(-\eta),\no \\[4pt]
&& \frac{\partial {t}^{(1)}(u)}{\partial u}|_{u=0}+ \frac{\partial {t}^{(2)}(u)}{\partial u}|_{u=0}=0. \label{specialvalue4}
\eea
Meanwhile, the asymptotic behaviors of $t(u)$, $ t^{(1)}(u)$ and $ t^{(2)}(u)$ read
\bea
&& t(u)|_{u\rightarrow\infty}=-[c_1\tilde{c}_2+\tilde{c}_1c_2-c_3\tilde{c}_4-\tilde{c}_3c_4] u^{2N+2}\times {\rm id}
-\eta \hat U u^{2N+1}+\cdots, \no \\[4pt]
&& {t}^{(1)}(u)|_{u\rightarrow\infty}=-4\{2[c_3c_4\tilde{c}_3\tilde{c}_4-\tilde{c_3}c_4-c_3\tilde{c}_4-1]+(1+c_1\tilde{c}_2)^2+(1+\tilde{c_1}c_2)^2\no\\[4pt]
&&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}\times{\rm id}
-4\eta\hat Q u^{2N+3}+\cdots, \no \\[4pt]
&& {t}^{(2)}(u)|_{u\rightarrow\infty}=-4\{2[c_1c_2\tilde{c}_1\tilde{c}_2-\tilde{c}_1c_2-c_1\tilde{c}_2-1]+(1+c_3\tilde{c}_4)^2+(1+\tilde{c}_3c_4)^2\no\\[4pt]
&&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}\times{\rm id}
+\cdots.\label{openasym3}
\eea
Here we find that the operator $\hat{U}$, which determines the coefficient of the degree-$2N+1$ term of the transfer matrix $t(u)$, is given by
\bea
\hat U= \sum_{i=1}^{N}\hat U_i=\sum_{i=1}^{N}(M_i \tilde{M}_i+\tilde{M}_i M_i),\label{openasym5}
\eea
where $M_i$ is given by (\ref{K-matrix-1}), $\tilde M_i$ is determined by (\ref{K-matrix-2}) and the operator $\hat U_i$ is
\bea
\hat U_i=\left(
\begin{array}{cccc}
2+c_1\tilde{c}_2+\tilde{c}_1c_2 & 0 & 0 & 0 \\
0 & 2+c_1\tilde{c}_2+\tilde{c}_1c_2 & 0 & 0 \\
0 & 0 & 2+c_3\tilde{c}_4+\tilde{c}_3c_4 & 0 \\
0 & 0 & 0 & 2+c_3\tilde{c}_4+\tilde{c}_3c_4 \\
\end{array}
\right)_i.
\eea
We note that $\hat U_i$ is an operator defined in the $i$-th physical space $V_i$ and can be expressed as a diagonal matrix with constant elements.
The summation of $\hat U_i$ in Eq.(\ref{openasym5}) is the direct summation and the representation matrix of the operator $\hat U$ is also a diagonal one with constant elements. Moreover, we find that the operator $\hat{Q}$, which determines the coefficient of the degree-$2N+3$ term of the fused transfer matrix ${t}^{(1)}(u)$, is given by
\bea
\hat Q=\sum_{i=1}^{N}\hat Q_i,\label{openasym4}
\eea
where the operator $\hat Q_i$ is defined in $i$-th physical space $V_i$ with the matrix form of
\bea
&&\hat Q_i=\left(
\begin{array}{cccc}
\alpha & 0 & 0 & 0 \\
0 & \alpha & 0 & 0 \\
0 & 0 & \beta & 0 \\
0 & 0 & 0 & \beta \\
\end{array}
\right)_i, \no \\[4pt]
&& \alpha=2-2\tilde{c}_1\tilde{c}_2+4c_1\tilde{c}_2+(c_1\tilde{c}_2)^2+4\tilde{c}_1c_2-2c_1c_2+(\tilde{c}_1c_2)^2,\no \\[4pt]
&& \beta=2-2\tilde{c}_3\tilde{c}_4-(c_1\tilde{c}_2)^2-(\tilde{c_1}c_2)^2-4c_1c_2\tilde{c}_1\tilde{c}_2+4c_3\tilde{c}_4+2c_1\tilde{c}_2c_3\tilde{c}_4\no\\[4pt]
&&\hspace{10mm}+2\tilde{c}_1c_2c_3\tilde{c}_4
+4\tilde{c}_3c_4+2c_1\tilde{c}_2\tilde{c}_3c_4+2\tilde{c}_1c_2\tilde{c}_3c_4-2c_3c_4.\no
\eea
Again, the operator $\hat Q_i$ is a diagonal matrix with constant elements and the summation of $\hat Q_i$ in Eq.(\ref{openasym4}) is the direct summation.
So far, we have found the $6N+13$ relations (\ref{openident1}), (\ref{openident2}), (\ref{openident3}) and
(\ref{specialvalue4})-(\ref{openasym4}), which allow us to determine the eigenvalues of the transfer matrices $t(u)$, ${t}^{(1)}(u)$ and $ t^{(2)}(u)$.
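The bookkeeping of these conditions is as follows: the identities (\ref{openident1})-(\ref{openident3}) evaluated at the $2N$ points $\pm\theta_j$ provide $6N$ relations, the special values (\ref{specialvalue4}) provide $8$ more, and the asymptotic behaviors (\ref{openasym3})-(\ref{openasym4}) fix the three leading coefficients together with the two sub-leading ones determined by $\hat U$ and $\hat Q$, namely
\bea
6N+8+5=6N+13.\no
\eea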
\subsection{Functional relations}
From the graded Yang-Baxter relations (\ref{yybb4}), (\ref{yyBB222}) and the graded reflection equations (\ref{r1}) and (\ref{r2}),
one can prove that the transfer matrices $t(u)$, ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$
commute with each other, namely,
\bea
[t(u), {t}^{(1)}(u)]=[t(u), {t}^{(2)}(u)]=[ {t}^{(1)}(u), {t}^{(2)}(u)]=0.\label{opencom}
\eea
Therefore, they have common eigenstates and can be diagonalized simultaneously.
Let $|\Phi\rangle$ be a common eigenstate. Acting the transfer matrices on this eigenstate, we have
\bea
&&t(u)|\Phi\rangle=\Lambda(u)|\Phi\rangle,\no\\
&& t^{(1)}(u)|\Phi\rangle= \Lambda^{(1)}(u)|\Phi\rangle,\no\\
&& t^{(2)}(u)|\Phi\rangle= \Lambda^{(2)}(u)|\Phi\rangle,\no
\eea
where $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ are the eigenvalues of
$t(u)$, ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$, respectively.
It is easy to check that the eigenvalue $\Lambda(u)$ is a polynomial of $u$ with degree $2N+2$,
and both ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ are polynomials of $u$ with degree $2N+4$.
Thus $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ can be determined by $6N+13$ independent conditions.
Acting the operator product identities (\ref{openident1}), (\ref{openident2}) and (\ref{openident3}) on the state $|\Phi\rangle$,
we obtain the functional relations among the eigenvalues
\bea && \Lambda(\pm\theta_j)\Lambda(\pm\theta_j+\eta)=-\frac{1}{
4} \frac{(\pm\theta_j)(\pm\theta_j+\eta)
}{(\pm\theta_j+\frac{1}{{2}}\eta)^2}\nonumber\\[4pt]
&&\hspace{10mm}\times\prod_{l=1}^N
(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta) \Lambda^{(1)}(\pm\theta_j+\frac{1}{2}\eta), \no \\[4pt]
&& \Lambda (\pm\theta_j)\Lambda(\pm\theta_j-\eta)=-\frac{1}{
4} \frac{(\pm\theta_j)(\pm\theta_j-\eta)
}{(\pm\theta_j-\frac{1}{{2}}\eta)^2}\nonumber\\[4pt]
&&\hspace{10mm}\times\prod_{l=1}^N
(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta) \Lambda^{(2)}(\pm\theta_j-\frac{1}{2}\eta), \no \\[4pt]
&& \Lambda (\pm\theta_j-\eta){ \Lambda}^{(1)}(\pm\theta_j+\frac{1}{{2}}\eta)=
\frac{(\pm\theta_j+\frac{1}{2}\eta)^2(\pm\theta_j-\eta)}{(\pm\theta_j+\eta)
(\pm\theta_j-\frac{1}{{2}}\eta)^2}\nonumber\\[4pt]
&&\hspace{10mm}\times
\prod_{l=1}^N \frac{(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta)}{(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta)} \Lambda (\pm\theta_j+\eta)
{\Lambda}^{(2)}(\pm\theta_j-\frac{1}{{2}}\eta),\label{eigenident3}
\eea
where $j=1,2,\cdots,N$. Acting Eqs.(\ref{specialvalue4}) and (\ref{openasym3}) on the state $|\Phi\rangle$, we have
\bea
&& \Lambda(0)=0,\quad {\Lambda}^{(1)}(0)=0,\quad {\Lambda}^{(2)}(0)=0, \quad
{\Lambda}^{(1)}(\frac{\eta}{2})=-2\xi \tilde{\xi} \Lambda(\eta), \no \\[4pt]
&&{\Lambda}^{(1)}(-\frac{\eta}{2})=-2\xi \tilde{\xi} \Lambda(-\eta), \quad
{\Lambda}^{(2)}(\frac{\eta}{2})=2\xi \tilde{\xi} \Lambda(\eta),\quad {\Lambda}^{(2)}(-\frac{\eta}{2})=2\xi \tilde{\xi} \Lambda(-\eta),\no\\[4pt]
&& \frac{\partial {\Lambda}^{(1)}(u)}{\partial u}|_{u=0}+ \frac{\partial {\Lambda}^{(2)}(u)}{\partial u}|_{u=0}=0, \no \\[4pt]
&& \Lambda(u)|_{u\rightarrow\infty}=-[c_1\tilde{c}_2+\tilde{c}_1c_2-c_3\tilde{c}_4-\tilde{c}_3c_4] u^{2N+2}, \no \\[4pt]
&& {\Lambda}^{(1)}(u)|_{u\rightarrow\infty}=-4\{2[c_3c_4\tilde{c}_3\tilde{c}_4-\tilde{c_3}c_4-c_3\tilde{c}_4-1]+(1+c_1\tilde{c}_2)^2+(1+\tilde{c_1}c_2)^2\no\\[4pt]
&&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}, \no \\[4pt]
&& {\Lambda}^{(2)}(u)|_{u\rightarrow\infty}=-4\{2[c_1c_2\tilde{c}_1\tilde{c}_2-\tilde{c}_1c_2-c_1\tilde{c}_2-1]+(1+c_3\tilde{c}_4)^2+(1+\tilde{c}_3c_4)^2\no\\[4pt]
&&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}.\label{openasym33}
\eea
Because the operators $\hat U$ given by (\ref{openasym5}) and $\hat Q$ given by (\ref{openasym4}) can be expressed as constant diagonal matrices, they commute
with each other and with all the fused transfer matrices. Thus the state $|\Phi\rangle$ is also an eigenstate of $\hat U$ and $\hat Q$.
After detailed calculation, the operator $\hat U$ has $N+1$ different eigenvalues
\bea
N(2+c_1\tilde{c}_2+\tilde{c}_1c_2)+k(c_3\tilde{c}_4+\tilde{c}_3c_4-c_1
\tilde{c}_2-\tilde{c}_1c_2), \quad k=0,1,\cdots,N.\label{higher-1}
\eea
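This can be seen directly from (\ref{openasym5}): each $\hat U_i$ takes the value $2+c_1\tilde{c}_2+\tilde{c}_1c_2$ on the first two basis states of $V_i$ and $2+c_3\tilde{c}_4+\tilde{c}_3c_4$ on the last two, so an eigenstate in which $k$ sites take the latter value has the eigenvalue
\bea
(N-k)(2+c_1\tilde{c}_2+\tilde{c}_1c_2)+k(2+c_3\tilde{c}_4+\tilde{c}_3c_4)
=N(2+c_1\tilde{c}_2+\tilde{c}_1c_2)+k(c_3\tilde{c}_4+\tilde{c}_3c_4-c_1\tilde{c}_2-\tilde{c}_1c_2),\no
\eea
with $k=0,1,\cdots,N$, which reproduces (\ref{higher-1}).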
Eq.(\ref{higher-1}) gives all the possible values of the coefficient of the degree-$2N+1$ term of the polynomial $\Lambda(u)$.
Acting the operator $\hat U$ on the state $|\Phi\rangle$, one obtains one of them. By direct calculation, we also find that the operator
$\hat Q$ has $N+1$ different eigenvalues
\bea
&&N\big[2-2\tilde{c}_1\tilde{c}_2+4c_1\tilde{c}_2+(c_1\tilde{c}_2)^2+4\tilde{c}_1c_2-2c_1c_2+(\tilde{c}_1c_2)^2\big]\nonumber\\[4pt]
&&\hspace{10mm}+k\big[2(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)-2(c_1\tilde{c}_2+\tilde{c}_1c_2)^2\nonumber\\[4pt]
&&\hspace{10mm}+4(c_3\tilde{c}_4+\tilde{c}_3c_4-c_1\tilde{c}_2-\tilde{c}_1c_2)], \quad k=0,1,\cdots N.\label{higher-2}
\eea
Eq.(\ref{higher-2}) gives all the possible values of the coefficient of the degree-$2N+3$ term of the polynomial $\Lambda^{(1)}(u)$.
The operator $\hat Q$ acting on the state $|\Phi\rangle$ gives one of them.
We thus conclude that the above $6N+13$ relations (\ref{eigenident3})-(\ref{higher-2}) enable us to completely determine
the eigenvalues $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ which are expressed as the inhomogeneous $T-Q$ relations in the next subsection.
\subsection{Inhomogeneous $T-Q$ relations}
For simplicity, we define the functions $z^{(l)}(u)$, $x_1(u)$ and $x_2(u)$ as
\begin{eqnarray}
z^{(l)} (u)&=&\left\{
\begin{array}{ll}
\displaystyle(-1)^{p(l)}\alpha_l(u)Q^{(0)}(u)K^{(l)}(u)\frac{Q^{(l-1)}(u+\eta)Q^{(l)}(u-\eta)}{Q^{(l)}(u)Q^{(l-1)}(u)}, &l=1,2,\\[6mm]
\displaystyle(-1)^{p(l)}\alpha_l(u)Q^{(0)}(u)K^{(l)}(u)\frac{Q^{(l-1)}(u-\eta)Q^{(l)}(u+\eta)}{Q^{(l)}(u)Q^{(l-1)}(u)}, &l=3,4,\end{array}
\right. \no \\
x_1 (u)&=&u^2Q^{(0)}(u+\eta)Q^{(0)}(u)\frac{f^{(1)}(u)Q^{(2)}(-u-\eta)}{Q^{(1)}(u)},\no\\[4pt]
x_2 (u)&=&u^2Q^{(0)}(u+\eta)Q^{(0)}(u)Q^{(0)}(-u)\frac{f^{(2)}(u)Q^{(2)}(-u-\eta)}{Q^{(3)}(u)}.\no
\end{eqnarray}
Here the structure factor $\alpha_{l}(u)$ is defined as
\begin{eqnarray}
\alpha_l(u)=\left\{
\begin{array}{ll}
\displaystyle\frac{u}{u+\frac{1}{2}\eta}, &l=1,4,\\[6mm]
\displaystyle\frac{u^2}{(u+\frac{1}{2}\eta)(u+\eta)}, &l=2,3.\end{array}
\right.\no
\end{eqnarray}
The $Q$-functions are
\bea
&&Q^{(0)}(u)=\prod_{l=1}^{N}(u-\theta_l)(u+\theta_l),\quad
Q^{(m)}(u)=\prod_{j=1}^{L_m}(u-\lambda_{j}^{(m)})(u+\lambda_{j}^{(m)}+m\eta), \quad m=1,2,\no \\
&&Q^{(3)}(u)=\prod_{j=1}^{L_3}(u-\lambda_{j}^{(3)})(u+\lambda_{j}^{(3)}+\eta),\quad Q^{(4)}(u)=1,\label{higher-3}
\eea
where $L_1$, $L_2$ and $L_3$ are the non-negative integers which describe the numbers of Bethe roots $\lambda_{j}^{(1)}$, $\lambda_{j}^{(2)}$ and $\lambda_{j}^{(3)}$, respectively.
The functions $K^{(l)}(u)$ are related to the boundary reflections and are given by
\bea
&&K^{(1)}(u)=(\xi+\sqrt{1+c_1c_2}u)(\tilde{\xi}+\sqrt{1+\tilde{c}_1\tilde{c}_2}u),\no\\[4pt]
&&K^{(2)}(u)=(\xi-\sqrt{1+c_1c_2}(u+\eta))(\tilde{\xi}-\sqrt{1+\tilde{c}_1\tilde{c}_2}(u+\eta)),\no\\[4pt]
&&K^{(3)}(u)=(\xi+\sqrt{1+c_1c_2}(u+\eta))(\tilde{\xi}+\sqrt{1+\tilde{c}_1\tilde{c}_2}(u+\eta)),\no\\[4pt]
&&K^{(4)}(u)=(\xi-\sqrt{1+c_1c_2}u)(\tilde{\xi}-\sqrt{1+\tilde{c}_1\tilde{c}_2}u).
\eea
The polynomials $f^{(l)}(u)$ in the inhomogeneous terms $x_1(u)$ and $x_2(u)$ are
\bea
f^{(l)}(u)=g_lu(u+\eta)(u-\eta)(u+\frac{1}{2}\eta)^2(u+\frac{3}{2}\eta)(u-\frac{1}{2}\eta)(u+2\eta),\quad l=1,2,\label{func}
\eea
where $g_l$ are given by
\bea
&&g_1=-2-\tilde{c}_1c_2-c_1\tilde{c}_2-2\sqrt{(1+c_1c_2)(1+\tilde{c}_1\tilde{c}_2)},\no\\[4pt]
&&g_2=2+c_3\tilde{c}_4+\tilde{c}_3c_4+2\sqrt{(1+c_1c_2)(1+\tilde{c}_1\tilde{c}_2)}.
\eea
By using the above functions and based on Eqs.(\ref{eigenident3})-(\ref{higher-2}), we construct the eigenvalues $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ as the following inhomogeneous $T-Q$ relations
\bea
&&\Lambda (u)=\sum_{l=1}^4 z^{(l)} (u)+x_1 (u)+x_2 (u),\no
\\[4pt]
&&\Lambda^{(1)}(u)=-4u^2[Q^{(0)}(u+\frac{1}{2}\eta)(u+\frac{1}{2}\eta)(u-\frac{1}{2}\eta)]^{-1}\Big\{\sum_{l=1}^4\sum_{m=1}^2
\tilde z^{(l)} (u+\frac{1}{2}\eta)\tilde z^{(m)}(u-\frac{1}{2}\eta)\no\\[4pt]
&&\qquad\quad-z^{(1)}(u+\frac{1}{2}\eta)z^{(2)}(u-\frac{1}{2}\eta)+z^{(4)}(u+\frac{1}{2}\eta)z^{(3)}(u-\frac{1}{2}\eta)\Big\}, \no
\\[4pt]
&&\Lambda^{(2)}(u)=-4u^2[Q^{(0)}(u-\frac{1}{2}\eta)(u+\frac{1}{2}\eta)(u-\frac{1}{2}\eta)]^{-1}\Big\{\sum_{l=1}^4\sum_{m=3}^4
\tilde z^{(l)} (u+\frac{1}{2}\eta)\tilde z^{(m)}(u-\frac{1}{2}\eta)\no\\[4pt]
&&\qquad\quad+z^{(1)}(u+\frac{1}{2}\eta)z^{(2)}(u-\frac{1}{2}\eta)-z^{(4)}(u+\frac{1}{2}\eta)z^{(2)}(u-\frac{1}{2}\eta)\Big\},\label{eigen3}
\eea
where
\bea
\tilde z^{(1)}(u)=z^{(1)}(u)+x_1 (u),~\tilde z^{(2)}(u)=z^{(2)}(u),~\tilde z^{(3)}(u)=z^{(3)}(u),~\tilde z^{(4)}(u)=z^{(4)}(u)+x_2 (u).\no
\eea
Since all the eigenvalues are polynomials, the residues of Eq.(\ref{eigen3}) at the apparent poles should
vanish, which gives the Bethe ansatz equations
\bea &&1+\frac{\lambda_{l}^{(1)}}{\lambda_{l}^{(1)}+\eta}\frac{K^{(2)}(\lambda_{l}^{(1)})Q^{(0)}(\lambda_{l}^{(1)})}{K^{(1)}(\lambda_{l}^{(1)})Q^{(0)}(\lambda_{l}^{(1)}+\eta)}
\frac{Q^{(1)}(\lambda_{l}^{(1)}+\eta)Q^{(2)}(\lambda_{l}^{(1)}-\eta)}{Q^{(1)}(\lambda_{l}^{(1)}-\eta)Q^{(2)}(\lambda_{l}^{(1)})}\no\\[4pt]
&&\qquad=-\frac{\lambda_{l}^{(1)}(\lambda_{l}^{(1)}+\frac{1}{2}\eta)f^{(1)}(\lambda_{l}^{(1)})Q^{(0)}(\lambda_{l}^{(1)})Q^{(2)}(-\lambda_{l}^{(1)}-\eta)}
{K^{(1)}(\lambda_{l}^{(1)})Q^{(1)}(\lambda_{l}^{(1)}-\eta)},\quad l=1,\cdots,L_1,\no\\[4pt]
&&\frac{K^{(3)}(\lambda_{l}^{(2)})}{K^{(2)}(\lambda_{l}^{(2)})}\frac{Q^{(3)}(\lambda_{l}^{(2)}+\eta)}{Q^{(3)}(\lambda_{l}^{(2)})}=
\frac{Q^{(1)}(\lambda_{l}^{(2)}+\eta)}{Q^{(1)}(\lambda_{l}^{(2)})},\quad l=1,\cdots,L_2,\no\\[4pt]
&&\frac{\lambda_{l}^{(3)}(\lambda_{l}^{(3)}+\frac{1}{2}\eta)Q^{(0)}(\lambda_{l}^{(3)}+\eta)Q^{(0)}(-\lambda_{l}^{(3)})f^{(2)}(\lambda_{l}^{(3)})Q^{(2)}(-\lambda_{l}^{(3)}-\eta)}
{K^{(4)}(\lambda_{l}^{(3)})Q^{(3)}(\lambda_{l}^{(3)}-\eta)}\no \\[4pt]
&&\qquad =1+\frac{\lambda_{l}^{(3)}}{\lambda_{l}^{(3)}+\eta}\frac{K^{(3)}(\lambda_{l}^{(3)})}{K^{(4)}(\lambda_{l}^{(3)})}\frac{Q^{(2)}(\lambda_{l}^{(3)}-\eta)Q^{(3)}(\lambda_{l}^{(3)}+\eta)}
{Q^{(2)}(\lambda_{l}^{(3)})Q^{(3)}(\lambda_{l}^{(3)}-\eta)},\quad l=1,\cdots,L_3.\label{open-BAE}
\eea
From the analysis of the asymptotic behaviors and of the sub-leading coefficients of the corresponding polynomials, the numbers of Bethe roots should satisfy
\bea
L_1=L_2+N+4,\quad L_3=2N+L_2+4,\quad L_2=k, \quad k=0, 1, \cdots, N.
\eea
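For instance, in the sector $k=0$ these constraints give
\bea
L_2=0,\quad L_1=N+4,\quad L_3=2N+4.\no
\eea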
Some remarks are in order. The coefficient of the $u^{2N+1}$ term in the polynomial $\Lambda(u)$
and that of the $u^{2N+3}$ term in the polynomial $\Lambda^{(1)}(u)$ are not related to the Bethe roots.
The constraints (\ref{higher-1}) and (\ref{higher-2}) require $L_2=k$, where $k=0,\cdots,N$ is related to the eigenvalues of the operators $\hat{U}$ and $\hat{Q}$. Then the Bethe ansatz equations (\ref{open-BAE}) can describe all the eigenstates of the system.
The second set of Bethe ansatz equations in Eq.(\ref{open-BAE}) is homogeneous. This is because the reflection matrices $K^{\pm}(u)$ are block-diagonal:
the matrix elements connecting the bosonic (parity 0) and fermionic (parity 1) bases vanish, since
the integrability of the system requires that reflection processes from a bosonic basis state to a fermionic one and vice versa are forbidden.
We note that the Bethe ansatz equations obtained from the regularity of $\Lambda(u)$ are
the same as those obtained from the regularities of $\Lambda^{(1)}(u)$ and $\Lambda^{(2)}(u)$. Meanwhile, each function $Q^{(m)}(u)$ has two zeros for every Bethe root, and both zeros
give the same Bethe ansatz equations.
We have checked that the inhomogeneous $T-Q$ relations (\ref{eigen3}) satisfy the above mentioned $6N+13$ conditions
(\ref{eigenident3})-(\ref{higher-2}). Therefore, $\Lambda(u)$, $\Lambda^{(1)}(u)$ and $\Lambda^{(2)}(u)$ are
the eigenvalues of transfer matrices $t(u)$, ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$, respectively.
Finally, the eigenvalues of Hamiltonian (\ref{hh}) are obtained from $\Lambda(u)$ as
\bea
E=\frac{\partial \ln \Lambda(u)}{\partial u}|_{u=0,\{\theta_j\}=0}.
\eea
\section{Conclusion}
In this paper, we develop a graded nested off-diagonal Bethe ansatz method
and study the exact solutions of the supersymmetric $SU(2|2)$ model with both periodic and off-diagonal boundary conditions.
After generalizing fusion to the supersymmetric case, we obtain the closed sets of operator product identities.
For the periodic case, the eigenvalues are given in terms of the homogeneous $T-Q$ relations (\ref{ep-3}),
while for the open case, the eigenvalues are given by the inhomogeneous $T-Q$ relations (\ref{eigen3}). This scheme can be generalized to other high-rank supersymmetric quantum integrable models.
\section*{Acknowledgments}
The financial supports from the National Program for Basic Research of MOST (Grant Nos. 2016YFA0300600 and
2016YFA0302104), the National Natural Science Foundation of China
(Grant Nos. 11934015,
11975183, 11947301, 11774397, 11775178 and 11775177), the Major Basic Research Program of Natural Science of Shaanxi Province
(Grant Nos. 2017KCT-12, 2017ZDJC-32), Australian Research Council (Grant No. DP 190101529), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB33000000), the National Postdoctoral Program for Innovative Talents (BX20180350) and the Double First-Class University Construction Project of Northwest University are gratefully acknowledged.
\section*{Appendix A: Fusion of the reflection matrices}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
The general fusion procedure for the reflection matrices was given in \cite{Mez92, Zho96}. We will generalize the method developed in \cite{Hao14} to study
the fusion of the reflection matrices for supersymmetric models (taking the $SU(2|2)$ model as an example). The (graded) reflection equation at a special point gives
\begin{equation}
R_{12}(-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(2u-\alpha) {K^{-}_{2}}(u)=
{K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(-\alpha), \label{oled-1}
\end{equation}
where $R_{12}(-\alpha)=P_{12}^{(d)}S_{12}$ as we defined previously. Multiplying Eq.(\ref{oled-1}) with the projector $P_{12}^{(d)}$ from the left and using the property $P_{12}^{(d)} R_{12}(-\alpha)= R_{12}(-\alpha)$, we have
\begin{eqnarray}
&&R_{12}(-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(2u-\alpha) {K^{-}_{2}}(u)
\no \\
&&\qquad \qquad =P_{12}^{(d)}
{K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(-\alpha).\label{oled-2}
\end{eqnarray}
Comparing the right hand sides of Eqs.(\ref{oled-1}) and (\ref{oled-2}), we obtain
\begin{equation}
P_{12}^{(d)} {K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)P_{21}^{(d)}=
{K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)P_{21}^{(d)}.\label{oled-3}
\end{equation}
This gives the general principle of the fusion of the reflection matrices.
If we define $P_{12}^{(d)} {K^{-}_{2}}(u)$ $R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)P_{21}^{(d)}$ as the fused reflection matrix $K^-_{\langle 1 2\rangle}(u)\equiv K^-_{\bar 1}(u)$,
where the insertion of the $R$-matrix with the fixed spectral parameter is required by integrability,
we can prove that the fused $K$-matrix $K^-_{\bar 1}(u)$ also satisfies the (graded) reflection equation
\begin{eqnarray}
&& R_{\bar 12}(u-v) K^{-}_{\bar 1}(u) R _{2\bar 1}(u+v) K^{-}_{2}(v)
=P_{00'}^{(d)}R_{0'2}(u-v)R_{02}(u-v-\alpha)P_{00'}^{(d)}\no \\ &&\qquad
\times P_{00'}^{(d)} {K^{-}_{0'}}(u)R_{00'}(2u-\alpha){K^{-}_{0}}(u-\alpha)P_{0'0}^{(d)}
P_{0'0}^{(d)}R_{20'}(u+v)R_{20}(u+v-\alpha)P_{0'0}^{(d)} K^{-}_{2}(v)
\no \\ &&
\quad
=P_{00'}^{(d)}R_{0'2}(u-v)R_{02}(u-v-\alpha){K^{-}_{0'}}(u)R_{00'}(2u-\alpha)
\no \\ &&\qquad
\times {K^{-}_{0}}(u-\alpha)R_{20'}(u+v)R_{20}(u+v-\alpha) K^{-}_{2}(v)P_{0'0}^{(d)}
\no \\ &&\quad
=P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{02}(u-v-\alpha)R_{00'}(2u-\alpha)R_{20'}(u+v)
\no \\ &&\qquad
\times {K^{-}_{0}}(u-\alpha) R_{20}(u+v-\alpha) K^{-}_{2}(v)P_{0'0}^{(d)}
\no \\ &&\quad
=P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{20'}(u+v)R_{00'}(2u-\alpha)
\no \\ &&\qquad
\times R_{02}(u-v-\alpha){K^{-}_{0}}(u-\alpha) R_{20}(u+v-\alpha) K^{-}_{2}(v)P_{0'0}^{(d)}
\no \\ &&\quad
=P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{20'}(u+v)R_{00'}(2u-\alpha)
\no \\ &&\qquad
\times K^{-}_{2}(v) R_{02}(u+v-\alpha) {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)}
\no \\ &&\quad
=P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{20'}(u+v)K^{-}_{2}(v)
\no \\ &&\qquad
\times R_{00'}(2u-\alpha) R_{02}(u+v-\alpha) {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)}
\no \\ &&\quad
=P_{00'}^{(d)}K^{-}_{2}(v) R_{0'2}(u+v) {K^{-}_{0'}}(u) R_{20'}(u-v)
\no \\ &&\qquad
\times R_{00'}(2u-\alpha) R_{02}(u+v-\alpha) {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)}
\no \\ &&\quad
=K^{-}_{2}(v) P_{00'}^{(d)} R_{0'2}(u+v) {K^{-}_{0'}}(u) R_{02}(u+v-\alpha)R_{00'}(2u-\alpha) R_{20'}(u-v)
\no \\ &&\qquad
\times {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)}
\no \\ &&\quad
=K^{-}_{2}(v) P_{00'}^{(d)} R_{0'2}(u+v) R_{02}(u+v-\alpha){K^{-}_{0'}}(u) R_{00'}(2u-\alpha) {K^{-}_{0}}(u-\alpha)
\no \\ &&\qquad
\times R_{20'}(u-v) R_{20}(u-v-\alpha) P_{0'0}^{(d)}
\no \\ &&\quad
=K^{-}_{2}(v) R_{\bar 12}(u+v) K^{-}_{\bar 1}(u) R _{2\bar 1}(u-v).
\end{eqnarray}
In the derivation, we have used the relation
\begin{equation}
P_{21}^{(d)}R_{32}(u)R_{31}(u-\alpha)P_{21}^{(d)}=
R_{32}(u)R_{31}(u-\alpha)P_{21}^{(d)}\equiv R_{3\bar 1}(u).\label{so112led-3}
\end{equation}
From the dual reflection equation (\ref{r2}), we obtain the general construction principle of fused dual reflection matrices
\begin{equation}
P_{12}^{(d)} {K^{+}_{2}}(u)R_{12}(-2u-\alpha){K^{+}_{1}}(u+\alpha)P_{21}^{(d)}=
{K^{+}_{2}}(u)R_{12}(-2u-\alpha){K^{+}_{1}}(u+\alpha)P_{21}^{(d)}.\label{oled-4}
\end{equation}
If $R_{12}(-\beta)=S_{12}P_{12}^{(d)}$, the corresponding fusion relations are
\begin{eqnarray}
&&P_{12}^{(d)} {K^{-}_{1}}(u-\beta)R_{21}(2u-\beta){K^{-}_{2}}(u)P_{21}^{(d)}=
P_{12}^{(d)} {K^{-}_{1}}(u-\beta)R_{21}(2u-\beta){K^{-}_{2}}(u),\label{oled-13}\\[6pt]
&&P_{12}^{(d)} {K^{+}_{1}}(u+\beta)R_{21}(-2u-\beta){K^{+}_{2}}(u)P_{21}^{(d)} \no \\[6pt]
&&\qquad =
P_{12}^{(d)} {K^{+}_{1}}(u+\beta)R_{21}(-2u-\beta){K^{+}_{2}}(u).\label{oled-14}
\end{eqnarray}
Finally, the fused $K$-matrices in subsection 3.2 can be constructed according to Eqs.(\ref{oled-3})-(\ref{oled-4}) or
(\ref{oled-13})-(\ref{oled-14}).
\section*{Appendix B: Proof of the operator product identities}
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
We introduce the reflection monodromy matrices
\bea
&&\hat{T}_{\tilde 0}(u)=R_{N\tilde 0}(u+\theta_N)\cdots R_{2\tilde 0}(u+\theta_2) R_{1\tilde 0}(u+\theta_1) \label{openT5}, \no \\[4pt]
&&\hat{T}_{\tilde 0^\prime}(u)=R_{N\tilde 0^\prime}(u+\theta_N)\cdots R_{2\tilde 0^\prime}(u+\theta_2) R_{1\tilde 0^\prime}(u+\theta_1),\label{openT7}\eea
which satisfy the graded Yang-Baxter equations
\bea
&&R_{1\tilde 2} (u-v) \hat{T}_1(u) \hat{ T}_{\tilde 2}(v) = \hat{ T}_{\tilde 2}(v) \hat{T}_1(u) R_{1\tilde 2} (u-v), \no \\[4pt]
&&R_{1\tilde 2^\prime} (u-v) \hat{T}_1(u) \hat{T}_{\tilde 2^\prime}(v) = \hat{ T}_{\tilde 2^\prime}(v) \hat{T}_1(u) R_{1\tilde 2^\prime} (u-v).\label{yyBb333-1}\eea
In order to solve the transfer matrix $t(u)$ (\ref{tru}), we still need the fused transfer matrices which are defined as
\bea
&&\tilde t^{(1)}(u)= str_{\tilde 0}\{{K}^{+}_{\tilde{0} }(u) T_{\tilde 0}(u) {K}^{-}_{\tilde{0} }(u) \hat{T}_{\tilde 0}(u)\}, \no \\[4pt]
&&\tilde t^{(2)}(u)= str_{\tilde 0^\prime}\{{K}^{+}_{\tilde{0}^\prime }(u) T_{\tilde 0^\prime}(u) {K}^{-}_{\tilde{0}^\prime }(u) \hat{ T}_{\tilde 0^\prime}(u)\}.\label{openTransfer-6}
\eea
Similarly to the periodic case, from the property that the above $R$-matrices can degenerate into
the projectors and using the definitions (\ref{openT6}) and (\ref{openT7}),
we obtain the following fusion relations among the reflecting monodromy matrices
\bea
&&P^{ (8) }_{21}\hat{T}_2 (u)\hat{T}_1 (u+\eta)P^{(8) }_{21}=\prod_{l=1}^N
(u+\theta_l+\eta)\hat{ T}_{\bar 1}(u+\frac{1}{2}\eta),\no\\[4pt]
&&\bar P^{ (8) }_{21}\hat{T}_2 (u)\hat{T}_1 (u-\eta)\bar P^{(8) }_{21}=\prod_{l=1}^N
(u+\theta_l-\eta)\hat{T}_{\bar 1^\prime }(u-\frac{1}{2}\eta),\no\\[4pt]
&&P^{(20) }_{{\bar 1}2} \hat{T}_{\bar{1}} (u+\frac{1}{2}\eta) \hat{T}_2(u-\eta)P^{(20)
}_{{\bar 1}2}=\prod_{l=1}^N
(u+\theta_l-\eta)\hat{T}_{\tilde 1}(u),\no\\[4pt]
&&P^{(20) }_{{\bar 1}^\prime 2} \hat{T}_{\bar{1}^\prime} (u-\frac{1}{2}\eta)\hat{T}_2(u+\eta)P^{(20)
}_{{\bar 1}^\prime 2}=\prod_{l=1}^N
(u+\theta_l+\eta)\hat{T}_{\tilde 1^\prime }(u).\label{fut-66}\eea
From the definitions, we see that the auxiliary spaces are erased by taking the super partial traces and the physical spaces are the same.
We remark that these transfer matrices are not independent. Substituting Eqs.(\ref{peri-iden}) and (\ref{k-iden}) into the definitions (\ref{openTransfer-6}), we obtain
that the fused transfer matrices $\tilde{t}^{(1)}(u)$ and $\tilde{t}^{(2)}(u)$ are equal
\bea
\tilde{t}^{(1)}(u)=\tilde{t}^{(2)}(u).\label{id0}
\eea
Consider the quantity
\bea
&&t(u)t(u+\eta)
=str_{12}\{K_1^+(u)T_1(u)K_1^-(u)\hat T_1(u)\no\\[4pt]
&&\hspace{12mm}\times[T_2(u+\eta)K_2^-(u+\eta)\hat T_2(u+\eta)]^{st_2}[K_2^+(u+\eta)]^{st_2}\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{K_1^{+}(u)T_1(u)K_1^-(u)\hat T_1(u)\no\\[4pt]
&&\hspace{12mm}\times[T_2(u+\eta)K_2^-(u+\eta)\hat T_2(u+\eta)]^{st_2}R_{21}^{st_2}(2u+\eta)R_{12}^{st_2}(-2u-\eta)[K_2^+(u+\eta)]^{st_2}\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)T_1(u)T_2(u+\eta)\no\\[4pt]
&&\hspace{12mm}\times K_1^-(u)R_{21}(2u+\eta)K_2^-(u+\eta)\hat T_1(u)\hat T_2(u+\eta)\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{(P_{12}^{(8)}+\bar P_{21}^{(8)})K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)\no\\[4pt]
&&\hspace{12mm}\times(P_{21}^{(8)}+\bar P_{12}^{(8)}) T_1(u)T_2(u+\eta)(P_{21}^{(8)}+\bar P_{12}^{(8)})K_1^-(u)\no\\[4pt]
&&\hspace{12mm}\times R_{21}(2u+\eta)K_2^-(u+\eta)(P_{12}^{(8)}+\bar P_{21}^{(8)})\hat T_1(u)\hat T_2(u+\eta)(P_{12}^{(8)}+\bar P_{21}^{(8)})\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{[P_{12}^{(8)}K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)P_{21}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times [P_{21}^{(8)} T_1(u)T_2(u+\eta)P_{21}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times [P_{21}^{(8)} K_1^-(u)R_{21}(2u+\eta)K_2^-(u+\eta)P_{12}^{(8)}][P_{12}^{(8)}\hat T_1(u)\hat T_2(u+\eta) P_{12}^{(8)}]\}\no\\[4pt]
&&\hspace{8mm}+[\rho_2(2u+\eta)]^{-1}str_{12}\{[\bar P_{21}^{(8)}K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)\bar P_{12}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times [\bar P_{12}^{(8)} T_1(u)T_2(u+\eta)\bar P_{12}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times[\bar P_{12}^{(8)}K_1^-(u) R_{21}(2u+\eta)K_2^-(u+\eta)\bar P_{21}^{(8)}][\bar P_{21}^{(8)}\hat T_1(u)\hat T_2(u+\eta)\bar P_{21}^{(8)}]\}\no\\[4pt]
&&\hspace{8mm}=t_1(u)+t_2(u).\label{openident1-tan-1}
\eea
The first term is the fusion by the 8-dimensional projectors and the result is
\bea
&&t_1(u)=[\rho_2(2u+\eta)]^{-1}(u+\eta)(u)\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta)\no\\[4pt]
&&\hspace{12mm}\times str_{\langle12\rangle}\{K_{\langle12\rangle}^{+}(u+\frac12\eta)T_{\langle12\rangle}^{(8)}(u+\frac12\eta)K_{\langle12\rangle}^{-}(u+\frac12\eta)
\hat T_{\langle12\rangle}^{(8)}(u+\frac12\eta)\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}(u+\eta)u\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta) t^{(1)}(u+\frac12\eta).
\eea
The second term is the fusion by the other 8-dimensional projectors. Detailed calculation gives
\bea
&&t_2(u)=[\rho_2(2u+\eta)]^{-1}str_{12}\{\bar P_{21}^{(8)}[\bar P_{21}^{(8)}K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)]\bar P_{12}^{(8)}\no\\[4pt]
&&\hspace{12mm}\times \bar P_{12}^{(8)}[\bar P_{12}^{(8)} T_1(u)T_2(u+\eta)]\bar P_{12}^{(8)}\no\\[4pt]
&&\hspace{12mm}\times\bar P_{12}^{(8)}[\bar P_{12}^{(8)}K_1^-(u) R_{21}(2u+\eta)K_2^-(u+\eta)]\bar P_{21}^{(8)}\no\\[4pt]
&&\hspace{12mm}\times\bar P_{21}^{(8)}[\bar P_{12}^{(8)}\hat T_1(u)\hat T_2(u+\eta)]\bar P_{21}^{(8)}\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{\bar P_{21}^{(8)}[K_1^+(u)R_{21}(-2u-\eta)K_2^+(u+\eta)\bar P_{12}^{(8)}]\bar P_{12}^{(8)}\no\\[4pt]
&&\hspace{12mm}\times \bar P_{12}^{(8)}[T_2(u+\eta)T_1(u)\bar P_{12}^{(8)}]\bar P_{12}^{(8)}\no\\[4pt]
&&\hspace{12mm}\times\bar P_{12}^{(8)}[K_2^-(u+\eta)R_{12}(2u+\eta)K_1^-(u)\bar P_{21}^{(8)}]\bar P_{21}^{(8)}\no\\[4pt]
&&\hspace{12mm}\times\bar P_{21}^{(8)}[\hat T_2(u+\eta)\hat T_1(u)\bar P_{12}^{(8)}]\bar P_{21}^{(8)}\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{[\bar P_{21}^{(8)}K_1^+(u)R_{21}(-2u-\eta)K_2^+(u+\eta)\bar P_{12}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times [\bar P_{12}^{(8)}T_2(u+\eta)T_1(u)\bar P_{12}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times[\bar P_{12}^{(8)}K_2^-(u+\eta)R_{12}(2u+\eta)K_1^-(u)\bar P_{21}^{(8)}]\no\\[4pt]
&&\hspace{12mm}\times[\bar P_{21}^{(8)}\hat T_2(u+\eta)\hat T_1(u)\bar P_{21}^{(8)}]\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}(u+\eta)u\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\no\\[4pt]
&&\hspace{12mm}\times str_{\langle12\rangle^\prime}\{K_{\langle12\rangle^\prime}^{+}(u+\frac12\eta)T_{\langle12\rangle^\prime}(u+\frac 12 \eta)K_{\langle12\rangle^\prime}^{-}(u+\frac12\eta)
\hat T_{\langle12\rangle^\prime}(u+\frac12\eta)\}\no\\[4pt]
&&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}(u+\eta)u\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j) t^{(2)}(u+\frac12\eta).
\eea
In the derivation, we have used the relations
\bea
&&str_{12}\{A_{12}^{st_1}B_{12}^{st_1}\}=str_{12}\{A_{12}^{st_2}B_{12}^{st_2}\}=str_{12}\{A_{12}B_{12}\},\no\\[4pt]
&&\hat T_1(u)R_{21}(2u+\eta)T_2(u+\eta)=T_2(u+\eta)R_{21}(2u+\eta)\hat T_1(u),\no\\[4pt]
&&P_{12}^{(8)}+\bar P_{12}^{(8)}=1,~~P_{21}^{(8)}+\bar P_{21}^{(8)}=1,~~ P_{12}^{(8)}\bar P_{12}^{(8)}=P_{21}^{(8)}\bar P_{21}^{(8)}=0,~~ P_{12}^{(8)}=P_{21}^{(8)},~~\bar P_{12}^{(8)}=\bar P_{21}^{(8)}.\no
\eea
In addition,
\bea
&& t^{(1)}(u+\frac{1}{2}\eta)t(u-\eta)=str_{\bar 12}\{K_{\bar 1}^{+}(u+\frac{1}{2}\eta)T_{\bar 1}(u+\frac{1}{2}\eta)
K_{\bar 1}^{-}(u+\frac{1}{2}\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times [T_2(u-\eta)K_2^-(u-\eta)\hat T_2(u-\eta)]^{st_2}[K_2^+(u-\eta)]^{st_2}\}\no\\[4pt]
&&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{K_{\bar 1}^{+}(u+\frac{1}{2}\eta)T_{\bar 1}
(u+\frac{1}{2}\eta)K_{\bar 1}^{-}(u+\frac{1}{2}\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times [T_2(u-\eta)K_2^-(u-\eta)\hat T_2(u-\eta)]^{st_2}[R_{2\bar 1}(2u-\frac{1}{2}\eta)]^{st_2}\no\\[4pt]
&&\hspace{12mm}\times[R_{\bar 12}(-2u+\frac{1}{2}\eta)]^{st_2}[K_2^+(u-\eta)]^{st_2}\}\no\\[4pt]
&&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)K_{\bar 1}^{+}(u+\frac{1}{2}\eta)T_{\bar 1}(u+\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times T_2(u-\eta)K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta)
\hat T_2(u-\eta)\}\no\\[4pt]
&&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{(P_{\bar 12}^{(20)}+\tilde P^{(12)}_{\bar 12})K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)
K_{\bar 1}^{+}(u+\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times (P_{2\bar 1}^{(20)}+\tilde P^{(12)}_{2\bar 1})T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)(P_{2\bar 1}^{(20)}+\tilde P^{(12)}_{2\bar 1})\no\\[4pt]
&&\hspace{12mm}\times K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)(P_{\bar 12}^{(20)}+\tilde P^{(12)}_{\bar 12})\no\\[4pt]
&&\hspace{12mm}\times \hat T_{\bar 1}(u+\frac{1}{2}\eta)
\hat T_2(u-\eta)(P_{\bar 12}^{(20)}+\tilde P^{(12)}_{\bar 12})\}\no\\[4pt]
&&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{P_{\bar 12}^{(20)}K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)K_{\bar 1}^{+}(u+\frac{1}{2}\eta)P_{2\bar 1}^{(20)}\no\\[4pt]
&&\hspace{12mm}\times T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)P_{2\bar 1}^{(20)}K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)\no\\[4pt]
&&\hspace{12mm}\times P_{\bar 12}^{(20)}\hat T_{\bar 1}(u+\frac{1}{2}\eta)
\hat T_2(u-\eta)P_{\bar 12}^{(20)}\}\no\\[4pt]
&&\hspace{12mm}+\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{\tilde P_{\bar 12}^{(12)}K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)K_{\bar 1}^{+}(u+\frac{1}{2}\eta)
\tilde P_{2\bar 1}^{(12)}\no\\[4pt]
&&\hspace{12mm}\times T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)\tilde P_{2\bar 1}^{(12)}K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)\no\\[4pt]
&&\hspace{12mm}\times \tilde P_{\bar 12}^{(12)}\hat T_{\bar 1}(u+\frac{1}{2}\eta)
\hat T_2(u-\eta)\tilde P_{\bar 12}^{(12)}\}\no\\[4pt]
&&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j-\eta)(u+\theta_j-\eta)\no\\[4pt]
&&\hspace{12mm}\times str_{\langle\bar 12\rangle}\{K_{\langle\bar 12\rangle}^{+}(u)T_{\langle\bar 12\rangle}(u) K_{\langle\bar 12\rangle}^{-}(u)
\hat T_{\langle\bar 12\rangle}(u)\}\no\\[4pt]
&&\hspace{8mm}+\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)
\no\\[4pt]
&&\hspace{12mm}\times str_{\overline{\langle\bar 12\rangle}}\{ K_{\overline{\langle\bar 12\rangle}}^{+}(u)T_{\overline{\langle\bar 12\rangle}}(u) K_{\overline{\langle\bar 12\rangle}}^{-}(u)
\hat T_{\overline{\langle\bar 12\rangle}}(u)\}\no\\[4pt]
&&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j-\eta)(u+\theta_j-\eta)\tilde t^{(1)}(u)\no\\
&&\hspace{12mm}+\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\bar{t}^{(1)}(u),\label{tan}\\
&& t^{(2)}(u-\frac{1}{2}\eta)t(u+\eta)=\rho_6^{-1}(2u+\frac{1}{2}\eta)str_{\bar 1^\prime 2}\{K_2^+(u+\eta)R_{\bar 1^\prime 2}(-2u-\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times K_{\bar 1^\prime }^{+}(u-\frac{1}{2}\eta)T_{\bar 1^\prime }(u-\frac{1}{2}\eta) T_2(u+\eta)K_{\bar 1^\prime }^{-}(u-\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)K_2^{-}(u+\eta)\hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta)
\hat T_2(u+\eta)\}\no\\[4pt]
&&\hspace{8mm}=\rho_6^{-1}(2u-\frac{1}{2}\eta)str_{\bar 1^\prime 2}\{(P_{\bar 1^\prime 2}^{(20)}+\tilde P^{(12)}_{\bar 1^\prime 2})K_2^+(u+\eta)R_{\bar 1^\prime 2}(-2u-\frac{1}{2}\eta)\no\\[4pt]
&&\hspace{12mm}\times K_{\bar 1^\prime }^{+}(u-\frac{1}{2}\eta)(P_{2\bar 1^\prime }^{(20)}+\tilde P^{(12)}_{2\bar 1^\prime })T_{\bar 1^\prime }(u-\frac{1}{2}\eta)
T_2(u+\eta)(P_{2\bar 1^\prime }^{(20)}+\tilde P^{(12)}_{2\bar 1^\prime })\no\\[4pt]
&&\hspace{12mm}\times K_{\bar 1^\prime }^{-}(u-\frac{1}{2}\eta)R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)K_2^{-}(u+\eta)(P_{\bar 1^\prime 2}^{(20)}
+\tilde P^{(12)}_{\bar 1^\prime 2})\no\\[4pt]
&&\hspace{12mm}\times \hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta)
\hat T_2(u+\eta)(P_{\bar 1^\prime 2}^{(20)}+\tilde P^{(12)}_{\bar 1^\prime 2})\}\no\\[4pt]
&&\hspace{8mm}=\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta)\no\\[4pt]
&&\hspace{12mm}\times str_{\langle \bar 1^\prime 2\rangle}\{ K_{\langle\bar 1^\prime 2\rangle}^{+}(u)T_{\langle\bar 1^\prime 2\rangle}(u) K_{\langle\bar 1^\prime 2\rangle}^{-}(u)
\hat T_{\langle\bar 1^\prime 2\rangle}(u)\}\no\\[4pt]
&&\hspace{12mm}+\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\no\\[4pt]
&&\hspace{12mm}\times str_{\overline{\langle \bar 1^\prime 2\rangle}}\{K_{\overline{\langle \bar 1^\prime 2\rangle}}^{+}(u)T_{\overline{\langle \bar 1^\prime 2\rangle}}(u)
K_{\overline{\langle \bar 1^\prime 2\rangle}}^{-}(u)\hat T_{\overline{\langle \bar 1^\prime 2\rangle}}(u)\}\no\\
&&\hspace{8mm}=\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta)\tilde t^{(2)}(u)\no\\
&&\hspace{12mm}+\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\bar{t}^{(2)}(u),\label{tan-09}
\eea
where we have used the relations
\bea
&&\hat T_{\bar 1}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)T_2(u-\eta)=T_2(u-\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta),\no\\[4pt]
&&P_{\bar 12}^{(20)}+\tilde P_{\bar 12}^{(12)}=1,~~P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)}=1,~~P_{\bar 12}^{(20)}\tilde P_{\bar 12}^{(12)}=0,~~P_{2\bar 1}^{(20)}\tilde P_{2\bar 1}^{(12)}=0, \no \\[4pt]
&&\hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta)R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)T_2(u+\eta)=T_2(u+\eta)R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)\hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta),\no\\[4pt]
&&P_{\bar 1^\prime 2}^{(20)}+\tilde P_{\bar 1^\prime 2}^{(20)}=1,~~P_{2\bar 1^\prime }^{(20)}+\tilde P_{2\bar 1^\prime }^{(12)}=1,
~~P_{\bar 1^\prime 2}^{(20)}\tilde P_{\bar 1^\prime 2}^{(12)}=0,~~P_{2\bar 1^\prime }^{(20)}\tilde P_{2\bar 1^\prime }^{(12)}=0.\no
\eea
Focusing on the special points introduced in the main text, we have
\bea
&&t(\pm\theta_j-\eta) t^{(1)}(\pm\theta_j+\frac{1}{2}\eta)=-\frac{1}{2}\frac{(\pm\theta_j+\frac{1}{2}\eta)
(\pm\theta_j-\eta)}{(\pm\theta_j)(\pm\theta_j-\frac{1}{2}\eta)}\no\\
&&\hspace{20mm}\times\prod_{l=1}^{N}(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta)\tilde t^{(1)}(\pm\theta_j),\label{open-ope-1}\\
&&t(\pm\theta_j+\eta) t^{(2)}(\pm\theta_j-\frac{1}{2}\eta)=-\frac{1}{2}\frac{(\pm\theta_j-\frac{1}{2}\eta)
(\pm\theta_j+\eta)}{(\pm\theta_j)(\pm\theta_j+\frac{1}{2}\eta)}\no\\
&&\hspace{20mm}\times\prod_{l=1}^{N}(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta)\tilde t^{(2)}(\pm\theta_j), \quad j=1,2,\cdots,N.\label{open-ope-2}
\eea
With the help of Eqs. (\ref{id0}), (\ref{open-ope-1}) and (\ref{open-ope-2}), we can derive the relation (\ref{openident3}).
Finally, we have proven the identities (\ref{openident1})-(\ref{openident3}).
\section{Introduction}
Triangles are the basic substructure of networks and play critical roles in network analysis.
Due to the importance of triangles, the triangle counting problem (TC), which counts the number of triangles in a given graph, is essential for analyzing networks and is generally considered the first fundamental step in calculating metrics such as the clustering coefficient and transitivity ratio, as well as in other tasks such as community discovery, link prediction, and spam filtering \cite{tcReview}.
The TC problem is not computationally hard, but it is memory-bandwidth intensive and thus time-consuming. As a result, researchers from both academia and industry have proposed many TC acceleration methods, ranging from sequential to parallel, single-machine to distributed, and exact to approximate.
From the computing hardware perspective, these acceleration strategies are generally executed on CPUs, GPUs or FPGAs, and are based on the von Neumann architecture \cite{tcReview,XiongTCCPGPU,XiongTCFPGA}.
However, because most graph processing algorithms have a low computation-to-memory ratio and highly random data access patterns, there are frequent data transfers between the computational units and the memory components, which consume a large amount of time and energy.
The in-memory computing paradigm performs computation where the data reside. It can save most of the off-chip data communication energy and latency by exploiting the large inherent bandwidth and parallelism of the internal memory \cite{MutluDRAM,DBLP:conf/dac/LiXZZLX16}. As a result, in-memory computing has emerged as a viable way to carry out computationally-expensive and memory-intensive tasks \cite{LiBingOverview,FanAligns}.
This becomes even more promising when integrated with emerging non-volatile STT-MRAM memory technologies. This integration, called Processing-In-MRAM (PIM), offers fast write speed, low write energy, and high write endurance, among many other benefits \cite{wang2018current,DBLP:journals/tvlsi/JainRRR18}.
In the literature, there have been some explorations of in-memory graph algorithm acceleration \cite{ChenHPCA,FanGraphs,WangYuASPDAC,QianMicro}; however, existing TC algorithms, including the intersection-based and the matrix-multiplication-based ones, cannot be directly implemented in memory. For large sparse graphs, a highly efficient PIM architecture, efficient graph data compression, and data mapping mechanisms are all critical for the efficiency of PIM acceleration. Although there are some compression formats for sparse graphs, such as compressed sparse column (CSC), compressed sparse row (CSR), and coordinate list (COO) \cite{ChenHPCA}, these representations cannot be directly applied to in-memory computation either.
In this paper, we propose and design the first in-memory TC accelerator that overcomes the above barriers.
Our main contributions can be summarized as follows:
\begin{itemize}
\item We propose a novel TC method that uses massive bitwise operations to enable in-memory implementations.
\item We propose strategies for data reuse and exchange, and data slicing for efficient graph data compression and mapping onto in-memory computation architectures.
\item We build a TC accelerator with the sparsity-aware processing-in-MRAM architecture. A device-to-architecture co-simulation demonstrates highly encouraging results.
\end{itemize}
The rest of the paper is organized as follows:
Section~\ref{sec:preliminary} provides some preliminary knowledge of TC and in-memory computing.
Section~\ref{sec:tc} introduces the proposed TC method with bitwise operations, and Section~\ref{sec:pimArch} elaborates a sparsity-aware processing-in-MRAM architecture which enables highly efficient PIM accelerations.
Section~\ref{sec:exper} demonstrates the experimental results and Section~\ref{sec:conclusion} concludes.
\section{Preliminary}\label{sec:preliminary}
\subsection{Triangle Counting}
Given a graph, the triangle counting (TC) problem seeks to determine the number of triangles it contains. The sequential algorithms for TC can be classified into two groups.
In the {matrix multiplication based algorithms}, a triangle is viewed as a closed path of length three, namely a path along three edges that begins and ends at the same vertex. If $A$ is the adjacency matrix of graph $G$, $A^3[i][i]$ represents the number of paths of length three beginning and ending at vertex $i$. Given that a triangle has three vertices and will be counted for each vertex, and the graph is undirected (that is, a triangle $i-p-q-i$ will also be counted as $i-q-p-i$), the number of triangles in $G$ can be obtained as $trace(A^3)/6$, where $trace$ is the sum of the elements on the main diagonal of a matrix.
In the {set intersection based algorithms}, the algorithm iterates over each edge and finds the common elements of the adjacency lists of its head and tail vertices.
Many CPU-, GPU- and FPGA-based optimization techniques have been proposed \cite{tcReview,XiongTCCPGPU,XiongTCFPGA}. These works show promising results for accelerating TC; however, these strategies all suffer from the performance and energy bottlenecks caused by the significant amount of data transfers in TC.
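As a toy illustration of the two classical formulations (not taken from the cited works), the following Python sketch counts the triangles of a small example graph both as $trace(A^3)/6$ and by edge-wise neighbour-set intersection; the graph and all variable names are our own choices.
\begin{verbatim}
import numpy as np

# Toy undirected graph: 0-1, 0-2, 1-2, 1-3, 2-3 (two triangles).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Matrix-multiplication-based count: trace(A^3)/6.
tc_matmul = np.trace(np.linalg.matrix_power(A, 3)) // 6

# Set-intersection-based count: common neighbours of each edge's endpoints;
# divide by 3 because every triangle is found once per edge.
adj = [set(np.flatnonzero(A[v])) for v in range(n)]
tc_intersect = sum(len(adj[i] & adj[j]) for i, j in edges) // 3

print(tc_matmul, tc_intersect)   # both print 2
\end{verbatim}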
\subsection{In-Memory Computing with STT-MRAM}
STT-MRAM is a promising candidate for next-generation main memory because of its properties such as near-zero leakage, non-volatility, high endurance, and compatibility with the CMOS manufacturing process \cite{wang2018current}. In particular, prototype STT-MRAM chip demonstrations and commercial MRAM products have been made available by companies such as Everspin and TSMC.
STT-MRAM stores data as magnetic resistance states instead of using conventional charge-based storage and access. This enables MRAM to provide inherent computing capabilities for bitwise logic with minute changes to the peripheral circuitry \cite{DBLP:journals/tvlsi/JainRRR18}\cite{yang2018exploiting}.
As the left part of Fig.~\ref{fig:cim} shows, a typical STT-MRAM bit-cell consists of an access transistor and a Magnetic Tunnel Junction (MTJ), which is controlled by bit-line (BL), word-line (WL) and source-line (SL).
The relative magnetic orientations of pinned ferromagnetic layer (PL) and free ferromagnetic layer (FL) can be stable in parallel (\texttt{P} state) or anti-parallel (\texttt{AP} state), corresponding to low resistance ($R_{\rm P}$) and high resistance ($R_{\rm AP}$, $R_{\rm AP}>R_{\rm P}$), respectively.
The \texttt{READ} operation is done by enabling the WL signal, applying a voltage $V_{\rm read}$ across BL and SL, and sensing the current ($I_{\rm P}$ or $I_{\rm AP}$) that flows through the MTJ. By comparing the sensed current with a reference current ($I_{\rm ref}$), the data stored in the MTJ cell (logic `0' or logic `1') can be read out.
The \texttt{WRITE} operation is performed by enabling WL and then applying an appropriate voltage ($V_{\rm write}$) across BL and SL to pass a current that is greater than the critical MTJ switching current.
To perform bitwise logic operation, as demonstrated in the right part of Fig.~\ref{fig:cim}, by simultaneously enabling $WL_i$ and $WL_j$, then applying $V_{\rm read}$ across $BL_k$ and $SL_k$ ($k \in [0,n-1]$), the current that feeds into the $k$-th sense amplifier (SA) is a summation of the currents flowing through $MTJ_{i,k}$ and $MTJ_{j,k}$, namely $I_{i,k}+I_{j,k}$.
With different reference sensing currents, various logic functions of the enabled word lines can be implemented.
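As a purely behavioural illustration of the current-summation sensing described above (not a circuit model, and with arbitrary current values), the following Python sketch assumes that logic `1' is stored in the high-resistance AP state, so that only the all-`1' case draws a summed current below a reference placed between the (1,1) and (0,1) levels, realizing a bitwise \texttt{AND}.
\begin{verbatim}
I_P, I_AP = 2.0, 1.0   # illustrative read currents of the P ('0') and AP ('1') states

def sense_and(bit_i, bit_j):
    # Two word-lines enabled at once: the sense amplifier sees the summed current.
    current = (I_AP if bit_i else I_P) + (I_AP if bit_j else I_P)
    # Reference between the (1,1) level and the (0,1)/(1,0) level -> bitwise AND.
    I_ref = (2 * I_AP + (I_P + I_AP)) / 2
    return int(current < I_ref)

assert [sense_and(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
\end{verbatim}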
\begin{figure}[t]
\centering
\includegraphics[width = 0.9\linewidth]{cim.pdf}
\caption{Typical STT-MRAM bit-cell and paradigm of computing in STT-MRAM array.}
\label{fig:cim}
\end{figure}
\section{Triangle Counting with Bitwise Operations}\label{sec:tc}
In this section, we seek to perform TC with massive bitwise operations, which is the enabling technique for the in-memory TC accelerator.
Let $A$ be the adjacency matrix representation of an undirected graph $G(V,E)$, where $A[i][j]\in \{0,1\}$ indicates whether there is an edge between vertices $i$ and $j$.
If we compute $A^2=A*A$, then the value of $A^2[i][j]$ represents the number of distinct paths of length two between vertices $i$ and $j$.
If there is an edge between vertices $i$ and $j$, and $i$ can also reach $j$ through a path of length two with intermediate vertex $k$, then vertices $i$, $j$, and $k$ form a triangle.
As a result, the number of triangles in $G$ is equal to the number of non-zero elements ($nnz$) in $A \cap A^2$ (the symbol `$\cap$' defines element-wise multiplication here), namely
\begin{equation}\label{equ:eq1}
TC(G)=nnz(A \cap A^2)
\end{equation}
Since $A[i][j]$ is either zero or one, we have
\begin{equation}\label{equ:eq2}
(A\cap A^2)[i][j]=
\begin{cases}
0, & \text{if}\ A[i][j]=0;\\
A^2[i][j], & \text{if}\ A[i][j]=1.
\end{cases}
\end{equation}
According to Equation~(\ref{equ:eq2}),
\begin{equation}\label{equ:eq3}
\begin{split}
nnz(A \cap A^2)&=\sum\sum\nolimits_{A[i][j]=1}A^2[i][j]\\
\end{split}
\end{equation}
Because the elements in $A$ are either zero or one, the bitwise Boolean \texttt{AND} result equals that of the arithmetic multiplication, thus
\begin{equation}\label{equ:eq4}
\begin{split}
A^2[i][j]& =\sum_{k=0}^{n} A[i][k]*A[k][j]=\sum_{k=0}^{n} {AND}(A[i][k],A[k][j])\\
& ={BitCount}({AND}(A[i][*],A[*][j]^T))
\end{split}
\end{equation}
in which \texttt{BitCount} returns the number of `1's in a vector consisting of `0' and `1', for example, $BitCount(0110)=2$.
Combining equations ~(\ref{equ:eq1}), (\ref{equ:eq3}) and (\ref{equ:eq4}), we have
\begin{equation}
\begin{split}
TC(G)&={BitCount}({AND}(A[i][*],A[*][j]^T)),\\
&\quad \text{in which }A[i][j]=1
\end{split}
\end{equation}
Therefore, TC can be completed by only \texttt{AND} and \texttt{BitCount} operations (massive for large graphs).
Specifically, for each non-zero element $A[i][j]=1$, the $i$-th row ($R_i=A[i][*]$) and the $j$-th column ($C_j=A[*][j]^T$) are combined with an \texttt{AND} operation, and the \texttt{AND} result is then sent to a bit counter module for accumulation.
Once all the non-zero elements are processed as above, the value in the accumulated \texttt{BitCount} is exactly the number of triangles in the graph.
\begin{figure}[htb]
\centering
\includegraphics[width = 0.85\linewidth]{TCProcedure1.pdf}
\caption{Demonstrations of triangle counting with \texttt{AND} and \texttt{BitCount} bit-wise operations.}
\label{fig:TCProcedure}
\end{figure}
Fig.~\ref{fig:TCProcedure} demonstrates an illustrative example for the proposed TC method.
As the left part of the figure shows, the graph has four vertices, five edges and two triangles ($0-1-2-0$ and $1-2-3-1$), and the adjacency matrix is given.
The non-zero elements in $A$ are $A[0][1]$, $A[0][2]$, $A[1][2]$, $A[1][3]$, and $A[2][3]$.
For $A[0][1]$, row $R_0$=`0110' and column $C_1$=`1000' are combined with an \texttt{AND} operation, and the \texttt{AND} result `0000' is sent to the bit counter, yielding a count of zero. Similar operations are performed on the other four non-zero elements.
After the execution of the last non-zero element $A[2][3]$ is finished, the accumulated \texttt{BitCount} result is two, thus the graph has two triangles.
The proposed TC method has the following advantages. First, it avoids time-consuming multiplications: when the operands are either zero or one, the multiplication can be implemented with \texttt{AND} logic.
Second, the proposed method does not need to store the intermediate results that are larger than one (such as the elements in $A^2$), which are cumbersome to store and calculate.
Third, it does not need complex control logic.
Given the above three advantages, the proposed TC method is suitable for in-memory implementations.
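To make the procedure concrete, a minimal Python sketch of the \texttt{AND}/\texttt{BitCount} formulation on the toy graph of Fig.~\ref{fig:TCProcedure} is given below; following the illustrative example, each edge appears once in an upper-triangular adjacency matrix, and all names are our own.
\begin{verbatim}
n = 4
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]     # i < j only
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1

triangles = 0
for i, j in edges:                          # iterate over the non-zero elements
    row_i = A[i]                            # R_i = A[i][*]
    col_j = [A[k][j] for k in range(n)]     # C_j = A[*][j]^T
    and_result = [r & c for r, c in zip(row_i, col_j)]
    triangles += sum(and_result)            # BitCount of the AND result

print(triangles)                            # prints 2
\end{verbatim}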
\section{Sparsity-Aware Processing-In-MRAM Architecture}\label{sec:pimArch}
To alleviate the memory bottleneck caused by frequent data transfers in traditional TC algorithms, we implement an in-memory TC accelerator based on the novel TC method presented in the previous section.
Next, we will discuss several dataflow mapping techniques to minimize space requirements, data transfers and computation in order to accelerate the in-memory TC computation.
\subsection{Data Reuse and Exchange}
Recall that the proposed TC method iterates over each non-zero element in the adjacency matrix and loads the corresponding rows and columns into the computational memory for the \texttt{AND} operation, followed by a \texttt{BitCount} process. For a given size of the computational memory array, it is important to reduce unnecessary space usage and memory operations.
We observe that for \texttt{AND} computation, the non-zero elements in a row reuse the same row, and the non-zero elements in a column reuse the same column. The proposed data reuse mechanism is based on this observation.
Assume that the non-zero elements are iterated over row by row; then the currently processed row only needs to be loaded once, while the corresponding columns are loaded in sequence.
Once all the non-zero elements in a row have been processed, this row will no longer be used in future computation, thus we can overwrite this row by the next row to be processed.
However, the columns might be used again by the non-zero elements from the other rows.
Therefore, before loading a certain column into memory for computation, we first check whether this column has already been loaded; if not, the column is loaded into a spare memory space. In case the memory is full, we need to select one column to be replaced by the current column. We choose the least recently used (LRU) column for replacement; more optimized replacement strategies are also possible.
As demonstrated in Fig.~\ref{fig:TCProcedure}, in step $1$ and step $2$, the two non-zero elements $A[0][1]$ and $A[0][2]$ of row $R_0$ are processed respectively, and corresponding columns $C_1$ and $C_2$ are loaded to memory.
Next, while processing $A[1][2]$ and $A[1][3]$, $R_1$ overwrites $R_0$ and reuses the existing $C_2$ in step $3$, and loads $C_3$ in step $4$.
In step $5$, to process $A[2][3]$, $R_1$ is overwritten by $R_2$, and $C_3$ is reused.
Overwriting the rows and reusing the columns can effectively reduce unnecessary space utilization and memory \texttt{WRITE} operations.
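A minimal software sketch of this column-reuse policy is given below; it only illustrates the LRU bookkeeping, and the class and function names are hypothetical rather than part of the hardware design.
\begin{verbatim}
from collections import OrderedDict

class ColumnCache:
    """LRU cache of column data held in the computational array."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.columns = OrderedDict()          # column index -> column data
        self.hits = self.misses = 0

    def load(self, j, fetch):
        """Return column j, loading (and possibly evicting) on a miss."""
        if j in self.columns:
            self.columns.move_to_end(j)       # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.columns) >= self.capacity:
                self.columns.popitem(last=False)   # evict the LRU column
            self.columns[j] = fetch(j)
        return self.columns[j]
\end{verbatim}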
\subsection{Data Slicing}
To utilize the sparsity of the graph to reduce the memory requirement and unnecessary computation, we propose a data slicing strategy for graph data compression.
Assume $R_i$ is the $i$-th row, and $C_j$ is the $j$-th column of the adjacency matrix $A$ of graph $G(V,E)$. The slice size is $|S|$ (each slice contains $|S|$ bits), then each row and column has $\lceil\frac{|V|}{|S|}\rceil$ number of slices.
The $k$-th slice in $R_i$, which is represented as $R_i S_k$, is the set $\{A[i][k*|S|],\cdots,A[i][(k+1)*|S|-1]\}$.
We define that slice $R_i S_k$ is \textbf{\textit{valid}} if and only if $\exists A[i][t] \in R_i S_k,A[i][t]=1,t\in [k*|S|,(k+1)*|S|-1]$.
Recall that in our proposed TC method, for each non-zero element in the adjacency matrix, we compute the \texttt{AND} result of the corresponding row and column.
With row and column slicing, we perform the \texttt{AND} operation in units of slices. For each $A[i][j]=1$, we only process the valid slice pairs; namely, only when both the row slice $R_i S_k$ and the column slice $C_j S_k$ are valid do we load the slice pair $(R_iS_k,C_jS_k)$ into the computational memory array and perform the \texttt{AND} operation.
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.82\linewidth]{rs1.pdf}
\caption{Sparsity-aware data slicing and mapping.}
\label{fig:rowslicing}
\end{figure}
Fig.~\ref{fig:rowslicing} demonstrates an example: after row and column slicing, only the slice pairs $(R_iS_3,C_jS_3)$ and $(R_iS_5,C_jS_5)$ are valid; therefore, we only load these slices for the \texttt{AND} computation. This scheme can reduce the needed computation significantly, especially for large sparse graphs.
\textit{Memory requirement of the compressed graph data.}
With the proposed row and column slicing strategy, we need to store the index of valid slices and the detailed data information of these slices.
Assuming that the number of valid slices is $N_{VS}$, the slice size is $|S|$, and we use an integer (four Bytes) to store each valid slice index, then the needed space for overall valid slice index is $IndexLength = N_{VS} \times 4$ Bytes.
The needed space to store the data information of valid slices is $DataLength = N_{VS} \times |S| / 8$ Bytes.
Therefore, the overall needed space for graph $G$ is $N_{VS} \times (|S| / 8 + 4)$ Bytes, which is determined by the sparsity of $G$ and the slice size.
In this paper, we set $|S|=64$ in the experimental result section.
Given that most graphs are highly sparse, the needed space to store the graph can be trivial. \textbf{Moreover, the proposed format of compressed graph data is friendly for directly mapping onto the computational memory arrays to perform in-memory logic computation.}
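The slicing and the space estimate above can be summarized by the following Python sketch; it is an illustration of the bookkeeping only (with hypothetical function names), not the hardware mapping itself.
\begin{verbatim}
SLICE_BITS = 64                      # |S| used in the experiments

def slice_row(row_bits):
    """row_bits: list of 0/1 of length |V|; return {slice index: valid slice}."""
    valid = {}
    for k in range(0, len(row_bits), SLICE_BITS):
        s = row_bits[k:k + SLICE_BITS]
        if any(s):                   # keep only valid (non-all-zero) slices
            valid[k // SLICE_BITS] = s
    return valid

def compressed_bytes(num_valid_slices, slice_bits=SLICE_BITS):
    # 4-byte index plus slice_bits/8 bytes of data per valid slice:
    # N_VS * (|S|/8 + 4) Bytes, as in the text.
    return num_valid_slices * (slice_bits // 8 + 4)
\end{verbatim}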
\begin{algorithm}[t]
\footnotesize
\KwIn{Graph $G(V,E)$.}
\KwOut{The number of triangles in $G$.}
$TC\_G$ = 0\;
Represent $G$ with adjacent matrix $A$\;
\For {each edge $e\in E$ with $A[i][j]=1$}{
Partition $R_i$ into slices\;
Partition $C_j$ into slices\;
\For {each valid slice pair ($R_iS_k$,$C_jS_k$)}{
$TC\_G$ += \textbf{COMPUTE} ($R_iS_k$,$C_jS_k$)\;
}
}
\textbf{return} $TC\_G$ as the number of triangles in $G$.\\
----------------------------------------\\
\textbf{COMPUTE} ($Slice1$, $Slice2$)
{\\
load $Slice1$ into memory\;
\If {$Slice2$ has not been loaded}{
\eIf {there is not enough space}{
Replace least recently used slice with $Slice2$\;
}{Load $Slice2$ into memory\;}
}
\textbf{return} $BitCount(AND(Slice1,Slice2))$.
}
\caption{TCIM: Triangle Counting with Processing-In-MRAM Architecture.}
\label{alg:dataMapping}
\end{algorithm}
\subsection{Processing-In-MRAM Architecture}
\begin{figure*}[t]
\centering
\includegraphics[width = 0.75\linewidth]{OverallArch.pdf}
\caption{Overall processing-in-MRAM architecture.}
\label{fig:overallArch}
\end{figure*}
Fig.~\ref{fig:overallArch} demonstrates the overall architecture of processing-in-MRAM.
The graph data will be sliced and compressed, and represented by the valid slice index and corresponding slice data.
According to the valid slice indexes in the data buffer, we load the corresponding valid slice pairs into computational STT-MRAM array for bitwise computation.
The storage status of STT-MRAM array (such as which slices have been loaded) is also recorded in the data buffer and utilized for data reuse and exchange.
As for the computational memory array organization, each chip consists of multiple banks and works as a computational array.
Each bank is comprised of multiple computational memory sub-arrays, which are connected to a global row decoder and a shared global row buffer. The read circuit and write driver of the memory array are modified to support bitwise logic functions. Specifically, the operands are all stored in different rows of the memory arrays. The rows associated with the operands are activated simultaneously for computing. Sense amplifiers are enhanced with \texttt{AND} reference circuits to realize either \texttt{READ} or \texttt{AND} operations. By generating $R_\text{ref-AND}\in (R_\text{P-P},R_\text{P-AP})$, the output of the sense amplifier is the \texttt{AND} result of the data stored in the enabled WLs.
\subsection{Pseudo-code for In-Memory TC Acceleration}
Algorithm~\ref{alg:dataMapping} demonstrates the pseudo-code for TC accelerations with the proposed processing-in-MRAM architecture.
It iterates over each edge of the graph, partitions the corresponding rows and columns into slices, then loads the valid slice pairs into the computational memory for \texttt{AND} and \texttt{BitCount} computation. In case there is not enough memory space, it adopts an LRU strategy to replace the least recently used slice.
\section{Experimental Results}\label{sec:exper}
\subsection{Experimental Setup}
To validate the effectiveness of the proposed approaches, comprehensive device-to-architecture evaluations along with two in-house simulators are developed.
At the device level, we jointly use the Brinkman model and Landau-Lifshitz-Gilbert (LLG) equation to characterize MTJ \cite{yang2015radiation}. The key parameters for MTJ simulation are demonstrated in Table~\ref{tab:parameter}.
For the circuit-level simulation, we design a Verilog-A model for 1T1R STT-MRAM device, and characterize the circuit with $45$nm FreePDK CMOS library.
We design a bit counter module in Verilog HDL to obtain the number of non-zero elements in a vector. Specifically, we split the vector and feed each $8$-bit sub-vector into an $8$-to-$256$ look-up table to get its number of non-zero elements, then sum up the counts of all sub-vectors. We synthesize the module with Synopsys tools and conduct post-synthesis simulation based on the $45$nm FreePDK.
After getting the device level simulation results, we integrate the parameters in the open-source NVSim simulator \cite{NVSim} and obtain the memory array performance.
In addition, we develop a simulator in Java for the processing-in-MRAM architecture, which simulates the proposed function mapping, data slicing and data mapping strategies.
Finally, a behavioural-level simulator is developed in Java, which takes the architectural-level results and the memory array performance to calculate the latency and energy spent by the TC in-memory accelerator.
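For readers who prefer a software reference, the look-up-table idea behind the bit counter can be mimicked in a few lines of Python; this is only a behavioural stand-in for the synthesized Verilog module, not its implementation.
\begin{verbatim}
LUT = [bin(b).count("1") for b in range(256)]    # 8-to-256 look-up table

def bit_count(and_result_bytes):
    """and_result_bytes: the AND result packed as bytes."""
    return sum(LUT[b] for b in and_result_bytes)

assert bit_count(bytes([0b0110, 0xFF])) == 2 + 8
\end{verbatim}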
To provide a solid comparison with other accelerators, we select real-world graphs from the SNAP dataset \cite{snapnets} (see TABLE~\ref{tab:graphpara}), and run the comparative baseline intersection-based algorithm on an Inspur blade system with the Spark GraphX framework on a single-core Intel E5430 CPU. Our TC in-memory acceleration algorithm also runs on a single-core CPU, and the STT-MRAM computational array is set to $16$ MB.
\begin{table}[htbp]
\setlength{\tabcolsep}{14pt}
\footnotesize
\caption{Key parameters for MTJ simulation.}
\label{tab:parameter}
\centering
\begin{tabular}{l|l}
\specialrule{0.8pt}{0pt}{0pt}
Parameter & Value \\ \hline
MTJ Surface Length & $40$ $nm$ \\
MTJ Surface Width & $40$ $nm$ \\
Spin Hall Angle & $0.3$ \\
Resistance-Area Product of MTJ & $10^{-12}$ $\Omega \cdot m^2$ \\
Oxide Barrier Thickness & $0.82$ $nm$ \\
TMR & $100\%$ \\
Saturation Field & $10^6$ $A/m$ \\
Gilbert Damping Constant & $0.03$ \\
Perpendicular Magnetic Anisotropy & $4.5 \times 10^5$ $A/m$ \\
Temperature & $300 K$ \\
\specialrule{0.8pt}{0pt}{0pt}
\end{tabular}
\end{table}
\begin{table}[t]
\setlength{\tabcolsep}{10pt}
\footnotesize
\caption{Selected graph dataset.}
\label{tab:graphpara}
\centering
\begin{tabular}{l|rrr}
\specialrule{0.8pt}{0pt}{0pt}
Dataset & \# Vertices & \# Edges & \# Triangles \\ \hline
ego-facebook & 4039 & 88234 & 1612010 \\
email-enron & 36692 & 183831 & 727044 \\
com-Amazon & 334863 & 925872 & 667129 \\
com-DBLP & 317080 & 1049866 & 2224385 \\
com-Youtube & 1134890 & 2987624 & 3056386 \\
roadNet-PA & 1088092 & 1541898 & 67150 \\
roadNet-TX & 1379917 & 1921660 & 82869 \\
roadNet-CA & 1965206 & 2766607 & 120676 \\
com-LiveJournal & 3997962 & 34681189 & 177820130 \\
\specialrule{0.8pt}{0pt}{0pt}
\end{tabular}
\end{table}
\subsection{Benefits of Data Reuse and Exchange}
TABLE~\ref{tab:sliceDataSize} shows the memory space required for the bitwise computation. For example, the largest graph {\it com-lj} will need $16.8$ MB without incurring any data exchange. On average, only $18$ KB per $1000$ vertices is needed for in-memory computation.
\begin{table}[t]
\setlength{\tabcolsep}{6pt}
\footnotesize
\caption{Valid slice data size (MB).}
\label{tab:sliceDataSize}
\centering
\begin{tabular}{lr|lr|lr}
\specialrule{0.8pt}{0pt}{0pt}
ego-facebook & 0.182 & com-DBLP & 7.6 & roadNet-TX & 12.38 \\
email-enron & 1.02 & com-Youtube & \bf{16.8} & roadNet-CA & \bf{16.78} \\
com-Amazon & 7.4 & roadNet-PA & 9.96 & com-lj & \bf{16.8} \\
\specialrule{0.8pt}{0pt}{0pt}
\end{tabular}
\end{table}
When the STT-MRAM computational memory size is smaller than those listed in TABLE~\ref{tab:sliceDataSize}, data exchange will happen. For example, with $16$ MB, the three largest graphs will have to perform data exchange, as shown in Fig.~\ref{fig:datareuse}. In this figure, we also list the percentages of data hits (average $72\%$) and data misses (average $28\%$). Recall that the first time a data slice is loaded, it is always a miss, and a data hit implies that the slice data has already been loaded. This shows that the proposed data reuse strategy saves on average $72\%$ of the memory \texttt{WRITE} operations.
\begin{figure}[htb]
\centering
\includegraphics[width = 0.9\linewidth]{dataReuse.pdf}
\caption{Percentages of data hit/miss/exchange.}
\label{fig:datareuse}
\end{figure}
\subsection{Benefits of Data Slicing}
As shown in TABLE~\ref{tab:validSlice}, the average percentage of valid slices in the five largest graphs is only $0.01\%$. Therefore, the proposed data slicing strategy could significantly reduce the needed computation by $99.99\%$.
\begin{table}[htbp]
\setlength{\tabcolsep}{4pt}
\caption{Percentage of valid slices.}
\label{tab:validSlice}
\centering
\begin{tabular}{lr||lr||lr}
\specialrule{0.8pt}{0pt}{0pt}
ego-facebook & 7.017\% & com-DBLP & 0.036\% & roadNet-TX & 0.010\% \\
email-enron & 1.607\% & com-Youtube & 0.013\% & roadNet-CA & 0.007\% \\
com-Amazon & 0.014\% & roadNet-PA & 0.013\% & com-lj & 0.006\% \\
\specialrule{0.8pt}{0pt}{0pt}
\end{tabular}
\end{table}
\subsection{Performance and Energy Results}
TABLE~\ref{tab:graphperf} compares the performance of our proposed in-memory TC accelerator against a CPU baseline implementation, and the existing GPU and FPGA accelerators.
One can see a dramatic reduction of the execution time in the last two columns compared with the previous three columns. Indeed, even without PIM, we achieve an average $53.7\times$ speedup over the baseline CPU implementation owing to data slicing, reuse, and exchange. With PIM, another $25.5\times$ acceleration is obtained.
Compared with the GPU and FPGA accelerators, the improvement is $9\times$ and $23.4\times$, respectively. It is important to mention that we achieve this with a single-core CPU and $16$ MB STT-MRAM computational array.
\begin{table}[htbp]
\setlength{\tabcolsep}{5pt}
\caption{Runtime (in seconds) comparison among our proposed methods, CPU, GPU and FPGA implementations.}
\label{tab:graphperf}
\centering
\begin{tabular}{l|r|r|r|r|r}
\specialrule{0.8pt}{0pt}{0pt}
\multirow{2}{*}{Dataset} & \multirow{2}{*}{CPU} & \multirow{2}{*}{GPU \cite{XiongTCFPGA}} & \multirow{2}{*}{FPGA \cite{XiongTCFPGA}} & \multicolumn{2}{c}{This Work}\\ \cline{5-6}
& & & & w/o PIM & TCIM \\ \hline
ego-facebook & 5.399 & 0.15 & 0.093 & 0.169 & 0.005 \\
email-enron & 9.545 & 0.146 & 0.22 & 0.8 & 0.021 \\
com-Amazon & 20.344 & N/A & N/A & 0.295 & 0.011 \\
com-DBLP & 20.803 & N/A & N/A & 0.413 & 0.027 \\
com-Youtube & 61.309 & N/A & N/A & 2.442 & 0.098 \\
roadNet-PA & 77.320 & 0.169 & 1.291 & 0.704 & 0.043 \\
roadNet-TX & 94.379 & 0.173 & 1.586 & 0.789 & 0.053 \\
roadNet-CA & 146.858 & 0.18 & 2.342 & 3.561 & 0.081 \\
com-LiveJournal & 820.616 & N/A & N/A & 33.034 & 2.006 \\
\specialrule{0.8pt}{0pt}{0pt}
\end{tabular}
\end{table}
As for the energy savings, as shown in Fig.~\ref{fig:energy}, our approach consumes $20.6\times$ less energy than the energy-efficient FPGA implementation \cite{XiongTCFPGA}, benefiting from the non-volatile property of STT-MRAM and the in-situ computation capability.
\begin{figure}[htbp]
\centering
\includegraphics[width = 1.0\linewidth]{energy.pdf}
\caption{Normalized results of energy consumption for TCIM with respect to FPGA.}
\label{fig:energy}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper, we propose a new triangle counting (TC) method, which uses massive bitwise logic computation, making it suitable for in-memory implementations.
We further propose a sparsity-aware processing-in-MRAM architecture for efficient in-memory TC acceleration: by data slicing, the computation can be reduced by $99.99\%$; meanwhile, the compressed graph data can be directly mapped onto the STT-MRAM computational memory array for bitwise operations, and the proposed data reuse and exchange strategy reduces the memory \texttt{WRITE} operations by $72\%$.
We use device-to-architecture co-simulation to demonstrate that the proposed TC in-memory accelerator outperforms the state-of-the-art GPU and FPGA accelerations by $9\times$ and $23.4\times$, respectively, and achieves a $20.6\times$ energy efficiency improvement over the FPGA accelerator.
Besides, the proposed graph data compression and data mapping strategies are not restricted to STT-MRAM or TC problem. They can also be applied to other in-memory accelerators with other non-volatile memories.
\bibliographystyle{unsrt}
\section{Introduction}
For many decades, quantum phase transitions (QPT) have been the subject of intense studies in several areas of physics~\cite{Sachdev2011}. In a closed system with unitary dynamics, the hallmark of an equilibrium QPT is the non-analytic behavior of an observable upon changing a physical parameter~\cite{Vojta2000,Greiner2002,Brown2017}.
In recent years, a new frontier has emerged in many-body physics, investigating non-equilibrium phase transitions. In that regard, and as a suitable testbed, driven-dissipative quantum systems and their phase transitions have been the subject of many studies. Observation of exciton-polariton BEC in semiconductors~\cite{Deng2010,Carusotto2013} and cuprites~\cite{Bao2019} and their superfluidity~\cite{Amo2009,Lerario2017}, probing of first-order phase transitions, dynamical hysteresis, and the Kibble-Zurek quench mechanism in microcavities~\cite{Rodriguez2017,Fink2018}, and demonstration of dynamical bifurcation and optical bistability in circuit QED~\cite{Siddiqi2005,Yin2012,Liu2017,Fitzpatrick2017,Elliott2018, Andersen2020} are a few examples of the rapidly growing body of experimental explorations of such physics in different platforms.
In parallel, some general aspects of non-equilibrium QPT have been investigated theoretically~\cite{Diehl2010,Torre2013}, particularly in coupled spins~\cite{Kessler2012}, interacting bosonic systems~\cite{Casteels2016,Boite2017,Casteels2017, Verstraelen2020}, and semiconductor microcavities~\cite{Carusotto2005}. Due to the coupling to a bath, driven-dissipative dynamics are not generated by a Hermitian Hamiltonian but by a superoperator, whose gapped spectrum signifies a QPT~\cite{Drummond1980,Drummond1981,Carmichael2015}. In spite of all this progress, owing to a notably larger parameter space compared to closed systems, dissipative phase transitions (DPT) necessitate further investigation. A natural question, for example, concerns the crossover between the DPT and the phase transition in the thermodynamic limit (TD). Although open systems are, due to their constant interaction with the environment, inherently far from thermodynamic equilibrium, there could still be parameter ranges where the system asymptotically approaches the mean-field (MF) limit, where quantum correlations and fluctuations can be ignored.
To be more specific, in this paper we focus our studies on a driven-dissipative three-mode bosonic system subject to Kerr-type intra- and inter-modal interactions. To keep our results and discussions general, we do not specify the nature of the bosonic system, but let us remark that such a setup could be realized in various platforms, including cavity Rydberg polaritons~\cite{Jia2018,Schine2019,Clark2020}, excitons in 2D materials and semiconductors~\cite{Togan2018,Tan2020}, microwave photons in superconducting circuits~\cite{Materise2018}, or interacting photons in optical cavities~\cite{Klaers2010}.
Starting from the MF description, we first explore the phase transitions of the system as a function of its parameters, i.e. pump, detuning, interaction strength, and bare-cavity mode spacing. We show that, depending on the bare cavity features, the phase transition can be either continuous (2$^{nd}$-order phase transition) or abrupt (1$^{st}$-order phase transition), the latter corresponding to an optical multi-stability, as studied for planar microcavities~\cite{Wouters2007B}. In both cases, the phase transition manifests itself through a non-zero amplitude of the unpumped modes and is related to the dissipative gap closure of the Liouvillian. We show that within this range and up to the MF level, there is an unconditionally squeezed mode at the output, attributed to the spontaneous breaking of the local U(1)- and time-translational symmetry (TTS). While in the TD limit the diverging quadrature of this state is related to the well-known, freely propagating Goldstone mode~\cite{Wouters2007,Leonard2017,Guo2019}, employing the Wigner phase-space representation we show that in the quantum limit this mode becomes susceptible to fluctuations and short-lived. Since the Wigner formalism allows us to properly include the quantum noise, we have been able to explore the phase diagram more accurately and beyond the MF description. That also helps to delineate the validity range of MF when it comes to the study of QPT. In spite of its simplicity, the investigated system reveals important dynamics of driven-dissipative bosonic gases and could be a quintessential model for further exploration of SSB in open many-body systems.
The paper is organized as follows; in Section~\ref{sec:problem} we present the general problem, its MF description in the form of a generalized Gross-Pitaevskii equation (GPE) and the low-energy excitation spectrum determined via Bogoliubov treatment. We also summarize the stochastic formulation of the problem based on the truncated Wigner phase-space method.
In Section~\ref{sec:results} we present the numerical results of the three-mode cavity where various phase transitions are investigated and discussed. Finally, the last section summarizes the main results of the paper and sets the stage for future directions that can be explored in such systems.
\section{Problem Formulation}\label{sec:problem}
Consider a three-mode cavity with the following Hamiltonian describing the interaction dynamics between the modes ($\hat{a}_{1,2,3}$)
\begin{align}~\label{eq:3-mode Hamiltonian}
\hat{H} &= \sum_{m=1}^3 \left(\omega_m \hat{a}_m^\dagger \hat{a}_m + \frac{V_0}{2} \sum_{n=1}^3 \hat{a}_m^\dagger \hat{a}_n^\dagger \hat{a}_m \hat{a}_n\right)\\
&+ V_0 (\hat{a}_2^{\dagger ^2} \hat{a}_1 \hat{a}_3 + \hat{a}_1^\dagger \hat{a}_3^\dagger \hat{a}_2^2),
\end{align}
where $\omega_m$ is the frequency of the $m^{th}$-mode of the bare cavity and $V_0$ is the interaction strength.
A coherent drive at frequency $\omega_L$ excites the $p^{th}$-mode of the cavity at the rate of $\Omega_0$ as
\begin{equation}~\label{eq:coherent drive}
\hat{H}_D = \Omega_0(\hat{a}_p e^{+i\omega_L t} + \hat{a}_p^\dagger e^{-i\omega_L t}).
\end{equation}
Assuming a Markovian single-photon loss for the mode-bath coupling, the following Lindblad master equation describes the evolution of the reduced cavity density matrix $\hat{\rho}$ as
\begin{equation}~\label{eq:master equation}
\frac{d\hat{\rho}}{dt} = -i\left[\hat{H} , \hat{\rho}\right] + \sum_m \gamma_m \left(2\hat{a}_m\hat{\rho}\hat{a}_m^\dagger - \{\hat{a}_m^\dagger \hat{a}_m , \hat{\rho}\}\right),
\end{equation}
where $\hat{H} = \hat{H}_{ph} + \hat{H}_D$ on the RHS describes the unitary dynamics of the system and the second term captures the quantum jumps and losses of the $m^{th}$-cavity field at rate $\gamma_m$.
Equivalently, we can derive the equations of motion for $\hat{a}_m$ operators and describe the dynamics via Heisenberg-Langevin equations as~\cite{Gardiner2004}
\begin{multline}~\label{eq:Heisenberg-Langevin}
\dot{\hat{a}}_m = -i\left(\Delta_m -i\gamma_m \right)\hat{a}_m - iV_0 \sum_{nkl}\eta^{mn}_{kl} \hat{a}_n^\dagger \hat{a}_k \hat{a}_l - i\Omega_0 \delta_{mp} + \\
\sqrt{2\gamma_m} \hat{\xi}_m(t),
\end{multline}
where in the above equation $\Delta_m = \omega_m - \omega_L$ is the frequency of the $m^{th}$-mode in the laser frame, $\eta_{kl}^{mn}$ is the mode-specific prefactor arising from different commutation relations, and $\{\hat{\xi}_m(t)\}$ describe stationary Wiener stochastic processes with zero means and correlations as
\begin{align}~\label{eq:white-noise correlation}
\braket{\hat{\xi}_m^\dagger(t+\tau) \hat{\xi}_n(t)} = n_{th} \delta(\tau) ~\delta_{mn}, \\ \nonumber
\braket{\hat{\xi}_m(t+\tau) \hat{\xi}_n^\dagger (t)} = (1 + n_{th})\delta(\tau) ~\delta_{mn},
\end{align}
$n_{th}$ in the above equations is the number of thermal photons at temperature $T$.
For numerical calculations, the dimension of the relevant (few-photon) Hilbert space grows rapidly with increasing number of modes and particle number. Hence, the direct solution of the density matrix in Eq.~(\ref{eq:master equation}) is only possible for a small number of modes and at a low pumping rate $\Omega_0$. For the quantum Langevin equations in Eq.~(\ref{eq:Heisenberg-Langevin}), the two-body interaction generates an infinite hierarchy of the operator moments, making them intractable as well.
The most straightforward approach is a classical MF treatment, where the correlations are approximated by products of expectation values, i.e., $\braket{\hat{a}_m \hat{a}_n} \approx \braket{\hat{a}_m} \braket{\hat{a}_n} = \alpha_m \alpha_n$. These substitutions simplify the equations of motion of the operators' mean fields in Eq.~(\ref{eq:Heisenberg-Langevin}) to the following set of coupled non-linear equations
\begin{multline}~\label{eq:mean-field equations}
i\dot{\alpha}_m = \left(\Delta_m -i\gamma_m \right)\alpha_m + V_0 \sum_{nkl}\eta^{mn}_{kl} \alpha_n^* \alpha_k \alpha_l + \Omega_0 \delta_{mp}.
\end{multline}
In the steady state, the mean values are determined by setting $\dot{\alpha}_m=0$, which gives an exact description of the operators' $1^{st}$ moments.
In this work, we use the Jacobian matrix to check the dynamical stability of all steady states. Equation~(\ref{eq:mean-field equations}) is a Gross-Pitaevskii-type equation with added drive and dissipation terms.
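As a purely numerical illustration of how Eq.~(\ref{eq:mean-field equations}) can be relaxed to a fixed point and how the Jacobian stability test is carried out, a hedged Python sketch is given below; the interaction term is left as a user-supplied placeholder \texttt{nonlinear}, and all parameter values shown in the usage comment are arbitrary examples rather than those used in the figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, delta, gamma, omega0, pumped, nonlinear):
    alpha = y[:3] + 1j * y[3:]
    drive = np.zeros(3, complex)
    drive[pumped] = omega0
    dalpha = -1j * ((delta - 1j * gamma) * alpha + nonlinear(alpha) + drive)
    return np.concatenate([dalpha.real, dalpha.imag])

def is_stable(fixed_point, args, eps=1e-6):
    # finite-difference Jacobian of the real-valued vector field
    f0 = rhs(0.0, fixed_point, *args)
    J = np.zeros((6, 6))
    for k in range(6):
        dy = np.zeros(6)
        dy[k] = eps
        J[:, k] = (rhs(0.0, fixed_point + dy, *args) - f0) / eps
    return bool(np.all(np.linalg.eigvals(J).real < 0))

# example use (placeholder nonlinearity, arbitrary numbers):
# args = (np.array([-8.0, -3.0, 2.0]), 1.0, 2.0, 1,
#         lambda a: 1.0 * np.sum(np.abs(a)**2) * a)
# sol = solve_ivp(rhs, (0, 200), np.zeros(6), args=args, rtol=1e-8)
# print(is_stable(sol.y[:, -1], args))
\end{verbatim}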
Although the MF provides a good starting point, information about quantum correlations is lost. To improve on this, we replace $\hat{a}_m = \alpha_m + \hat{b}_m$ and linearize Eq.~(\ref{eq:Heisenberg-Langevin}) around the MF determined from the steady state of Eq.~(\ref{eq:mean-field equations}). Defining $\hat{B} = [\hat{b}_m, \hat{b}_m^\dagger]^T$ as the fluctuation-operator vector (with $2N$ components), its time evolution is determined as
\begin{equation}~\label{eq:linearized fluctuations EoM}
\frac{d\hat{B}}{dt} = M\hat{B} + D^{1/2} \hat{\Xi} ,
\end{equation}
where $M$ is the Bogoliubov matrix at the MF $\alpha_m$, $D=\mathrm{diag}(2\gamma_m)$, and $\hat{\Xi}$ is the noise operator vector of the Wiener processes in Eq.~(\ref{eq:Heisenberg-Langevin}).
As shown in Appendix~\ref{app:Covariance MF-Bog}, from $\hat{B}$ one can directly determine the covariance matrix, $\mathrm{C}_B(\omega)$ whose entries are the stationary two-time correlations of the (zero-mean) operators $\hat{B}_i,\hat{B}_j$
\begin{equation}~\label{eq:spectral response}
\Gamma_{ij}(\omega) = \mathcal{F} \braket{\lim_{t\to\infty}\hat{B}_i(t+\tau) \hat{B}_j(t)} = \braket{\Tilde{\hat{B}}_i(\omega) \Tilde{\hat{B}}_j(-\omega)},
\end{equation}
where $\mathcal{F}$ represents the Fourier transform of the correlation w.r.t to the delay $\tau$ and $\Tilde{\hat{B}}_i(\omega)$ is the Fourier transform of $\hat{B}_i(t)$.
Within the Born-Markov approximation, if the 2$^{nd}$-order dynamics is contractive and, in the vicinity of the steady state, dominates over the higher-order terms, then most of the important correlations can be obtained from the linearized Bogoliubov treatment of Eq.~(\ref{eq:linearized fluctuations EoM}). This is a self-consistent criterion, with $M$ being a negative-definite matrix, and is typically satisfied at large particle numbers and weak interactions, as in the TD limit, where the MF treatment is well-justified.
To examine the validity of the MF and the linearization in the quantum limit of a small number of particles, we further employ the Wigner function (WF) representation to express the system dynamics in terms of the analytic quasi-probability distribution $W(\vec{\alpha};t)$~\cite{Wiseman2011,Gardiner2004,Berg2009}. Using Itô calculus, the truncated dynamics of $W$ can be mapped to a set of stochastic differential equations (SDEs) for $\alpha_m$ with the following general form (more details can be found in Appendix~\ref{app:Wigner func.})
\begin{equation}~\label{eq:SDE}
d\alpha_m = A_m dt + \sum_{m'} D_{m,m'} ~ dN_m,
\end{equation}
where $dN_m$ is a complex Wiener process describing a Gaussian white noise.
For any operator $\hat{\mathcal{O}}$, the expectation value of its symmetrically-ordered form, i.e. the equally weighted average of all possible orderings of the $\hat{\mathcal{O}}$ and $\hat{\mathcal{O}}^\dag$, can be obtained as
\begin{equation}\label{expectationvalue-SDE}
\braket{\hat{\mathcal{O}}}_{sym} = \braket{\braket{\mathcal{O}}},
\end{equation}
where $\braket{\braket{.}}$ stands for the ensemble average over stochastic trajectories.
Before leaving this section, we would like to emphasize that the beyond-MF corrections to the GPE in Eq.~(\ref{eq:mean-field equations}) require the effect of the $2^{nd}$- and $3^{rd}$-order normally- and anomalously-ordered correlations. These terms contribute to the MF as \emph{state-dependent} noises. In the truncated Wigner method, there are additional drift terms as well as Langevin forces that partially capture those quantum-field corrections. While the full dynamics of $W(\vec{\alpha};t)$ in Eq.~(\ref{eq:Fokker-Planck}) is equivalent to the master equation in Eq.~(\ref{eq:master equation}), the truncated Wigner (TW) method is an approximation which can only be applied to an initially positive WF and preserves this property. It can be interpreted as the semi-classical version of the Langevin equations of Eq.~(\ref{eq:Heisenberg-Langevin}). Thus, the TW method and its equivalent SDE in Eq.~(\ref{eq:SDE}) might not be able to reproduce the quantum dynamics fully. However, it goes beyond the MF-Bogoliubov treatment and can describe the generation of non-Gaussian and non-classical states~\cite{Corney2015}.
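For completeness, a schematic Euler-Maruyama integrator for Eq.~(\ref{eq:SDE}) is sketched below in Python; the drift and diffusion terms are passed in as placeholders, since their explicit form is given in the appendix, and the noise convention shown is an assumption of this illustration.
\begin{verbatim}
import numpy as np

def euler_maruyama(drift, diffusion, alpha0, dt, steps, rng):
    """One stochastic trajectory of d(alpha) = A dt + D dN (Eq. (10))."""
    alpha = np.array(alpha0, dtype=complex)
    traj = [alpha.copy()]
    for _ in range(steps):
        # complex Wiener increments with <dN dN*> = dt (assumed convention)
        dN = (rng.normal(size=alpha.size)
              + 1j * rng.normal(size=alpha.size)) * np.sqrt(dt / 2)
        alpha = alpha + drift(alpha) * dt + diffusion(alpha) @ dN
        traj.append(alpha.copy())
    return np.array(traj)

# Symmetrically-ordered moments follow from averaging many trajectories,
# e.g. <a_m^dag a_m>_sym ~ mean(|alpha_m|^2) over the ensemble (Eq. (11)).
\end{verbatim}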
\section{Results and Discussion}\label{sec:results}
Throughout this section we assume identical field decay rates for all cavity modes, i.e., $\gamma_m = \gamma_0$, and express all other rates normalized to this value. Similarly, time is expressed in units of $\gamma_0^{-1}$. A coherent drive as in Eq.~(\ref{eq:coherent drive}) excites the second mode, i.e. $\hat{a}_2$; hence, the $1^{st}$ and $3^{rd}$ modes are populated equally (more discussion can be found in Appendix~\ref{app:Covariance MF-Bog}). Thermal fluctuations due to the bath are assumed to be zero, i.e. $n_{th}=0$. Part of the full quantum mechanical calculations are done with the QuTiP open-source software~\cite{Johansson2012,Johansson2013}. The numerical convergence in each case has been tested by increasing the number of random initializations (MF), random trajectories (SDE), and the Fock-state truncation number (DM) to reach a relative error of $O(10^{-5})$ in the particle number.
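As a concrete illustration of how the benchmark DM solution of Eq.~(\ref{eq:master equation}) can be set up, a hedged QuTiP sketch is given below. The Hamiltonian follows our reading of Eqs.~(\ref{eq:3-mode Hamiltonian})-(\ref{eq:coherent drive}) in the frame rotating at the laser frequency; the detunings, pump strength, and Fock truncation are illustrative values only, not those used for the figures.
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

N_f = 6                                   # illustrative Fock truncation
a = [tensor(*[destroy(N_f) if k == m else qeye(N_f) for k in range(3)])
     for m in range(3)]

Delta = [-8.0, -3.0, 2.0]                 # harmonic spacing: Delta_1+Delta_3 = 2*Delta_2
V0, gamma0, Omega0 = 1.0, 1.0, 2.0        # arbitrary example values

H = sum(Delta[m] * a[m].dag() * a[m] for m in range(3))
H += 0.5 * V0 * sum(a[m].dag() * a[n].dag() * a[m] * a[n]
                    for m in range(3) for n in range(3))
H += V0 * (a[1].dag()**2 * a[0] * a[2] + a[0].dag() * a[2].dag() * a[1]**2)
H += Omega0 * (a[1] + a[1].dag())         # drive on the second mode

c_ops = [np.sqrt(2 * gamma0) * a[m] for m in range(3)]   # Eq. (4) loss terms
rho_ss = steadystate(H, c_ops)
print([expect(a[m].dag() * a[m], rho_ss) for m in range(3)])
\end{verbatim}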
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{PD-Harmonic.eps}
\caption{~\label{fig:phase transition} MF Dissipative phase diagram of a three-mode harmonic cavity, i.e., $2\omega_2 = \omega_1 + \omega_3$, as a function of (a) the interaction strength $V_0$ and (b) the laser detuning $\Delta_0$. In each panel the yellow (A), orange (B), and red (C) regions correspond to one, two (bi-stability), and three stable (tri-stability) fixed points for the pumped mode, respectively. In (a) the detuning is fixed at $\Delta_0 = -3$ and in (b) the interaction strength has the constant value $V_0 = 1$. The dotted vertical lines [labelled (I) and (II)] at $V_0=0.1$ and $\Delta_0=-3$ indicate the cuts through the phase diagram studied in subsequent figures.}
\end{figure}
In a driven-dissipative system, the interplay between the coherent excitation rate and its detuning, incoherent loss, and interaction leads to notable changes in system properties, typically known as a dissipative phase transition (DPT). In a multi-mode case such as here, we have an additional parameter $\delta_D = 2\omega_2 - (\omega_1 + \omega_3)$, which is the anharmonicity of the bare cavity. To distinguish between these two cases, we call the cavity \emph{harmonic} if $\delta_D = 0$ and \emph{anharmonic} otherwise. As will be discussed, $\delta_D$ is also an important parameter governing the DPT. Similar phase diagrams and multi-stability phenomena have been studied for exciton-polaritons in planar cavities where $\delta_D$ vanishes~\cite{Wouters2007B}. Moreover, in this case the frequencies of the generated pairs are set by the bare cavity modes and the interaction, self-consistently.
Figure~\ref{fig:phase transition}(a),(b) shows the phase diagram of a harmonic cavity as a function of the interaction strength $V_0$ and the laser detuning $\Delta_0$, respectively. The phase diagram closely resembles the DPT of a single-mode cavity depicted in Fig.~\ref{fig:pump-only PT}(a),(b) in Appendix~\ref{app:single-mode}. In the (A)-phase, i.e., the yellow region, the pumped mode has one stable fixed point. In the (B)-phase, i.e., the orange region, there are two distinct values for the pumped mode. Finally, in the (C)-phase, i.e., the red region which only appears in the multi-mode case, the system is within a tri-stable phase and the pumped mode has three stable MF fixed points.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{n-Harmonic.eps}
\caption{~\label{fig:3-mode population-harmonic} Population of the $1^{st}$ ($3^{rd}$) and the $2^{nd}$ mode in a harmonic cavity, i.e., $\delta_D = 0$, as a function of the pumping rate ($\Omega_0$) calculated from MF (black dots) and SDE (purple diamonds). Solid red lines in panels (c),(d) show the DM solutions for comparison. $V_0 = 0.1$ in panels (a),(b) and $V_0 = 1$ in (c),(d). In both cases $\Delta_0 = -3$.}
\end{figure}
In Fig.~\ref{fig:3-mode population-harmonic} we plot $\braket{\hat{n}_{1,2}}$ for $V_0 = 0.1,~1$ at $\Delta_0 = -3$ as a function of the pumping rate varied along the dotted lines (I),(II) in Fig.~\ref{fig:phase transition}(a),(b), respectively.
There, the black dots show the MF solutions determined from integrating Eq.~(\ref{eq:mean-field equations}) for many different random initial conditions and for a time long compared to all transient time scales. The purple line with diamonds shows the data calculated using the SDE method averaged over 2000 random trajectories, and the solid red line in panels (c),(d) depicts the results of the full density matrix calculations (DM) as a benchmark.
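A minimal sketch of this procedure, assuming a user-supplied function \texttt{mf\_rhs} implementing the right-hand side of Eq.~(\ref{eq:mean-field equations}) with the complex amplitudes stacked as real/imaginary parts, could look as follows.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def find_mf_branches(mf_rhs, n_modes=3, n_init=200, t_final=200.0, seed=1):
    """Integrate the MF equations from random initial conditions and collect
    the long-time amplitudes; mf_rhs(t, y) is assumed to implement the GPE
    right-hand side with Re/Im parts of the mode amplitudes stacked in y."""
    rng = np.random.default_rng(seed)
    endpoints = []
    for _ in range(n_init):
        y0 = rng.normal(scale=2.0, size=2 * n_modes)   # random start in phase space
        sol = solve_ivp(mf_rhs, (0.0, t_final), y0, rtol=1e-8, atol=1e-10)
        endpoints.append(sol.y[:, -1])
    # distinct rows (up to a rounding tolerance) give the stable MF branches
    return np.unique(np.round(np.array(endpoints), 3), axis=0)
\end{verbatim}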
It can be seen that the phase transitions are discontinuous, i.e. a \emph{first-order} PT. Moreover, for all modes the difference between stable MF branches decreases upon increasing the interaction from $V_0 = 0.1$ to $V_0 = 1$ in Fig.~\ref{fig:3-mode population-harmonic}(a,b) and (c,d), respectively. Aside from the finite region around the multi-stability, the results of MF, SDE, and DM agree quite well (note the similar tendency for the single-mode case in Fig.~\ref{fig:pump-only} of Appendix~\ref{app:single-mode}).
For the 1$^{st}$ and 3$^{rd}$ modes, on the other hand, both Fig.~\ref{fig:3-mode population-harmonic}(a) and (c) indicate that the finite MF tri-stable region (C in the DPT) is the only parameter range where these modes acquire a non-zero population.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{PT-anharmonic.eps}
\caption{~\label{fig:3-mode DPTT anharmonic} Number of photons in the $1^{st},3^{rd}$-modes of a three-mode anharmonic cavity ($2\omega_2 \ne \omega_1 + \omega_3$), as a function of the (a) interaction strength $V_0$ and (b) anharmonicity $\delta_D$, determined from MF. In (a) $\delta_D = 5$ and in (b) $V_0 = 1$, and the laser always pumps the $2^{nd}$-mode resonantly ($\Delta_0 = 0$). (A), (C) indicate two different phases of zero and non-zero population of the first mode. The dotted vertical lines [labelled (I) and (II)] at $V_0=0.1$ and $\delta_D=5$ indicate the cuts through the phase diagram studied in subsequent figures.}
\end{figure}
The situation is completely different in an \emph{anharmonic} cavity where $\delta_D \ne 0$. Figure~\ref{fig:3-mode DPTT anharmonic}(a),(b) shows the average number of photons in the unpumped modes $\braket{n_{1,3}}$ as a function of the interaction strength $V_0$, the pumping rate $\Omega_0$, and the anharmonicity parameter $\delta_D$. For better illustration, in Fig.~\ref{fig:3-mode population-anharmonic}(a,b) and (c,d) we plot the average number of photons in all cavity modes as a function of the pump rate at weak ($V_0 = 0.1$) and strong ($V_0 = 1$) interaction, respectively, when the pumping rate is continuously increased along the dotted lines (I) and (II) in Fig.~\ref{fig:3-mode DPTT anharmonic}(a),(b). Unlike the harmonic cavity case, here we only have two phases (A),(C), where the transition occurs continuously (but non-analytically), i.e., a \emph{second-order} PT, with a unique-valued order parameter in each phase.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig7.eps}
\caption{~\label{fig:3-mode population-anharmonic} Population of the $1^{st}$ ($3^{rd}$) and the $2^{nd}$ mode, in an anharmonic cavity with $\delta_D = 5$, as a function of the pumping rate ($\Omega_0$) calculated from MF (black dots) and SDE (purple diamonds). Solid red lines in panels (c),(d) show the DM solutions for comparison. $V_0 = 0.1$ in panels (a),(b) and $V_0 = 1$ in (c),(d). In both cases $\Delta_0 = 0$.}
\end{figure}
As elaborated in Appendix~\ref{app:Covariance MF-Bog} for the single-mode cavity, the interaction of the pumped mode ($2^{nd}$ mode here) with itself creates energetically symmetric sidebands. In a multi-mode case, the interplay between the intra- and inter-mode interactions leads to the excitation of other modes in both harmonic as well as anharmonic cavities. Similarly for both, MF predicts a threshold and a finite parameter range for non-zero occupations of the $1^{st},~3^{rd}$-modes.
While the lower threshold is set solely by the pumped mode when $V_0 n_2 \ge \gamma_2$, the upper threshold depends on the population of the other two modes as well as their relative energies. (The lowest and highest pumping rates are set by the constraints on $\Phi_0 , \Phi_p$, respectively, as detailed in Appendix~\ref{app:Covariance MF-Bog}.)
When quantum fluctuations are included, however, either via SDE or full density matrix calculations (DM), unique, continuous, and non-zero solutions for all three modes are predicted at all pumping rates. In both cavities and for the pumped mode, the MF, SDE, and DM results agree quite well in the (A)-phase. For the parametrically populated modes, however, the SDE and DM results are in good agreement over the whole range but are remarkably different from MF. However, the rising slope of the former analyses always coincides with the transition to the MF (C)-phase.
\subsection{Spontaneous Symmetry Breaking and Goldstone mode}~\label{sec:SSB}
In the absence of the coherent pump, the Liouvillian super-operator $\mathcal{L}$ of Eq.~(\ref{eq:master equation}) has a continuous global U(1)-symmetry, which is broken by the coherent drive of Eq.~(\ref{eq:coherent drive}). However, with the Hamiltonian of Eq.~(\ref{eq:3-mode Hamiltonian}) for the three-mode cavity, $\mathcal{L}$ still has a local U(1)-symmetry as it remains unchanged under the following transformations for any arbitrary phase $\Theta_0$~\cite{Wouters2007}
\begin{equation}~\label{eq:local U(1)-sym}
\hat{a}_1 \rightarrow \hat{a}_1 e^{+i\Theta_0} ~ , ~ \hat{a}_3 \rightarrow \hat{a}_3 e^{-i\Theta_0}.
\end{equation}
If the MF amplitudes $\alpha_{1,3} = 0$, then the steady state respects the Liouvillian's symmetry. However, for $\alpha_{1,3} \ne 0$, as occurs within the (C)-phase, the MF solutions are no longer U(1) symmetric. Hence, there is a \emph{spontaneous symmetry breaking} (SSB) accompanied by a DPT. However, it is evident that the set of all solutions is invariant under the aforementioned rotations. In other words, within the (C)-phase there is a continuum of MF fixed points.
Figure~\ref{fig:LC}(a),(b) shows the temporal behavior of the order parameters $\alpha_m$ within the MF (C)-phase of the harmonic and anharmonic cavities, respectively. As can be seen, while the pumped mode $m_2$ is time-invariant (green line), the parametrically populated modes $m_{1,3}$ (blue and red lines) show self-sustained oscillations with a random relative phase, reflecting the value the U(1) phase acquires in the SSB.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig8.eps}
\caption{~\label{fig:LC} Temporal behavior of the mean fields $\alpha_j(t)$ within the MF (C)-phase in a three-mode (a) harmonic cavity at $\Omega_0 = 1.85, \Delta_0 = -3$ and (b) anharmonic cavity at $\Omega_0 = 2, \delta_D = 5$ and $V_0 = 1$. In both panels the blue, green, and red lines correspond to the $1^{st}$, $2^{nd}$, and $3^{rd}$-mode, respectively. The time is in units of $\gamma_0^{-1}$ and $T_{LC}$ indicates the limit-cycle period.}
\end{figure}
In the laser rotated frame, the Liouvillian $\mathcal{L}$ is TTS, which indeed is the symmetry of the solutions within the (A),(B)-phase. Within the (C)-phase, however, the order parameter becomes time-periodic and thus breaks the time-translational symmetry. Therefore, in both of the harmonic and anharmonic cavities, the MF (C)-phase is accompanied by SSB of the local U(1) symmetry and the TTS. This oscillatory behavior, known as \emph{limit-cycle} (LC)-phase, is an apparent distinction of DPT from its equilibrium counterparts~\cite{Qian2012,Chan2015}.
From Fig.~\ref{fig:LC} the LC-period can be determined as $T_{LC} \approx 6.28$ and $T_{LC} \approx 0.83$, corresponding to $\omega_{LC} = 1 , ~ 7.5$ for the harmonic and anharmonic cavities, respectively. Note that these frequencies agree with theoretical predictions of $\Tilde{\Delta}_{1,3}$ in Appendix~\ref{app:Covariance MF-Bog}.
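For reference, these values follow from the standard period--frequency relation
\begin{equation*}
\omega_{LC} = \frac{2\pi}{T_{LC}} \approx \frac{2\pi}{6.28} \approx 1.0
\qquad \text{and} \qquad
\omega_{LC} \approx \frac{2\pi}{0.83} \approx 7.5 ,
\end{equation*}
for the harmonic and anharmonic cavities, respectively.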
The consequence of SSB of this continuous symmetry can be interpreted in terms of the gapless \emph{Goldstone} mode.
The eigenvalues $\{\lambda\}$ of the Bogoliubov matrix $M$ in Eq.(\ref{eq:linearized fluctuations EoM}) directly determine the excitation energies around a MF fixed point, with Re($\lambda$) being the excitation linewidth and Im($\lambda$) its frequency. It is straightforward to check that due to the relative-phase freedom of the unpumped modes, $M$ has a kernel along the following direction~\cite{Wouters2007} (more information in Appendix~\ref{app:Covariance MF-Bog})
\begin{equation}~\label{eq:Goldstone mode}
\ket{G} = [\alpha_1, 0, -\alpha_3, -\alpha_1^*, 0, \alpha_3^*]^T,
\end{equation}
where $T$ means the transpose.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{XPspec_MF.eps}
\caption{~\label{fig:3-mode spectra} Output $X,P$ spectra of the modes in the (a),(b) harmonic cavity at $\Delta_0 = -3 , \Omega_0 = 1.85$, and (c),(d) anharmonic cavity at $\Delta_0 = 0 , \Omega_0 = 2$, calculated from the MF-Bogoliubov. In each panel the solid blue, red, and green lines correspond to the spectrum of the pumped ($\ket{m_2}$), symmetric ($\ket{m_+}$) and antisymmetric ($\ket{m_-}$) modes, respectively. Due to its divergence, the momentum of the antisymmetric mode is scaled down in panels (b),(d).}
\end{figure}
$\lambda_G = 0$ implies that in the local oscillators frame, $\ket{G}$ is a mode at $\omega=0$ with zero linewidth, i.e., an undamped excitation. To investigate the implications of this mode on quantum correlations, we employ Eq.~(\ref{eq:linearized fluctuations EoM}) to calculate the $XP$-quadrature spectra of the cavity output.
Figure~\ref{fig:3-mode spectra} shows the quadrature correlations of the output $2^{nd}$-mode and $\ket{m_\pm} = m_1 \pm m_3$, i.e., the symmetric and antisymmetric superpositions of the two unpumped modes. Panels (a),(b) show the spectra of the harmonic cavity at $\Omega_0 = 1.85$, and panels (c),(d) show the same quantities for an anharmonic cavity at $\Omega_0 = 2$, which correspond to the point B within the MF LC-phase, and on the rising slope of the SDE/DM results in Fig.~\ref{fig:3-mode population-harmonic}(c) and Fig.~\ref{fig:3-mode population-anharmonic}(c).
Although the spectral features of the pumped and the symmetric mode depend on detailed cavity features, the antisymmetric mode quadratures in the harmonic and anharmonic cavities look alike (solid green lines in Fig.~\ref{fig:3-mode spectra}(c),(d)). While $S_{X_-}$ is unconditionally fully squeezed at the origin, the spectrum of its conjugate variable $S_{P_-}$ diverges. From Eq.~(\ref{eq:Goldstone mode}) it is clear that $S_{P_-}$ is indeed the spectrum of the gapless Goldstone mode. Since in the MF picture this mode encounters no restoring force, its fluctuations diverge. (The analytic form of the spectra and further details can be found in Appendix~\ref{app:Covariance MF-Bog}.)
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig10.eps}
\caption{~\label{fig:Wigner 3-mode Harmonic} Histograms of number state occupation probability $p_n$ and colormaps of the Wigner function of the (a)-(d) $1^{st}, 3^{rd}$-modes and (e)-(h) $2^{nd}$-mode, in a three-mode harmonic cavity when $\Delta_0 = -3 , V_0 = 1$ for different pumping rates $\Omega_0$ highlighted as (A,B,C,D) in Fig.~\ref{fig:3-mode population-harmonic}(c),(d). In each phase-space map the white dashed lines show the axes ($X=0,P=0$) in the $XP$-plane and black stars or circles correspond to the predicted MF.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig11.eps}
\caption{~\label{fig:Wigner 3-mode anHarmonic} Histograms of number state occupation probability $p_n$ and colormaps of the Wigner function of the (a)-(d) $1^{st}, 3^{rd}$-modes and (e)-(h) $2^{nd}$-mode, in a three-mode anharmonic cavity when $\Delta_0 = 0, \delta_D = 5$ and at $V_0 = 1$ for different pumping rates $\Omega_0$ highlighted as (A,B,C,D) in Fig.~\ref{fig:3-mode population-anharmonic}(c),(d). In each phase-space map the white dashed lines show the axes ($X=0,P=0$) in the $XP$-plane and black stars or circles correspond to the predicted MF.}
\end{figure}
To examine the robustness of the Goldstone mode and the consequent unconditional squeezing, we employ the SDE to study the beyond-MF behavior of the cavity state. Figure~\ref{fig:Wigner 3-mode Harmonic} shows the number state occupation probability ($p_n$) and the Wigner function distribution of the harmonic cavity at four different pumping rates $\Omega_0 = 1,~ 1.85,~ 3.5,~ 10$ corresponding to (A,B,C,D) points in Fig.~\ref{fig:3-mode population-harmonic} at $V_0 = 1$, respectively. Panels (a)-(d) show these quantities for the $1^{st}$-mode and panels (e)-(h) show the ones for the $2^{nd}$-mode. As can be seen in all panels (a)-(d), distributions of the $1^{st},~3^{rd}$-modes are azimuthally symmetric independent of the pumping rate, which is consistent with the local U(1) symmetry of these two modes and their phase freedom, i.e., $\braket{\hat{a}_{1,3}} = 0$.
Within the (A)-phase at low pumping rate and before the parametric threshold, MF predicts zero amplitude for the $1^{st},3^{rd}$ modes, while the $2^{nd}$ mode looks like a coherent state (Fig.~\ref{fig:Wigner 3-mode Harmonic}(a),(e)).
As the pumping rate increases (point B in Fig.~\ref{fig:3-mode population-harmonic} (c),(d)), the system enters the LC-phase in which mode 2 has three stable fixed points, as shown with three stars in Fig.~\ref{fig:Wigner 3-mode Harmonic}(f), and the two unpumped modes acquire a finite population. The black circle in Fig.~\ref{fig:Wigner 3-mode Harmonic}(b) shows the loci of MF fixed points.
For larger values of the pump, close to the upper threshold of the multi-stability region (point C in Fig.~\ref{fig:3-mode population-harmonic}(c),(d)), the system transitions to the uniform (A)-phase again, where the $2^{nd}$-mode attains a unique fixed point and the $1^{st},3^{rd}$-modes have zero MF. However, as can be seen in Fig.~\ref{fig:Wigner 3-mode Harmonic}(g), the cavity state is far from coherent due to the larger interaction at this photon number.
At even larger pumping rate shown in Fig.~\ref{fig:Wigner 3-mode Harmonic}(d),(h), corresponding to the point D in Fig.~\ref{fig:3-mode population-harmonic}(c),(d) (far within the (A)-phase), the $2^{nd}$ mode is a non-classical state whose phase-space distribution is reminiscent of the single-mode cavity at this regime (Fig.~\ref{fig:pump-only} of Appendix~\ref{app:single-mode}). Also it is worth mentioning that in spite of the similar symmetric distribution of the $1^{st},3^{rd}$ modes and their vanishing means, their variances clearly change as the system traverses through different phases.
For completeness, in Fig.~\ref{fig:Wigner 3-mode anHarmonic} we detail the state of the anharmonic cavity through its different phases at four pumping rates of $\Omega_0 =1,~ 2, ~ 3.5, ~ 10$ corresponding to (A,B,C,D) points in Fig.~\ref{fig:3-mode population-anharmonic}(c),(d). As can be seen the overall behavior of the cavity modes looks like that of the harmonic case, with the main distinction of always having one unique MF fixed point.
To study the robustness of the Goldstone mode in the presence of quantum fluctuations, from SDE analysis we calculate the correlation and spectrum of $\hat{P}_-$ as
\begin{align}\label{eq:minus-mode g1}
g^{(1)}(\tau) &= \lim_{t\to\infty}\braket{\hat{P}_-(t+\tau) \hat{P}_-(t)}, \\
S_{P_-}(\omega) &= \mathcal{F}\left(g^{(1)}(\tau) \right),
\end{align}
where $\hat{P}_- = i(\hat{a}_- - \hat{a}^\dagger_-)/\sqrt{2}$ is the momentum of $\ket{m_-}$-mode.
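Within the SDE approach these quantities are estimated from the trajectory ensemble; a minimal numpy sketch is given below, assuming the sampled $P_-$ values are stored in an array \texttt{P} of shape (\texttt{n\_traj}, \texttt{n\_t}) on a uniform time grid of step \texttt{dt}, recorded after the steady state has been reached.
\begin{verbatim}
import numpy as np

def pm_correlation_and_spectrum(P, dt, n_lags):
    """Estimate g1(tau) = <<P_-(t+tau) P_-(t)>> by an ensemble + time average
    over steady-state trajectories, and its spectrum via a discrete Fourier
    transform; P has shape (n_traj, n_t) on a uniform grid with step dt."""
    n_traj, n_t = P.shape
    g1 = np.array([np.mean(P[:, lag:] * P[:, :n_t - lag]) for lag in range(n_lags)])
    S = np.real(np.fft.fftshift(np.fft.fft(g1))) * dt
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_lags, d=dt))
    return g1, omega, S
\end{verbatim}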
The results are shown in Fig.~\ref{fig:goldstone mode spec} when the interaction $V_0$ is increased from 0.1 to 1 (brown to yellow lines). Panels (a),(b) are the spectra and correlations in the (A)-phase while (c),(d) are within the (C)-phase where LC is predicted by MF. For direct comparison with LC oscillations of Fig.~\ref{fig:LC}(b) and highlighting $\omega_{LC}$, the spectral densities in (a),(c) are shown in the laser ($\omega_L$) rather than the local frame ($\omega_{LO}$).
Defining a dimensionless parameter $N$, for which $V_0/N \rightarrow 0^+$ corresponds to the TD limit, the pumping rate is scaled by $\sqrt{N}$ so that $V \Omega^2$ is kept fixed.
As can be seen in Fig.~\ref{fig:goldstone mode spec}(a),(b), the observables are almost unchanged when the system is in the (A)-phase, where MF predicts a zero photon number in $\ket{m_-}$. From Fig.~\ref{fig:goldstone mode spec}(a) we can see that the linewidth of this mode is large and the spectral density is very small (note that the lines for $V_0 = 0.5, ~ 0.1$ are shifted upwards for clarity). Similarly, the temporal behavior in panel (b) shows a short correlation time.
On the contrary, when the system transitions to the MF LC-phase by virtue of increasing the pumping rate, the spectral densities shown in Fig.~\ref{fig:goldstone mode spec}(c) increase and an apparent resonance feature appears that becomes more prominent at weaker interaction, closer to the TD limit and hence the validity range of MF.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{minus-mode.eps}
\caption{~\label{fig:goldstone mode spec} SDE calculations of the (a),(c) spectral density in the laser frame and (b),(d) delayed temporal correlation of the $P$-quadrature of the $\ket{m_-}$ mode in an anharmonic cavity. The upper row shows the behavior in the (A)-phase and the lower row shows the one within the MF LC-phase. The interaction is changed from $V_0 = 0.1$ to $V_0 = 1.0$, yellow to red to brown, respectively. The dashed lines show the Lorentzian fits in (a),(c) and the exponential fits in (b),(d).}
\end{figure}
Similarly, the temporal correlations in panel (d) show prolonged coherence times that increase at weaker interaction. To quantify these features better we fit a Lorentzian lineshape of the following form to $S_{P_-}(\omega)$
\begin{equation}~\label{eq:lor. fit}
L(\omega) = \frac{a}{(\omega - \omega_{peak})^2 + \Gamma^2} + c
\end{equation}
The fits are shown with dashed lines in Fig.~\ref{fig:goldstone mode spec}(a),(c) and the center and linewidth fit parameters are presented in table~\ref{tab: Goldstone mode Lorentzian}.
Within the (A)-phase, $\omega_{peak}$ and $\Gamma$ change only slightly with the interaction $V_0$. Throughout the LC-phase, on the other hand, $\omega_{peak} \approx 7.5$, i.e., the LC oscillation frequency $\omega_{LC}$ in Fig.~\ref{fig:LC}(b). Moreover, starting from a narrow resonance ($\Gamma \approx 0.4$) at weak interaction (large $N$), the linewidth clearly increases ($\Gamma \approx 3.2$) upon increasing the interaction (small $N$). Similar values were obtained independently by fitting the correlation functions with exponentials, i.e., the dashed lines in Fig.~\ref{fig:goldstone mode spec}(b),(d).
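A minimal sketch of this fitting step is shown below with synthetic placeholder data standing in for the computed spectrum; in practice the $(\omega, S_{P_-})$ values come from the SDE analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, a, w_peak, Gamma, c):
    # L(w) = a / ((w - w_peak)^2 + Gamma^2) + c
    return a / ((w - w_peak) ** 2 + Gamma ** 2) + c

# synthetic placeholder data; in practice (omega, S) come from the SDE spectra
rng = np.random.default_rng(0)
omega = np.linspace(0.0, 15.0, 400)
S = lorentzian(omega, 2.0, 7.5, 0.4, 0.05) + 0.01 * rng.normal(size=omega.size)

popt, pcov = curve_fit(lorentzian, omega, S, p0=[1.0, 7.0, 1.0, 0.0])
a_fit, w_peak_fit, Gamma_fit, c_fit = popt   # here w_peak_fit ~ 7.5, Gamma_fit ~ 0.4
\end{verbatim}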
\begin{table}~\label{tab: Goldstone mode Lorentzian}
\begin{tabular}{ |c|c|c|c| }
\hline
& $V_0 = 0.1$ & $V_0 = 0.5$ & $V_0 = 1.0$ \\
\hline
$\omega_{peak}$ (A) & 8.7 & 8.5 & 9 \\
\hline
$\omega_{peak}$ (LC) & 7.5 & 7.7 & 7.9 \\
\hline
$\Gamma$ (A) & 5.6 & 5.4 & 6.5 \\
\hline
$\Gamma$ (LC) & 0.4 & 1.7 & 3.2 \\
\hline
\end{tabular}
\caption{The Lorentzian fit parameters for the spectral density of the $P_-$ quadrature within the MF (A)- and LC-phases, as in Fig.~\ref{fig:goldstone mode spec}(a),(c).}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Fig16.eps}
\caption{~\label{fig:goldstone mode decay} The linewidth of the $P_-$ quadrature, i.e., the Goldstone mode, within the MF LC-phase as a function of the dimensionless parameter $N$. The red squares are the SDE calculation results, while the solid blue line is a power-law fit to the data. The solid red line shows the number of particles in this mode (right axis). The inset colormaps show the TW distribution of the $\ket{m_-}$-mode at a couple of interaction strengths.}
\end{figure}
As a final remark, we study the behavior of the $P_-$ quadrature linewidth over the whole quantum to TD range, corresponding to small and large $N$, respectively. The results are depicted in Fig.~\ref{fig:goldstone mode decay} with red squares. The solid red line is the number of particles in this mode (right y-axis), and the solid blue line is a power-law fit to the data, indicating that the linewidth narrowing scales as $N^{-0.9}$. In other words, while the gapless Goldstone mode picture at the TD limit (kernel of the MF-Bogoliubov matrix) corroborates well with a small $\Gamma \approx 0$ of the $P_-$ quadrature, approaching the quantum limit the decay rate notably increases due to phase diffusion.
It is worth comparing this tendency with the $N^{-1}$ behavior of the Schawlow--Townes laser linewidth scaling~\cite{Haken1984}.
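A minimal sketch of how such a scaling exponent can be extracted is shown below, with synthetic placeholder data standing in for the SDE linewidths.
\begin{verbatim}
import numpy as np

# placeholder data standing in for the SDE results: N values and fitted Gamma(N)
N_vals = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
Gamma_vals = 3.0 * N_vals ** -0.9          # synthetic, for illustration only

# power law Gamma ~ N^{-b}: a straight-line fit in log-log coordinates
slope, intercept = np.polyfit(np.log(N_vals), np.log(Gamma_vals), 1)
print("fitted exponent:", slope)            # close to -0.9 for the data above
\end{verbatim}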
To further investigate the $\ket{m_-}$-mode noise properties, we show the Wigner function distribution of this mode at a few different interaction strengths. As can be seen, at larger $N$ (point C), hence weaker interaction, the phase-space distribution resembles that of a number-squeezed state. However, upon increasing the interaction (points A,B) the squeezing decreases.
This clearly confirms the phase diffusion effect in reducing the coherence time of the generated pairs. Besides, this effect becomes more dominant deep into the quantum range where the fluctuations should not be ignored.
\section{Conclusion}
Exploring dissipative phase transitions is one of the important topics of open quantum systems. There, the interplay between dissipation, drive, and interaction provides a rich testbed to investigate the dynamics of many-body systems far from equilibrium. In this article, we theoretically investigated the first- and second-order quantum dissipative phase transitions in a three-mode cavity with intra- and inter-modal two-body interactions as a prototypical model. We showed the emergence of a MF limit-cycle phase where the local U(1) symmetry and the TTS of the Liouvillian are spontaneously broken. We explained the connection between this phase and the Goldstone mode well-studied in the TD limit. By employing the Wigner function formalism, hence properly including the quantum noise, we showed the breakdown of MF predictions within the quantum regime. Within this range, fluctuations notably limit the coherence time of the Goldstone mode due to phase diffusion.
Concerning experimental realizations, the model and the results are applicable to a wide variety of driven-dissipative interacting bosonic platforms, including circuit-QED, semiconductor excitons, and multi-mode cavities with cold atoms~\cite{Jia2018, Vaidya2018}, where the figure of merit $V_0/\gamma$ can be tuned properly. It is also interesting to explore the feasibility of using such platforms in creating non-Gaussian states as an instrumental ingredient for quantum information protocols based on continuous-variable entanglement and photonic quantum logic gates~\cite{Braunstein2005,Santori2014,Liu2017,Zhang2017}.
\section*{acknowledgement}
The authors thank Wolfgang Schleich, Hans-Peter B\"uchler, Jan Kumlin, and Jens Hertkorn for insightful discussions. The invaluable IT support from Daniel Weller is greatly acknowledged. H. A. acknowledges financial support from the IQST Young Researchers grant and the Eliteprogram award of Baden-W\"urttemberg Stiftung. I. C. acknowledges financial support from the European Union FET-Open grant ``MIR-BOSE'' (n. 737017), from the H2020-FETFLAG-2018-2020 project ``PhoQuS'' (n. 820392), and from the Provincia Autonoma di Trento.
\section{Introduction}
The past two decades have seen numerous advances to the approximability of the maximum disjoint paths problem ({\sc edp}) since the seminal paper \cite{GargVY97}.
An instance of \textsc{edp} consists of a (directed or undirected) ``supply'' graph $G=(V,E)$ and a collection of $k$ {\em requests} (aka demands). Each request consists of a pair of nodes $s_i,t_i \in V$. These
are sometimes viewed as a {\em demand graph} $H=(V(G),\{s_it_i: i \in [k]\})$. A subset $S$ of the requests is called {\em routable} if there exist edge-disjoint paths $\{P_i: i \in S\}$ such that $P_i$ has endpoints $s_i,t_i$ for each $i$. We may also be given a profit $w_i$ associated with each request and the goal
is to find a routable subset $S$ which maximizes $w(S)=\sum_{i \in S} w_i$. The {\em cardinality version} is where
we have unit weights $w_i \equiv 1$.
For directed graphs it is known \cite{guruswami2003near} that there is no $O(n^{0.5-\epsilon})$
approximation for any $\epsilon >0$, under the assumption $P \neq NP$. Subsequently, research shifted to undirected graphs
and two relaxed models. First, in the {\em all-or-nothing flow model} ({\sc anf}) the notion of routability is relaxed. A subset $S$ is called routable if there is a feasible (fractional) multiflow which satisfies each request in $S$. In \cite{Chekuri04a} a polylogarithmic approximation is given for {\sc anf}. Second, in the {\em congestion} model \cite{KleinbergT98} one is allowed to increase the capacity of each edge in $G$ by some constant factor.
Two streams of results ensued. For general graphs, a polylogarithmic approximation is ultimately provided \cite{chuzhoy2012polylogarithimic,ChuzhoyL12,chekuri2013poly} with edge congestion $2$. For planar graphs, a constant factor approximation is given \cite{seguin2020maximum,CKS-planar-constant} with edge congestion $2$. There is also an $f(g)$-factor approximation for bounded genus $g$ graphs with congestion 3.
As far as we know, the only congestion $1$ results known for either maximum {\sc anf} or {\sc edp} are as follows; all of these apply only to the cardinality version.
In \cite{kawarabayashi2018all}, a constant factor approximation is given for {\sc anf} in planar graphs and
for treewidth $k$ graphs there is an $f(k)$-approximation for {\sc edp} \cite{chekuri2013maximum}.
More recent results include a constant-factor approximation in the {\em fully planar} case where $G+H$ is planar \cite{huang2020approximation,garg2020integer}.
In the weighted regime, there is
a factor $4$ approximation for
{\sc edp} in capacitated trees \cite{chekuri2007multicommodity}. We remark that this problem for unit capacity ``stars'' already generalizes the maximum weight matching problem in general graphs. Moreover, inapproximability bounds for {\sc edp} in planar graphs are almost polynomial \cite{chuzhoy2017new}. This lends interest to how far one can push beyond trees. Our main contribution to the theory of maximum throughput flows is the following result which is the first generalization of the (weighted) {\sc edp} result for trees
\cite{chekuri2007multicommodity},
modulo a larger implicit constant of $224$.
\begin{restatable}{theorem}{outerplanarWEDPapprox}
\label{thm:edp}
There is a $224$-approximation algorithm for
the maximum weight {\sc anf} and {\sc edp} problems for capacitated
outerplanar graphs.
\end{restatable}
It is natural to try to prove this by reducing the problem in outerplanar graphs to trees and then use \cite{chekuri2007multicommodity}.
A promising approach is to use results of
\cite{gupta2004cuts} -- an $O(1)$ distance tree embedding for outerplanar graphs -- and a {\em transfer theorem} \cite{andersen2009interchanging,Racke08} which proves a general equivalence between distance and capacity embeddings.
Combined, these results imply that there is a probabilistic embedding into trees which approximates cut capacity in outerplanar graphs with constant congestion.
One could then try to mimic the success of using low-distortion (distance) tree embeddings to approximate minimum cost network design problems. There is an issue with this approach however. Suppose we have a distribution on trees $T_i$ which approximates cut capacity in expectation. We then apply a known {\sc edp} algorithm which outputs a subset of requests $S_i$ which are routable in each $T_i$. While the tree embedding guarantees the convex combination of $S_i$'s satisfies the cut condition in $G$, it may be that no single $S_i$ obeys the cut condition, even approximately. This is a problem even for {\sc anf}. In fact, this seems to be a problem even when the trees are either dominating or dominated by $G$.
\iffalse
SKETCH WHY: If the $T_i$'s are all dominating $G$ then the issue is already
implied in what I typed. Some of the $S_i$s may massively violate cut capacity even though in expectation they are fine. If $T_i$'s are dominated then this problem is solved. But then while in expectation the $T_i$'s cover at least some constant fraction of $G$'s capacity. There is no guarantee that the $S_i$'s will as well. So I dont see that we can guarantee that one of the $S_i$'s will give at least constant times the OPT in $G$.
\fi
We resolve this by computing a {\bf single} tree which approximates the cuts in $G$ -- see Theorem~\ref{thm:tree}. Our algorithmic proof is heavily inspired by work of Gupta \cite{gupta2001steiner} which gives a method for eliminating Steiner nodes in probabilistic (distance) tree embeddings for general graphs.
It turns out that having a single-tree is not enough for us and we need additional technical properties to apply the algorithm from \cite{chekuri2007multicommodity}. First, our single tree $T$ should have integer capacities and be non-expansive, i.e., $\hat{u}(\delta_T(S)) \leq u(\delta_G(S))$ (where $\hat{u}/u$ are the edge capacities in $T/G$ and $\delta$ is used to denote the edges in the cut induced by $S$).
To see why it is useful that $T$ is an under-estimator of $G$'s cut capacity, consider the classical grid example of \cite{GargVY97}. They give an instance with a set of $\sqrt{n}$ requests which satisfy the cut condition in $2 \cdot G$, but for which one can only route a single request in the capacity of $G$.
If our tree is an under-estimator, then we can ultimately obtain a ``large'' weight subset of requests satisfying the cut condition in $G$ itself. However, even this is not generally sufficient for (integral) routability. For a multiflow instance $G/H$ one normally also requires that $G+H$ is Eulerian,
even for easy instances such as when $G$ is a $4$-cycle. The final ingredient we use is that our single tree $T$ is actually a {\bf subtree} of $G$ which
allows us to invoke the following result -- see Section~\ref{sec:required}.
\begin{restatable}{theorem}{OProute}
\label{thm:OP}
Let $G$ be an outerplanar graph with integer edge capacities $u(e)$. Let $H$ denote a
demand graph such that $G + H = (V(G),E(G) \cup E(H))$ is outerplanar. If $G,H$ satisfy the cut
condition, then $H$ is routable in $G$.
\end{restatable}
\noindent
The key point here is that we can avoid the usual parity condition needed, such as in \cite{Okamura81,seymour1981matroids,frank1985edge}.
We are not presently aware of the above result's existence in the literature.
\subsection{A Single-Subtree Cut Sparsifier and Related Results}
Our main cut approximation theorem is the following which may be of independent interest.
\begin{restatable}{theorem}{integerTree}
\label{thm:tree}
For any connected outerplanar graph $G=(V,E)$ with integer edge capacities $u(e) > 0$, there is a subtree $T$ of $G$ with integer edge weights $\hat{u}(e) \geq 0$ such that
\[
\frac{1}{14} u(\delta_G(X)) \leq \hat{u}(\delta_{T}(X)) \leq u(\delta_G(X)) \mbox{ for each proper subset $X \subseteq V$}
\]
\end{restatable}
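Although not needed for the proof, the guarantee can be checked by brute force on small instances; the following Python sketch enumerates all proper cuts (exponential in $|V|$, so intended only for small examples), with the capacities of $G$ and the weights of the candidate subtree supplied as dictionaries mapping edge pairs to values.
\begin{verbatim}
from itertools import combinations

def cut_capacity(X, edges):
    """Capacity of delta(X): edges with exactly one endpoint in X.
    `edges` maps (u, v) tuples to capacities/weights."""
    X = set(X)
    return sum(w for (u, v), w in edges.items() if (u in X) != (v in X))

def check_cut_approximator(nodes, u_G, u_T, lower=1 / 14, upper=1.0):
    """Verify lower*u(delta_G(X)) <= u_T(delta_T(X)) <= upper*u(delta_G(X))
    for every proper nonempty X (exponential in |V|; small examples only)."""
    nodes = list(nodes)
    for r in range(1, len(nodes)):
        for X in combinations(nodes, r):
            g, t = cut_capacity(X, u_G), cut_capacity(X, u_T)
            if not (lower * g <= t <= upper * g):
                return False, set(X)
    return True, None

# example: a 4-cycle with unit capacities and a path subtree with unit weights
u_G = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (4, 1): 1}
u_T = {(1, 2): 1, (2, 3): 1, (3, 4): 1}
print(check_cut_approximator([1, 2, 3, 4], u_G, u_T))
\end{verbatim}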
We discuss some connections of this result to prior work on sparsifiers and metric embeddings.
Celebrated work of R\"acke \cite{racke02} shows the existence of a single capacitated tree $T$ (not a subtree) which behaves as a flow sparsifier for a given graph $G$. In particular,
routability of demands on $T$ implies fractional routability in $G$ with edge congestion $polylog(n)$; this bound was further improved to $O(\log^2 n \log\log n)$ \cite{harrelson2003polynomial}. Such single-tree results were also instrumental in an application to maximum throughput flows: a polylogarithmic approximation for the maximum all-or-nothing flow problem in general graphs \cite{chekuri2013all}. Even more directly related to Theorem~\ref{thm:tree} is work on cut sparsifiers; in \cite{racke2014improved} it is shown that there is a single tree (again, not a subtree) which approximates cut capacity in a general graph $G$ within a factor of $O(\log^{1.5} n \log\log n)$. As far as we know, our result is the only global constant-factor single-tree cut approximator for a family of graphs.
R\"acke improved the bound for flow sparsification to an optimal congestion of $O(\log n)$ \cite{Racke08}. Rather than a single tree, this work requires a convex combination of (general) trees to simulate the capacity in $G$. His work also revealed a beautiful equivalence between the existence of good (low-congestion) distributions over trees for capacities, and
the existence of good (low-distortion) distributions over trees for distances \cite{andersen2009interchanging}.
This {\em transfer theorem} states very roughly that for a graph $G$ the following are equivalent for a given $\rho \geq 1$. (1) For any edge lengths $\ell(e)>0$, there is a (distance) embedding of $G$ into a distribution of trees which has stretch at most $\rho$. (2) For any edge capacities $u(e)>0$, there is a (capacity) embedding of $G$ into a distribution of trees which has congestion at most $\rho$. This work has been applied in other related contexts such as flow sparsifiers for proper subsets of terminals \cite{englert2014vertex}.
The transfer theorem uses a very general setting where there is a collection of valid {\em maps}. A map $M$ sends an edge of $G$ to an abstract ``path'' $M(e) \subseteq E(G)$. The maps may be refined for the application of interest. In the so-called {\em spanning tree setting}, each $M$ is associated with a subtree $T_M$ of $G$ (the setting most relevant to Theorem~\ref{thm:tree}). $M(e)$ is then the unique path which joins the endpoints of $e$ in $T_M$. For an edge $e$, its {\em stretch} under $M$ is $(\sum_{e' \in M(e)} \ell(e'))/\ell(e)$.
In the context of distance tree embeddings this model has been studied in \cite{alon1995graph,AbrahamBN08,elkin2008lower}.
In capacity settings, the {\em congestion} of an edge under $M$ is $(\sum_{e': e \in M(e)} c(e'))/c(e)$. One can view this as simulating the capacity of $G$ using the tree's edges with bounded congestion. The following result shows that we cannot guarantee a single subtree with $O(1)$ congestion even for outerplanar graphs; this example was found independently by Anastasios Sidiropoulos \cite{tasos}.
\begin{theorem}
\label{thm:lowerbound}
There is an infinite family $\mathcal{O}$ of outerplanar graphs
such that for every $G \in \mathcal{O}$ and every spanning tree $T$ of $G$:
\[
\max_{X} \frac{u(\delta_G(X))}{u(\delta_T(X))} = \Omega(\log|V(G)|),
\]
where the max is taken over fundamental cuts of $T$.
\end{theorem}
This suggests that the single-subtree result Theorem~\ref{thm:tree} is a bit lucky
and critically requires the use of tree capacities different from $u$.
Of course a single tree is sometimes
unnecessarily restrictive. For instance, outerplanar graphs also have an $O(1)$-congestion embedding using a distribution of subtrees by the transfer theorem (although we are not aware of one explicitly given in the literature). This follows implicitly due to existence of an $O(1)$-stretch embedding into subtrees \cite{gupta2004cuts}.
Finally we remark that despite the connections between distance and capacity tree embeddings, Theorem~\ref{thm:tree} stands in contrast to the situation for distance embeddings. Every embedding of the $n$ point cycle into a (single) subtree suffers distortion $\Omega(n)$, and indeed this also holds for embedding into an arbitrary (using Steiner nodes etc.) tree \cite{rabinovich1998lower}.
\iffalse
Subtrees such as in Theorem~\ref{thm:tree} seem to hit a limit with outerplanar graphs. Such trees for
for series parallel graphs would an imply a distribution over dominating arbitrary trees (by scaling). But then via the transfer theorem this contradicts a result in \cite{gupta2004cuts} which shows that dominating tree embeddings of
outerplanar graph distances may suffer logarithmic distortion.
BUG. Just like our result the trees may be O(1)-cut approximators but NOT have O(1)-congestion as defined by AF above.
\fi
\iffalse
BRUCE NOTES
1. Embedding into L1 is the same as embedding into lines (which are trees) but one is allowed to shrink some distances.
2. Series Parallel can be embeded into L1 (Lee and others)
3. There is no embedding of series parallel into dominating trees however (GNRS)
\fi
\iffalse
OLD COMMENTS/DISCUSSION
We first discuss distance-preserving embeddings and return later to capacity-preserving embeddings. In the distance setting, the goal is to replace a graph $G$ with edge distances $d(e) \geq 0$ by
a tree $T$ with edge weights $\hat{d}(e)$ so that the shortest-path distances in $G,d$ are quantitatively similar to those in $T,\hat{d}$. The high level motivation is to attack problems in $G$ using simpler algorithms for trees.
We also use the weights $d/\hat{d}$ to denote the induced shortest-path metrics in $G,T$.
For instance, for all $i,j \in V(G)$ we define $d(ij)=\min_{P \in \mathcal{P}_{ij}} d(P)$ where $\mathcal{P}_{ij}$ is the family of simple $ij$ paths in $G$ and $d(P)=\sum_{e \in P} d(e)$. Similarly we use the notation $\hat{d}(ij)$.
One may view $(G,d) \rightarrow (T,\hat{d})$ as a mapping of the shortest path semi-metric space in $G$ to one in $T$.
Depending on the scenario, additional restrictions on the mapping are considered in the literature.
For instance, one often requires non-contractive mappings, that is, the metric in $T$ {\em dominates} $G$: $\hat{d}(ij) \geq d(ij)$ for all $i,j \in V(G)$.
The {\em distortion} (for $i,j$) is then defined as
$\Delta(ij) := \frac{\hat{d}(ij)}{d(ij)}$ and the distortion of the map is $\max_{i,j \in V(G)} \Delta(ij)$.
One easily checks that
when $G$ is a unit-weight cycle, the distortion of any mapping into a subtree of $G$ is $n-1$. In fact, embedding into any $n$-point tree metric results in a distortion
of $\Omega(n)$ \cite{rabinovich1998lower}; this is true even if additional ``Steiner nodes'' are allowed \cite{gupta2001steiner}. Hence finding a constant-distortion single-tree distance approximator in a graph $G$ is not possible even for the most trivial ``non-tree graph''.
This inspires the important concept of probabilistic tree embeddings. That is,
a distribution $P$ over some family $\mathcal{T}$ of trees.
The goal is then to have a small (maximum) distortion in expectation.
A breakthrough in \cite{bartal1996probabilistic} showed that every graph has a probabilistic embedding into dominating tree metrics with polylogarithmic expected distortion which is improved to an optimal $O(\log n)$ distortion in \cite{FRT04}.
There are some subtleties in this line of research as to which maps are deemed {\em admissible}. For instance, do the maps have to be non-contractive or non-expansive? are the trees $T \in \mathcal{T}$ required to be subtrees of $G$?, e.g. \cite{alon1995graph}. One always assumes that $V(G) \subseteq V(T)$ but is $T$ allowed to have Steiner nodes? Also in some settings the weights on edges of $T$ must be the same as for $G$:
$\hat{d}(e) = d(e)$. We call this the {\em exact weight regime}.
The $O(\log n)$ expected distortion result \cite{FRT04} uses non-contractive maps and arbitrary trees (with Steiner nodes, although these can be eliminated with constant factor loss \cite{gupta2001steiner}). In the subtree-with-exact-weights regime, the current best distortion is $\tilde{O}(\log n)$ \cite{AbrahamBN08}.
AF also demonstrate that distance embeddings of a planar graph are closely related to capacity embeddings in its planar dual.
A similar relationship was described earlier by Emek \cite{emek2009k}.
\fi
\iffalse
\textcolor{blue}{
I'm a bit confused about the next two paragraphs.
A single tree capacity embedding doesn't exist in the sense of A-F, but their capacity embeddings don't represent all of what we call cut-approximators.
Also, I added `probabilistic' to the O(1)-distortion embedding.
}
\fi
\iffalse
Since there is an $O(1)$-distortion probabilistic tree embedding for outerplanar graphs \cite{gupta2004cuts}, the preceding results imply that there is an $O(1)$-congestion approximator of cuts via a distribution of trees.
The duality/polarity arguments of \cite{andersen2009interchanging} between
distance and capacity embeddings are implicit however. There is no obvious way to convert an $O(1)$-distortion distance embedding into an $O(1)$-congestion capacity embedding. In fact, to the best of our knowledge, no such distribution has been explicitly given for capacity mappings in outerplanar graphs. Moreover,
we know that single-tree embeddings do not even exist for distances in outerplanar graphs (or even cycles) so one may be tempted to believe that none exists for capacities either. Theorem~\ref{thm:tree} shows otherwise by producing an explicit single-tree cut approximator for any outerplanar graph.
\fi
\section{Single spanning tree cut approximator in Outerplanar Graphs}
In this section we first show the existence of a single-tree
which is an $O(1)$ cut approximator for an outerplanar graph $G$.
Subsequently we show that there is such a tree with two additional properties. First, its capacity on every cut is at most the capacity in $G$, and second, all of its weights are integral. These additional properties (integrality and conservativeness) are needed in our application to {\sc edp}. The formal statement we prove is as follows.
\integerTree*
In Section~\ref{sec:flowdist}, we show how to view capacity approximators in $G$ as (constrained) distance tree approximators in the planar dual graph. From then on, we look for distance approximators in the dual which correspond to trees in $G$. In Section~\ref{sec:non-conservative} we prove there exists a single-subtree cut approximator. In Appendix~\ref{sec:extend} we show how to make this conservative while maintaining integrality of the capacities.
In Section~\ref{sec:lb} we show that we cannot achieve Theorem~\ref{thm:tree} in the exact weight model.
\subsection{Converting flow-sparsifiers in outerplanar graphs to distance-sparsifiers in trees}
\label{sec:flowdist}
Let $G = (V, E)$ be an outerplanar graph with capacities $u:E\to\mathbb{R}^+$.
Without loss of generality, we can assume that $G$ is 2-node connected,
so the boundary of the outer face of $G$ is a cycle that
contains each node exactly once. Let $G^*$ be the dual of $G$; we assign weights
to the dual edges in $G^*$ equal to the capacities on the corresponding edges in $G$.
Let $G_z$ be the graph obtained by adding an apex node $z$ to $G$ which is connected
to each node of $G$, that is $V(G_z)=V\cup\{z\}$ and
$E(G_z)=E\cup\{(z,v):v\in V\}$. We may embed $z$ into the outer face of $G$, so $G_z$
is planar. Let $G_z^*$ denote the planar dual of $G_z$.
\begin{figure}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1,scale=0.6]
\draw (101,132.3) .. controls (101,77.46) and (209.18,33) .. (342.63,33) .. controls (476.08,33) and (584.26,77.46) .. (584.26,132.3) .. controls (584.26,187.14) and (476.08,231.6) .. (342.63,231.6) .. controls (209.18,231.6) and (101,187.14) .. (101,132.3) -- cycle ;
\draw (187,56.6) .. controls (224.26,108.6) and (226.26,147.6) .. (220.26,216.6) ;
\draw (187,56.6) .. controls (259.26,85.6) and (482.26,150.6) .. (559.26,176.6) ;
\draw (286,226.6) .. controls (305.26,175.6) and (409.26,159.6) .. (453.26,219.6) ;
\draw (529.26,67.6) .. controls (521.26,120.6) and (523.26,137.6) .. (559.26,176.6) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (187,56.6) .. controls (-83.07,31.81) and (11.26,382.98) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (220.26,216.6) .. controls (216.93,273.29) and (281.26,306.07) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (286,226.6) .. controls (287.33,238.28) and (290.75,252.71) .. (295.23,267.2) .. controls (300.61,284.59) and (307.54,302.06) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (453.26,219.6) .. controls (413.93,252.29) and (362.93,289.29) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (314.26,314.98) .. controls (469.5,317.67) and (564.5,230.67) .. (559.26,176.6) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (529.26,67.6) .. controls (714.5,40.81) and (676.5,405.67) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (310.51,314.98) .. controls (310.51,312.91) and (312.19,311.23) .. (314.26,311.23) .. controls (316.34,311.23) and (318.01,312.91) .. (318.01,314.98) .. controls (318.01,317.05) and (316.34,318.73) .. (314.26,318.73) .. controls (312.19,318.73) and (310.51,317.05) .. (310.51,314.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (274.51,139.98) .. controls (274.51,137.91) and (276.19,136.23) .. (278.26,136.23) .. controls (280.34,136.23) and (282.01,137.91) .. (282.01,139.98) .. controls (282.01,142.05) and (280.34,143.73) .. (278.26,143.73) .. controls (276.19,143.73) and (274.51,142.05) .. (274.51,139.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (170.51,138.98) .. controls (170.51,136.91) and (172.19,135.23) .. (174.26,135.23) .. controls (176.34,135.23) and (178.01,136.91) .. (178.01,138.98) .. controls (178.01,141.05) and (176.34,142.73) .. (174.26,142.73) .. controls (172.19,142.73) and (170.51,141.05) .. (170.51,138.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (409.51,83.98) .. controls (409.51,81.91) and (411.19,80.23) .. (413.26,80.23) .. controls (415.34,80.23) and (417.01,81.91) .. (417.01,83.98) .. controls (417.01,86.05) and (415.34,87.73) .. (413.26,87.73) .. controls (411.19,87.73) and (409.51,86.05) .. (409.51,83.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (551.51,124.98) .. controls (551.51,122.91) and (553.19,121.23) .. (555.26,121.23) .. controls (557.34,121.23) and (559.01,122.91) .. (559.01,124.98) .. controls (559.01,127.05) and (557.34,128.73) .. (555.26,128.73) .. controls (553.19,128.73) and (551.51,127.05) .. (551.51,124.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (363.51,206.98) .. controls (363.51,204.91) and (365.19,203.23) .. (367.26,203.23) .. controls (369.34,203.23) and (371.01,204.91) .. (371.01,206.98) .. controls (371.01,209.05) and (369.34,210.73) .. (367.26,210.73) .. controls (365.19,210.73) and (363.51,209.05) .. (363.51,206.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (151.51,242.98) .. controls (151.51,240.91) and (153.19,239.23) .. (155.26,239.23) .. controls (157.34,239.23) and (159.01,240.91) .. (159.01,242.98) .. controls (159.01,245.05) and (157.34,246.73) .. (155.26,246.73) .. controls (153.19,246.73) and (151.51,245.05) .. (151.51,242.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (256.51,254.98) .. controls (256.51,252.91) and (258.19,251.23) .. (260.26,251.23) .. controls (262.34,251.23) and (264.01,252.91) .. (264.01,254.98) .. controls (264.01,257.05) and (262.34,258.73) .. (260.26,258.73) .. controls (258.19,258.73) and (256.51,257.05) .. (256.51,254.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (331.51,256.98) .. controls (331.51,254.91) and (333.19,253.23) .. (335.26,253.23) .. controls (337.34,253.23) and (339.01,254.91) .. (339.01,256.98) .. controls (339.01,259.05) and (337.34,260.73) .. (335.26,260.73) .. controls (333.19,260.73) and (331.51,259.05) .. (331.51,256.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (474.51,237.98) .. controls (474.51,235.91) and (476.19,234.23) .. (478.26,234.23) .. controls (480.34,234.23) and (482.01,235.91) .. (482.01,237.98) .. controls (482.01,240.05) and (480.34,241.73) .. (478.26,241.73) .. controls (476.19,241.73) and (474.51,240.05) .. (474.51,237.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (593.51,186.98) .. controls (593.51,184.91) and (595.19,183.23) .. (597.26,183.23) .. controls (599.34,183.23) and (601.01,184.91) .. (601.01,186.98) .. controls (601.01,189.05) and (599.34,190.73) .. (597.26,190.73) .. controls (595.19,190.73) and (593.51,189.05) .. (593.51,186.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (473.51,15.98) .. controls (473.51,13.91) and (475.19,12.23) .. (477.26,12.23) .. controls (479.34,12.23) and (481.01,13.91) .. (481.01,15.98) .. controls (481.01,18.05) and (479.34,19.73) .. (477.26,19.73) .. controls (475.19,19.73) and (473.51,18.05) .. (473.51,15.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (174.26,138.98) -- (278.26,139.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (278.26,139.98) -- (413.26,83.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (278.26,139.98) -- (367.26,206.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (260.26,254.98) -- (278.26,139.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (413.26,83.98) -- (477.26,15.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (367.26,206.98) -- (335.26,256.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (155.26,242.98) -- (174.26,138.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (413.26,83.98) -- (555.26,124.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (555.26,124.98) -- (597.26,186.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (81.51,138.98) -- (174.26,138.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (81.51,138.98) .. controls (81.51,136.91) and (83.19,135.23) .. (85.26,135.23) .. controls (87.34,135.23) and (89.01,136.91) .. (89.01,138.98) .. controls (89.01,141.05) and (87.34,142.73) .. (85.26,142.73) .. controls (83.19,142.73) and (81.51,141.05) .. (81.51,138.98) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (113.5,165.67) .. controls (76.93,226.29) and (106.5,293.95) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (278.26,139.98) .. controls (349.52,143.3) and (520.52,174.3) .. (478.26,237.98) ;
\draw (305.51,321.38) node [anchor=north west][inner sep=0.75pt] [font=\normalsize] {$z$};
\draw (262,46.39) node [anchor=north west][inner sep=0.75pt] [font=\normalsize] {$G$};
\draw (408,94.39) node [anchor=north west][inner sep=0.75pt] [font=\normalsize,color={rgb, 255:red, 74; green, 144; blue, 226 } ,opacity=1 ] {$T^{*}$};
\draw (63,293.6) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,opacity=1 ] {$\delta ( z)$};
\end{tikzpicture}
\caption{
The solid edges form the outerplanar graph $G$,
and the dotted edges are the edges incident to the apex node $z$ in $G_z$.
The dashed edges form the dual tree $T^*$.
}
\label{fig:op-dual}
\end{figure}
Note that $\delta(z)=\{(z,v):v\in V\}$ are the edges of a spanning tree of $G_z$, so
$E(G_z)^*\setminus\delta(z)^*$ are the edges of a spanning tree $T^*$ of $G_z^*$.
Each non-leaf node of $T^*$ corresponds to an inner face of $G$, and each leaf of
$T^*$ corresponds to a face of $G_z$ whose boundary contains the apex node $z$.
Also note that we obtain $G^*$ if we combine all the leaves of $T^*$ into a single
node (which would correspond to the outer face of $G$). We will call $T^*$ the dual
tree of the outerplanar graph $G$ (Figure \ref{fig:op-dual}).
Let a central cut of $G$ be a cut $\delta(S)$ such that both of its shores $S$ and
$V\setminus S$ induce connected subgraphs of $G$. Hence, the shores of a central cut are subpaths of
the outer cycle, so the dual of $\delta(S)$ is a leaf-to-leaf path in $T^*$. Since
the edge set of any cut in a connected graph is a disjoint union of central cuts, it suffices to
consider only central cuts.
We want to find a strictly embedded cut-sparsifier $T=(V,F,u^*)$ of $G$ (i.e., a spanning
tree $T$ of $G$ with edge weights $u^*$) such that for any nonempty $X\subsetneq V$,
we have
\begin{equation}
\alpha u(\delta_G(X)) \le u^*(\delta_T(X)) \le \beta u(\delta_G(X)) .
\label{cut-sparsifier}
\end{equation}
In the above inequality, we can replace $u^*(\delta_T(X))$ with $u^*(\delta_G(X))$
if we set $u^*(e)=0$ for each edge $e\notin E(T)$. In the dual tree (of $G$),
$\delta_G(X)^*$ is a leaf-to-leaf path for any central cut $\delta(X)$,
so inequality \eqref{cut-sparsifier} is equivalent to
\begin{equation}
\alpha u(P) \le u^*(P) \le \beta u(P)
\label{distance-sparsifier}
\end{equation}
for any leaf-to-leaf path $P$ in $T^*$.
Finally, we give a sufficient condition on the weights $u^*$ assigned to the
edges ensuring that all edges of positive weight lie in the spanning tree of $G$.
Recall that the duals of the edges not in the spanning tree of $G$
form a spanning tree of $G^*$. Since we assign weight 0 to edges not in the
spanning tree of $G$, it is sufficient that the duals of the 0 weight edges contain a
connected spanning subgraph of $G^*$. Since $G^*$ is obtained by combining the leaves
of $T^*$ into a single node, it suffices for each node $v\in V(T^*)$ to
have a 0 weight path from $v$ to a leaf of $T^*$.
\subsection{An algorithm to build a distance-sparsifier of a tree}
\label{sec:non-conservative}
In this section, we present an algorithm to obtain a distance-sparsifier
of a tree. In particular, this allows us to obtain a cut-approximator of
an outerplanar graph from a distance-sparsifier of its dual tree.
Let $T=(V,E,u)$ be a weighted tree where $u:E\to\mathbb{R}^+$ is the
length function on $T$. Let $L\subset V$ be the leaves of $T$. We assign
non-negative weights $u^*$ to the edges of $T$. Let $d$ be the shortest
path metric induced by the original weights $u$, and let $d^*$ be the
shortest path metric induced by the new weights $u^*$. We want the following
two conditions to hold:
\begin{enumerate}
\item there exists a 0 weight path from each $v\in V$ to a leaf of $T$.
\item for any two leaves $x,y\in L$, we have
\begin{equation}
\frac14 d(x,y) \le d^*(x,y) \le 2 d(x,y) .
\label{tree-bounds}
\end{equation}
\end{enumerate}
We define $u^*$ recursively as follows. Let $r$ be a non-leaf node of $T$
(we are done if no such nodes exist), and consider $T$ to be rooted at
$r$. For $v\in V$, let $T(v)$ denote the subtree rooted at $v$, and let $h(v)$
denote the \emph{height} of $v$, defined by $h(v)=\min\{d(v,x):x\in L\cap T(v)\}$. Now,
let $r_1, ..., r_k$ be the points in $T$ that are at distance exactly $h(r)/2$
from $r$. Without loss of generality, suppose that each $r_i$ is a node
(otherwise we can subdivide the edge to get a node), and order the $r_i$'s
by increasing $h(r_i)$, that is $h(r_{i-1})\le h(r_i)$ for each $i=2,...,k$.
Furthermore, suppose that we have already assigned weights to the edges in
each subtree $T(r_i)$ using this algorithm, so it remains to assign weights
to the edges not in any of these subtrees. We assign a weight of $h(r_i)$ to
the first edge on the path from $r_i$ to $r$ for each $i=2,...,k$, and weight
0 to all other edges (Figure \ref{fig:algorithm}).
In particular, all edges on the path from $r_1$ to $r$ receive weight $0$.
This algorithm terminates because the length of the
longest path from the root to a leaf decreases by at least half the length
of the shortest edge incident to a leaf in each iteration.
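The following Python sketch (our own illustration, not part of the construction above) implements one way to carry out this recursive assignment on a tree stored as a dictionary of dictionaries. The handling of the terminal case is our own choice: a subtree that is a bare path down to a single leaf gets weight $0$ on all of its edges; this is the limit of the recursion and matches the observation below that the terminating cases only have $0$ weight edges.
\begin{verbatim}
import itertools

_fresh = itertools.count(10**6)      # labels for nodes created by subdividing edges

def _height(adj, v, parent):
    # h(v): distance from v to the closest leaf of the subtree hanging below v
    kids = [w for w in adj[v] if w != parent]
    return 0.0 if not kids else min(adj[v][w] + _height(adj, w, v) for w in kids)

def _is_bare_path(adj, v, parent):
    # True if the subtree below v has no branching, i.e. it is a path to one leaf
    kids = [w for w in adj[v] if w != parent]
    return len(kids) == 0 or (len(kids) == 1 and _is_bare_path(adj, kids[0], v))

def _zero_out(adj, v, parent, ustar):
    for w in adj[v]:
        if w != parent:
            ustar[frozenset((v, w))] = 0.0
            _zero_out(adj, w, v, ustar)

def _cut_points(adj, v, parent, budget, ustar, out):
    # collect the points at distance `budget` below v, subdividing edges if needed;
    # every edge crossed on the way down provisionally gets weight 0
    for w in list(adj[v]):
        if w == parent:
            continue
        ell = adj[v][w]
        if ell >= budget - 1e-12:            # the cut point lies on edge vw
            if ell > budget + 1e-12:         # strictly inside vw: subdivide at m
                m = next(_fresh)
                del adj[v][w]; del adj[w][v]
                adj[v][m] = budget
                adj[m] = {v: budget, w: ell - budget}
                adj[w][m] = ell - budget
                w = m
            ustar[frozenset((v, w))] = 0.0   # first edge on the path from w up to r
            out.append((w, v))               # (cut point r_i, its parent towards r)
        else:
            ustar[frozenset((v, w))] = 0.0
            _cut_points(adj, w, v, budget - ell, ustar, out)

def assign_weights(adj, r, parent, ustar):
    # recursively fill `ustar` with the new weights u* of all edges below r
    if _is_bare_path(adj, r, parent):        # terminal case: all edges get weight 0
        _zero_out(adj, r, parent, ustar)
        return
    hr = _height(adj, r, parent)
    cuts = []
    _cut_points(adj, r, parent, hr / 2.0, ustar, cuts)
    cuts.sort(key=lambda c: _height(adj, c[0], c[1]))    # r_1 has the smallest h
    for i, (ri, pi) in enumerate(cuts):
        if i > 0:                            # r_2,...,r_k: first edge gets h(r_i)
            ustar[frozenset((ri, pi))] = _height(adj, ri, pi)
        assign_weights(adj, ri, pi, ustar)   # recurse on the subtree T(r_i)
\end{verbatim}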
\begin{figure}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (145.38,47.73) .. controls (145.38,45.9) and (147.12,44.42) .. (149.27,44.42) .. controls (151.41,44.42) and (153.15,45.9) .. (153.15,47.73) .. controls (153.15,49.56) and (151.41,51.04) .. (149.27,51.04) .. controls (147.12,51.04) and (145.38,49.56) .. (145.38,47.73) -- cycle ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (5.26,148.48) -- (301.05,148.48) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (56.64,148.48) .. controls (56.64,146.65) and (58.38,145.16) .. (60.53,145.16) .. controls (62.68,145.16) and (64.42,146.65) .. (64.42,148.48) .. controls (64.42,150.31) and (62.68,151.79) .. (60.53,151.79) .. controls (58.38,151.79) and (56.64,150.31) .. (56.64,148.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (219.87,149.74) .. controls (219.87,147.91) and (221.61,146.42) .. (223.76,146.42) .. controls (225.91,146.42) and (227.65,147.91) .. (227.65,149.74) .. controls (227.65,151.56) and (225.91,153.05) .. (223.76,153.05) .. controls (221.61,153.05) and (219.87,151.56) .. (219.87,149.74) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (145.38,148.48) .. controls (145.38,146.65) and (147.12,145.16) .. (149.27,145.16) .. controls (151.41,145.16) and (153.15,146.65) .. (153.15,148.48) .. controls (153.15,150.31) and (151.41,151.79) .. (149.27,151.79) .. controls (147.12,151.79) and (145.38,150.31) .. (145.38,148.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (174.96,98.1) .. controls (174.96,96.28) and (176.7,94.79) .. (178.84,94.79) .. controls (180.99,94.79) and (182.73,96.28) .. (182.73,98.1) .. controls (182.73,99.93) and (180.99,101.42) .. (178.84,101.42) .. controls (176.7,101.42) and (174.96,99.93) .. (174.96,98.1) -- cycle ;
\draw (149.27,47.73) -- (178.84,98.1) ;
\draw (60.53,148.48) -- (149.27,47.73) ;
\draw (150.74,145.96) -- (180.32,95.59) ;
\draw (225.24,147.22) -- (180.32,95.59) ;
\draw (60.53,148.48) -- (96.68,212.09) -- (24.38,212.09) -- cycle ;
\draw (149.27,148.48) -- (176.87,243.57) -- (121.66,243.57) -- cycle ;
\draw (223.65,149.74) -- (251.15,212.27) -- (196.15,212.27) -- cycle ;
\draw (301.41,45.14) -- (301.05,148.48) ;
\draw [shift={(301.05,148.48)}, rotate = 270.2] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ;
\draw [shift={(301.41,45.14)}, rotate = 270.2] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ;
\draw (43.21,124.3) node [anchor=north west][inner sep=0.75pt] {$r_{1}$};
\draw (123.94,151.24) node [anchor=north west][inner sep=0.75pt] {$r_{2}$};
\draw (237.48,151.79) node [anchor=north west][inner sep=0.75pt] {$r_{3}$};
\draw (157.17,32.13) node [anchor=north west][inner sep=0.75pt] {$r$};
\draw (310.76,76.38) node [anchor=north west][inner sep=0.75pt] {$\frac{h( r)}{2}$};
\draw (120.92,117.48) node [anchor=north west][inner sep=0.75pt] [font=\small] {$h( r_{2})$};
\draw (210.64,114.48) node [anchor=north west][inner sep=0.75pt] [font=\small] {$h( r_{3})$};
\draw (169.88,64.56) node [anchor=north west][inner sep=0.75pt] {$0$};
\draw (92.65,83.7) node [anchor=north west][inner sep=0.75pt] {$0$};
\draw (41.25,190.2) node [anchor=north west][inner sep=0.75pt] {$T( r_{1})$};
\draw (130.19,220.15) node [anchor=north west][inner sep=0.75pt] {$T( r_{2})$};
\draw (205.09,191.39) node [anchor=north west][inner sep=0.75pt] {$T( r_{3})$};
\end{tikzpicture}
\caption{
The algorithm assigns weights to the edges above $r_1,...,r_k$,
and is run recursively on the subtrees $T(r_1),...,T(r_k)$.
}
\label{fig:algorithm}
\end{figure}
Since we assign 0 weight to edges on the $r_1r$ path,
Condition 1 is satisfied for all nodes above the $r_i$'s in the tree by construction. It remains to prove Condition 2.
We use the following upper and lower bounds. For each leaf $x\in L$,
\begin{align}
d^*(x,r) &\le 2d(x,r) - h(r) \label{upper-bound} , \\
d^*(x,r) &\ge d(x,r) - h(r) \label{lower-bound} .
\end{align}
We prove the upper bound in \eqref{upper-bound} by induction. We are
done if $T$ only has 0 weight edges, and the cases that cause the algorithm
to terminate will only have 0 weight edges. For the induction, we consider
two separate cases depending on whether $x\in T(r_1)$.
\textbf{Case 1}: $x\in T(r_1)$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_1) + d^*(r_1, r)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= d^*(x, r_1)
&& \textrm{(by definition of $u^*$)} \\
&\le 2d(x, r_1) - h(r_1)
&& \textrm{(by induction)} \\
&= 2d(x, r) - 2d(r, r_1) - h(r_1)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= 2d(x, r) - \frac32 h(r)
&& \textrm{($h(r_1)=h(r)/2$ by definition of $r_1$)} \\
&\le 2d(x, r) - h(r)
\end{align*}
\textbf{Case 2}: $x\in T(r_i)$ for some $i\neq1$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_i) + d^*(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= d^*(x, r_i) + h(r_i)
&& \textrm{(by definition of $u^*$)} \\
&\le 2d(x, r_i) - h(r_i) + h(r_i)
&& \textrm{(by induction)} \\
&= 2d(x, r) - 2d(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= 2d(x, r) - h(r)
&& \textrm{($d(r_i, r) = h(r)/2$ by definition of $r_i$)}
\end{align*}
This proves inequality \eqref{upper-bound}.
We prove the lower bound in \eqref{lower-bound} similarly.
\textbf{Case 1}: $x \in T(r_1)$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_1) + d^*(r_1, r)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= d^*(x, r_1)
&& \textrm{(by definition of $u^*$)} \\
&\ge d(x, r_1) - h(r_1)
&& \textrm{(by induction)} \\
&= d(x, r) - d(r, r_1) - h(r_1)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= d(x, r) - h(r)
&& \textrm{(by definition of $r_1$)}
\end{align*}
\textbf{Case 2}: $x \in T(r_i)$ for some $i\neq1$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_i) + d^*(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= d^*(x, r_i) + h(r_i)
&& \textrm{(by definition of $u^*$)} \\
&\ge d(x, r_i) - h(r_i) + h(r_i)
&& \textrm{(by induction)} \\
&= d(x, r) - d(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= d(x, r) - h(r)/2
&& \textrm{($d(r_i, r) = h(r)/2$ by definition of $r_i$)} \\
&\ge d(x, r) - h(r)
\end{align*}
This proves inequality \eqref{lower-bound}.
Finally, we prove Condition 2, that is inequality \eqref{tree-bounds},
by induction. Let $x,y\in L$ be two leaves of $T$. Suppose that
$x\in T(r_i)$ and $y\in T(r_j)$. By induction, we may assume that
$i\neq j$, so without loss of generality, suppose that $i<j$.
We prove the upper bound.
\begin{align*}
d^*(x, y) &= d^*(x, r_i) + d^*(r_i, r_j) + d^*(r_j, y) \\
&\le 2d(x, r_i) - h(r_i) + 2d(y, r_j) - h(r_j) + d^*(r_i, r_j)
&& \textrm{(by \eqref{upper-bound})} \\
&\le 2d(x, r_i) - h(r_i) + 2d(y, r_j) - h(r_j) + h(r_i) + h(r_j)
&& \textrm{(by definition of $u^*$)} \\
&= 2d(x, r_i) + 2d(y, r_j) \\
&\le 2d(x, y)
\end{align*}
We prove the lower bound.
\begin{align*}
d(x, y)
&= d(x, r_i) + d(r_i, r_j) + d(r_j, y) \\
&\le d(x, r_i) + d(r_j, y) + h(r_i) + h(r_j) && \\
&\qquad\qquad \textrm{(because $d(r, r_i)=h(r)/2\le h(r_i)$ for all $i\in[k]$)} && \\
&\le 2d(x, r_i) + 2d(r_j, y)
&& \textrm{(by definition of $h$)} \\
&\le 2d^*(x, r_i) + 2h(r_i) + 2d^*(y, r_j) + 2h(r_j)
&& \textrm{(by \eqref{lower-bound})} \\
&= 2d^*(x, y) - 2d^*(r_i, r_j) + 2h(r_i) + 2h(r_j) .
\end{align*}
Now we finish the proof of the lower bound by considering two cases.
\textbf{Case 1}: $i = 1$, that is $x$ is in the first subtree.
\begin{align*}
d(x, y)
&\le 2d^*(x, y) - 2d^*(r_1, r_j) + 2h(r_1) + 2h(r_j) \\
&= 2d^*(x, y) - 2h(r_j) + 2h(r_1) + 2h(r_j)
&& \textrm{(by definition of $u^*$)} \\
&\le 2d^*(x, y) + 2h(r_1) \\
&\le 4d^*(x, y)
&& \textrm{(since $h(r_1)\le h(r_j)=d^*(r_1,r_j)\le d^*(x,y)$)}
\end{align*}
\textbf{Case 2}: $i > 1$, that is neither $x$ nor $y$ is in the first subtree.
\begin{align*}
d(x, y)
&\le 2d^*(x, y) - 2d^*(r_i, r_j) + 2h(r_i) + 2h(r_j) \\
&= 2d^*(x, y) - 2h(r_i) - 2h(r_j) + 2h(r_i) + 2h(r_j)
&& \textrm{(by definition of $u^*$)} \\
&= 2d^*(x, y)
\end{align*}
This completes the proof of Condition 2.
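As a quick numerical illustration (ours), one can compare leaf-to-leaf distances under $u$ and $u^*$ on a small random tree, using the \texttt{assign\_weights} sketch given earlier in this subsection; the extreme ratios are expected to stay within the interval $[\tfrac14,2]$ of \eqref{tree-bounds}.
\begin{verbatim}
import random

def tree_dist(adj, src, weights=None):
    # single-source distances in the tree, using the original lengths
    # (weights=None) or the new weights u* stored in `weights`
    dist, stack = {src: 0.0}, [src]
    while stack:
        v = stack.pop()
        for w, ell in adj[v].items():
            if w not in dist:
                step = ell if weights is None else weights[frozenset((v, w))]
                dist[w] = dist[v] + step
                stack.append(w)
    return dist

random.seed(0)
adj = {0: {}}                                    # a random tree on 12 nodes
for v in range(1, 12):
    p, ell = random.randrange(v), float(random.randint(1, 10))
    adj.setdefault(v, {})[p] = ell
    adj[p][v] = ell

root = next(v for v in adj if len(adj[v]) > 1)   # any non-leaf node as the root r
ustar = {}
assign_weights(adj, root, None, ustar)

leaves = [v for v in adj if len(adj[v]) == 1]
ratios = [tree_dist(adj, x, ustar)[y] / tree_dist(adj, x)[y]
          for x in leaves for y in leaves if x != y]
print(min(ratios), max(ratios))                  # expected to lie within [1/4, 2]
\end{verbatim}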
\section{Maximum Weight Disjoint Paths}
In this section we prove our main result for {\sc edp}, Theorem~\ref{thm:edp}.
\subsection{Required Elements}
\label{sec:required}
We first prove the following result which establishes conditions for when
the cut condition implies routability.
\iffalse
\begin{restatable}{theorem}{OProute}
\label{thm:OP}
Let $G$ be an outerplanar graph with integer edge capacities $u(e)$.
Let $H$ denote a demand graph such that $G+H=(V(G),E(G)\cup E(H))$
is outerplanar. If $G,H$ satisfies the cut condition, then $H$ is routable in $G$.
\end{restatable}
\fi
\OProute*
The novelty in this statement is that we do not require the Eulerian condition
on $G+H$. This condition is needed in virtually all classical results for edge-disjoint paths. In fact, even when $G$ is a $4$-cycle and $H$ consists of a matching of size $2$, the cut condition need not be sufficient to guarantee routability. The main exception is the case when $G$ is a tree, where a trivial greedy algorithm suffices to route $H$. We prove the theorem by giving a simple (though not entirely trivial) algorithm to compute a routing.
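To make the $4$-cycle remark concrete, the following brute-force check (our own small illustration) verifies that for $G=C_4$ with unit capacities and $H$ consisting of the two diagonal demands, the cut condition holds and yet $H$ is not routable; note that here $G+H=K_4$ is not outerplanar, so this does not contradict Theorem~\ref{thm:OP}.
\begin{verbatim}
from itertools import combinations

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]       # the 4-cycle, unit capacities
H = [(0, 2), (1, 3)]                       # the two diagonal unit demands

def cut(edges, X):
    return [e for e in edges if (e[0] in X) != (e[1] in X)]

# cut condition: |delta_G(X)| >= |delta_H(X)| for every nonempty proper X
cut_condition = all(len(cut(E, set(X))) >= len(cut(H, set(X)))
                    for r in range(1, 4) for X in combinations(V, r))

# each demand has only two candidate paths in C_4 (the two arcs of the cycle);
# every choice of one arc per demand shares an edge, so H is not routable
arcs = {(0, 2): [{(0, 1), (1, 2)}, {(3, 0), (2, 3)}],
        (1, 3): [{(1, 2), (2, 3)}, {(0, 1), (3, 0)}]}
routable = any(p.isdisjoint(q) for p in arcs[(0, 2)] for q in arcs[(1, 3)])
print(cut_condition, routable)             # True False
\end{verbatim}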
To prove this theorem, we need the following $2$-node reduction lemma, which is well known.
\begin{lemma}
\label{lemma:2con-cc}
Let $G$ be a graph and let $H$ be a collection of demands that satisfies the cut condition.
Let $G_1,...,G_k$ be the blocks of $G$ (the 2-node connected components and the cut edges (aka bridges) of $G$).
Let $H_i$ be the collection of nontrivial (i.e., non-loop) demands after contracting
each edge $e\in E(G)\setminus E(G_i)$.
Then each $G_i,H_i$ satisfies the cut condition.
Furthermore, if $G$ (or $G+H$) was outerplanar (or planar),
then each $G_i$ (resp. $G_i+H_i$) is outerplanar (resp. planar).
Moreover, if each $H_i$ is routable in $G_i$, then $H$ is routable in $G$.
\end{lemma}
\begin{figure}[htbp]
\centering
\scalebox{0.7}{
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (120.4,91.54) -- (196.52,70.07) ;
\draw (44.29,113) -- (51.52,176.07) ;
\draw (44.29,113) -- (120.4,91.54) ;
\draw (120.4,91.54) -- (121.52,30.07) ;
\draw (196.52,70.07) -- (121.52,30.07) ;
\draw (51.52,176.07) -- (113.52,146.07) ;
\draw (158.52,160.07) -- (206.52,119.07) ;
\draw (120.4,91.54) -- (206.52,119.07) ;
\draw (120.4,91.54) -- (158.52,160.07) ;
\draw (44.29,113) -- (113.52,146.07) ;
\draw (252,125.27) -- (288.31,125.27) -- (288.31,119) -- (312.52,131.54) -- (288.31,144.07) -- (288.31,137.81) -- (252,137.81) -- cycle ;
\draw (158.52,160.07) -- (181.52,227.07) ;
\draw (181.52,227.07) -- (110.52,201.07) ;
\draw (110.52,201.07) -- (158.52,160.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (51.52,176.07) .. controls (63.52,223.07) and (141.52,257.07) .. (181.52,227.07) ;
\draw (435.4,96.54) -- (511.52,75.07) ;
\draw (359.29,118) -- (366.52,181.07) ;
\draw (359.29,118) -- (435.4,96.54) ;
\draw (435.4,96.54) -- (436.52,35.07) ;
\draw (511.52,75.07) -- (436.52,35.07) ;
\draw (366.52,181.07) -- (428.52,151.07) ;
\draw (473.52,165.07) -- (521.52,124.07) ;
\draw (435.4,96.54) -- (521.52,124.07) ;
\draw (435.4,96.54) -- (473.52,165.07) ;
\draw (359.29,118) -- (428.52,151.07) ;
\draw (473.52,165.07) -- (496.52,232.07) ;
\draw (496.52,232.07) -- (425.52,206.07) ;
\draw (425.52,206.07) -- (473.52,165.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (366.52,181.07) .. controls (376.52,160.07) and (378.52,138.07) .. (359.29,118) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (473.52,165.07) .. controls (467.52,186.07) and (475.52,217.07) .. (496.52,232.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (435.4,96.54) .. controls (433.52,116.07) and (448.52,151.07) .. (473.52,165.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (359.29,118) .. controls (389.52,121.07) and (420.52,112.07) .. (435.4,96.54) ;
\draw (60,136.4) node [anchor=north west][inner sep=0.75pt] {$G_{1}$};
\draw (70,74.4) node [anchor=north west][inner sep=0.75pt] {$G_{2}$};
\draw (131,54.4) node [anchor=north west][inner sep=0.75pt] {$G_{3}$};
\draw (152,117.4) node [anchor=north west][inner sep=0.75pt] {$G_{4}$};
\draw (142,185.4) node [anchor=north west][inner sep=0.75pt] {$G_{5}$};
\draw (467,119.4) node [anchor=north west][inner sep=0.75pt] {$G_{4}$};
\draw (446,195.4) node [anchor=north west][inner sep=0.75pt] {$G_{5}$};
\draw (445,63.4) node [anchor=north west][inner sep=0.75pt] {$G_{3}$};
\draw (379.35,74.67) node [anchor=north west][inner sep=0.75pt] {$G_{2}$};
\draw (382,143.4) node [anchor=north west][inner sep=0.75pt] {$G_{1}$};
\end{tikzpicture}
}
\caption{
The new demand edges that replace a demand edge whose terminals belong in different blocks.
Solid edges represent edges of $G$ and dashed edges represent demand edges.
}
\label{fig:route-contract}
\end{figure}
\begin{proof}
Consider the edge contractions to be done on $G+H$ to obtain $G_i+H_i$.
Then, any cut in $G_i+H_i$ was also a cut in $G+H$.
Since $G,H$ satisfies the cut condition, then $G_i,H_i$ must also satisfy the cut condition.
Furthermore, edge contraction preserves planarity and outerplanarity.
For each $st \in H$ and each $G_i$, the reduction process produces
a request $s_it_i$ in $G_i$. If this is not a loop, then $s_i,t_i$ lie in different components of $G$ after deleting the edges of $G_i$. In this case,
we say that $st$ {\em spawns} $s_it_i$. Let $J$ be the set of edges spawned by a demand $st$.
It is easy to see that the edges of $J$ form an $st$ path.
Hence if each $H_i$ is routable in $G_i$, we have that $H$ is routable in $G$.
\end{proof}
\iffalse
...and let $P$ be any simple path from $s$ to $t$.
For each $G_i$ such that $E(G_i)\cap E(P)\neq\emptyset$,
we obtain a nontrivial demand $s_it_i\in H_i$.
Since $\bigcup_{i=1}^kE(G_i)=E(G)$,
the demands $s_it_i$ form a connected set of edges that join $s$ and $t$ (Figure~ \ref{fig:route-contract}).
Hence, if each $H_i$ is routable in $G_i$, then we may concatenate the paths used to route
the demands $s_it_i$ to route the demand $st$. Hence $H$ is also routable in $G$.
\fi
\begin{proof}[Proof of Theorem \ref{thm:OP}]
Without loss of generality, we may assume that the edges of $G$ (resp. $H$) have unit capacity (resp. demand).
Otherwise, we may place $u(e)$ (resp. $d(e)$) parallel copies of such an edge $e$.
In the algorithmic proof, we may also assume that $G$ is 2-node connected.
Otherwise, we may apply Lemma \ref{lemma:2con-cc} and consider each 2-node
connected component of $G$ separately.
When working with 2-node connected $G$, the boundary of its outer face is a simple cycle.
So we label the nodes $v_1,...,v_n$
by the order they appear on this cycle.
\begin{figure}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (43,123.26) .. controls (43,69.54) and (86.54,26) .. (140.26,26) .. controls (193.97,26) and (237.52,69.54) .. (237.52,123.26) .. controls (237.52,176.97) and (193.97,220.52) .. (140.26,220.52) .. controls (86.54,220.52) and (43,176.97) .. (43,123.26) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (120,218.76) .. controls (120,216.68) and (121.68,215) .. (123.76,215) .. controls (125.84,215) and (127.52,216.68) .. (127.52,218.76) .. controls (127.52,220.84) and (125.84,222.52) .. (123.76,222.52) .. controls (121.68,222.52) and (120,220.84) .. (120,218.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (165,215.76) .. controls (165,213.68) and (166.68,212) .. (168.76,212) .. controls (170.84,212) and (172.52,213.68) .. (172.52,215.76) .. controls (172.52,217.84) and (170.84,219.52) .. (168.76,219.52) .. controls (166.68,219.52) and (165,217.84) .. (165,215.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (208,189.76) .. controls (208,187.68) and (209.68,186) .. (211.76,186) .. controls (213.84,186) and (215.52,187.68) .. (215.52,189.76) .. controls (215.52,191.84) and (213.84,193.52) .. (211.76,193.52) .. controls (209.68,193.52) and (208,191.84) .. (208,189.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (228,155.76) .. controls (228,153.68) and (229.68,152) .. (231.76,152) .. controls (233.84,152) and (235.52,153.68) .. (235.52,155.76) .. controls (235.52,157.84) and (233.84,159.52) .. (231.76,159.52) .. controls (229.68,159.52) and (228,157.84) .. (228,155.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (77,200.76) .. controls (77,198.68) and (78.68,197) .. (80.76,197) .. controls (82.84,197) and (84.52,198.68) .. (84.52,200.76) .. controls (84.52,202.84) and (82.84,204.52) .. (80.76,204.52) .. controls (78.68,204.52) and (77,202.84) .. (77,200.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (45,156.76) .. controls (45,154.68) and (46.68,153) .. (48.76,153) .. controls (50.84,153) and (52.52,154.68) .. (52.52,156.76) .. controls (52.52,158.84) and (50.84,160.52) .. (48.76,160.52) .. controls (46.68,160.52) and (45,158.84) .. (45,156.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (41,108.76) .. controls (41,106.68) and (42.68,105) .. (44.76,105) .. controls (46.84,105) and (48.52,106.68) .. (48.52,108.76) .. controls (48.52,110.84) and (46.84,112.52) .. (44.76,112.52) .. controls (42.68,112.52) and (41,110.84) .. (41,108.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (100,33.76) .. controls (100,31.68) and (101.68,30) .. (103.76,30) .. controls (105.84,30) and (107.52,31.68) .. (107.52,33.76) .. controls (107.52,35.84) and (105.84,37.52) .. (103.76,37.52) .. controls (101.68,37.52) and (100,35.84) .. (100,33.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (219,74.76) .. controls (219,72.68) and (220.68,71) .. (222.76,71) .. controls (224.84,71) and (226.52,72.68) .. (226.52,74.76) .. controls (226.52,76.84) and (224.84,78.52) .. (222.76,78.52) .. controls (220.68,78.52) and (219,76.84) .. (219,74.76) -- cycle ;
\draw (48.76,156.76) .. controls (96.52,140.07) and (183.52,100.07) .. (222.76,74.76) ;
\draw (168.76,215.76) .. controls (139.52,170.07) and (89.52,152.07) .. (48.76,156.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] [dash pattern={on 4.5pt off 4.5pt}] (48.76,156.76) .. controls (120.52,144.07) and (165.52,151.07) .. (211.76,189.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] [dash pattern={on 4.5pt off 4.5pt}] (44.76,108.76) .. controls (89.52,106.07) and (155.52,96.07) .. (222.76,74.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=3] [dash pattern={on 7.88pt off 4.5pt}] (103.76,33.76) .. controls (141.52,62.07) and (173.52,70.07) .. (222.76,74.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] [dash pattern={on 4.5pt off 4.5pt}] (222.76,74.76) .. controls (194.52,119.07) and (198.52,152.07) .. (211.76,189.76) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (60,63.76) .. controls (60,61.68) and (61.68,60) .. (63.76,60) .. controls (65.84,60) and (67.52,61.68) .. (67.52,63.76) .. controls (67.52,65.84) and (65.84,67.52) .. (63.76,67.52) .. controls (61.68,67.52) and (60,65.84) .. (60,63.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (155,28.76) .. controls (155,26.68) and (156.68,25) .. (158.76,25) .. controls (160.84,25) and (162.52,26.68) .. (162.52,28.76) .. controls (162.52,30.84) and (160.84,32.52) .. (158.76,32.52) .. controls (156.68,32.52) and (155,30.84) .. (155,28.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (194,44.76) .. controls (194,42.68) and (195.68,41) .. (197.76,41) .. controls (199.84,41) and (201.52,42.68) .. (201.52,44.76) .. controls (201.52,46.84) and (199.84,48.52) .. (197.76,48.52) .. controls (195.68,48.52) and (194,46.84) .. (194,44.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (233.76,119.5) .. controls (233.76,117.42) and (235.44,115.74) .. (237.52,115.74) .. controls (239.59,115.74) and (241.28,117.42) .. (241.28,119.5) .. controls (241.28,121.58) and (239.59,123.26) .. (237.52,123.26) .. controls (235.44,123.26) and (233.76,121.58) .. (233.76,119.5) -- cycle ;
\draw (113,230.4) node [anchor=north west][inner sep=0.75pt] {$v_{1}$};
\draw (57,207.4) node [anchor=north west][inner sep=0.75pt] {$v_{2}$};
\draw (170,228.4) node [anchor=north west][inner sep=0.75pt] {$v_{n}$};
\draw (211,205.4) node [anchor=north west][inner sep=0.75pt] {$v_{n-1}$};
\draw (247.01,161) node [anchor=north west][inner sep=0.75pt] [rotate=-19.44] {$\vdots $};
\draw (23.19,177.54) node [anchor=north west][inner sep=0.75pt] [rotate=-325.57] {$\vdots $};
\draw (80,7.4) node [anchor=north west][inner sep=0.75pt] {$v_{i}$};
\draw (233,54.4) node [anchor=north west][inner sep=0.75pt] {$v_{j}$};
\end{tikzpicture}
\caption{
The solid edges form the outerplanar graph $G$.
The dashed edges are the demand edges.
The thick dashed edge is a valid edge to route
because there are no terminals $v_k$ with $i<k<j$.
}
\label{fig:route-op}
\end{figure}
If there are no demand edges, then we are done. Otherwise, since $G+H$ is
outerplanar, without loss of generality there exists $i<j$ such that $v_iv_j\in E(H)$ and no
$v_k$ is a terminal for $i<k<j$ (Figure \ref{fig:route-op}).
Consider the outer face path $P=v_i,v_{i+1},...,v_j$.
We show that the cut condition is still satisfied after removing both the path $P$
and the demand $v_iv_j$. This represents routing the demand $v_iv_j$ along the path $P$.
Consider a central cut $\delta_G(X)$.
Suppose that $v_i$ and $v_j$ are on opposite sides of the cut. Then we decrease
both $|\delta_G(X)|$ and $|\delta_H(X)|$ by 1, so the cut condition still holds.
Suppose instead that $v_i,v_j\notin X$, that is, $v_i$ and $v_j$ are on the same side of the cut.
Then either $X\subset V(P)\setminus\{v_i,v_j\}$ or $X\cap V(P)=\emptyset$.
We are done if $X\cap V(P)=\emptyset$ because then $\delta_G(X)\cap E(P)=\emptyset$.
Otherwise, $X\subset V(P)\setminus\{v_i,v_j\}$ contains no terminals,
so $\delta_H(X)=\emptyset$ and the cut condition cannot be violated.
\end{proof}
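The proof above repeatedly selects a demand $v_iv_j$ with no terminal strictly between its endpoints and routes it along the outer face. A minimal sketch of just this selection step (the function name and interface are our own) is:
\begin{verbatim}
def innermost_demand(demands):
    # return a demand (i, j), i < j, with no terminal strictly between its
    # endpoints in the outer-cycle order, or None if no demand remains
    terminals = {v for d in demands for v in d}
    for i, j in (tuple(sorted(d)) for d in demands):
        if not any(k in terminals for k in range(i + 1, j)):
            return (i, j)
    return None

print(innermost_demand([(0, 5), (1, 3)]))   # (1, 3): nothing strictly inside
\end{verbatim}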
We also need the following result from \cite{chekuri2007multicommodity}.
\begin{theorem}
\label{thm:cms}
Let $T$ be a tree with integer edge capacities $u(e)$. Let $H$ denote a demand graph such that each fundamental cut induced by an edge $e \in E(T)$ contains
at most $k u(e)$ edges of $H$. We may then partition $H$ into at most $4k$ edge sets $H_1, \ldots ,H_{4k}$ such that each $H_i$ is routable in $T$.
\end{theorem}
\subsection{Proof of the Main Theorem}
\outerplanarWEDPapprox*
\begin{proof}
We first run the algorithms to produce an integer-capacitated tree $T,\hat{u}$ which is an $O(1)$ cut approximator for $G$. In addition, $T$ is a subtree of $G$ and it is a conservative approximator for each cut in $G$. First, we prove that the maximum weight routable in $T$ is not too much smaller than for $G$ (in either the {\sc edp} or {\sc anf} model). To see this, let $S$ be an optimal solution in $G$, whose value is {\sc opt(G)}. Clearly $S$ satisfies the cut condition in $G$, and hence by Theorem~\ref{thm:tree} it satisfies the cut condition in $T,\hat{u}$ up to a factor of $14$.
Thus by Theorem~\ref{thm:cms} there are $56$ sets such that $S = \cup_{i=1}^{56} S_i$ and each $S_i$ is routable in $T$. Hence one of the
sets $S_i$ accrues at least a $\frac{1}{56}$ fraction of the profit of {\sc opt(G)}.
Now we use the factor $4$ approximation \cite{chekuri2007multicommodity} to solve the maximum {\sc edp=anf} problem for $T,\hat{u}$. Let $S$ be a subset of requests which are routable in $T$ and have weight at least $\frac{1}{4}$ {\sc opt(T)} $ \geq \frac{1}{224}$ {\sc opt(G)}. Since $T$ is a subtree of $G$, we have that $G+T$ is outerplanar. Since $T,\hat{u}$ is an under-estimator of cuts in $G$, the edges of $T$ (viewed as requests) satisfy the cut condition in $G$. Hence by Theorem~\ref{thm:OP} we may route these single-edge requests in $G$. Hence, since $S$ can be routed in $T$, it can also be routed in $G$, completing the proof.
\end{proof}
\section{Conclusions}
The technique of finding a single-tree constant-factor cut approximator (for a global constant) appears to hit a limit at outerplanar graphs.
It would be
interesting to find a graph parameter $k$ which ensures a single-tree $O(f(k))$ cut approximator.
\noindent
The authors thank Nick Harvey for his valuable feedback on this article.
\iffalse
we still have our question of (1) when do we have O(1) approximators more generally ie beyond outerplanar graphs, (2) When do we have simultaneous approximators for cuts/distances and I suppose (3) do (1) and (2) cooincide for the same class of graphs perhaps? Is it implied by the equivalence theorems?
\fi
\bibliographystyle{plain}
\section{Introduction}
The dimensionality of a quantum system is crucial for its ability to perform quantum information processing tasks. For example, the security of some protocols for quantum key distribution and randomness expansion depends on the presumed dimensionality of the underlying physical system. The dimensionality also plays a crucial role in device characterisation tasks. Also, non-classical phenomena such as Kochen-Specker contextuality are known to require quantum systems of dimension at least three \cite{Kochen:1967JMM}. Therefore, it is of fundamental importance to have efficient tools to determine, for any experimental setup, the dimensionality of the underlying Hilbert space in which the measurement operators act on the physical system.
There are several approaches to tackle this problem. One of them is known as {\em self-testing} \cite{Yao_self}. The idea of self-testing is to identify a unique equivalence class of configurations corresponding to the extremal quantum violation of a Bell inequality. The members of the equivalence class are related via some fixed local isometry. The dimension of the individual quantum system can be lower bounded by identifying the equivalence class of configurations attaining the optimal violation \cite{Yao_self}. Though initially proposed in the setting of Bell non-locality, the idea of self-testing has been extended to prepare-and-measure scenarios, contextuality, and quantum steering \cite{tavakoli2018self, BRVWCK19, bharti2019local,vsupic2016self,shrotriya2020self}. For a review of self-testing, we refer to \cite{vsupic2019self}. It is important to stress that only extremal points of the quantum set of correlations that can be attained via finite-dimensional configurations admit self-testing \cite{goh2018geometry}.
The second approach is {\em tomography}. Quantum tomography is a process via which the description of a quantum state is obtained by performing measurements on an ensemble of identical quantum states. For quantum systems of dimension $d$, estimating an unknown quantum state to an error $\epsilon$ (in $l_1$ norm) requires $\Theta \left(d^2 \epsilon^{-2}\right)$ copies of the state \cite{OW17}. One drawback of this approach is that it requires prior knowledge of the dimensionality of the system.
The third approach is {\em dimension witnesses} \cite{brunner_testing_2008}. This is the approach we will focus on in this paper.
The goal of dimension witness is to render a lower bound on the dimensionality of the underlying physical system based on the experimental statistics. For example, a quantum dimension witness is a quantity that can be computed from the input-output correlations and whose value gives a lower bound to the dimension of the Hilbert space needed to accommodate the density matrices and the measurement operators needed to produce such correlations. Dimension witnesses have been investigated for the following types of scenarios:
\begin{enumerate}
\item \label{type1} \textbf{Bell scenarios:} Here, quantum dimension witnesses are based on the observation that certain bipartite Bell non-local correlations are impossible to produce with quantum systems of local dimension $d$ (and thus global dimension $d^2$) or less, implying that the experimental observation of these correlations certifies that the quantum local dimension is at least $d+1$ \cite{brunner_testing_2008,vertesi_bounding_2009,brunner_dimension_2013}. There are dimension witnesses of this type for arbitrarily high quantum local dimension $d$ \cite{brunner_testing_2008}, but they require preparing entangled states of dimension $d^2$ and conditions of spatial separation that do not occur naturally in quantum computers. This approach to dimension witnessing is related to self-testing based on Bell non-local correlations \cite{Yao_self}. A Bell dimension witness certifies the minimum quantum dimension accessed by the measurement devices acting on the physical systems prepared by a single source.
\item \label{type2} \textbf{Prepare-and-measure scenarios:} These scenarios consists of $p$ different preparation sources and $m$ measurements acting on the physical systems emitted by those sources. Prepare-and-measure dimension witnesses require $p > d+1$ preparations to certify classical or quantum dimension $d$ \cite{wehner2008lower,gallego_device-independent_2010}. They have been used to experimentally certify in a device-independent way small classical and quantum dimensions \cite{hendrych_experimental_2012,ahrens2012experimental,d2014device}. A prepare-and-measure dimension witness certifies the minimum classical or quantum dimension spanned by the $p$ preparation sources and the $m$ measurements.
\item \label{type3} \textbf{Kochen-Specker contextuality scenarios:} They consist of a single state preparation followed by a sequence of compatible ideal measurements chosen from a fixed set. Two measurements are compatible (or jointly measurable) when there is a third measurement that works as a refinement for both of them, so each of them can be measure by coarse graining the third measurement and thus both of them can be jointly measured. A measurement is ideal when it yields the same outcome when repeated on the same physical system and does not disturb any compatible measurement. Checking experimentally that a set of measurements are ideal and have certain relations of compatibility can be done from the input-output correlations \cite{LMZNCAH18}. Correlations between the outcomes of ideal measurements are Kochen-Specker contextual when they cannot be reproduced with models in which measurements have predetermined context-independent outcomes~\cite{cabello2008experimentally,KCBS}. Quantum Kochen-Specker contextuality dimension witnesses are based on the observation that certain Kochen-Specker contextual correlations are impossible to produce with quantum systems of dimension $d$ or less, implying that its experimental observation certifies a local quantum dimension of at least $d$. The problem of contextuality dimension witnesses is that they require testing in addition that the measurements are ideal and satisfy certain relations of compatibility. A {\em state-dependent} contextuality dimension witness certifies the minimum quantum dimension accessed by the measurement devices acting on the physical systems prepared by a single source. In a {\em state-independent} contextuality scenario, these measurements form a state-independent contextuality set in dimension $d$, defined as one for which the quantum predictions for sequences of compatible measurements for any quantum state in dimension $d$ cannot be reproduced by non-contextual models \cite{cabello2015necessary}. The minimum quantum dimension for contextual correlations have been studied in~\cite{GBCKL14}. A state-independent Kochen-Specker contextuality dimension witness certifies the minimum quantum dimension accessed by the measurement devices, without relating the conclusion to any particular source.
\end{enumerate}
In this paper, we introduce a novel graph-theoretic approach to quantum dimension witnessing. We deal with abstract structures of measurement events produced for one preparation and several measurements, as is the case in Kochen-Specker contextuality and Bell scenarios. This means that our approach will always work in Kochen-Specker contextuality scenarios and sometimes in specific Bell scenarios.
Our approach is, first, based on the observation that the problem of finding dimension witnesses can be reformulated as the problem of finding correlations for structures of exclusivity which are impossible to produce with systems of quantum dimension $d$ or less, implying that its experimental observation certifies a quantum dimension of at least $d+1$. Second, it is based on the observation that, given a set of events and their relations of mutual exclusivity, the sets of correlations allowed in quantum theory are connected to well-known and easy to characterize invariants and sets in graph theory \cite{CSW}. In fact, the power of the graph-theoretic approach to dimension witnessing is based on three pillars:
\begin{itemize}
\item The connection between correlations for structures of exclusivity and easy to characterize sets in graph theory. This connection allows us to use tools and results of graph theory for quantum graph dimension witnessing.
\item The observation that finding dimension witnesses in scenarios with many measurements is difficult due to the difficulty to fully characterize in these scenarios the sets of correlations that cannot be achieved with a given dimension. In contrast, the graph approach allows us to rapidly identify structures of exclusivity that have dimension witnesses, even though many of them correspond to scenarios with many measurements.
\item The connection between abstract structures of exclusivity and some specific contextuality scenarios (those consisting of dichotomic measurements having a structure of compatibility isomorphic to the structure of exclusivity). This assures that any quantum dimension witness for a graph of exclusivity always admits a physical realization in {\em some} Kochen-Specker contextuality scenario. Moreover, by imposing extra constraints, we can find, in principle, those dimension witness that also admit a physical realizations in a {\em specific} Kochen-Specker contextuality or Bell scenario.
\end{itemize}
The paper is organized as follows. In Sec.~\ref{notation_context} we introduce some standard definitions of graph theory and the graph-theoretic approach to correlations. In Sec.~\ref{sec2}, we use this graph-theoretic approach to study quantum dimension witness. Specifically, in Subsec.~\ref{heuristics}, we present a heuristic technique to compute a lower bound on the $d$ dimensional-restricted quantum value and find the corresponding $d$-dimensional quantum realisations. We illustrate the usefulness of this tool with some examples.
In Subsec.~\ref{sec4Qites}, we introduce a family of graphs, which we call the $k$-Qite family, and show that its elements are relatively simple quantum dimension witnesses for any dimension $k \geq 3$. Finally, in Sec.~\ref{disc}, we conclude by listing future directions for research.
Most of the notation used in the paper is self-explanatory. A graph describes relationships between several entities or vertices. We denote an edge between two vertices $i$ and $j$ by the symbol $i \sim j$. A class of commonly studied graphs is the cycles on $n$ vertices, which we denote by $C_n$. The work also uses semidefinite programming, where we use the symbol $S_+^{n}$ to denote the class of positive semidefinite Hermitian matrices of size $n \times n$.
\section{Graph theoretic approach to contextuality}
\label{notation_context}
Consider an experiment in the black-box setting.
An outcome $a$ and its associated measurement $M$ are together called a measurement event and denoted as $(a|M)$.
\begin{definition}(Exclusive event)
Two events $e_{i}$ and $e_{j}$ are defined to be exclusive if there exists a measurement $M$ such that $e_{i}$ and $e_{j}$ correspond to different outcomes of $M,$ i.e. $e_{i}=\left(a_{i} \mid M\right)$ and $e_{j}=\left(a_{j} \mid M\right)$ such that $a_{i} \neq a_{j}.$
\end{definition}
\begin{definition}(Exclusivity graph)
For a family of events $\left\{e_{1}, e_{2}, \ldots, e_{n}\right\}$ we associate a simple undirected graph $\mathcal{G}_{\mathrm{ex}}:=(V, E)$, with vertex set $V$ and edge set $E$, such that two vertices $i, j \in V$ share an edge if and only if $e_{i}$ and $e_{j}$ are exclusive events. $\mathcal{G}_{\mathrm{ex}}$ is called the exclusivity graph of the family.
\end{definition}
Now we consider theories that assign probabilities to the events corresponding to its vertices. Concretely, a {\em behaviour} corresponding to $\mathcal{G}_{\mathrm{ex}}$ is a mapping $p: [n]\to [0,1]$, such that $p_i+p_j\le 1$, for all $i\sim j$, where we denote $p(i)$ by $p_i$.
Here, the non-negative scalar $p_i\in [0,1]$ encodes the probability that measurement event $e_i$ occurs. Furthermore, note that two exclusive events $e_i$ and $e_j$ implies the linear constraint $p_i+p_j\le 1$.
A behaviour $p: [n]\to [0,1]$ is {\em deterministic non-contextual} if each $p_i \in \{0,1\}$ and $p_i+p_j \leq 1$ for exclusive events $e_i$ and $e_j$. A {\em deterministic non-contextual} behaviour can be considered as a vector in $\mathbb{R}^n$. The polytope of {\em non-contextual behaviours}, denoted by $\mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$, is the convex hull of all deterministic non-contextual behaviours. The behaviours that do not lie in $\mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$ are called {\em contextual}. It is worthwhile to mention that in combinatorial optimisation, one often encounters the {\em stable set} polytope of a graph $G$, $STAB(G)$ (defined below). It is quite easy to see that the characteristic vectors of stable sets of $G$ (subsets of vertices, no two of which share an edge) and {\em deterministic non-contextual} behaviours coincide.
\begin{definition}
\[ STAB(G) = \mathrm{conv}\{ x : x \text{ is a characteristic vector of a stable set of } G \}
\]
\end{definition}It thus follows from the definition that $\mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})=STAB(\mathcal{G}_{\mathrm{ex}})$.
Lastly, a behaviour $p: [n]\to [0,1]$ is called {\em quantum} if there exists a quantum state $\ket{\psi}$ and projectors $\Pi_1,\ldots \Pi_n$ acting on a Hilbert space $\mathcal{H}$ such that
\begin{equation} p_i= \bra{\psi}\Pi_i \ket{\psi}, \forall i\in [n] \text{ and } \mathrm{tr}(\Pi_i\Pi_j)=0, \text{ for } i\sim j.\end{equation}
We refer to the ensemble $\ket{\psi}, \{\Pi\}_{i=1}^n$ as a {\em quantum realization} of the behaviour $p$.
The convex set of all quantum behaviours is denoted by $\mathcal{P}_{Q}(\mathcal{G}_{\mathrm{ex}})$. It turns out this set too is a well studied entity in combinatorial optimisation, namely the {\em theta body}.
\begin{definition}
The theta body of a graph $G=([n],E)$ is defined by:
$${\rm TH}(G)=\{x\in \mathbb{R}^n_+: \exists Y\in \mathbb{S}^{1+n}_+, \ Y_{00}=1, \ Y_{ii}=x_i = Y_{0i} \quad \, \forall i \in [n], \ Y_{ij}=0, \forall (i,j)\in E\}.$$
\end{definition}
The fact that $\mathcal{P}_{Q}(\mathcal{G}_{\mathrm{ex}}) = TH(\mathcal{G}_{\mathrm{ex}})$ was observed in~\cite{CSW} and follows by taking $d = \ket{\psi}$ and $w_i = \Pi_i \ket{\psi} /\sqrt{\bra{\psi}\Pi_i \ket{\psi}} \ \forall i \in [n]$ in the following lemma.
\begin{lemma}\label{startpoint}
We have that $x\in TH(G)$ iff there exist unit vectors $d,w_1,\ldots,w_n$ such that
\begin{equation}\label{csdcever}
x_i=\langle d,w_i\rangle^2, \forall i\in [n] \text{ and } \langle w_i, w_j\rangle=0, \text{ for } (i,j)\in E.
\end{equation}
\end{lemma}
\begin{proof}Let $x\in {\rm TH}(G)$. By definition, $x$ is the diagonal of a matrix $Y$ satisfying $Y\in \mathbb{S}^{1+n}_+, \ Y_{00}=1, \ Y_{ii}=Y_{0i}, \ Y_{ij}=0, \forall (i,j)\in E$. Let $Y=Gram(d,v_1,\ldots,v_n)$. Define $w_i={v_i\over \|v_i\|}$. Using that $x_i=Y_{ii}=Y_{0i}$ we get that
$$x_i=\langle v_i,v_i\rangle=\langle d,v_i\rangle=\langle d,w_i\|v_i\|\rangle=\|v_i\|\langle d,w_i\rangle.$$
Lastly, note that $\langle d,w_i\rangle=\langle d, {v_i\over \|v_i\|}\rangle={ \langle v_i,v_i\rangle \over \|v_i\|}=\|v_i\|.$ Combining these two equations we get that
$$x_i=\langle d,w_i\rangle^2.$$
\noindent Conversely, let $Y$ be the Gram matrix of $d, \langle d,w_1\rangle w_1,\ldots,\langle d,w_n\rangle w_n$. Note that $\langle d,w_i\rangle w_i$ is the orthogonal projection of $d$ onto the unit vector $w_i$. It is easy to see that $Y$ has all the desired properties.
\end{proof}
\noindent In the above lemma, the vectors $w_i$, for $i \in [n]$, are sometimes referred to as an orthonormal representation (OR) of $G$.
\begin{definition}(orthonormal representation) An orthonormal representation of a graph $G = (V,E)$ is a set of unit vectors $w_i$ for $i \in [|V|]$, such that $\braket{w_i}{w_j} = 0, \text{ for all } (i,j) \in E$.
\end{definition}
\noindent The cost of this orthonormal representation of the graph is defined as $\lambda_{\max}\left( \sum_{i \in [|V|]} \ketbra{w_i}\right)$.
\medskip
Next, we turn our attention to the sum $S = p_1 + p_2 + \cdots + p_n$, where $p \in \mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$ is a {\em non-contextual} behaviour. The set of non-contextual behaviours forms a bounded polyhedron, i.e., a polytope. The facets of the aforementioned polytope define tight non-contextuality inequalities, which correspond to half-spaces. This explains why we are interested in $\sum_i p_i $. The maximum of $S$ over {\em deterministic} behaviours is the same as the maximum of $S$ over {\em non-contextual} behaviours. To see this, let $p \in \mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$ be a maximizer of $S$. We can write $p$ as a convex sum of deterministic behaviours, that is $p = \sum_j \lambda_j p^{(j)}$, where the $p^{(j)}$ are deterministic behaviours and $\lambda_j > 0, \ \sum_j \lambda_j = 1$. Now, note that the optimal value of $S = \sum_j \lambda_j \|p^{(j)}\|_1 \leq \max_j \|p^{(j)}\|_1$. This shows that there always exists a {\em deterministic} behaviour of $\mathcal{G}_{\mathrm{ex}}$ that attains the maximum of $S$. Therefore, the maximum of $S$ for classical theories is the size of the largest stable set of $\mathcal{G}_{\mathrm{ex}}$. This is exactly the independence number of $\mathcal{G}_{\mathrm{ex}}$, denoted by $\alpha(\mathcal{G}_{\mathrm{ex}})$. So we get the inequality $p_1 + p_2 + \cdots + p_n \leq \alpha(\mathcal{G}_{\mathrm{ex}})$.
\begin{definition}(Independence number)
Given a graph $G=(V,E)$, Independence number is the size of the largest subset of vertices $S \subseteq V$ such that no pair of vertices in $S$ are connected. Independence number is denoted by $\alpha(G)$.
\end{definition}
\begin{definition}
A non-contextuality inequality corresponds to a half-space that contains the set of non-contextual behaviours, that is,
\begin{equation}
\sum_{i \in [n]} p_i \leq \alpha(\mathcal{G}_{\mathrm{ex}}),
\end{equation}
for all $p \in \mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$.
\end{definition}
Interestingly, in the quantum setting one has some additional degrees of freedom to increase this sum. Indeed, let the state $u_0$ be a unit vector in a complex Hilbert space $\mathcal{H}$. The event $e_i$ corresponds to projecting $u_0$ onto a one-dimensional subspace spanned by a unit vector $u_i \in \mathcal{H}$; the probability that the event occurs is just the squared length of the projection. That is, $p_i = |\braket{u_0}{u_i}|^2$ and $p_1 + p_2 + \cdots + p_n = \sum_{i=1}^n |\braket{u_0}{u_i}|^2$. Now two exclusive events must correspond to projections onto orthogonal vectors, and hence $\braket{u_i}{u_j} = 0$, for all edges $(i,j)$ in $\mathcal{G}_{\mathrm{ex}}$. From Lemma~\ref{startpoint}, $p \in TH(\mathcal{G}_{\mathrm{ex}})$. Therefore, the optimisation problem we are interested in is
\begin{equation} \label{sayma}
\max \sum_i p_i : p \in TH(\mathcal{G}_{\mathrm{ex}}).
\end{equation}
In other words, find a matrix $ X\in \mathbb{S}^{1+n}_+, \text{ with the largest diagonal sum such that } X_{00}=1, \ X_{ii} = X_{0i} \, \forall i \in [n], \ X_{ij}=0, \forall (i,j)\in E\ $. This is precisely the definition of the Lov\'asz theta SDP~\eqref{lovtheta} corresponding to $\mathcal{G}_{\mathrm{ex}}$. The value of this SDP is the famous Lov\'asz theta number $\vartheta(\mathcal{G}_{\mathrm{ex}})$.
\begin{equation}\label{lovtheta}
\begin{aligned}
\vartheta(\mathcal{G}_{\mathrm{ex}}) = \max & \ \sum_{i=1}^n { X}_{ii} \\
{\rm s.t.} & \ { X}_{ii}={ X}_{0i}, \ i\in [n],\\
& \ { X}_{ij}=0,\ i\sim j,\\
& \ X_{00}=1,\ X\in \mathcal{S}^{n+1}_+.
\end{aligned}
\end{equation}
\noindent Hence we get $p_1 + p_2 + \cdots + p_n \leq \vartheta(\mathcal{G}_{\mathrm{ex}})$.
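As a small numerical illustration (ours, assuming \texttt{cvxpy} with an SDP-capable solver such as SCS is available), the SDP~\eqref{lovtheta} for the pentagon $C_5$, i.e., the exclusivity graph of the KCBS inequality \cite{KCBS}, can be solved as follows; its optimal value is $\vartheta(C_5)=\sqrt{5}\approx 2.236$, strictly above $\alpha(C_5)=2$.
\begin{verbatim}
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]       # C_5, the pentagon

X = cp.Variable((n + 1, n + 1), symmetric=True)    # index 0 is the handle
constraints = [X >> 0, X[0, 0] == 1]
constraints += [X[i + 1, i + 1] == X[0, i + 1] for i in range(n)]
constraints += [X[i + 1, j + 1] == 0 for i, j in edges]

prob = cp.Problem(cp.Maximize(cp.trace(X) - 1), constraints)
prob.solve()
print(prob.value)    # ~2.236 = sqrt(5), while alpha(C_5) = 2
\end{verbatim}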
\section{Graph-theoretic dimension witnesses}
\label{sec2}
Any Bell or contextuality inequality can be associated to a graph of exclusivity~\cite{CSW}. In this sense, all of them can be studied under the graph-theoretic framework. While in all previous works one first fixes a (Bell or contextuality) scenario and then looks for dimension witnesses, in this work we investigate the dimension witnesses for graphs (of exclusivity), without fixing a priori any scenario.
\subsection{Quantum correlations with dimensional restrictions}
In this section we examine from a graph-theoretic perspective the problem of
quantum correlations (aka behaviours) with dimensional restrictions. We use some standard concepts of graph theory and the graph-theoretic approach to correlations introduced in Section~\ref{notation_context}.
\begin{definition}(\textbf{$d$-quantum behaviour for a graph of exclusivity}) A behaviour $p: [n]\to~[0,1]$ corresponding to a graph of exclusivity $\mathcal{G}_{\mathrm{ex}}$, having $n$ vertices, is $d$-quantum if there exists a quantum state $\ket{\psi} \in \mathcal{H}^d$ and non-zero projectors $\Pi_1,\ldots, \Pi_n$ acting on a $d$-dimensional Hilbert space $\mathcal{H}^d$ such that
\begin{equation}\label{rankdef}
p_i= \bra{\psi}\Pi_i\ket{\psi},\, \forall i\in [n] \text{ and } \mathrm{tr}(\Pi_i\Pi_j)=0, \text{ for } i \sim j.
\end{equation}
\end{definition}
We call the ensemble $\ket{\psi}, \{\Pi_i\}_{i=1}^n \in \mathcal{H}^d$ satisfying \eqref{rankdef} a quantum realization of the behaviour $p$. We denote the set of $d$-quantum behaviours by $\mathcal{P}_{Q}^d(\mathcal{G}_{\mathrm{ex}})$.
\begin{definition}(\textbf{Orthogonal rank}) The orthogonal rank of a graph $G$, denoted by $R_o(G)$, is the minimum $d$ such that there exists a $d$-dimensional orthonormal representation for $G$.
\end{definition}
\noindent For example, any orthonormal representation of the $3$-cycle graph of exclusivity must consist of three mutually orthonormal vectors and therefore must be of dimension at least~$3$. Therefore, $R_o(C_3) = 3$. Note that ${\cal P}^d_Q(\mathcal{G}_{\mathrm{ex}})$ is an empty set for $d < R_o(\mathcal{G}_{\mathrm{ex}})$.
Suppose that we are interested in the largest value of the expression $\sum_{i \in [n]} p_i$, as $p$ ranges over the set of $d$-quantum behaviours, that is, the following optimisation problem:
\begin{equation}\label{dimbehaviour}
\max \sum_{i=1}^{n} p_i : p \in \mathcal{P}_{Q}^d(\mathcal{G}_{\mathrm{ex}}).
\end{equation}
Removing the dimensional constraint, the set of quantum behaviours $\mathcal{P}_{Q}(\mathcal{G}_{\mathrm{ex}})$ becomes the theta body of $\mathcal{G}_{\mathrm{ex}}$, $TH(\mathcal{G}_{\mathrm{ex}})$ (see Sec.~\ref{notation_context}). As explained in Eq.~(\ref{sayma}), maximizing the $\ell_1$ norm of $p$ over the theta body is equivalently given by the Lov\'asz theta SDP. Therefore, for all $d \geq R_o(\mathcal{G}_{\mathrm{ex}})$, problem in Eq~\eqref{dimbehaviour} with the dimensional constraint is equivalently expressed by the following rank constrained version of the Lov\'asz theta SDP:
\begin{equation}\label{theta:primalrank}
\begin{aligned}
\vartheta^d(\mathcal{G}_{\mathrm{ex}}) = \max & \ \ \sum_{i=1}^n {X}_{ii} \\
\text{ subject to} & \ \ {X}_{ii}={ X}_{0i}, \ \ 1\le i\le n,\\
& \ \ { X}_{ij}=0, \ \ i\sim j,\\
& \ \ X_{00}=1,\ \ X\in \mathcal{S}^{1+n}_+, \\
& \ \ \text{rank}(X) \leq d.
\end{aligned}
\end{equation}
More concretely, using the same arguments as in Lemma~\ref{startpoint}, if $p \in \mathcal{P}_{Q}^d(\mathcal{G}_{\mathrm{ex}})$ is optimal for \eqref{dimbehaviour} and $ \{\ket{u_i}\bra{u_i}\}_{i=0}^n \in \mathbb{C}^d$ is a quantum realization of $p$ (where $\ketbra{u_0}$ refers to the quantum state, whereas $\ketbra{u_i}$ for $1 \leq i \leq n$ refer to the $n$ projectors), then the Gram matrix of the vectors $\ket{u_0},\braket{u_0}{u_1}\ket{u_1},\ldots,\braket{u_0}{u_n}\ket{u_n}$ corresponds to an optimal solution for~\eqref{theta:primalrank} of rank at most~$d$.
Conversely, for any optimal solution $X={\rm Gram}(\ket{u_0},\ket{u_1},\ldots,\ket{u_n})$, with $u_i \in \mathbb{C}^d$, of the SDP \eqref{theta:primalrank},
the realization $\{{\ket{u_i}\bra{u_i} / \|\ket{u_i}\bra{u_i}\|}\}_{i=0}^n$ is optimal for \eqref{dimbehaviour}. The equivalence fails to hold for $d < R_o(\mathcal{G}_{\mathrm{ex}})$, due to the inverse norm factor in the above line, since $\|u_i\|=0$ for at least one $i$. This is because otherwise $\{u_i/\|u_i\|\}_{i=1}^n$ is a valid orthonormal representation for $\mathcal{G}_{\mathrm{ex}}$ of dimension $d < R_o(\mathcal{G}_{\mathrm{ex}})$, violating the definition of orthogonal rank. The quantities $\vartheta^1(\mathcal{G}_{\mathrm{ex}}),\vartheta^2(\mathcal{G}_{\mathrm{ex}}), \ldots, \vartheta^{R_o(\mathcal{G}_{\mathrm{ex}})-1}(\mathcal{G}_{\mathrm{ex}})$ are still well-defined but they do not seem to have any physical relevance in this context.
\medskip
On the other hand, we are also interested in the minimum dimension in which the Lov\'asz theta bound can be achieved.
\begin{definition}(\textbf{Lov\'asz rank}) The Lov\'asz rank of a graph $G$, denoted by $R_L(G)$, is the minimum $d$ for which $\vartheta^d(G) = \vartheta(G)$.
\end{definition}
\noindent By definition, $R_L(G) \geq R_o(G)$. $R_L(G)$ can be sometimes much smaller than the number of vertices of $G$. The following lemma due to Barvinok~\cite{Barvinok1995} gives an upper bound on $R_L(G)$.
\begin{lemma}(\textbf{Barvinok bound}) \label{barvinok}
There exists an optimal solution of $X^*$ of the following SDP
\begin{equation}
\begin{aligned}
\max : & \ \ \mathrm{tr}(CX) \\
\mathrm{s.t.} & \, \mathrm{tr} (A_i X) = b_i, \quad \forall i = 1,2,\ldots, m \\
& X \succeq 0,
\end{aligned}
\end{equation}
with rank $r$ satisfying the inequality $r(r+1)/2 \leq m$.
\end{lemma}
\noindent For the Lov\'asz theta SDP, the number of linear constraints is $m = 1 + |V| + |E|$. Hence $R_L(G) \leq \frac{1}{2} \left(\sqrt{8(|V| + |E|)+9}-1 \right)$. To summarise, we have the following relationships:
\begin{equation}
\vartheta^{R_o(\mathcal{G}_{\mathrm{ex}})}(\mathcal{G}_{\mathrm{ex}}) \leq \vartheta^{R_o(\mathcal{G}_{\mathrm{ex}})+1}(\mathcal{G}_{\mathrm{ex}}) \leq \cdots \leq \vartheta^{R_L(\mathcal{G}_{\mathrm{ex}})}(\mathcal{G}_{\mathrm{ex}}) = \vartheta(\mathcal{G}_{\mathrm{ex}}).
\end{equation}
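For instance, for the $5$-cycle $C_5$ one has $|V|+|E|=10$, so the bound above gives $R_L(C_5)\le \frac{1}{2}\left(\sqrt{89}-1\right)\approx 4.2$, i.e., $R_L(C_5)\le 4$; in fact, $\vartheta(C_5)=\sqrt 5$ is already attained by a $3$-dimensional realization (the Lov\'asz umbrella), so that $R_L(C_5)=R_o(C_5)=3$.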
This suggests a way to lower bound the dimension of the underlying quantum system that violates a certain dimension-restricted non-contextuality inequality. More formally, a violation of the inequality $\sum_i p_i \leq \vartheta^d(\mathcal{G}_{\mathrm{ex}})$, where $p \in \mathcal{P}_Q(\mathcal{G}_{\mathrm{ex}})$, implies that the underlying quantum system must have dimension at least $d+1$. We shall refer to the operator in such a dimension-restricted non-contextuality inequality as a {\em dimension witness} for dimension $d+1$.
\medskip
Finally, we note an equivalent way to compute the dimension restricted Lov\'asz theta, which we define as:
\begin{equation}
\begin{aligned}\label{Prog_2}
\theta^d(G) = &\max_{\{v_i \in \mathbb{C}^d\}_{i= 1}^n} \lambda_{max}\left(\sum_{i= 1}^n\ketbra{v_i}\right) \\
& \mathrm{s.t.} \; \braket{v_i}{v_i} = 1, \forall i \in [n]\\
& \mathrm{and} \; \braket{v_i}{v_j} = 0, i\sim j.
\end{aligned}
\end{equation}
\begin{lemma}\label{Lmax}
$\theta^d(G) = \vartheta^d(G)$.
\end{lemma}
\begin{proof}
\textsf{($\geq$ direction)} Let $X$ be an optimal solution of the SDP \eqref{theta:primalrank}. Write $X = VV^{\dagger}$ and let the rows of $V$ be $v_i \in \mathbb{C}^{d}$ for $0\leq i \leq n$. Let $\tilde{v_i} = v_i /\|v_i\|$. Clearly, the $\tilde{v_i}$ satisfy the constraints in~(\ref{Prog_2}). Now observe that
\begin{equation}
\begin{aligned}
\theta^d(G) \geq & \lambda_{max}\left(\sum_{i=1}^{n} \ketbra{\tilde{v_i}}\right) = \max_{v: \|v\|=1} \sum_{i=1}^n |\braket{v}{\tilde{v_i}}|^2 \\
&\geq \sum_{i=1}^n |\braket{v_0}{\tilde{v_i}}|^2 = \sum_{i=1}^n |\braket{v_i}{\tilde{v_i}}|^2 = \sum_{i=1}^n \braket{v_i}{v_i} \\
&= \vartheta^d(G).
\end{aligned}
\end{equation}
\textsf{($\leq$ direction)} Let $\{v_i \in \mathbb{C}^{d}\}_{i=1}^n$ be an optimal solution of $\theta^d(G)$ and let $v_0$ be the eigenvector of $\sum_{i= 1}^n\ketbra{v_i}$ corresponding to the largest eigenvalue. Now construct a $(n+1) \times d$ matrix $V$ whose first row is $V_0 = v_0$ and whose remaining rows are $V_i = \braket{v_i}{v_0}v_i$, for all $i \in [n]$. Let $X = VV^\dagger$. Firstly, we note that it satisfies all the constraints of the SDP. Now observe that
\begin{equation}
\begin{aligned}
\vartheta^d(G) & \geq \mathrm{tr}(X) -1 \\
&= \sum_{i=1}^n \braket{v_i}{v_i}|\braket{v_i}{v_0}|^2 \\
&= \sum_{i=1}^n |\braket{v_i}{v_0}|^2 \\
&= \lambda_{max} \left(\sum_{i=1}^{n} \ketbra{v_i} \right) \\
&= \theta^d(G).
\end{aligned}
\end{equation}
\end{proof}
\subsection{Finding low rank solutions: Heuristic approach}
\label{heuristics}
Unfortunately, \emph{rank-constrained} SDPs are \emph{NP}-hard problems and hence they are computationally intractable. An easy way to see this is that the NP-hard \textsf{Max-Cut} problem with weight matrix $W$ can be expressed as the following rank one restricted SDP:
\begin{equation}
\begin{aligned}
\max \ \ &\frac{1}{2}\mathrm{tr} (W X) \\
\text{s.t.}\ & {X}_{ii}= 1, \forall i,\\
&X \succeq 0, \\
&\text{rank}(X) = 1.
\end{aligned}
\end{equation}
\noindent Given this hardness, it seems unlikely that, given a non-contextuality inequality and a dimension $d$, one can efficiently compute the value $\vartheta^d(\mathcal{G}_{\mathrm{ex}})$ and find a quantum realisation of dimension $d$ that achieves the bound. Nevertheless, it is important to find low-dimensional quantum realisations which at least violate the classical bound $\alpha(\mathcal{G}_{\mathrm{ex}})$. For this purpose, we provide a heuristic technique (Algorithm~\ref{algo:heuristic}) to compute a lower bound on the $d$-dimensional restricted quantum value and find the corresponding $d$-dimensional quantum realisations.
\medskip
\begin{algorithm}[H]
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{Graph $G$ having $n$ nodes, dimension $d$, number of iterations \texttt{k}}
\Output{A lower bound to $\vartheta^d(G)$}
\vspace{0.2cm}
Generate a random matrix $W \in \mathbb{R}^{(n+1)\times (n+1)}$\;
\textit{iter} = 1\;
\While{iter $< \texttt{k}$}{
Minimise $\mathrm{tr}((W-I_{n+1})X)$, subject to $X \succeq 0$, $X_{00} = 1$, $X_{ii} = X_{0i}$ for all $i$ and $X_{ij} = 0$ for all $i \sim j$\;
Obtain optimal $X$ for the above SDP\;
Minimise $\mathrm{tr}(XW)$, subject to $I_{n+1} \succeq W \succeq 0$, $\mathrm{tr}(W) = n+1-d$\;
Obtain optimal $W$ from the above SDP \;
\textit{iter} = \textit{iter} + 1\;
}
\caption{Heuristics using SDPs.}
\label{algo:heuristic}
\end{algorithm}
\medskip
The algorithm is adapted from an approach to solving rank-constrained problems given in Chapter 4 of~\cite{dattorro2005convex}. The reference gives a heuristic algorithm for producing low rank solutions to feasibility SDPs of the form:
\vspace{-0.5cm}
\begin{equation} \label{genranksdp}
\begin{aligned}
\text{Find} \ \ & G \in \mathcal{S}^N_+ \\
\text{ s.t. } & G \in \mathcal{C}\\
&\text{rank}(G) \leq d,
\end{aligned}
\end{equation}
\noindent where $\mathcal{C}$ is a convex set. Instead of solving this non-convex problem directly, they suggest solving the two SDPs~\eqref{ranksdp1} and \eqref{ranksdp2} iteratively, until the following stopping criterion is met. After a particular iteration, let $G^*$ and $W^*$ be the optimal solutions of the SDPs~\eqref{ranksdp1} and \eqref{ranksdp2}, respectively. The loop is stopped if $\langle G^*, W^*\rangle = 0$. Let us see why. Note that the eigenvalues of $W^*$ lie in the closed interval $[0,1]$ and they sum up to $N-d$. This implies that at least $N-d$ of its eigenvalues are non-zero, that is, rank$(W^*) \geq N-d$. This, along with the fact that $\langle G^*, W^*\rangle = 0$, implies that rank$(G^*) \leq d$. Since $G^*$ is a solution of the first SDP, it must also satisfy the conditions $G^* \in \mathcal{C}$ and $G^* \in \mathcal{S}^N_+$. Thus $G^*$ is a solution of SDP~\eqref{genranksdp}. However, note that there is no guarantee that the stopping criterion will be met.
\begin{multicols}{2}
\begin{equation} \label{ranksdp1}
\begin{aligned}
\min_G \ \ &\langle G,W \rangle \\
\text{ s.t. } & G \in \mathcal{C}\\
& G \in \mathcal{S}^N_+.
\end{aligned}
\end{equation}
\begin{equation} \label{ranksdp2}
\begin{aligned}
\min_W \ \ &\langle G,W \rangle \\
\text{ s.t. } & \mathrm{tr}(W) = N - d \\
& I_N \succeq W \succeq 0.
\end{aligned}
\end{equation}
\end{multicols}
In our case, the SDP~\eqref{theta:primalrank} is more general in the sense that it also involves optimising an objective function. Thus we include the objective function of the Lov\'asz theta SDP, $\mathrm{tr}(X)$, as an extra additive term in the objective function of the first SDP~\eqref{ranksdp1}. Besides this, the main idea of Algorithm~\ref{algo:heuristic} is the same as in the feasibility-SDP case: solve two SDPs iteratively. The first SDP tries to satisfy all the Lov\'asz theta SDP constraints, while the second SDP tries to restrict the rank of the solution $X$ to the desired value. The algorithm is made to run for a predefined number of iterations, $\texttt{k}$. At the end of the program, if the final $X$ and $W$ are such that $\langle X,W \rangle = 0$, then the solution $X$ is indeed a feasible solution to SDP~\eqref{theta:primalrank}. If not, we restart the program. We find that this heuristic works well in practice and enables us to find low rank solutions to the Lov\'asz theta SDP. Taking a Gram decomposition of the solution matrix $X$ allows us to compute the $d$-dimensional quantum realisations.
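To make the procedure concrete, the following Python/\texttt{cvxpy} sketch is our own illustrative re-implementation of one run of Algorithm~\ref{algo:heuristic}, together with the final Gram decomposition; the variable names, solver choice and stopping test are assumptions on our part, and it is not the MATLAB/SDPT3 code referred to in the next subsection.
\begin{verbatim}
# Illustrative sketch of Algorithm 1 (assumes numpy, cvxpy and an SDP solver such as SCS).
import numpy as np
import cvxpy as cp

def heuristic_theta_d(n, edges, d, iters=20, tol=1e-6, seed=0):
    """edges: pairs (i, j) with 1 <= i < j <= n; returns (tr(X) - 1, Gram vectors)."""
    rng = np.random.default_rng(seed)
    W = rng.random((n + 1, n + 1))
    W = (W + W.T) / 2
    X_val = None
    for _ in range(iters):
        # SDP 1: satisfy the Lovasz theta constraints, penalising directions in W.
        X = cp.Variable((n + 1, n + 1), PSD=True)
        cons = [X[0, 0] == 1]
        cons += [X[i, i] == X[0, i] for i in range(1, n + 1)]
        cons += [X[i, j] == 0 for (i, j) in edges]
        cp.Problem(cp.Minimize(cp.trace((W - np.eye(n + 1)) @ X)), cons).solve()
        X_val = X.value
        # SDP 2: choose W supported on the n+1-d "smallest" directions of X.
        Wv = cp.Variable((n + 1, n + 1), PSD=True)
        cp.Problem(cp.Minimize(cp.trace(X_val @ Wv)),
                   [np.eye(n + 1) - Wv >> 0, cp.trace(Wv) == n + 1 - d]).solve()
        W = Wv.value
        if np.trace(X_val @ W) < tol:   # <X, W> ~ 0 certifies rank(X) <= d
            break
    evals, evecs = np.linalg.eigh(X_val)
    V = evecs[:, -d:] * np.sqrt(np.clip(evals[-d:], 0, None))  # rows: d-dim realisation
    return np.trace(X_val) - 1, V

# Example: C_5 with d = 3; if the heuristic converges, tr(X)-1 approaches sqrt(5) ~ 2.236.
print(heuristic_theta_d(5, [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)], d=3)[0])
\end{verbatim}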
\medskip
Note that Algorithm~\ref{algo:heuristic} only outputs a lower bound for $\vartheta^d(G)$ and is not directly used to find dimension witnesses (which would require an upper bound). However, one may guess this upper bound by running the algorithm several times and taking the maximum over all runs. This idea allows us to find candidate graphs for which we can find dimension witnesses and then prove the upper bound theoretically. In fact, in Sec.~\ref{sec4Qites} we describe a family of graphs that can be used as dimension witnesses, found precisely in this way using Algorithm~\ref{algo:heuristic}.
\subsection{Examples}
To demonstrate the usefulness of the tools introduced, we apply them to two graphs that are relevant in the literature on contextuality. For each graph, we report the lower bounds on the rank-constrained Lov\'asz theta values for different dimensions obtained with the algorithm introduced before\footnote{A MATLAB implementation of the code using the SDPT3 solver can be found \href{https://www.dropbox.com/sh/595q05xpo7wfzpd/AAC8jvuprr-C-DTcJccxl6fea?dl=0}{here}.} and discuss why the results are interesting.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{dagograph.png}
\caption{$G_1$ graph: The $9$-vertex graph $G_1$ was used in \cite{Dagomirgraph} to illustrate the notion of almost state-independent contextuality.}
\label{dagograh}
\end{figure}
\subsubsection{Almost state-independent contextuality}
The earliest proof of state-independent quantum contextuality by
Kochen and Specker \cite{Kochen:1967JMM} required 117 three-dimensional real projective measurements. Since then, the number of projective measurements needed to demonstrate state-independent contextuality has been drastically reduced to thirteen over the years \cite{cabello1996bell, yu2012state}. The paper by Yu and Oh suggested a test to reveal state-independent contextuality with only thirteen
projectors \cite{yu2012state}. Later, a computer-aided proof confirmed that it is impossible to demonstrate state-independent contextuality
with fewer than thirteen measurements \cite{cabello2016quantum}. Thus, any test of contextuality with fewer than thirteen projective measurements
would fail to exhibit contextuality for at least a few quantum states. The $9$-vertex graph $G_1$ in Fig.~\ref{dagograh} is a part of the original proof of the Kochen-Specker theorem \cite{Kochen:1967JMM} and has been used in \cite{Dagomirgraph} to illustrate the concept of ``almost state-independent'' contextuality. The almost state-independent non-contextuality inequality is given by,
\begin{equation} \label{eq: Dag_ineq}
\sum_{i \in [n]} p_i \leq 3,
\end{equation}
with the events satisfying the exclusivity relation given by the graph in Fig.~\ref{dagograh}. In \cite{Dagomirgraph}, the authors showed that the non-contextuality inequality in \eqref{eq: Dag_ineq} is saturated by the three-dimensional maximally mixed state and violated by every other choice of three-dimensional preparation, for an appropriate choice of measurement settings. Since the non-contextuality inequality in \eqref{eq: Dag_ineq} is violated for every quantum state except the maximally mixed state, it exemplifies the concept of almost state-independent contextuality. For details, refer to \cite{Dagomirgraph}. As one can see, the non-contextual bound for the aforementioned non-contextuality inequality is given by its independence number, $\alpha(G_1) = 3$ \cite{CSW}. In addition, $R_o(G_1) = 3$ and $R_L(G_1) \leq 4$.
Our calculations lead to the following results:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$d =$ & $3$ & $4$ \\
\hline
$\vartheta^d(G_1) \geq$ & $3.333$ & $3.4706=\vartheta(G_1)$ \\
\hline
\end{tabular}
\end{center}
The authors of \cite{Kochen:1967JMM,Dagomirgraph} used this graph to illustrate state-independent and almost state-independent contextuality in $d=3$, respectively. From numerics, we know that there exists a rank 4 solution which achieves the Lov\'asz theta number, and it would be interesting to show that $R_L(G_1) = 4$. Also, numerical evidence suggests that $\vartheta^3(G_1) \leq 3.333$; however, we do not have a theoretical proof. If we assume $\vartheta^3(G_1) \leq 3.333$, it would mean that any experimental value $> 3.333$ will certify that the underlying dimension is greater than $3$.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{mermin_graph.png}
\caption{$G_2$ graph: the $16$-vertex graph $G_2$, is the graph of exclusivity corresponding to the $16$ events in the Bell operator of Mermin's tripartite Bell inequality. The aforementioned tripartite Bell inequality can be used to self-test the $3$-qubit GHZ state.}
\label{mermingrah}
\end{figure}
\subsubsection{Mermin's Bell inequality}
We discuss an $n$-partite Bell inequality (for odd $n\geq 3$), known as Mermin's Bell inequality \cite{Mermin90}, the interest of which is based on the fact that the Bell operator
\begin{equation}
S_n = \frac{1}{2 i} \left[\bigotimes_{j=1}^n (\sigma_x^{(j)}+i \sigma_z^{(j)}) - \bigotimes_{j=1}^n (\sigma_x^{(j)}-i \sigma_z^{(j)})\right],
\end{equation}
where $\sigma_x^{(j)}$ is the Pauli $x$ matrix for qubit $j$, has an eigenstate with eigenvalue $2^{n-1}$. In contrast, for local hidden-variable (LHV) and noncontextual hidden-variable (NCHV) theories,
\begin{equation}
\langle S_n \rangle \overset{\scriptscriptstyle{\mathrm{LHV, NCHV}}}{\le} 2^{(n-1)/2}.
\end{equation}
The aforementioned inequality thus demonstrates the fact that there is no
limit to the amount by which quantum theory can surpass the limitations imposed by local hidden variable theories (or non-contextual hidden variable theories). We are interested in the tripartite case, i.e. for $n=3$,
\begin{equation} \label{eq: mermin_3}
\langle \sigma_z^{(1)} \otimes \sigma_x^{(2)} \otimes \sigma_x^{(3)} \rangle + \langle
\sigma_x^{(1)} \otimes \sigma_z^{(2)} \otimes \sigma_x^{(3)}\rangle +
\langle \sigma_x^{(1)} \otimes \sigma_x^{(2)} \otimes \sigma_z^{(3)} \rangle-
\langle \sigma_z^{(1)} \otimes \sigma_z^{(2)} \otimes \sigma_z^{(3)} \rangle \leq 2.
\end{equation}
The tripartite inequality in \eqref{eq: mermin_3} can be used to self-test a $3$-qubit GHZ state \cite{kaniewski2017self}. One can study the aforementioned inequality via the graph approach introduced in \cite{CSW}. The $16$-vertex graph $G_2$ in Fig.~\ref{mermingrah} is the graph of exclusivity corresponding to the $16$ events in the Bell operator of Mermin's tripartite Bell inequality~\cite{mermin_cabello}. In this case, $\alpha(G_2) = 3$, $R_o(G_2) = 4$, and $R_L(G_2) \leq 7$. Our calculations give
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$d =$ & $4$ & $5$ & $6$ & $7$ \\
\hline
$\vartheta^d(G_2) \geq$ & $3.414$ & $3.436$ & $3.6514$ & $4=\vartheta(G_2)$ \\
\hline
\end{tabular}
\end{center}
Further if we can show that these lower bounds are tight, then one can use these inequalities as dimension witnesses. It is also interesting to note that the Lov\'asz theta can be achieved in $d=7$, since achieving it in the three-party, two-setting, two-outcome Bell scenario requires $3$ qubits and thus $d=2^3=8$.
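As a small numerical aside (our own check, not part of the analysis above), the $n=3$ Bell operator appearing in Eqn.~(\ref{eq: mermin_3}) can be constructed explicitly and its largest eigenvalue confirmed to be $2^{n-1}=4$, compared with the LHV/NCHV bound of $2$:
\begin{verbatim}
# Check that the n = 3 Mermin Bell operator has largest eigenvalue 2^(n-1) = 4.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

S3 = kron3(Z, X, X) + kron3(X, Z, X) + kron3(X, X, Z) - kron3(Z, Z, Z)
print(np.linalg.eigvalsh(S3).max())   # 4.0, versus the classical bound of 2
\end{verbatim}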
\subsection{Quantum dimension witnesses for arbitrary dimensions: the family of Qites}
\label{sec4Qites}
It was realised \cite{Kochen:1967JMM} that achieving Kochen-Specker contextuality requires a quantum dimension of at least $3$. A simple proof of this is provided in the following Lemma.
\begin{lemma}\label{gleason}
$\vartheta^2(\mathcal{G}_{\mathrm{ex}}) = \alpha(\mathcal{G}_{\mathrm{ex}})$.
\end{lemma}
\begin{proof}
For this proof we use the definition of the restricted Lov\'asz theta number from (\ref{Prog_2}). We need to show that, if we restrict ourselves to $2$-dimensional vectors, then the restricted Lov\'asz theta number is at most the independence number of the graph. Firstly, note that if the graph has an odd cycle (of length $>1$), then it cannot have an orthonormal representation in $2$ dimensions. Thus we consider only bipartite graphs. Furthermore, assume that $\mathcal{G}_{\mathrm{ex}}$ is connected; if it is not, apply the following argument to each connected component and note that the independence number of the graph is the sum of the independence numbers of its connected components. For a connected bipartite graph the bipartition is unique; for $\mathcal{G}_{\mathrm{ex}}$, denote the two parts by $V$ and $V'$. The key observation is that for any unit vector $\ket{v}$ in $\mathbb{C}^2$, there exists a unique (up to a unit complex number $e^{i \theta}$) vector $\ket{v^{\perp}}$ that is orthogonal to $\ket{v}$. This implies that if we assign a unit vector $\ket{v} \in \mathbb{C}^2$ to a vertex in $V$, then all the vectors assigned to vertices in $V$ must be of the form $e^{i \theta} \ket{v}$, for some $\theta \in [0,2\pi]$, whereas all vectors assigned to vertices in $V'$ must be of the form $e^{i \theta} \ket{v^{\perp}}$. This implies that the cost of the orthonormal representation is at most $\lambda_{\max} \left(\sum_{i \in V} \ketbra{v} + \sum_{i \in V'} \ketbra{v^{\perp}}\right) = \max \{|V|, |V'| \} \leq \alpha(\mathcal{G}_{\mathrm{ex}})$. \qedhere
\end{proof}
\medskip
To look for more interesting dimension witnesses for arbitrary higher dimensions, we define a family of graphs parameterised by integers $k \geq 2$, called \emph{k-Qite}\footnote{The reason for the name is that they resemble kites. However, the name kite is already reserved for another family of graphs.}.
\begin{definition}
A $k$-Qite graph has $2k+1$ vertices, $v_1,v_2,\ldots, v_{2k+1}$, with the first $k$ vertices forming a fully connected graph. Vertex $v_i$ is connected to vertex $v_{i+k}$, for all $1\leq i \leq k$. Vertex $v_{2k+1}$ is connected to vertices $v_{k+i}$, for all $1\leq i \leq k$.
\end{definition}
\noindent Note that the first member of the family, that is $k=2$, is just the $C_5$ graph (see Fig.~\ref{fig:2qite}). This is one of the most well studied graphs in the field of contextuality since it is the smallest graph for which the Lov\'asz theta number is strictly greater than the independence number. The corresponding non-contextuality inequality is the famous {\em KCBS} inequality~\cite{KCBS}. The graph corresponding to $k=3$ is shown in Fig.~\ref{fig:3qite}.
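For concreteness, the edge set of a $k$-Qite graph is easy to generate programmatically; the following small Python sketch (our own helper, intended only for small instances) also brute-forces the independence number, in line with Lemma~\ref{qiteind} below.
\begin{verbatim}
# Build the k-Qite edge set from the definition and brute-force alpha for small k.
from itertools import combinations

def k_qite_edges(k):
    """Vertices 1..2k+1: clique on 1..k; i ~ i+k for 1<=i<=k; 2k+1 ~ k+i for 1<=i<=k."""
    edges = {(i, j) for i in range(1, k + 1) for j in range(i + 1, k + 1)}
    edges |= {(i, i + k) for i in range(1, k + 1)}
    edges |= {(k + i, 2 * k + 1) for i in range(1, k + 1)}
    return edges

def independence_number(n, edges):
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(1, n + 1), size):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                return size
    return 0

for k in (2, 3, 4):
    print(k, independence_number(2 * k + 1, k_qite_edges(k)))  # expect k in each case
\end{verbatim}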
\begin{figure}
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{2qite.png}
\caption{$2$-Qite $\equiv$ $C_5$, where $\alpha(C_5) = 2$ and $\vartheta(C_5) = \sqrt{5} \approx 2.2361$.}
\label{fig:2qite}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{3qite.png}
\caption{$3$-Qite, where $\alpha(3\text{-Qite}) = 3$ and $\vartheta(3\text{-Qite}) \approx 3.0642$.}
\label{fig:3qite}
\end{minipage}
\end{figure}
\begin{lemma}\label{qiteind}
The independence number of the $k$-Qite graph is $k$.
\end{lemma}
\begin{proof}
Partition the set of vertices into three sets: $S_1 = \{v_1,v_2,\ldots, v_{k}\}$, $S_2 = \{v_{k+1},v_{k+2}, \ldots, v_{2k}\}$ and $S_3 = \{v_{2k+1}\}$. Firstly, note that since none of the vertices in $S_2$ are connected to each other, the independence number is at least $|S_2| = k$. Since every pair of vertices in $S_1$ is connected, an independent set can contain at most one vertex from $S_1$. However, the inclusion of a vertex from $S_1$, say $v_i$, in an independent set implies that the vertex $v_{k+i}$ cannot be included simultaneously. Similarly, the inclusion of $v_{2k+1}$ implies that no vertex of $S_2$ can be included. Hence the independence number is exactly $k$, and the lemma follows.
\end{proof}
\begin{theorem}
$R_o(\emph{k-}Qite) = k$, for all $k \geq 3$.
\end{theorem}
\begin{proof}
Consider the vertex partitioning as in Lemma~\ref{qiteind}. Since the vertices in $S_1$ form a $k$-complete graph, we have $R_o(\text{k-}Qite) \geq k$. Now we show that there exists an orthonormal representation in dimension $k$ for all $\text{k-}Qite$ graphs with $k \geq 3$. Depending on the parity of $k$, we give an explicit construction for the orthonormal representation. \newline
\textbf{When $k$ is odd:} For the vertices in $S_1$, assign the standard basis vector $e_i$ of the $k$-dimensional Hilbert space to vertex $v_i$, for $i \in [k]$. Assign the vector $\frac{1}{\sqrt{k}}(1,1,\ldots,1)$ to vertex $v_{2k+1}$. Now consider the vertices $v_{k+i}$ in $S_2$, for $i \in [k]$. For vertex $v_{k+i}$ to be orthogonal to vertex $v_{i}$, the vector for $v_{k+i}$ must have $0$ in the $i^{th}$ position. Let the magnitude of the remaining $k-1$ entries of the vector be $\frac{1}{\sqrt{k-1}}$, so that the vector has unit norm. Since $k$ is odd, the number of entries with non-zero (and equal) magnitude is even. Setting half of them to be negative makes the vector orthogonal to the one assigned to $v_{2k+1}$. Hence, in this case, all orthonormality constraints are satisfied.
\newline
\noindent \textbf{When $k$ is even:} Assign the vectors to all the vertices in $S_1$ in the same way as in the odd $k$ case. Set the vector corresponding to vertex $v_{2k+1}$ as $\frac{1}{\sqrt{k-1}}(0,1,1,\ldots,1)$. Except for vertex $v_{k+1}$, assign vectors to the remaining vertices in $S_2$ in the same way as in the odd $k$ case. Note that this establishes orthogonality of vertex $v_{k+i}$ with $v_{2k+1}$ for all $2 \leq i \leq k$. Vertex $v_{k+1}$ is then assigned a vector whose first entry is $0$ (to make it orthogonal to $v_1$) and which is orthogonal to $v_{2k+1}$. There are many such vectors which would satisfy these conditions. For example, set $v_{k+1}$ as $\frac{1}{\sqrt{(k-2)(k-1)}}(0,1,1,\ldots,1,2-k)$ to conclude the proof.
\end{proof}
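The odd-$k$ construction can be checked numerically; the following short script is our own sanity check (vectors are normalised explicitly, so the check does not depend on the exact constant quoted above).
\begin{verbatim}
# Verify the odd-k orthonormal representation of the k-Qite graph numerically.
import numpy as np

def qite_orthonormal_rep_odd(k):
    """Return a dict vertex -> unit vector in R^k for odd k >= 3, following the proof."""
    assert k >= 3 and k % 2 == 1
    vecs = {i: np.eye(k)[i - 1] for i in range(1, k + 1)}      # S_1: standard basis
    vecs[2 * k + 1] = np.ones(k) / np.sqrt(k)                  # the vertex v_{2k+1}
    for i in range(1, k + 1):
        v = np.ones(k)
        v[i - 1] = 0.0
        others = [j for j in range(k) if j != i - 1]
        v[others[: (k - 1) // 2]] = -1.0                       # half the entries negative
        vecs[k + i] = v / np.linalg.norm(v)
    return vecs

k = 5
vecs = qite_orthonormal_rep_odd(k)
for i in range(1, k + 1):   # edges: v_i ~ v_{k+i} and v_{2k+1} ~ v_{k+i}
    assert abs(np.dot(vecs[i], vecs[k + i])) < 1e-12
    assert abs(np.dot(vecs[2 * k + 1], vecs[k + i])) < 1e-12
print("all k-Qite orthogonality constraints satisfied for k =", k)
\end{verbatim}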
In order to propose dimension witnesses, we want to find upper bounds on the dimension restricted Lov\'asz theta number corresponding to the {\em Qite} family. For $k=2$, Lemma~\ref{gleason} already gives us the required bound of $2$. We now generalise the Lemma for the {\em Qite} family.
\begin{theorem}
$\vartheta^k(\emph{k-}Qite) \leq k$, for all $k \geq 2$.
\end{theorem}
\begin{proof}
We use the $\theta^d(G)$ definition of rank restricted Lov\'asz theta for the proof; see Lemma~\ref{Lmax}. We have $\vartheta^k(\emph{k-}Qite) = \max_{\{v_i\}} \lambda_{max} \left(\sum_{i=1}^{2k+1} \ketbra{v_i} \right)$, where $\ket{v_i} \in \mathbb{C}^k$ is a $k$-dimensional quantum state corresponding to the vertex $v_i$, such that $\braket{v_i}{v_j} = 0$ whenever vertices $v_i$ and $v_j$ share an edge. Since the first $k$ vectors must form an orthonormal basis (as the corresponding vertices form a $k$-complete graph), one can suppose that $\ket{v_i} = e_i$ (the standard basis vector), for $1\leq i \leq k$, without loss of generality. This is because there always exists a unitary $U$ that rotates any orthonormal basis to the standard basis. Note that applying this unitary rotation to all the vectors gives another orthonormal representation of the graph with the same cost, that is,
\begin{equation}
\begin{aligned}
\lambda_{max}\left(\sum_{i=1}^{2k+1} \ketbra{v_i}\right) &= \lambda_{max}\left(U\left(\sum_{i=1}^{2k+1} \ketbra{v_i}\right)U^{\dagger}\right) = \lambda_{max}\left(\sum_{i=1}^{2k+1} U\ketbra{v_i}U^{\dagger}\right).
\end{aligned}
\end{equation}
\noindent Since $ \sum_{i=1}^{k} \ketbra{v_i} = \mathbb{I}$, we are required to show that $\lambda_{max}\left(\sum_{i=k+1}^{2k+1} \ketbra{v_i}\right) \leq k-1$. Note that setting the first $k$ vectors to the standard basis vectors also implies that the $i^{th}$ component of $\ket{v_{k+i}}$ is $0$, for $1 \leq i \leq k$. Next, observe that $\ket{v_{2k+1}}$ is orthogonal to $\ket{v_{k+i}}_{i=1}^k$ and so $\lambda_{max}\left(\sum_{i=k+1}^{2k+1} \ketbra{v_i}\right) \leq \max \{\lambda_{max}\left(\sum_{i=k+1}^{2k} \ketbra{v_i}\right), 1\}$. Hence it suffices to show that $\lambda_{max}\left(\sum_{i=k+1}^{2k} \ketbra{v_i}\right) \leq k-1$.
Let $M \in \mathbb{C}^{k \times k}$ be the matrix whose $i^{th}$ row is $\ket{v_{k+i}}^{\mathrm{T}}$, for $i \in [k]$. Note that $M^{\dagger}M = \sum_{i=k+1}^{2k} \ketbra{v_i}$. Also, observe that $M$ has the property that its diagonal is all zero and its rows are all normalized to 1 in $\ell_2$-norm. We shall now bound the largest eigenvalue of $M^{\dagger}M$. Fix any unit vector $x \in \mathbb{C}^k$. Since the $i^{th}$ row of $M$ is a unit vector whose $i^{th}$ entry is zero, the Cauchy--Schwarz inequality gives $|(Mx)_i|^2 \leq \|x\|_2^2 - |x_i|^2 = 1 - |x_i|^2$, for $1 \leq i \leq k$. Summing over $i$,
\begin{equation}
\|Mx\|_2^2 = \sum_{i=1}^{k} |(Mx)_i|^2 \leq \sum_{i=1}^{k} \left(1 - |x_i|^2\right) = k-1.
\end{equation}
Finally putting everything together,
\begin{equation}
\lambda_{max}(M^{\dagger}M) = \max_{x: \|x\|=1} x^{\dagger}M^{\dagger}Mx = \max_{x:\|x\|=1} \|Mx\|_2^2 \leq k-1.
\end{equation}
\end{proof}
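As a quick numerical illustration of the bound used in the proof (our own check, with arbitrary parameter choices), one can sample random matrices with zero diagonal and unit-norm rows and confirm that $\lambda_{max}(M^{\dagger}M)$ does not exceed $k-1$:
\begin{verbatim}
# Randomised sanity check: for k x k complex matrices with zero diagonal and
# unit-norm rows, lambda_max(M^dagger M) stays <= k - 1.
import numpy as np

rng = np.random.default_rng(1)
k = 6
worst = 0.0
for _ in range(2000):
    M = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    np.fill_diagonal(M, 0.0)
    M /= np.linalg.norm(M, axis=1, keepdims=True)   # normalise each row
    worst = max(worst, float(np.linalg.eigvalsh(M.conj().T @ M).max()))
print(worst, "<=", k - 1)
\end{verbatim}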
On the other hand, one can verify that $\vartheta(\emph{k-}Qite) > k$, for any $k > 1$, by solving the Lov\'asz theta SDP for the $\emph{k-}Qite$ graph numerically. This gives us the following corollary.
\begin{corollary}
Violating the non-contextuality inequality $\sum_i p_i \leq k$ where $p \in {\cal P}_{Q}(\emph{k-}Qite)$, implies that the underlying quantum realisation must have dimension at least $k+1$.
\end{corollary}
\section{Conclusion}
\label{disc}
In this work, we have introduced a novel approach to quantum dimension witnessing in scenarios with one preparation and several measurements (examples of them are Kochen-Specker contextuality and Bell nonlocality scenarios). Our approach is based on graphs which represent the relations of exclusivity between events. Each graph can be realized in different scenarios, and there is always a (specific Kochen-Specker contextuality) scenario for which all quantum behaviours for the graph can be realized. The virtue of our approach is precisely that we do not need to fix any scenario. Instead, we explore the features of abstract graphs for dimension witnessing. Here, we have introduced all the necessary tools to identify graph-based dimension witnesses, and we have illustrated their usefulness by showing how famous exclusivity graphs in quantum theory hide some surprises when re-examined with our tools and how one can construct simple dimension witnesses for any arbitrary dimension. Arguably, however, the main interest of our results is that they can be extended in many directions, connected to multiple problems, and applied in different ways. Here we list some possible future lines of research:
\begin{itemize}
\item Identifying graph-theoretic dimension witnesses for specific Bell and Kochen-Specker contextuality scenarios.
\item Using previous knowledge in graph theory for finding useful quantum dimension witnesses. For example, there are graphs for which the ratio of Lov\'asz theta number to independence number is quite large, i.e., $\frac{\vartheta(G)}{\alpha(G)} \gg 1$ \cite{Feige1997RandomizedGP, amaral2015maxcontext}. This indicates situations where the quantum vs classical advantage is highly robust against imperfections. Therefore, dimension witnesses based on such graphs could be useful for certification tasks on, e.g., noisy intermediate-scale quantum devices \cite{preskill2018quantum}.
\item For the purpose of noise robust dimension witnesses, one may also use a weighted version of graphs (corresponding to a weighted non-contextuality inequality). As an example, for our family of $\emph{k-}Qite$ graphs, one can consider a weight vector given by $w=(1,1,\ldots,1,k-1)$, where more weight is given to the $(2k+1)^{th}$ vertex of $\emph{k-}Qite$. Note that the weighted independence number of this weighted graph is still $k$. However, numerically solving the weighted Lov\'asz theta for this graph suggests $\vartheta(\emph{k-}Qite,w) - \alpha(\emph{k-}Qite,w)> 0.26$ for all $k \geq 3$. For large $k$ this difference converges to $\approx 1/3$. However, note that since for large $k$, the ratio $\frac{\vartheta(\emph{k-}Qite,w)}{\alpha(\emph{k-}Qite,w)} \approx 1$, this approach is still not noise robust.
\item Implementing graph-theoretic quantum dimension witnesses in actual experiments.
\item Obtaining the classical memory cost \cite{kleinmann2011memory,CGGX18} for simulating graph-theoretic dimension witnesses and identifying quantum correlations achievable with low-dimensional quantum systems but requiring very-high dimensional classical systems.
\item Extending the graph-theoretic framework to classical dimension witnessing.
\item Developing a general graph-theoretic framework to analyse and unify different approaches to dimension witnessing.
\end{itemize}
\section*{Acknowledgments}
The authors thank Zhen-Peng Xu for valuable comments on the ar$\chi$iv version and for suggesting the use of weighted graphs for increasing the quantum-classical gap as described in the conclusions. The authors also thank Antonios Varvitsiotis for helpful discussions. We also thank the National Research Foundation of Singapore and the Ministry of Education of Singapore for financial support. This work was also supported by \href{http://dx.doi.org/10.13039/100009042}{Universidad de Sevilla} Project Qdisc (Project No.\ US-15097), with FEDER funds, \href{http://dx.doi.org/10.13039/501100001862}{MINECO} Project No.\ FIS2017-89609-P, with FEDER funds, and QuantERA grant SECRET, by \href{http://dx.doi.org/10.13039/501100001862}{MINECO} (Project No.\ PCI2019-111885-2).
\bibliographystyle{alpha}
\section{Introduction}
A key step in the origin of life is the formation of a metabolic network that is both self-sustaining and collectively autocatalytic \cite{ars, hay, liu, vai, vas, xav}. Systems that combine these two general properties have been studied within a formal framework that is sometimes referred to as RAF theory \cite{hor17}.
We give precise definitions shortly but, roughly speaking, a `RAF' \textcolor{black}{(=Reflexively Autocatalytic and F-generated)} set is a subset of reactions where the reactants and at least one catalyst of each reaction in the subset can be produced from an available food set by using reactions from within the subset only.
The study of RAFs traces back to pioneering work on `collectively autocatalytic sets' in polymer models of early life \cite{kau71, kau86}, which was subsequently developed mathematically (see \cite{hor19, hor17} and the references therein). RAF algorithms have been applied recently to investigate the traces of earliest metabolism that can be detected in large metabolic databases across bacteria and archaea \cite{xav}, leading to the development of an open-source program to analyse and visualise RAFs in complex biochemical systems \cite{cat}. RAF theory overlaps with other graph-theoretic approaches in which the emergence of directed cycles in reaction graphs plays a key role \cite{bol, j1, j2}, and is also related to (M, R) systems \cite{cor, jar10} and chemical organisation theory \cite{dit2}.
RAF theory has also been applied in other fields, including ecology \cite{caz18} and cognition \cite{gab17}, and the ideas may have application in other contexts. In economics, for instance, the production of consumer items can be viewed as a catalysed reaction; for example, the production of a wooden table involves nails and wood (reactants) and a hammer (a catalyst, as it is not used up in the reaction but makes the reaction happen much more efficiently) and the output (reaction product) is the table. On a larger scale, a factory is a catalyst for the production of the items produced in it from reactants brought into the factory. In both these examples, notice that each reactant may either be a raw material (i.e. an element of a `food set') or a product of other (catalysed) reactions, whereas the products may, in turn, be reactants, or catalysts, for other catalysed reactions. Products can sometimes also {\em inhibit} reactions; for example, the production of internal combustion engines resulted in processes for building steam engines being abandoned.
In this paper, we extend RAF theory further by investigating the impact of different modes of catalysis and inhibition on the appearance of (uninhibited) RAF subsets. We focus on the expected number of such sets (rather than on the probability that at least one such set exists, which has been the focus of nearly all earlier RAF studies \cite{fil, mos}). \textcolor{black}{Using a mathematical approach, we derive explicit and exact analytical expressions for the expected number of such uninhibited RAF subsets,} as well as providing some insight into the expected population sizes of RAFs for the catalysis rate at which they first appear (as we discuss in Section~\ref{relation}). \textcolor{black}{In particular, we show that for simple systems, with an average catalysis rate that is set at the level where RAFs first appear, the expected number of RAFs depends strongly on the variability of catalysis across molecules. At one extreme (uniform catalysis), the expected number of RAFs is small (e.g. 1, or a few), while at the other extreme (all-or-nothing catalysis) the expected number of RAFs grows exponentially with the size of the system.}
\textcolor{black}{The motivation for looking at the expected number of RAFs (rather than the probability that a RAF exists) is twofold. Firstly, by focusing on expected values it is possible to present certain exact results (in Theorem~\ref{thm1}), rather than just inequalities or asymptotic results, while still gaining some information about the probability that a RAF exists. Secondly, in origin of life studies, it is relevant to consider populations of self-sustaining autocatalytic chemical networks, which may be subject to competition and selection, a topic which has been explored by others (see e.g. \cite{sza, vas, vir}), and information concerning the likely diversity of RAFs available in a given chemical reaction system is therefore a natural question. In previous analyses where RAFs have been identified, subsequent analysis has revealed a large number of RAFs present within the RAF; for example, for a 7-reaction RAF in a laboratory-based study involving RNA ribozymes (from \cite{vai}) more than half of the $2^7 = 128$ subsets of this RAF are also RAFs ({\em cf.} Fig. 5 of \cite{ste}). Simulation studies involving Kauffman's binary polymer model have also identified a large number of RAFs present once catalysis rises above the level at which RAFs first appear \cite{hor15}. }
\textcolor{black}{The structure of this paper is as follows. We begin with some formal definitions, and then describe different models for catalysis and inhibition. In Section~\ref{gensec}, we present the main mathematical result, along with some remarks and its proof. We then present a number of consequences of our main result, beginning with a generic result concerning the impact of inhibition when catalysis is uniform. We then investigate the impact of different catalysis distributions on the expected number of RAFs arising in `elementary' chemical reaction systems, focusing on the catalysis rate at which RAFs first appear. We end with some brief concluding comments.}
\subsection{Definitions}
Let $X$ be a set of molecule types; $R$ a set of reactions, where each reaction consists of a subset of molecule types as input (`reactants') and a set of molecule types as
outputs (`products'); and let $F$ be a subset of $X$ (called a `food set'). We refer to the triple $\mathcal Q=(X, R, F)$ as a {\em chemical reaction system with food set} and, unless stated otherwise, we impose no further restrictions on $\mathcal Q$ (e.g. it need not correspond to a system of polymers and a reaction can have any positive number of reactants and any positive number of products).
Given a reaction $r \in R$, we let $\rho(r) \subseteq X$ denote the set of reactants of $r$ and $\pi(r)$ denote the set of products of $r$.
Moreover, given a subset $R'$ of $R$, we let $\pi(R') = \bigcup_{r \in R'} \pi(r).$
A subset $R'$ of $R$ is said to be {\em $F$-generated} if $R'$ can be ordered $r_1, r_2, \ldots, r_{|R'|}$ so that
$\rho(r_1) \subseteq F$ and for each $i \in \{2, \ldots, |R'|\}$, we have $\rho(r_i) \subseteq F \cup \pi(\{r_1, \ldots, r_{i-1}\})$. In other words, $R'$ is $F$-generated if $R'$ can be built up by starting from one reaction that has all its reactants in the food set, then adding reactions in such a way that each added reaction has each of its reactants present either in the food set or as a product of a reaction in the set generated so far.
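Because adding products can only enable further reactions, $F$-generation can be tested greedily; the following Python sketch (our own toy encoding, with hypothetical molecule names) illustrates this.
\begin{verbatim}
# Minimal sketch: test whether a set of reactions is F-generated by greedily firing
# any reaction whose reactants are currently available.
def is_F_generated(reactions, food):
    """reactions: iterable of (reactants, products) pairs (sets of molecule types)."""
    remaining = list(reactions)
    available = set(food)
    progress = True
    while remaining and progress:
        progress = False
        for r in list(remaining):
            reactants, products = r
            if set(reactants) <= available:   # r can fire with what is available
                available |= set(products)
                remaining.remove(r)
                progress = True
    return not remaining                      # F-generated iff every reaction fired

# Hypothetical toy example (not the reactions of Fig. 1):
r1 = ({"a", "b"}, {"x"})
r2 = ({"x", "a"}, {"w"})
print(is_F_generated([r1, r2], {"a", "b"}))   # True
print(is_F_generated([r2], {"a", "b"}))       # False (x is never produced)
\end{verbatim}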
Now suppose that certain molecule types in $X$ can catalyse certain reactions in $R$.
A subset $R'$ of $R$ is said to be {\em Reflexively Autocatalytic and F-generated} (more briefly, a {\em RAF}) if $R'$ is nonempty and each reaction $r \in R'$ is catalysed by
at least one molecule type in $F \cup \pi(R')$ and $R'$ is $F$-generated.
We may also allow certain molecule types to also inhibit reactions in $R$, in which case a subset
$R'$ of $R$ is said to be an {\em uninhibited RAF} (uRAF) if
$R'$ is a RAF and no reaction in $R'$ is inhibited by any molecule type in $F \cup \pi(R')$. \textcolor{black}{The notion of a uRAF was first defined and studied in \cite{mos}.}
\textcolor{black}{Notice that inhibition is being applied in a strong sense: a reaction $r$ cannot be part of a uRAF if $r$ is inhibited by at least one molecule type present, regardless of how many molecule types are catalysts for $r$ and present in the uRAF}.
Since a union of RAFs is also a RAF, when a RAF exists in a system, there is a unique maximal RAF. However, the same does not apply to uRAFs -- in particular, the union of two uRAFs can fail to be a uRAF.
These concepts are illustrated in Fig.~\ref{fig1}.
\begin{figure}[h]
\centering
\includegraphics[scale=1.1]{fig1.pdf}
\caption{A chemical reaction system consisting of the set of molecule types $X=\{a, b, c, a', b', c', x, x', w,w', z,z'\}$, a food set $F=\{a, b, c, a', b', c'\}$ \textcolor{black}{(each placed inside a green box)} and the reaction set
$R=\{r_1, r_2, r_1', r_2', r_3, r_4\}$ \textcolor{black}{(bold, beside small white-filled squares)}. Solid arcs indicate two reactants entering a reaction and a product coming out.
Catalysis is indicated by dashed arcs (blue) and inhibition (also called blocking) is indicated by dotted arcs (red). The full set of
reactions is not a RAF, but it contains several RAFs that are contained in the unique maximal RAF $R'=\{r_1, r_1', r_2, r_2'\}$ (note that $r_4$ is not part of this RAF even though it is catalysed and the reactants of $r_4$ are present in the food set).
The maximal RAF $R'$ is not a uRAF (e.g. $r'_1$ is inhibited by $z$ which is a product of $r_2$); however, $\{r_1, r_2\}$ and $\{r_1', r_2'\}$ are uRAFs, and so are $\{r_1\}, \{r_1'\}$ and $\{r_1, r_1'\}$. }
\label{fig1}
\end{figure}
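To make these notions computationally concrete, the following Python sketch (our own simplified encoding, with hypothetical reactions rather than those of Fig.~\ref{fig1}) computes the maximal RAF by the standard reduction used in the RAF literature cited above: repeatedly discard reactions whose reactants or catalysts are not reachable from the food set via the remaining reactions.
\begin{verbatim}
# Sketch of the standard maxRAF computation (our own illustrative re-implementation).
def closure(food, reactions):
    """All molecule types producible from `food` using the given reactions."""
    available = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= available and not set(products) <= available:
                available |= set(products)
                changed = True
    return available

def max_raf(reactions, food):
    """reactions: list of (reactants, products, catalysts) triples.
    Returns the maximal RAF (possibly empty)."""
    current = list(reactions)
    while True:
        avail = closure(food, current)   # avail already contains the food set
        kept = [r for r in current
                if set(r[0]) <= avail and set(r[2]) & avail]
        if len(kept) == len(current):
            return kept
        current = kept

# Hypothetical toy example: r3 is dropped (its reactant z is unreachable),
# while {r1, r2} forms the maximal RAF.
R = [({"a", "b"}, {"x"}, {"x"}),
     ({"x"}, {"y"}, {"a"}),
     ({"z"}, {"w"}, {"a"})]
print(max_raf(R, {"a", "b"}))
\end{verbatim}
A uRAF check could then be layered on top by additionally discarding any reaction with an inhibitor among the available molecule types; we do not spell this out here.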
\section{Modelling catalysis and inhibition}
We will model catalysis and also blocking (inhibition) by random processes.
To provide for greater generality, we allow the possibility that elements in a subset $C^{-}$ (respectively, $B^{-}$) of the food set cannot catalyse (respectively block) any reaction in $R$. Let $c=|F \setminus C^{-}|$ and $b=|F \setminus B^{-}|$. Thus $c$ (respectively $b$) is the number of food elements that are possible catalysts (respectively blockers).
Suppose that each molecule type $x \in X\setminus C^{-}$ has an associated probability $C_x$ of catalysing any given reaction in $R$. \textcolor{black}{The values $C_x$ are sampled independently from a distribution $\mathcal D$, for each $x \in X$.}
This results in a random assignment of catalysis (i.e. a random subset $\chi$ of $X \times \mathcal R$), where $(x,r) \in \chi$ if $x$ catalyses $r$. Let
$\mathcal C_{x,r}$ be the event that $x$ catalyses $r$.
We assume that:
\begin{itemize}
\item[($I_1$)] $\mathcal C=(C_x, x\in X\setminus C^{-})$ is a collection of independent random variables.
\item[($I_2$)] Conditional on $\mathcal C$, $(\mathcal C_{x,r}: x\in X \setminus C^-, r \in R)$ is a collection of independent events.
\end{itemize}
Since the distribution of $C_x$ is the same for all $x \in X\setminus C^-$, we \textcolor{black}{will use $C$ to denote an arbitrary random variable sampled from the distribution $\mathcal D$.} Let
$\mu_C = \mathbb E[C]$ and, for $i\geq 0$, let $\lambda_i$ be the $i$--th moment of $1-C$; that is:
$$\lambda_i =\mathbb E[(1-C)^i].$$
Although our results concern general catalysis distributions, we will pay particular attention to three forms of catalysis \textcolor{black}{which have been considered in previous studies (e.g. \cite{hor16}), and which will be compared in our analyses.}
\begin{itemize}
\item The {\em uniform model:} Each $x \in X\setminus C^-$ catalyses each reaction in $\mathcal R$ with a fixed probability $p$. Thus, $C =p$ with probability 1, and so $\mu_C = p$.
\item The {\em sparse model:} $C= u$ with probability $\pi$ and $C =0$ with probability $1-\pi$, and so $\mu_C = u \pi$.
\item
The {\em all-or-nothing model:} $C=1$ with probability $\pi$ and $C=0$ with probability $1-\pi$, and so $\mu_C = \pi$.
\end{itemize}
The uniform model is from Kauffman's binary polymer network and has been the default for most recent studies involving polymer models \cite{hor17}.
More realistic catalysis scenarios can be modelled by allowing $C$ to take a range of values around $\mu_C$ with different probabilities. The {\em sparse model} generalises the uniform model slightly by allowing a (random) subset of molecule types to be catalysts. In this model, $\pi$ would typically be very small in applications (i.e. most molecules are not catalysts but those few that are will catalyse a lot of reactions, as in the recent study of metabolic \textcolor{black}{origins, described in} \cite{xav}). The all-or-nothing model is a special case of the sparse model.
The emergence of RAFs in these models (and others, including a power-law distribution) was investigated in \cite{hor16}.
For these three models, the associated $\lambda_i$ values are given as follows: $\lambda_0=1$, and
for all $i\geq 1$:
\begin{equation}
\label{mu-eq}
\lambda_i = \begin{cases}
(1-\mu_C)^i, & \mbox{(uniform model)};\\
1-\pi + \pi(1-u)^i, & \mbox{(sparse model)};\\
1-\mu_C, & \mbox{(all-or-nothing model)}.
\end{cases}
\end{equation}
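These moments are used repeatedly below; as a convenience (our own helper, not part of the model definitions), they can be encoded directly:
\begin{verbatim}
# Moments lambda_i = E[(1-C)^i] for the three catalysis models above.
def lambda_moment(i, model, mu_C=None, pi=None, u=None):
    if i == 0:
        return 1.0
    if model == "uniform":          # C = mu_C with probability 1
        return (1 - mu_C) ** i
    if model == "sparse":           # C = u with probability pi (so mu_C = u * pi)
        return 1 - pi + pi * (1 - u) ** i
    if model == "all-or-nothing":   # C = 1 with probability mu_C, else 0
        return 1 - mu_C
    raise ValueError("unknown model: " + model)
\end{verbatim}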
In addition to catalysis, we may also allow random blocking (inhibition) of reactions by molecules, formalised as follows.
Suppose that each molecule type $x \in X\setminus B^{-}$ has an associated probability $B_x$ of blocking any given reaction in $R$. We will treat $B_x$ as a random variable taking values in $[0,1]$ with a common distribution $\hat{\mathcal D}$. This results in a random assignment of blocking (i.e. a random subset \textcolor{black}{$\beta$} of $X \times \mathcal R$), where \textcolor{black}{$(x,r) \in \beta$} if $x$ blocks reaction $r$. Let
$\mathcal B_{x,r}$ be the event that $x$ blocks $r$. We assume that:
\begin{itemize}
\item[($I'_1$)] $\mathcal B=(B_x, x\in X\setminus B^{-})$ is a collection of independent random variables.
\item[($I'_2$)] Conditional on $\mathcal B$, $(\mathcal B_{x,r}: x\in X \setminus B^-, r \in R)$ is a collection of independent events.
\end{itemize}
Since the distribution of $B_x$ is the same for all $x$, we will use $B$ to denote this random variable, let $\mu_B = \mathbb E[B]$ and, for $i\geq 0$, let: $$\hat{\lambda}_i =\mathbb E[(1-B)^i].$$
We also assume that catalysis and inhibition are independent of each other. Formally, this is the following condition:
\begin{itemize}
\item[($I_3$)] The $C$--random variables in ($I_1$) and ($I_2$) are independent of the $B$--random variables in ($I'_1$) and ($I'_2$).
\end{itemize}
Note that $(I_3)$ allows the possibility that a molecule type $x$ both catalyses and blocks the same reaction $r$ (the effect of this on uRAFs is the same as if $x$ just blocks $r$; i.e. blocking is assumed to trump catalysis).
Notice also that $\lambda_0 = \hat{\lambda}_0 = 1$.
\section{Generic results}
\label{gensec}
To state our first result, we require two further definitions. Let $\mu_{\rm RAF}$ and $\mu_{\rm uRAF}$ denote the expected number of RAFs and uRAFs (respectively) arising in $\mathcal Q$ under the random process of catalysis and inhibition described.
For integers $k\geq1$ and $s\geq 0$, let $n_{k,s}$ be the number of F-generated subsets $R'$ of $R$ \textcolor{black}{that have size $k$ and} for which the total number of non-food products in $X$ produced by reactions in $R'$ is $s$. Note that $n_{k,s}=0$ for $s>\min\{|X|-|F|, k M\}$ where $M$ is the maximum number of products of any single reaction.
Part (i) of the following theorem gives an exact expression for $\mu_{\rm RAF}$ and $\mu_{\rm uRAF}$, which we then use in Parts (ii) and (iii) to describe the catalysis and inhibition distributions (having a given mean) that minimise or maximise the expected number of RAFs and uRAFs. We apply this theorem to particular systems in the next section.
\begin{theorem}
\label{thm1}
Let $\mathcal Q$ be any chemical reaction system with food set, accompanied by catalysis and inhibition distributions $\mathcal D$ and $\hat{\mathcal D}$, respectively.
\begin{itemize}
\item[(i)]
The expected number of RAFs and uRAFs for $\mathcal Q$ is given as follows:
\begin{equation}
\label{mumu1}
\mu_{\rm RAF}= \sum_{k\geq 1,s\geq 0} n_{k,s} \left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right)
\end{equation}
and
\begin{equation}
\label{mumu2}
\mu_{\rm uRAF}= \sum_{k\geq 1, s\geq 0} n_{k,s} \left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right) \hat{\lambda}_k^{s+b}.
\end{equation}
\item[(ii)]
Among all distributions $\mathcal D$ on catalysis having a given mean $\mu_C$, the distribution that minimises the expected number of RAFs and uRAFs (for any inhibition distribution) is the uniform model (i.e. $C = \mu_C$ with probability 1).
\item[(iii)] Among all distributions $\hat{\mathcal D}$ on inhibition having a given mean $\mu_B$, the following hold:
\begin{itemize}
\item[(a)] the distribution that minimises the expected number of uRAFs (for any catalysis distribution) is the uniform model ($B = \mu_B$ with probability 1). \item[(b)] the distribution that maximises the expected number of uRAFs (for any catalysis distribution) is the all-or-nothing inhibition model (i.e. $B=1$ with probability $\mu_B$, and $B=0$ with probability $1-\mu_B)$.
\end{itemize}
\end{itemize}
\end{theorem}
\bigskip
\textcolor{black}{We give the proof of Theorem~\ref{thm1} shortly, following some brief remarks.}
\subsection{Remarks}
\begin{itemize}
\item[(1)]
If $P_{\rm RAF}$ and $P_{\rm uRAF}$ are the probability that $\mathcal Q$ contains a RAF and a uRAF, respectively, then these quantities are bounded above as follows:
$$P_{\rm RAF} \leq \mu_{\rm RAF} \mbox{ and } P_{\rm uRAF} \leq \mu_{\rm uRAF}.$$
This follows from the well-known inequality $\mathbb P(V>0) \leq \mathbb E[V]$ for any non-negative integer-valued random variable $V$, upon taking $V$
to be the number of RAFs (or the number of uRAFs). We will explore the extent to which $P_{\rm RAF}$ underestimates $\mu_{\rm RAF}$ in Section~\ref{relation}.
\item[(2)]
Theorem~\ref{thm1} makes clear that the only relevant aspects of the network $(X, R)$ for $\mu_{\rm RAF}$ and $\mu_{\rm uRAF}$ are encoded entirely within the coefficients $n_{k,s}$ (the two stochastic terms depend only on $r$ and $s$ but not on further aspects of the network structure). By contrast, an expression for the probabilities $P_{\rm RAF}$ and $P_{\rm uRAF}$ that a RAF or uRAF exists
requires more detailed information concerning the structure of the network. This is due to dependencies that arise in the analysis.
Notice also that Theorem~\ref{thm1} allows the computation of $\mu_{\rm uRAF}$ in $O(|R|^2 \times |X|)$ steps (assuming that the $\lambda_i, \hat{\lambda}_i$ and $n_{k,s}$ values are available).
\item[(3)]
Although the computation or estimation of $n_{k,s}$ may be tricky in general systems, Eqn.~(\ref{mumu1}) can still be useful (even with little or no information about $n_{k,s}$) for asking comparative types of questions. In particular, Parts (ii) and (iii) provide results that are independent of the details of the network $(X, R, F)$.
In particular, Theorem~\ref{thm1}(ii) is consistent with simulation results in \cite{hor16} for Kauffman's binary polymer model, in which variable catalysis rates (the sparse and all-or-nothing model) led to RAFs appearing at lower average catalysis values ($\mu_C$) than for uniform catalysis.
\item[(4)]
\textcolor{black}{For the uniform model, note that the term $\left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right)$ in Eqns.~(\ref{mumu1}) and (\ref{mumu2}) simplifies to
$\left[ 1- (1-\mu_C)^{s+c}\right]^k$.}
\end{itemize}
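Before turning to the proof, we note that Eqns.~(\ref{mumu1}) and (\ref{mumu2}) are straightforward to evaluate numerically once the counts $n_{k,s}$ are available; the following Python sketch (our own, unoptimised) does this directly.
\begin{verbatim}
# Direct evaluation of the two formulas in Theorem 1.
from math import comb

def mu_RAF(n_ks, lam, c):
    """n_ks: dict {(k, s): n_{k,s}}; lam(i) = E[(1-C)^i]; c = number of food catalysts."""
    return sum(n * sum((-1) ** i * comb(k, i) * lam(i) ** (s + c) for i in range(k + 1))
               for (k, s), n in n_ks.items())

def mu_uRAF(n_ks, lam, lam_hat, c, b):
    return sum(n * sum((-1) ** i * comb(k, i) * lam(i) ** (s + c) for i in range(k + 1))
               * lam_hat(k) ** (s + b)
               for (k, s), n in n_ks.items())

# Toy example in the spirit of the 'elementary' systems treated later:
# n_{k,k} = C(N, k), uniform catalysis with rate p, and c = 0.
N, p = 10, 0.05
n_ks = {(k, k): comb(N, k) for k in range(1, N + 1)}
print(mu_RAF(n_ks, lambda i: (1 - p) ** i, c=0))
\end{verbatim}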
\bigskip
\subsection{\textcolor{black}{Proof of Theorem~\ref{thm1}}}
For Part (i), recall that $\pi(R')$ denotes the set of products of reactions in $R'$.
\textcolor{black}{ For $k, s \geq 1,$ let ${\rm FG}(k,s)$ denote the collection of subsets $R'$ of $R$ that satisfy all of the following three properties:
\begin{itemize}
\item[(i)] $R'$ has size $k$;
\item[(ii)] $R'$ is F-generated, and
\item[(iii)] the number of non-food molecule types produced by reactions in $R'$ is $s$.
\end{itemize}
}
Thus, $$n_{k,s}= |{\rm FG}(k,s)|.$$
For $R' \subseteq R$, let $\mathbb I_{R'}$ be the Bernoulli random variable
that takes the value $1$ if each reaction in $R'$ is catalysed by at least one product of a reaction in $R'$ or by an element of $F\setminus C^{-}$, and $0$ otherwise.
Similarly, let $\hat{\mathbb I}_{R'}$ be the Bernoulli random variable
that takes the value $1$ if no reaction in $R'$ is blocked by the product of any reaction in $R'$ or by an element of $F\setminus B^{-}$. Then the random variable
$$\sum_{k\geq 1,s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb I_{\mathcal R'}\cdot \hat{\mathbb I}_{\mathcal R'}$$
counts the number of uRAFs present, so we have:
\begin{equation}\label{nicer}
\begin{aligned}
\mu_{\rm uRAF} &= \mathbb E\left[\sum_{k \geq 1, s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb I_{\mathcal R'}\cdot \hat{\mathbb I}_{\mathcal R'}\right]
=\sum_{k\geq 1, s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb E\left[\mathbb I_{\mathcal R'}\cdot \hat{\mathbb I}_{\mathcal R'} \right] \\
&= \sum_{k \geq 1, s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb E[ \mathbb I_{\mathcal R'}]\cdot\mathbb E[ \hat{\mathbb I}_{\mathcal R'}],
\end{aligned}
\end{equation}
where the second equality is by linearity of expectation, and the third equality is by the independence assumption ($I_3$).
Given $\mathcal R'\in {\rm FG}(k,s)$, let $C_1, C_2, \ldots, C_{s+c}$ be the random variables (ordered in any way) that correspond to the catalysis probabilities of
the $s$ products of $\mathcal R'$ and the $c$ elements of $F\setminus C^{-}$. We can then write:
\begin{equation}\label{nice}
\mathbb E[ \mathbb I_{\mathcal R'}] =\mathbb P(\mathbb I_{R'}=1) = \mathbb E[\mathbb P(\mathbb I_{R'}=1|C_1, C_2, \ldots, C_{s+c})],
\end{equation}
where the second expectation is with respect to the random variables $C_i$.
The event $\mathbb I_{R'}=1$ occurs precisely when each of the $k$ reactions in $R'$ is catalysed by at least one of the $s+c$ elements in
$(\pi(R')\setminus F) \cup (F\setminus C^{-})$. By the independence assumption ($I_2$),
\begin{equation}
\label{epr1}
\mathbb P(\mathbb I_{R'}=1|C_1, C_2, \ldots, C_{s+c}) = \prod_{r' \in R'} \left(1- \prod_{j=1}^{s+c} (1-C_j)\right) = \left(1- \prod_{j=1}^{s+c} (1-C_j)\right)^k.
\end{equation}
Set $V:= \prod_{j=1}^{s+c} (1-C_j)$. \textcolor{black}{Eqns.~(\ref{nice}) and (\ref{epr1}) then give:}
\begin{equation}
\label{epr2}
\textcolor{black}{\mathbb E[ \mathbb I_{\mathcal R'}] = \mathbb E[(1-V)^k] = \sum_{i=0}^k (-1)^i \binom{k}{i} \mathbb E[V^i],}
\end{equation}
\textcolor{black}{where the second equality is from the binomial expansion $(1-V)^k = \sum_{i=0}^k (-1)^i \binom{k}{i} V^i$, and linearity of expectation.}
Moreover, for each $i\geq 0$, we have:
\begin{equation}
\label{epr3}
\mathbb E[V^i] = \mathbb E\left[ \left[\prod_{j=1}^{s+c} (1-C_j)\right]^i\right]=\mathbb E\left[ \prod_{j=1}^{s+c} (1-C_j)^i\right] =\prod_{j=1}^{s+c} \mathbb E[(1-C_j)^i]\\
=\prod_{j=1}^{s+c} \lambda_i = \lambda_i^{s+c},
\end{equation}
where the first two equalities are trivial algebraic identities, the third is by the independence assumption ($I_1$), the fourth is by definition and the last is trivial.
\textcolor{black}{Substituting Eqn.~(\ref{epr3}) into (\ref{epr2})} gives:
\begin{equation}
\label{epr4}
\mathbb E[ \mathbb I_{\mathcal R'}] = \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_i^{s+c}.
\end{equation}
Turning to inhibition, a RAF subset $R'$ of $R$ in ${\rm FG}(k,s)$ is a uRAF precisely if no reaction in $R'$ is blocked by any
of the $s+b$ elements of $(\pi(R')\setminus F) \cup (F\setminus B^{-})$. By the independence assumption ($I'_2$),
$$\mathbb P(\hat{\mathbb I}_{R'}=1|B_1, B_2, \ldots, B_{s+b}) = \prod_{r' \in R'}\left(\prod_{j=1}^{s+b} (1-B_j)\right)$$
$$= \left(\prod_{j=1}^{s+b} (1-B_j)\right)^k =\prod_{j=1}^{s+b} (1-B_j)^k. $$
Applying expectation (using the independence assumption ($I'_1$)), together with the identity $\mathbb E[(1-B_j)^k] = \hat{\lambda}_k$ gives:
\begin{equation}
\label{epr5}
\mathbb E[\hat{ \mathbb I}_{\mathcal R'}] =\hat{\lambda}_k^{s+b}.
\end{equation}
Combining \textcolor{black}{Eqns.~(\ref{epr4}) and (\ref{epr5})} into Eqn.~(\ref{nicer}) gives the second equation in Part (i), namely Eqn.~(\ref{mumu2}). The first, Eqn.~(\ref{mumu1}), is then obtained by putting $\hat{\lambda}_i = 1$ for all $i$.
\bigskip
{\em Parts (ii) and (iii):}
Observe that the function $u=(1-y)^k$ for $k \geq 1$ is convex and strictly convex when $k>1$.
Thus, by Jensen's Inequality, for any random variable $Y$, we have:
\begin{equation}
\label{in}
\mathbb E[(1-Y)^k] \geq (1-\mathbb E[Y])^k,
\end{equation}
with a strict inequality when $Y$ is nondegenerate and $k>1$.
For Part (ii), let $\textcolor{black}{V= } \prod_{j=1}^{s+c} (1-C_j)$. Then \textcolor{black}{by the first equality in Eqn.~(\ref{epr2}) we have:}
$$\mathbb E[ \mathbb I_{\mathcal R'}] = \mathbb E[(1-V)^k],$$
\textcolor{black}{and by Inequality~(\ref{in}) (with $Y=V$) we have:
\begin{equation}
\label{in2}
\mathbb E[ \mathbb I_{\mathcal R'}] \geq (1-\mathbb E[V])^k,
\end{equation}
\textcolor{black}{and the inequality is strict when $V$ is nondegenerate and $k>1$. }
By the independence assumption $(I_1)$, and noting that $\mathbb E[(1-C_j)] = 1-\mu_C$ we have:
\begin{equation}
\label{in3}
\mathbb E[V] = \mathbb E[ \prod_{j=1}^{s+c} (1-C_j)] = \prod_{j=1}^{s+c}\mathbb E[(1-C_j)] = (1-\mu_C)^{s+c},
\end{equation}
and substituting Eqn.~(\ref{in3}) into Inequality~(\ref{in2}) gives:}
$$\mathbb E[ \mathbb I_{\mathcal R'}] \geq (1-(1-\mu_C)^{s+c})^k,$$
with equality only for the uniform model.
This gives Part (ii).
\bigskip
For Part (iii)(a), Inequality (\ref{in}) implies that $\hat{\lambda}_k =\mathbb E[(1-B)^k] \geq (1-\mu_B)^k$.
\textcolor{black}{Let $H(k,s) := \left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right)$. By Eqn. (\ref{epr4}), $H(k,s) = \mathbb E[ \mathbb I_{\mathcal R'}]$ for $\mathcal R' \in {\rm FG}(k,s)$ and so $H(k,s) \geq 0$.
Thus, by Eqn.~(\ref{mumu2}) we have:
$$\mu_{\rm uRAF}= \sum_{k\geq 1, s\geq 0} n_{k,s} \cdot H(k,s) \cdot \hat{\lambda}_k^{s+b} \geq \sum_{k\geq 1, s\geq 0} n_{k,s} \cdot H(k,s) \cdot (1-\mu_B)^{k(s+b)}, $$
and the right-hand side of this inequality is the value of $\mu_{\rm uRAF}$ for the uniform model of inhibition. }
\bigskip
For Part (iii)(b),
suppose that $Y$ is a random variable taking values in $[0,1]$ with mean $\eta$ and let $Y_0$ be the random variable that
takes the value 1 with probability $\eta$ and $0$ otherwise. Then $\mathbb E[Y_0^m] = \eta$ for all $m \geq 1$, and $\mathbb E[Y^m] \leq \mathbb E[Y] = \eta$ for all $m\geq 1$ (since $Y^m \leq Y$ because $Y$ takes values in $[0,1]$); moreover, for $m \geq 2$,
$\mathbb E[Y^m]= \eta$ implies $\mathbb E[Y(1-Y^{m-1})] = 0$, which (since $Y(1-Y^{m-1}) \geq 0$) implies that $Y=Y_0$.
Now apply this to $Y= (1-B)$ and $m=k$ to deduce that, among the distributions on $B$ that have a given mean $\mu_B$, $\hat{\lambda}_k$ is maximised when the distribution takes the value $1$ with probability $\mu_B$ and
zero otherwise.
\hfill$\Box$
\section{Applications}
\subsection{Inhibition-catalysis trade-offs under the uniform model}
For any model in which catalysis and inhibition are uniform, Theorem~\ref{thm1} provides a simple prediction concerning how the expected number of uRAFs compares with a model with zero inhibition (and a lower catalysis rate). To simplify the statement, we will assume $b=c$ and we will write $\mu_{\rm uRAF}(p, tp)$ to denote the dependence of $\mu_{\rm uRAF}$ on
$\mu_C=p$ and $\mu_B = tp$ for some value of $t$.
We will also write $p = \nu /N$, where $N$ is the total number of molecule types that are in the food set or can be generated by a sequence of reactions in $\mathcal R$. We assume in the following result that $p$ is small (in particular, $< 1/2$) and $N$ is large (in particular, $(1-\nu/N)^N$ can be approximated by $e^{-\nu}$).
The following result (which extends Theorem 2 from \cite{hor16}) applies to any chemical reaction system and provides a lower bound on the expected number of uRAFs in terms of the expected number of RAFs in the system with no inhibition (and half the catalysis rate); its proof relies on Theorem~\ref{thm1}.
\textcolor{black}{Roughly speaking, Corollary~\ref{thm2} states that for any chemical reaction system with uniform catalysis, if one introduces a limited degree of inhibition then, by doubling the original catalysis rate, the expected number of uninhibited RAFs is at least as large as the expected number of RAFs before inhibition was present (and at the original catalysis rate). }
\begin{corollary}
\label{thm2}
For all non-negative values of $t$ with $t \leq \frac{1}{\nu}\ln(1+e^{-\nu})$, the following inequality holds:
$$
\mu_{\rm uRAF}(2p, tp) \geq \mu_{\rm RAF}(p, 0).
$$
\end{corollary}
\begin{proof}
\textcolor{black}{By Theorem~\ref{thm1} and Remark (4) following this theorem, and noting that here $\mu_C =2p$ and $\mu_B=tp$, we have:
\begin{equation}
\label{por2}
\mu_{\rm uRAF}(2p, tp)= \sum_{k \geq 1, s\geq 0} n_{k,s} \left[(1- (1-2p)^{s+c})\cdot (1-tp)^{s+c}\right]^k,
\end{equation}
which can be re-written as:}
\begin{equation}
\label{por2plus}
\mu_{\rm uRAF}(2p, tp)= \sum_{k \geq 1, s\geq c} n_{k,s-c} \left[(1- (1-2p)^{s})\cdot (1-tp)^{s}\right]^k.
\end{equation}
Thus (putting $t=0$ in this last equation) we obtain:
\begin{equation}
\label{por3}
\mu_{\rm RAF}(p, 0)= \sum_{k \geq 1, s\geq c} n_{k,s-c} \left[1- (1-p)^{s}\right]^k.
\end{equation}
Now, for each $x\in (0, 0.5)$, we have: $$1-(1-2x)^s\geq 1-(1-x)^{2s} = (1-(1-x)^s)(1+(1-x)^s).$$
Thus (with $x=p$), we see that the term inside the square brackets in Eqn.~(\ref{por2plus}) exceeds the term in square brackets in Eqn.~(\ref{por3}) by a factor of
$(1+(1-p)^s)(1-tp)^s$, and this is minimised when $s = N$ (the largest possible value $s$ can take). \textcolor{black}{ Setting $s=N$ and writing $p = \nu/N$} we have
$$(1+(1-p)^s)(1-tp)^s \textcolor{black}{ = \left(1+ (1-\nu/N)^N\right)(1-t\nu/N)^N} \sim (1+e^{-\nu}) e^{-t\nu}$$ and the expression on the right is at least 1 when $t$ satisfies the stated inequality (namely, $t \leq \frac{1}{\nu}\ln(1+e^{-\nu})$). \textcolor{black}{Thus $(1+(1-p)^s)(1-tp)^s \geq 1$ for all $s$ between 1 and $N$, and so}
each term in square brackets in Eqn.~(\ref{por2plus}) is greater than or equal to the corresponding term in square brackets in Eqn.~(\ref{por3}), which justifies the inequality in Corollary~\ref{thm2}.
\end{proof}
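\bigskip
\noindent As a quick numerical illustration (included here only as a sanity check, not as part of the formal argument), the following Python snippet evaluates the threshold $t\leq \frac{1}{\nu}\ln(1+e^{-\nu})$ and the factor $(1+(1-p)^s)(1-tp)^s$ appearing in the proof above, with $p=\nu/N$; the minimum over $s$ occurs at $s=N$ and equals 1 in the large-$N$ limit when $t$ is at the threshold.
\begin{verbatim}
# Numerical check (ours) of the term-wise factor used in the proof above:
# with t at the stated threshold, (1+(1-p)^s)(1-t*p)^s stays >= 1 for
# 1 <= s <= N, up to the O(1/N) error of the e^{-nu} approximation.
import numpy as np

N, nu = 1000, 2.0
p = nu / N
t_max = np.log(1.0 + np.exp(-nu)) / nu             # threshold on t

s = np.arange(1, N + 1)
factor = (1.0 + (1.0 - p)**s) * (1.0 - t_max * p)**s
print(f"t_max = {t_max:.4f}, min factor = {factor.min():.6f}")   # ~= 1
print((1.0 + np.exp(-nu)) * np.exp(-t_max * nu))    # exactly 1 in the limit
\end{verbatim}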
\subsection{Explicit calculations for two models on a subclass of networks}
\label{relation}
For the remainder of this section, we consider {\em elementary} chemical reaction systems (i.e. systems for which each reaction has all its reactants in the food set, as studied in \cite{ste}), with the further conditions that:
(i) each reaction has exactly one product,
(ii) different reactions produce different products,
(iii) no reaction is inhibited, and
(iv) no food element catalyses any reaction.
We can associate with each such system a directed graph $\mathcal G$ on the set $X-F$ of products of the reactions, with an arc from $x$ to $y$ if $x$ catalyses the reaction that produces $y$
(this models a setting investigated in \cite{j1, j2}).
RAF subsets are then in one-to-one correspondence with the subgraphs of $\mathcal G$ for which each vertex has indegree at least one. In particular, a RAF exists if and only if there is a directed cycle in $\mathcal G$ (which could be an arc from a vertex to itself).\footnote{An asymptotic study of the emergence of first cycles in large random directed graphs was explored in \cite{bol}.} In this simple set-up, if $N$ denotes the number of reactions (= number of non-food molecule types) then:
$$n_{k,s} = \begin{cases}
\binom{N}{k}, & \mbox{ if $k=s$;}\\
0, & \mbox{ otherwise.}
\end{cases}
$$
Applying Theorem~\ref{thm1}(i) gives:
\begin{equation}
\label{ab1}
\mu_{\rm RAF} = \sum_{j=1}^N \binom{N}{j} \left ( \sum_{i=0}^j (-1)^i\binom{j}{i} \lambda_i^j\right).
\end{equation}
Regarding catalysis, consider first the {\bf all-or-nothing model}, for which $\lambda_i= 1-\pi=1-\mu_C$ for $i\geq 1$ (and $\lambda_0=1$).
Eqn.~(\ref{ab1}) simplifies to:
\begin{equation}
\label{ab1a}
\mu_{\rm RAF} = 2^N - (2-\mu_C)^N,
\end{equation}
and we provide a proof of this in the Appendix.
This expression can also be derived by the following direct argument. First, note that a subset $S$ of the $N$ products of reactions does not correspond to a RAF if and only if each of the $|S|$ elements $x$ in $S$ has $C_x=0$.
The random variable $W=|\{x: C_x =1\}|$ follows the binomial distribution $Bin(N, \mu_C)$, and the proportion of subsets of a set of size $N$ that avoid a given subset of size $m$
is $2^{-m}$. Thus the expected proportion of subsets that are not RAFs is the expected value of $2^{-W}$, where $W$ has the binomial distribution above. Applying standard combinatorial identities then leads to Eqn.~(\ref{ab1a}).
The probability of a RAF for the all-or-nothing model is also easily computed:
\begin{equation}
\label{ab2a}
P_{\rm RAF} = 1-(1-\mu_C)^N.
\end{equation}
Notice that one can select $\mu_C$ to tend to 0 in such a way that $P_{\rm RAF}$ converges to 0 with $N$ while $\mu_{\rm RAF}$ tends to infinity at an exponential rate with $N$ (this requires $\mu_C$ to decay sufficiently fast with $N$ but not too fast, e.g. $\mu_C = \Theta(N^{-1-\delta})$ for $\delta>0$).
Comparing Eqns.~(\ref{ab1a}) and (\ref{ab2a}), we also observe the following identity: $$\mu_{\rm RAF}(\mu_C) = 2^N P_{\rm RAF}(\mu_C/2 ).$$
By contrast, for the {\bf uniform model}, applying straightforward algebra to Eqn.~(\ref{ab1}) leads to
\begin{equation}
\label{ab3x}
\mu_{\rm RAF} = \sum_{j=1}^N \binom{N}{j} \left(1- (1-\mu_C)^j\right)^j.
\end{equation}
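\noindent As a quick sanity check (ours, not part of the analysis above), both formulae are easy to evaluate numerically for a small elementary system: Eqn.~(\ref{ab1}) with the all-or-nothing choice of $\lambda_i$ reproduces the closed form in Eqn.~(\ref{ab1a}), and the sum in Eqn.~(\ref{ab3x}) gives the corresponding uniform-model value.
\begin{verbatim}
# Numerical check (ours) for a small elementary system, where
# n_{k,s} = C(N,k) if k = s and 0 otherwise.
from math import comb

N, mu_C = 20, 0.1

# General sum with the all-or-nothing lambdas:
# lambda_0 = 1 and lambda_i = 1 - mu_C for i >= 1.
lam = lambda i: 1.0 if i == 0 else 1.0 - mu_C
mu_raf_sum = sum(comb(N, j) * sum((-1)**i * comb(j, i) * lam(i)**j
                                  for i in range(j + 1))
                 for j in range(1, N + 1))
mu_raf_closed = 2**N - (2 - mu_C)**N          # closed form for the same model
print(mu_raf_sum, mu_raf_closed)              # agree up to rounding

# Uniform model.
mu_raf_uniform = sum(comb(N, j) * (1 - (1 - mu_C)**j)**j
                     for j in range(1, N + 1))
print(mu_raf_uniform)
\end{verbatim}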
\textcolor{black}{ We now use these formulae to investigate the relationship between $P_{\rm RAF}$ and $\mu_{\rm RAF}$ in elementary chemical reaction systems (satisfying conditions (i)--(iv)) as $N$ becomes large; in particular the impact of the choice of model (all-or-nothing vs uniform) on this relationship. }
\bigskip
\noindent {\bf Asymptotic properties of the two models at the catalysis level where RAFs arise:} For the all-or-nothing and uniform models, RAFs arise with a given (positive) probability, provided that $\mu_C$ converges to 0 no faster than $N^{-1}$
as $N$ grows. Thus, it is helpful to write $\mu_C = \gamma/N$ to compare their behaviour as $N$ grows.
For the all-or-nothing model, Eqns.~(\ref{ab1a}) and (\ref{ab2a}) reveal that:
$$\frac{\mu_{\rm RAF}}{ P_{\rm RAF}} =
2^N \frac{\left(1-\left(1-\frac{\gamma}{2N}\right)^N\right)}{\left(1-\left(1-\frac{\gamma}{N}\right)^N\right)}
\sim 2^N \left(\frac{1-\exp(-\gamma/2)}{1-\exp(-\gamma)}\right),$$
where $\sim$ is asymptotic equivalence as $N$ becomes large (with $\gamma$ being fixed),
and so:
\begin{equation}
\label{abu}
\frac{\mu_{\rm RAF}}{ P_{\rm RAF}} \sim 2^{N-1}(1 + O(\gamma)).
\end{equation}
Let us compare this with the uniform model with the same $\mu_C$ (and hence $\gamma$) value.
It can be shown that when $\gamma< e^{-1}$, we have:
\begin{equation}
\label{ab0}
\lim_{N \rightarrow \infty} \sum_{j=1}^N \binom{N}{j} \left(1- (1-\gamma/N)^j\right)^j = \gamma + o(\gamma),
\end{equation}
where $o(\gamma)$ has order $\gamma^2$ as $\gamma \rightarrow 0$ (a proof is provided in the Appendix).
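\noindent A quick numerical look at this limit (our own check, with $N$ finite) for a small value of $\gamma<e^{-1}$ is consistent with the stated behaviour $\gamma+O(\gamma^2)$:
\begin{verbatim}
# Numerical check (ours): the sum approaches gamma + O(gamma^2) for large N.
from math import comb

N, gamma = 400, 0.05
total = sum(comb(N, j) * (1 - (1 - gamma / N)**j)**j for j in range(1, N + 1))
print(total, gamma)    # roughly 0.056 vs gamma = 0.05
\end{verbatim}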
By Theorem 1 of \cite{hor2} (and for any value of $N$ and assuming $\gamma<1$), we have:
\begin{equation}
\label{ab3y}
1-\exp(-\gamma) \leq P_{\rm RAF} \leq -\ln(1-\gamma).
\end{equation}
In particular, for small $\gamma$ and the uniform model we have:
\begin{equation}
\label{ab4y}
P_{\rm RAF} = \gamma + o(\gamma).
\end{equation}
Eqns.~(\ref{ab1}), (\ref{ab0}), and (\ref{ab4y}) provide the following result for the uniform model when $\gamma < e^{-1}$:
\begin{equation}
\label{abu2}
\frac{\mu_{\rm RAF}}{ P_{\rm RAF}} \sim 1 + O(\gamma),
\end{equation}
where $\sim$ again denotes asymptotic equivalence as $N$ becomes large (with $\gamma$ fixed).
Comparing Eqns.~(\ref{abu}) and (\ref{abu2}) reveals a key difference in the ratio $\mu_{\rm RAF}/ P_{\rm RAF}$ between the all-or-nothing and uniform models when $N$ is large and $\gamma$ is small: the former equation involves an exponential term in $N$, while the second does not. This can be explained as follows. In the all-or-nothing model, the existence of a RAF comes down to whether or not there is a reaction $r$ that generates a universal catalyst; when there is, then any subset of the $N$ reactions that contains $r$ is a RAF. By contrast, with the uniform model at a low catalysis level where RAFs are improbable, if a RAF exists, there is likely to be only one. \textcolor{black}{Note that the results in this section are particular to chemical reaction systems that are elementary and satisfy properties (i)--(iv) as described at the start of this section.}
\section{Concluding comments}
In this paper, we have focused on the expected number of RAFs and uRAFs (rather than the probability of at least one such set existing), as this quantity can be described explicitly, and generic results described via this expression can be derived (e.g. in Parts (ii) and (iii) of Theorem~\ref{thm1} and Corollary~\ref{thm2}). Even so, the expressions in Theorem~\ref{thm1} involve quantities $n_{k,s}$ that may be difficult to quantify exactly; thus in the second part of the paper, we consider more restrictive types of systems.
In our analysis, we have treated inhibition and catalysis as simple and separate processes. However, a more general approach would allow reactions to proceed under rules that are encoded by Boolean expressions. For example, the expression $(a \wedge b) \vee c \vee (d \wedge \neg e)$ assigned to a reaction $r$ would allow $r$ to proceed if at least one of the following holds: (i) both $a$ and $b$ are present as catalysts, or (ii) $c$ is present as a catalyst or (iii) $d$ is present as a catalyst and $e$ is not present as an inhibitor. Extending the results in this paper to this more general setting could be an interesting exercise for future work.
\section{Acknowledgements}
\textcolor{black}{We thank the two reviewers for a number of helpful comments on an earlier version of this manuscript.}
\section{Introduction}
To realize pulsed emission in fiber lasers, Q-switching is one of the preferred techniques to generate short, high-energy pulses, which are widely employed in optical communications, industrial processing, sensing, medicine and spectroscopy, etc. \cite{chenieee20}. Besides, nonlinear frequency conversion \cite{peremansol}, Doppler LIDAR \cite{ouslimani} and coherent beam combinations \cite{heoe14,zhouieee15} require narrow spectral bandwidths of the short pulses to improve the conversion efficiency, measurement accuracy and beam quality. Generally, a Q-switching element and a band-limiting element are both necessary to achieve, separately, Q-switched pulse emission and a narrow spectral bandwidth. On the one hand, active modulators driven by external signals (such as an acousto-optic modulator or a piezoelectric actuator \cite{lees32,Posada2017,Kaneda2004}) and passive saturable absorbers (e.g., semiconductor saturable absorber mirrors and two-dimensional materials) have both been exploited to obtain Q-switched operation \cite{Tse2008,Li2017Mode,lipr6,Yao2017Graphene}; on the other hand, bandpass filters, phase-shifted FBGs and multimode interference filters \cite{Tse2008,Chakravarty2017,Popa2011} have been employed to narrow the bandwidth. Besides, some configurations based on spectral narrowing effects (e.g., suppressing ASE gain self-saturation and coupled interference filtering) have also been adopted to achieve narrow spectra \cite{Yao19oe,Anting2003}. However, implementing these functions separately usually results in a highly complex laser cavity with low reliability. In the last decade, a highly integrated and reliable saturable absorber filter combining saturable absorption and spectral filtering in one device was achieved by forming a filter in an unpumped (without 975 nm pump light) rare-earth-doped fiber \cite{poozeshjlt,yehoe15,yehlpl4}. However, these saturable absorber filters were commonly used to realize continuous-wave narrow-bandwidth lasing, because it is difficult for the rare-earth-doped fibers to meet the FSA Q-switching criterion due to their small absorption cross-sections and low doping concentrations in the corresponding radiation bands \cite{tsaioe}. Tsai et al. proposed a method of {\it{mismatch of mode field areas}} to make the unpumped EDF satisfy the Q-switching criterion $C_q>1$ or even $C_q>1.5$ \cite{oltsai}, but spectral filtering and narrow-bandwidth output were not involved in that laser.
In this work, we propose a method to achieve an SDIG by inserting a segment of unpumped EDF between a circulator and a fiber Bragg grating (FBG). Theoretical analysis and experimental observations confirm that both saturable absorption and spectral filtering can be realized simultaneously with such an SDIG. Further investigation shows that the FSA Q-switching criterion in our laser can be relaxed to $C_q=1$ owing to the spectral filtering of the SDIG. In addition, the spectral width of the Q-switched pulses can be easily adjusted by the length of the SDIG and the input pump power. The proposed configuration is an efficient way to generate Q-switched pulses with narrow bandwidths.
\section{Experimental configuration}
\begin{figure}[h!]
\centering\includegraphics[width=8.5cm]{fig1}
\caption{The schematic diagram of the all-fiber Q-switched laser. Inside the gray box is the SDIG.}
\label{fig1}
\end{figure}
\noindent The architecture of the proposed all-fiber Q-switched laser is depicted in Fig. \ref{fig1}. In the cavity, two pieces of EDF (Liekki, Er110-4/125) are utilized as the gain medium (with a length of 50 cm) and the SDIG, respectively. All the components are directly connected by single-mode fibers (SMF-28e); the core/cladding diameters of the EDFs and SMFs are 4/125 $\mu$m and 9/125 $\mu$m, respectively. The gain medium is pumped by a pigtailed diode laser emitting a continuous wave at 975 nm through a 980/1550 nm wavelength division multiplexer (WDM). When the light goes through a 30/70 optical coupler (OC), 30$\%$ of the energy is coupled out and 70$\%$ continues to propagate in the cavity. Then, a three-port circulator (CIR) and a reflective FBG (98$\%$ reflectivity at the central wavelength of 1550 nm, 3 dB bandwidth of 0.5 nm) route the light from port 1 to port 2, through the EDF2, where it is reflected by the FBG back to port 2 and then to port 3. Finally, the light enters the WDM and completes one round trip. The $\sim$10.3-m-long all-fiber cavity is compact and misalignment free, and all the components are commercially available. For measuring the output pulses, a real-time digital storage oscilloscope (DSO, Agilent Technologies, DSO9104A) with a bandwidth of 2.5 GHz, an optical spectrum analyzer (OSA, YOKOGAWA, AQ6370C) and a frequency spectrum analyzer (Agilent Technologies, N9000A) are employed to monitor the pulse trains, optical spectra and radio frequency signals, respectively.
\begin{figure}[h!]
\centering\includegraphics[width=8.5cm]{fig2}
\caption{Characteristics of the SDIG. (a) Absorption and emission cross-sections of EDF; (b) and (c) imaginary and real part of the susceptibility versus normalized pump power from $q=0$ to $1$, respectively; (d) reflect bandwidth of the SDIG with respect to the length of EDF and the refractive rate change $\Delta n$ (inset).}
\label{fig2}
\end{figure}
\noindent It was demonstrated that a fiber with a high concentration of active ions can be used as a saturable absorber for generating a pulsed regime, owing to the ion-cluster-induced nonradiative transition~\cite{kurkovqe2010}. The doping concentration of the EDF in this work is $\rho=8\times 10^{25}\quad\rm ions/m^3$, which will be verified in the experiment to be sufficient for an FSA. In the cavity, the interference of the two counter-propagating lasing beams forms a standing-wave field between the circulator and the FBG. When the EDF2 absorbs the standing-wave field energy, a periodic spatial refractive-index distribution is induced in the EDF2 due to the spatially selective saturation of the transition between the ground state and the excited state \cite{stepanovjpd}. Thus the SDIG is achieved. Figure \ref{fig2}(a) depicts the absorption and emission cross-sections of the EDF2. The pink region represents the optical spectrum at 1550$\pm$0.25 nm, which is limited by the FBG. In this region, the emission cross-section is larger than the absorption cross-section; nevertheless, the EDF still exhibits a saturable absorption characteristic. This result is at odds with the view that the absorption cross-section should be larger than the emission cross-section \cite{tsaioe,oltsai}. Besides, in this setup, the saturable absorber Q-switching criterion is relaxed to $C_q=1$. We attribute this to the fact that the grating region of the SDIG reflects the light step by step, so that little energy reaches the rear part of the SDIG. Thus, the rear part of the SDIG still provides a saturable absorption effect at a power level that would saturate an FSA without spectral filtering. In other words, the spectral filtering relaxes the Q-switching condition of the EDF. In the EDF2, the erbium ion transition occurs between the energy levels $^4\rm I_{15/2}$ and $^4\rm I_{13/2}$ if the incident light is limited to the 1550$\pm$0.25 nm region. Under this circumstance, the EDF can be regarded as a two-level system. Once the EDF2 absorbs light, the electric field of the light results in a change of the susceptibility, whose imaginary part $\chi''(\omega)$ is related to the absorption and emission cross-sections $\sigma_a$, $\sigma_e$ and the atomic population densities $N_1$, $N_2$. $\chi''(\omega)$ can be expressed as \cite{desurvire}
\begin{equation}
-\chi''(\omega)=\frac{n_{eff}c}{\omega}[\sigma_e(\omega)N_2-\sigma_a(\omega)N_1],
\end{equation}
where $n_{eff}=1.46$ is the refractive index of the EDF and $c$ represents the light speed in vacuum. The relationship of the real and imaginary parts of the atomic susceptibility is expressed by Kramers-Kronig relation (KKR)
\begin{equation}
\chi'(\omega)=\frac{1}{\pi}P.V.\int_{-\infty}^{+\infty}\frac{\chi''(\omega')}{\omega'-\omega}\rm d\omega',
\end{equation}
where $N_1=\rho_0/(1+q)$ and $N_2=\rho_0q/(1+q)$ describe the population densities of the two energy levels and $q$ is the normalized input power; $q=0$ and $q=1$ represent the EDF with no input power and in the saturation state, respectively. Figures \ref{fig2}(b) and (c) depict the imaginary and real parts of the susceptibility for different $q$. When $q$ increases, $\chi''(\omega)$ increases and the absorption of the EDF decreases gradually. Meanwhile, the reduced $\chi'(\omega)$ reflects the decrease of the refractive index change through $\delta n(\omega)=(\Gamma_s/2n)\chi'(\omega)$, where the overlap factor of the EDF is $\Gamma_s=0.5$. From Fig. \ref{fig2}(c), the refractive index change at 1550 nm is calculated as $2.89\times10^{-6}<\delta n<9.25\times10^{-6}$, corresponding to a maximum refractive index difference of the EDF of $\Delta n = 6.36\times10^{-6}$.
Inside the unpumped EDF, the formed DIG is considered as a Bragg reflective grating \cite{stepanovjpd}. Thus the FWHM bandwidth is described by \cite{zhangoe16}
\begin{equation}
\Delta \lambda=\lambda\kappa\sqrt{(\frac{\Delta n}{2 n_{eff}})^2+(\frac{\lambda}{2n_{eff}L_g})^2},
\end{equation}
where $\lambda$ and $L_g$ are the central wavelength of the light and the length of the EDF, respectively, and $\kappa=2\Delta n/(\lambda n_{eff})$ is the coupling coefficient of the DIG. The reflective bandwidth versus $L_g$ and $\Delta n$ is shown in Fig. \ref{fig2}(d). The reflective bandwidth decreases as the EDF lengthens and as $\Delta n$ (related to the input power of the pump source) becomes smaller. The markers represent the lengths and $\Delta n$ values of the EDFs used in this work. The reflective bandwidths $\Delta \lambda$ are calculated as 69.2 pm, 50.3 pm and 30.0 pm for EDF lengths of 7 cm, 10 cm and 20 cm, respectively. Clearly, saturable absorption and spectral filtering can both be achieved with the SDIG; thus it can be used as a narrow-bandwidth SA.
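For reference, Eq. (3) is straightforward to evaluate with the parameters quoted above; the short script below (a sketch of our own, assuming $\lambda=1550$ nm, $n_{eff}=1.46$ and $\Delta n=6.36\times10^{-6}$) reproduces the three bandwidths to within the rounding of $\Delta n$.
\begin{verbatim}
# A minimal sketch (ours) evaluating Eq. (3) for the DIG reflective bandwidth.
import numpy as np

lam = 1550e-9          # central wavelength [m]
n_eff = 1.46           # effective refractive index of the EDF
dn = 6.36e-6           # maximum refractive index difference
kappa = 2.0 * dn / (lam * n_eff)   # coupling coefficient of the DIG [1/m]

for L_g in (0.07, 0.10, 0.20):     # EDF lengths [m]
    dlam = lam * kappa * np.sqrt((dn / (2 * n_eff))**2
                                 + (lam / (2 * n_eff * L_g))**2)
    print(f"L_g = {100 * L_g:2.0f} cm -> bandwidth ~ {dlam * 1e12:4.1f} pm")
\end{verbatim}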
\section{Experimental results}
\begin{figure}
\centering\includegraphics[width=8.5cm]{fig3}
\caption{Pulse characteristics including (a) average powers, (b) repeat frequencies, (c) pulse durations and (d) single pulse energies versus pump power.}
\label{fig3}
\end{figure}
\noindent In our experiment, we varied the pump power from 1 mW to 650 mW and measured the Q-switching performance in terms of the average powers, repetition rates, pulse durations and single pulse energies when EDF2 segments with lengths of 7 cm, 10 cm and 20 cm were spliced into the cavity one by one. As shown in Fig. \ref{fig3}, the laser with each of the three lengths of EDF2 operates in the Q-switching regime when the pump power is increased from the lasing threshold (70 mW, 100 mW and 250 mW for EDF2 lengths of 7 cm, 10 cm and 20 cm, respectively) to the maximum of 650 mW. The self-starting characteristic demonstrates the effectiveness and high efficiency of the SDIG for Q-switching. From Fig. \ref{fig3}(a), the average powers increase linearly from 0.56 mW, 0.62 mW and 5.83 mW to 27.13 mW, 25.74 mW and 21.38 mW as the pump power is raised, and the slope efficiencies are 4.72$\%$, 4.44$\%$ and 3.83$\%$ for EDF2 lengths of 7 cm, 10 cm and 20 cm, respectively. The low slope efficiencies mainly originate from the high loss induced by the narrow reflective bandwidth of the SDIG. Furthermore, when the pump power is reduced, a bistable state appears, and the minimum emission powers are 0.46 mW, 0.62 mW and 1.05 mW at pump powers of 63 mW, 80 mW and 120 mW, respectively. Over the same range of pump powers, the repetition rates and single pulse energies increase while their growth rates decrease [Figs. \ref{fig3}(b) and (d)]. The pulse durations shown in Fig. \ref{fig3}(c) first narrow and then gradually approach steady values as the pump power increases. Comparing the results in Fig. \ref{fig3}, we attribute the lower average powers, slope efficiencies and repetition rates, larger single pulse energies and broader pulse durations to the larger cavity loss induced by a longer SDIG, owing to its longer lasing absorption length and narrower bandwidth. This deduction is consistent with the prediction of Eq. (3) and Fig. \ref{fig2}(d).
\begin{figure}
\centering\includegraphics[width=8.5cm]{fig4}
\caption{Typical output pulse performance. (a) pulse trains, (b) single pulse waveforms, (c) fundamental frequencies and (d) RF spectrums at the pump power of 450 mW.}
\label{fig4}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8.5cm]{fig5}
\caption{ Optical spectrums of the Q-switched laser with the lengths of the SDIGs of (a) 7 cm, (b) 10 cm and (c) 20 cm, respectively.}
\label{fig5}
\end{figure}
When the pump power is fixed at 450 mW, the experimental results, including the pulse intensities and radio frequency characteristics for the three EDF2 lengths, are measured and shown in Fig. \ref{fig4}. From Fig. \ref{fig4}(a), the pulse trains for the three lengths of EDF2 are all stable. The pulse intervals of 11.7 $\mu$s, 12.8 $\mu$s and 16.5 $\mu$s correspond to the repetition rates of 85.53 kHz, 78.05 kHz and 60.63 kHz in Fig. \ref{fig4}(d), respectively. Figure \ref{fig4}(b) shows the single pulse profiles in the expanded time domain. Pulse durations of 1.14 $\mu$s, 1.25 $\mu$s and 1.75 $\mu$s are obtained through Gaussian fitting of the pulse data in the three cases. With a shorter EDF2, the noise on the Q-switched pulse envelope becomes more obvious, and the laser tends to be unstable. Conversely, acquiring a purer Q-switched pulse requires a longer EDF2. The output performance in the frequency domain is depicted in Figs. \ref{fig4}(c) and (d); the fundamental frequencies (@BW: 2 kHz) show that the signal-to-noise ratios (SNRs) of the three Q-switched pulses exceed 50 dB. Besides, higher-order harmonic signals exist in the frequency range from 0 to 1 MHz. Obviously, with a fixed pump power, a longer SDIG decreases the repetition rate and broadens the pulse duration, which is consistent with the results in Fig. \ref{fig3}.
\begin{table*}
\centering
\caption{Comparison of Er-doped fiber lasers based on FSAs ($\sim$ denotes values estimated from the figures in the cited references).}
\begin{tabular}{ccccccc}
\hline
SAs & Average power (mW) & Repetition rate (kHz) & Central wavelength (nm) & Pulse duration ($\mu$s) & Spectral width (pm) & Refs \\
\hline
\hline
Tm & - & 0.1-6 & 1570 & 0.42 & - & \cite{oetsai}\\
Tm & 100-720 & 0.3-2 & 1580 & 0.1 & $\sim$1100 & \cite{lplkurkov}\\
Tm & 0.18-0.24 & 3.9-12.7 & 1557.6 & 7.4-20.6 & - & \cite{cpltiu}\\
Tm & 136 (Max) & 54.1-106.7 & 1560 & 3.28 & $\sim$200 & \cite{joprahman}\\
Tm & 1.57 & 14.45-78.49 & 1555.14, 1557.64 & 6.94-35.84 & - & \cite{oclatiff}\\
Tm-Ho & 0.6-1.1 & 1-15 & 1535-1573 & 8.2-10 & - & \cite{lptao}\\
Tm-Ho & $\sim$1-12.5 & 5.5-42 & 1557.5 & 7.8 (Min) & 63 & \cite{lpltao}\\
Tm-Ho & 27.61 (Max) & 11.6-57.14 & 1529.69, 1531.74, 1533.48 & 10.46-61.8 & - & \cite{ieeeanzueto}\\
Sm & 80 (Max) & $\sim$70.2 & 1550.0 & 0.45 (Min) & $\sim$50 & \cite{predaol}\\
Cr & 10.68 (Max) & 68.12-115.9 & 1558.5 & 3.85 (Min) & $\sim$200 & \cite{oftdutta}\\
Er & $\sim$2.07 & 0.5-1 & 1530 & 0.08-0.32 & - & \cite{oltsai}\\
Er & 0.56-27.13 & 17.94-118.79 & 1549.56 & 5.32-1.20 & 29.1 (Min) & This work\\
\hline
\end{tabular}
\label{table1}
\end{table*}
As to the optical spectra, the variations of their shapes and bandwidths with the pump power are measured and shown in Fig. \ref{fig5}. The central wavelengths of the optical spectra are around 1549.6 nm, and the shapes remain almost unchanged when the pump power is altered. Due to the bandwidth limitation provided by the SDIG in EDF2, the full-width at half-maximum (FWHM) bandwidth and spectral structure for each length of EDF2 broaden with increasing pump power. Besides, when the EDF2 becomes longer, the spectral width obtained from the $\Delta\lambda$ value of the OSA narrows significantly. The largest spectral widths are 74.2 pm, 47.8 pm and 37.2 pm for SDIG lengths of 7 cm, 10 cm and 20 cm, respectively. The minimum spectral width of 29.1 pm is obtained when the length of the SDIG and the pump power are 20 cm and 250 mW, respectively. The results show that the bandwidth of the SDIG is narrower with a longer EDF2 and lower pump power, which coincides with the theoretical analysis in the section above. Therefore, one can realize narrower bandwidths and even single-longitudinal-mode Q-switched pulses with a high-power pump source and a longer EDF2.
The pulse characteristics, comprising average power, repetition rate, central wavelength, pulse duration and spectral width, of several typical published Er-doped fiber lasers Q-switched by different FSAs (including Tm-doped, Tm/Ho-doped, Sm-doped, Cr-doped and Er-doped fibers) are displayed in Table \ref{table1}. Clearly, the spectral width of this work is the narrowest among these lasers, indicating the effective filtering of the SDIG. Besides, the tuning ranges of repetition rates and pulse durations of our laser also compare well, benefiting from the compact configuration.
\section{Conclusion}
We have achieved an Er-doped Q-switched fiber laser with narrow-bandwidth pulse emission based on a self-designed SDIG. Such an FSA-based SDIG can provide saturable absorption and spectral filtering simultaneously, which is efficient for realizing Q-switching operation in fiber lasers. Further results show that the spectral width of the output Q-switched pulses can be narrowed by increasing the length of the SDIG and reducing the pump power. The narrowest spectral width of 29.1 pm is achieved when the SDIG length and pump power are 20 cm and 250 mW, respectively. The theoretical and experimental results are in good agreement. Our method provides a promising way to obtain narrow-bandwidth Q-switched fiber lasers with low cost and compact size, which may exhibit significant potential in nonlinear frequency conversion, Doppler LIDAR and coherent beam combination.
\begin{acknowledgments}
This work is supported by the National Natural Science Foundation of China (61905193); National Key R\&D Program of China (2017YFB0405102); Key Laboratory of Photoelectron of Education Committee Shaanxi Province of China (18JS113); Open Research Fund of State Key Laboratory of Transient Optics and Photonics (SKLST201805); Northwest University Innovation Fund for Postgraduate Students (YZZ17099).
\end{acknowledgments}
\nocite{*}
\section{Introduction}
\IEEEPARstart{W}{ith} the rapid development of computer technique, multi-dimensional data,
which is also known as tensors \cite{kolda2009tensor}, has received much attention
in various application fields, such as data mining \cite{kolda2008scalable, morup2011applications},
signal and image processing \cite{cichocki2007nonnegative, cichocki2015tensor, sidiropoulos2017tensor, zhang2019nonconvex},
and neuroscience \cite{mori2006principles}. Many underlying tensor data is
nonnegative due to their physical meaning such as the pixels of images.
An efficient approach to exploit the intrinsic structure of a nonnegative tensor is tensor factorization,
which can explore its hidden information.
Moreover, the underling tensor data may also suffer from missing entries and noisy corruptions
during its acquiring process.
In this paper, we focus on the sparse nonnegative tensor factorization (NTF) and completion
problem from partial and noisy observations, where the observed entries are corrupted
by general noise such as additive Gaussian noise, additive Laplace noise, and Poisson observations.
Tensors arise in a variety of real-world applications that
can represent the multi-dimensional correlation of the underlying tensor data,
e.g., the spatial and spectral dimensions for hyperspectral images
and the spatial and time dimensions for video data.
In particular, for second-order tensors,
NTF reduces to nonnegative matrix factorization (NMF),
which can extract meaningful features
and has a wide variety of practical applications in scientific and engineering areas,
see \cite{ding2008convex, lee1999learning, gillis2020nonnegative, pan2019generalized, pan2018orthogonal}
and references therein.
Here the order of a tensor is the number of dimensions, also known as ways or modes \cite{kolda2009tensor}.
It has been demonstrated that NMF is
able to learn localized features with obvious interpretations \cite{lee1999learning}.
Moreover, Gillis \cite{gillis2012sparse} proposed a sparse NMF model with a sparse factor,
which provably led to optimal and sparse solutions under a separability assumption.
Gao et al. \cite{gao2005improving} showed that sparse NMF can
improve molecular cancer class discovery than the direct application of the basic NMF.
Zhi et al. \cite{zhi2010graph} also showed that sparse NMF provided
better facial representations and achieved higher recognition rates than NMF for facial expression recognition.
More applications about the advantages of sparse NMF over NMF can be referred to
\cite{gillis2010using, kim2007sparse, soltani2017tomographic}.
Besides, Soni et al. \cite{soni2016noisy} proposed a general class of
matrix completion tasks with noisy observations, which could reduce to
sparse NMF when the underlying factor matrices are nonnegative and all
entries of the noisy matrix are observed.
They showed that the error
bounds of estimators of sparse NMF are lower than those of NMF \cite{soni2016noisy}.
Furthermore, Sambasivan et al.
\cite{sambasivan2018minimax} derived the minimax lower bounds of the expected per-element square
error under general noise
observations.
Due to exploiting the intrinsic structure of the underlying tensor data,
which contains correlation in different modes,
NTF has also been widely applied in a variety of fields,
see, e.g., \cite{ chi2012tensors, hong2020generalized, morup2008algorithms, pan2021orthogonal, veganzones2015nonnegative}.
There are some popular NTF approaches,
such as nonnegative Tucker decomposition \cite{li2016mr},
nonnegative CANDECOMP/PARAFAC (CP) decomposition \cite{veganzones2015nonnegative},
nonnegative tensor train decomposition \cite{lee2016nonnegative},
which are derived by different
applications, see also \cite{kolda2009tensor, vervliet2019exploiting}.
For example, Xu \cite{xu2015alternating} proposed an alternating proximal
gradient method for sparse nonnegative Tucker decomposition,
although it is only efficient for additive Gaussian noise.
Qi et al. \cite{qi2018muti} utilized Tucker decomposition to establish the
redundant basis of the space of multi-linear maps with the sparsity
constraint, and further proposed multi-dimensional synthesis/analysis sparse
models to represent multi-dimensional signals effectively and efficiently.
Moreover, M{\o}rup et al. \cite{morup2008algorithms}
showed that sparse nonnegative Tucker decomposition yields a
parts-based representation as seen in NMF for two-way data,
which is a simpler and more interpretable decomposition
than the standard nonnegative Tucker decomposition for multi-dimensional data.
Furthermore, they showed that sparse nonnegative
Tucker decomposition can help reduce ambiguities
by imposing constraints of sparseness in
the decomposition for model selection and component identification.
For nonnegative CP decomposition,
Veganzones et al. \cite{veganzones2015nonnegative} proposed
a novel compression-based nonnegative CP decomposition without
sparse constraints for blind spectral unmixing of hyperspectral images,
which was only utilized for the observations with additive Gaussian noise.
Kim et al.
\cite{kim2013sparse} proposed a sparse CP decomposition model,
which improved the analysis and inference of multi-dimensional data
for dimensionality reduction, feature selection as well as signal recovery.
Another kind of NTF is based on the recently proposed
tensor-tensor product \cite{Kilmer2011Factorization},
whose algebra operators have been proposed and studied
for third-order tensors \cite{Kilmer2011Factorization, Kilmer2013Third}
and then generalized to higher-order tensors \cite{martin2013order} and transformed tensor-tensor product \cite{song2020robust}.
Besides, Kilmer et al. \cite{Kilmer2011Factorization} established the framework of
tensor singular value decomposition (SVD).
This kind of tensor-tensor product and tensor SVD
has been applied in a great number of areas such as facial recognition \cite{hao2013facial},
tensor completion \cite{ng2020patch, corrected2019zhang, zhang2014novel,Zhang2017Exact, zhang2021low},
and image processing \cite{semerci2014tensor, zheng2020mixed}.
Recently, this kind of sparse NTF model
has been proposed and studied
for dictionary learning problems, e.g.,
tomographic image reconstruction \cite{soltani2016tensor},
image compression and image deblurring \cite{newman2019non}.
The sparse factor in this kind of NTF with tensor-tensor product is due to the sparse
representation of patched-dictionary elements for tensor dictionary learning \cite{soltani2016tensor}.
One needs to learn a nonnegative tensor patch dictionary from training data,
which is to solve a sparse NTF problem with tensor-tensor product. It was demonstrated that
the tensor-based dictionary
learning algorithm exhibits better performance
than the matrix-based method in terms of approximation accuracy.
However, there are no theoretical results on the error bounds of nonnegative sparse
tensor factorization models, and neither different noise settings nor missing values
have been studied in the literature.
In this paper, we propose a sparse NTF and completion model with tensor-tensor product
from partial and noisy observations for third-order tensors, where the observations
are corrupted by a general class of noise models.
The proposed model consists of a data-fitting term for the observations
and the tensor $\ell_0$ norm for the sparse factor,
where the two tensor factors operated by tensor-tensor product are nonnegative
and the data-fitting term is derived by the maximum likelihood estimate.
Theoretically, we show that the error
bounds of the estimator of the proposed model can be established under general noise observations.
The detailed error bounds under specific noise distributions including additive Gaussian noise,
additive Laplace noise, and Poisson observations can be derived.
Moreover, the minimax lower bounds are shown to be matched with the established upper bounds
up to a logarithmic factor of the sizes of the underlying tensor. These theoretical results for tensors
are better than those obtained for matrices \cite{soni2016noisy}, and this illustrates the advantage of the use of
nonnegative sparse tensor models for completion and denoising.
Then an alternating direction method of multipliers (ADMM) based algorithm \cite{Gabay1976A, wang2015global}
is developed to solve the general noise observation models.
Numerical examples are presented to
show the performance of the proposed sparse NTF and completion
is better than that of the matrix-based factorization \cite{soni2016noisy}.
The main contributions of this paper are summarized as follows.
(1) Based on tensor-tensor product, a sparse NTF and completion model
from partial and noisy observations
is proposed under general noise distributions.
(2) The upper bounds of the estimators of the proposed model are established under general noise observations.
Then the upper bounds are specialized to the widely used noise observations including additive Gaussian noise,
additive Laplace noise, and Poisson observations.
(3) The minimax lower bounds are derived for the previous noise observations,
which match the upper bounds up to a logarithmic factor for different noise models.
(4) An ADMM based algorithm is developed to solve the resulting model.
And numerical experiments are presented to demonstrate the effectiveness of the proposed tensor-based method
compared with the matrix-based method in \cite{soni2016noisy}.
The rest of this paper is organized as follows.
Some notation and notions are provided in Section \ref{Prelim}.
We propose a sparse NTF and completion model based on tensor-tensor product
from partial and noisy observations in Section \ref{ProMod},
where the observations are corrupted by a general class of noise.
In Section \ref{upperbound}, the upper bounds of estimators of the proposed model are established,
which are specialized to three widely used noise models
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
Then the minimax lower bounds are also derived for the previous
observation models in Section \ref{lowerbou}.
An ADMM based algorithm is developed to solve the resulting model in Section \ref{OptimAlg}.
Numerical experiments are reported to validate
the effectiveness of the proposed method in Section \ref{NumeriExper}.
Finally, the conclusions and future work are given in Section \ref{Conclu}.
All proofs of the theoretical results are delegated
to the appendix.
\section{Preliminaries}\label{Prelim}
Throughout this paper, $\mathbb{R}$
denotes the set of real numbers.
$\mathbb{R}_+^{n_1\times n_2\times n_3}$ denotes the space of $n_1\times n_2\times n_3$ tensors
whose elements are all nonnegative.
Scalars are represented by lowercase letters, e.g., $x$.
Vectors and matrices are represented by
lowercase boldface letters and uppercase boldface letters, respectively,
e.g., $\mathbf{x}$ and $\mathbf{X}$.
Tensors are denoted by capital Euler script letters, e.g., $\mathcal{X}$.
The $(i,j,k)$th entry of a tensor $\mathcal{X}$ is denoted as $\mathcal{X}_{ijk}$.
The $i$th frontal slice of a tensor $\mathcal{X}$ is a matrix denoted by $\mathbf{X}^{(i)}$,
which is obtained by fixing the third index and varying the first two indices of $\mathcal{X}$.
The $\ell_2$ norm of a vector $\mathbf{x}\in\mathbb{R}^{n}$,
denoted by $\|\mathbf{x}\|$, is defined as $\|\mathbf{x}\|=\sqrt{\sum_{i=1}^{n}x_i^2}$,
where $x_i$ is the $i$th entry of $\mathbf{x}$.
The tensor $\ell_\infty$ norm of a tensor
$\mathcal{X}$ is defined as $\|\mathcal{X}\|_\infty=\max_{i,j,k}|\mathcal{X}_{ijk}|$.
The tensor $\ell_0$ norm of $\mathcal{X}$, denoted by $\|\mathcal{X}\|_0$, is defined as the count of all nonzero entries of $\mathcal{X}$.
The inner product of two tensors $\mathcal{X}, \mathcal{Y}\in\mathbb{R}^{n_1\times n_2\times n_3}$ is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle=\sum_{i=1}^{n_3}\langle \mathbf{X}^{(i)}, \mathbf{Y}^{(i)} \rangle$, where $\langle \mathbf{X}^{(i)}, \mathbf{Y}^{(i)} \rangle=tr((\mathbf{X}^{(i)})^T\mathbf{Y}^{(i)})$.
Here $\cdot^T$ and $tr(\cdot)$ denote the transpose and the trace of a matrix, respectively.
The tensor Frobenius norm of $\mathcal{X}$ is defined as $\|\mathcal{X}\|_F=\sqrt{\langle \mathcal{X},\mathcal{X} \rangle}$.
Let $p_{x_1}(y_1)$ and $p_{x_2}(y_2)$ be the
probability density functions or probability mass functions
with respect to the random variables $y_1$ and $y_2$ with parameters $x_1$ and $x_2$, respectively.
The Kullback-Leibler (KL) divergence of $p_{x_1}(y_1)$ from $p_{x_2}(y_2)$ is defined as
$$
D(p_{x_1}(y_1)||p_{x_2}(y_2))=\mathbb{E}_{p_{x_1}(y_1)}\left[\log\frac{p_{x_1}(y_1)}{p_{x_2}(y_2)}\right].
$$
The Hellinger affinity between $p_{x_1}(y_1)$ and $p_{x_2}(y_2)$ is defined as
$$
H(p_{x_1}(y_1)||p_{x_2}(y_2))
=\mathbb{E}_{p_{x_1}}\left[\sqrt{\frac{p_{x_2}(y_2)}{p_{x_1}(y_1)}}\right]
=\mathbb{E}_{p_{x_2}}\left[\sqrt{\frac{p_{x_1}(y_1)}{p_{x_2}(y_2)}}\right].
$$
The joint distributions of higher-order
and multi-dimensional variables, denoted
by $p_{\mathcal{X}_1}(\mathcal{Y}), p_{\mathcal{X}_2}(\mathcal{Y})$,
are the joint distributions of the vectorization of the tensors.
Then the KL divergence of $p_{\mathcal{X}_1}(\mathcal{Y})$ from $ p_{\mathcal{X}_2}(\mathcal{Y})$ is defined as
$$
D(p_{\mathcal{X}_1}(\mathcal{Y})||p_{\mathcal{X}_2}(\mathcal{Y}))
:=\sum_{i,j,k}D(p_{(\mathcal{X}_1)_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_2)_{ijk}}(\mathcal{Y}_{ijk})),
$$
and its Hellinger affinity is defined as
$$
H(p_{\mathcal{X}_1}(\mathcal{Y})||p_{\mathcal{X}_2}(\mathcal{Y}))
:=\prod_{i,j,k}H(p_{(\mathcal{X}_1)_{ijk}}(\mathcal{Y}_{ijk}),p_{(\mathcal{X}_2)_{ijk}}(\mathcal{Y}_{ijk})).
$$
Now we define the tensor-tensor product between two third-order tensors \cite{Kilmer2011Factorization}.
\begin{definition}\label{TenTensPro}
\cite[Definition 3.1]{Kilmer2011Factorization}
Let $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$
and $\mathcal{Y}\in\mathbb{R}^{n_2\times n_4\times n_3}$.
The tensor-tensor product, denoted as $\mathcal{X}\diamond\mathcal{Y}$,
is an $n_1\times n_4\times n_3$ tensor defined by
$$
\mathcal{X}\diamond\mathcal{Y}:=
\textup{Fold}\left(\textup{Circ}(\textup{Unfold}(\mathcal{X}))\cdot \textup{Unfold}(\mathcal{Y})\right),
$$
where
$$
\textup{Unfold}(\mathcal{X})=\begin{pmatrix} \mathbf{X}^{(1)} \\ \mathbf{X}^{(2)}
\\ \vdots \\ \mathbf{X}^{(n_3)} \end{pmatrix}, \
\textup{Fold}\begin{pmatrix} \mathbf{X}^{(1)} \\ \mathbf{X}^{(2)}
\\ \vdots \\ \mathbf{X}^{(n_3)} \end{pmatrix}=\mathcal{X}, \
\textup{Circ}\begin{pmatrix} \mathbf{X}^{(1)} \\ \mathbf{X}^{(2)}
\\ \vdots \\ \mathbf{X}^{(n_3)} \end{pmatrix}=
\begin{pmatrix}
\mathbf{X}^{(1)} & \mathbf{X}^{(n_3)} & \cdots & \mathbf{X}^{(2)} \\
\mathbf{X}^{(2)} & \mathbf{X}^{(1)} & \cdots & \mathbf{X}^{(3)}\\
\vdots & \vdots & & \vdots \\
\mathbf{X}^{(n_3)} &\mathbf{X}^{(n_3-1)}&\cdots & \mathbf{X}^{(1)} \end{pmatrix}.
$$
\end{definition}
By the block circulate structure,
the tensor-tensor product of two third-order tensors
can be implemented efficiently by fast Fourier transform \cite{Kilmer2011Factorization}.
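As a computational aside (a sketch of our own, not taken from \cite{Kilmer2011Factorization}), the following snippet implements the tensor-tensor product of Definition \ref{TenTensPro} both directly via the block-circulant construction and via the FFT along the third mode, and checks that the two agree.
\begin{verbatim}
# A minimal sketch (ours) of the t-product, using NumPy and 0-based indexing.
import numpy as np

def t_product_fft(X, Y):
    """t-product of X (n1 x n2 x n3) with Y (n2 x n4 x n3) via FFT along mode 3."""
    Xf, Yf = np.fft.fft(X, axis=2), np.fft.fft(Y, axis=2)
    Zf = np.einsum('irk,rjk->ijk', Xf, Yf)      # slice-wise matrix products
    return np.real(np.fft.ifft(Zf, axis=2))

def t_product_circ(X, Y):
    """Reference version: Fold(Circ(Unfold(X)) . Unfold(Y))."""
    n1, n2, n3 = X.shape
    circ = np.vstack([np.hstack([X[:, :, (i - j) % n3] for j in range(n3)])
                      for i in range(n3)])
    unfold_Y = np.vstack([Y[:, :, k] for k in range(n3)])
    Z = circ @ unfold_Y
    return np.stack([Z[k * n1:(k + 1) * n1, :] for k in range(n3)], axis=2)

rng = np.random.default_rng(1)
A, B = rng.random((4, 3, 5)), rng.random((3, 6, 5))
print(np.allclose(t_product_fft(A, B), t_product_circ(A, B)))   # True
\end{verbatim}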
\begin{definition}\cite{Kilmer2011Factorization}
The transpose of a tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$,
is the tensor $\mathcal{X}^T\in\mathbb{R}^{n_2\times n_1\times n_3}$ obtained by transposing each of the frontal
slices and then reversing the order of transposed frontal slices 2 through $n_3$, i.e.,
$$
(\mathcal{X}^T)^{(1)} = (\mathbf{X}^{(1)})^T, \ (\mathcal{X}^T)^{(i)} = (\mathbf{X}^{(n_3+2-i)})^T, \ i=2,\ldots, n_3.
$$
\end{definition}
\begin{definition}\cite[Definition 3.4]{Kilmer2011Factorization}
An $n\times n\times m$ identity tensor $\mathcal{I}$ is the tensor whose
first frontal slice is the $n\times n$ identity matrix, and whose other frontal slices are all zeros.
\end{definition}
\begin{definition}\cite[Definition 3.5]{Kilmer2011Factorization}
A tensor $\mathcal{A}\in\mathbb{R}^{n\times n\times m}$ is said to have an inverse, denoted by $\mathcal{A}^{-1}\in\mathbb{R}^{n\times n\times m}$, if $\mathcal{A}\diamond\mathcal{A}^{-1}=\mathcal{A}^{-1}\diamond\mathcal{A}=\mathcal{I}$, where $\mathcal{I}\in\mathbb{R}^{n\times n\times m}$ is the identity tensor.
\end{definition}
The proximal mapping of a closed proper function $f:\mathfrak{C}\rightarrow (-\infty, +\infty]$ is defined as
$$
\textup{Prox}_{f}(y)=\arg\min_{x\in\mathfrak{C}}\left\{f(x)+\frac{1}{2}\|x-y\|^2\right\},
$$
where $\mathfrak{C}$ is a finite-dimensional Euclidean space.
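As a small illustration of this definition (ours; it is included only to illustrate the proximal mapping, not as the update used in the algorithm of Section \ref{OptimAlg}), the proximal mapping of $f(\mathcal{B})=\lambda\|\mathcal{B}\|_0$ acts entrywise as hard thresholding: an entry $y$ is kept when $y^2>2\lambda$ and set to zero otherwise.
\begin{verbatim}
# A hedged sketch (ours) of the proximal mapping of f(B) = lam * ||B||_0:
# entrywise, keep y when y^2 > 2*lam (cost lam beats (1/2)*y^2), else set to 0.
import numpy as np

def prox_l0(Y, lam):
    X = Y.copy()
    X[Y**2 <= 2.0 * lam] = 0.0
    return X

Y = np.array([0.3, -1.2, 0.05, 2.0])
print(prox_l0(Y, lam=0.1))    # [ 0.  -1.2  0.   2. ]
\end{verbatim}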
Next we provide a brief summary of the notation used throughout this paper.
\begin{itemize}
\item $\lfloor x\rfloor$ is the integer part of $x$.
$\lceil x\rceil$ is the smallest integer that is larger than or equal to $x$.
\item Denote $m\vee n=\max\{m,n\}$ and $m\wedge n=\min\{m,n\}$.
\end{itemize}
\section{Sparse NTF and Completion via Tensor-Tensor Product}\label{ProMod}
Let $\mathcal{X}^*\in\mathbb{R}_+^{n_1\times n_2\times n_3}$ be an unknown nonnegative tensor we aim to estimate,
which admits a following nonnegative factorization:
$$
\mathcal{X}^*=\mathcal{A}^* \diamond \mathcal{B}^*,
$$
where $\mathcal{A}^*\in\mathbb{R}_+^{n_1\times r\times n_3}$ and
$\mathcal{B}^*\in\mathbb{R}_+^{r\times n_2\times n_3}$
are prior unknown factor tensors with $r\leq \min\{n_1,n_2\}$.
We assume that each entry of $\mathcal{X}^*, \mathcal{A}^*,
\mathcal{B}^*$ is bounded, i.e.,
$$
0\leq \mathcal{X}_{ijk}^*\leq \frac{c}{2}, \ \ \
0\leq \mathcal{A}_{ijk}^*\leq 1, \ \ \ 0\leq \mathcal{B}_{ijk}^*\leq b, \ \ \ \forall \ i,j,k,
$$
where $\frac{c}{2}$ is used for simplicity of subsequent analysis.
We remark that the upper bound $1$ on the entries $\mathcal{A}_{ijk}$ of $\mathcal{A}^*$ is not restrictive and can be replaced by an arbitrary positive constant.
Besides, our focus is that the factor tensor $\mathcal{B}^*$ is sparse.
However,
only a noisy and incomplete version of the underlying tensor $\mathcal{X}^*$ is available in practice.
Let $\Omega\subseteq\{1,2,\ldots, n_1\}\times \{1,2,\ldots, n_2\}\times \{1,2,\ldots, n_3\}$
be a subset at which the entries of the observations $\mathcal{Y}$ are collected.
Denote by $\mathcal{Y}_{\Omega}\in\mathbb{R}^m$ the vector
obtained by vectorizing the entries of $\mathcal{Y}$ indexed by $\Omega$ in lexicographic order,
where $m$ is the number of observed entries.
Assume that $n_1, n_2, n_3\geq 2$ throughout this paper.
Suppose that the location
set $\Omega$ is generated according to an independent
Bernoulli model with probability $\gamma=\frac{m}{n_1n_2n_3}$ (denoted by Bern($\gamma$)),
i.e., each index $(i,j,k)$ belongs to $\Omega$ with probability $\gamma$, which is denoted as $\Omega\sim \text{Bern}(\gamma)$.
Mathematically, the joint probability density function
or probability mass function of observations $\mathcal{Y}_\Omega$ is given by
\begin{equation}\label{obserPo}
p_{\mathcal{X}_\Omega^*}(\mathcal{Y}_{\Omega})
:=\prod_{(i,j,k)\in \Omega}p_{\mathcal{X}_{ijk}^*}(\mathcal{Y}_{ijk}).
\end{equation}
By the maximum likelihood estimate, we propose the following sparse NTF and completion model with nonnegative constraints:
\begin{equation}\label{model}
\widetilde{\mathcal{X}}^{\lambda}\in\arg\min_{\mathcal{X}=\mathcal{A} \diamond \mathcal{B}\in\Gamma}\left\{-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{B}\|_0\right\},
\end{equation}
where $\lambda>0$ is the regularization parameter and $\Gamma$ is defined by
\begin{equation}\label{TauSet}
\Gamma:=\{\mathcal{X}=\mathcal{A} \diamond \mathcal{B}:
\ \mathcal{A}\in\mathfrak{L}, \ \mathcal{B}\in\mathfrak{D}, \ 0\leq \mathcal{X}_{ijk}\leq c \}.
\end{equation}
Here $\Gamma$ is a countable set of estimates constructed as follows:
First, let
\begin{equation}\label{denu}
\vartheta:=2^{\lceil\beta\log_2(n_1\vee n_2)\rceil}
\end{equation}
for a specified $\beta\geq 3,$
we construct $\mathfrak{L}$ to be the set
of all tensors $\mathcal{A}\in\mathbb{R}_+^{n_1\times r\times n_3}$
whose entries are discretized to one of $\vartheta$
uniformly sized bins in the range $[0,1]$,
and $\mathfrak{D}$
to be the set of all tensors $\mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}$
whose entries either take the value $0$, or are discretized to
one of $\vartheta$ uniformly sized bins in the range $[0,b]$.
\begin{remark}
When all entries of $\mathcal{Y}$ are observed and $\mathcal{Y}$ is corrupted by additive Gaussian noise,
the model (\ref{model}) reduces to sparse NTF with tensor-tensor product,
whose relaxation, replaced the tensor $\ell_0$ norm by the tensor $\ell_1$ norm,
has been applied in patch-based dictionary learning for image data \cite{soltani2016tensor, newman2019non}.
\end{remark}
\begin{remark}
We do not specialize the noise in model (\ref{model}), and just need the joint
probability density function
or probability mass function of observations in (\ref{obserPo}).
In particular, our model can address the observations with some widely used noise distributions,
such as additive Gaussian noise, additive Laplace noise, and Poisson observations.
\end{remark}
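As a concrete illustration of the ingredients above (our own sketch, not part of the analysis), the snippet below draws a Bernoulli sampling mask $\Omega\sim\textup{Bern}(\gamma)$ and evaluates the objective of (\ref{model}) for the additive Gaussian case mentioned in the remark, where $-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)$ reduces, up to an additive constant, to a masked least-squares term; the tensor-tensor product is computed via the FFT as in Section \ref{Prelim}.
\begin{verbatim}
# A hedged sketch (ours): Bernoulli mask and the penalized negative
# log-likelihood of the model above under additive Gaussian noise.
import numpy as np

def t_product(A, B):
    # slice-wise products in the Fourier domain along the third mode
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('irk,rjk->ijk', Af, Bf), axis=2))

def objective(A, B, Y, mask, sigma, lam):
    X = t_product(A, B)                     # candidate X = A * B
    resid = (Y - X)[mask]                   # observed entries only
    return 0.5 * np.sum(resid**2) / sigma**2 + lam * np.count_nonzero(B)

rng = np.random.default_rng(0)
n1, n2, n3, r, sigma, gamma = 10, 12, 5, 3, 0.1, 0.5
A = rng.random((n1, r, n3))                                     # entries in [0, 1]
B = rng.random((r, n2, n3)) * (rng.random((r, n2, n3)) < 0.3)   # sparse, nonnegative
Y = t_product(A, B) + sigma * rng.standard_normal((n1, n2, n3))
mask = rng.random((n1, n2, n3)) < gamma     # Omega ~ Bern(gamma)
print(objective(A, B, Y, mask, sigma, lam=0.1))
\end{verbatim}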
\section{Upper Bounds}\label{upperbound}
In this section, we establish a general upper error
bound of the sparse NTF and completion model from partial observations under a general class of noise in (\ref{model}),
and then derive the upper bounds of the special noise models
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
Now we establish the upper error bound of the
estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}),
whose proof follows the line of the proof of \cite[Theorem 1]{soni2016noisy},
see also \cite[Theorem 3]{raginsky2010compressed}.
The key technique of this proof is the well-known Kraft-McMillan inequality \cite{Brockway1957Two, kraft1949device}.
Then we construct the penalty of the underlying tensor $\mathcal{X}$ with
the tensor-tensor product of two nonnegative tensors, where one factor tensor is sparse.
\begin{theorem}\label{maintheo}
Suppose that
$\kappa\geq \max_{\mathcal{X}\in\Gamma}\max_{i,j,k} D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})$.
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$ and $4\leq m\leq n_1n_2n_3$.
Then, for any
$
\lambda\geq 4(\beta+2)\left( 1+\frac{2\kappa}{3}\right) \log\left((n_1\vee n_2)\sqrt{n_3}\right)
$,
the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
&~ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}[-2\log H(p_{\widetilde{\mathcal{X}}^{\lambda}},p_{\mathcal{X}^*})]}{n_1n_2n_3}\\
\leq & \ 3\min_{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\Gamma}\left\lbrace
\frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+
\left( \lambda+\frac{8\kappa(\beta+2) \log\left((n_1\vee n_2)\sqrt{n_3}\right)}{3}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace \\
& \ +\frac{8\kappa\log(m)}{m}.
\end{split}
\]
\end{theorem}
The detailed proof of Theorem \ref{maintheo} is left to Appendix \ref{ProoA}.
From Theorem \ref{maintheo}, we can observe that the
upper bound of $\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[-2\log H(p_{\widetilde{\mathcal{X}}^\lambda},p_{\mathcal{X}^*})\right]}{n_1n_2n_3}$ is of the order of $O(
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m}\log(n_1\vee n_2))$
if the KL divergence $D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})$
is not too large in the set $\Gamma$.
The explicit upper bounds with respect to $D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})$ in $\Gamma$ and $\kappa$
will be given for the observations with special noise distributions.
\begin{remark}
For the upper error bounds of estimators of observations with special noise distributions,
the main difference of proofs between the matrix case \cite{soni2016noisy} and the tensor case is to establish the upper bound of $\min_{\mathcal{X}\in\Gamma}
\|\mathcal{X}^*-\mathcal{X}\|_F^2$, where $\Gamma$ is defined as (\ref{TauSet}).
We need to estimate this bound based on the tensor-tensor product structure $\mathcal{X}=\mathcal{A} \diamond \mathcal{B}\in \Gamma$,
which can be obtained by Lemma \ref{xxappr}.
The key issue in Lemma \ref{xxappr} is to construct the surrogates of entries of the two factor tensors $\mathcal{A}^*, \mathcal{B}^*$ in the set $\Gamma$,
where $\mathcal{X}^*=\mathcal{A}^*\diamond \mathcal{B}^*$.
\end{remark}
In the following subsections, we establish the upper error bounds of the estimators for the observations with three special noise models,
including additive Gaussian noise,
additive Laplace noise, and Poisson observations.
By Theorem \ref{maintheo},
the main steps of proofs for the special noise models are to establish the lower bound of $-2\log(H(p_{\mathcal{X}^*},p_{\widetilde{\mathcal{X}}^\lambda}))$
and the upper bound of $\min_{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\Gamma}D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})$, respectively.
Before deriving the upper error bounds of the observations with special noise models,
we fix the choices of $\beta$ and $\lambda$ based on Theorem \ref{maintheo}, which are defined as follows:
\begin{equation}\label{beta}
\beta=\max\left\{3,1+\frac{\log(3rn_3^{1.5}b/c)}{\log(n_1\vee n_2)}\right\},
\end{equation}
and
\begin{equation}\label{lambda}
\lambda=4(\beta+2)\left( 1+\frac{2\kappa}{3}\right) \log\left(n_1\vee n_2\right).
\end{equation}
\subsection{Additive Gaussian Noise}
Assume that each entry of the underlying tensor
is corrupted by independent additive zero-mean Gaussian noise with standard deviation $\sigma>0$,
that is
\begin{equation}\label{Gauyom}
\mathcal{Y}_{ijk}=\mathcal{X}_{ijk}^*+\sigma\epsilon_{ijk},
\end{equation}
where the $\epsilon_{ijk}$ independently obey the standard normal distribution (i.e., $\epsilon_{ijk}\sim N(0,1)$) for any $(i,j,k)\in\Omega$.
Then the observation $\mathcal{Y}_\Omega$ can be regarded as a vector and
its joint probability density function in (\ref{obserPo}) is given as
\begin{equation}\label{YomeGasu}
p_{\mathcal{X}_\Omega^*}(\mathcal{Y}_{\Omega})
=\frac{1}{(2\pi\sigma^2)^{|\Omega|/2}}\exp\left(-\frac{1}{2\sigma^2}\|\mathcal{Y}_\Omega-\mathcal{X}_\Omega^*\|^2\right),
\end{equation}
where $|\Omega|$ denotes the number of entries of $\Omega$, i.e., $|\Omega|=m$.
Now we establish the explicit upper error bound of the estimator in (\ref{model}) with the observations $\mathcal{Y}_{\Omega}$ satisfying (\ref{Gauyom}).
\begin{Prop}\label{Gauuupp}
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$ and $4\leq m\leq n_1n_2n_3$.
Assume that $\beta$ and $\lambda$ are defined as (\ref{beta}) and (\ref{lambda}), respectively,
where $\kappa=\frac{c^2}{2\sigma^2}$ in (\ref{lambda}).
Suppose that $\mathcal{Y}_{\Omega}$ satisfies (\ref{Gauyom}).
Then the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}
\leq \frac{22c^2\log(m)}{m} + 16(3\sigma^2+2c^2)(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log(n_1\vee n_2) .
\end{split}
\]
\end{Prop}
The detailed proof of Proposition \ref{Gauuupp} is left to Appendix \ref{ProoB}.
From Proposition \ref{Gauuupp}, we can see that the upper bound of
$\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}$
for the observations with additive Gaussian noise
is of the order $O(
(\sigma^2+c^2)(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m})\log(n_1\vee n_2))$.
Now we give a comparison with a matrix-based method in \cite[Corollary 3]{soni2016noisy}
if we ignore the intrinsic structure of a tensor.
Note that we cannot compare with the matrix-based method directly since the underlying data is the tensor form.
However,
we can stack these frontal slices of the underlying tensor (with size $n_1\times n_2\times n_3$)
into a matrix, whose size is $n_1n_3\times n_2$.
In this case, the estimator $\mathcal{X}_1$ obtained
by the matrix-based method in \cite[Corollary 3]{soni2016noisy} satisfies
\begin{equation}\label{MBMGN}
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\mathcal{X}_1-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}
= O\left((\sigma^2+c^2)\left(\frac{\widetilde{r}n_1n_3
+\|\mathcal{B}^*\|_0}{m}\right)\log((n_1n_3)\vee n_2)\right),
\end{equation}
where $\widetilde{r}$ is the rank of the resulting matrix.
In particular, we choose $\widetilde{r}$ in the matrix-based method the same as $r$ in the tensor-based method with tensor-tensor product.
In real-world applications, $n_1n_3>n_2$ in general.
For example, if $n_3$ denotes the number of frames
in video datasets or the number of spectral bands in hyperspectral image datasets, $n_3$ is large.
Therefore, if $n_1n_3>n_2$, the upper error bound of the matrix-based method in (\ref{MBMGN})
is larger than that of the tensor-based method in Proposition \ref{Gauuupp}.
Especially, when $n_1=n_2$, the logarithmic factor
in Proposition \ref{Gauuupp} is $\log(n_1)$, while it is $\log(n_1n_3)=\log(n_1)+\log(n_3)$
in (\ref{MBMGN}).
\begin{remark}
We also compare the upper error bound with that of the noisy tensor completion in \cite{wang2019noisy},
which did not consider the sparse factor.
The upper error bound of the estimator $\mathcal{X}_t$ in \cite[Theorem 1]{wang2019noisy} satisfies
\begin{equation}\label{upbtc}
\frac{\|\mathcal{X}_t-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\leq C_t(\sigma^2\vee c^2)\left(\frac{r\max\{n_1,n_2\}n_3}{m}\right)\log((n_1+n_2)n_3)
\end{equation}
with high probability, where $C_t>0$ is a constant.
We note that the upper error bound of our method can be improved
potentially when $n_2>n_1$ and $\mathcal{B}^*$ is sparse.
In fact, the upper bound in Proposition \ref{Gauuupp} is of the order $O(\frac{rn_1n_3}{m}\log(n_2))$, while the
upper bound in \cite{wang2019noisy} is of the order $O(\frac{rn_2 n_3}{m}\log((n_1+n_2)n_3))$.
Moreover, the two upper bounds roughly coincide except for the logarithmic factor
when $\mathcal{B}^*$ is not sparse, i.e., $\|\mathcal{B}^*\|_0=rn_2n_3$.
However, when $n_1\geq n_2$, the improvement of the upper bound in Proposition \ref{Gauuupp} lies mainly in
the logarithmic factor, which is much smaller than that of (\ref{upbtc}).
\end{remark}
\begin{remark}
From Proposition \ref{Gauuupp}, we know that the upper error bound decreases when the number of observations increases.
In particular,
when we observe all entries of $\mathcal{Y}$, i.e., $m=n_1n_2n_3$,
Proposition \ref{Gauuupp} gives the upper error bound of the sparse NTF model
with tensor-tensor product in \cite{newman2019non, soltani2016tensor},
which has been used to construct tensor patch dictionary priors for CT and facial images, respectively.
This indicates that, in theory, the upper error bound of sparse NTF with tensor-tensor
product in \cite{newman2019non, soltani2016tensor} is lower than that of
sparse NMF, whereas Soltani et al. \cite{soltani2016tensor} only
showed experimentally that sparse NTF with tensor-tensor product performs better than sparse NMF.
\end{remark}
\subsection{Additive Laplace Noise}
Suppose that each entry of the underlying tensor
is corrupted by independently additive Laplace noise with
the location parameter being zero and the diversity being $\tau>0$ (denoted by Laplace($0,\tau$)),
that is
\begin{equation}\label{Lapayom}
\mathcal{Y}_{ijk}=\mathcal{X}_{ijk}^*+\epsilon_{ijk},
\end{equation}
where $\epsilon_{ijk}\sim$ Laplace($0,\tau$) for any $(i,j,k)\in\Omega$.
Then the joint probability density function of the observation $\mathcal{Y}_\Omega$
is given by
\begin{equation}\label{lapnoid}
p_{\mathcal{X}_{\Omega}^*}(\mathcal{Y}_{\Omega})
=\left(\frac{1}{2\tau}\right)^{|\Omega|}\exp\left(-\frac{\|\mathcal{Y}_{\Omega}-\mathcal{X}_{\Omega}^*\|_1}{\tau}\right).
\end{equation}
Now we establish the upper error bound of estimators in (\ref{model})
for the observations with additive Laplace noise.
\begin{Prop}\label{AddErp}
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$ and $4\leq m\leq n_1n_2n_3$.
Assume that $\mathcal{Y}_{\Omega}$ obeys (\ref{Lapayom}).
Let $\beta$ and $\lambda$ be defined as (\ref{beta})
and (\ref{lambda}), respectively, where $\kappa= \frac{c^2}{2\tau^2}$ in (\ref{lambda}).
Then the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
&~ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{3c^2(2\tau+c)^2\log(m)}{m\tau^2}+2\left(3+\frac{c^2}{\tau^2}\right)(2\tau+c)^2(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log\left(n_1\vee n_2\right).
\end{split}
\]
\end{Prop}
The detailed proof of Proposition \ref{AddErp} is delegated to Appendix \ref{ProoC}.
Similar to the case of observations with additive Gaussian noise, we compare
the upper error bound in Proposition \ref{AddErp} with that of \cite[Corollary 5]{soni2016noisy},
which satisfies
\begin{equation}\label{LapMaxm}
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\mathcal{X}_2-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}=O\left((\tau+c)^2\tau c \left(\frac{\widetilde{r}n_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log((n_1n_3)\vee n_2)\right),
\end{equation}
where $\mathcal{X}_2$ is the estimator by the matrix-based method and
$\widetilde{r}$ is the rank of the resulting matrix by matricizing the underlying tensor.
Therefore, the difference of the upper error bounds
between Proposition \ref{AddErp} and \cite[Corollary 5]{soni2016noisy}
is mainly on the logarithmic factor.
If $n_1n_3>n_2$, which holds in various real-world scenarios,
the logarithmic factor in (\ref{LapMaxm}) is $\log(n_1n_3)$,
while it is $\log(n_1\vee n_2)$ in Proposition \ref{AddErp}.
In particular, when $n_1=n_2$,
the logarithmic factor in (\ref{LapMaxm}) is $\log(n_1n_3)$,
while it is $\log(n_1)$ in Proposition \ref{AddErp}.
\subsection{Poisson Observations}
Suppose that each entry of $\mathcal{Y}_\Omega$ follows a Poisson distribution,
i.e.,
\begin{equation}\label{Posyijk}
\mathcal{Y}_{ijk}=\text{Poisson}(\mathcal{X}_{ijk}^*), \ \ \forall \ (i,j,k)\in\Omega,
\end{equation}
where $y=\text{Poisson}(x)$ denotes that $y$ obeys a Poisson distribution
with parameter $x>0$, each $\mathcal{Y}_{ijk}$ is independent and $\mathcal{X}_{ijk}^*>0$.
The joint probability mass function of $\mathcal{Y}_\Omega$ is given as follows:
\begin{equation}\label{Poissobse}
p_{\mathcal{X}_\Omega^*}(\mathcal{Y}_{\Omega})
=\prod_{(i,j,k)\in\Omega}\frac{(\mathcal{X}_{ijk}^*)^{\mathcal{Y}_{ijk}}\exp(-\mathcal{X}_{ijk}^*)}{\mathcal{Y}_{ijk}!}.
\end{equation}
Now we establish the upper error bound of estimators in (\ref{model}) for the observations obeying (\ref{Posyijk}),
which is mainly based on Theorem \ref{maintheo}.
The key step is to give the upper bound of $D(p_{\mathcal{X}^*}||p_{\mathcal{X}})$.
\begin{Prop}\label{uppPoissobs}
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$
and $4\leq m\leq n_1n_2n_3$.
Suppose that each entry of $\mathcal{X}^*$ is positive, i.e.,
$\zeta:=\min_{i,j,k}\mathcal{X}_{ijk}^*>0$,
and each entry of the candidate $\mathcal{X}\in\Gamma$ also satisfies $\mathcal{X}_{ijk}\geq \zeta$.
Let $\beta$ and $\lambda$ be defined as (\ref{beta})
and (\ref{lambda}), respectively, where $\kappa= {c}/{\zeta}$ in (\ref{lambda}).
Assume that $\mathcal{Y}_{\Omega}$ obeys the distribution in (\ref{Posyijk}).
Then the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
&~\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\|\widetilde{\mathcal{X}}^\lambda-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\\
\leq & \ \frac{4c^3(3+8\log(m))}{\zeta m}+
48c\left(1+\frac{4c^2}{3\zeta}\right)
\frac{(\beta+2) \left(rn_1n_3+\|\mathcal{B}^*\|_0\right)\log\left(n_1\vee n_2\right)}{m}.
\end{split}
\]
\end{Prop}
We leave the detailed proof of Proposition \ref{uppPoissobs} to Appendix \ref{ProoD}.
Similar to the case of observations with additive Gaussian noise,
we compare the upper error bound in Proposition \ref{uppPoissobs} with
that of the matrix-based method in \cite[Corollary 6]{soni2016noisy}.
The resulting upper error bound of the matrix-based method is of the order
\begin{equation}\label{PoUmbm}
O\left(c\left(1+\frac{c^2}{\zeta}\right)
\left(\frac{\widetilde{r}n_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log\left((n_1n_3)\vee n_2\right)\right),
\end{equation}
where $\widetilde{r}$ is the rank of the resulting matrix.
The main difference between the upper error bounds of the tensor-
and matrix-based methods is the logarithmic factor.
Hence,
if $n_1n_3>n_2$, which holds in various real-world scenarios,
the logarithmic factor in (\ref{PoUmbm}) is $\log(n_1n_3)$,
while it is $\log(n_1\vee n_2)$ in Proposition \ref{uppPoissobs}.
In particular, the logarithmic factor in Proposition \ref{uppPoissobs} is $\log(n_1)$ when $n_1=n_2$.
\begin{remark}
The constants of the upper bound in Proposition \ref{uppPoissobs}
differ somewhat from those of the matrix-based method in \cite[Corollary 6]{soni2016noisy},
which also influences the recovery error in practice.
\end{remark}
In addition,
Cao et al. \cite{cao2016Poisson} proposed a matrix-based model
for matrix completion with Poisson noise removal
and established an upper error bound for the estimator, where the low-rank
property is enforced through an upper bound on the nuclear norm of the matrix in a constraint set.
The error bound of the estimator $\mathcal{X}_3$ in \cite[Theorem 2]{cao2016Poisson} satisfies
\begin{equation}\label{Poiupbd}
\frac{\|\mathcal{X}_3-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\leq
C_p\left(\frac{c^2\sqrt{\widetilde{r}}}{\zeta}\right)\frac{n_1n_3+n_2}{m}\log^{\frac{3}{2}}(n_1n_2n_3)
\end{equation}
with high probability, where $C_p>0$ is a given constant.
Therefore, if $\log(n_1n_2n_3)>\widetilde{r}$ and $\mathcal{B}^*$ is sparse, the upper error bound
of the tensor-based method improves considerably on the logarithmic factor.
Specifically, when $n_1=n_2$ and $\log(n_1n_2n_3)>\widetilde{r}$,
the logarithmic factor of (\ref{Poiupbd}) is $\log(n_1n_2n_3)$,
while it is $\log(n_1)$ in Proposition \ref{uppPoissobs}.
Recently, Zhang et al. \cite{zhang2021low} proposed a method for low-rank tensor completion with Poisson observations, which combines a transformed tensor nuclear norm ball constraint with maximum likelihood estimation. When $m\geq \frac{1}{2}(n_1+n_2)n_3\log(n_1+n_2)$ and all entries of the multi-rank of the underlying tensor $\mathcal{X}^*$ equal $r_1$,
the upper error bound of the estimator $\mathcal{X}_{tc}$ in \cite[Theorem 3.1]{zhang2021low} is
$$
\frac{\|\mathcal{X}_{tc}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\leq
C_{tc}n_3\sqrt{\frac{(n_1+n_2)r_1}{m}}\log(n_1n_2n_3)
$$
with high probability, where $C_{tc}>0$ is a given constant. In this case, since $r_1$ is small in general and ${(n_1+n_2)r_1}/{m}<1$, the upper error bound in \cite[Theorem 3.1]{zhang2021low} is larger than that of Proposition \ref{uppPoissobs}.
\section{Minimax Lower Bounds}\label{lowerbou}
In this section, we study the sparse NTF and completion problem with incomplete and noisy observations,
and establish the lower bounds on the
minimax risk for the candidate estimator in the following set:
\begin{equation}\label{Ucnae}
\begin{split}
\mathfrak{U}(r,b,s):
=\Big\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\mathbb{R}_+^{n_1\times n_2\times n_3}:& \
\mathcal{A}\in\mathbb{R}_+^{n_1\times r \times n_3}, \ 0\leq \mathcal{A}_{ijk}\leq 1,\\
& ~ ~ \mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}, \ 0\leq \mathcal{B}_{ijk}\leq b, \ \|\mathcal{B}\|_0\leq s\Big\},
\end{split}
\end{equation}
which implies that the underlying tensor has a nonnegative factorization with tensor-tensor product and one factor tensor is sparse.
We only know the joint probability density function
or probability mass function of observations $\mathcal{Y}_\Omega$ given by (\ref{obserPo}).
Let $\widetilde{\mathcal{X}}$ be an estimator of $\mathcal{X}^*$.
The risk of estimators with incomplete observations is defined as
\begin{equation}\label{mirisk}
\mathfrak{R}_{\widetilde{\mathcal{X}}}
=\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}[\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2]}{n_1n_2n_3}.
\end{equation}
The worst-case performance of an estimator $\widetilde{\mathcal{X}}$
over the set $\mathfrak{U}(r,b,s)$ is defined as
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*\in\mathfrak{U}(r,b,s)}\mathfrak{R}_{\widetilde{\mathcal{X}}}.
$$
An estimator is said to achieve the minimax risk if its maximum risk is the smallest among all possible estimators.
Denote
\begin{equation}\label{deltasp}
\Delta:=\min\left\{1,\frac{s}{n_2n_3}\right\}.
\end{equation}
Now we establish the lower bounds of the minimax risk,
whose proof follows a similar line to that of \cite[Theorem 1]{sambasivan2018minimax} for noisy matrix completion,
see also \cite[Theorem 3]{klopp2017robust}.
The main technique is to define suitable packing sets
for the two factor tensors $\mathcal{A}$ and $\mathcal{B}$ in (\ref{Ucnae}) based on the tensor-tensor product.
Then we construct binary sets for the two packing sets in tensor form,
which are subsets of (\ref{Ucnae}).
The argument relies mainly on the general risk estimate based on the
KL divergence \cite[Theorem 2.5]{tsybakov2009}
and on measures of closeness between two probability distributions.
In this setting, we need to establish lower bounds on the Hamming distance between any two binary
sequences via the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009}.
First we establish the minimax lower bound with a general class of noise models in (\ref{obserPo}),
whose joint probability density function
or probability mass function of observations is given.
\begin{theorem}\label{lowbounMai}
Suppose that the KL divergence between the scalar probability density functions or probability mass functions with parameters $x$ and $y$ satisfies
\begin{equation}\label{DKLpq}
D(p(x)||q(y))\leq \frac{1}{2\nu^2}(x-y)^2,
\end{equation}
where $\nu>0$ depends on the distribution of observations in (\ref{obserPo}).
Assume that $\mathcal{Y}_\Omega$ follows from (\ref{obserPo}).
Let $r\leq\min\{n_1,n_2\}$ and
$r\leq s\leq rn_2n_3$.
Then there exist $C,\beta_c>0$ such that the minimax risk in (\ref{mirisk}) satisfies
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
C \min\left\{\Delta b^2,\beta_c^2\nu^2\left(\frac{s + rn_1n_3}{m}\right)\right\},
$$
where $\Delta$ is defined as (\ref{deltasp}).
\end{theorem}
From Theorem \ref{lowbounMai}, we know that the minimax lower bound matches
the upper error bound in Theorem \ref{maintheo} up to a logarithmic factor $\log(n_1\vee n_2)$, which
implies that the upper error bound in Theorem \ref{maintheo} is nearly optimal up to
this logarithmic factor.
\begin{remark}
For the minimax lower bound with general noise observations in Theorem \ref{lowbounMai}, the main differences of proofs between \cite{sambasivan2018minimax} and Theorem \ref{lowbounMai}
are the constructions of packing sets (the sets in (\ref{GenesubX}), (\ref{CXZG}), (\ref{GenesubXB})) for the set $\mathfrak{U}(r,b,s)$ in (\ref{Ucnae}),
where the tensor-tensor product is used in the set (\ref{GenesubX}).
Moreover, unlike the proof of \cite{sambasivan2018minimax},
we need to construct the subsets of the packing sets
(the sets in (\ref{SubXAA}) and (\ref{SubXBB})),
where the tensor in the subsets has special nonnegative tensor factorization structures with the tensor-tensor product form.
The special block tensors are constructed for one factor tensor and special sets with block structure tensors are
constructed for the other factor tensor (see (\ref{GeneXACsub}) and (\ref{GeneXbSubBb})).
\end{remark}
In the next subsections, we establish the explicit
lower bounds for the special noise distributions,
including additive Gaussian noise, additive Laplace noise, and Poisson observations,
where the condition (\ref{DKLpq}) can be satisfied easily in each case.
\subsection{Additive Gaussian Noise}
In this subsection, we establish the minimax lower bound for the observations with additive Gaussian noise,
i.e., $\mathcal{Y}_\Omega$ obeys (\ref{Gauyom}).
By Theorem \ref{lowbounMai}, the key issue is to give the explicit $\nu$ in (\ref{DKLpq}).
\begin{Prop}\label{ProupbG}
Assume that $\mathcal{Y}_\Omega$ follows from (\ref{Gauyom}).
Let $r\leq\min\{n_1,n_2\}$ and
$r\leq s\leq rn_2n_3$.
Then there exist $C,\beta_c>0$ such that the minimax risk in (\ref{mirisk}) satisfies
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
C \min\left\{\Delta b^2,\beta_c^2\sigma^2\left(\frac{s + rn_1n_3}{m}\right)\right\},
$$
where $\Delta$ is defined as (\ref{deltasp}).
\end{Prop}
\begin{remark}
From Proposition \ref{ProupbG}, we know that the minimax lower bound matches the upper error bound
in Proposition \ref{Gauuupp}
up to a logarithmic factor $\log(n_1\vee n_2)$, which implies that the upper error bound
in Proposition \ref{Gauuupp} is nearly optimal.
\end{remark}
\begin{remark}
When we observe all entries of $\mathcal{Y}$, i.e., $m=n_1n_2n_3$, Proposition \ref{ProupbG}
gives the minimax lower bound of sparse NTF with tensor-tensor product, which has been applied to dictionary learning \cite{newman2019non}.
\end{remark}
\subsection{Additive Laplace Noise}
In this subsection, we establish the minimax lower bound for the observations with additive Laplace noise,
i.e., $\mathcal{Y}_\Omega$ obeys (\ref{Lapayom}).
Similar to the case of additive Gaussian noise,
we only need to give $\nu$ in (\ref{DKLpq}) in Theorem \ref{lowbounMai}.
\begin{Prop}\label{lapUpb}
Assume that $\mathcal{Y}_\Omega$ follows from (\ref{Lapayom}).
Let $r\leq\min\{n_1,n_2\}$ and
$r\leq s\leq rn_2n_3$.
Then there exist $C,\beta_c>0$ such that the minimax risk in (\ref{mirisk}) satisfies
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
C \min\left\{\Delta b^2,\beta_c^2\tau^2\left(\frac{s + rn_1n_3}{m}\right)\right\}.
$$
\end{Prop}
\begin{remark}
It follows from Proposition \ref{lapUpb} that the rate attained
by our estimator in Proposition \ref{AddErp} is optimal up to a logarithmic factor $\log(n_1\vee n_2)$,
which is similar to the case of observations with additive Gaussian noise.
\end{remark}
\subsection{Poisson Observations}
In this subsection, we establish the minimax lower bound for Poisson observations,
i.e., $\mathcal{Y}_\Omega$ obeys (\ref{Posyijk}).
In a slight difference from the cases of additive Gaussian noise and Laplace noise,
we need to assume that all entries of the underlying tensor are strictly positive,
i.e., $\zeta:=\min_{i,j,k}\mathcal{X}_{ijk}^*>0$.
Suppose that $\zeta<b$.
Different from the candidate set (\ref{Ucnae}),
each entry of the candidate tensor is also required to be strictly positive.
The candidate set is defined as follows:
\begin{equation}\label{UbarPoissn}
\begin{split}
\widetilde{\mathfrak{U}}(r,b,s,\zeta):
=\Big\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\mathbb{R}_+^{n_1\times n_2\times n_3}:& \
\mathcal{X}_{ijk}\geq \zeta, \ \mathcal{A}\in\mathbb{R}_+^{n_1\times r \times n_3}, \ 0\leq \mathcal{A}_{ijk}\leq 1,\\
&~ ~ \mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}, \ 0\leq \mathcal{B}_{ijk}\leq b, \ \|\mathcal{B}\|_0\leq s\Big\}.
\end{split}
\end{equation}
Then we know that $\widetilde{\mathfrak{U}}(r,b,s,\zeta)\subseteq\mathfrak{U}(r,b,s)$.
Now the lower bound of candidate estimators for Poisson observations
is given in the following proposition, whose proof
follows a similar line to the matrix case in \cite[Theorem 6]{sambasivan2018minimax}.
For the sake of completeness, we give it here.
Similar to Theorem \ref{lowbounMai}, the main differences
between the matrix- and tensor-based methods are
the packing sets for the two nonnegative factorization factors $\mathcal{A}$ and $\mathcal{B}$.
We mainly use the results in \cite[Theorem 2.5]{tsybakov2009} for the constructed packing sets
and the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009} for the binary sets.
\begin{Prop}\label{Poisslow}
Suppose that $\mathcal{Y}_\Omega$ follows from (\ref{Posyijk}).
Let $r\leq\min\{n_1,n_2\}$ and
$n_2n_3< s\leq rn_2n_3$.
Then there exist $0<\widetilde{\beta}_c<1$ and $\widetilde{C}>0$ such that
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{U}}(r,b,s,\zeta)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
\widetilde{C}\min\left\{\widetilde{\Delta} b^2,\widetilde{\beta}_c^2\zeta\left(\frac{s-n_2n_3+rn_1n_3}{m}\right)\right\},
$$
where $\widetilde{\Delta}:=\min\{(1-\varsigma)^2, \Delta_1\}$
with $\varsigma:=\frac{\zeta}{b}$ and $\Delta_1:=\min\{1,\frac{s-n_2n_3}{n_2n_3}\}$.
\end{Prop}
\begin{remark}
From Proposition \ref{Poisslow},
we note that the lower bound of Poisson observations is of the order $O(\frac{s-n_2n_3+rn_1n_3}{m})$.
In particular, when $s\geq 2n_2n_3$, the lower bound in Proposition \ref{Poisslow}
matches the upper bound in Proposition \ref{uppPoissobs} up to a logarithmic factor $\log(n_1\vee n_2)$.
\end{remark}
\begin{remark}
For the minimax lower bound with Poisson observations in Proposition \ref{Poisslow}, the main differences of proofs between \cite{sambasivan2018minimax} and Proposition \ref{Poisslow}
are the constructions of packing sets (the sets in (\ref{PoscsubX1}), (\ref{CXZ}), (\ref{PoscsubX1B1})) for the set $\widetilde{\mathfrak{U}}(r,b,s,\zeta)$ in (\ref{UbarPoissn}),
where the tensor-tensor product is used in the set (\ref{PoscsubX1}).
Moreover, the subsets of the packing sets with two nonnegative factor tensors
(the sets in (\ref{PoissXA1A}) and (\ref{PoissXB1B})) need to be constructed, where the tensor-tensor product is also used in the two subsets.
Besides, in the two subsets,
the special block tensors for one factor tensor and special sets with block tensors for the other factor tensor (see the sets in (\ref{PoisXAsubC1}) and (\ref{PoisXBsubB1})) are constructed.
\end{remark}
\section{Optimization Algorithm}\label{OptimAlg}
In this section, we present an ADMM based algorithm \cite{Gabay1976A, wang2015global} to solve the model (\ref{model}).
Note that the feasible set $\Gamma$ in (\ref{TauSet}) is discrete, which makes the algorithm design difficult.
In order to use continuous optimization
techniques, the discreteness assumption on $\Gamma$ is dropped.
This may be justified by choosing a very large
value of $\vartheta$ and by noting that continuous optimization algorithms
use finite precision arithmetic when executed on a computer.
Now we consider solving the following relaxed model:
\begin{equation}\label{MidelSolv}
\begin{split}
\min_{\mathcal{X},\mathcal{A},\mathcal{B}} \ & -\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{B}\|_0, \\
\text{s.t.}\ & \mathcal{X}=\mathcal{A}\diamond \mathcal{B},\
0\leq \mathcal{X}_{ijk}\leq c, \
0\leq \mathcal{A}_{ijk}\leq 1, \
0\leq \mathcal{B}_{ijk}\leq b.
\end{split}
\end{equation}
Let $\mathfrak{X}'=\{\mathcal{X}\in\mathbb{R}_+^{n_1\times n_2\times n_3}:0\leq \mathcal{X}_{ijk}\leq c\}$,
$\mathfrak{A}=\{\mathcal{A}\in\mathbb{R}_+^{n_1\times r\times n_3}:0\leq \mathcal{A}_{ijk}\leq 1\}$,
$\mathfrak{B}'=\{\mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}:0\leq \mathcal{B}_{ijk}\leq b\}$,
and $\mathcal{Q}=\mathcal{X}, \mathcal{M}=\mathcal{A}$, $\mathcal{N}=\mathcal{B}, \mathcal{Z}=\mathcal{B}$.
Then problem (\ref{MidelSolv}) can be rewritten equivalently as
\begin{equation}\label{ModelOtheFo}
\begin{split}
\min_{\mathcal{X},\mathcal{A},\mathcal{B},\mathcal{Q},\mathcal{M},\mathcal{N},\mathcal{Z}} \ &
-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{N}\|_0
+\delta_{\mathfrak{X}'}(\mathcal{Q})+
\delta_{\mathfrak{A}}(\mathcal{M})+\delta_{\mathfrak{B}'}(\mathcal{Z}), \\
\text{s.t.}\ & \mathcal{X}=\mathcal{A}\diamond \mathcal{B},
\mathcal{Q} = \mathcal{X}, \mathcal{M}=\mathcal{A}, \mathcal{N}=\mathcal{B}, \mathcal{Z}=\mathcal{B},
\end{split}
\end{equation}
where $\delta_{\mathfrak{A}}(x)$ denotes the indicator function
of $\mathfrak{A}$, i.e., $\delta_{\mathfrak{A}}(x)=0$ if $x\in \mathfrak{A}$ and $+\infty$ otherwise.
The augmented Lagrangian function associated with (\ref{ModelOtheFo}) is defined as
\[
\begin{split}
&L(\mathcal{X},\mathcal{A},\mathcal{B},\mathcal{Q},\mathcal{M},\mathcal{N},\mathcal{Z},\mathcal{T}_i)\\
:=&
-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{N}\|_0
+\delta_{\mathfrak{X}'}(\mathcal{Q})+
\delta_{\mathfrak{A}}(\mathcal{M})+\delta_{\mathfrak{B}'}(\mathcal{Z})-\langle \mathcal{T}_1, \mathcal{X}-\mathcal{A}\diamond \mathcal{B} \rangle\\
&-\langle \mathcal{T}_2, \mathcal{Q}- \mathcal{X}\rangle - \langle\mathcal{T}_3, \mathcal{M}-\mathcal{A} \rangle
-\langle \mathcal{T}_4, \mathcal{N}-\mathcal{B} \rangle -\langle \mathcal{T}_5, \mathcal{Z}-\mathcal{B}\rangle \\ &+\frac{\rho}{2}\Big(\|\mathcal{X}-\mathcal{A}\diamond \mathcal{B}\|_F^2
+\|\mathcal{Q} - \mathcal{X}\|_F^2+\| \mathcal{M}-\mathcal{A}\|_F^2+\| \mathcal{N}-\mathcal{B}\|_F^2+\|\mathcal{Z}-\mathcal{B}\|_F^2\Big),
\end{split}
\]
where $\mathcal{T}_i$, $i=1,\ldots, 5$, are Lagrange multipliers and
$\rho>0$ is the penalty parameter. The iteration of ADMM is given as follows:
\begin{align}
&\mathcal{X}^{k+1}=\arg\min_{\mathcal{X}} L(\mathcal{X},\mathcal{A}^k,\mathcal{B}^k,\mathcal{Q}^k,\mathcal{M}^k,\mathcal{N}^k,\mathcal{Z}^k,\mathcal{T}_i^k) \nonumber \\ \label{Xk1}
&~~~~~~=\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}\left(\frac{1}{2}\left(\mathcal{Q}^k+\mathcal{A}^k\diamond \mathcal{B}^k+\frac{1}{\rho}(\mathcal{T}_1^k-\mathcal{T}_2^k)\right)\right), \\
&\mathcal{A}^{k+1}=\arg\min_{\mathcal{A}} L(\mathcal{X}^{k+1},\mathcal{A},\mathcal{B}^k,\mathcal{Q}^k,\mathcal{M}^k,\mathcal{N}^k,\mathcal{Z}^k,\mathcal{T}_i^k)\nonumber \\ \label{Ak1}
&~~~~~~=\left(\mathcal{M}^k+(\mathcal{X}^{k+1}-\frac{1}{\rho}\mathcal{T}_1^k)\diamond
(\mathcal{B}^k)^T-\frac{1}{\rho}\mathcal{T}_3^k\right)\diamond (\mathcal{B}^k\diamond (\mathcal{B}^k)^T+\mathcal{I})^{-1}, \\
&\mathcal{B}^{k+1}=\arg\min_{\mathcal{B}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B},\mathcal{Q}^k,\mathcal{M}^k,
\mathcal{N}^k,\mathcal{Z}^k,\mathcal{T}_i^k)\nonumber\\ \label{bK1}
&~~~~~~=\left((\mathcal{A}^{k+1})^T\diamond \mathcal{A}^{k+1}+2\mathcal{I}\right)^{-1}\diamond \nonumber \\
&~~~~~~~~~~
\left((\mathcal{A}^{k+1})^T\diamond\mathcal{X}^{k+1}+\mathcal{N}^k+\mathcal{Z}^k
-\frac{1}{\rho}\left((\mathcal{A}^{k+1})^T\diamond\mathcal{T}_1^k+
\mathcal{T}_4^k+\mathcal{T}_5^k\right)\right), \\
\label{QK1}
&\mathcal{Q}^{k+1}=\arg\min_{\mathcal{Q}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q},\mathcal{M}^{k},\mathcal{N}^k,
\mathcal{Z}^{k},\mathcal{T}_i^k)
=\Pi_{\mathfrak{X}'}\Big(\mathcal{X}^{k+1}+\frac{1}{\rho}\mathcal{T}_2^k\Big), \\ \label{Mk1}
&\mathcal{M}^{k+1}=\arg\min_{\mathcal{M}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q}^{k+1},\mathcal{M},\mathcal{N}^k,
\mathcal{Z}^{k},\mathcal{T}_i^k)
=\Pi_{\mathfrak{A}}\Big(\mathcal{A}^{k+1}+\frac{1}{\rho}\mathcal{T}_3^k\Big),\\ \label{Nk1}
&\mathcal{N}^{k+1}=\arg\min_{\mathcal{N}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q}^{k+1},\mathcal{M}^{k+1},\mathcal{N},
\mathcal{Z}^{k},\mathcal{T}_i^k)\nonumber \\
&~~~~~~~=\textup{Prox}_{\frac{\lambda}{\rho}\|\cdot\|_0}\Big(\mathcal{B}^{k+1}+\frac{1}{\rho}\mathcal{T}_4^k\Big), \\ \label{Zk1}
&\mathcal{Z}^{k+1}=\arg\min_{\mathcal{Z}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q}^{k+1},\mathcal{M}^{k+1},
\mathcal{N}^{k+1},\mathcal{Z},\mathcal{T}_i^k)
=\Pi_{\mathfrak{B}'}\Big(\mathcal{B}^{k+1}+\frac{1}{\rho}\mathcal{T}_5^k\Big), \\ \label{Tk12}
&\mathcal{T}_1^{k+1}=\mathcal{T}_1^k-\rho(\mathcal{X}^{k+1}-\mathcal{A}^{k+1}\diamond \mathcal{B}^{k+1}), \
\mathcal{T}_2^{k+1}= \mathcal{T}_2^{k}-\rho(\mathcal{Q}^{k+1}- \mathcal{X}^{k+1}), \\ \label{Tk34}
& \mathcal{T}_3^{k+1}= \mathcal{T}_3^k-\rho(\mathcal{M}^{k+1}-\mathcal{A}^{k+1}), \
\mathcal{T}_4^{k+1}=\mathcal{T}_4^{k}-\rho(\mathcal{N}^{k+1}-\mathcal{B}^{k+1}), \\ \label{Tk5}
&\mathcal{T}_5^{k+1}=\mathcal{T}_5^k-\rho(\mathcal{Z}^{k+1}-\mathcal{B}^{k+1}),
\end{align}
where $\Pi_{\mathfrak{X}'}(\mathcal{X}), \Pi_{\mathfrak{A}}(\mathcal{X})$, and
$\Pi_{\mathfrak{B}'}(\mathcal{X})$ denote
the projections of $\mathcal{X}$ onto the sets $\mathfrak{X}'$, $\mathfrak{A}$, and $\mathfrak{B}'$, respectively.
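For concreteness, these projections are simply entry-wise clippings onto boxes; a minimal Python/NumPy sketch is given below (the function name \texttt{proj\_box} is illustrative and not part of our notation):
\begin{verbatim}
import numpy as np

def proj_box(T, lower, upper):
    # Entry-wise projection of a tensor T onto the box [lower, upper].
    return np.clip(T, lower, upper)

# Pi_{X'}(T) = proj_box(T, 0.0, c)
# Pi_{A}(T)  = proj_box(T, 0.0, 1.0)
# Pi_{B'}(T) = proj_box(T, 0.0, b)
\end{verbatim}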
Now the ADMM for solving (\ref{ModelOtheFo}) is stated in Algorithm \ref{AlSNDM}.
\begin{algorithm}[htbp]
\caption{Alternating Direction Method of Multipliers for Solving (\ref{ModelOtheFo})} \label{AlSNDM}
{\bf Input}. Let $\rho>0$ be a given constant. Given $\mathcal{A}^0, \mathcal{B}^0, \mathcal{Q}^0,
\mathcal{M}^0, \mathcal{N}^0, \mathcal{Z}^0, \mathcal{T}_i^0, i=1,\ldots,5$.
For $k=0,1,\ldots,$ perform the following steps: \\
{\bf Step 1}. Compute $\mathcal{X}^{k+1}$ via (\ref{Xk1}). \\
{\bf Step 2}. Compute $\mathcal{A}^{k+1}$ by (\ref{Ak1}). \\
{\bf Step 3}. Compute $\mathcal{B}^{k+1}$ by (\ref{bK1}). \\
{\bf Step 4}. Compute $\mathcal{Q}^{k+1}, \mathcal{M}^{k+1},
\mathcal{N}^{k+1}, \mathcal{Z}^{k+1}$ by (\ref{QK1}), (\ref{Mk1}), (\ref{Nk1}), and (\ref{Zk1}), respectively. \\
{\bf Step 5}. Update $\mathcal{T}_1^{k+1}$, $\mathcal{T}_2^{k+1}$, $\mathcal{T}_3^{k+1}$,
$\mathcal{T}_4^{k+1}$, $\mathcal{T}_5^{k+1}$ via (\ref{Tk12}), (\ref{Tk34}), and (\ref{Tk5}), respectively. \\
{\bf Step 6}. If a termination criterion is not satisfied, set $k:=k+1$ and go to Step 1.
\end{algorithm}
Algorithm \ref{AlSNDM} is an ADMM based algorithm for solving a nonconvex optimization problem.
Although great efforts have been made on the convergence of ADMM
for nonconvex models in recent years \cite{wang2015global, Hong2016Convergence},
the existing convergence results cannot be applied to our model directly
since both the objective function and the constraints are nonconvex.
Moreover, the data-fitting term is nonsmooth when the observations are corrupted by additive Laplace noise,
which adds to the difficulty of analyzing the convergence of ADMM for our model.
\begin{remark}\label{theProMap}
In Algorithm \ref{AlSNDM}, one needs to compute the proximal mapping
$\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})$,
where $\mathcal{S}=\frac{1}{2}(\mathcal{Q}^k+\mathcal{A}^k\diamond \mathcal{B}^k+\frac{1}{\rho}(\mathcal{T}_1^k-\mathcal{T}_2^k))$.
In particular, for the additive Gaussian noise, additive Laplace noise,
and Poisson observations, the proximal mappings at $\mathcal{S}$ are given by
\begin{itemize}
\item Additive Gaussian noise:
$$
\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})
=\mathcal{P}_\Omega\left(\frac{\mathcal{Y}+2\rho\sigma^2\mathcal{S}}{1+2\rho\sigma^2}\right)+
\mathcal{P}_{\overline{\Omega}}(\mathcal{S}),
$$
where $\overline{\Omega}$ is the complementary set of $\Omega$.
\item Additive Laplace noise:
$$\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})
=\mathcal{P}_\Omega\left(\mathcal{Y}_{\Omega}+\textup{sign}
(\mathcal{S}-\mathcal{Y}_{\Omega})\circ\max\left\{|\mathcal{S}
-\mathcal{Y}_\Omega|-\frac{1}{2\rho\tau},0\right\}\right)+\mathcal{P}_{\overline{\Omega}}(\mathcal{S}),
$$
where $\textup{sign}(\cdot)$ denotes the signum function and $\circ$ denotes the point-wise product.
\item Poisson observations:
$$
\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})
=\mathcal{P}_\Omega\left(\frac{2\rho \mathcal{S}-\mathbb{I}+\sqrt{(2\rho \mathcal{S}-\mathbb{I})^2
+8\rho \mathcal{Y}}}{4\rho}\right)+\mathcal{P}_{\overline{\Omega}}(\mathcal{S}),
$$
where $\mathbb{I}\in\mathbb{R}^{n_1\times n_2\times n_3}$ denotes the tensor with all entries being $1$, and the square and square root are taken entry-wise.
\end{itemize}
\end{remark}
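For illustration, a minimal Python/NumPy sketch of these three proximal mappings is given below; it is a sketch under the formulas stated above rather than a definitive implementation, where \texttt{mask} denotes the Boolean indicator of $\Omega$ and the function names are ours:
\begin{verbatim}
import numpy as np

def prox_gaussian(S, Y, mask, rho, sigma):
    # Closed-form proximal step on the observed entries (additive Gaussian noise).
    out = S.copy()
    out[mask] = (Y[mask] + 2*rho*sigma**2*S[mask]) / (1 + 2*rho*sigma**2)
    return out

def prox_laplace(S, Y, mask, rho, tau):
    # Soft-thresholding of S towards Y on the observed entries (additive Laplace noise).
    out = S.copy()
    d = S[mask] - Y[mask]
    out[mask] = Y[mask] + np.sign(d)*np.maximum(np.abs(d) - 1.0/(2*rho*tau), 0.0)
    return out

def prox_poisson(S, Y, mask, rho):
    # Positive root of the quadratic stationarity condition (Poisson observations).
    out = S.copy()
    t = 2*rho*S[mask] - 1.0
    out[mask] = (t + np.sqrt(t**2 + 8*rho*Y[mask])) / (4*rho)
    return out
\end{verbatim}
Entries outside $\Omega$ are left equal to those of $\mathcal{S}$, in agreement with the $\mathcal{P}_{\overline{\Omega}}(\mathcal{S})$ term above.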
\begin{remark}
We also need to compute the proximal mapping of the tensor $\ell_0$ norm \cite{donoho1994ideal} in Algorithm \ref{AlSNDM}.
Note that the tensor $\ell_0$ norm is separable, so it suffices to consider its scalar form.
For any $t>0$, the proximal mapping of $t \|\cdot\|_0$ is
given by (see, e.g., \cite[Example 6.10]{beck2017first})
$$
\textup{Prox}_{t\|\cdot\|_0}(y)=
\left\{
\begin{array}{ll}
0, & \mbox{if} \ |y|<\sqrt{2t}, \\
\{0,y\}, & \mbox{if} \ |y|=\sqrt{2t}, \\
y, & \mbox{if}\ |y|>\sqrt{2t}.
\end{array}
\right.
$$
\end{remark}
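Since this mapping acts entry-wise, it can be sketched in Python/NumPy as follows (the tie at $|y|=\sqrt{2t}$ is resolved by keeping $y$, and the function name is illustrative):
\begin{verbatim}
import numpy as np

def prox_l0(Y, t):
    # Entry-wise hard thresholding: keep entries with |y| >= sqrt(2 t), zero out the rest.
    return np.where(np.abs(Y) >= np.sqrt(2.0 * t), Y, 0.0)
\end{verbatim}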
\begin{remark}
The ADMM based algorithm is developed to solve the model (\ref{ModelOtheFo}). However, the problem
(\ref{ModelOtheFo}) is nonconvex, and it is difficult to obtain its globally optimal solution in practice,
whereas the upper error bounds in Section \ref{upperbound} are established for global minimizers.
\end{remark}
\begin{remark}
The main cost of ADMM in Algorithm \ref{AlSNDM} is the tensor-tensor product and tensor inverse operations.
First, we consider the computational cost of the tensor-tensor product for two tensors $\mathcal{A}\in \mathbb{R}^{n_1\times r\times n_3}$ and $ \mathcal{B}\in \mathbb{R}^{r\times n_2\times n_3}$,
which is implemented by fast Fourier transform \cite{Kilmer2011Factorization}.
Applying the discrete Fourier transform
to an $n_3$-vector costs $O(n_3\log(n_3))$ operations.
After the Fourier transform along the tubes, we need to compute $n_3$ matrix products of sizes $n_1$-by-$r$ and $r$-by-$n_2$,
whose cost is $O(rn_1n_2n_3)$. Therefore, for the tensor-tensor product of $\mathcal{A}\in \mathbb{R}^{n_1\times r\times n_3}$ and $ \mathcal{B}\in \mathbb{R}^{r\times n_2\times n_3}$,
the total cost is $O(r(n_1+n_2)n_3\log(n_3)+rn_1n_2n_3)$.
Second, for the inverse of an $n\times n\times n_3$ tensor, one applies the fast Fourier transform along the third dimension and inverts each frontal slice
in the Fourier domain. Then the total cost of the tensor inverse is $O(n^2n_3\log(n_3)+n^3n_3)$.
For the ADMM, its main cost is to compute $\mathcal{A}^{k+1}$ and $\mathcal{B}^{k+1}$. The complexities of computing $\mathcal{A}^{k+1}$ and $\mathcal{B}^{k+1}$ are $O(n_2(r+n_1)n_3\log(n_3)+rn_1n_2n_3)$
and $O(n_1(r+n_2)n_3\log(n_3)+rn_1n_2n_3)$, respectively.
Note that $r\leq\min\{n_1,n_2\}$.
If we take one of the proximal mappings in Remark \ref{theProMap} for $\mathcal{X}^{k+1}$,
the total cost of ADMM is $O(n_1n_2n_3\log(n_3)+rn_1n_2n_3)$.
\end{remark}
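As an illustration of this FFT-based recipe, a minimal Python/NumPy sketch of the tensor-tensor product is given below (real input tensors are assumed and the function name is ours):
\begin{verbatim}
import numpy as np

def t_product(A, B):
    # Tensor-tensor product of A (n1 x r x n3) and B (r x n2 x n3):
    # FFT along the third dimension, n3 frontal-slice matrix products, inverse FFT.
    n1, r, n3 = A.shape
    _, n2, _ = B.shape
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((n1, n2, n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))
\end{verbatim}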
\section{Numerical Results}\label{NumeriExper}
In this section, some numerical experiments are conducted to
demonstrate the effectiveness of the proposed tensor-based method
for sparse NTF and completion with different noise observations,
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
We will compare the sparse NTF and completion method with the matrix-based method in \cite{soni2016noisy}.
The Karush-Kuhn-Tucker (KKT) conditions of (\ref{ModelOtheFo}) are given by
\begin{equation}\label{KKTCon}
\left\{
\begin{array}{ll}
0\in \partial_\mathcal{X}\left(-\log(p_{\mathcal{X}_{\Omega}}(\mathcal{Y}_{\Omega}))\right)
-\mathcal{T}_1+\mathcal{T}_2, \\
\mathcal{T}_1\diamond\mathcal{B}^T +\mathcal{T}_3=0, \ \mathcal{A}^T\diamond\mathcal{T}_1 +\mathcal{T}_4+\mathcal{T}_5 = 0, \\
0\in \partial{\delta_{\mathfrak{X}'}(\mathcal{Q})}-\mathcal{T}_2, \ 0\in \partial{\delta_{\mathfrak{A}}(\mathcal{M})}-\mathcal{T}_3, \\
0\in\partial(\lambda\|\mathcal{N}\|_0)-\mathcal{T}_4, \ 0\in\partial{\delta_{\mathfrak{B}'}(\mathcal{Z})}-\mathcal{T}_5,\\
\mathcal{X}=\mathcal{A}\diamond \mathcal{B},
\mathcal{Q} = \mathcal{X}, \mathcal{M}=\mathcal{A}, \mathcal{N}=\mathcal{B}, \mathcal{Z}=\mathcal{B},
\end{array}
\right.
\end{equation}
where $\partial f(x)$ denotes the subdifferential of $f$ at $x$.
Based on the KKT conditions in (\ref{KKTCon}),
we adopt the following relative residual to measure the accuracy:
$$
\eta_{max}:=\max\{\eta_1,\eta_2,\eta_3,\eta_4,\eta_5,\eta_6\},
$$
where
\[
\begin{split}
& \eta_1=\frac{\|\mathcal{X}-\textup{Prox}_{(-\log(p_{\mathcal{X}_{\Omega}}
(\mathcal{Y}_{\Omega})))}(\mathcal{T}_1-\mathcal{T}_2+\mathcal{X})\|_F}{1+\|\mathcal{X}\|_F
+\|\mathcal{T}_1\|_F+\|\mathcal{T}_2\|_F}, \
\eta_2 = \frac{\|\mathcal{Q}-\Pi_{\mathfrak{X}'}(\mathcal{T}_2+\mathcal{Q})\|_F}
{1+\|\mathcal{T}_2\|_F+\|\mathcal{Q}\|_F}, \\
&\eta_3 = \frac{\|\mathcal{M}-\Pi_{\mathfrak{A}}(\mathcal{T}_3
+\mathcal{M})\|_F}{1+\|\mathcal{T}_3\|_F+\|\mathcal{M}\|_F}, \
\eta_4 = \frac{\|\mathcal{N}-\textup{Prox}_{\lambda\|\cdot\|_0}
(\mathcal{T}_4+\mathcal{N})\|_F}{1+\|\mathcal{T}_4\|_F+\|\mathcal{N}\|_F}, \\
& \eta_5 = \frac{\|\mathcal{Z}-\Pi_{\mathfrak{B}'}(\mathcal{T}_5+\mathcal{Z})\|_F}{1+\|\mathcal{T}_5\|_F+\|\mathcal{Z}\|_F}, \
\eta_6 = \frac{\|\mathcal{X}-\mathcal{A}\diamond \mathcal{B}\|_F}{1+\|\mathcal{X}\|_F + \|\mathcal{A}\|_F + \|\mathcal{B}\|_F}.
\end{split}
\]
Algorithm \ref{AlSNDM} is terminated if $\eta_{max}\leq 10^{-4}$ or the number of iterations reaches the maximum of $300$.
In order to measure the quality of the recovered tensor,
the relative error (RE) is used to evaluate the performance of different methods,
which is defined as
$$
\textup{RE}=\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F}{\|\mathcal{X}^*\|_F},
$$
where $\widetilde{\mathcal{X}}$ and $\mathcal{X}^*$ are the recovered tensor and the ground-truth tensor, respectively.
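A direct Python/NumPy sketch of this error measure (with the Frobenius norm computed on the vectorized tensors) is:
\begin{verbatim}
import numpy as np

def relative_error(X_hat, X_star):
    # RE = ||X_hat - X*||_F / ||X*||_F for third-order tensors.
    return np.linalg.norm(X_hat - X_star) / np.linalg.norm(X_star)
\end{verbatim}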
We generate the nonnegative tensors $\mathcal{A}^*\in\mathbb{R}_+^{n_1\times r\times n_3}$
and $\mathcal{B}^*\in\mathbb{R}_{+}^{r\times n_2\times n_3}$ at random.
$\mathcal{A}^*$ is generated by the MATLAB command $\textup{rand}(n_1,r,n_3)$
and $\mathcal{B}^*$ is a nonnegative sparse tensor generated by the tensor toolbox
command $b\cdot\textup{sptenrand}([r,n_2,n_3],s)$ \cite{TTB_Software}, where $b$ is the magnitude of
$\mathcal{B}^*$ and $s$ is the sparse ratio.
Then $\mathcal{X}^*=\mathcal{A}^*\diamond \mathcal{B}^*$ and we choose $c=2\|\mathcal{X}^*\|_\infty$.
The size of the testing third-order tensors is $n_1=n_2=n_3=100$ in the following two experiments.
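For reference, a hypothetical Python analogue of this MATLAB construction (reusing the \texttt{t\_product} sketch given earlier, which is assumed to be in scope) reads:
\begin{verbatim}
import numpy as np

def generate_data(n1, n2, n3, r, s, b, seed=0):
    # A* has i.i.d. entries uniform on [0, 1]; B* keeps roughly a fraction s of
    # uniform entries scaled by b and zeros out the rest; X* = A* diamond B*.
    rng = np.random.default_rng(seed)
    A = rng.random((n1, r, n3))
    B = b * rng.random((r, n2, n3))
    B[rng.random((r, n2, n3)) >= s] = 0.0
    X = t_product(A, B)
    return A, B, X
\end{verbatim}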
The initial values also influence the performance of ADMM. The initial values
$\mathcal{A}^0, \mathcal{M}^0, \mathcal{T}_3^0$ and $\mathcal{B}^0, \mathcal{N}^0, \mathcal{Z}^0, \mathcal{T}_4^0, \mathcal{T}_5^0$ of ADMM
are chosen as random tensors of the same sizes as $\mathcal{A}^*$ and $\mathcal{B}^*$, respectively.
The initial values $\mathcal{Q}^0, \mathcal{T}_1^0, \mathcal{T}_2^0$ are chosen as the observations $\mathcal{Y}_\Omega$ on $\Omega$
and zero outside $\Omega$.
\begin{figure}[!t]
\centering
\subfigure[Gaussian]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{GaussianSR.eps}
\end{minipage}
}
\subfigure[Laplace]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{LaplaceSR.eps}
\end{minipage}
}
\subfigure[Poisson]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{PoissonSR.eps}
\end{minipage}
}
\caption{\small RE versus SR of different methods for different noise observations. (a) Gaussian. (b) Laplace. (c) Poisson.}\label{Diffrennoise}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[Gaussian]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffrankGaussian.eps}
\end{minipage}
}
\subfigure[Laplace]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffrankLaplace.eps}
\end{minipage}
}
\subfigure[Poisson]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffrankPoisson.eps}
\end{minipage}
}
\caption{\small RE versus $r$ of different methods for different noise observations. (a) Gaussian. (b) Laplace. (c) Poisson.}\label{DiffrennoiseR}
\end{figure}
As discussed in Section \ref{ProMod}, we aim to estimate the ground-truth
tensor $\mathcal{X}^*$. Note that the two factors may not be unique.
Therefore, we only compare the recovered tensor
$\widetilde{\mathcal{X}}$ with the ground-truth tensor $\mathcal{X}^*$ in the experiments.
In fact, we only establish upper error bounds for $\widetilde{\mathcal{X}}$,
and do not analyze error bounds for each factor tensor separately in theory.
First we analyze the recovery performance of different methods versus the sampling ratio (SR).
In Figure \ref{Diffrennoise}, we display the REs of the recovered tensors with different SRs
and $r$, where the sparse ratio $s=0.3$ and the observed entries are corrupted
by Gaussian noise, Laplace noise, and Poisson noise, respectively.
We set $\sigma=0.1$ and $\tau=0.1$ for Gaussian noise and Laplace noise, respectively,
and $r=10$ and $b=2$.
The SRs vary from $0.3$ to $0.9$ with step size $0.1$.
It can be seen from this figure that the relative errors decrease
when the sampling ratios increase for both matrix- and tensor-based methods.
Moreover, the relative errors obtained by the tensor-based method are lower than those
obtained by the matrix-based method.
Compared with the matrix-based method, the improvements of the
tensor-based method for additive Gaussian noise and Laplace noise
are much larger than those for Poisson observations;
the main reason is that the constants of the upper bound in Proposition \ref{uppPoissobs}
are slightly larger than those of the matrix-based method in \cite{soni2016noisy} for Poisson observations.
\begin{figure}[!t]
\centering
\subfigure[Gaussian]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffSizeGaussian.eps}
\end{minipage}
}
\subfigure[Laplace]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffSizeLaplace.eps}
\end{minipage}
}
\subfigure[Poisson]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffSizePoisson.eps}
\end{minipage}
}
\caption{\small RE versus size of tensors of different methods for different noise observations.
(a) Gaussian. (b) Laplace. (c) Poisson.}\label{DiffrennoiseSize}
\end{figure}
The recovery performance of different methods versus $r$ is also analyzed, where $\textup{SR} = 0.5$
and $r$ varies from $5$ to $40$ with step size $5$ for additive Gaussian noise and Laplace noise,
and from $5$ to $30$ with step size $5$ for Poisson observations.
Again we set $b=2$, and $\sigma=0.1$ and $\tau=0.1$ for Gaussian noise and Laplace noise, respectively.
It can be seen from Figure \ref{DiffrennoiseR} that
the relative errors
obtained by the tensor-based method are lower than those obtained
by the matrix-based method for the three noise models.
Besides, we can observe that the REs increase as $r$ increases for both the matrix- and tensor-based methods.
Again the tensor-based method performs better than the matrix-based method for different $r$ and noise models.
The improvements of the tensor-based method for additive Gaussian noise and Laplace noise are
much larger than those for Poisson observations.
In Figure \ref{DiffrennoiseSize}, we test different sizes $n:=n_1=n_2=n_3$ of tensors,
and vary $n$ from $50$ to $500$ with step size $50$, where $\textup{SR}=0.5$, the sparse ratio $s=0.3$, $r=10$, and $b=2$.
Here we set $\sigma=0.1$ for Gaussian noise and $\tau=0.1$ for Laplace noise, respectively.
It can be observed from this figure that the relative errors of the tensor-based method are smaller
than those of the matrix-based method for different noise distributions.
The relative errors of both matrix- and tensor-based methods decrease as $n$ increases.
Furthermore, for different sizes $n$ of the testing tensors, the improvements in relative error of the tensor-based method for
Gaussian noise and Laplace noise are much larger than those for Poisson observations.
\section{Concluding Remarks}\label{Conclu}
In this paper, we have studied the sparse NTF and completion
problem based on tensor-tensor product from partial and noisy observations,
where the observations are corrupted by general noise distributions.
A maximum likelihood estimate of partial observations is derived for the data-fitting term
and the tensor $\ell_0$ norm is adopted to enforce the sparsity of the sparse factor.
Then an upper error bound is established for a general class of noise models,
and is specialized to widely used noise distributions
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
Moreover, the minimax lower bounds are also established for the previous noise models,
which match the upper error bounds up to logarithmic factors.
An ADMM based algorithm is developed to solve the resulting model.
Preliminary numerical experiments are presented to demonstrate the superior performance
of the proposed tensor-based model compared with the matrix-based method \cite{soni2016noisy}.
It would be of great interest to study the upper error bounds of the convexification model
by using the tensor $\ell_1$ norm to replace the tensor $\ell_0$ norm for the sparse factor.
It would also be of great interest to establish the convergence of ADMM for our proposed model with general noise observations,
which is nonconvex and has multi-block variables.
Moreover, future work may extend the theory of sparse NTF and completion model with tensor-tensor product to that
with transformed tensor-tensor product under suitable unitary transformations \cite{song2020robust},
which is more effective than tensor-tensor product for robust tensor completion \cite{song2020robust, ng2020patch}
and data compression \cite{zeng2020decompositions}.
\section*{Acknowledgments}
The authors would like to thank
the associate editor and anonymous referees for their helpful comments and constructive suggestions on improving the quality of this paper.
\appendices
\section{Proof of Theorem \ref{maintheo}}\label{ProoA}
We begin by stating the following lemma which will be
useful in the proof of Theorem \ref{maintheo}.
\begin{lemma}\label{leapp}
Let
$\Gamma$ be a countable collection of candidate
reconstructions $\mathcal{X}$ of $\mathcal{X}^*$ in (\ref{TauSet}), with penalties $\textup{pen}(\mathcal{X})\geq 1$
satisfying $\sum_{\mathcal{X}\in\Gamma}2^{-\textup{pen}(\mathcal{X})}\leq 1$.
For any integer $4\leq m\leq n_1n_2n_3$, let $\Omega\sim \textup{Bern}(\gamma)$.
Moreover, the corresponding observations are obtained by
$p_{\mathcal{X}_{\Omega}^*}(\mathcal{Y}_{\Omega})=\prod_{(i,j,k)\in\Omega}p_{\mathcal{X}_{ijk}^*}(\mathcal{Y}_{ijk})$,
which are assumed to be conditionally independent given $\Omega$.
If
\begin{equation}\label{kappar}
\kappa\geq \max_{\mathcal{X}\in\Gamma}\max_{i,j,k} D(p_{\mathcal{X}_{ijk}^*}(\mathcal{Y}_{ijk})||p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})),
\end{equation}
then for any
$\xi\geq2\left(1+\frac{2\kappa}{3} \right) \log(2)$,
the following penalized maximum likelihood estimator
\begin{equation}\label{mxial}
\widetilde{\mathcal{X}}^\xi\in\arg\min_{\mathcal{X}\in\Gamma}\left\{-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\xi\cdot\textup{pen}(\mathcal{X})\right\},
\end{equation}
satisfies
$$
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\left[-2\log H(p_{\widetilde{\mathcal{X}}^\xi},p_{\mathcal{X}^*})\right]}{n_1n_2n_3}
\leq 3\cdot\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \xi+\frac{4\kappa\log(2)}{3}\right)\frac{\textup{pen}(\mathcal{X})}{m} \right\rbrace +\frac{8\kappa\log(m)}{m},
$$
where the expectation is taken with respect to the joint distribution of $\Omega$ and $\mathcal{Y}_{\Omega}$.
\end{lemma}
The proof of Lemma \ref{leapp} can be derived
easily from the matrix case \cite[Lemma 8]{soni2016noisy}; see also \cite{li1999estimation}.
In essence, the three steps of the proof in \cite[Lemma 8]{soni2016noisy} are carried out entry-wise for
the KL divergence, the logarithmic Hellinger affinity, and the maximum likelihood estimate.
Therefore, they extend to the tensor case easily.
For the sake of brevity, we omit the details here.
Next we give a lemma concerning the upper bound on the tensor
$\ell_\infty$ norm of the difference between a tensor
and its closest surrogate in $\Gamma$.
\begin{lemma}\label{xxappr}
Consider a candidate reconstruction of the form
$\widetilde{\mathcal{X}}^*=\widetilde{\mathcal{A}}^*\diamond \widetilde{\mathcal{B}}^*,$
where each entry of $\widetilde{\mathcal{A}}^*\in\mathfrak{L}$
is the closest discretized surrogate of the corresponding entry of $\mathcal{A}^*$,
and each entry of $\widetilde{\mathcal{B}}^*\in\mathfrak{D}$
is the closest discretized surrogate of the corresponding nonzero entry of $\mathcal{B}^*$, and zero otherwise.
Then
$$
\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_\infty\leq\frac{3rn_3b}{\vartheta},
$$
where $\vartheta$ is defined as (\ref{denu}).
\end{lemma}
\begin{IEEEproof}
Let $\widetilde{\mathcal{A}}^*=\mathcal{A}^*+\Delta_{\mathcal{A}^*}$
and $\widetilde{\mathcal{B}}^*=\mathcal{B}^*+\Delta_{\mathcal{B}^*}$.
Then
$$
\widetilde{\mathcal{X}}^*-\mathcal{X}^*
=\widetilde{\mathcal{A}}^*\diamond \widetilde{\mathcal{B}}^*-\mathcal{A}^*\diamond\mathcal{B}^*
=\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}+\Delta_{\mathcal{A}^*}\diamond\mathcal{B}^*+
\Delta_{\mathcal{A}^*}\diamond\Delta_{\mathcal{B}^*}.
$$
By the definitions of $\widetilde{\mathcal{A}}^*$ and $\widetilde{\mathcal{B}}^*$,
we know that
\begin{equation}\label{DeltaAB}
\|\Delta_{\mathcal{A}^*}\|_\infty\leq \frac{1}{\vartheta-1} \ \
\textup{and} \ \ \|\Delta_{\mathcal{B}^*}\|_\infty\leq \frac{b}{\vartheta-1}.
\end{equation}
By the definition of tensor-tensor product of two tensors, we deduce
\[
\begin{split}
\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}& =\textup{Fold}\left(\textup{Circ}\begin{pmatrix} (\mathbf{A^*})^{(1)} \\ (\mathbf{A^*})^{(2)}
\\ \vdots \\ (\mathbf{A^*})^{(n_3)} \end{pmatrix}\cdot \textup{Unfold}(\Delta_{\mathcal{B}^*})\right) \\
&=
\textup{Fold}\begin{pmatrix} (\mathbf{A^*})^{(1)}(\Delta_{\mathcal{B}^*})^{(1)}+(\mathbf{A^*})^{(n_3)}(\Delta_{\mathcal{B}^*})^{(2)}+\cdots +(\mathbf{A^*})^{(2)}(\Delta_{\mathcal{B}^*})^{(n_3)}
\\ (\mathbf{A^*})^{(2)}(\Delta_{\mathcal{B}^*})^{(1)}+ (\mathbf{A^*})^{(1)}(\Delta_{\mathcal{B}^*})^{(2)}+\cdots + (\mathbf{A^*})^{(3)}(\Delta_{\mathcal{B}^*})^{(n_3)}
\\ \vdots \\ (\mathbf{A^*})^{(n_3)}(\Delta_{\mathcal{B}^*})^{(1)} +(\mathbf{A^*})^{(n_3-1)}(\Delta_{\mathcal{B}^*})^{(2)} +\cdots +(\mathbf{A^*})^{(1)}(\Delta_{\mathcal{B}^*})^{(n_3)} \end{pmatrix}.
\end{split}
\]
It follows from (\ref{DeltaAB}) and $0\leq \mathcal{A}_{ijk}^*\leq 1$ that
$$
\|\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}\|_\infty\leq n_3\max_{i,j}\|(\mathbf{A^*})^{(i)} (\Delta_{\mathcal{B}^*})^{(j)}\|_\infty\leq \frac{rn_3b}{\vartheta-1}.
$$
Similarly, we can get that $\|\Delta_{\mathcal{A}^*}\diamond\mathcal{B}^*\|_\infty\leq \frac{rn_3b}{\vartheta-1}$ and $
\|\Delta_{\mathcal{A}^*}\diamond\Delta_{\mathcal{B}^*}\|_\infty\leq \frac{rn_3b}{(\vartheta-1)^2}$.
Therefore, we obtain that
\[
\begin{split}
\|\widetilde{\mathcal{A}}^*\diamond \widetilde{\mathcal{B}}^*-\mathcal{A}^*\diamond\mathcal{B}^*\|_\infty
& \leq \|\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}\|_\infty+
\|\Delta_{\mathcal{A}^*}\diamond\mathcal{B}^*\|_\infty+
\|\Delta_{\mathcal{A}^*}\diamond\Delta_{\mathcal{B}^*}\|_\infty \\
&\leq \frac{rn_3b}{\vartheta-1}+ \frac{rn_3b}{\vartheta-1}+\frac{rn_3b}{(\vartheta-1)^2}\\
&\leq\frac{3rn_3b}{\vartheta},
\end{split}
\]
where
the last inequality holds by $\vartheta\geq 8$ in (\ref{denu}).
The proof is completed.
\end{IEEEproof}
\begin{remark}
By the construction of $\widetilde{\mathcal{B}}^*$ in Lemma \ref{xxappr},
we know that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$,
which will be used to establish the upper bounds in the specific noise models.
\end{remark}
Now we return to prove Theorem \ref{maintheo}.
First, we need to define the penalty
$\textup{pen}(\mathcal{X})$ on the candidate reconstructions
$\mathcal{X}$ of $\mathcal{X}^*$ in the set $\Gamma$ such that the summability condition
\begin{equation}\label{penalt}
\sum_{\mathcal{X}\in\Gamma}2^{-\textup{pen}(\mathcal{X})}\leq1
\end{equation}
holds.
Notice that the condition in (\ref{penalt}) is the well-known Kraft-McMillan
inequality for coding entries of $\Gamma$ with an alphabet of size $2$
\cite{Brockway1957Two, kraft1949device}, see also \cite[Section 5]{cover2006elements}.
If we choose the penalties to be code lengths
for some uniquely decodable binary code of the entries $\mathcal{X}\in\Gamma$,
then (\ref{penalt}) is satisfied automatically \cite[Section 5]{cover2006elements},
which will provide the construction of the penalties.
Next we consider the discretized tensor factors $\mathcal{A}\in\mathfrak{L}$
and $ \mathcal{B}\in\mathfrak{D}$.
Fix an ordering of the indices of entries of $\mathcal{A}$
and encode the amplitude of each entry using $\log_2(\vartheta)$ bits.
Let $\widetilde{\vartheta}:=2^{\lceil\log_2(rn_2)\rceil}$.
Similarly, we encode each nonzero entry of $\mathcal{B}$ using $\log_2(\widetilde{\vartheta})$
bits to denote its location and $\log_2(\vartheta)$ bits for its amplitude.
By this construction, a total of $rn_1n_3\log_2(\vartheta)$ bits
are used to encode $\mathcal{A}$.
Note that $\mathcal{B}$ has $\|\mathcal{B}\|_0$ nonzero entries.
Then a total of $\|\mathcal{B}\|_0(\log_2(\widetilde{\vartheta})+\log_2(\vartheta))$ bits
are used to encode $\mathcal{B}$.
Therefore, we define the penalties $\textup{pen}(\mathcal{X})$ to all $\mathcal{X}\in\Gamma$
as the encoding lengths, i.e.,
$$
\textup{pen}(\mathcal{X})=rn_1n_3\log_2(\vartheta)
+\|\mathcal{B}\|_0(\log_2(\widetilde{\vartheta})+\log_2(\vartheta)).
$$
By the above construction, it is easy to see that such codes are uniquely decodable.
Thus, by Kraft-McMillan inequality \cite{Brockway1957Two, kraft1949device},
we get that $\sum_{\mathcal{X}\in\Gamma}2^{-\textup{pen}(\mathcal{X})}\leq1$.
Let $\lambda = \xi(\log_2(\vartheta)+\log_2(\widetilde{\vartheta}))$,
where $\xi$ is the regularization parameter in (\ref{mxial}).
Note that
$
\xi\cdot\textup{pen}(\mathcal{X})=\lambda\|\mathcal{B}\|_0+\xi rn_1n_3\log_2(\vartheta).
$
Then the minimizer $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model})
is the same as the minimizer $\widetilde{\mathcal{X}}^\xi$ in (\ref{mxial}).
Therefore, by Lemma \ref{leapp},
for any $\xi\geq2\left(1+\frac{2\kappa}{3} \right) \log(2)$,
we get that
\begin{equation}\label{lamxl}
\begin{split}
&\ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\left[-2\log H(p_{\widetilde{\mathcal{X}}^\lambda},p_{\mathcal{X}^*})\right]}{n_1n_2n_3}\\
\leq &\ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \xi+\frac{4\kappa\log(2)}{3}\right)\frac{\textup{pen}(\mathcal{X})}{m} \right\rbrace +\frac{8\kappa\log(m)}{m}\\
\leq & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \xi+\frac{4\kappa\log(2)}{3}\right)\left(\log_2(\vartheta)+\log_2(\widetilde{\vartheta})\right) \frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace +\frac{8\kappa\log(m)}{m} \\
= & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \lambda+\frac{4\kappa\log(2)}{3}\left(\log_2(\vartheta)+\log_2(\widetilde{\vartheta})\right)\right) \frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace +\frac{8\kappa\log(m)}{m},
\end{split}
\end{equation}
where the second inequality holds by the definition of $\textup{pen}(\mathcal{X})$ and the nonnegativity of $\log_2(\vartheta)$ and $\log_2(\widetilde{\vartheta})$.
Note that
\begin{equation}\label{lohvvb}
\log_2(\vartheta)+\log_2(\widetilde{\vartheta})
\leq 2\beta\log_2\left(n_1\vee n_2\right)+2\log_2(rn_2)
\leq\frac{ 2(\beta+2)\log\left(n_1\vee n_2\right)}{\log(2)},
\end{equation}
where the last inequality follows from $rn_2\leq (n_1\vee n_2)^2$.
Hence, for any
$$
\lambda\geq 4(\beta+2)\left( 1+\frac{2\kappa}{3}\right) \log\left(n_1\vee n_2\right),
$$
which is equivalent to $\xi\geq2\left(1+\frac{2\kappa}{3} \right) \log(2)$,
we have
\[
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}[-2\log H(p_{\widetilde{\mathcal{X}}^{\lambda}},p_{\mathcal{X}^*})]}{n_1n_2n_3}\\
\leq & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace
\frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+
\left( \lambda+\frac{8\kappa(\beta+2) \log\left(n_1\vee n_2\right)}{3}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace +\frac{8\kappa\log(m)}{m},
\end{split}
\]
where the inequality follows from (\ref{lamxl}) and (\ref{lohvvb}).
This completes the proof.
\section{Proof of Proposition \ref{Gauuupp}}\label{ProoB}
By Theorem \ref{maintheo}, we only need to give
the lower bound of $\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
[-2\log H(p_{\widetilde{\mathcal{X}}^{\lambda}},p_{\mathcal{X}^*})]$
and upper bound of $\min_{\mathcal{X}\in\Gamma}\lbrace
\frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}\rbrace$, respectively.
It follows from \cite[Exercise 15.13]{wainwright2019high} that
the KL divergence of two Gaussian distributions is $ D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})=(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2/(2\sigma^2)$,
which yields
\begin{equation}\label{KLGaussian}
D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})=\frac{\|\mathcal{X}-\mathcal{X}^*\|_F^2}{2\sigma^2}.
\end{equation}
Note that $ D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})
\leq c^2/(2\sigma^2)$ for any $\mathcal{X}\in\Gamma$ and $i,j,k$.
We can choose $\kappa=c^2/(2\sigma^2)$ based on the assumption in Theorem \ref{maintheo}.
Moreover, by \cite[Appendix C]{carter2002deficiency}, we get that
$$
-2\log(H(p_{\mathcal{X}_{ijk}^*},p_{\widetilde{\mathcal{X}}_{ijk}^\lambda}))
=(\widetilde{\mathcal{X}}_{ijk}^\lambda-\mathcal{X}_{ijk}^*)^2/(4\sigma^2),
$$
which yields that
$
-2\log(H(p_{\mathcal{X}^*},p_{\widetilde{\mathcal{X}}^\lambda}))=\frac{\|\widetilde{\mathcal{X}}^\lambda-\mathcal{X}^*\|_F^2}{4\sigma^2}.
$
Therefore, by Theorem \ref{maintheo}, we get that
\begin{equation}\label{ErrobG}
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace
\frac{2\|\mathcal{X}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}+
4\sigma^2\left( \lambda+\frac{4c^2(\beta+2) \log(n_1\vee n_2)}{3\sigma^2}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace \\
& +\frac{16c^2\log(m)}{m}.
\end{split}
\end{equation}
Next we need to establish an upper bound of $\min_{\mathcal{X}\in\Gamma}\|\mathcal{X}-\mathcal{X}^*\|_F^2$.
Note that
\begin{equation}\label{vuppb}
\begin{split}
\vartheta&=2^{\lceil\log_2(n_1\vee n_2)^\beta\rceil}\geq 2^{\beta\log_2(n_1\vee n_2)}\\
&\geq2^{\log_2(n_1\vee n_2)}\cdot2^{\log_2(n_1\vee n_2)\frac{\log(3rn_3^{1.5}b/c)}{\log(n_1\vee n_2)}}\\
&=\frac{3(n_1\vee n_2)r{n_3}^{1.5}b}{c},
\end{split}
\end{equation}
where the second inequality holds by (\ref{beta}).
Since $n_1,n_2\geq 2$, we have $\vartheta\geq\frac{6r{n_3}^{1.5}b}{c}$,
which implies that
$\|\widetilde{\mathcal{X}}^*\|_\infty\leq \frac{3rn_3b}{\vartheta}+\|\mathcal{X}^*\|_\infty\leq c$
by Lemma \ref{xxappr}, where $\widetilde{\mathcal{X}}^*$ is defined in Lemma \ref{xxappr}.
Therefore, $\widetilde{\mathcal{X}}^*=\widetilde{\mathcal{A}}^*\diamond\widetilde{\mathcal{B}}^*\in\Gamma$.
By Lemma \ref{xxappr}, we have that
\begin{equation}\label{lowxtgm}
\min_{\mathcal{X}\in\Gamma}\left\{\frac{2\|\mathcal{X}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\right\}
\leq \frac{2\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_F^2}{n_1n_2n_3}
\leq \frac{18(rn_3b)^2}{\vartheta^2}\leq \frac{2c^2}{m},
\end{equation}
where the last inequality follows from the fact $m\leq(n_1\vee n_2)^2n_3$ and (\ref{vuppb}).
Moreover, it follows from the construction of $\widetilde{\mathcal{B}}^*$ in Lemma \ref{xxappr}
that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$.
As a consequence, combining (\ref{lambda}), (\ref{ErrobG}) with (\ref{lowxtgm}), we obtain that
\[
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{6c^2}{m}+
16(3\sigma^2+2c^2)(\beta+2) \log((n_1\vee n_2)\sqrt{n_3})
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right) +\frac{16c^2\log(m)}{m}\\
\leq & \ \frac{22c^2\log(m)}{m} + 16(3\sigma^2+2c^2)(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log(n_1\vee n_2) .
\end{split}
\]
This completes the proof.
\section{Proof of Proposition \ref{AddErp}}\label{ProoC}
A random variable is said to have a Laplace distribution, denoted by {Laplace}($\mu, b$) with parameters $b>0, \mu$,
if its probability density function is
$
f(x|\mu,b)=\frac{1}{2b}\exp(-\frac{|x-\mu|}{b}).
$
Before deriving the upper bound of observations with additive Laplace noise,
we establish the KL divergence and logarithmic Hellinger affinity between two distributions.
\begin{lemma}\label{KLHeLap}
Let $p(x)\sim\textup{Laplace}(\mu_1, b_1)$ and $q(x)\sim\textup{Laplace}(\mu_2, b_2)$. Then
$$
D(p(x)||q(x))=\log\left(\frac{b_2}{b_1}\right)-1+\frac{|\mu_2-\mu_1|}{b_2}+\frac{b_1}{b_2}
\exp\left(-\frac{|\mu_2-\mu_1|}{b_1}\right).
$$
Moreover, if $b_1=b_2$, then
$$
-2\log(H(p(x),q(x)))=\frac{|\mu_2-\mu_1|}{b_1}-2\log\left(1+\frac{|\mu_2-\mu_1|}{2b_1}\right).
$$
\end{lemma}
\begin{IEEEproof}
By the definition of the KL divergence of $p(x)$ from $q(x)$, we deduce
\[
\begin{split}
D(p(x)||q(x))&=\mathbb{E}_p\left[\log(p(x))-\log(q(x))\right]\\
&=\log\left(\frac{b_2}{b_1}\right)-\frac{1}{b_1}\mathbb{E}_p[|x-\mu_1|]+\frac{1}{b_2}\mathbb{E}_p[|x-\mu_2|].
\end{split}
\]
Without loss of generality, we assume that $\mu_1<\mu_2$.
By direct calculations, one can get that $\mathbb{E}_p[|x-\mu_1|]=b_1$ and $\mathbb{E}_p[|x-\mu_2|]=\mu_2-\mu_1+b_1\exp(-\frac{\mu_2-\mu_1}{b_1})$.
Then, we get that
$$
D(p(x)||q(x))=\log\left(\frac{b_2}{b_1}\right)-1+\frac{\mu_2-\mu_1}{b_2}+\frac{b_1\exp(-\frac{\mu_2-\mu_1}{b_1})}{b_2}.
$$
Therefore, by the symmetry, for any $\mu_1,\mu_2$, we have
$$
D(p(x)||q(x))=\log\left(\frac{b_2}{b_1}\right)-1+\frac{|\mu_2-\mu_1|}{b_2}+\frac{b_1\exp\big(-\frac{|\mu_2-\mu_1|}{b_1}\big)}{b_2}.
$$
Moreover, if $b_1=b_2$, the Hellinger affinity is
$$
H(p(x),q(x))=\frac{1}{2b_1}\int_{-\infty}^{+\infty}\exp\left(-\frac{|x-\mu_1|}{2b_1}-\frac{|x-\mu_2|}{2b_1}\right)dx.
$$
With simple manipulations, we obtain
$$
-2\log(H(p(x),q(x)))=\frac{|\mu_2-\mu_1|}{b_1}-2\log\left(1+\frac{|\mu_2-\mu_1|}{2b_1}\right).
$$
The proof is completed.
\end{IEEEproof}
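As a quick numerical check of Lemma \ref{KLHeLap}, both closed forms can be verified by Monte Carlo sampling; the following Python sketch (assuming only NumPy; the parameter values and sample size are illustrative) estimates $D(p(x)||q(x))$ and $-2\log(H(p(x),q(x)))$ from samples of $p$ and compares them with the formulas above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu1, b1, mu2, b2 = 0.3, 1.2, 1.0, 1.2   # illustrative parameters (b1 = b2)
x = rng.laplace(loc=mu1, scale=b1, size=2_000_000)   # samples from p

log_p = -np.log(2 * b1) - np.abs(x - mu1) / b1
log_q = -np.log(2 * b2) - np.abs(x - mu2) / b2

# KL divergence D(p||q) = E_p[log p - log q] vs. the closed form of the lemma
kl_mc = np.mean(log_p - log_q)
kl_cf = (np.log(b2 / b1) - 1 + np.abs(mu2 - mu1) / b2
         + (b1 / b2) * np.exp(-np.abs(mu2 - mu1) / b1))

# Hellinger affinity H(p,q) = E_p[sqrt(q(x)/p(x))]; compare -2 log H with the lemma
aff_mc = np.mean(np.exp(0.5 * (log_q - log_p)))
neg2logH_cf = (np.abs(mu2 - mu1) / b1
               - 2 * np.log(1 + np.abs(mu2 - mu1) / (2 * b1)))

print(kl_mc, kl_cf)                       # the two values should nearly agree
print(-2 * np.log(aff_mc), neg2logH_cf)   # likewise for the affinity
\end{verbatim}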
Now we return to prove Proposition \ref{AddErp}.
By Lemma \ref{KLHeLap}, we have that
\begin{equation}\label{KLLappo}
D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})
=\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{\tau}
-\left(1-\exp\left(-\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{\tau}\right)\right)
\leq\frac{1}{2\tau^2}(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2,
\end{equation}
where the inequality follows from the fact that $e^{-x}\leq 1-x+\frac{x^2}{2}$ for $x>0$.
Hence, we choose $\kappa=\frac{c^2}{2\tau^2}$.
Notice that
\[
\begin{split}
-2\log(H(p_{\mathcal{X}_{ijk}^*},p_{\mathcal{X}_{ijk}}))
&=\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{\tau}
-2\log\left(1+\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{2\tau}\right)\\
&\geq \frac{2(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2}{(2\tau+c)^2},
\end{split}
\]
where the last inequality follows from a Taylor expansion,
see also the proof of Corollary 5 in \cite{soni2016noisy}.
Therefore, we have
$D(p_{\mathcal{X}^*}||p_{\mathcal{X}})\leq \frac{1}{2\tau^2}\|\mathcal{X}^*-\mathcal{X}\|_F^2$
and
\[
\begin{split}
-2\log(H(p_{\mathcal{X}^*},p_{\mathcal{X}}))\geq \frac{2}{(2\tau+c)^2}\|\mathcal{X}^*-\mathcal{X}\|_F^2.
\end{split}
\]
It follows from Theorem \ref{maintheo} that
\begin{equation}\label{ErLLP}
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{3(2\tau+c)^2}{2}\cdot\min_{\mathcal{X}\in\Gamma}\left\lbrace
\frac{\|\mathcal{X}^*-\mathcal{X}\|_F^2}{2\tau^2 n_1n_2n_3}+
\left( \lambda+\frac{4c^2(\beta+2) \log(n_1\vee n_2)}{3\tau^2}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace \\
&\ +\frac{2c^2(2\tau+c)^2\log(m)}{m\tau^2}.
\end{split}
\end{equation}
For the discretized surrogate $\widetilde{\mathcal{X}}^*$ of $\mathcal{X}^*$, by Lemma \ref{xxappr}, we get
$$
\min_{\mathcal{X}\in\Gamma}\left\{
\frac{\|\mathcal{X}^*-\mathcal{X}\|_F^2}{2\tau^2 n_1n_2n_3}\right\}
\leq \frac{\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_F^2}{2\tau^2 n_1n_2n_3}
\leq \frac{(3rn_3b)^2}{2\tau^2\vartheta^2}
\leq \frac{c^2}{2\tau^2(n_1\vee n_2)^2n_3}\leq\frac{c^2}{2\tau^2m},
$$
where the third inequality follows from (\ref{vuppb}) and the last inequality
follows from the fact that $m\leq (n_1\vee n_2)^2n_3$.
Note that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$ by the construction of $\widetilde{\mathcal{X}}^*$ in Lemma \ref{xxappr}.
Combining (\ref{ErLLP}) with (\ref{lambda}), we obtain that
\[
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{3c^2(2\tau+c)^2}{4m\tau^2}
+2\left(3+\frac{c^2}{\tau^2}\right)(2\tau+c)^2(\beta+2)\log\left(n_1\vee n_2\right)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right) \\
&+\frac{2c^2(2\tau+c)^2\log(m)}{m\tau^2}\\
\leq & \ \frac{3c^2(2\tau+c)^2\log(m)}{m\tau^2}+2\left(3+\frac{c^2}{\tau^2}\right)(2\tau+c)^2(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log\left(n_1\vee n_2\right).
\end{split}
\]
This completes the proof.
\section{Proof of Proposition \ref{uppPoissobs}}\label{ProoD}
For the KL divergence of Poisson observations,
it follows from \cite[Lemma 8]{cao2016Poisson} that
\begin{equation}\label{DKLPoi}
D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})
\leq \frac{1}{\mathcal{X}_{ijk}}(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2\leq \frac{1}{\zeta}(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2.
\end{equation}
Then we can choose $\kappa=\frac{c^2}{\zeta}$.
Note that
\[
\begin{split}
(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2
=\left(\left(\sqrt{\mathcal{X}_{ijk}^*}-
\sqrt{\mathcal{X}_{ijk}}\right)\left(\sqrt{\mathcal{X}_{ijk}^*}+\sqrt{\mathcal{X}_{ijk}}\right)\right)^2
\leq 4c(\sqrt{\mathcal{X}_{ijk}^*}-
\sqrt{\mathcal{X}_{ijk}})^2.
\end{split}
\]
Therefore, by \cite[Appendix IV]{raginsky2010compressed}, we have
\begin{equation}\label{PoAil}
\begin{split}
-2\log(H(p_{\mathcal{X}_{ijk}^*},p_{\mathcal{X}_{ijk}}))
&=\left(\sqrt{\mathcal{X}_{ijk}^*}-\sqrt{\mathcal{X}_{ijk}}\right)^2\geq \frac{1}{4c}\left(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}\right)^2.
\end{split}
\end{equation}
Therefore, we get
$D(p_{\mathcal{X}^*}||p_{\mathcal{X}})\leq \frac{\|\mathcal{X}^*-\mathcal{X}\|_F^2}{\zeta}$
and
$
-2\log(H(p_{\mathcal{X}^*},p_{\mathcal{X}}))\geq \frac{1}{4c}\|\mathcal{X}^*-\mathcal{X}\|_F^2.
$
For the discretized surrogate $\widetilde{\mathcal{X}}^*
=\widetilde{\mathcal{A}}^*\diamond\widetilde{\mathcal{B}}^*$ of $\mathcal{X}^*$, by Lemma \ref{xxappr}, we have
\begin{equation}\label{KLPois}
\min_{\mathcal{X}\in\Gamma}\left\{\frac{\|\mathcal{X}-\mathcal{X}^*\|_F^2}{\zeta n_1n_2n_3}\right\}
\leq \frac{\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_F^2}{\zeta n_1n_2n_3}
\leq \frac{9(rn_3b)^2}{\zeta\vartheta^2}\leq \frac{c^2}{\zeta(n_1\vee n_2)^2n_3}\leq \frac{c^2}{\zeta m},
\end{equation}
where the third inequality follows from (\ref{vuppb}).
Notice that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$.
Therefore, combining (\ref{PoAil}), (\ref{KLPois}),
and Theorem \ref{maintheo}, we conclude
\[
\begin{split}
& \ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\|\widetilde{\mathcal{X}}^\lambda-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\\
\leq & \ \frac{12c^3}{\zeta m}+
12c\left( \lambda+\frac{8\kappa(\beta+2) \log\left(n_1\vee n_2\right)}{3}\right)
\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m} +\frac{32c^3\log(m)}{\zeta m}\\
=& \ \frac{4c^3(3+8\log(m))}{\zeta m}+
48c\left(1+\frac{4c^2}{3\zeta}\right)
\frac{(\beta+2) \left(rn_1n_3+\|\mathcal{B}^*\|_0\right)\log\left(n_1\vee n_2\right)}{m},
\end{split}
\]
where the equality follows from (\ref{lambda}).
The proof is completed.
\section{Proof of Theorem \ref{lowbounMai}}\label{ProoE}
Let
\begin{equation}\label{GenesubX}
\mathfrak{X}:=\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}:
\mathcal{A}\in\mathfrak{C},\mathcal{B}\in\mathfrak{B}\},
\end{equation}
where $\mathfrak{C}\subseteq\mathbb{R}^{n_1\times r\times n_3}$ is defined as
\begin{equation}\label{CXZG}
\mathfrak{C}:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}: \ \mathcal{A}_{ijk}\in\{0,1,a_0\}\right\} \ \ \text{with} \ \ a_0=\min\left\{1, \ \frac{\beta_a\nu}{b\sqrt{\Delta}}\sqrt{\frac{rn_1n_3}{m}}\right\},
\end{equation}
and $\mathfrak{B}$ is defined as
\begin{equation}\label{GenesubXB}
\mathfrak{B}:=\left\{\mathcal{B}\in\mathbb{R}^{r\times n_2\times n_3}: \
\mathcal{B}_{ijk}\in\{0,b,b_0\}, \|\mathcal{B}\|_0\leq s\right\}
\ \ \text{with} \ \ b_0=\min\left\{b, \ \frac{\beta_b\nu}{\sqrt{\Delta}}\sqrt{\frac{s}{m}}\right\}.
\end{equation}
Here $\beta_a,\beta_b>0$ are two constants which will be defined later.
From the construction, we get that $\mathfrak{X}\subseteq \mathfrak{U}(r,b,s)$.
Now we define a subset $\mathfrak{X}_{\mathcal{A}}$ such that $\mathfrak{X}_{\mathcal{A}}\subseteq\mathfrak{X}$.
Denote
\begin{equation}\label{SubXAA}
\mathfrak{X}_{\mathcal{A}}:=\left\{\mathcal{X}:=\mathcal{A}\diamond\widetilde{\mathcal{B}}:
\mathcal{A}\in\widetilde{\mathfrak{C}}, \
\widetilde{\mathcal{B}}=b(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathbf{0}_\mathcal{B}) \in \mathfrak{B}\right\},
\end{equation}
where $\widetilde{\mathcal{B}}$ is a block tensor with $\lfloor \frac{s\wedge (n_2n_3)}{rn_3}\rfloor$
blocks $\mathcal{I}_r$, $\mathcal{I}_r\in\mathbb{R}^{r\times r\times n_3}$ is the identity tensor,
$\mathbf{0}_{\mathcal{B}}\in\mathbb{R}^{r\times(n_2- \lfloor \frac{s\wedge (n_2n_3)}{rn_3}\rfloor r)\times n_3}$ is the zero tensor with all entries being zero, and
\begin{equation}\label{GeneXACsub}
\widetilde{\mathfrak{C}}:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}:\mathcal{A}_{ijk}\in\{0,a_0\},\
1\leq i\leq n_1, 1\leq j\leq r, 1\leq k\leq n_3\right\}.
\end{equation}
By the definition of identity tensor, we get
$\|\widetilde{\mathcal{B}}\|_0
= r \lfloor \frac{s\wedge (n_2n_3)}{rn_3}\rfloor
\leq r\frac{s\wedge (n_2n_3)}{rn_3}\leq s$.
It follows from the construction of $\widetilde{\mathcal{B}}=b(\mathcal{I}_r \
\cdots \ \mathcal{I}_r \ \mathbf{0}_{\mathcal{B}})$ that $\widetilde{\mathcal{B}}\in\mathfrak{B}$.
Therefore, $\mathfrak{X}_{\mathcal{A}}\subseteq\mathfrak{X}$.
By the definition of tensor-tensor product, we have that
\[
\begin{split}
\mathcal{A}\diamond \mathcal{I}_r=\textup{Fold}\left(\begin{pmatrix}
\mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(2)} \\
\mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(3)}\\
\vdots & \vdots & & \vdots \\
\mathbf{A}^{(n_3)} &\mathbf{A}^{(n_3-1)}&\cdots & \mathbf{A}^{(1)} \end{pmatrix}\cdot
\begin{pmatrix} \mathbf{I}_r \\ \mathbf{0}
\\ \vdots \\ \mathbf{0} \end{pmatrix}\right) =\textup{Fold}\begin{pmatrix} \mathbf{A}^{(1)} \\ \mathbf{A}^{(2)}
\\ \vdots \\ \mathbf{A}^{(n_3)} \end{pmatrix} =\mathcal{A},
\end{split}
\]
where $\mathbf{I}_r$ is the $r\times r$ identity matrix.
Hence, for any $\mathcal{X}\in\mathfrak{X}_\mathcal{A}$,
we have that
$$
\mathcal{X}= b\mathcal{A}\diamond (\mathcal{I}_r \
\cdots \ \mathcal{I}_r \ \mathbf{0}_{\mathcal{B}})=b(\mathcal{A}\ \cdots \ \mathcal{A} \ \mathbf{0}_{\mathcal{X}}),
$$
where $\mathbf{0}_{\mathcal{X}}\in\mathbb{R}^{n_1\times(n_2-\lfloor
\frac{s\wedge (n_2n_3)}{rn_3}\rfloor r)\times n_3}$ is a zero tensor.
Notice that each entry of $\mathcal{A}$ is $0$ or $a_0$.
Therefore, by the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009},
we have that there exists a subset $\mathfrak{X}_\mathcal{A}^0\subseteq \mathfrak{X}_\mathcal{A}$
with $|\mathfrak{X}_\mathcal{A}^0|\geq 2^{rn_1n_3/8}+1$,
such that for any $\mathcal{X}_i,\mathcal{X}_j\in\mathfrak{X}_\mathcal{A}^0$,
\begin{equation}\label{KLADis}
\begin{split}
\|\mathcal{X}_i-\mathcal{X}_j\|_F^2&\geq \frac{r n_1 n_3}{8}
\left\lfloor \frac{s\wedge (n_2n_3)}{rn_3}\right\rfloor a_0^2b^2\\
&\geq \frac{n_1n_2n_3}{16} \min\left\{b^2\Delta, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\},
\end{split}
\end{equation}
where the last inequality of (\ref{KLADis}) holds
by $\lfloor x\rfloor\geq \frac{x}{2}$ for any $x\geq 1$.
For any $\mathcal{X}\in\mathfrak{X}_{\mathcal{A}}^0$, we have that
\begin{equation}\label{DKLPr}
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{\mathbf{0}_\Omega}(\mathcal{Y}_\Omega))
&=\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{\mathbf{0}_{ijk}}(\mathcal{Y}_{ijk}))\\
&\leq \frac{m}{n_1n_2n_3}\sum_{i,j,k}\frac{1}{2\nu^2}|\mathcal{X}_{ijk}|^2\\
&\leq \frac{m}{n_1n_2n_3}\frac{1}{2\nu^2}(rn_1n_3)\left\lfloor \frac{s\wedge (n_2n_3)}{rn_3}\right\rfloor a_0^2b^2\\
& \leq \frac{m}{2\nu^2} \min\left\{\Delta b^2, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\},
\end{split}
\end{equation}
where the first inequality follows from (\ref{DKLpq}),
the second inequality follows from $|\mathcal{X}_{ijk}|\leq b\|\mathcal{A}\|_\infty$,
and the last inequality follows from (\ref{CXZG}).
Therefore, combining (\ref{CXZG}) with (\ref{DKLPr}), we get that
$$
\sum_{\mathcal{X}\in\mathfrak{X}_\mathcal{A}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{\mathbf{0}_\Omega}(\mathcal{Y}_\Omega))
\leq \left(|\mathfrak{X}_\mathcal{A}^0|-1\right)\frac{\beta_a^2rn_1n_3}{2}\leq \left(|\mathfrak{X}_\mathcal{A}^0|-1\right)
\frac{4\beta_a^2\log(|\mathfrak{X}_{\mathcal{A}}^0|-1)}{\log(2)},
$$
where the last inequality holds by $rn_1n_3\leq 8\log_2(|\mathfrak{X}_{\mathcal{A}}^0|-1)$.
Therefore,
by choosing $0<\beta_a\leq \frac{\sqrt{\alpha_1\log(2)}}{2}$ with $0<\alpha_1<\frac{1}{8}$, we have
$$
\frac{1}{|\mathfrak{X}_\mathcal{A}^0|-1}\sum_{\mathcal{X}\in\mathfrak{X}_\mathcal{A}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{\mathbf{0}_\Omega}(\mathcal{Y}_\Omega))\leq \alpha_1\log(|\mathfrak{X}_{\mathcal{A}}^0|-1).
$$
Hence, by \cite[Theorem 2.5]{tsybakov2009}, we deduce
\begin{equation}\label{Aminmax}
\begin{split}
&\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{A}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{b^2\Delta, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{A}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{b^2\Delta, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\}\right)\geq \theta_1,
\end{split}
\end{equation}
where
$$
\theta_1=\frac{\sqrt{|\mathfrak{X}_{\mathcal{A}}^0|-1}}
{1+\sqrt{|\mathfrak{X}_{\mathcal{A}}^0|-1}}\left(1-2\alpha_1
-\sqrt{\frac{2\alpha_1}{\log(|\mathfrak{X}_{\mathcal{A}}^0|-1)}}\right)\in(0,1).
$$
Similar to the previous discussion, we construct $\mathfrak{X}_\mathcal{B}$ as follows:
\begin{equation}\label{SubXBB}
\mathfrak{X}_\mathcal{B}:=\left\{\mathcal{X}=\widetilde{\mathcal{A}}\diamond \mathcal{B}:\ \mathcal{B}\in\widetilde{\mathfrak{B}}\right\},
\end{equation}
where $\widetilde{\mathcal{A}}$ is a block tensor defined as
$$
\widetilde{\mathcal{A}}:=\begin{pmatrix} \mathcal{I}_{r'} & \mathbf{0}
\\\vdots & \vdots\\ \mathcal{I}_{r'} & \mathbf{0} \\ \mathbf{0}
& \mathbf{0} \end{pmatrix}\in\mathbb{R}^{n_1\times r\times n_3}
$$
and $\widetilde{\mathfrak{B}}$ is a set defined as
\begin{equation}\label{GeneXbSubBb}
\widetilde{\mathfrak{B}}:=\left\{\mathcal{B}\in\mathbb{R}^{r\times n_2\times n_3}: \
\mathcal{B}=\begin{pmatrix} \mathcal{B}_{r'} \\ \mathbf{0} \end{pmatrix},
\mathcal{B}_{r'}\in\mathbb{R}^{r'\times n_2\times n_3},
( \mathcal{B}_{r'})_{ijk}\in\{0,b_0\}, \|\mathcal{B}_{r'}\|_0\leq s \right\}.
\end{equation}
Here $r'=\lceil\frac{s}{n_2n_3}\rceil$,
$\mathcal{I}_{r'}\in\mathbb{R}^{r'\times r'\times n_3}$ is the identity tensor,
there are $\lfloor\frac{n_1}{r'}\rfloor$ block tensors $\mathcal{I}_{r'}$ in $\widetilde{\mathcal{A}}$,
and $\mathbf{0}$ is a zero tensor with all entries being zero whose dimension is clear from the context.
Thus $\mathfrak{X}_{\mathcal{B}}\subseteq \mathfrak{X}$.
Note that
\[
\begin{split}
\mathcal{I}_{r'} \diamond \mathcal{B}_{r'} = \textup{Fold}\left(\begin{pmatrix}
\mathbf{I}_{r'} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} &\mathbf{I}_{r'} & \cdots & \mathbf{0}\\
\vdots & \vdots & & \vdots \\
\mathbf{0} &\mathbf{0} &\cdots & \mathbf{I}_{r'} \end{pmatrix}\cdot
\begin{pmatrix} \mathbf{B}_{r'}^{(1)} \\ \mathbf{B}_{r'}^{(2)}
\\ \vdots \\ \mathbf{B}_{r'}^{(n_3)} \end{pmatrix}\right)=
\textup{Fold}
\begin{pmatrix} \mathbf{B}_{r'}^{(1)} \\ \mathbf{B}_{r'}^{(2)}
\\ \vdots \\ \mathbf{B}_{r'}^{(n_3)} \end{pmatrix}=\mathcal{B}_{r'}.
\end{split}
\]
For any $\mathcal{X}\in\mathfrak{X}_{\mathcal{B}}$, we have
$$
\mathcal{X}=\begin{pmatrix} \mathcal{I}_{r'} & \mathbf{0}
\\\vdots & \vdots\\ \mathcal{I}_{r'} & \mathbf{0} \\ \mathbf{0}
& \mathbf{0} \end{pmatrix} \diamond \begin{pmatrix} \mathcal{B}_{r'} \\ \mathbf{0} \end{pmatrix}=\begin{pmatrix} \mathcal{B}_{r'} \\ \vdots \\ \mathcal{B}_{r'} \\ \mathbf{0} \end{pmatrix},
$$
where $ ( \mathcal{B}_{r'})_{ijk}\in\{0,b_0\}$, $\|\mathcal{B}_{r'}\|_0\leq s$,
and there are $\lfloor\frac{n_1}{r'}\rfloor$ blocks $ \mathcal{B}_{r'}$ in $\mathcal{X}$.
By the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009},
there is a subset $\mathfrak{X}_{\mathcal{B}}^0\subseteq \mathfrak{X}_{\mathcal{B}}$ such that
for any $\mathcal{X}_i,\mathcal{X}_j\in\mathfrak{X}_{\mathcal{B}}^0$,
\begin{equation}\label{XBKL}
|\mathfrak{X}_{\mathcal{B}}^0|\geq 2^{r'n_2n_3/8}+1 \geq 2^{s/8}+1
\end{equation}
and
\[
\begin{split}
\|\mathcal{X}_i-\mathcal{X}_j\|_F^2&
\geq \frac{r'n_2n_3}{8}\left\lfloor\frac{n_1}{r'}\right\rfloor b_0^2
\geq \frac{s}{8}\left\lfloor\frac{n_1}{r'}\right\rfloor b_0^2 \\
& \geq \frac{sn_1}{16r'}b_0^2=\frac{n_1n_2n_3}{16}\frac{s}{n_2n_3\lceil\frac{s}{n_2n_3}\rceil}b_0^2\\
&\geq \frac{n_1n_2n_3}{16}\min\left\{\frac{1}{2},\frac{s}{n_2n_3}\right\}b_0^2
\geq \frac{n_1n_2n_3}{32}\Delta\min\left\{b^2,\frac{\beta_b^2\nu^2}{\Delta}\frac{s}{m}\right\}\\
&=\frac{n_1n_2n_3}{32}\min\left\{\Delta b^2,\frac{\beta_b^2\nu^2s}{m}\right\},
\end{split}
\]
where the third inequality follows from $\lfloor x\rfloor\geq \frac{x}{2}$ for any $x\geq 1$ and
the fourth inequality follows from the fact that $\frac{x}{\lceil x\rceil}\geq \min\{\frac{1}{2},x\}$ for any $x>0$.
For any $\mathcal{X}\in\mathfrak{X}_{\mathcal{B}}^0$,
the KL divergence of observations with parameters $\mathcal{X}_\Omega$ from $\mathbf{0}_\Omega$ is given by
\[
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{\mathbf{0}_\Omega}(\mathcal{Y}_{\Omega}))
& =\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{\mathbf{0}_{ijk}}(\mathcal{Y}_{ijk}))
\leq \frac{m}{2 \nu^2n_1n_2n_3}\sum_{i,j,k}|\mathcal{X}_{ijk}|^2\\
&\leq \frac{m}{2 \nu^2n_1n_2n_3}n_1(s\wedge (n_2n_3))b_0^2=\frac{m}{2\nu^2}\min\left\{\Delta b^2, \frac{\beta_b^2\nu^2s}{m}\right\}\\
&\leq \frac{\beta_b^2s}{2}\leq 4\beta_b^2\frac{\log(|\mathfrak{X}_\mathcal{B}^0|-1)}{\log(2)},
\end{split}
\]
where the second inequality follows from the fact that the number of nonzero entries of $\mathcal{X}$ is not larger than
$s\lfloor\frac{n_1}{r'}\rfloor\leq n_1(s\wedge (n_2n_3))$, and the last inequality holds by
$s\leq 8\log_{2}(|\mathfrak{X}_\mathcal{B}^0|-1)$.
By choosing $0<\beta_b\leq \frac{\sqrt{\alpha_2\log(2)}}{2}$ with $0<\alpha_2<\frac{1}{8}$, we obtain that
$$
\frac{1}{|\mathfrak{X}_\mathcal{B}^0|-1}
\sum_{\mathcal{X}\in\mathfrak{X}_\mathcal{B}^0} D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{\mathbf{0}_\Omega}(\mathcal{Y}_{\Omega}))
\leq\alpha_2\log(|\mathfrak{X}_\mathcal{B}^0|-1).
$$
Therefore, by \cite[Theorem 2.5]{tsybakov2009}, we have that
\begin{equation}\label{Bminmax}
\begin{split}
&\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{B}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{\Delta b^2,\frac{\beta_b^2\nu^2s}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{B}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{\Delta b^2,\frac{\beta_b^2\nu^2s}{m}\right\}\right)\geq \theta_2,
\end{split}
\end{equation}
where
$$
\theta_2=\frac{\sqrt{|\mathfrak{X}_\mathcal{B}^0|-1}}
{1+\sqrt{|\mathfrak{X}_\mathcal{B}^0|-1}}\left(1-2\alpha_2
-\sqrt{\frac{2\alpha_2}{\log(|\mathfrak{X}_\mathcal{B}^0|-1)}}\right)\in(0,1).
$$
Let $\beta_c=\min\{\beta_a,\beta_b\}$ and $\theta_c=\min\{\theta_1,\theta_2\}.$
Combining (\ref{Aminmax}) and (\ref{Bminmax}), we deduce
$$
\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{64} \min\left\{\Delta b^2,\beta_c^2\nu^2\left(\frac{s+rn_1n_3}{m}\right)\right\}\right)\geq \theta_c.
$$
By Markov's inequality, we conclude
$$
\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
\frac{\theta_c}{64} \min\left\{\Delta b^2,\beta_c^2\nu^2\left(\frac{s+rn_1n_3}{m}\right)\right\}.
$$
This completes the proof.
\section{Proof of Proposition \ref{ProupbG}}\label{AppdeF}
By (\ref{KLGaussian}), we choose $\nu=\sigma$.
It follows from Theorem \ref{lowbounMai} that we can get the desired result.
\section{Proof of Proposition \ref{lapUpb}}\label{AppdeG}
By (\ref{KLLappo}), we can choose $\nu=\tau$.
Then the conclusion can be obtained easily by Theorem \ref{lowbounMai}.
\section{Proof of Proposition \ref{Poisslow}}\label{ProoH}
Let \begin{equation}\label{PoscsubX1}
\mathfrak{X}_1:=\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}:
\mathcal{A}\in\mathfrak{C}_1,\mathcal{B}\in\mathfrak{B}_1\},
\end{equation}
where $\mathfrak{C}_1\subseteq\mathbb{R}^{n_1\times r\times n_3}$ is defined as
\begin{equation}\label{CXZ}
\mathfrak{C}_1:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}:\
\mathcal{A}_{ijk}\in\{0,1,\varsigma,a_0\}\right\} \ \ \text{with} \ \ a_0=\min\left\{1-\varsigma, \ \frac{\beta_a\sqrt{\zeta}}{b}\sqrt{\frac{rn_1n_3}{m}}\right\},
\end{equation}
and $\mathfrak{B}_1$ is defined as
\begin{equation}\label{PoscsubX1B1}
\mathfrak{B}_1:=\left\{\mathcal{B}\in\mathbb{R}^{r\times n_2\times n_3}:
\mathcal{B}_{ijk}\in\{0,\zeta, b,b_0\}, \|\mathcal{B}\|_0\leq s\right\}
\ \ \text{with} \ \ b_0=\min\left\{b, \ \beta_b\sqrt{\frac{\zeta}{\Delta_1}}\sqrt{\frac{s-n_2n_3}{m}}\right\}.
\end{equation}
Let
\begin{equation}\label{PoissXA1A}
\widetilde{\mathfrak{X}}_\mathcal{A}:=\left\{\mathcal{X}
:=(\mathcal{A}+\mathcal{A}_\varsigma)\diamond\mathcal{B}: \ \mathcal{A}\in\widetilde{\mathfrak{C}}_1, \
\mathcal{B}=b(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I} )\in\mathfrak{B}_1\right\},
\end{equation}
where $\mathcal{I}_r\in\mathbb{R}^{r\times r\times n_3}$ is the identity tensor,
there are $\lfloor\frac{n_2}{r}\rfloor$ blocks $\mathcal{I}_r$
in $\mathcal{B}$, $\mathcal{B}_\mathcal{I}=\begin{pmatrix} \mathcal{I}_{\mathcal{B}} \\ \mathbf{0} \end{pmatrix}$,
$\mathcal{I}_{\mathcal{B}}\in\mathbb{R}^{(n_2-r\lfloor\frac{n_2}{r}\rfloor)\times (n_2-r\lfloor\frac{n_2}{r}\rfloor)\times n_3}$ is the identity tensor,
$\mathcal{A}_\varsigma\in \mathbb{R}^{n_1\times r\times n_3}$
with $(\mathcal{A}_\varsigma)_{ijk}=\varsigma$, and
\begin{equation}\label{PoisXAsubC1}
\widetilde{\mathfrak{C}}_1:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}: \
\mathcal{A}_{ijk}\in\{0,a_0\}\right\}\subseteq \mathfrak{C}_1.
\end{equation}
From the construction of $\mathcal{B}$, we know that $\|\mathcal{B}\|_0=n_2< s$.
For any $\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{A}$, we obtain that
\begin{equation}\label{XDCL}
\mathcal{X}=(\mathcal{A}+\mathcal{A}_\varsigma)\diamond\mathcal{B}=\zeta\mathbb{I}\diamond(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I} )+\mathcal{A}\diamond\mathcal{B},
\end{equation}
where $\mathbb{I}\in\mathbb{R}^{n_1\times r\times n_3}$
denotes a tensor with all entries being $1$.
By the definition of tensor-tensor product, we have that
\[
\begin{split}
\mathbb{I}\diamond(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I})& =
\textup{Fold}\left(\begin{pmatrix}
\mathbf{E}_{n_1r} & \mathbf{E}_{n_1r} & \cdots & \mathbf{E}_{n_1r} \\
\mathbf{E}_{n_1r} &\mathbf{E}_{n_1r} & \cdots & \mathbf{E}_{n_1r} \\
\vdots & \vdots & & \vdots \\
\mathbf{E}_{n_1r} &\mathbf{E}_{n_1r} &\cdots & \mathbf{E}_{n_1r} \end{pmatrix}\cdot
\begin{pmatrix}
\mathbf{I}_{r} & \mathbf{I}_r & \cdots & \mathbf{I}_r & \mathbf{I}_{B0} \\
\mathbf{0} &\mathbf{0} & \cdots &\mathbf{0} & \mathbf{0}\\
\vdots & \vdots & & \vdots & \vdots \\
\mathbf{0} &\mathbf{0} &\cdots & \mathbf{0} &\mathbf{0} \end{pmatrix}\right)\\
& =
\textup{Fold}
\begin{pmatrix} \mathbf{E}_{n_1n_2} \\ \mathbf{E}_{n_1n_2}
\\ \vdots \\ \mathbf{E}_{n_1n_2} \end{pmatrix}=\mathbb{I}_{n_1n_2},
\end{split}
\]
where $\mathbf{E}_{n_1r}\in \mathbb{R}^{n_1\times r}$ is an $n_1\times r$ matrix with all entries being $1$, $\mathbf{I}_{r}$
is the $r\times r$ identity matrix,
$\mathbf{I}_{B0}=\begin{pmatrix} \mathbf{I}_{B} \\ \mathbf{0} \end{pmatrix}$
with $\mathbf{I}_{B}\in\mathbb{R}^{(n_2-r\lfloor\frac{n_2}{r}\rfloor)\times (n_2-r\lfloor\frac{n_2}{r}\rfloor)}$
being the identity matrix,
and $\mathbb{I}_{n_1n_2}\in\mathbb{R}^{n_1\times n_2 \times n_3}$ is a tensor with all entries being $1$.
Therefore, we have $\widetilde{\mathfrak{X}}_\mathcal{A}\subseteq\widetilde{\mathfrak{U}}(r,b,s,\zeta)$.
By applying the Varshamov-Gilbert
bound \cite[Lemma 2.9]{tsybakov2009} to the last term of (\ref{XDCL}), there is a subset
$\widetilde{\mathfrak{X}}_\mathcal{A}^0\subseteq\widetilde{\mathfrak{X}}_\mathcal{A}$
such that for any $\mathcal{X}_1,\mathcal{X}_2\in\widetilde{\mathfrak{X}}_\mathcal{A}^0$,
$$
\|\mathcal{X}_1-\mathcal{X}_2\|_F^2\geq \frac{rn_1n_3}{8}\left\lfloor\frac{n_2}{r}\right\rfloor a_0^2b^2\geq
\frac{n_1n_2n_3}{8}\min\left\{(1-\varsigma)^2b^2, \ \frac{\beta_a^2\zeta r n_1n_3}{m}\right\}
$$
and $|\widetilde{\mathfrak{X}}_\mathcal{A}^0|\geq 2^{\frac{rn_1n_3}{8}}+1$.
Let $\mathcal{X}_0=\zeta\mathbb{I}\diamond(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I} )$.
For any $\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{A}^0$, the KL divergence of $p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})$ from $p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega})$ is
\[
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega}))
&=\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_0)_{ijk}}(\mathcal{Y}_{ijk})) \\
&\leq \frac{m}{n_1n_2n_3}\sum_{i,j,k}\frac{(\mathcal{X}_{ijk}-\zeta)^2}{\zeta}\\
&\leq \frac{m(a_0b)^2}{\zeta}\leq \beta_a^2rn_1n_3,
\end{split}
\]
where the first inequality follows from (\ref{DKLPoi}),
the second inequality follows from (\ref{XDCL}),
and the last inequality follows from (\ref{CXZ}).
Note that $rn_1n_3\leq \frac{8\log(|\widetilde{\mathfrak{X}}_\mathcal{A}^0|-1)}{\log(2)}$.
Then, by choosing $0<\beta_a
\leq \frac{\sqrt{\widetilde{\alpha}_1\log(2)}}{2\sqrt{2}}$ and $0<\widetilde{\alpha}_1<\frac{1}{8}$,
we get that
$$
\frac{1}{|\widetilde{\mathfrak{X}}_\mathcal{A}^0|-1}\sum_{\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{A}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_\Omega))\leq \widetilde{\alpha}_1\log(|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1).
$$
Therefore, by \cite[Theorem 2.5]{tsybakov2009}, we have that
\begin{equation}\label{PXKL}
\begin{split}
&\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{A}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq
\frac{1}{8}\min\left\{(1-\varsigma)^2b^2, \ \frac{\beta_a^2\zeta r n_1n_3}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{A}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{8} \min\left\{(1-\varsigma)^2b^2, \ \frac{\beta_a^2\zeta r n_1n_3}{m}\right\}\right)\geq \widetilde{\theta}_1,
\end{split}
\end{equation}
where
$$
\widetilde{\theta}_1=\frac{\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1}}
{1+\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1}}\left(1-2\widetilde{\alpha}_1
-\sqrt{\frac{2\widetilde{\alpha}_1}{\log(|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1)}}\right)\in(0,1).
$$
Similar to the previous discussion, we define a subset $\widetilde{\mathfrak{X}}_\mathcal{B}\subseteq\mathbb{R}^{n_1\times n_2\times n_3}$ as
\begin{equation}\label{PoissXB1B}
\widetilde{\mathfrak{X}}_\mathcal{B}:=\left\{\mathcal{X}
=(\mathcal{A}_0+\mathcal{A}_1)\diamond\mathcal{B}: \mathcal{B}\in\widetilde{\mathfrak{B}}_1\right\},
\end{equation}
where
$$
\mathcal{A}_0:=(\mathcal{M}_1 \ \mathbf{0})\in\mathbb{R}^{n_1\times r\times n_3} \ \text{with} \ \mathcal{M}_1\in\mathbb{R}^{n_1\times 1\times n_3},
$$
and
$$
\mathcal{A}_1:=
\begin{pmatrix}
\mathbf{0}_{r'1} & \mathcal{I}_{r'} & \mathbf{0} \\
\vdots & \vdots & \vdots \\
\mathbf{0}_{r'1} & \mathcal{I}_{r'} & \mathbf{0} \\
\mathbf{0}_{r'1} & \mathbf{0} & \mathbf{0} \end{pmatrix}\in\mathbb{R}^{n_1\times r\times n_3}.
$$
Here $\mathbf{0}_{r'1}\in\mathbb{R}^{r'\times 1\times n_3}$ is a zero tensor,
$\mathcal{I}_{r'}\in\mathbb{R}^{r'\times r'\times n_3}$ is the identity tensor,
and $\mathcal{M}_1\in\mathbb{R}^{n_1\times 1 \times n_3}$ denotes a tensor whose first frontal slice has all entries equal to one and whose other frontal slices are zero.
$
\widetilde{\mathfrak{B}}_1\subseteq\mathfrak{B}_1
$
is defined as
\begin{equation}\label{PoisXBsubB1}
\widetilde{\mathfrak{B}}_1:=\left\{\mathcal{B}=
\begin{pmatrix} \zeta\mathbb{I}_1 \\ \mathcal{B}_{r'}\\ \mathbf{0} \end{pmatrix},
\mathbb{I}_1\in\mathbb{R}^{1\times n_2\times n_3}, \mathcal{B}_{r'}\in \mathbb{R}^{r'\times n_2\times n_3},
(\mathcal{B}_{r'})_{ijk}\in\{0,b_0\},\|\mathcal{B}_{r'}\|_0\leq s-n_2n_3 \right\},
\end{equation}
where $r'=\lceil\frac{s}{n_2n_3}\rceil-1$ and $\mathbb{I}_1$ represents a tensor with all entries being ones.
By the definition of tensor-tensor product and the structure of $\mathcal{A}_1$,
whose first lateral slice is zero, we get that
$\mathcal{A}_1\diamond\mathcal{B}=\mathcal{A}_1\diamond\mathcal{B}'$,
where $\mathcal{B}'$ is obtained from $\mathcal{B}$ by replacing $\zeta\mathbb{I}_1$ with a zero tensor (its explicit form is given below).
For any $\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{B}$, we have
\begin{equation}\label{EXXp}
\begin{split}
\mathcal{X}& =\mathcal{A}_0\diamond\mathcal{B}+\mathcal{A}_1\diamond\mathcal{B}\\
& = \textup{Fold}\left(\begin{pmatrix}
\mathbf{N}_{n_1r} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathbf{N}_{n_1r} & \cdots & \mathbf{0} \\
\vdots & \vdots & & \vdots \\
\mathbf{0} &\mathbf{0} &\cdots & \mathbf{N}_{n_1r} \end{pmatrix}\cdot
\begin{pmatrix}
\mathbf{B}^{(1)} \\
\mathbf{B}^{(2)} \\
\vdots \\
\mathbf{B}^{(n_3)}
\end{pmatrix} \right) + \mathcal{A}_1\diamond\mathcal{B}'\\
& =
\textup{Fold}
\begin{pmatrix} \zeta \mathbf{E}_{n_1n_2} \\ \zeta \mathbf{E}_{n_1n_2}
\\ \vdots \\ \zeta \mathbf{E}_{n_1n_2} \end{pmatrix} +\mathcal{A}_1\diamond\mathcal{B}' \\
& =\zeta \mathbb{I}_n+
\mathcal{A}_1\diamond\mathcal{B}',
\end{split}
\end{equation}
where $\mathbf{N}_{n_1r}= (\mathbf{E}_{n_11} \ \mathbf{0}_{n_1(r-1)})\in\mathbb{R}^{n_1\times r}$ with $\mathbf{E}_{n_11} \in\mathbb{R}^{n_1\times 1}$ being a column vector (all $1$),
$$
\mathbf{B}^{(i)}=
\begin{pmatrix} \zeta \mathbf{E}_{1n_2} \\ \mathbf{B}_{r'}^{(i)}\\ \mathbf{0} \end{pmatrix}
$$ with $\mathbf{E}_{1n_2} \in\mathbb{R}^{1\times n_2}$ being a row vector with all entries being $1$ and $\mathbf{B}_{r'}^{(i)}$ being the $i$th frontal slice of $\mathcal{B}_{r'}$,
$\mathbb{I}_n\in\mathbb{R}^{n_1\times n_2\times n_3}$ is a tensor with all entries being $1$, and
$$
\mathcal{B}'=
\begin{pmatrix} \mathbf{0}_1 \\ \mathcal{B}_{r'}\\ \mathbf{0} \end{pmatrix}
$$
with $\mathbf{0}_1\in\mathbb{R}^{1\times n_2\times n_3}$ being a zero tensor.
Therefore, $\mathcal{X}\in \widetilde{\mathfrak{U}}(r,b,s,\zeta)$, which implies that
$\widetilde{\mathfrak{X}}_\mathcal{B}\subseteq\widetilde{\mathfrak{U}}(r,b,s,\zeta)$.
Therefore, by applying the Varshamov-Gilbert
bound \cite[Lemma 2.9]{tsybakov2009} to the last term of (\ref{EXXp}),
there exists a subset
$\widetilde{\mathfrak{X}}_{\mathcal{B}}^0\subseteq\widetilde{\mathfrak{X}}_{\mathcal{B}}$
such that
$|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|\geq2^{\frac{s-n_2n_3}{8}}+1$
and, for any $\mathcal{X}_1,\mathcal{X}_2\in\widetilde{\mathfrak{X}}_{\mathcal{B}}^0$,
\[
\begin{split}
\|\mathcal{X}_1-\mathcal{X}_2\|_F^2&\geq \left(\frac{s-n_2n_3}{8}\right)\left\lfloor\frac{n_1}{r'}\right\rfloor b_0^2\\
&\geq \left(\frac{s-n_2n_3}{16}\right)\cdot \frac{n_1}{r'}\cdot
\min\left\{b^2, \ \beta_b^2\frac{\zeta}{\Delta_1}\frac{s-n_2n_3}{m}\right\}\\
& \geq \frac{n_1n_2n_3}{32}\Delta_1
\min\left\{b^2, \ \beta_b^2\frac{\zeta}{\Delta_1}\frac{s-n_2n_3}{m}\right\}\\
& = \frac{n_1n_2n_3}{32}
\min\left\{\Delta_1 b^2, \ \beta_b^2\zeta\frac{s-n_2n_3}{m}\right\},
\end{split}
\]
where the third inequality holds by the fact that $\frac{x}{\lceil x\rceil}\geq \min\{\frac{1}{2},x\}$ for any $x>0$.
The KL divergence of $p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})$ from
$p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega})$ is
\[
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega}))
&=\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_0)_{ijk}}(\mathcal{Y}_{ijk})) \\
&\leq \frac{m}{n_1n_2n_3}\sum_{i,j,k}\frac{(\mathcal{X}_{ijk}-\zeta)^2}{\zeta}\\
&\leq m\frac{b_0^2}{\zeta}\Delta_1\leq \beta_b^2(s-n_2n_3)\leq \frac{8\beta_b^2\log(|\widetilde{\mathfrak{X}}_\mathcal{B}^0|-1)}{\log(2)},
\end{split}
\]
where the second inequality follows from
$\|\mathcal{A}_1\diamond\mathcal{B}'\|_0\leq \lfloor\frac{n_1}{r'}\rfloor(s-n_2n_3)\leq n_1n_2n_3\Delta_1$
and the last inequality follows from $|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|\geq2^{\frac{s-n_2n_3}{8}}+1$.
Therefore,
by choosing $0<\beta_b\leq \frac{\sqrt{\widetilde{\alpha}_2\log(2)}}{2\sqrt{2}}$ with $0<\widetilde{\alpha}_2<1/8$, we have
$$
\frac{1}{|\widetilde{\mathfrak{X}}_\mathcal{B}^0|-1}\sum_{\mathcal{X}\in \widetilde{\mathfrak{X}}_\mathcal{B}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega}))
\leq \widetilde{\alpha}_2\log(|\widetilde{\mathfrak{X}}_\mathcal{B}^0|-1).
$$
By \cite[Theorem 2.5]{tsybakov2009}, we obtain that
\begin{equation}\label{PXKL2}
\begin{split}
& \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{B}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq
\frac{1}{32}\min\left\{\Delta_1 b^2, \ \beta_b^2\zeta\frac{s-n_2n_3}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{B}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32}\min\left\{\Delta_1 b^2, \ \beta_b^2\zeta\frac{s-n_2n_3}{m}\right\}\right)\geq \widetilde{\theta}_2,
\end{split}
\end{equation}
where
$$
\widetilde{\theta}_2=\frac{\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|-1}}
{1+\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|-1}}\left(1-2\widetilde{\alpha}_2
-\sqrt{\frac{2\widetilde{\alpha}_2}{\log(|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|-1)}}\right)\in(0,1).
$$
By combining (\ref{PXKL}) and (\ref{PXKL2}), we deduce
$$
\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{U}}(r,b,s,\zeta)}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{64} \min\left\{\widetilde{\Delta} b^2,\widetilde{\beta}_c^2\zeta\left(\frac{s-n_2n_3+rn_1n_3}{m}\right)\right\}\right)\geq \widetilde{\theta}_c,
$$
where $\widetilde{\Delta}:=\min\{(1-\varsigma)^2, \Delta_1\}$,
$\widetilde{\beta}_c:=\min\{\beta_a,\beta_b\}$,
and $\widetilde{\theta}_c=\min\{ \widetilde{\theta}_1, \widetilde{\theta}_2\}$.
By Markov's inequality, the desired conclusion is obtained easily.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The phenomenal spread of the COVID-19 pandemic will have unprecedented consequences for human life and livelihood. In the absence of a treatment or vaccine to develop immunity against the disease, governments around the world have used non-pharmaceutical, risk mitigation strategies such as lockdowns, shelter-in-place, school and business closures, travel bans or restrictions to limit movement and prevent contagion. The magnitude and effectiveness of such mitigation strategies in preventing contagion and reducing the number of deaths is shown in Europe where such mitigation strategies have reduced the reproduction number over time $(R_t)$ below 1, which means that the virus will gradually stop spreading. Since the beginning of the epidemic, an estimated 3.1 million deaths were averted across 11 European countries attributable to these risk mitigation strategies \cite{flaxman2020estimating}.
In the United States, the adoption and enforcement of non-pharmaceutical, risk mitigation strategies have varied by state and across time. The first confirmed COVID-19 case was reported on January 21, 2020, in Washington State \cite{ghinai2020first}. While transmissions were documented since, a national emergency was declared later on March 13 \cite{house2020proclamation}. At that time, international travel restrictions were enforced. By March 16, six Bay Area counties declared shelter-in-place orders and on March 19, California was the first state to issue a state-wide order. Since then, several communities and states implemented stay-at-home orders and social distancing measures. As of March 30, there were 162,600 confirmed COVID-19 cases in the U.S. \cite{house2020proclamation} and 30 states had announced shelter-in-place orders. On April 1, two additional states and the District of Columbia issued statewide shelter-in-place orders followed by 7 more states by April 6.
Historically, among the U.S. cities that were hit by the 1918 Spanish flu, social distancing played a pivotal role in flattening the pandemic curve. In fact, the cities which delayed enforcing social distancing saw the highest peaks in new cases of the disease. Policies aimed at reducing human transmission of COVID-19 included lockdown, travel restrictions, quarantine, curfew, cancellation and postponing events, and facility closures. Measuring the dynamic impact of these interventions is challenging \cite{adiga2020interplay,dascritical} and confounded by several factors such as differences in the specific modes and dates of the policy-driven measures adopted by or enforced across states, regions, and countries, and, of course, the actual diversity of human behaviors at these locations.
Given the current ubiquitous usage of mobile devices among the U.S. populations, social mobility as measured by aggregating the geospatial statistics of their daily movements could serve as a proxy measure to assess the impact of such policies as social distancing on human transmission. In the particular context of the current pandemic, human mobility data could be estimated using geolocation reports from user smartphones and other mobile devices that were made available by multiple providers including Google and Apple, among others. In this study, we obtained such data from Descartes Labs, which made anonymized location-specific time series data on mobility index freely available to researchers through their GitHub site: \url{https://github.com/descarteslabs/DL-COVID-19.} Thus, we obtained location-specific bivariate time series on daily mobility index and disease incidence, i.e., new cases of COVID-19 in the U.S.
In this study, we aim to (a) measure and compare the temporal dependencies between mobility ($M$) and new cases ($N$) across 151 cities in the U.S. with relatively high incidence of COVID-19 by May 31, 2020. We believe that these dependency patterns vary not only over time but across locations and populations. For this purpose, we proposed a novel application of Optimal Transport to compute the distance between patterns of ($N$, mobility, time) and its variants for each pair of cities. This allowed us to (b) group the cities into different hierarchical clusterings, and (c) compute the barycenter to describe the overall dynamic pattern of each identified cluster. Finally, we also used city-specific socioeconomic covariates to analyze the composition of each cluster. A pipeline for our analytical framework is described in the following section.
\begin{figure}
\centering
\includegraphics[scale=0.2]{heatmap_panel.pdf}
\caption{The dendrograms show 3 hierarchical clusterings of cities (a), (b) and (c) respectively based on ($N$, $M$, $t$), ($N$, $\Delta M$, $t$) and ($N$, $M'$, $t$) using Ward's linkage. Based on visual inspection of the seriated distance matrix, 10 clusters were identified in each case, as shown on the heatmaps.}
\label{fig:f1}
\end{figure}
\section{Data and Methods}
\subsection{Datasets}
\subsubsection{COVID-19 incidence and population data}
Based on cumulative COVID-19 cases data from the Johns Hopkins Coronavirus Resource Center (\url{https://coronavirus.jhu.edu/}), for this study, we compiled time series data on daily new cases of the disease for more than 300 U.S. counties from 32 states and the District of Columbia, matched by five-digit FIPS code or county name to dynamic and static variables from additional data sources. Since a single county may consist of multiple individual cities, we include the list of all city labels within each aggregate group to represent a greater metropolitan area. A total of 151 such metropolitan areas that had at least 1,000 reported cases of COVID-19 by May 31, 2020, were selected for this study. Population covariates for these areas were collected from the online resources of the U.S. Census Bureau and the U.S. Centers for Disease Control and Prevention (CDC) (\url{https://www.census.gov/quickfacts/}, \url{https://svi.cdc.gov/}).
\subsubsection{Human mobility index data}
Anonymized geolocated mobile phone data from several providers including Google and Apple, timestamped with local time, were recently made available for analysis of human mobility patterns during the pandemic. Based on geolocation pings from a collection of mobile devices reporting consistently throughout the day, anonymous aggregated mobility indices were calculated for each county at Descartes Labs. The maximum distance moved by each device from its first reported location, after excluding outliers, was calculated. Using this value, the median across all devices in the sample was computed to generate a mobility metric for select locations at county level. Descartes Labs further defines a normalized mobility index as a proportion of the median of the maximum distance mobility to the ``normal'' median during an earlier time-period multiplied by a factor of 100. Thus, the mobility index provides a baseline comparison to evaluate relative changes in population behavior during the COVID-19 pandemic \cite{warren2020mobility}.
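A simplified sketch of how such a county-level index can be assembled from per-device daily maximum-distance records is given below (Python with pandas assumed; the column names and the baseline window are hypothetical and only illustrate the normalization, not the provider's exact pipeline).
\begin{verbatim}
import pandas as pd

# df: one row per device per day, with hypothetical columns
#   'fips', 'date', 'max_km' (max distance from first report, outliers removed)
def mobility_index(df, baseline_start="2020-02-17", baseline_end="2020-03-07"):
    # median of the per-device maximum distance, per county and day
    m50 = df.groupby(["fips", "date"])["max_km"].median().rename("m50").reset_index()

    # county-level "normal" value: median over a pre-pandemic baseline window
    base = m50[(m50["date"] >= baseline_start) & (m50["date"] <= baseline_end)]
    normal = base.groupby("fips")["m50"].median().rename("m50_normal")

    # normalized mobility index: proportion of the baseline, times 100
    out = m50.join(normal, on="fips")
    out["mobility_index"] = 100.0 * out["m50"] / out["m50_normal"]
    return out
\end{verbatim}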
\begin{figure}
\centering
\includegraphics[scale=0.6]{boxplot_final.pdf}
\caption{The boxplots show the differences across the identified 10 clusters of cities in terms of the values of the 8 most significant covariates: (a) Reaction Time (RT), (b) hispanic percent, (c) black percent, (d) population size, (e) senior percent, (f) population density 2010, (g) persons per household, and (h) SVI ses. We jittered the overlapping RT points for easy visualization.}
\label{fig:f6}
\end{figure}
\subsection{Methods}
Below we list the steps of the overall workflow of our framework, and briefly describe the same in the following paragraphs of this section.
\begin{algorithm}
\caption{The workflow of the analytical framework}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Steps of the Analysis:}}
\REQUIRE {For each of $k (=151)$ given cities, a bivariate time series: mobility ($M$) and new cases ($N$) for each date ($t$) over a fixed time-interval (March 1 -- May 31, 2020).}
\ENSURE .
\STATE As measures of mobility, along with $M$, also consider its variants $\Delta M$ and $M'$ computed with equations \ref{eq1} and \ref{eq2}.
\STATE Perform normalized ranking of the variables ($M$/$\Delta M$/$M'$, $N$ and $t$) to represent each city as a discrete set of ranked points in the unit cube $[0, 1]^3$.
\STATE Compute optimal transport (OT) distance between the pointsets representing each pair of cities.
\STATE Cluster the cities based on the OT distance matrix. Three different hierarchical clusterings $HC1$, $HC2$ and $HC3$ were obtained based on Ward's linkage method and 3 variants of mobility: $M$, $\Delta M$, and $M'$ respectively.
\STATE Apply HCMapper to compare the dendrograms of different clusterings ($HC1$, $HC2$ and $HC3$). Select the clustering ($HC3$) that yields the most spatially homogeneous clusters.
\STATE Compute Wasserstein barycenter for each cluster of the selected clustering ($HC3$).
\STATE Analyze the composition of the clusters by applying random forest classifier on 15 city-specific covariates as feature set. Identify the contributions of the covariates to discriminate among the clusters.
\end{algorithmic}
\end{algorithm}
\subsubsection{Temporal patterns of mobility}
To better understand the temporal patterns of mobility, in addition to the given non-negative mobility index $M$, we also use two variants: delta mobility $\Delta M$ and $M'$ defined as follows:
\begin{equation}
\Delta M(t)= M(t)-M(t-1)\\
\label{eq1}
\end{equation}
and
\begin{equation}
M'(t)=((M(t)-M(t-1))+0.5(M(t+1)-M(t-1)))/2.
\label{eq2}
\end{equation}
Here $\Delta M$ is the first difference, and $M'$ is approximately the local derivative \cite{keogh2001derivative}, of the time series $M$; unlike $M$, these variants are not restricted to be non-negative.
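A direct implementation of (\ref{eq1}) and (\ref{eq2}) on a daily mobility series is straightforward; the short Python sketch below (NumPy assumed) computes both variants, leaving the time points where the stencils are unavailable undefined.
\begin{verbatim}
import numpy as np

def delta_mobility(M):
    """First difference Delta M(t) = M(t) - M(t-1); undefined at t = 0."""
    M = np.asarray(M, dtype=float)
    dM = np.full_like(M, np.nan)
    dM[1:] = M[1:] - M[:-1]
    return dM

def derivative_mobility(M):
    """Approximate local derivative M'(t); undefined at the two endpoints."""
    M = np.asarray(M, dtype=float)
    Mp = np.full_like(M, np.nan)
    Mp[1:-1] = ((M[1:-1] - M[:-2]) + 0.5 * (M[2:] - M[:-2])) / 2.0
    return Mp
\end{verbatim}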
\subsubsection{Representing a city as a discrete set of points} With the above definitions, the temporal relationship between mobility (and its variants) and new cases of each city in our data can be depicted as tuples ($M/\Delta M/M'$, $N$, $t$). We represent the time series by performing a normalized ranking of the variables so as to represent each city by a discrete set of points in the unit cube $[0, 1]^3$. This normalized ranking is frequently used as an estimator for empirical copulas with good convergence properties \cite{deheuvels1980non}. The cities can have different representations by considering the three definitions of mobility metrics, and in each case, we can have different groupings of cities. A comparative analysis of all groupings can provide a correlation structure between groups of cities from different perspectives.
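A minimal sketch of this representation (Python with SciPy assumed; time points with undefined $M'$ are dropped beforehand) ranks each coordinate and rescales the ranks so that every city becomes a point cloud in the unit cube:
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def city_point_cloud(mobility, new_cases):
    """One city as ranked points (mobility, new cases, time) in the unit cube."""
    t = np.arange(len(new_cases))
    X = np.column_stack([mobility, new_cases, t]).astype(float)
    # normalized ranks (empirical copula pseudo-observations), column by column
    n = X.shape[0]
    U = np.column_stack([rankdata(X[:, j]) / n for j in range(X.shape[1])])
    return U
\end{verbatim}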
\subsubsection{Comparing cities using optimal transport}
To compare the temporal dependence between mobility and new cases across two cities, we use the Wasserstein distance from optimal transport theory. We compute the Wasserstein distance between two discrete sets of points in the unit cube, corresponding to two cities, as the minimum cost of transforming the discrete distribution of one set of points into the other. It can be computed without such steps as fitting kernel densities or arbitrary binning that can introduce noise into the data. The Wasserstein distance between two distributions on a given metric space $\mathcal{M}$ is conceptualized as the minimum ``cost'' to transport or morph one pile of dirt into another -- the so-called `earth mover's distance'. This ``global'' minimization over all possible ways to morph takes into consideration the ``local'' cost of morphing each grain of dirt across the piles \cite{peyre2019computational}.
Given a metric space $\mathcal{M}$, the distance is the minimal cost of optimally transporting a probability measure $\mu$ defined over $\mathcal{M}$ into another measure $\nu$:
\begin{equation}
W_p(\mu,\nu)=\left(\inf_{\lambda \in \tau(\mu,\nu)} \int _{\mathcal{M} \times \mathcal{M}} d(x,y)^p d\lambda (x,y)\right)^{1/p},
\end{equation} where $p \ge 1$ and $\tau(\mu,\nu)$ denotes the collection of all measures on $\mathcal{M}\times \mathcal{M}$ with marginals $\mu$ and $\nu$. The intuition and motivation of this metric come from the optimal transport problem, a classical problem in mathematics that was first introduced by the French mathematician Gaspard Monge in 1781 and later formalized in a relaxed form by L.~Kantorovitch in 1942.
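In practice, the distance between the point clouds of two cities can be computed with the POT library; a minimal sketch (uniform weights on the points and the choice $p=2$ are assumptions of this illustration) is given below.
\begin{verbatim}
import numpy as np
import ot  # POT: Python Optimal Transport

def city_distance(U1, U2, p=2):
    """Wasserstein-p distance between two cities' ranked point clouds."""
    a = np.full(U1.shape[0], 1.0 / U1.shape[0])    # uniform weights
    b = np.full(U2.shape[0], 1.0 / U2.shape[0])
    C = ot.dist(U1, U2, metric="euclidean") ** p   # ground costs d(x, y)^p
    return ot.emd2(a, b, C) ** (1.0 / p)           # exact optimal transport cost
\end{verbatim}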
\subsubsection{Clustering the cities}
Upon computing the optimal transport based distance for each pair of cities, hierarchical clustering of the cities is performed using Ward's minimum variance method \cite{inbook}. For the 3 variants of mobility ($M/\Delta M/M'$), we obtained 3 different hierarchical clusterings: $\mathrm{HC}{}1$, $\mathrm{HC}{}2$ and $\mathrm{HC}{}3$, respectively. Based on visual inspection of the distance matrix seriated by the hierarchical clustering, and looping over the number of clusters, we took a relevant flat cut in the dendrogram. In each case, we obtained 10 clusters, each consisting of cities that are similar with respect to the dependence between their mobility and new cases.
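Given the pairwise distance matrix, this step can be carried out with SciPy; a sketch is given below (the flat cut at 10 clusters mirrors the choice described above).
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_cities(D, n_clusters=10):
    """Ward hierarchical clustering from a symmetric distance matrix D."""
    condensed = squareform(D, checks=False)    # condensed form required by linkage
    Z = linkage(condensed, method="ward")      # Ward's minimum variance linkage
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    return Z, labels
\end{verbatim}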
\subsubsection{Comparing the clusterings}
The resulting clusters are compared using a visualization tool called HCMapper \cite{marti2015hcmapper}. HCMapper can compare a pair of dendrograms of two different hierarchical clusterings computed on the same dataset. It aims to find clustering singularities between two models by displaying multiscale partition-based layered structures. The three different clustering results are compared with HCMapper to bring out the structural instabilities of the clustering hierarchies. In particular, the display graph of HCMapper has $n$ columns, where $n$ represents the number of hierarchies we want to compare (here $n=3$). Each column consists of the same number of flat clusters, which are depicted as rectangles within the column. Rectangle size is proportional to the number of cities within a cluster, while an edge between two clusters indicates the number of cities they share. Thus, a one-to-one mapping between the clusters of two columns indicates nearly identical clusterings, while many crossing edges between two columns indicate dissimilar structures.
We also checked the spatial homogeneity of a clustering in terms of the average number of clusters to which the cities of each state were assigned, over all states represented in our data. Moran's $I$ was also computed to assess the spatial correlation among the cluster labels.
\subsubsection{Summarizing the distinctive cluster patterns}
We summarize the overall pattern of each identified cluster by computing its barycenter in Wasserstein space. It efficiently describes the underlying temporal dependence between the measures of mobility (here we use $M'$) and incidence within each cluster.
Wasserstein distances have several important theoretical and practical properties \cite{villani2008optimal, pele2009fast}. Among these, the barycenter in Wasserstein space is an appealing concept that has already shown high potential in applications in artificial intelligence, machine learning, and statistics \cite{carlier2015numerical,le2017existence,benamou2015iterative,cuturi2014fast}.
A Wasserstein barycenter \cite{agueh2011barycenters, cuturi2014fast} of $n$ measures $\nu_1 \ldots \nu_n$ in $\mathbb{P} \in P(\mathcal{M})$ is defined as a minimizer of the function $f$ over $\mathbb{P}$, where
\begin{equation}
f(\mu)=\frac{1}{N}\sum_{i=1}^N W_p^p(\nu_i,\mu).
\end{equation}
A fast algorithm \cite{cuturi2014fast} was proposed to minimize the sum of optimal transport distances from one measure (the variable) to a set of fixed measures using gradient descent. These gradients are computed using matrix scaling algorithms at a considerably lower computational cost. We used the method proposed in \cite{cuturi2014fast}, as implemented in the POT library (\url{https://pythonot.github.io/}), to compute the barycenter of each cluster.
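Since each cluster is a collection of point clouds with free support, its barycenter can be computed with POT's free-support solver; a minimal sketch (the size of the barycenter support and the random initialization are arbitrary choices of this illustration) is:
\begin{verbatim}
import numpy as np
import ot

def cluster_barycenter(point_clouds, support_size=100, seed=0):
    """Wasserstein barycenter of the ranked point clouds of one cluster."""
    rng = np.random.default_rng(seed)
    locations = [np.asarray(U, dtype=float) for U in point_clouds]
    weights = [np.full(U.shape[0], 1.0 / U.shape[0]) for U in locations]
    X_init = rng.uniform(size=(support_size, locations[0].shape[1]))
    return ot.lp.free_support_barycenter(locations, weights, X_init)
\end{verbatim}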
\subsubsection{Analysis of the clusters using static covariates}
To understand the composition of the identified clusters, i.e., what could explain the similarity in the temporal dependence between mobility and new cases of the cities that belong to a cluster, we used different city-specific population covariates, while checking their relative contributions to discriminating among the clusters. These covariates include (a) date of stay-at-home order, (b) population size, (c) persons per household, (d) senior percentage, (e) black percent, (f) hispanic percent, (g) poor percent, (h) population density 2010, (i) SVI ses, (j) SVI minority, (k) SVI overall, and (l) Gini index. Here SVI stands for the CDC's Social Vulnerability Index, and ``ses'' for socioeconomic status. In addition, we also computed the `reaction time' (RT) of each city as the number of days between the stay-at-home order of a given city and a common reference starting date (taken as March 15, 2020).
This step also provides a form of external validation of the clustering results as none of the above covariates were used for clustering. To demonstrate, we conducted this step with the clustering $\mathrm{HC}{}3$ obtained from the time series $M'$.
Using the covariates as features of the cities, a random forest classifier is trained to learn the cluster labels. The aim is to see how well the clustering can be explained by the covariates. To find which features contribute most to discriminating among the clusters of cities, we computed the mean Shapley values \cite{NIPS2017_7062}. A Shapley value quantifies the magnitude of the impact of a feature on the classification task. The ranking of the covariates/features based on the mean Shapley values determines the most relevant features in this regard.
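This step can be reproduced with scikit-learn and the shap package; the sketch below (the feature matrix, cluster labels, and hyperparameters are assumed to be prepared as described above, and the layout of the returned Shapley values may differ across shap versions) trains the classifier and ranks covariates by mean absolute Shapley value.
\begin{verbatim}
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def rank_covariates(X, cluster_labels, feature_names, seed=0):
    """Random forest on city covariates; rank features by mean |Shapley value|."""
    model = RandomForestClassifier(n_estimators=500, random_state=seed)
    model.fit(X, cluster_labels)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)   # one array per cluster label
    importance = np.mean([np.abs(sv).mean(axis=0) for sv in shap_values], axis=0)

    order = np.argsort(importance)[::-1]
    return [(feature_names[i], float(importance[i])) for i in order]
\end{verbatim}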
\section{Results}
In this study, we used bivariate time series on daily values of mobility index and COVID-19 incidence over a 3-month time-period (March 1 -- May 31, 2020) for 151 U.S. cities that had reported at least 1,000 cases by the end of this period. By transforming the data for each city to a corresponding discrete set of ranked points in the unit cube, we computed the Optimal Transport distance as a measure of temporal dependency between mobility and new cases for each pair of cities. Three definitions of mobility ($M$/$\Delta M$/$M'$) allowed us to generate 3 hierarchical clusterings: $HC1$, $HC2$ and $HC3$, as shown in Figure \ref{fig:f1} and Table \ref{longtab}. Each of the clusterings yielded 10 clusters of cities, which were compared for their sizes, singularities and divergences by the tool HCMapper, as shown in Figure \ref{fig:f2}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{hcmapper.pdf}
\caption{HCMapper is used for comparison of 3 hierarchical clusterings of cities based on $\mathrm{HC}{}1$($N$, $M$, $t$), $\mathrm{HC}{}2$($N$, $\Delta M$, $t$) and $\mathrm{HC}{}3$($N$, $M'$, $t$). The cluster sizes and divergences across the clusterings are shown with blue rectangles and grey edges respectively.}
\label{fig:f2}
\end{figure}
Among the clusterings, $HC3$ appeared to have clusters of consistent sizes, and also the fewest singularities and divergences. Further, when we mapped the counties representing the cities with cluster-specific colors, as shown in Figure \ref{fig:f3}, we observed that the $HC3$ clusters showed high spatial correlation (Moran's $I$ p-value of 0). They also showed the fewest disagreements among the cluster assignments of cities within each state, although some states like California and Florida contained cities from more than one cluster (see Table \ref{longtab}). We looked into possible explanations of such cluster-specific differences using local covariates, as described below.
\begin{figure}
\centering
\includegraphics[scale=2.3]{map_mobdash_new1.png}
\caption{The geographic distribution of the 10 identified clusters by $HC3$ is shown. The county corresponding to each city is mapped in its cluster-specific color.}
\label{fig:f3}
\end{figure}
Given this study's assumption that the dynamic relationship between mobility and COVID-19 incidence changes not only over time but also across locations and populations, we computed Wasserstein barycenters of the 10 identified clusters, as shown in Figure \ref{fig:f4}, to describe the overall dependency structure specific to each cluster. The temporal changes in the dependencies are shown in 3-dimensional plots, with the shading changing from light (early points) to dark green (later points) along the z-axis (time).
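A simple way to obtain such cluster-level summaries is the fixed-point iteration for free-support Wasserstein barycenters sketched below (Python, POT); the initialization, the number of support points, and the number of iterations are illustrative choices.
\begin{verbatim}
# Free-support Wasserstein barycenter of the cities in one cluster.
import numpy as np
import ot

def cluster_barycenter(point_sets, n_support=90, n_iter=20, seed=0):
    """point_sets: list of (days x 3) arrays for the cities in one cluster."""
    rng = np.random.default_rng(seed)
    X = point_sets[rng.integers(len(point_sets))][:n_support].copy()
    a = np.full(len(X), 1.0 / len(X))
    for _ in range(n_iter):
        X_new = np.zeros_like(X)
        for Y in point_sets:
            b = np.full(len(Y), 1.0 / len(Y))
            T = ot.emd(a, b, ot.dist(X, Y))   # optimal plan barycenter -> city
            X_new += (T @ Y) / a[:, None]     # barycentric projection
        X = X_new / len(point_sets)           # average over the cluster
    return X
\end{verbatim}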
\begin{figure}
\centering
\includegraphics[scale=0.25]{panel_cluster.pdf}
\caption{The overall temporal pattern of dependency between normalized measures of mobility and COVID-19 incidence for each identified cluster of cities is shown along 3-dimensions ($N$, $M'$, $t$). The Wasserstein barycenters of the 10 clusters are depicted within the unit cube with the darker dots representing later points in time (z-axis).}
\label{fig:f4}
\end{figure}
Finally, we sought to understand the factors that possibly underlie the dynamic patterns of each cluster described above. Towards this, our results from random forest classification identified socioeconomic characteristics (or covariates) of the cities that could discriminate among the assigned cluster labels. The 8 most discriminating covariates are shown in Figure \ref{fig:f5} along with their cluster-specific contributions measured by the mean Shapley values. Notably, none of these covariates were used for clustering, and yet they are able to discriminate among the clusters. Figure \ref{fig:f6} shows the distinctive distributions of these covariates across the 10 identified clusters as boxplots. Reaction time is robustly the first and major contributor, which is indicative of the effect of stay-at-home orders on the different patterns of COVID-19 dynamics.
\begin{figure}
\centering
\includegraphics[scale=0.6]{shaple_val.pdf}
\caption{The relative contributions of the 8 most significant static city-specific covariates in discriminating the 10 clusters identified by $\mathrm{HC}{}3$, shown with different colors. The contributions towards each cluster are measured by mean Shapley values for each covariate.}
\label{fig:f5}
\end{figure}
\vspace{0.1in}
{
\topcaption{Table of 151 cities with their respective date (mm.dd.2020) of stay-at-home order, reaction time (RT), and cluster labels assigned by HC1, HC2 and HC3. The absence of a stay-at-home order is denoted by NA.\label{longtab}}
\centering
\begin{supertabular}{|p{1.3in}|p{0.5in}|p{0.2in}|p{0.2in}|p{0.2in}|p{0.2in}|}\hline
County & Date & RT & HC1 & HC2 & HC3 \\ \hline
Jefferson, AL & 4.4 & 20 & 1 & 1 & 1 \\ \hline
Mobile, AL & 4.4 & 20 & 1 & 1 & 1 \\ \hline
Montgomery, AL & 4.4 & 20 & 1 & 1 & 1 \\ \hline
Maricopa, AZ & 3.31 & 16 & 1 & 1 & 1 \\ \hline
Pima, AZ & 3.31 & 16 & 3 & 1 & 1 \\ \hline
Yuma, AZ & 3.31 & 16 & 3 & 1 & 1 \\ \hline
Alameda, CA & 3.19 & 4 & 3 & 1 & 1 \\ \hline
Contra Costa, CA & 3.19 & 4 & 3 & 2 & 1 \\ \hline
Fresno, CA & 3.19 & 4 & 3 & 2 & 1 \\ \hline
Kern, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Los Angeles, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Orange, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Riverside, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Sacramento, CA & 3.19 & 4 & 2 & 2 & 3 \\ \hline
San Bernardino, CA & 3.19 & 4 & 2 & 2 & 3 \\ \hline
San Diego, CA & 3.19 & 4 & 2 & 2 & 3 \\ \hline
San Francisco, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
San Mateo, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Santa Barbara, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Santa Clara, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Tulare, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Ventura, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Adams, CO & 3.26 & 11 & 2 & 9 & 3 \\ \hline
Arapahoe, CO & 3.26 & 11 & 2 & 9 & 2 \\ \hline
Denver, CO & 3.26 & 11 & 2 & 9 & 2 \\ \hline
El Paso, CO & 3.26 & 11 & 2 & 9 & 2 \\ \hline
Jefferson, CO & 3.26 & 11 & 10 & 9 & 2 \\ \hline
Weld, CO & 3.26 & 11 & 10 & 9 & 2 \\ \hline
Fairfield, CT & 3.23 & 8 & 10 & 9 & 2 \\ \hline
Hartford, CT & 3.23 & 8 & 10 & 9 & 2 \\ \hline
New Haven, CT & 3.23 & 8 & 10 & 9 & 2 \\ \hline
New Castle, DE & 3.24 & 9 & 10 & 9 & 2 \\ \hline
Washington, DC & 4.1 & 17 & 10 & 9 & 2 \\ \hline
Broward, FL & 4.3 & 19 & 10 & 9 & 2 \\ \hline
Duval, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Hillsborough, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Lee, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Miami-Dade, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Orange, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Palm Beach, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Pinellas, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Polk, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
DeKalb, GA & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Dougherty, GA & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Fulton, GA & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Cook, IL & 3.21 & 6 & 9 & 8 & 9 \\ \hline
DuPage, IL & 3.21 & 6 & 9 & 8 & 9 \\ \hline
Kane, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Lake, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Will, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Winnebago, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Allen, IN & 3.24 & 9 & 9 & 8 & 10 \\ \hline
Hamilton, IN & 3.24 & 9 & 9 & 8 & 10 \\ \hline
Lake, IN & 3.24 & 9 & 8 & 8 & 10 \\ \hline
Marion, IN & 3.24 & 9 & 8 & 8 & 10 \\ \hline
St. Joseph, IN & 3.24 & 9 & 8 & 5 & 10 \\ \hline
Black Hawk, IA & NA & 85 & 8 & 5 & 10 \\ \hline
Polk, IA & NA & 85 & 8 & 5 & 10 \\ \hline
Woodbury, IA & NA & 85 & 8 & 5 & 10 \\ \hline
Wyandotte, KS & 3.3 & 15 & 8 & 5 & 10 \\ \hline
Jefferson, KY & 3.26 & 11 & 8 & 5 & 10 \\ \hline
Caddo, LA & 3.23 & 8 & 8 & 5 & 10 \\ \hline
East Baton Rouge, LA & 3.23 & 8 & 8 & 5 & 10 \\ \hline
Jefferson, LA & 3.23 & 8 & 8 & 5 & 10 \\ \hline
Orleans, LA & 3.23 & 8 & 8 & 5 & 7 \\ \hline
Cumberland, ME & 4.2 & 18 & 8 & 5 & 7 \\ \hline
Baltimore City, MD & 3.3 & 15 & 8 & 5 & 7 \\ \hline
Bristol, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Essex, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Hampden, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Middlesex, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Norfolk, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Plymouth, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Suffolk, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Worcester, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Genesee, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Kent, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Macomb, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Oakland, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Washtenaw, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Wayne, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Hennepin, MN & 3.27 & 12 & 8 & 5 & 7 \\ \hline
Ramsey, MN & 3.27 & 12 & 8 & 5 & 7 \\ \hline
Hinds, MS & 4.3 & 19 & 8 & 5 & 8 \\ \hline
St. Louis City, MO & 4.6 & 22 & 8 & 5 & 8 \\ \hline
Douglas, NE & NA & 85 & 8 & 5 & 8 \\ \hline
Lancaster, NE & NA & 85 & 6 & 5 & 8 \\ \hline
Clark, NV & 4.1 & 17 & 6 & 5 & 8 \\ \hline
Washoe, NV & 4.1 & 17 & 6 & 5 & 8 \\ \hline
Hillsborough, NH & 3.27 & 12 & 6 & 5 & 8 \\ \hline
Camden, NJ & 3.21 & 6 & 6 & 5 & 8 \\ \hline
Essex, NJ & 3.21 & 6 & 6 & 5 & 8 \\ \hline
Hudson, NJ & 3.21 & 6 & 6 & 5 & 8 \\ \hline
Mercer, NJ & 3.21 & 6 & 6 & 6 & 6 \\ \hline
Passaic, NJ & 3.21 & 6 & 6 & 6 & 6 \\ \hline
Union, NJ & 3.21 & 6 & 7 & 6 & 6 \\ \hline
Bernalillo, NM & 3.24 & 9 & 7 & 6 & 6 \\ \hline
Albany, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Erie, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
New York City, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Onondaga, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Westchester, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Durham, NC & 3.3 & 15 & 7 & 6 & 6 \\ \hline
Forsyth, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Guilford, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Mecklenburg, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Wake, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Cass, ND & NA & 85 & 4 & 7 & 6 \\ \hline
Cuyahoga, OH & 3.23 & 8 & 4 & 7 & 6 \\ \hline
Franklin, OH & 3.23 & 8 & 4 & 7 & 5 \\ \hline
Hamilton, OH & 3.23 & 8 & 4 & 7 & 5 \\ \hline
Lucas, OH & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Mahoning, OH & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Summit, OH & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Oklahoma, OK & NA & 85 & 4 & 3 & 5 \\ \hline
Multnomah, OR & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Allegheny, PA & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Berks, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Lackawanna, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Lehigh, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Northampton, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Philadelphia, PA & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Kent, RI & 3.28 & 13 & 5 & 3 & 5 \\ \hline
Providence, RI & 3.28 & 13 & 5 & 3 & 5 \\ \hline
Richland, SC & NA & 85 & 5 & 3 & 5 \\ \hline
Minnehaha, SD & NA & 85 & 5 & 4 & 5 \\ \hline
Davidson, TN & 3.31 & 16 & 5 & 4 & 5 \\ \hline
Rutherford, TN & 3.31 & 16 & 5 & 4 & 5 \\ \hline
Shelby, TN & 3.31 & 16 & 5 & 4 & 5 \\ \hline
Bexar, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Collin, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Dallas, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Denton, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
El Paso, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Fort Bend, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Harris, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Potter, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Tarrant, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Travis, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Salt Lake, UT & NA & 85 & 5 & 4 & 4 \\ \hline
Utah, UT & NA & 85 & 5 & 4 & 4 \\ \hline
Alexandria, VA & 3.3 & 15 & 5 & 4 & 4 \\ \hline
Richmond City, VA & 3.3 & 15 & 5 & 4 & 4 \\ \hline
King, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Pierce, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Snohomish, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Yakima, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Brown, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
Kenosha, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
Milwaukee, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
Racine, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
\end{supertabular}
}
\section{Discussion}
The U.S. stands alone among countries in the industrialized world in that the expected “flattening of the curve” has not yet taken place. By May 31, 2020, there were 1.8 million confirmed COVID-19 cases and 99,800 deaths. Forty-five states were in various phases of re-opening and 5 states did not have shelter-in-place orders. By mid-June, cases had started to rise, and as of June 26, there were 2.5 million confirmed cases and over 120,000 deaths. Some states that had begun to re-open parts of their economy have paused or delayed opening in the face of a surge of new cases.
Estimating the impact of mitigation strategies on cases and deaths in the U.S. is challenging, particularly due to the lack of uniformity in timing, implementation, enforcement, and adherence across states. Nevertheless, early observations point to the utility of such measures, particularly shelter-in-place orders, in reducing infection spread and deaths (per data from California and Washington State) \cite{washin}. Counties implementing shelter-in-place orders were associated with a 30.2\% reduction in weekly cases after 1 week, a 40\% reduction after 2 weeks, and a 48.6\% reduction after 3 weeks \cite{fowler2020effect}. Conversely, model projections estimate a steady rise in cases and over 181,000 deaths if such mitigation strategies were to be eased and not re-enforced before October 1 \cite{washin1}.
Many researchers worldwide are currently investigating the changes in social and individual behaviors in response to the sudden yet prolonged outbreaks of COVID-19, e.g., \cite{adiga2020interplay,dascritical}. As the pandemic progresses, and until medical treatments or vaccination are available, new and diverse patterns of mobility, be they voluntary or via interventions, may emerge in each society. It is, therefore, of great importance to epidemiologists and policy-makers to understand the dynamic patterns of dependency between human mobility and COVID-19 incidence in order to precisely evaluate the impact of such measures. In this study, we have shown that such dependencies not only change over time but across locations and populations, and are possibly determined by underlying socioeconomic characteristics. Our analytical approach is particularly relevant considering the high socioeconomic costs of such measures.
We understand that our study has some limitations. We note that each step of our framework could be improved in isolation or as a pipeline, which we aim to do in future work. We have also developed a prototype of an interactive tool to run the steps of our analytical pipeline online. It will be made publicly available shortly upon completion.
Here it is important to note the so-called ecological fallacy of inferring individual health outcomes from data or results that are obtained at the city or county level. Such inference may suffer from incorrect assumptions and biases, which, however unintentional, must be avoided. Any views reflected in the analysis or results of our study are those of the authors only, and not of the organizations they are associated with.
\section{Introduction}
Condensed matter offers a fascinating test bed to explore different concepts in non-relativistic and relativistic quantum field theories~\cite{Coleman:2003ku}, with some prominent examples being massless Dirac quasiparticles in graphene~\cite{Novoselov:2005es}, Majorana fermions in superconductors~\cite{Fu:2008gu, Mourik:2012je, Rokhinson:2012ep}, and anyons in two-dimensional electron gases~\cite{Bartolomei:2020gs}. In ultrathin ferromagnets, chiral interactions of the Dzyaloshinskii-Moriya form~\cite{Dzyaloshinsky:1958vq, Moriya:1960go, Moriya:1960kc, Fert:1980hr, Crepieux:1998ux, Bogdanov:2001hr} allow for the existence of skyrmions~\cite{Bogdanov:1989vt, Bogdanov:1994bt}, which are topological soliton solutions to a nonlinear field theory bearing resemblance to a model for mesons and baryons proposed by Skyrme~\cite{Skyrme:1961vo, Skyrme:1962tr}. While these two-dimensional particles have been actively studied for their potential in information storage applications~\cite{Kiselev:2011cm, Sampaio:2013kn}, their original ties to nucleons have been revisited through three-dimensional extensions called hopfions~\cite{Sutcliffe:2017da}, which also provide an intriguing connection to Kelvin's proposal for a vortex theory of atoms~\cite{Thomson:1867}.
Pairs of skyrmions and antiskyrmions, their antiparticle counterpart, can be generated in a variety of ways, such as nucleation under local heating~\cite{Koshibae:2014fg}, homogeneous spin currents~\cite{Stier:2017ic, EverschorSitte:2018bn}, and surface acoustic waves~\cite{Yokouchi:2020cl}. Pairs also appear in ultrathin chiral ferromagnets with frustrated exchange interactions when the magnetization dynamics is driven by spin-orbit torques (SOTs)~\cite{Ritzmann:2018cc}. While both skyrmions and antiskyrmions are metastable states in such systems~\cite{Leonov:2015iz, Lin:2016hh, Rozsa:2017ii}, their motion can be qualitatively different under spin-orbit torques~\cite{Ritzmann:2018cc}. In particular, an antiskyrmion driven beyond its Walker limit can shed skyrmion-antiskyrmion pairs, much like the vortex-antivortex pairs produced during vortex core reversal~\cite{VanWaeyenberge:2006io}, which are then driven apart by the SOTs. Because such nonlinear processes are observed to involve a variety of creation and annihilation events involving particles and antiparticles, the intriguing analogies with high-energy physics compel us to explore whether this system could offer any insight, albeit tangential, into the more general question of matter-antimatter asymmetry in the universe. After all, the Sakharov conditions for baryogenesis~\cite{Sakharov:1967}, namely, baryon number violation, charge conjugation and combined charge conjugation-parity violation, and out-of-equilibrium interactions, appear to be naturally fulfilled in the aforementioned case: no conservation laws exist for the number of skyrmions and antiskyrmions, the Dzyaloshinskii-Moriya interaction (DMI) breaks chiral symmetry and lifts the degeneracy between skyrmion and antiskyrmions, and dissipative torques (spin-orbit and Gilbert damping) representing nonequilibrium processes play a crucial role in pair generation.
In this paper, we examine theoretically the microscopic processes leading to an imbalance in the number of skyrmions and antiskyrmions produced as a result of SOT-driven antiskyrmion dynamics. The remainder of this paper is organized as follows. In Sec. II, we describe the atomistic model used and the dynamics simulated. Section III discusses the main scattering processes that occur between an antiskyrmion and the generated skyrmion-antiskyrmion pair. Detailed phase diagrams of the generation processes are presented in Sec. IV, where the role of the SOTs and material parameters such as the strength of the Dzyaloshinskii-Moriya interaction and polarization angle are discussed. In Sec. V, we present the minimum-energy paths for two scattering processes. Finally, some discussion and concluding remarks are given in Sec. VI.
\section{Model and method}
The system studied is illustrated in Fig.~\ref{fig:geometry}(a).
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure01}
\caption{(a) Film geometry illustrating the Pd/Fe bilayer on an Ir(111) substrate, with schematic illustrations of a skyrmion $s$ and antiskyrmion $\bar{s}$. $\mathbf{B}$ is the applied field and $\theta_p$ is the angle associated with the spin polarization vector $\mathbf{p}$. (b) Phase diagram of antiskyrmion dynamics under fieldlike (FL) and dampinglike (DL) spin-orbit torques~\cite{Ritzmann:2018cc}.
}
\label{fig:geometry}
\end{figure}
Following Refs.~\onlinecite{Romming:2013iq, Dupe:2014fc, Ritzmann:2018cc} we consider a ferromagnetic PdFe bilayer, which hosts the skyrmions $s$ and antiskyrmions $\bar{s}$, on an Ir(111) substrate through which we assume an electric current flows in the film plane, resulting in a spin current generated by the spin Hall effect flowing in the $z$ direction and polarized along $\mathbf{p}$, which is characterized by the angle $\theta_p$ measured from the $x$ axis. A magnetic field $\mathbf{B}$ is applied along the $z$ direction, which defines the uniform background state of the PdFe system. We model the magnetic properties of the PdFe film with a hexagonal lattice of magnetic moments $\mathbf{m}_i$, one atomic layer in thickness, whose dynamics is solved by time integration of the Landau-Lifshitz equation with Gilbert damping and spin-orbit torques,
\begin{eqnarray}
\frac{d \mathbf{m} }{dt} = -\frac{1}{\hbar} \mathbf{m} \times \mathbf{B_{\mathrm{eff}}} + \alpha \mathbf{m} \times \frac{d \mathbf{m} }{dt} + \nonumber \\ \beta_\mathrm{FL} \mathbf{m} \times \mathbf{p} + \beta_\mathrm{DL} \mathbf{m} \times \left( \mathbf{m} \times \mathbf{p} \right),
\label{eq:LLG}
\end{eqnarray}
where $\alpha = 0.3$ is the damping constant and $\hbar \beta_\mathrm{FL}$ and $\hbar \beta_\mathrm{DL}$ characterize the strength of the fieldlike (FL) and dampinglike (DL) contributions of the spin-orbit torques, respectively, in meV. The effective field, $\mathbf{B}_i^{\mathrm{eff}}=-\partial \mathcal{H}/\partial \mathbf{m}_i$, is expressed here in units of energy and is derived from the Hamiltonian $\mathcal{H}$,
\begin{eqnarray}
\mathcal{H} = -\sum_{\langle ij \rangle} J_{ij} \mathbf{m}_i \cdot \mathbf{m}_j - \sum_{\langle ij \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{m}_i \times \mathbf{m}_j \right) \nonumber \\ - \sum_{i} K m_{i,z}^2 - \sum_{i} \mathbf{B} \cdot \mu_\mathrm{s}\mathbf{m}_i.
\label{eq:Hamiltonian}
\end{eqnarray}
The first term is the Heisenberg exchange interaction, which includes coupling up to ten nearest neighbors and involves frustrated exchange: $J_1 = 14.73$, $J_2=-1.95$, $J_3=-2.88$, $J_4=0.32$, $J_5=0.69$, $J_6=0.01$, $J_7=0.01$, $J_8=0.13$, $J_9=-0.14$, and $J_{10}=-0.28$, where all $J_{ij}$ are given in meV. The second term is the DMI between nearest neighbors, with $\mathbf{D}_{ij}$ oriented along $\hat{\mathbf{r}}_{ij} \times \hat{\mathbf{z}}$ and $ \| \mathbf{D}_{ij} \| = 1.0$ meV. The third term describes a uniaxial anisotropy along the $z$ axis with $K = 0.7$ meV. The fourth term represents the Zeeman interaction with the applied field $\mathbf{B}$, where we take $\mu_s = 2.7\mu_\mathrm{B}$ for iron. The material parameters are obtained from first-principles calculations of the layered system in Fig.~\ref{fig:geometry}(a)~\cite{Dupe:2014fc}. We note that the applied field of 20 T is only slightly greater than the critical field $B_c$, $B=1.06 B_c$, below which the magnetic ground state comprises a skyrmion lattice phase. Under these conditions, \emph{both} isolated skyrmions and antiskyrmions are metastable states due to the frustrated exchange interactions, with skyrmions being energetically favored by the DMI.
Figure~\ref{fig:geometry}(b) represents the phase diagram, indicating different dynamical regimes under SOTs for a system in which the initial state comprises a single isolated antiskyrmion (the ``seed''). The linear, deflected, and trochoidal regimes denote the motion involving single-particle dynamics, while annihilation represents the region in which the seed loses its metastability. The focus here is on $s\bar{s}$ pair generation, which predominantly occurs under small fieldlike torques and large dampinglike torques. We simulated the dynamics in a variety of system sizes $L \times L$ with periodic boundary conditions, with $L$ ranging from 100 to 800 in order to mitigate finite-size effects that primarily involve collisions from generated particles re-entering the simulation area. The time integration of Eq.~(\ref{eq:LLG}) was performed using the Heun method with a time step of 1 fs.
\section{Scattering processes}
The propensity for the initial seed $\bar{s}$ to produce particles ($s$) and antiparticles ($\bar{s}$) is determined by the scattering processes that immediately follow the formation of the $s\bar{s}$ pair, which depend on the strengths of $\beta_\mathrm{FL}$ and $\beta_\mathrm{DL}$. Three key scattering processes are illustrated in Fig.~\ref{fig:pairprocess} for $\theta_p = 0$.
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure02}
\caption{Main scattering processes following pair generation from the seed $\bar{s}$ under SOT. (a) Maximal production, minimal asymmetry process $(N=2,\eta=0)$ leading to proliferation in which the generated $s\bar{s}$ pair splits and collision between the seed and generated $\bar{s}$ conserves skyrmion number. (b) $(N=2,\eta=0)$ process leading to premature saturation or stasis, where collision between the seed and generated $\bar{s}$ proceeds through a transient $Q=-2$ state ($2\bar{s}$) before decaying to an $\bar{s}\bar{s}$ bound pair, preventing further generation. (c) Minimal production, maximal asymmetry process ($N=1,\eta =1$) in which the generated $s\bar{s}$ pair splits and collision between the seed and generated $\bar{s}$ is inelastic, leading to annihilation of seed $\bar{s}$. Crosses denote the point of reference in the film plane and the color map indicates the charge density $q$ of a unit cell. Arrows are shown for moments for which $\sqrt{m_{i,x}^2+m_{i,y}^2} > 0.9$, and the open circles denote the approximate position of the core.}
\label{fig:pairprocess}
\end{figure}
The different processes illustrated typically occur for specific ranges of fieldlike and dampinglike parameters, as will be discussed later. We use a color map based on the local topological (skyrmion) charge density $q$, which is computed from three neighboring moments $\mathbf{m}_{i}, \mathbf{m}_{j}, \mathbf{m}_{k}$ as~\cite{Bottcher:2019hf}
\begin{equation}
q_{ijk} = -\frac{1}{2\pi} \tan^{-1}\left[ \frac{\mathbf{m}_{i} \cdot \left(\mathbf{m}_{j} \times \mathbf{m}_{k} \right)}{1+ \mathbf{m}_{i} \cdot \mathbf{m}_{j} + \mathbf{m}_{i}\cdot \mathbf{m}_{k} + \mathbf{m}_{j}\cdot \mathbf{m}_{k}} \right].
\end{equation}
This represents the contribution from half a unit cell. We use $Q$ to denote the total charge, which represents a sum over $q_{ijk}$, and we adopt the convention where $Q=1$ for $s$ and $Q =-1$ for $\bar{s}$. The processes are characterized by their potential for particle production, measured by $N = N_s + N_{\bar{s}}$, which denotes the total numbers of skyrmions ($N_s$) and antiskyrmions ($N_{\bar{s}}$) produced from the initial antiskyrmion, and by the asymmetry in this production, which is measured by the parameter $\eta = (N_s - N_{\bar{s}})/N$. We consider only processes for which $N>0$ (the seed $\bar{s}$ is not included in this count). In Fig.~\ref{fig:pairprocess}(a), a maximal production ($N=2$), minimal asymmetry ($\eta=0$) scattering process leading to the proliferation of particles is shown for $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL}) = (0.02,1.5)$ meV. An $s\bar{s}$ pair nucleates from the tail of the $\bar{s}$ seed as it undergoes trochoidal motion, which then splits and is followed by a number-conserving collision between the two $\bar{s}$ particles. The $s$ particle escapes the zone of nucleation, and the two $\bar{s}$ particles become new sources of $s\bar{s}$ pair generation. In this scenario, $s$ and $\bar{s}$ are produced in equal numbers, and the process continues indefinitely but can be slowed by annihilation processes, which become more probable as the density of particles increases. In Fig.~\ref{fig:pairprocess}(b), a similar $N=2,\eta = 0$ process is shown for $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL}) = (0.1,1.35)$ meV, but here, the scattering between the two $\bar{s}$ results in a transient higher-order $Q=-2$ antiskyrmion state ($2\bar{s}$), which subsequently decays into an $\bar{s}\bar{s}$ bound pair that executes a rotational motion about its center of mass. As a result, further pair generation is suppressed. Figure~\ref{fig:pairprocess}(c) illustrates a minimal production ($N = 1$), maximal asymmetry ($\eta = 1$) process at $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL}) = (0.13,1.1)$ meV, where the scattering between the seed and generated $\bar{s}$ results in a process that does not conserve the topological charge, in which the seed $\bar{s}$ is annihilated; this takes place via the creation and annihilation of a meron-antimeron pair~\cite{Desplat:2019dn}. This scattering event leaves the generated $s$ to propagate away and the surviving $\bar{s}$ to restart the process.
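The charge bookkeeping used throughout can be summarized by the following sketch (Python); the triangulation of the hexagonal lattice is assumed to be available as index triplets, and arctan2 is used as a numerically robust form of the arctangent above.
\begin{verbatim}
# Topological charge density per lattice triangle and its signed totals.
import numpy as np

def triangle_charge(mi, mj, mk):
    """q_ijk for one triangle; mi, mj, mk are unit 3-vectors."""
    num = np.dot(mi, np.cross(mj, mk))
    den = 1.0 + np.dot(mi, mj) + np.dot(mi, mk) + np.dot(mj, mk)
    return -np.arctan2(num, den) / (2.0 * np.pi)

def total_charges(moments, triangles):
    """Return (Q, Q_s, Q_sbar) from all triangles of the lattice."""
    q = np.array([triangle_charge(*moments[list(t)]) for t in triangles])
    return q.sum(), q[q > 0].sum(), q[q < 0].sum()
\end{verbatim}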
Examples of the growth rates are given in Fig.~\ref{fig:genrate}, where $Q(t)$ is shown for the three cases presented in Fig.~\ref{fig:pairprocess}.
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure03}
\caption{Representative examples of different growth regimes of the total skyrmion charge, $Q$ for three values of $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL})$. (a) Proliferation, (0.02, 1.5) meV. (b) Stasis or premature saturation, (0.13, 1.1) meV. (c) Linear growth, (0.1, 1.35) meV.}
\label{fig:genrate}
\end{figure}
The data are obtained from simulations of a $500 \times 500$ system over $0.1$ ns with $\theta_p = 0$. Above this timescale, propagating particles can reenter the simulation geometry through the periodic boundary conditions, which results in spurious collisions and annihilation events. $Q_s$ and $Q_{\bar{s}}$ are found by separately summing the contributions from $q_{ijk}>0$ and $q_{ijk}<0$, respectively. Figure~\ref{fig:genrate}(a) illustrates the growth when the process in Fig.~\ref{fig:pairprocess}(a) dominates, in which case a proliferation of particles can be seen. Unlike the single event in Fig.~\ref{fig:pairprocess}(a), the growth in Fig.~\ref{fig:genrate}(a) also comprises processes such as those described in Figs.~\ref{fig:pairprocess}(b) and \ref{fig:pairprocess}(c), which results in an overall asymmetry in the production and a finite topological charge that increases with time. When the seed immediately undergoes the scattering process in Fig.~\ref{fig:pairprocess}(b), the generation stops for all future times, and a stasis regime is found [Fig.~\ref{fig:genrate}(b)]. Such processes can also occur after a certain time interval following proliferation, which results in premature saturation. Cases in which the scattering process in Fig.~\ref{fig:pairprocess}(c) repeats periodically result in an approximately linear growth in the number of skyrmions, as shown in Fig.~\ref{fig:genrate}(c).
\section{Generation phase diagrams}
A ($\beta_\mathrm{FL},\beta_\mathrm{DL}$) phase diagram of the skyrmion production and asymmetry is presented in Fig.~\ref{fig:phasediag}(a).
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure04}
\caption{(a) ($\beta_\mathrm{FL},\beta_\mathrm{DL}$) phase diagram illustrating the total number of skyrmions and antiskyrmions produced over 0.1 ns, where $N$ is represented by the circle size on a logarithmic scale and the asymmetry parameter is shown on a linear color scale. (b) $N$ and (c) $\eta$ as a function of DL torques for the proliferation regime (for different FL torques). (d) $N$ as a function of DL torques for linear growth ($\eta = 1$).}
\label{fig:phasediag}
\end{figure}
As for Fig.~\ref{fig:genrate}, the data were obtained after simulating 0.1 ns on a $500 \times 500$ spin system with periodic boundary conditions and $\theta_p = 0$. The size of the circles represents $N$ on a logarithmic scale, while the color code represents $\eta$ on a linear scale. Three different regimes can be identified visually as the strength of $\beta_\mathrm{FL}$ is increased. For low values of $\beta_\mathrm{FL}$ (primarily $\hbar\beta_\mathrm{FL} \lesssim 0.07$ meV), we observe a regime in which proliferation dominates where large numbers of $s$ and $\bar{s}$ are generated, which is mainly driven by the process in Fig.~\ref{fig:pairprocess}(a). Both $N$ and $\eta$ increase with the dampinglike torques in this regime, as shown in Figs.~\ref{fig:phasediag}(b) and \ref{fig:phasediag}(c), respectively, which can be understood from the fact that $\beta_\mathrm{DL}$ represents a nonconservative torque that transfers spin angular momentum into the system. For intermediate values of $\beta_\mathrm{FL}$ (primarily $0.08 \lesssim \hbar\beta_\mathrm{FL} \lesssim 0.11$ meV), a linear growth regime is seen which is characterized by $\eta \simeq 1$ and moderate values of $N$. As for the proliferation regime, the rate of production in the linear regime increases with $\beta_\mathrm{DL}$ as shown in Fig.~\ref{fig:phasediag}(d). Finally, for large values of $\beta_\mathrm{FL}$ (primarily $\hbar\beta_\mathrm{FL} \gtrsim 0.13$ meV) and close to the boundaries of the generation phase, we observe predominantly a stasis regime where generation stops after the nucleation of a single $s\bar{s}$ pair and the formation of a bound $\bar{s}\bar{s}$ state, as shown in Fig.~\ref{fig:pairprocess}(b).
The roles of DMI and the spin polarization angle are shown in Fig.~\ref{fig:thetad}, where $(\theta_p,D_{ij})$ phase diagrams for $N$ and $\eta$ are presented for the three distinct dynamical regimes discussed above: proliferation [(0.02, 1.5) meV, Fig.~\ref{fig:thetad}(a)], linear growth [(0.1, 1.35) meV, Fig.~\ref{fig:thetad}(b)], and stasis [(0.13, 1.1) meV, Fig.~\ref{fig:thetad}(c)], where the numbers in parentheses indicate values of $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL})$.
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure05}
\caption{($\theta_p,D_{ij}$) phase diagram illustrating the total number of skyrmions and antiskyrmions produced over 0.1 ns, where $N$ is represented by the circle size on a logarithmic scale and $\eta$ is shown on a linear color scale for (a) $(\beta_\mathrm{FL},\beta_\mathrm{DL}) =$ (0.02, 1.5) meV, (b) (0.1, 1.35) meV, and (c) (0.13, 1.1). (d) $\eta$ and $N$ as a function of $D_{ij}$ for the case in (a).}
\label{fig:thetad}
\end{figure}
A weak dependence on $\theta_p$ can be seen. This arises from the interplay between the SOT-driven dynamics of the antiskyrmion helicity, which possesses twofold rotational symmetry about the antiskyrmion core in its rest state, and the underlying hexagonal lattice structure, which introduces a weak lattice potential that arises because of the compact nature of the core~\cite{Ritzmann:2018cc}. Variations in the magnitude of $D_{ij}$, on the other hand, lead to greater changes in the qualitative behavior, where transitions between stasis, linear growth, and proliferation can be seen as $D_{ij}$ is increased for all three base cases considered. This behavior is exemplified in Fig.~\ref{fig:thetad}(d), where $N$ and $\eta$ are shown as a function of $D_{ij}$ for the cases shown in Fig.~\ref{fig:thetad}(a). These results also suggest that a finite threshold for $D_{ij}$ is required for pair generation to take place, a threshold that is also dependent on the strength of the SOT applied.
\section{Minimum-energy paths for merging and annihilation processes}
We can note that both stasis and proliferation states can be found at the phase boundaries. This results from the fact that the scattering processes in Figs.~\ref{fig:pairprocess}(b) and \ref{fig:pairprocess}(c) involve nearly identical energy barriers (in the absence of SOTs), where only slight differences in the relative helicities of the scattering $\bar{s}$ states determine the outcome. To see this, we look at minimum-energy paths (MEPs) on the multidimensional energy surface defined by the Hamiltonian in Eq.~(\ref{eq:Hamiltonian}) at $\beta_\mathrm{FL}=\beta_\mathrm{DL}=0$. We use the geodesic nudged elastic band method (GNEB)~\cite{Bessarab:2015method} to compute the MEPs, for which intermediate states of the system along the reaction coordinate are referred to as images.
\begin{figure*}[hbt]
\centering\includegraphics[width=17.5cm]{Figure06}
\caption{(a, b) Minimum-energy paths for the merging of the $\bar{s}\bar{s}$ pair into (a) a $2\bar{s}$ state and (b) an $\bar{s}$ state. The image indices are given in the bottom left corner. The associated energy profile along the (normalized) reaction coordinate, where (c) corresponds to the paths that results in the $2\bar{s}$ state and (d) corresponds to the path that results in the $\bar{s}$ state. The total topological charge remains constant at $Q=-2$ in (c), while its variation with the reaction coordinate is shown in (d). The inset in (c) shows the saddle point configuration (image 7), where the dashed arrows indicate the reference axis along which the clockwise (CW) or counterclockwise (CCW) Bloch states are defined and through which the merging of $\bar{s}$ occurs. The inset in (d) represents an expanded view of the region around the energy barrier.}
\label{fig:meps}
\end{figure*}
First, the MEP for the merging into a higher-order $2\bar{s}$ state is shown in Fig.~\ref{fig:meps}(a), where the image index is shown in the bottom left corner. The corresponding energy profile along the reaction coordinate is shown in Fig.~\ref{fig:meps}(c). This path resembles the mechanism identified in Fig.~\ref{fig:pairprocess}(b), which, under SOTs, subsequently results in the formation of a bound $\bar{s}\bar{s}$ pair and suppresses generation. The initial state (A) in the GNEB method is set as a pair of metastable, isolated $\bar{s}$ states, where both $\bar{s}$ have the same helicity. The antiskyrmions then undergo a rotation of helicity, during which the total energy increases, to reach a higher-energy configuration at image 6. The next image, image 7, corresponds to the barrier top, in the form of a saddle point, and precedes the merging. At the saddle point, the antiskyrmions come into contact from the side and join through their counterclockwise and clockwise rotating Bloch axes, respectively, with a helicity difference of about $\pi$ rad. The corresponding energy barrier is found to be $\Delta E = 1.089 J_1$, where $J_1 = 14.73$ meV is the exchange constant for the Heisenberg interaction between nearest neighbors and is employed here as a characteristic energy scale. Subsequent images correspond to the merging into the final metastable $2\bar{s}$ state via the antiskyrmions' Bloch axes, accompanied by a decrease in the total energy of the system. The total topological charge remains constant throughout this process.
Next, we describe the path corresponding to the merging of the $\bar{s}\bar{s}$ pair into a single $\bar{s}$ via a process that does not conserve the total topological charge. The MEP is shown in Fig.~\ref{fig:meps}(b), with the corresponding energy profile shown in Fig.~\ref{fig:meps}(d). This mechanism resembles the process presented in Fig.~\ref{fig:pairprocess}(c), through which an inelastic collision of two antiskyrmions results in the destruction of the seed and leads to a linear growth in the number of skyrmions [Fig.~\ref{fig:genrate}(c)]. Similar to the mechanism described above, the initial state is set as a pair of isolated, metastable $\bar{s}$ states, where both $\bar{s}$ have the same helicity. From there, the helicities of the antiskyrmions rotate as the energy increases, until the system reaches the barrier top at image 6. This state is very similar to the saddle point of the MEP in Fig.~\ref{fig:meps}(a), with, once more, a corresponding energy barrier of $\Delta E = 1.089 J_1$. However, the difference in the helicities seems to be, in this case, slightly less than $\pi$ rad. The following images correspond to the merging into a metastable single $\bar{s}$ state. This involves the destruction of one unit of negative topological charge, which occurs via the nucleation of a meron of charge $Q=+\frac{1}{2}$ at image 8. This is accompanied by a sharp decrease in the total energy of the system, as well as a drop in the total negative topological charge, from $-2$ to $-1$. The meron then annihilates with the extra antimeron of charge $Q=-\frac{1}{2}$, thus leaving a single $\bar{s}$ state of charge $Q=-1$ at image 9, accompanied by a further drop in the total energy.
The above results show that, in the generation regime, the scattering processes undergone by the $\bar{s}$ seed closely resemble the paths of minimum energy at zero SOT. Additionally, we find that the paths for the merging of the $\bar{s}\bar{s}$ pair into either a $2\bar{s}$ state or an $\bar{s}$ state traverse very similar saddle points, where only a small relative difference in the helicities appears to determine the fate of the final state. The associated energy barriers are practically identical and relatively low, of the order of $J_1$. This weak differentiation between the saddle points is in line with the fact that the boundaries of the phase diagram in Fig. 4 are not sharp and that small variations in the applied torques are sufficient to transition between the stasis and linear growth regimes.
\section{Discussion and concluding remarks}
With the frustrated exchange and in the absence of dipolar interactions, setting $D_{ij}$ to zero restores the chiral symmetry between skyrmions and antiskyrmions, where SOTs result in circular motion with opposite rotational sense for $s$ and $\bar{s}$~\cite{Leonov:2015iz, Lin:2016hh, Zhang:2017iv, Ritzmann:2018cc}. While the focus here has been on the consequences of generation from an antiskyrmion seed, the choice of an anisotropic form of the Dzyaloshinskii-Moriya interaction, i.e., one that energetically favors antiskyrmions over skyrmions~\cite{Nayak:2017hv, Hoffmann:2017kl, Camosi:2018eu, Raeliarijaona:2018eg, Jena:2020db}, would result in the opposite behavior whereby skyrmion seeds would lead to pair generation and proliferation of antiskyrmions over skyrmions~\cite{Ritzmann:2018cc}.
Naturally, dipolar interactions are present in real materials, and their role has not been considered in this present study. This is justified for the following reasons. First, the long-range nature of dipolar interactions becomes apparent only as the film thickness is increased, i.e., beyond several nanometers. The system considered here is one atomic layer thick, which represents the limit in which the dipolar interaction is well described by a local approximation which results in the renormalization of the perpendicular magnetic anisotropy constant. Second, dipolar interactions favor a Bloch-like state for skyrmions and modify the energy dependence of the helicity for antiskyrmions. However, these corrections would almost be negligible in comparison with the strength of the frustrated exchange and Dzyaloshinskii-Moriya interactions considered. Finally, the inclusion of dipolar interactions would not suppress the Walker-like transition of the antiskyrmion dynamics, which results in pair generation.
In summary, we have presented results from atomistic spin dynamics simulations of skyrmion-antiskyrmion generation processes that result from the SOT-driven dynamics of an initial antiskyrmion state. Three fundamental scattering processes are identified, namely, elastic collisions, double-antiskyrmion bound states, and antiskyrmion annihilation, which form the basis of more complex generation processes leading to stasis, linear growth, and proliferation of particles. We investigated how the strength of the spin-orbit torques, including the orientation of the spin polarization with respect to the lattice, and the DMI constant affect the generation processes. Overall, the asymmetry in the production of particles and antiparticles from a given seed is driven by the strength of the chiral symmetry breaking, here measured by $D_{ij}$, and the nonequilibrium torques leading to pair generation, here characterized by $\beta_\mathrm{DL}$. Last, we investigated the paths of minimum energy at zero SOT for the two fundamental scattering processes that respectively lead to the stasis and linear growth regimes. We found that these resemble the processes undergone by the seed under SOT, and that the two mechanisms involve extremely similar saddle points, which explains the lack of sharp boundaries between the two regimes.
\begin{acknowledgments}
This work was supported by the Agence Nationale de la Recherche under Contract No. ANR-17-CE24-0025 (TOPSKY), the Deutsche Forschungsgemeinschaft via TRR 227, and the University of Strasbourg Institute for Advanced Study (USIAS) via a fellowship, within the French national program ``Investment for the Future'' (IdEx-Unistra).
\end{acknowledgments}
\section{Introduction}
Quantum annealing (QA) has been studied as a way to solve combinational
optimization problems~\cite{kadowaki1998quantum,farhi2000quantum,farhi2001quantum}
where the goal is to minimize a cost function. Such a problem is mapped
into a finding of a ground state of Ising Hamiltonians that contain the information of the problem.
QA is designed to find an energy eigenstate of the target Hamiltonian by using adiabatic dynamics.
So, by using the QA, we can find the ground state of the Ising Hamiltonian for the combinational optimization problem.
D-Wave Systems, Inc. has realized a quantum device to implement the QA \cite{johnson2011quantum}.
Superconducting flux qubits \cite{orlando1999superconducting,mooij1999josephson,harris2010experimental}
have been used in the device
for the QA. Since superconducting qubits are artificial atoms,
there are many degrees of freedom to control the parameters
by changing the design and external fields, which is suitable for a programmable device.
QA with the D-Wave machines can be used not only for finding the ground state, but also for
quantum simulations \cite{harris2018phase,king2018observation}
and machine learning \cite{mott2017solving,amin2018quantum}.
Quantum chemistry is one of the important applications of
quantum information processing~\cite{levine2009quantum,serrano2005quantum,mcardle2020quantum}, and
it was recently shown that the QA can be also used for quantum chemistry
calculations~\cite{perdomo2012finding,aspuru2005simulated,lanyon2010towards,du2010nmr,peruzzo2014variational,
mazzola2017quantum,streif2019solving,babbush2014adiabatic}.
Important properties of molecules can be investigated by the second quantized Hamiltonian of the molecules.
Especially, the energy gap between the ground state and excited states is essential information for
calculating optical spectra and reaction rates
~\cite{serrano2005quantum}.
The second quantized Hamiltonian can be mapped into the Hamiltonian of
qubits~\cite{jordanwigner,bravyi2002fermionic,aspuru2005simulated,seeley2012bravyi,tranter2015b}.
Importantly, not only the ground state
but also the excited state of the Hamiltonian can be prepared by the QA \cite{chen2019demonstration,seki2020excited}. By measuring suitable observables on
such states prepared by the QA, we can estimate the eigenenergy of the Hamiltonian. In the conventional approaches,
we need to perform two separate experiments to estimate an energy gap between the ground state and the excited state.
In the first (second) experiment, we measure the eigenenergy of the ground (excited) state prepared by the QA. By subtracting the estimated eigenenergy of the ground state from that of the excited state, we can obtain the energy gap \cite{seki2020excited}.
Here, we propose a way to estimate an energy gap between the ground state and excited state in a more direct manner.
The key idea is to use the Ramsey type measurement where a superposition between the ground state and excited state
acquires a relative phase that depends on the energy gap \cite{ramsey1950molecular}. By performing the Fourier transform of the signal from the
Ramsey type experiments, we can estimate the energy gap. We numerically study the performance of our protocol to estimate
the energy gap between the ground state and first excited state. We show the robustness of our scheme against non-adiabatic
transitions between the ground state and first excited state.
\section{Estimation of the energy gap between the ground state and excited state based on the Ramsey type measurement
}
We use the following time-dependent Hamiltonian in our scheme
\begin{eqnarray}
H&=&A(t)H_{\rm{D}}+(1-A(t))H_{\rm{P}}\nonumber \\
A(t)&=&\left\{ \begin{array}{ll}
1 -\frac{t}{T}& (0\leq t \leq T) \\
0 & (T \leq t \leq T +\tau ) \\
\frac{t-(T+\tau )}{T} & (T+\tau \leq t \leq 2T+\tau )
\\
\end{array} \right.
\end{eqnarray}
where $A(t)$ denotes an external control parameter (as shown in the Fig. \ref{aatfigure}), $H_{\rm{D}}$ denotes the driving Hamiltonian that is typically chosen as the transverse magnetic field term,
and $H_{\rm{P}}$ denotes the target (or problem) Hamiltonian whose energy gap we want to know.
This means that, depending on the time period,
we have three types of the Hamiltonian as follows
\begin{eqnarray}
H_{\rm{QA}}&=&(1-\frac{t}{T})H_{\rm{D}}+\frac{t}{T}H_{\rm{P}}
\nonumber \\
H_{\rm{R}}&=&H_{\rm{P}}\nonumber \\
H_{\rm{RQA}}&=&\frac{t-(T+\tau )}{T}H_{\rm{D}}+(1-\frac{t-(T+\tau )}{T})H_{\rm{P}}\nonumber
\end{eqnarray}
In the first time period of $0\leq t \leq T$,
the Hamiltonian is $H_{\rm{QA}}$, and this
is the same as that is used in the standard QA.
In the next time period of $T \leq t \leq T +\tau$,
the Hamiltonian becomes $H_{\rm{R}}$, and
the dynamics induced by this Hamiltonian
corresponds to that of the Ramsey type evolution \cite{ramsey1950molecular} where the superposition
of the state acquires a relative phase depending on the energy gap.
In the last time period of $T+\tau \leq t \leq 2T+\tau$,
the Hamiltonian becomes $H_{\rm{RQA}}$, and this has a similar form to that
used in a reverse QA
\cite{perdomo2011study,ohkuwa2018reverse,yamashiro2019dynamics,arai2020mean}.
\begin{figure}[bhtp]
\centering
\includegraphics[width=16cm]{atfigure}
\caption{
An external control parameter $A(t)$ of our time-dependent Hamiltonian
$H(t)=A(t)H_{\rm{D}}+(1-A(t))H_{\rm{P}}$ where $H_{\rm{D}}$ denotes
the driving Hamiltonian
and $H_{\rm{P}}$ denotes the target (problem) Hamiltonian.
With a time period of $0\leq t \leq T$, we have the Hamiltonian $H_{\rm{QA}}$ that is used in the standard QA.
With the next time period of $T \leq t \leq T+\tau $, we have the Ramsey time Hamiltonian $H_{\rm{R}}$
where the quantum state acquires a relative phase induced from the energy gap.
In the final time period of $T+\tau \leq t \leq 2T+\tau $, we have the Hamiltonian
$H_{\rm{RQA}}$ which is used in a reverse QA. By using the dynamics induced by these Hamiltonians, we can estimate the energy gap of the target Hamiltonian.
}\label{aatfigure}
\end{figure}
We explain the details of our scheme.
Firstly, prepare an initial state of
$|\psi _0\rangle =\frac{1}{\sqrt{2}}(|E_0^{\rm{(D)}}\rangle +|E_1^{\rm{(D)}}\rangle)$
where $|E_0^{\rm{(D)}}\rangle$ ($|E_1^{\rm{(D)}}\rangle$)
denotes the ground (excited) state of the driving Hamiltonian.
Secondly, let this state evolve in an adiabatic way
by the Hamiltonian of $H_{\rm{QA}}$
and we obtain a state of
$\frac{1}{\sqrt{2}}(|E_0^{\rm{(P)}}\rangle +e^{-i\theta }|E_1^{\rm{(P)}}\rangle)$
where $|E_0^{\rm{(P)}}\rangle$ ($|E_1^{\rm{(P)}}\rangle$)
denotes the ground (excited) state of the target Hamiltonian and $\theta $
denotes a relative phase acquired during the dynamics. Thirdly, let the state evolve by the Hamiltonian
of $H_{\rm{R}}$
for a time $T\leq t \leq T+\tau $, and we obtain
$\frac{1}{\sqrt{2}}(|E_0^{\rm{(P)}}\rangle +e^{-i\Delta E \tau
-i\theta }|E_1^{\rm{(P)}}\rangle)$ where $\Delta E= E_1^{(\rm{P})}-E_0^{(\rm{P})}$ denotes
an energy gap and $E_0^{(\rm{P})}$ ($E_1^{(\rm{P})}$) denotes the eigenenergy of the ground
(first excited) state of the target Hamiltonian.
Fourthly, let this state evolve in an adiabatic way
by the Hamiltonian of $H_{\rm{RQA}}$ from $t=T+\tau $ to $t=2T+\tau $,
and we obtain a state of
$\frac{1}{\sqrt{2}}(|E_0^{\rm{(D)}}\rangle +e^{-i\Delta E \tau
-i\theta '}|E_1^{\rm{(D)}}\rangle)$
where $\theta '$ denotes
a relative phase acquired during the dynamics. Fifthly, we readout the state by using a projection operator of
$|\psi _0\rangle \langle \psi _0|$, and the projection probability is
$P_{\tau }=\frac{1}{2}+\frac{1}{2} \cos (\Delta E \tau +\theta ')$, which is
an oscillating signal with a frequency of the energy gap.
Finally, we repeat the above five steps by sweeping $\tau $, and obtain several values of $P_{\tau }$.
We can perform the Fourier transform of $P_{\tau }$ such as
\begin{eqnarray}
f(\omega )= \sum_{n=1}^{N}(P_{\tau _n}-\frac{1}{2})e^{-i\omega \tau _n}
\end{eqnarray}
where $\tau _n= t_{\rm{min}}+\frac{n-1}{(N-1) }(t_{\rm{max}} - t_{\rm{min}})$
denotes the $n$-th sampling time, $t_{\rm{min}}$ ($t_{\rm{max}}$)
denotes a minimum (maximum) time to be considered,
and $N$ denotes the number of the steps. The peak in $f(\omega )$ shows the energy gap $\Delta E$.
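For completeness, the post-processing of the measured probabilities can be sketched as follows (Python); the variable names and the frequency grid are illustrative.
\begin{verbatim}
# Fourier transform of the Ramsey signal and peak extraction.
import numpy as np

def fourier_signal(p_tau, tau, omegas):
    """|f(omega)| for each trial angular frequency in `omegas`."""
    signal = p_tau - 0.5
    return np.abs(np.array([(signal * np.exp(-1j * w * tau)).sum()
                            for w in omegas]))

# Example: tau sweeps [t_min, t_max] with N points, as in the text;
# the highest peak of |f(omega)| gives the estimate of the energy gap.
# tau = np.linspace(t_min, t_max, N)
# gap = omegas[np.argmax(fourier_signal(p_tau, tau, omegas))]
\end{verbatim}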
\begin{figure}[bhtp]
\centering
\includegraphics[width=16cm]{combinedfigure-jjap}
\caption{Fourier function against a frequency. Here, we set parameters as $\lambda _1/2\pi =1$ GHz, $g/2\pi =0.5$ GHz, $\omega _1/2\pi = 0.2$ GHz, $\omega _2/\omega _1=1.2$, $g'/g=2.1$, $\lambda _2/\lambda _1 =10.7$, $L=2$,
$N=10000$, $t_{\rm{min}}=0$, $t_{\rm{max}}=100$ ns.
(a) We set $T=150 \ (75)$ ns for the blue (red) plot,
where we have a peak around $1.067$ GHz, which corresponds to
the energy difference between the ground state and first excited state of the target Hamiltonian. We have another
peak around $\omega =0$, and this comes from non-adiabatic transition during the QA.
(b) We set $T=37.5$($12.5$) ns for the blue (red) plot. We have an additional peak around $1.698$ GHz ($2.7646$ GHz),
which corresponds to
the energy difference between the first excited state (ground state) and second excited state of the target Hamiltonian.
}\label{dcodmr}
\end{figure}
To check the efficiency of our scheme, we performed numerical simulations to estimate the energy gap between the ground state and first excited state,
based on typical parameters for superconducting qubits. We choose the following Hamiltonians
\begin{eqnarray}
H_{\rm{D}}&=&\sum_{j=1}^{L}\frac{\lambda _j}{2}\hat{\sigma }_x^{(j)}\nonumber \\
H_{\rm{P}} &=&\sum_{j=1}^{L} \frac{\omega _j}{2}\hat{\sigma }_z^{(j)}
+\sum_{j=1}^{L-1}g \hat{\sigma }_z^{(j)}\hat{\sigma }_z^{(j+1)}
+g'(\hat{\sigma }_+^{(j)} \hat{\sigma }_-^{(j+1)} + \hat{\sigma }_-^{(j)} \hat{\sigma }_+^{(j+1)} )
\end{eqnarray}
where $\lambda _j$ denotes the amplitude of the transverse magnetic fields of the $j$-th qubit,
$\omega _j$ denotes the frequency of the $j$-th qubit, and $g$($g'$) denotes the Ising (flip-flop)
type coupling strength between qubits.
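As a cross-check of the simulations, the two-qubit Hamiltonians can be constructed and diagonalized directly, as in the following sketch (Python); the numerical parameter values shown are placeholders, and the factors of $2\pi$ in the unit conventions are left to the reader following the text.
\begin{verbatim}
# Build H_D and H_P for two qubits and compute the exact gap of H_P.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                 # sigma_-
I2 = np.eye(2, dtype=complex)

def build_hamiltonians(lam, omega, g, gp):
    H_D = 0.5 * lam[0] * np.kron(sx, I2) + 0.5 * lam[1] * np.kron(I2, sx)
    H_P = (0.5 * omega[0] * np.kron(sz, I2) + 0.5 * omega[1] * np.kron(I2, sz)
           + g * np.kron(sz, sz)
           + gp * (np.kron(sp, sm) + np.kron(sm, sp)))
    return H_D, H_P

# Placeholder parameters (to be set according to the text):
H_D, H_P = build_hamiltonians(lam=[1.0, 10.7], omega=[0.2, 0.24], g=0.5, gp=1.05)
energies = np.linalg.eigvalsh(H_P)
print("E_1 - E_0 =", energies[1] - energies[0])
\end{verbatim}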
We consider the case of two qubits, and
the initial state is $|1\rangle |-\rangle $ where $|1\rangle $ ($|-\rangle $)
is an eigenstate of $\hat{\sigma }_z$($\hat{\sigma }_x$)
with an eigenvalue of +1 (-1). In the Fig. \ref{dcodmr} (a), we plot the Fourier function $|f(\omega )|$ against $\omega $ for this
case. When we set $T=150$ (ns) or $75$ (ns),
we have a peak around $\omega = 1.067$ GHz, which corresponds to the energy gap $\Delta E$
of the problem Hamiltonian in our parameter. So this result shows that we can estimate the energy gap by using our scheme.
Also, we have a smaller peak around $\omega =0$ in Fig.~\ref{dcodmr}(a),
and this comes from non-adiabatic transitions between the ground state and first excited state.
If the dynamics is perfectly adiabatic, the population of both the ground state and first excited state should be
$\frac{1}{2}$ at
$t=T$.
However, for the parameters with $T=150$ ($T=75$) ns,
the population of the ground state and excited state is around 0.6 (0.7) and 0.4 (0.3) at $t=T$, respectively.
In this case, the probability at the readout step should be modified as
$P'_{\tau }=a+b \cos (\Delta E \tau +\theta ')$ where the parameters
$a$ and $b$ deviate from $\frac{1}{2}$ due to the non-adiabatic transitions. This induces the peak around
$\omega =0$ in the Fourier function $f(\omega )$. As we decrease $T$,
the dynamics becomes less adiabatic, and the peak at $\omega =0$ becomes higher
while the target peak corresponding to the energy gap $\Delta E $ becomes smaller, as shown in Fig.~\ref{dcodmr}.
Importantly, we can still identify the peak corresponding to the energy gap for the following reasons.
First, there is a large separation between the peaks.
Second, the non-adiabatic transitions between
the ground state and first excited state
do not affect the peak
position. So our scheme is robust against non-adiabatic transitions between
the ground state and first excited state.
This is in stark contrast with a previous scheme that is fragile against such non-adiabatic transitions
\cite{seki2020excited}.
Moreover, we have two more peaks in Fig.~\ref{dcodmr}(b), where we choose
$T=37.5$ ($12.5$) ns for the red (blue) plot, which is shorter than in Fig.~\ref{dcodmr}(a).
The peaks are around $1.698$ GHz and $2.7646$ GHz, respectively.
The former (latter)
peak corresponds to
the energy difference between the first excited state (ground state) and second excited state.
We can interpret these peaks as follows.
Due to the non-adiabatic dynamics, not only the first excited state but also the second excited state is
induced in this case. The state after the evolution with $H_{\rm{R}}$ at $t=T+\tau $
is approximately given as
a superposition between the ground state, the first excited state,
and the second excited state such as
$c_0e^{-i E_0^{\rm{(P)}} \tau
-i\theta _0} |E_0^{\rm{(P)}}\rangle +c_1e^{-i E_1^{\rm{(P)}} \tau
-i\theta _1}
|E_1^{\rm{(P)}}\rangle + c_2e^{-i E_2^{\rm{(P)}} \tau
-i\theta _2}
|E_2^{\rm{(P)}}\rangle$ where $c_{i}$ $(i=0,1,2)$ denote real values and $\theta _i$ ($i=0,1,2$)
denote
the relative phases induced by the QA.
So the Fourier transform of the probability distribution obtained from the measurements provides us with three frequencies
such as
$(E_0^{\rm{(P)}}-E_1^{\rm{(P)}})$, $(E_1^{\rm{(P)}}-E_2^{\rm{(P)}})$, and $(E_2^{\rm{(P)}}-E_0^{\rm{(P)}})$.
In the actual experiment, we do not know which peak corresponds to the energy gap between the ground state and first excited state, because there are other relevant peaks.
However, it is worth mentioning that we can still obtain some information of the energy spectrum (or energy eigenvalues of the Hamiltonian) from the experimental data, even under the effect of the non-adiabatic
transitions between the ground state and other excited states.
Again, this shows the robustness of our scheme against the non-adiabatic transitions compared with the previous schemes \cite{seki2020excited}.
\section{Conclusion}
In conclusion, we propose a scheme that
allows the direct estimation of an energy gap
of the target Hamiltonian by using quantum annealing (QA). While a ground state of a driving Hamiltonian
is prepared as an initial state for the conventional QA, we prepare a superposition between a ground state
and the first excited state of the driving Hamiltonian as the initial state. Also, the key idea in our scheme
is to use a Ramsey-type measurement after the quantum annealing process, where information about the energy gap
is encoded in the relative phase of the superposition. The readout of the relative phase by sweeping the
Ramsey measurement time duration provides a direct estimation of the energy gap of the target Hamiltonian.
We show that, unlike the previous scheme, our scheme is robust against non-adiabatic transitions.
Our
scheme provides an alternative way to estimate the energy gap of the target Hamiltonian for applications in quantum chemistry.
While this manuscript was being written,
an independent article appeared that also proposes to use a Ramsey measurement to estimate an energy
gap by using a quantum device \cite{2020quantumenergygap}.
This paper is partly based on results obtained from a
project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan.
This work was also supported
by Leading Initiative for Excellent Young Researchers MEXT Japan, JST PRESTO (Grant No. JPMJPR1919) Japan,
KAKENHI Scientific Research C (Grant No. 18K03465), and JST-PRESTO (JPMJPR1914).
\section{Introduction}
Basketball is a global and growing sport with interest from fans of all ages. This growth has coincided with a rise in data availability and innovative methodology that has inspired fans to study basketball through a statistical lens. Many of the approaches in basketball analytics can be traced to pioneering work in baseball~\citep{schwartz_2013}, beginning with Bill James' publications of \emph{The Bill James Baseball Abstract} and the development of the field of ``sabermetrics''~\citep{james1984the-bill, james1987bill, james2010new}. James' sabermetric approach captivated the larger sports community when the 2002 Oakland Athletics used analytics to win a league-leading 102 regular season games despite a prohibitively small budget. Chronicled in Michael Lewis' \textit{Moneyball}, this story demonstrated the transformative value of analytics in sports~\citep{lewis2004moneyball}.
In basketball, Dean Oliver and John Hollinger were early innovators who argued for evaluating players on a per-minute basis rather than a per-game basis and developed measures of overall player value, like Hollinger's Player Efficiency Rating (PER)~\citep{oliver2004basketball, hollingerper, hollinger2004pro}. The field of basketball analytics has expanded tremendously in recent years, even extending into popular culture through books and articles by data-journalists like Nate Silver and Kirk Goldsberry, to name a few~\citep{silver2012signal, goldsberry2019sprawlball}. In academia, interest in basketball analytics transcends the game itself, due to its relevance in fields such as psychology \citep{gilovich1985hot, vaci2019large, price2010racial}, finance and gambling \citep{brown1993fundamentals, gandar1998informed}, economics (see, for example, the Journal of Sports Economics), and sports medicine and health \citep{drakos2010injury, difiori2018nba}.
Sports analytics also has immense value for statistical and mathematical pedagogy. For example, \citet{drazan2017sports} discuss how basketball can broaden the appeal of math and statistics across youth. At more advanced levels, there is also a long history of motivating statistical methods using examples from sports, dating back to techniques like shrinkage estimation \citep[e.g.][]{efron1975data} up to the emergence of modern sub-fields like deep imitation learning for multivariate spatio-temporal trajectories \citep{le2017data}. Adjusted plus-minus techniques (Section \ref{reg-section}) can be used to motivate important ideas like regression adjustment, multicollinearity, and regularization \citep{sill2010improved}.
\subsection{This review}
Our review builds on the early work of \citet{kubatko2007starting} in ``A Starting Point for Basketball Analytics,'' which aptly establishes the foundation for basketball analytics. In this review, we focus on modern statistical and machine learning methods for basketball analytics and highlight the many developments in the field since their publication nearly 15 years ago. Although we reference a broad array of techniques, methods, and advancements in basketball analytics, we focus primarily on understanding team and player performance in gameplay situations. We exclude important topics related to drafting players~\citep[e.g.][]{mccann2003illegal,groothuis2007early,berri2011college,arel2012NBA}, roster construction, win probability models, tournament prediction~\citep[e.g.][]{brown2012insights,gray2012comparing,lopez2015building, yuan2015mixture, ruiz2015generative, dutta2017identifying, neudorfer2018predicting}, and issues involving player health and fitness~\citep[e.g.][]{drakos2010injury,mccarthy2013injury}. We also note that much of the literature pertains to data from the National Basketball Association (NBA). Nevertheless, most of the methods that we discuss are relevant across all basketball leagues; where appropriate, we make note of analyses using non-NBA data.
We assume some basic knowledge of the game of basketball, but for newcomers, \url{NBA.com} provides a useful glossary of common NBA terms~\citep{nba_glossary}. We begin in Section~\ref{datatools} by summarizing the most prevalent types of data available in basketball analytics. The online supplementary material highlights various data sources and software packages. In Section~\ref{teamsection} we discuss methods for modeling team performance and strategy. Section~\ref{playersection} follows with a description of models and methods for understanding player ability. We conclude the paper with a brief discussion on our view on the future of basketball analytics.
\subsection{Data and tools}
\label{datatools}
\noindent \textbf{Box score data:} The most available datatype is box score data. Box scores, which were introduced by Henry Chadwick in the 1900s~\citep{pesca_2009}, summarize games across many sports. In basketball, the box score includes summaries of discrete in-game events that are largely discernible by eye: shots attempted and made, points, turnovers, personal fouls, assists, rebounds, blocked shots, steals, and time spent on the court. Box scores are referenced often in post-game recaps.
\url{Basketball-reference.com}, the professional basketball subsidiary of \url{sports-reference.com}, contains preliminary box score information on the NBA and its precursors, the ABA, BAA, and NBL, dating back to the 1946-1947 season; rebounds first appear for every player in the 1959-60 NBA season \citep{nbaref}. There are also options for variants on traditional box score data, including statistics on a per 100-possession, per game, or per 36-minute basis, as well as an option for advanced box score statistics. Basketball-reference additionally provides data on the WNBA and numerous international leagues. Data on further aspects of the NBA are also available, including information on the NBA G League, NBA executives, referees, salaries, contracts, and payrolls as well as numerous international leagues. One can find similar college basketball information on the \url{sports-reference.com/cbb/} site, the college basketball subsidiary of \url{sports-reference.com}.
For NBA data in particular, \url{NBA.com} contains a breadth of data beginning with the 1996-97 season~\citep{nbastats}. This includes a wide range of summary statistics, including those based on tracking information, a defensive dashboard, ``hustle''-based statistics, and other options. \url{NBA.com} also provides a variety of tools for comparing various lineups, examining on-off court statistics, and measuring individual and team defense segmented by shot type, location, etc. The tools provided include the ability to plot shot charts for any player on demand.
\hfill
\noindent \textbf{Tracking data}: Around 2010, the emergence of ``tracking data,'' which consists of spatial and temporally referenced player and game data, began to transform basketball analytics. Tracking data in basketball fall into three categories: player tracking, ball tracking, and data from wearable devices. Most of the basketball literature that pertains to tracking data has made use of optical tracking data from SportVU through Stats, LLC and Second Spectrum, the current data provider for the NBA. Optical data are derived from raw video footage from multiple cameras in basketball arenas, and typically include timestamped $(x, y)$ locations for all 10 players on the court as well as $(x, y, z)$ locations for the basketball at over 20 frames per second.\footnote{A sample of SportVU tracking data can currently be found on Github \citep{github-tracking}.} Many notable papers from the last decade use tracking data to solve a range of problems: evaluating defense \citep{franks2015characterizing}, constructing a ``dictionary'' of play types \citep{miller2017possession}, evaluating expected value of a possession \citep{cervonepointwise}, and constructing deep generative models of spatio-temporal trajectory data \citep{yu2010hidden, yue2014learning, le2017data}. See \citet{bornn2017studying} for a more in-depth introduction to methods for player tracking data.
Recently, high resolution technology has enabled $(x,y,z)$ tracking of the basketball to within one centimeter of accuracy. Researchers have used data from NOAH~\citep{noah} and RSPCT~\citep{rspct}, the two largest providers of basketball tracking data, to study several aspects of shooting performance~\citep{marty2018high, marty2017data, bornn2019using, shah2016applying, harmon2016predicting}, see Section \ref{sec:shot_efficiency}. Finally, we also note that many basketball teams and organizations are beginning to collect biometric data on their players via wearable technology. These data are generally unavailable to the public, but can help improve understanding of player fitness and motion~\citep{smith_2018}. Because there are few publications on wearable data in basketball to date, we do not discuss them further.
\hfill
\noindent \textbf{Data sources and tools:} For researchers interested in basketball, we have included two tables in the supplementary material. Table 1 contains a list of R and Python packages developed for scraping basketball data, and Table 2 enumerates a list of relevant basketball data repositories.
\section{Team performance and strategy}
\label{teamsection}
Sportswriters often discuss changes in team rebounding rate or assist rate after personnel or strategy changes, but these discussions are rarely accompanied by quantitative analyses of how these changes actually affect the team's likelihood of winning. Several researchers have attempted to address these questions by investigating which box score statistics are most predictive of team success, typically with regression models \citep{hofler2006efficiency, melnick2001relationship, malarranha2013dynamic, sampaio2010effects}. Unfortunately, the practical implications of such regression-based analyses remain unclear, due to two related difficulties in interpreting predictors for team success: 1) multicollinearity leads to high variance estimators of regression coefficients~\citep{ziv2010predicting}, and 2) confounding and selection bias make it difficult to draw any causal conclusions. In particular, predictors that are correlated with success may not be causal when there are unobserved contextual factors or strategic effects that explain the association (see Figure \ref{fig:simpsons} for an interesting example). More recent approaches leverage spatio-temporal data to model team play within individual possessions. These approaches, which we summarize below, can lead to a better understanding of how teams achieve success.
\label{sec:team}
\subsection{Network models}
One common approach to characterizing team play involves modeling the game as a network and/or modeling transition probabilities between discrete game states. For example, \citet{fewell2012basketball} define players as nodes and ball movement as edges and compute network statistics like degree and flow centrality across positions and teams. They differentiate teams based on the propensity of the offense to either move the ball to their primary shooters or distribute the ball unpredictably.~\citet{fewell2012basketball} suggest conducting these analyses over multiple seasons to determine if a team's ball distribution changes when faced with new defenses.~\citet{xin2017continuous} use a similar framework in which players are nodes and passes are transactions that occur on edges. They use more granular data than \citet{fewell2012basketball} and develop an inhomogeneous continuous-time Markov chain to accurately characterize players' contributions to team play.
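As a rough illustration of this network view, the sketch below builds a directed passing graph from an invented pass-count table and computes weighted degree and betweenness centrality with the \texttt{networkx} package; the player labels and counts are hypothetical, and betweenness is used only as a crude stand-in for the flow centrality computed by \citet{fewell2012basketball}.
\begin{verbatim}
import networkx as nx

# Hypothetical pass counts for a single lineup (invented for illustration).
passes = {
    ("PG", "SG"): 42, ("PG", "SF"): 30, ("PG", "PF"): 18, ("PG", "C"): 12,
    ("SG", "PG"): 25, ("SG", "SF"): 14, ("SF", "PF"): 10, ("PF", "C"): 16,
    ("C", "PG"): 8,
}

G = nx.DiGraph()
for (passer, receiver), n in passes.items():
    G.add_edge(passer, receiver, weight=n)

# Weighted degree: how involved each player is in ball movement.
involvement = dict(G.degree(weight="weight"))

# Betweenness centrality (pass counts converted to "distances") as a rough
# proxy for how often a player lies on the paths the ball tends to take.
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]
betweenness = nx.betweenness_centrality(G, weight="distance")

for p in sorted(G.nodes()):
    print(p, "involvement:", involvement[p],
          "betweenness:", round(betweenness[p], 3))
\end{verbatim}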
\citet{skinner2015method} motivate their model of basketball gameplay with a traffic network analogy, where possessions start at Point A, the in-bounds, and work their way to Point B, the basket. With a focus on understanding the efficiency of each pathway, Skinner proposes that taking the highest percentage shot in each possession may not lead to the most efficient possible game. He also proposes a mathematical justification of the ``Ewing Theory'' that states a team inexplicably plays better when their star player is injured or leaves the team~\citep{simmons}, by comparing it to a famous traffic congestion paradox~\citep{skinner2010price}. See \citet{skinner2015optimal} for a more thorough discussion of optimal strategy in basketball.
\subsection{Spatial perspectives}
Many studies of team play also focus on the importance of spacing and spatial context.~\citet{metulini2018modelling} try to identify spatial patterns that improve team performance on both the offensive and defensive ends of the court. The authors use a two-state Hidden Markov Model to model changes in the surface area of the convex hull formed by the five players on the court. The model describes how changes in the surface area are tied to team performance, on-court lineups, and strategy.~\citet{cervone2016NBA} explore a related problem of assessing the value of different court-regions by modeling ball movement over the course of possessions.
Their court-valuation framework can be used to identify teams that effectively suppress their opponents' ability to control high value regions.
Spacing also plays a crucial role in generating high-value shots. ~\citet{lucey2014get} examined almost 20,000 3-point shot attempts from the 2012-2013 NBA season and found that defensive factors, including a ``role swap'' where players change roles, helped generate open 3-point looks.
In related work, \citet{d2015move} stress the importance of ball movement in creating open shots in the NBA. They show that ball movement adds unpredictability into offenses, which can create better offensive outcomes. The work of D'Amour and Lucey could be reconciled by recognizing that unpredictable offenses are likely to lead to ``role swaps'', but this would require further research.~\citet{sandholtz2019measuring} also consider the spatial aspect of shot selection by quantifying a team's ``spatial allocative efficiency,'' a measure of how well teams determine shot selection. They use a Bayesian hierarchical model to estimate player FG\% at every location in the half court and compare the estimated FG\% with empirical field goal attempt rates. In particular, the authors identify a proposed optimum shot distribution for a given lineup and compare the true point total with the proposed optimum point total. Their metric, termed Lineup Points Lost (LPL), identifies which lineups and players have the most efficient shot allocation.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figs/playbook}
\caption{Unsupervised learning for play discovery \citep{miller2017possession}. A) Individual player actions are clustered into a set of discrete actions. Cluster centers are modeled using Bezier curves. B) Each possession is reduced to a set of co-occurring actions. C) By analogy, a possession can be thought of as a ``document'' consisting of ``words.'' ``Words'' correspond to all pairs of co-occurring actions. A ``document'' is the possession, modeled using a bag-of-words model. D) Possessions are clustered using Latent Dirichlet Allocation (LDA). After clustering, each possession can be represented as a mixture of strategies or play types (e.g. a ``weave'' or ``hammer'' play).}
\label{fig:playbook}
\end{figure}
\subsection{Play evaluation and detection}
Finally, \citet{lamas2015modeling} examine the interplay between offensive actions, or space creation dynamics (SCDs), and defensive actions, or space protection dynamics (SPDs). In their video analysis of six Barcelona F.C. matches from Liga ACB, they find that setting a pick was the most frequent SCD used but it did not result in the highest probability of an open shot, since picks are most often used to initiate an offense, resulting in a new SCD. Instead, the SCD that led to the highest proportion of shots was off-ball player movement. They also found that the employed SPDs affected the success rate of the SCD, demonstrating that offense-defense interactions need to be considered when evaluating outcomes.
Lamas' analysis is limited by the need to watch games and manually label plays. Miller and Bornn address this common limitation by proposing a method for automatically clustering possessions using player trajectories computed from optical tracking data~\citep{miller2017possession}. First, they segment individual player trajectories around periods of little movement and use a probabilistic functional clustering algorithm, with cluster centers modeled as Bezier curves, to group individual segments into one of over 200 discrete actions. These actions serve as inputs to a probabilistic clustering model at the possession level. For the possession-level clustering, they propose Latent Dirichlet Allocation (LDA), a common method in the topic modeling literature~\citep{blei2003latent}. LDA is traditionally used to represent a document as a mixture of topics, but in this application, each possession (``document'') can be represented as a mixture of strategies/plays (``topics''). Individual strategies consist of a set of co-occurring individual actions (``words''). The approach is summarized in Figure \ref{fig:playbook}. This approach for unsupervised learning from possession-level tracking data can be used to characterize plays or motifs which are commonly used by teams. As they note, this approach could be used to ``steal the opponent's playbook'' or automatically annotate and evaluate the efficiency of different team strategies. Deep learning models \citep[e.g.][]{le2017data, shah2016applying} and variational autoencoders could also be effective for clustering plays using spatio-temporal tracking data.
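As a schematic illustration of the possession-as-document analogy, each possession below is reduced to a bag of co-occurring action labels (the labels are invented placeholders for the trajectory-derived action clusters), and scikit-learn's \texttt{LatentDirichletAllocation} recovers a small set of play types together with each possession's mixture over them.
\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each possession is a "document" whose "words" are co-occurring actions.
# Action labels are invented placeholders for trajectory-derived clusters.
possessions = [
    "pnr_right wing_three corner_spot corner_spot",
    "pnr_right pop_top wing_three",
    "post_left cut_baseline corner_spot",
    "post_left cut_baseline dump_off",
    "weave_top handoff_right wing_three",
    "weave_top handoff_right pnr_right",
]

vec = CountVectorizer()
X = vec.fit_transform(possessions)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
mixtures = lda.fit_transform(X)  # rows: possessions; columns: play-type weights

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:3]
    print(f"play type {k}:", ", ".join(vocab[i] for i in top))
print("possession 0 mixture:", mixtures[0].round(2))
\end{verbatim}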
It may also be informative to apply some of these techniques to quantify differences in strategies and styles around the world. For example, although the US and Europe are often described as exhibiting different styles~\citep{hughes_2017}, this has not yet been studied statistically. Similarly, though some lessons learned from NBA studies may apply to The EuroLeague, the aforementioned conclusions about team strategy and the importance of spacing may vary across leagues.
\section{Player performance}
\label{playersection}
In this section, we focus on methodologies aimed at characterizing and quantifying different aspects of individual performance. These include metrics which reflect both the overall added value of a player and specific skills like shot selection, shot making, and defensive ability.
When analyzing player performance, one must recognize that variability in metrics for player ability is driven by a combination of factors. This includes sampling variability, effects of player development, injury, aging, and changes in strategy (see Figure \ref{fig:player_variance}). Although measurement error is usually not a big concern in basketball analytics, scorekeepers and referees can introduce bias \citep{van2017adjusting, price2010racial}. We also emphasize that basketball is a team sport, and thus metrics for individual performance are impacted by the abilities of their teammates. Since observed metrics are influenced by many factors, when devising a method targeted at a specific quantity, the first step is to clearly distinguish the relevant sources of variability from the irrelevant nuisance variability.
To characterize the effect of these sources of variability on existing basketball metrics, \citet{franks2016meta} proposed a set of three ``meta-metrics'': 1) \emph{discrimination}, which quantifies the extent to which a metric actually reflects true differences in player skill rather than chance variation; 2) \emph{stability}, which characterizes how a player-metric evolves over time due to development and contextual changes; and 3) \emph{independence}, which describes redundancies in the information provided across multiple related metrics. Arguably, the most useful measures of player performance are metrics that are discriminative and reflect robust measurement of the same (possibly latent) attributes over time.
One of the most important tools for minimizing nuisance variability in characterizing player performance is shrinkage estimation via hierarchical modeling. In their seminal paper, \citet{efron1975data} provide a theoretical justification for hierarchical modeling as an approach for improving estimation in low sample size settings, and demonstrate the utility of shrinkage estimation for estimating batting averages in baseball. Similarly, in basketball, hierarchical modeling is used to leverage commonalities across players by imposing a shared prior on parameters associated with individual performance. We repeatedly return to these ideas about sources of variability and the importance of hierarchical modeling below.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figs/sources_of_variance2}
\caption{Diagram of the sources of variance in basketball season metrics. Metrics reflect multiple latent player attributes but are also influenced by team ability, strategy, and chance variation. Depending on the question, we may be interested primarily in differences between players, differences within a player across seasons, and/or the dependence between metrics within a player/season. Player 2 in 2018-2019 has missing values (e.g. due to injury) which emphasizes the technical challenge associated with irregular observations and/or varying sample sizes.}
\label{fig:player_variance}
\end{figure}
\subsection{General skill}
\label{sec:general_skill}
One of the most common questions across all sports is ``who is the best player?'' This question takes many forms, ranging from who is the ``most valuable'' in MVP discussions, to who contributes the most to helping his or her team win, to who puts up the most impressive numbers. Some of the most popular metrics for quantifying player-value are constructed using only box score data. These include Hollinger's PER \citep{kubatko2007starting}, Wins Above Replacement Player (WARP)~\citep{pelton}, Berri's quantification of a player's win production~\citep{berri1999most}, Box Plus-Minus (BPM), and Value Over Replacement Player (VORP)~\citep{myers}. These metrics are particularly useful for evaluating historical player value for players who pre-dated play-by-play and tracking data. In this review, we focus our discussion on more modern approaches like the regression-based models for play-by-play data and metrics based on tracking data.
\subsubsection{Regression-based approaches}
\label{reg-section}
One of the first and simplest play-by-play metrics aimed at quantifying player value is known as ``plus-minus''. A player's plus-minus is computed by adding all of the points scored by the player's team and subtracting all the points scored against the player's team while that player was in the game. However, plus-minus is particularly sensitive to teammate contributions, since a less-skilled player may commonly share the floor with a more-skilled teammate, thus benefiting from the better teammate's effect on the game. Several regression approaches have been proposed to account for this problem. \citet{rosenbaum} was one of the first to propose a regression-based approach for quantifying overall player value which he terms adjusted plus-minus, or APM~\citep{rosenbaum}. In the APM model, Rosenbaum posits that
\begin{equation}
\label{eqn:pm}
D_i = \beta_0 + \sum_{p=1}^P\beta_p x_{ip} + \epsilon_i
\end{equation}
\noindent where $D_i$ is 100 times the difference in points between the home and away teams in stint $i$; $x_{ip} \in \lbrace 1, -1, 0 \rbrace $ indicates whether player $p$ is at home, away, or not playing, respectively; and $\epsilon$ is the residual. Each stint is a stretch of time without substitutions. Rosenbaum also develops statistical plus-minus and overall plus-minus which reduce some of the noise in pure adjusted plus-minus~\citep{rosenbaum}. However, the major challenge with APM and related methods is multicollinearity: when groups of players are typically on the court at the same time, we do not have enough data to accurately distinguish their individual contributions using plus-minus data alone. As a consequence, inferred regression coefficients, $\hat \beta_p$, typically have very large variance and are not reliably informative about player value.
APM can be improved by adding a penalty via ridge regression~\citep{sill2010improved}. The penalization framework, known as regularized APM, or RAPM, reduces the variance of resulting estimates by biasing the coefficients toward zero~\citep{jacobs_2017}. In RAPM, $\hat \beta$ is the vector which minimizes the following expression
\begin{equation}
\mathbf{\hat \beta} = \underset{\beta}{\argmin }(\mathbf{D} - \mathbf{X} \beta)^T (\mathbf{D} - \mathbf{X}\beta) + \lambda \beta^T
\beta
\end{equation}
\noindent where $\mathbf{D}$ is the vector of point differentials, $\mathbf{X}$ is the design matrix whose rows correspond to possessions, and $\beta$ is the vector of skill-coefficients for all players. $\lambda \beta^T \beta$ represents a penalty on the magnitude of the coefficients, with $\lambda$ controlling the strength of the penalty. The penalty ensures the existence of a unique solution and reduces the variance of the inferred coefficients. Under the ridge regression framework, $\hat \beta = (X^T X + \lambda I)^{-1}X^T D$ with $\lambda$ typically chosen via cross-validation. An alternative formulation uses the lasso penalty, $\lambda \sum_p |\beta_p|$, instead of the ridge penalty~\citep{omidiran2011pm}, which encourages many players to have an adjusted plus-minus of exactly zero.
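Because the ridge estimator has the closed form above, a RAPM-style fit reduces to a few lines of linear algebra. The sketch below uses randomly generated stints rather than real play-by-play data, omits the intercept, and fixes $\lambda$ by hand where cross-validation would normally be used.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_players, n_stints = 50, 2000

# Signed indicator design matrix: +1 home, -1 away, 0 not on the court.
X = np.zeros((n_stints, n_players))
true_skill = rng.normal(0, 2, n_players)
for i in range(n_stints):
    home = rng.choice(n_players, 5, replace=False)
    away = rng.choice(np.setdiff1d(np.arange(n_players), home), 5, replace=False)
    X[i, home], X[i, away] = 1, -1

# Simulated point differential per 100 possessions for each stint.
D = X @ true_skill + rng.normal(0, 12, n_stints)

lam = 100.0  # ridge penalty; in practice chosen by cross-validation
beta_apm = np.linalg.lstsq(X, D, rcond=None)[0]                          # APM
beta_rapm = np.linalg.solve(X.T @ X + lam * np.eye(n_players), X.T @ D)  # RAPM

print(f"APM  vs truth correlation: {np.corrcoef(beta_apm, true_skill)[0, 1]:.2f}")
print(f"RAPM vs truth correlation: {np.corrcoef(beta_rapm, true_skill)[0, 1]:.2f}")
\end{verbatim}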
Regularization penalties can equivalently be viewed from the Bayesian perspective, where ridge regression estimates are equivalent to the posterior mode when assuming mean-zero Gaussian prior distributions on $\beta_p$ and lasso estimates are equivalent to the posterior mode when assuming mean-zero Laplace prior distributions. Although adding shrinkage priors ensures identifiability and reduces the variance of resulting estimates, regularization is not a panacea: the inferred value of players who often share the court is sensitive to the precise choice of regularization (or prior) used. As such, careful consideration should be placed on choosing appropriate priors, beyond common defaults like the mean-zero Gaussian or Laplace prior. More sophisticated informative priors could be used; for example, a prior with right skewness to reflect beliefs about the distribution of player value in the NBA, or player- and position-specific priors which incorporate expert knowledge. Since coaches give more minutes to players that are perceived to provide the most value, a prior on $\beta_p$ which is a function of playing time could provide less biased estimates than standard regularization techniques, which shrink all player coefficients in exactly the same way. APM estimates can also be improved by incorporating data across multiple seasons, and/or by separately inferring player's defensive and offensive contributions, as explored in \citet{fearnhead2011estimating}.
Several variants and alternatives to the RAPM metrics exist. For example,~\citet{page2007using} use a hierarchical Bayesian regression model to identify a position's contribution to winning games, rather than for evaluating individual players.~\citet{deshpande2016estimating} propose a Bayesian model for estimating each player's effect on the team's chance of winning, where the response variable is the home team's win probability rather than the point spread. Models which explicitly incorporate the effect of teammate interactions are also needed. \citet{piette2011evaluating} propose one approach based on modeling players as nodes in a network, with edges between players that shared the court together. Edge weights correspond to a measure of performance for the lineup during their shared time on the court, and a measure of network centrality is used as a proxy for player importance. An additional review with more detail on possession-based player performance can be found in \citet{engelmann2017possession}.
\subsubsection{Expected Possession Value}
\label{sec:epv}
The purpose of the Expected Possession Value (EPV) framework, as developed by~\citet{cervone2014multiresolution}, is to infer the expected value of the possession at every moment in time. Ignoring free throws for simplicity, a possession can take on values $Z_i \in \{0, 2, 3\}$. The EPV at time $t$ in possession $i$ is defined as
\begin{equation}
\label{eqn:epv}
v_{it}=\mathbb{E}\left[Z_i | X_{i0}, ..., X_{it}\right]
\end{equation}
\noindent where $X_{i0}, ..., X_{it}$ contain all available covariate information about the game or possession for the first $t$ timestamps of possession $i$. The EPV framework is quite general and can be applied in a range of contexts, from evaluating strategies to constructing retrospectives on the key points or decisions in a possession. In this review, we focus on its use for player evaluation and provide a brief high-level description of the general framework.
~\citet{cervone2014multiresolution} were the first to propose a tractable multiresolution approach for inferring EPV from optical tracking data in basketball. They model the possession at two separate levels of resolution. The \emph{micro} level includes all spatio-temporal data for the ball and players, as well as annotations of events, like a pass or shot, at all points in time throughout the possession. Transitions from one micro state to another are complex due to the high level of granularity in this representation. The \emph{macro} level represents a coarsening of the raw data into a finite collection of states. The macro state at time $t$, $C_t = C(X_t)$, is the coarsened state of the possession at time $t$ and can be classified into one of three state types: $\mathcal{C}_{poss}, \mathcal{C}_{trans},$ and $\mathcal{C}_{end}.$ The information used to define $C_t$ varies by state type. For example,
$\mathcal{C}_{poss}$ is defined by the ordered triple containing the ID of the player with the ball, the location of the ball in a discretized court region, and an indicator for whether the player has a defender within five feet of him or her. $\mathcal{C}_{trans}$ corresponds to ``transition states'' which are typically very brief in duration, as they include moments when the ball is in the air during a shot, pass, turnover, or immediately prior to a rebound: $\mathcal{C}_{trans} = $\{shot attempt from $c \in \mathcal{C}_{poss}$, pass from $c \in \mathcal{C}_{poss}$ to $c' \in \mathcal{C}_{poss}$, turnover in progress, rebound in progress\}. Finally, $\mathcal{C}_{end}$ corresponds to the end of the possession, and simply encodes how the possession ended and the associated value: a made field goal, worth two or three points, or a missed field goal or a turnover, worth zero points. Working with macrotransitions facilitates inference, since the macro states are assumed to be semi-Markov, which means the sequence of new states forms a homogeneous Markov chain~\citep{bornn2017studying}.
Let $C_t$ be the current state and $\delta_t > t$ be the time that the next non-transition state begins, so that $C_{\delta_t} \notin \mathcal{C}_{trans}$ is the next possession state or end state to occur after $C_t$. If we assume that coarse states after time $\delta_t$ do not depend on the data prior to $\delta_t$, that is
\begin{equation}
\textrm{for } s>\delta_{t}, P\left(C_s \mid C_{\delta_{t}}, X_{0}, \ldots, X_{t}\right)=P\left(C_{s} | C_{\delta_{t}}\right),
\end{equation}
\noindent then EPV can be defined in terms of macro and micro factors as
\begin{equation}
v_{it}=\sum_{c} \mathbb{E}\left[Z_i | C_{\delta_{t}}=c\right] P\left(C_{\delta_{t}}=c | X_{i0}, \ldots, X_{it}\right)
\end{equation}
\noindent since the coarsened Markov chain is time-homogeneous. $\mathbb{E}\left[Z | C_{\delta_{t}}=c\right]$ is macro only, as it does not depend on the full resolution spatio-temporal data. It can be inferred by estimating the transition probabilities between coarsened-states and then applying standard Markov chain results to compute absorbing probabilities. Inferring macro transition probabilities could be as simple as counting the observed fraction of transitions between states, although model-based approaches would likely improve inference.
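The macro-level term $\mathbb{E}\left[Z \mid C_{\delta_{t}}=c\right]$ then follows from standard absorbing Markov chain algebra. The toy example below, with three possession states, three absorbing end states, and invented transition probabilities, shows the computation via the fundamental matrix.
\begin{verbatim}
import numpy as np

# Toy coarsened chain: 3 possession states and 3 absorbing end states.
poss_states = ["ball_handler_top", "wing_open", "post_entry"]
end_values = np.array([2.0, 3.0, 0.0])   # made 2, made 3, miss/turnover

# Q: transitions among possession states; R: possession -> end states.
# Probabilities are invented for illustration; each row of [Q | R] sums to 1.
Q = np.array([[0.10, 0.45, 0.15],
              [0.20, 0.05, 0.10],
              [0.15, 0.10, 0.05]])
R = np.array([[0.15, 0.05, 0.10],
              [0.10, 0.25, 0.30],
              [0.35, 0.00, 0.35]])

# Fundamental matrix N = (I - Q)^{-1}; absorption probabilities B = N R.
N = np.linalg.inv(np.eye(3) - Q)
B = N @ R
macro_epv = B @ end_values   # E[Z | current possession state]

for state, value in zip(poss_states, macro_epv):
    print(f"E[points | {state}] = {value:.2f}")
\end{verbatim}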
The micro models for inferring the next non-transition state (e.g. shot outcome, new possession state, or turnover) given the full resolution data, $P(C_{\delta_{t}}=c | X_{i0}, \ldots, X_{it}),$ are more complex and vary depending on the state-type under consideration.~\citet{cervone2014multiresolution} use log-linear hazard models~\citep[see][]{prentice1979hazard} for modeling both the time of the next major event and the type of event (shot, pass to a new player, or turnover), given the locations of all players and the ball. \citet{sicilia2019deephoops} use a deep learning representation to model these transitions. The details of each transition model depend on the state type: models for the case in which $C_{\delta_t}$ is a shot attempt or shot outcome are discussed in Sections \ref{sec:shot_efficiency} and \ref{sec:shot_selection}. See~\citet{masheswaran2014three} for a discussion of factors relevant to modeling rebounding and the original EPV papers for a discussion of passing models~\citep{cervone2014multiresolution, bornn2017studying}.
~\citet{cervone2014multiresolution} suggested two metrics for characterizing player ability that can be derived from EPV: Shot Satisfaction (described in Section \ref{sec:shot_selection}) and EPV Added (EPVA), a metric quantifying the overall contribution of a player. EPVA quantifies a player's value relative to the league-average offensive player receiving the ball in a similar situation. A player $p$ who possesses the ball starting at time $t_s$ and ending at time $t_e$ contributes value $v_{t_e} - v_{t_s}^{r(p)}$ over the league average replacement player, $r(p)$. Thus, the EPVA for player $p$, or EPVA$(p)$, is calculated as the average value that this player brings over the course of all times that player possesses the ball:
\begin{equation}
\text{EPVA(p)} = \frac{1}{N_p}\sum_{\{t_s, t_e\} \in \mathcal{T}^{p}} v_{t_e} - v_{t_s}^{r(p)}
\end{equation}
\noindent where $N_p$ is the number of games played by $p$, and $\mathcal{T}^{p}$ is the set of starting and ending ball-possession times for $p$ across all games. Averaging over games, instead of by touches, rewards high-usage players. Other ways of normalizing EPVA, e.g. by dividing by $|\mathcal{T}^p|$, are also worth exploring.
Unlike RAPM-based methods, which only consider changes in the score and the identities of the players on the court, EPVA leverages the high resolution optical data to characterize the precise value of specific decisions made by the ball carrier throughout the possession. Although this approach is powerful, it still has some crucial limitations for evaluating overall player value. The first is that EPVA measures the value added by a player only when that player touches the ball. As such, specialists, like three point shooting experts, tend to have high EPVA because they most often receive the ball in situations in which they are uniquely suited to add value. However, many players around the NBA add significant value by setting screens or making cuts which draw defenders away from the ball. These actions are hard to measure and thus not included in the original EPVA metric proposed by \citet{cervone2014multiresolution}. In future work, some of these effects could be captured by identifying appropriate ways to measure a player's ``gravity''~\citep{visualizegravity} or through new tools which classify important off-ball actions. Finally, EPVA only represents contributions on the offensive side of the ball and ignores a player's defensive prowess; as noted in Section~\ref{defensive ability}, a defensive version of EPVA would also be valuable.
In contrast to EPVA, the effects of off-ball actions and defensive ability are implicitly incorporated into RAPM-based metrics. As such, RAPM remains one of the key metrics for quantifying overall player value. EPVA, on the other hand, may provide better contextual understanding of how players add value, but a less comprehensive summary of each player's total contribution. A more rigorous comparison between RAPM, EPVA and other metrics for overall ability would be worthwhile.
\subsection{Production curves}
\label{sec:production_curves}
A major component of quantifying player ability involves understanding how ability evolves over a player's career. To predict and describe player ability over time, several methods have been proposed for inferring the so-called ``production curve'' for a player\footnote{Production curves are also referred to as ``player aging curves'' in the literature, although we prefer ``production curves'' because it does not imply that changes in these metrics over time are driven exclusively by age-related factors.}. The goal of a production curve analysis is to provide predictions about the future trajectory of a current player's ability, as well as to characterize similarities in production trajectories across players. These two goals are intimately related, as the ability to forecast production is driven by assumptions about historical production from players with similar styles and abilities.
Commonly, in a production curve analysis, a continuous measurement of aggregate skill (e.g. RAPM or VORP), denoted $Y_{pt}$, is considered for a particular player $p$ at time $t$:
$$Y_{pt} = f_p(t) + \epsilon_{pt}$$
\noindent where $f_p$ describes player $p$'s ability as a function of time, $t$, and $\epsilon_{pt}$ reflects irreducible errors which are uncorrelated over time, e.g. due to unobserved factors like minor injury, illness and chance variation. Athletes not only exhibit different career trajectories, but their careers occur at different ages, can be interrupted by injuries, and include different amounts of playing time. As such, the statistical challenge in production curve analysis is to infer smooth trajectories $f_p(t)$ from sparse irregular observations of $Y_{pt}$ across players \citep{wakim2014functional}.
There are two common approaches to modeling production curves: 1) Bayesian hierarchical modeling and 2) methods based on functional data analysis and clustering. In the Bayesian hierarchical paradigm,~\citet{berry1999bridging} developed a flexible hierarchical aging model to compare player abilities across different eras in three sports: hockey, golf, and baseball. Although not explored in their paper, their framework can be applied to basketball to account for player-specific development and age-related declines in performance.~\citet{page2013effect} apply a similar hierarchical method based on Gaussian Process regressions to infer how production evolves across different basketball positions. They find that production varies across player type and show that point guards (i.e. agile ball-handlers) generally spend a longer fraction of their career improving than other player types. \citet{vaci2019large} also use Bayesian hierarchical modeling with distinct parametric curves to describe trajectories before and after peak performance. They assume pre-peak performance reflects development whereas post-peak performance is driven by aging. Their findings suggest that athletes who develop more quickly also exhibit slower age-related declines, an observation which does not appear to depend on position.
In contrast to hierarchical Bayesian models, \citet{wakim2014functional} discuss how the tools of functional data analysis can be used to model production curves. In particular, functional principal components metrics can be used in an unsupervised fashion to identify clusters of players with similar trajectories. Others have explicitly incorporated notions of player similarity into functional models of production. In this framework, the production curve for any player $p$ is then expressed as a linear combination of the production curves from a set of similar players: $f_p(t) \approx \sum_{k \neq p} \alpha_{pk} f_k(t)$. For example, in their RAPTOR player rating system, \url{fivethirtyeight.com} uses a nearest neighbor algorithm to characterize similarity between players~\citep{natesilver538_2015, natesilver538_2019}. The production curve for each player is an average of historical production curves from a distinct set of the most similar athletes. A related approach, proposed by \citet{vinue2019forecasting}, employs the method of archetypoids \citep{vinue2015archetypoids}. Loosely speaking, the archetypoids consist of a small set of players, $\mathcal{A}$, that represent the vertices in the convex hull of production curves. Different from the RAPTOR approach, each player's production curve is represented as a convex combination of curves from the \emph{same set} of archetypes, that is, $\alpha_{pk} = 0 \; \forall \ k \notin \mathcal{A}$.
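A minimal sketch of the convex-combination representation, assuming a small set of synthetic archetype curves, is given below: the player's observed curve is projected onto weights $\alpha_{pk}\geq 0$ with $\sum_k \alpha_{pk}=1$ by constrained least squares. All curves and weights here are invented for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

ages = np.arange(20, 36)

# Synthetic archetype production curves (columns), invented for illustration.
early_peak = np.exp(-0.5 * ((ages - 24) / 3.0) ** 2)
late_peak = np.exp(-0.5 * ((ages - 29) / 3.0) ** 2)
flat = np.full(ages.size, 0.6)
A = np.column_stack([early_peak, late_peak, flat])

# Observed (noisy) player curve: a hidden mixture of the archetypes.
rng = np.random.default_rng(1)
f_p = A @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 0.03, ages.size)

# Constrained least squares: alpha >= 0 and sum(alpha) = 1.
loss = lambda a: np.sum((f_p - A @ a) ** 2)
res = minimize(loss, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1}])
print("estimated archetype weights:", res.x.round(2))
\end{verbatim}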
One often unaddressed challenge is that athlete playing time varies across games and seasons, which means sampling variability is non-constant. Whenever possible, this heteroskedasticity in the observed outcomes should be incorporated into the inference, either by appropriately controlling for minutes played or by using other relevant notions of exposure, like possessions or attempts.
Finally, although the precise goals of these production curve analyses differ, most current analyses focus on aggregate skill. More work is needed to capture what latent player attributes drive these observed changes in aggregate production over time. Models which jointly infer how distinct measures of athleticism and skill co-evolve, or models which account for changes in team quality and adjust for injury, could lead to further insight about player ability, development, and aging (see Figure \ref{fig:player_variance}). In the next sections we mostly ignore how performance evolves over time, but focus on quantifying some specific aspects of basketball ability, including shot making and defense.
\subsection{Shot modeling}
\label{sec:shooting}
Arguably the most salient aspect of player performance is the ability to score. There are two key factors which drive scoring ability: the ability to selectively identify the highest value scoring options (shot selection) and the ability to make a shot, conditioned on an attempt (shot efficiency). A player's shot attempts and his or her ability to make them are typically related. In \emph{Basketball on Paper}, Dean Oliver proposes the notion of a ``skill curve,'' which roughly reflects the inverse relationship between a player's shot volume and shot efficiency \citep{oliver2004basketball, skinner2010price, goldman2011allocative}. Goldsberry and others gain further insight into shooting behavior by visualizing how both player shot selection and efficiency vary spatially with a so-called ``shot chart.'' (See \citet{goldsberry2012courtvision} and \citet{goldsberry2019sprawlball} for examples.) Below, we discuss statistical models for inferring how both shot selection and shot efficiency vary across players, over space, and in defensive contexts.
\subsubsection{Shot efficiency}
\label{sec:shot_efficiency}
Raw FG\% is usually a poor measure for the shooting ability of an athlete because chance variability can obscure true differences between players. This is especially true when conditioning on additional contextual information like shot location or shot type, where sample sizes are especially small. For example, \citet{franks2016meta} show that the majority of observed differences in 3PT\% are due to sampling variability rather than true differences in ability, and thus 3PT\% is a poor metric for player discrimination. They demonstrate how these issues can be mitigated by using hierarchical models which shrink empirical estimates toward more reasonable prior means. These shrunken estimates are both more discriminative and more stable than the raw percentages.
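The shrinkage idea can be sketched with a simple empirical-Bayes beta-binomial model, which is far cruder than the hierarchical models in the cited work; the make/attempt counts and the prior pseudo-counts below are invented for illustration.
\begin{verbatim}
import numpy as np

# Invented 3PT makes and attempts for a handful of players.
makes = np.array([12, 48, 5, 90, 30])
attempts = np.array([25, 120, 9, 250, 100])
raw = makes / attempts

# Beta(a, b) prior acting as pseudo-counts, set by hand here to roughly
# match a league-average percentage; empirical Bayes would fit a and b.
a, b = 50.0, 90.0
shrunk = (makes + a) / (attempts + a + b)   # posterior mean for each player

for n, r, s in zip(attempts, raw, shrunk):
    print(f"attempts={n:4d}  raw={r:.3f}  shrunk={s:.3f}")
\end{verbatim}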
With the emergence of tracking data, hierarchical models have been developed which target increasingly context-specific estimands. \citet{franks2015characterizing} and \citet{cervone2014multiresolution} propose similar hierarchical logistic regression models for estimating the probability of making a shot given the shooter identity, defender distance, and shot location. In their models, they posit the logistic regression model
\begin{equation}
E[Y_{ip} \mid \ell_{ip}, X_{ijp}] = \textrm{logit}^{-1} \big( \alpha_{\ell_i,p} + \sum_{j=1}^J \beta_{j} X_{ij} \big)
\end{equation} where $Y_{ip}$ is the outcome of the $i$th shot by player $p$ given $J$ covariates $X_{ij}$ (e.g. defender distance) and $\alpha_{{\ell_i}, p}$ is a spatial random effect describing the baseline shot-making ability of player $p$ in location $\ell_i$. As shown in Figure \ref{fig:simpsons}, accounting for spatial context is crucial for understanding defensive impact on shot making.
Given high resolution data, more complex hierarchical models which capture similarities across players and space are needed to reduce the variance of resulting estimators. Franks et al. propose a conditional autoregressive (CAR) prior distribution for $\alpha_{\ell_i,p}$ to describe similarity in shot efficiencies between players. The CAR prior is simply a multivariate normal prior distribution over player coefficients with a structured covariance matrix. The prior covariance matrix is structured to shrink the coefficients of players with low attempts in a given region toward the FG\%s of players with similar styles and skills. The covariance is constructed from a nearest-neighbor similarity network on players with similar shooting preferences. These prior distributions improve out-of-sample predictions for shot outcomes, especially for players with fewer attempts. To model the spatial random effects, they represent a smoothed spatial field as a linear combination of functional bases following a matrix factorization approach proposed by \citet{miller2013icml} and discussed in more detail in Section \ref{sec:shot_selection}.
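To sketch how such a structured prior can be built from a player-similarity network, the snippet below assumes the common proper-CAR form $\Sigma = \tau^2 (D - \rho W)^{-1}$; the adjacency matrix and hyperparameters are invented, whereas the cited work constructs the network from shooting-style similarity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n_players = 6

# Invented nearest-neighbour similarity network among players (symmetric 0/1).
W = np.zeros((n_players, n_players))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)]:
    W[i, j] = W[j, i] = 1.0
D = np.diag(W.sum(axis=1))

rho, tau2 = 0.9, 0.25
Sigma = tau2 * np.linalg.inv(D - rho * W)   # CAR covariance over players

# One draw of spatial random effects for a single court region:
# players linked in the network receive correlated baseline coefficients.
alpha = rng.multivariate_normal(np.zeros(n_players), Sigma)
print(np.round(alpha, 2))
\end{verbatim}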
More recently, models which incorporate the full 3-dimensional trajectories of the ball have been proposed to further improve estimates of shot ability. Data from SportVU, Second Spectrum, NOAH, or RSPCT include the location of the ball in space as it approaches the hoop, including left/right accuracy and the depth of the ball once it enters the hoop.~\citet{marty2017data} and~\citet{marty2018high} use ball tracking data from over 20 million attempts taken by athletes ranging from high school to the NBA. From their analyses, \citet{marty2018high} and \citet{daly2019rao} show that the optimal entry location is about 2 inches beyond the center of the basket, at an entry angle of about $45^{\circ}$.
Importantly, this trajectory information can be used to improve estimates of shooter ability from a limited number of shots. \citet{daly2019rao} use trajectory data and a technique known as Rao-Blackwellization to generate lower error estimates of shooting skill. In this context, the Rao-Blackwell theorem implies that one can achieve lower variance estimates of the sample frequency of made shots by conditioning on sufficient statistics; here, the probability of making the shot. Instead of taking the field goal percentage as $\hat \theta_{FG} = \sum Y_{i} / n$, they infer the percentage as $\hat \theta_{FG\text{-}RB} = \sum p_{i} / n$, where $p_i = E[Y_i \mid X]$ is the inferred probability that shot $i$ goes in, as inferred from trajectory data $X$. The shot outcome is not a deterministic function of the observed trajectory information due to the limited precision of spatial data and the effect of unmeasured factors, like ball spin. They estimate the make probabilities, $p_i$, from the ball entry location and angle using a logistic regression.
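The Rao-Blackwellized estimator follows directly from its definition: average predicted make probabilities rather than binary outcomes. In the sketch below both the trajectory features and the make-probability model are simulated stand-ins, not the model fit by \citet{daly2019rao}.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def simulate_shots(n):
    # Simulated entry depth and left-right error (inches); invented model.
    depth = rng.normal(2.0, 2.0, n)
    lr = rng.normal(0.0, 2.5, n)
    p = 1 / (1 + np.exp(-(1.0 - 0.15 * (depth - 2) ** 2 - 0.10 * lr ** 2)))
    X = np.column_stack([(depth - 2) ** 2, lr ** 2])
    return X, rng.binomial(1, p)

# Make-probability model trained on a large corpus of tracked shots.
X_train, Y_train = simulate_shots(20000)
model = LogisticRegression().fit(X_train, Y_train)

# A single player's small sample of attempts.
X_p, Y_p = simulate_shots(40)
theta_fg = Y_p.mean()                              # empirical make fraction
theta_rb = model.predict_proba(X_p)[:, 1].mean()   # Rao-Blackwellized estimate
print(f"empirical FG% = {theta_fg:.3f}, Rao-Blackwellized FG% = {theta_rb:.3f}")
\end{verbatim}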
~\citet{daly2019rao} demonstrate that Rao-Blackwellized estimates are better at predicting end-of-season three point percentages from limited data than empirical make percentages. They also integrate the RB approach into a hierarchical model to achieve further variance reduction. In a follow-up paper, they focus on the effect that defenders have on shot trajectories~\citep{bornn2019using}. Unsurprisingly, they demonstrate an increase in the variance of shot depth, left-right location, and entry angle for highly contested shots, but they also show that players are typically biased toward short-arming when heavily defended.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figs/court_colored.pdf}
\end{subfigure}
~~
\begin{subfigure}[b]{0.55 \textwidth}
\includegraphics[width=\textwidth]{figs/bball_simpsons.pdf}
\end{subfigure}
\caption{Left) The five highest-volume shot regions, inferred using the NMF method proposed by \citet{miller2013icml}. Right) Fitted values in a logistic regression of shot outcome given defender distance and NMF shot region from over 115,000 shot attempts in the 2014-2015 NBA season \citep{franks2015characterizing, simpsons_personal}. The make probability increases approximately linearly with increasing defender distance in all shot locations. The number of observed shots at each binned defender distance is indicated by the point size. Remarkably, when ignoring shot region, defender distance has a slightly \emph{negative} coefficient, indicating that the probability of making a shot increases slightly with the closeness of the defender (gray line). This effect, which occurs because defender distance is also dependent on shot region, is an example of a ``reversal paradox''~\citep{tu2008simpson} and highlights the importance of accounting for spatial context in basketball. It also demonstrates the danger of making causal interpretations without carefully considering the role of confounding variables. }
\label{fig:simpsons}
\end{figure}
\subsubsection{Shot selection}
\label{sec:shot_selection}
Where and how a player decides to shoot is also important for determining one's scoring ability. Player shot selection is driven by a variety of factors including individual ability, teammate ability, and strategy~\citep{goldman2013live}. For example, \citet{alferink2009generality} study the psychology of shot selection and how the positive ``reward'' of shot making affects the frequency of attempted shot types. The log relative frequency of two-point shot attempts to three-point shot attempts is approximately linear in the log relative frequency of the player's ability to make those shots, a relationship known to psychologists as the generalized matching law~\citep{poling2011matching}. \citet{neiman2011reinforcement} study this phenomenon from a reinforcement learning perspective and demonstrate that a previously made three point shot increases the probability of a future three point attempt. Shot selection is also driven by situational factors, strategy, and the ability of a player's teammates. \citet{zuccolotto2018big} use nonparametric regression to infer how shot selection varies as a function of the shot clock and score differential, whereas \citet{goldsberry2019sprawlball} discusses the broader strategic shift toward high volume three point shooting in the NBA.
The availability of high-resolution spatial data has spurred the creation of new methods to describe shot selection.~\citet{miller2013icml} use a non-negative matrix factorization (NMF) of player-specific shot patterns across all players in the NBA to derive a low dimensional representation of a pre-specified number of approximately disjoint shot regions. These identified regions correspond to interpretable shot locations, including three-point shot types and mid-range shots, and can even reflect left/right bias due to handedness. See Figure \ref{fig:simpsons} for the results of a five-factor NMF decomposition. With the inferred representation, each player's shooting preferences can be approximated as a linear combination of the canonical shot ``bases.'' The player-specific coefficients from the NMF decomposition can be used as a lower dimensional characterization of the shooting style of that player \citep{bornn2017studying}.
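Once shots are binned into a player-by-court-cell count matrix, the factorization itself takes only a few lines; the sketch below uses random stand-in counts, whereas \citet{miller2013icml} first fit a smoothed intensity surface for each player before factorizing.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)

# Stand-in data: shot counts for 100 players over a 20 x 25 grid of court
# cells (real analyses smooth the raw counts before factorizing).
shot_counts = rng.poisson(1.0, size=(100, 20 * 25))

nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(shot_counts)   # players x bases: shooting-style loadings
H = nmf.components_                  # bases x cells: canonical shot regions

# Each row of H can be reshaped to the 20 x 25 grid and plotted as a shot
# "basis"; each row of W summarizes a player's style as weights on the bases.
print("player 0 loadings on the 5 shot bases:", W[0].round(2))
\end{verbatim}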
While the NMF approach can generate useful summaries of player shooting styles, it incorporates neither contextual information, like defender distance, nor hierarchical structure to reduce the variance of inferred shot selection estimates. As such, hierarchical spatial models for shot data, which allow for spatially varying effects of covariates, are warranted \citep{reich2006spatial, franks2015characterizing}. \citet{franks2015characterizing} use a hierarchical multinomial logistic regression to predict who will attempt a shot and where the attempt will occur given defensive matchup information. They consider a 26-outcome multinomial model, where the outcomes correspond to shot attempts by one of the five offensive players in any of five shot regions, with regions determined \textit{a priori} using the NMF factorization. The last outcome corresponds to a possession that does not lead to a shot attempt. Let $\mathcal{S}(p, b)$ be an indicator for a shot by player $p$ in region $b$. The shot attempt probabilities are modeled as
\begin{equation}
\label{eqn:shot_sel}
E[\mathcal{S}(p, b) \mid \ell_{ip}, X_{ip}] = \frac{\exp \left(\alpha_{p b}+\sum_{j=1}^{5} F(j, p) \beta_{j b}\right)}{1+\sum_{\tilde p,\tilde b} \exp \left(\alpha_{\tilde p \tilde b}+\sum_{j=1}^{5} F(j, \tilde p) \beta_{j \tilde b}\right)}
\end{equation}
\noindent where $\alpha_{pb}$ is the propensity of the player to shoot from region $b$, and $F(j, p)$ is the fraction of time in the possession that player $p$ was guarded by defender $j$. Shrinkage priors are again used for the coefficients based on player similarity. $\beta_{jb}$ accounts for the effect of defender $j$ on offensive player $p$'s shooting habits (see Section \ref{defensive ability}).
Beyond simply describing the shooting style of a player, we can also assess the degree to which players attempt high-value shots. \citet{chang2014quantifying} define effective shot quality (ESQ) in terms of the league-average expected value of a shot given the shot location and defender distance.~\citet{shortridge2014creating} similarly characterize how expected points per shot (EPPS) varies spatially. These metrics are useful for determining whether a player is taking shots that are high or low value relative to some baseline, e.g., the league-average player.
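As a rough illustration of these shot-quality ideas (not the exact ESQ or EPPS definitions), the sketch below computes a league-average expected points per shot within each (region, defender-distance) cell and compares it with each player's realized points per shot; the shot log, bins, and make probabilities are all simulated.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical shot log: shooter, court region, binned defender distance, points.
n = 50_000
shots = pd.DataFrame({
    "player": rng.integers(0, 400, n),
    "region": rng.integers(0, 5, n),        # e.g., NMF-derived shot regions
    "def_dist_bin": rng.integers(0, 4, n),  # binned defender distance
})
make_prob = (0.30 + 0.05 * shots["def_dist_bin"]
             - 0.02 * shots["region"]).clip(0.05, 0.95)
shot_value = np.where(shots["region"] == 4, 3, 2)  # region 4 = three-point range
shots["points"] = rng.binomial(1, make_prob) * shot_value

# League-average expected points per shot in each (region, distance) cell,
# assigned back to every shot taken from that cell.
shots["cell_epps"] = (shots.groupby(["region", "def_dist_bin"])["points"]
                      .transform("mean"))

# ESQ-style summary: average difficulty of the shots a player took versus
# the points per shot the player actually produced.
per_player = shots.groupby("player").agg(shot_quality=("cell_epps", "mean"),
                                         realized=("points", "mean"))
print(per_player.head())
\end{verbatim}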
\citet{cervonepointwise} and \citet{cervone2014multiresolution} use the EPV framework (Section \ref{sec:epv}) to develop a more sophisticated measure of shot quality termed ``shot satisfaction''. Shot satisfaction incorporates both offensive and defensive contexts, including shooter identity and all player locations and abilities, at the moment of the shot. The ``satisfaction'' of a shot is defined as the conditional expectation of the possession value at the moment the shot is taken, $\nu_{it}$, minus the expected value of the possession conditional on a counterfactual in which the player did not shoot, but passed or dribbled instead. The shot satisfaction for player $p$ is then defined as the average satisfaction, averaging over all shots attempted by the player:
$$\textrm{Satis}(p)=\frac{1}{\left|\mathcal{T}_{\textrm{shot}}^{p}\right|} \sum_{(i, t) \in \mathcal{T}_{\textrm{shot}}^{p}} \left(\nu_{it}-\mathbb{E}\left[Z_i | X_{it}, C_{t} \textrm{ is a non-shooting state} \right]\right)$$
\noindent where $\mathcal{T}_{\textrm{shot}}^{p}$ is the set of all possessions and times at which player $p$ took a shot, $Z_i$ is the point value of possession $i$, $X_{it}$ corresponds to the state of the game at time $t$ (player locations, shot clock, etc.), and $C_t$ is a non-shooting macro-state. $\nu_{it}$ is the inferred EPV of possession $i$ at time $t$ as defined in Equation \ref{eqn:epv}. Satisfaction is low if the shooter has poor shooting ability, takes difficult shots, or has teammates who are better scorers. As such, unlike other metrics, shot satisfaction measures an individual's decision making and implicitly accounts for the shooting ability of both the shooter \emph{and} their teammates. However, since shot satisfaction only averages differential value over the set $\mathcal{T}_{\textrm{shot}}^{p}$, it does not account for situations in which the player passes up a high-value shot. Additionally, although shot satisfaction is aggregated over all shots, exploring spatial variability in shot satisfaction would be an interesting extension.
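A minimal sketch of the satisfaction computation is shown below, assuming the per-shot EPV $\nu_{it}$ and the counterfactual non-shooting value have already been estimated; both columns here are simulated stand-ins for those model outputs.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical per-shot quantities for a handful of players:
# epv_at_shot    ~ nu_{it}, the EPV at the moment the shot is released
# counterfactual ~ E[Z_i | X_it, C_t is a non-shooting state]
shots = pd.DataFrame({
    "player": rng.integers(0, 10, 2_000),
    "epv_at_shot": rng.normal(1.05, 0.25, 2_000),
    "counterfactual": rng.normal(1.00, 0.20, 2_000),
})

# Shot satisfaction: average gap between the EPV of shooting and the EPV of
# the counterfactual (pass/dribble) state, over the player's shot attempts.
shots["satisfaction"] = shots["epv_at_shot"] - shots["counterfactual"]
satis = shots.groupby("player")["satisfaction"].mean()
print(satis.sort_values(ascending=False).head())
\end{verbatim}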
\subsubsection{The hot hand}
\label{hothand}
One of the most well-known and debated questions in basketball analytics concerns the existence of the so-called ``hot hand''. At a high level, a player is said to have a ``hot hand'' if the conditional probability of making a shot increases given a prior sequence of makes. Alternatively, given $k$ previous makes, the hot hand effect is negligible if $E[Y_{p,t}|Y_{p, t-1}=1, \ldots, Y_{p, t-k}=1, X_t] \approx E[Y_{p,t}| X_t]$, where $Y_{p, t}$ is the outcome of the $t$th shot by player $p$ and $X_t$ represents contextual information at time $t$ (e.g., shot type or defender distance). In their seminal paper,~\citet{gilovich1985hot} argued that the hot hand effect is negligible. Instead, they claim that streaks of made shots arising by chance are misinterpreted by fans and players \textit{ex post facto} as arising from a short-term improvement in ability. Extensive research following the original paper has found modest, but sometimes conflicting, evidence for the hot hand~\citep[e.g.][]{bar2006twenty, yaari2011hot,hothand93online}.
Amazingly, 30 years after the original paper,~\citet{miller2015surprised} demonstrated the existence of a bias in the estimators used in the original and most subsequent hot hand analyses. The bias, which attenuates estimates of the hot hand effect, arises due to the way in which shot sequences are selected and is closely related to the infamous Monty Hall problem~\citep{sciam, miller2017bridge}. After correcting for this bias, they estimate that there is an 11\% increase in the probability of making a three point shot given a streak of previous makes, a significantly larger hot-hand effect than had been previously reported.
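The selection bias is easy to reproduce by simulation. The sketch below generates shot sequences from a constant-probability shooter and averages, across sequences, the within-sequence proportion of makes that follow three straight makes; the average falls below the true probability even though no hot hand exists by construction. The sequence length, streak length, and make probability are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def prop_make_after_streak(seq, k=3):
    """Within-sequence proportion of makes on shots following k straight makes."""
    idx = [t for t in range(k, len(seq)) if seq[t - k:t].all()]
    if not idx:
        return np.nan
    return seq[idx].mean()

p, n_shots, n_seqs, k = 0.5, 100, 20_000, 3
estimates = []
for _ in range(n_seqs):
    seq = rng.random(n_shots) < p          # shooter with constant make probability
    est = prop_make_after_streak(seq, k)
    if not np.isnan(est):
        estimates.append(est)

# Averaging per-sequence proportions is biased below p even with no hot hand;
# this is the selection bias identified by Miller and Sanjurjo.
print(f"true p = {p}, mean per-sequence estimate = {np.mean(estimates):.3f}")
\end{verbatim}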
Relatedly,~\citet{stone2012measurement} describes the effects of a form of ``measurement error'' on hot hand estimates, arguing that it is more appropriate to condition on the \emph{probabilities} of previous makes, $E\left[Y_{p,t}|E[Y_{p, t-1}], ... E[Y_{p, t-k}], X_t\right]$, rather than observed makes and misses themselves -- a subtle but important distinction. From this perspective, the work of \citet{marty2018high} and \citet{daly2019rao} on the use of ball tracking data to improve estimates of shot ability could provide fruitful views on the hot hand phenomenon by exploring autocorrelation in shot trajectories rather than makes and misses. To our knowledge this has not yet been studied. For a more thorough review and discussion of the extensive work on statistical modeling of streak shooting, see \citet{lackritz2017probability}.
\subsection{Defensive ability}
\label{defensive ability}
Individual defensive ability is extremely difficult to quantify because 1) defense inherently involves team coordination and 2) there are relatively few box score statistics related to defense. Recently, this led Jackie MacMullan, a prominent NBA journalist, to proclaim that ``measuring defense effectively remains the last great frontier in analytics''~\citep{espnmac}. Early attempts at quantifying aggregate defensive impact include Defensive Rating (DRtg), Defensive Box Plus/Minus (DBPM) and Defensive Win Shares, each of which can be computed entirely from box score statistics \citep{oliver2004basketball, bbref_ratings}. DRtg is a metric meant to quantify the ``points allowed'' by an individual while on the court (per 100 possessions). Defensive Win Shares is a measure of the wins added by the player due to defensive play, and is derived from DRtg. However, all of these measures are particularly sensitive to teammate performance, and thus are not reliable measures of individual defensive ability.
Recent analyses have targeted more specific descriptions of defensive ability by leveraging tracking data, but still face some of the same difficulties. Understanding defense requires as much an understanding of what \emph{does not} happen as of what does happen. What shots were not attempted and why? Who \emph{did not} shoot and who was guarding them? \citet{goldsberry2013dwight} were among the first to use spatial data to characterize the absence of shot outcomes in different contexts. In one notable example from their work, they demonstrated that when Dwight Howard was on the court, the number of opponent shot attempts in the paint dropped by 10\% (``The Dwight Effect'').
More refined characterizations of defensive ability require some understanding of the defender's goals. \citet{franks2015characterizing} take a limited view of defenders' intent by focusing on inferring whom each defender is guarding. Using tracking data, they developed an unsupervised algorithm, i.e., one requiring no ground-truth matchup data, to identify likely defensive matchups at each moment of a possession. They posited that a defender guarding offensive player $k$ at time $t$ would be normally distributed about the point $\mu_{t k}=\gamma_{o} O_{t k}+\gamma_{b} B_{t}+\gamma_{h} H$, where $O_{tk}$ is the location of the offensive player, $B_t$ is the location of the ball, and $H$ is the location of the hoop. They use a hidden Markov model to infer the weights $\mathbf{\gamma}$ and subsequently the evolution of defensive matchups over time. They find that the average defender location lies about 2/3 of the way along the segment connecting the hoop to the offensive player being guarded, shaded about 10\% of the way toward the ball.~\citet{keshri2019automatic} extend this model by allowing $\mathbf{\gamma}$ to depend on player identities and court locations for a more accurate characterization of defensive play that also accounts for the ``gravity'' of dominant offensive players.
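The sketch below illustrates the guarding-position idea with made-up locations and weights (the weights are only loosely motivated by the reported 2/3-toward-the-player, 10\%-toward-the-ball finding), and it substitutes a simple one-to-one nearest-assignment rule for the hidden Markov model actually used to infer matchups.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative weights only; the paper infers these within a hidden Markov model.
gamma_o, gamma_b, gamma_h = 0.62, 0.11, 0.27

rng = np.random.default_rng(6)
hoop = np.array([5.25, 25.0])                            # hypothetical hoop location (ft)
offense = rng.uniform([0, 0], [47, 50], size=(5, 2))     # offensive player locations
defenders = rng.uniform([0, 0], [47, 50], size=(5, 2))   # defender locations
ball = offense[0]                                        # assume player 0 has the ball

# Expected defender location when guarding offensive player k.
mu = gamma_o * offense + gamma_b * ball + gamma_h * hoop

# Simplified matchup rule (not the paper's HMM): assign each defender to the
# offensive player whose expected guarding position is closest, one-to-one.
cost = np.linalg.norm(defenders[:, None, :] - mu[None, :, :], axis=-1)
def_idx, off_idx = linear_sum_assignment(cost)
print(dict(zip(def_idx.tolist(), off_idx.tolist())))     # defender -> offensive player
\end{verbatim}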
Defensive matchup data, as derived from these algorithms, is essential for characterizing the effectiveness of individual defensive play. For example, \citet{franks2015characterizing} use matchup data to describe the ability of individual defenders to both suppress shot attempts and disrupt attempted shots at different locations. To do so, they include defender identities and defender distance in the shot outcome and shot attempt models described in Sections \ref{sec:shot_efficiency} and \ref{sec:shot_selection}. Inferred coefficients relate to the ability of a defensive player to either reduce the propensity to make a shot given that it is taken, or to reduce the likelihood that a player attempts a shot in the first place.
These coefficients can be summarized in different ways. For example,~\citet{franks2015characterizing} introduce the defensive analogue of the shot chart by visualizing where on the court defenders reduce shot attempts and affect shot efficiency. They found that in the 2013-2014 season, Kawhi Leonard reduced the percentage of opponent three-point attempts more than any other perimeter defender; Roy Hibbert, a dominant interior defender that year, faced more shots in the paint than any other player, but also did the most to reduce his opponents' shooting efficiency. In~\citet{franks2015counterpoints}, matchup information is used to derive a notion of ``points against'' -- the number of points scored by offensive players when guarded by a specific defender. Such a metric can be useful in identifying the weak links in a team defense, although it is very sensitive to the skill of the offensive players being guarded.
Ultimately, the best matchup defenders are those who encourage the offensive player to make a low-value decision. The EPVA metric discussed in Section \ref{sec:general_skill} characterizes the value of offensive decisions by the ball handler, but a similar defender-centric metric could be derived by focusing on changes in EPV when ball handlers are guarded by a specific defender. Such a metric could be a fruitful direction for future research and provide insight into defenders who affect the game in unique ways. Finally, we note that a truly comprehensive understanding of defensive ability must go beyond matchup defense and incorporate aspects of defensive team strategy, including strategies for zone defense. Without direct information from teams and coaches, this is an immensely challenging task. Perhaps some of the methods for characterizing team play discussed in Section \ref{sec:team} could be useful in this regard. An approach which incorporates more domain expertise about team defensive strategy could also improve upon existing methods.
\section{Discussion}
Basketball is a game with complex spatio-temporal dynamics and strategies. With the availability of new sources of data, increasing computational capability, and methodological innovation, our ability to characterize these dynamics with statistical and machine learning models is improving. In line with these trends, we believe that basketball analytics will continue to move away from a focus on box-score based metrics and towards models for inferring (latent) aspects of team and player performance from rich spatio-temporal data. Structured hierarchical models which incorporate more prior knowledge about basketball and leverage correlations across time and space will continue to be an essential part of disentangling player, team, and chance variation. In addition, deep learning approaches for modeling spatio-temporal and image data will continue to develop into major tools for modeling tracking data.
However, we caution that more data and new methods do not automatically imply more insight. Figure \ref{fig:simpsons} depicts just one example of the ways in which erroneous conclusions may arise when not controlling for confounding factors related to space, time, strategy, and other relevant contextual information. In that example, we are able to control for the relevant spatial confounder, but in many other cases, the relevant confounders may not be observed. In particular, strategic and game-theoretic considerations are of immense importance, but are typically unknown. As a simple related example, when estimating field goal percentage as a function of defender distance, defenders may strategically give more space to the poorest shooters. Without this contextual information, defender distance would appear to be \emph{negatively} correlated with the probability of making the shot, even if extra space genuinely helps every shooter.
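This confounding is easy to reproduce in a toy simulation: below, better shooters are guarded more tightly and extra space genuinely helps, yet a naive logistic regression of shot outcome on defender distance alone recovers a negative coefficient. All parameter values are invented for illustration.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical data-generating process: better shooters are defended more
# tightly, and more space genuinely helps the shot go in.
n = 100_000
ability = rng.normal(0, 1, n)                     # latent shooting ability
def_dist = np.clip(4 - 1.5 * ability + rng.normal(0, 1, n), 0, None)
logit_p = -0.5 + 0.8 * ability + 0.10 * def_dist  # true distance effect is positive
make = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Naive regression on defender distance alone: the coefficient comes out
# negative because distance proxies for (unobserved) poor shooting ability.
naive = sm.Logit(make, sm.add_constant(def_dist)).fit(disp=0)
adjusted = sm.Logit(make, sm.add_constant(
    np.column_stack([def_dist, ability]))).fit(disp=0)
print("naive distance coef:   ", round(naive.params[1], 3))
print("adjusted distance coef:", round(adjusted.params[1], 3))
\end{verbatim}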
As such, we believe that causal thinking will be an essential component of the future of basketball analytics, precisely because many of the most important questions in basketball are causal in nature. These questions involve a comparison between an observed outcome and a counterfactual outcome, or require reasoning about the effects of strategic intervention: ``What would have happened if the Houston Rockets had not adopted their three point shooting strategy?'' or ``How many games would the Bucks have won in 2018 if Giannis Antetokounmpo were replaced with an `average' player?'' Metrics like Wins Above Replacement Player are ostensibly aimed at answering the latter question, but are not given an explicitly causal treatment. Tools from causal inference should also help us reason more soundly about questions of extrapolation, identifiability, uncertainty, and confounding, which are all ubiquitous in basketball. Based on our literature review, this need for causal thinking in sports remains largely unmet: there were few works which explicitly focused on causal and/or game theoretic analyses, with the exception of a handful in basketball \citep{skinner2015optimal, sandholtz2018transition} and in sports more broadly \citep{lopez2016persuaded, yamlost, gauriot2018fooled}.
Finally, although new high-resolution data has enabled increasingly sophisticated methods to address previously unanswerable questions, many of the richest data sources are not openly available. Progress in statistical and machine learning methods for sports is hindered by the lack of publicly available data. We hope that data providers will consider publicly sharing some historical spatio-temporal tracking data in the near future. We also note that there is potential for enriching partnerships between data providers, professional leagues, and the analytics community. Existing contests hosted by professional leagues, such as the National Football League's ``Big Data Bowl''~\citep[open to all,][]{nfl_football_operations}, and the NBA Hackathon~\citep[by application only,][]{nbahack}, have been very popular. Additional hackathons and open data challenges in basketball would certainly be well-received.
\section*{DISCLOSURE}
Alexander Franks is a consultant for a basketball team in the National Basketball Association. This relationship did not affect the content of this review. Zachary Terner is not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
The authors thank Luke Bornn, Daniel Cervone, Alexander D'Amour, Michael Lopez, Andrew Miller, Nathan Sandholtz, Hal Stern, and an anonymous reviewer for their useful comments, feedback, and discussions.
\newpage
\section*{SUPPLEMENTARY MATERIAL}
\input{supplement_content.tex}
\section*{LITERATURE\ CITED}
\renewcommand{\section}[2]{}%
\bibliographystyle{ar-style1.bst}
\section*{Response to comments}
We thank the reviewer for their thoughtful and helpful comments. As per the reviewer's suggestions, our major revisions were to: 1) reduce the length of the sections suggested by the reviewer, 2) clarify the discussion on EPV and the comparison to APM, and 3) update some of the text in the production curves section. Our detailed responses are below.
\begin{itemize}
\item The article provides a comprehensive review of the fast-changing field of basketball analytics. It is very well written and addresses all of the main topics. I have identified a few areas where I think additional discussion (or perhaps rewriting) would be helpful. In addition, the paper as a whole is longer than we typically publish by 5-10\%. We would encourage the authors to examine with an eye towards areas that can be cut. Some suggestions for editing down include Sections 2.1, 2.2.1, 2.2.2, and 3.3.3.
\emph{We cut close to 500 words (approx 5\%) by reducing the text in the suggested sections. However, after editing sections related to the reviewer comments, this length reduction was partially offset. According to the LaTeX editor we are using, there are now 8800 words in the document, less than the 9500 limit listed by the production editor. If the reviewer still finds the text too long, we will likely cut the hot hand discussion, limiting it to a few sentences in the section on shot efficiency. This will further reduce the length by about 350 words. Our preference, however, is to keep it.}
\item Vocabulary – I understand the authors decision to not spend much time describing basketball in detail. I think there would be value on occasion though in spelling out some of the differences (e.g., when referring to types of players as ‘bigs’ or ‘wings’). I suppose another option is to use more generic terms like player type.
\emph{We have made an attempt to reduce basketball jargon and clarify where necessary.}
\item Page 8, para 4 – “minimizing nuisance variability in characterizing player performance”
\emph{We have corrected this.}
\item Page 8, para 4, line 4 – Should this be “low-sample size settings”?
\emph{We have corrected this.}
\item Page 10, line 7 – Should refer to the “ridge regression model” or “ridge regression framework”
\emph{We now refer to the "ridge regression framework"}
\item Page 11 – The EPV discussion was not clear to me. To start it is not clear what information is in $C_t$. It seems that the information in $C_t$ depends on which of the type we are in. Is that correct? So $C_t$ is a triple if we are in $C_{poss}$ but it is only a singleton if it is in $C_{trans}$ or $C_{end}?$ Then in the formula that defines EPV there is some confusion about $\delta_t$; is $\delta_t$ the time of the next action ends (after time $t$)? If so, that should be made clearer.
\emph{We agree this discussion may not have been totally clear. There are many details and we have worked hard to revise and simplify the discussion as much as possible. It is probably better to think of $C_t$ as a discrete state that is \emph{defined} by the full continuous data $X$, rather than as ``carrying information''. $C_{poss}$ is a state defined by the ball carrier and location, whereas $C_{end}$ is defined only by the outcome that ends the possession (a made or missed shot, or a turnover, with an associated value of 0, 2, or 3). The game is only in $C_{trans}$ states for short periods of time, e.g. when the ball is traveling in the air during a pass or shot attempt. $\delta_t$ is the start time of the next non-transition state. The micromodels are used to predict both $\delta_t$ and the identity of the next non-transition state (e.g. who is possessing the ball next and where, or the ending outcome of the possession) given the full-resolution tracking data. We have worked to clarify this in the text.}
\item Page 12 – The definition of EPVA is interesting. Is it obvious that this should be defined by dividing the total by the number of games? This weights games by the number of possessions. I’d say a bit more discussion would help.
\emph{This is a great point. Averaging over games implicitly rewards players who have high usage, even if their value added per touch might be low. This is mentioned in their original paper and we specifically highlight this choice in the text.}
\item Page 12 – End of Section 3.1 .... Is it possible to talk about how APM and EPV compare? Has anyone compared the results for a season? Seems like that would be very interesting.
\emph{We have added some discussion about EPVA and APM in the text, and also recommended that a more rigorous comparison would be worthwhile. In short, EPVA is limited by its focus on the ball-handler: off-ball actions are not rewarded. These can be a significant source of added value and are implicitly included in APM. The original EPV authors also note that one-dimensional offensive players often accrue the most EPVA per touch since they only handle the ball when they are uniquely suited to scoring.}
\item Page 13 – Production curves – The discussion of $\epsilon_{pt}$ is a bit confusing. What do you mean by random effects? This is often characterized as variation due to other variables (e.g., physical condition). Also, a reference to Berry et al. (JASA, 1999) might be helpful here. They did not consider basketball but have some nice material on different aging functions.
\emph{We have changed this to read: ``$\epsilon_{pt}$ reflects irreducible error which is independent of time, e.g. due to unobserved factors like injury, illness and chance variation.'' The suggestion to include Berry et al is a great one. This is closely related to the work of Page et al and Vaci et al and is now included in the text.}
\item Page 13 – middle – There is a formula here defining $f_p(t)$ as an average over similar players. I think it should be made clear that you are summing over $k$ here.
\emph{We have made this explicit.}
\item Page 16 – Figure 3 – Great figure. Seems odd though to begin your discussion with the counterintuitive aggregate result. I’d recommend describing the more intuitive results in the caption. Perhaps you want to pull the counterintuitive out of the figure and mention it only in text (perhaps only in conclusion?).
\emph{The reviewer's point is well taken. We have restructured the caption to start with the more intuitive results. However, we kept the less intuitive result in the caption, since we wanted to highlight that incorporating spatial context is essential for making the right conclusions. }
\end{itemize}
\end{document}
\section{Tags}
Data Tags: \\
\#spatial \#tracking \#college \#nba \#intl \#boxscore \#pbp (play-by-play) \#longitudinal \#timeseries \\
Goal Tags: \\
\#playereval \#defense \#lineup \#team \#projection \#behavioral \#strategy \#rest \#health \#injury \#winprob \#prediction \\
Miscellaneous: \\
\#clustering \#coaching \#management \#refs \#gametheory \#intro \#background \\
\section{Summaries}
\subsection{Introduction}
Kubatko et al. “A starting point for analyzing basketball statistics.” \cite{kubatko}
\newline
Tags: \#intro \#background \#nba \#boxscore
\begin{enumerate}
\item Basics of the analysis of basketball. Provide a common starting point for future research in basketball
\item Define a general formulation for how to estimate the number of possessions.
\item Provide a common basis for future possession estimation
\item Also discuss other concepts and methods: per-minute statistics, pace, four factors, etc.
\item Contain other breakdowns such as rebound rate, plays, etc
\end{enumerate}
John Hollinger \newline
Pro Basketball Forecast, 2005-06 \newline
\cite{hollinger2004pro}
This is Hollinger's yearly publication forecast for how NBA players will perform.
Can go in an intro section or in a forecast section.
Tags: \#nba \#forecast \#prediction
Handbook of Statistical Methods and Analyses in Sports \newline
Albert, Jim and Glickman, Mark E and Swartz, Tim B and Koning, Ruud H \newline
\cite{albert2017handbook} \newline
Tags: \#intro \#background
\begin{enumerate}
\item This handbook will provide both overviews of statistical methods in sports and in-depth
treatment of critical problems and challenges confronting statistical research in sports. The
material in the handbook will be organized by major sport (baseball, football, hockey,
basketball, and soccer) followed by a section on other sports and general statistical design
and analysis issues that are common to all sports. This handbook has the potential to
become the standard reference for obtaining the necessary background to conduct serious...
\end{enumerate}
Basketball on paper: rules and tools for performance analysis \newline
Dean Oliver \newline
\cite{oliver2004basketball}
Seems like a useful reference / historical reference for analyzing basketball performance?
tags: \#intro \#background
\subsection{Networks/Player performance}
Evaluating Basketball Player Performance via Statistical Network Modeling
Piette, Pham, Anand
\cite{piette2011evaluating}
Tags: \#playereval \#lineup \#nba \#team
\begin{enumerate}
\item Players are nodes, edges are if they played together in the same five-man unit
\item Adapting a network-based algorithm to estimate centrality scores
\end{enumerate}
Darryl Blackport, Nylon Calculus / fansided \newline
\cite{threeStabilize} \newline
Discusses
\begin{itemize}
\item Three-point shots are high-variance shots, leading television analysts to use the phrase “live by the three, die by the three”, so when a career 32\% three point shooter suddenly shoots 38\% for a season, do we know that he has actually improved? How many attempts are enough to separate the signal from the noise? I decided to apply techniques used to calculate how long various baseball stats take to stabilize to see how long it takes for three-point shooting percentage to stabilize.
\item \#shooting \#playereval
\end{itemize}
Andrew Patton / Nylon Calculus / fansided \newline
\cite{visualizegravity} \newline
\begin{itemize}
\item Contains interesting plots to show the gravity of players at different locations on the court
\item Clever surface plots
\item \#tracking \#shooting \#visualization
\end{itemize}
The price of anarchy in basketball \\
Brian Skinner \\
\cite{skinner2010price} \\
\begin{enumerate}
\item Treats basketball like a network problem
\item each play represents a “pathway” through which the ball and players may move from origin (the inbounds pass) to goal (the basket). Effective field goal percentages from the resulting shot attempts can be used to characterize the efficiency of each pathway. Inspired by recent discussions of the “price of anarchy” in traffic networks, this paper makes a formal analogy between a basketball offense and a simplified traffic network.
\item The analysis suggests that there may be a significant difference between taking the highest-percentage shot each time down the court and playing the most efficient possible game. There may also be an analogue of Braess’s Paradox in basketball, such that removing a key player from a team can result in the improvement of the team’s offensive efficiency.
\item If such thinking [meaning that one should save their best plays for later in the game] is indeed already in the minds of coaches and players, then it should probably be in the minds of those who do quantitative analysis of sports as well. It is my hope that the introduction of ``price of anarchy'' concepts will constitute a small step towards formalizing this kind of reasoning, and in bringing the analysis of sports closer in line with the playing and coaching of sports.
\end{enumerate}
Basketball teams as strategic networks \\
Fewell, Jennifer H and Armbruster, Dieter and Ingraham, John and Petersen, Alexander and Waters, James S \\
\cite{fewell2012basketball} \\
Added late
\subsection{Team performance}
Tags: \#team \#boxscore \#longitudinal
Efficiency in the National Basketball Association: A Stochastic Frontier Approach with Panel Data
Richard A. Hofler, and James E. Payneb
\cite{hofler2006efficiency}
\begin{enumerate}
\item Shooting, rebounding, stealing, blocking shots help teams’ performance
\item Turnovers lower it
\item We also learn that better coaching and defensive prowess raise a team’s win efficiency.
\item Uses a stochastic production frontier model
\end{enumerate}
Predicting team rankings in basketball: The questionable use of on-court performance statistics \\
Ziv, Gal and Lidor, Ronnie and Arnon, Michal \\
\cite{ziv2010predicting} \\
\begin{enumerate}
\item Statistics on on-court performances (e.g. free-throw shots, 2-point shots, defensive and offensive rebounds, and assists) of basketball players during actual games are typically used by basketball coaches and sport journalists not only to assess the game performance of individual players and the entire team, but also to predict future success (i.e. the final rankings of the team). The purpose of this correlational study was to examine the relationships between 12 basketball on-court performance variables and the final rankings of professional basketball teams, using information gathered from seven consecutive seasons and controlling for multicollinearity.
\item Data analyses revealed that (a) some on-court performance statistics can predict team rankings at the end of a season; (b) on-court performance statistics can be highly correlated with one another (e.g. 2-point shots and 3-point shots); and (c) condensing the correlated variables (e.g. all types of shots as one category) can lead to more stable regressional models. It is recommended that basketball coaches limit the use of individual on-court statistics for predicting the final rankings of their teams. The prediction process may be more reliable if on-court performance variables are grouped into a large category of variables.
\end{enumerate}
Relationship between Team Assists and Win-Loss Record in the National Basketball Association \\
Merrill J Melnick \\
\cite{melnick2001relationship} \\
\begin{enumerate}
\item Using research methodology for analysis of secondary data, statistical data for five National Basketball Association (NBA) seasons (1993–1994 to 1997–1998) were examined to test for a relationship between team assists (a behavioral measure of teamwork) and win-loss record. Rank-difference correlation indicated a significant relationship between the two variables, the coefficients ranging from .42 to .71. Team assist totals produced higher correlations with win-loss record than assist totals for the five players receiving the most playing time (“the starters”).
\item A comparison of “assisted team points” and “unassisted team points” in relationship to win-loss record favored the former and strongly suggested that how a basketball team scores points is more important than the number of points it scores. These findings provide circumstantial support for the popular dictum in competitive team sports that “Teamwork Means Success—Work Together, Win Together.”
\end{enumerate}
\subsection{Shot selection}
Quantifying shot quality in the NBA
Chang, Yu-Han and Maheswaran, Rajiv and Su, Jeff and Kwok, Sheldon and Levy, Tal and Wexler, Adam and Squire, Kevin
Tags: \#playereval \#team \#shooting \#spatial \#nba
\cite{chang2014quantifying}
\begin{enumerate}
\item Separately characterize the difficulty of shots and the ability to make them
\item ESQ (Effective Shot Quality) and EFG+ (EFG - ESQ)
\item EFG+ is shooting ability above expectations
\item Addresses problem of confounding two separate attributes that EFG encounters
\begin{itemize}
\item quality of a shot and the ability to make that shot
\end{itemize}
\end{enumerate}
The problem of shot selection in basketball \\
Brian Skinner
\cite{skinner2012problem} \\
\begin{enumerate}
\item In this article, I explore the question of when a team should shoot and when they should pass up the shot by considering a simple theoretical model of the shot selection process, in which the quality of shot opportunities generated by the offense is assumed to fall randomly within a uniform distribution. Within this model I derive an answer to the question ``how likely must the shot be to go in before the player should take it?'' (A simplified numerical sketch of this threshold logic appears after this list.)
\item The theoretical prediction for the optimal shooting rate is compared to data from the National Basketball Association (NBA). The comparison highlights some limitations of the theoretical model, while also suggesting that NBA teams may be overly reluctant to shoot the ball early in the shot clock.
\end{enumerate}
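A stripped-down numerical version of the threshold logic described above: shot opportunities with quality drawn uniformly on $(0,1)$ arrive once per tick of the shot clock, and the team shoots only when the opportunity beats the value of waiting. This omits the turnover hazard and quality ceiling in Skinner's model; all numbers are purely illustrative.
\begin{verbatim}
import numpy as np

n_opportunities = 8          # chances remaining on the shot clock
value_if_wait = 0.0          # value of the possession after the clock expires

thresholds = []
for _ in range(n_opportunities):
    threshold = value_if_wait                   # shoot iff quality q > continuation value
    # E[max(q, v)] for q ~ Uniform(0, 1): v*v + (1 - v^2)/2
    value_if_wait = value_if_wait**2 + (1 - value_if_wait**2) / 2
    thresholds.append(threshold)

# Printed from the first opportunity to the last: the team should be much
# choosier early in the clock than when it is about to expire.
print([round(t, 3) for t in reversed(thresholds)])
\end{verbatim}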
Generality of the matching law as a descriptor of shot selection in basketball \newline
Alferink, Larry A and Critchfield, Thomas S and Hitt, Jennifer L and Higgins, William J \newline
\cite{alferink2009generality}
Studies the matching law to explains shot selection
\begin{enumerate}
\item Based on a small sample of highly successful teams, past studies suggested that shot selection (two- vs. three-point field goals) in basketball corresponds to predictions of the generalized matching law. We examined the generality of this finding by evaluating shot selection of college (Study 1) and professional (Study 3) players. The matching law accounted for the majority of variance in shot selection, with undermatching and a bias for taking three-point shots.
\item Shot-selection matching varied systematically for players who (a) were members of successful versus unsuccessful teams, (b) competed at different levels of collegiate play, and (c) served as regulars versus substitutes (Study 2). These findings suggest that the matching law is a robust descriptor of basketball shot selection, although the mechanism that produces matching is unknown.
\end{enumerate}
Basketball Shot Types and Shot Success in
Different Levels of Competitive Basketball \newline
Er{\v{c}}ulj, Frane and {\v{S}}trumbelj, Erik \newline
\cite{ervculj2015basketball} \newline
\begin{enumerate}
\item Relative frequencies of shot types in basketball
\item The purpose of our research was to investigate the relative frequencies of different types of basketball shots (above head, hook shot, layup, dunk, tip-in), some details about their technical execution (one-legged, two-legged, drive, cut, ...), and shot success in different levels of basketball competitions. We analysed video footage and categorized 5024 basketball shots from 40 basketball games and 5 different levels of competitive basketball (National Basketball Association (NBA), Euroleague, Slovenian 1st Division, and two Youth basketball competitions).
\item Statistical analysis with hierarchical multinomial logistic regression models reveals that there are substantial differences between competitions. However, most differences decrease or disappear entirely after we adjust for differences in situations that arise in different competitions (shot location, player type, and attacks in transition).
\item In the NBA, dunks are more frequent and hook shots are less frequent compared to European basketball, which can be attributed to better athleticism of NBA players. The effect situational variables have on shot types and shot success are found to be very similar for all competitions.
\item tags: \#nba \#shotselection
\end{enumerate}
A spatial analysis of basketball shot chart data \\
Reich, Brian J and Hodges, James S and Carlin, Bradley P and Reich, Adam M \\
\cite{reich2006spatial} \\
\begin{enumerate}
\item Uses spatial methods (like CAR, etc) to understand shot chart of NBA player (Sam Cassell)
\item Has some shot charts and maps
\item Fits an actual spatial model to shot chart data
\item Basketball coaches at all levels use shot charts to study shot locations and outcomes for their own teams as well as upcoming opponents. Shot charts are simple plots of the location and result of each shot taken during a game. Although shot chart data are rapidly increasing in richness and availability, most coaches still use them purely as descriptive summaries. However, a team’s ability to defend a certain player could potentially be improved by using shot data to make inferences about the player’s tendencies and abilities.
\item This article develops hierarchical spatial models for shot-chart data, which allow for spatially varying effects of covariates. Our spatial models permit differential smoothing of the fitted surface in two spatial directions, which naturally correspond to polar coordinates: distance to the basket and angle from the line connecting the two baskets. We illustrate our approach using the 2003–2004 shot chart data for Minnesota Timberwolves guard Sam Cassell.
\end{enumerate}
Optimal shot selection strategies for the NBA \\
Fichman, Mark and O’Brien, John Robert \\
\cite{fichman2019optimal}
Three point shooting and efficient mixed strategies: A portfolio management approach \\
Fichman, Mark and O’Brien, John \\
\cite{fichman2018three}
Rao-Blackwellizing field goal percentage \\
Daniel Daly-Grafstein and Luke Bornn\\
\cite{daly2019rao}
How to get an open shot: Analyzing team movement in basketball using tracking data \\
Lucey, Patrick and Bialkowski, Alina and Carr, Peter and Yue, Yisong and Matthews, Iain \\
\cite{lucey2014get} \\
Big data analytics for modeling scoring probability in basketball: The effect of shooting under high-pressure conditions \\
Zuccolotto, Paola and Manisera, Marica and Sandri, Marco \\
\cite{zuccolotto2018big}
\subsection{Tracking}
Miller and Bornn \newline
Possession Sketches: Mapping NBA Strategies
\cite{miller2017possession}
Tags: \#spatial \#clustering \#nba \#coaching \#strategy
\begin{enumerate}
\item Use LDA to create a database of movements of two players
\item Hierarchical model to describe interactions between players
\item Group together possessions with similar offensive structure
\item Uses tracking data to study strategy
\item Depends on \cite{blei2003latent}
\end{enumerate}
Nazanin Mehrasa*, Yatao Zhong*, Frederick Tung, Luke Bornn, Greg Mori
Deep Learning of Player Trajectory Representations for Team Activity Analysis
\cite{mehrasa2018deep}
Tags: \#tracking \#nba \#player
\begin{enumerate}
\item Use deep learning to learn player trajectories
\item Can be used for event recognition and team classification
\item Uses convolutional neural networks
\end{enumerate}
A. Miller and L. Bornn and R. Adams and K. Goldsberry \newline
Factorized Point Process Intensities: A Spatial Analysis of Professional Basketball \newline
~\cite{miller2013icml}
Tags: \#tracking \#nba \#player
\begin{enumerate}
\item We develop a machine learning approach to represent and analyze the underlying spatial structure that governs shot selection among professional basketball players in the NBA. Typically, NBA players are discussed and compared in a heuristic, imprecise manner that relies on unmeasured intuitions about player behavior. This makes it difficult to draw comparisons between players and make accurate player-specific predictions.
\item Modeling shot attempt data as a point process, we create a low dimensional representation of offensive player types in the NBA. Using non-negative matrix factorization (NMF), an unsupervised dimensionality reduction technique, we show that a low-rank spatial decomposition summarizes the shooting habits of NBA players. The spatial representations discovered by the algorithm correspond to intuitive descriptions of NBA player types, and can be used to model other spatial effects, such as shooting accuracy
\end{enumerate}
Creating space to shoot: quantifying spatial relative field goal efficiency in basketball \newline
Shortridge, Ashton and Goldsberry, Kirk and Adams, Matthew \newline
\cite{shortridge2014creating} \newline
\begin{enumerate}
\item This paper addresses the
challenge of characterizing and visualizing relative spatial
shooting effectiveness in basketball by developing metrics
to assess spatial variability in shooting. Several global and
local measures are introduced and formal tests are proposed to enable the comparison of shooting effectiveness
between players, groups of players, or other collections of
shots. We propose an empirical Bayesian smoothing rate
estimate that uses a novel local spatial neighborhood tailored for basketball shooting. These measures are evaluated
using data from the 2011 to 2012 NBA basketball season in
three distinct ways.
\item First we contrast nonspatial and spatial
shooting metrics for two players from that season and then
extend the comparison to all players attempting at least 250
shots in that season, rating them in terms of shooting effectiveness.
\item Second, we identify players shooting significantly better than the NBA average for their shot constellation, and formally compare shooting effectiveness of different players.
\item Third, we demonstrate an approach to map spatial shooting effectiveness. In general, we conclude that these measures are relatively straightforward to calculate with the right input data, and they provide distinctive and useful information about relative shooting ability in basketball.
\item We expect that spatially explicit basketball metrics will be useful additions to the sports analysis toolbox
\item \#tracking \#shooting \#spatial \#efficiency
\end{enumerate}
Courtvision: New Visual and Spatial Analytics for the NBA \newline
Goldsberry, Kirk \newline
\cite{goldsberry2012courtvision}
\begin{enumerate}
\item This paper investigates spatial and visual analytics as means to enhance basketball expertise. We
introduce CourtVision, a new ensemble of analytical techniques designed to quantify, visualize, and
communicate spatial aspects of NBA performance with unprecedented precision and clarity. We propose
a new way to quantify the shooting range of NBA players and present original methods that measure,
chart, and reveal differences in NBA players’ shooting abilities.
\item We conduct a case study, which applies
these methods to 1) inspect spatially aware shot site performances for every player in the NBA, and 2) to
determine which players exhibit the most potent spatial shooting behaviors. We present evidence that
Steve Nash and Ray Allen have the best shooting range in the NBA.
\item We conclude by suggesting that
visual and spatial analysis represent vital new methodologies for NBA analysts.
\item \#tracking \#shooting \#spatial
\end{enumerate}
Cervone, Daniel and D'Amour, Alex and Bornn, Luke and Goldsberry, Kirk \newline
A Multiresolution Stochastic Process Model for Predicting Basketball Possession Outcomes \newline
\cite{cervone2014multiresolution}
\begin{enumerate}
\item In this article, we propose a framework for using optical player tracking data to estimate, in real time, the expected number of points obtained by the end of a possession. This quantity, called expected possession value (EPV),
derives from a stochastic process model for the evolution of a basketball possession.
\item We model this process
at multiple levels of resolution, differentiating between continuous, infinitesimal movements of players,
and discrete events such as shot attempts and turnovers. Transition kernels are estimated using hierarchical spatiotemporal models that share information across players while remaining computationally tractable
on very large data sets. In addition to estimating EPV, these models reveal novel insights on players’ decisionmaking tendencies as a function of their spatial strategy. In the supplementary material, we provide a data sample and R code for further exploration of our model and its results.
\item This article introduces a new quantity, EPV, which represents a paradigm shift in the possibilities for statistical inferences about basketball. Using high-resolution, optical tracking data, EPV
reveals the value in many of the schemes and motifs that characterize basketball offenses but are omitted in the box score.
\item \#tracking \#expectedvalue \#offense
\end{enumerate}
POINTWISE: Predicting Points and Valuing Decisions in Real Time with NBA Optical Tracking Data \newline
Cervone, Dan and D'Amour, Alexander and Bornn, Luke and Goldsberry, Kirk \newline
\cite{cervonepointwise} \newline
\begin{enumerate}
\item EPV paper
\item We propose a framework for using player-tracking data to assign a point value to each moment of a possession by computing how many points the offense is expected to score by the end of the possession, a quantity we call expected possession value (EPV).
\item EPV allows analysts to evaluate every decision made during a basketball game – whether it is to pass, dribble, or shoot – opening the door for a multitude of new metrics and analyses of basketball that quantify value in terms of points.
\item In this paper, we propose a modeling framework for estimating EPV, present results of EPV computations performed using playertracking data from the 2012-13 season, and provide several examples of EPV-derived metrics that answer real basketball questions.
\end{enumerate}
A method for using player tracking data in basketball to learn player skills and predict team performance \\
Brian Skinner and Stephen Guy\\
\cite{skinner2015method} \\
\begin{enumerate}
\item Player tracking data represents a revolutionary new data source for basketball analysis, in which essentially every aspect of a player’s performance is tracked and can be analyzed numerically. We suggest a way by which this data set, when coupled with a network-style model of the offense that relates players’ skills to the team’s success at running different plays, can be used to automatically learn players’ skills and predict the performance of untested 5-man lineups in a way that accounts for the interaction between players’ respective skill sets.
\item After developing a general analysis procedure, we present as an example a specific implementation of our method using a simplified network model. While player tracking data is not yet available in the public domain, we evaluate our model using simulated data and show that player skills can be accurately inferred by a simple statistical inference scheme.
\item Finally, we use the model to analyze games from the 2011 playoff series between the Memphis Grizzlies and the Oklahoma City Thunder and we show that, even with a very limited data set, the model can consistently describe a player’s interactions with a given lineup based only on his performance with a different lineup.
\item Tags \#network \#playereval \#lineup
\end{enumerate}
Exploring game performance in the National Basketball Association using player tracking data \\
Sampaio, Jaime and McGarry, Tim and Calleja-Gonz{\'a}lez, Julio and S{\'a}iz, Sergio Jim{\'e}nez and i del Alc{\'a}zar, Xavi Schelling and Balciunas, Mindaugas \\
\cite{sampaio2015exploring}
\begin{enumerate}
\item Recent player tracking technology provides new information about basketball game performance. The aim of this study was to (i) compare the game performances of all-star and non all-star basketball players from the National Basketball Association (NBA), and (ii) describe the different basketball game performance profiles based on the different game roles. Archival data were obtained from all 2013-2014 regular season games (n = 1230). The variables analyzed included the points per game, minutes played and the game actions recorded by the player tracking system.
\item To accomplish the first aim, the performance per minute of play was analyzed using a descriptive discriminant analysis to identify which variables best predict the all-star and non all-star playing categories. The all-star players showed slower velocities in defense and performed better in elbow touches, defensive rebounds, close touches, close points and pull-up points, possibly due to optimized attention processes that are key for perceiving the required appropriate environmental information.
\item The second aim was addressed using a k-means cluster analysis, with the aim of creating maximally different performance profile groupings. Afterwards, a descriptive discriminant analysis identified which variables best predict the different playing clusters.
\item The results identified different playing profile of performers, particularly related to the game roles of scoring, passing, defensive and all-round game behavior. Coaching staffs may apply this information to different players, while accounting for individual differences and functional variability, to optimize practice planning and, consequently, the game performances of individuals and teams.
\end{enumerate}
Modelling the dynamic pattern of surface area in basketball and its effects on team performance \\
Metulini, Rodolfo and Manisera, Marica and Zuccolotto, Paola \\
\cite{metulini2018modelling} \\
Deconstructing the rebound with optical tracking data \\ Maheswaran, Rajiv and Chang, Yu-Han and Henehan, Aaron and Danesis, Samantha \\
\cite{maheswaran2012deconstructing}
NBA Court Realty \\
Cervone, Dan and Bornn, Luke and Goldsberry, Kirk \\
\cite{cervone2016nba} \\
Applying deep learning to basketball trajectories \\
Shah, Rajiv and Romijnders, Rob \\
\cite{shah2016applying} \\
Predicting shot making in basketball using convolutional neural networks learnt from adversarial multiagent trajectories \\
Harmon, Mark and Lucey, Patrick and Klabjan, Diego \\
\cite{harmon2016predicting}
\subsection{Defense}
Characterizing the spatial structure of defensive skill in professional basketball
Franks, Alexander and Miller, Andrew and Bornn, Luke and Goldsberry, Kirk and others
\cite{franks2015characterizing}
Tags: \#tracking \#nba \#playereval \#defense
\begin{enumerate}
\item This paper attempts to fill this void, combining spatial and spatio-temporal processes, matrix factorization techniques and hierarchical regression models with player tracking data to advance the state of defensive analytics in the NBA.
\item Our approach detects, characterizes and quantifies multiple aspects of defensive play in basketball, supporting some common understandings of defensive effectiveness, challenging others and opening up many new insights into the defensive elements of basketball.
\item Specifically, our approach allows us to characterize how players affect both shooting frequency and efficiency of the player they are guarding.
\item By using an NMF-based decomposition of the court, we find an efficient and data-driven characterization of common shot regions which naturally corresponds to common basketball intuition.
\item Additionally, we are able to use this spatial decomposition to simply characterize the spatial shot and shot-guarding tendencies of players, giving a natural low-dimensional representation of a player’s shot chart.
\end{enumerate}
Counterpoints: Advanced defensive metrics for nba basketball
Franks, Alexander and Miller, Andrew and Bornn, Luke and Goldsberry, Kirk
\cite{franks2015counterpoints}
Tags: \#tracking \#nba \#playereval \#defense
\begin{enumerate}
\item This paper bridges this gap, introducing a new suite of defensive metrics that aim to progress the field of basketball analytics by enriching the measurement of defensive play.
\item Our results demonstrate that the combination of player tracking, statistical modeling, and visualization enable a far richer characterization of defense than has previously been possible.
\item Our method, when combined with more traditional offensive statistics, provides a well-rounded summary of a player’s contribution to the final outcome of a game.
\item Using optical tracking data and a model to infer defensive matchups at every moment throughout, we are able to provide novel summaries of defensive performance, and report “counterpoints” - points scored against a particular defender.
\item One key takeaway is that defensive ability is difficult to quantify with a single value. Summaries of points scored against and shots attempted against can say more about the team’s defensive scheme (e.g. the Pacers funneling the ball toward Hibbert) than the individual player’s defensive ability.
\item However, we argue that these visual and statistical summaries provide a much richer set of measurements for a player’s performance, particularly those that give us some notion of shot suppression early in the possession.
\end{enumerate}
The Dwight effect: A new ensemble of interior defense analytics for the NBA
Goldsberry, Kirk and Weiss, Eric
Tags: \#tracking \#nba \#playereval \#defense
\cite{goldsberry2013dwight}
\begin{enumerate}
\item This paper introduces new spatial and visual analytics capable of assessing and characterizing the nature of interior defense in the NBA. We present two case studies that each focus on a different component of defensive play. Our results suggest that the integration of spatial approaches and player tracking data promise to improve the status quo of defensive analytics but also reveal some important challenges associated with evaluating defense.
\item Case study 1: basket proximity
\item Case study 2: shot proximity
\item Despite some relevant limitations, we contend that our results suggest that interior defensive abilities vary considerably across the league; simply stated, some players are more effective interior defenders than others. In terms of affecting shooting, we evaluated interior defense in 2 separate case studies.
\end{enumerate}
Automatic event detection in basketball using HMM with energy based defensive assignment \\
Keshri, Suraj and Oh, Min-hwan and Zhang, Sheng and Iyengar, Garud \\
\cite{keshri2019automatic}
\subsection{Player Eval}
Estimating an NBA player’s impact on his team’s chances of winning
Deshpande, Sameer K and Jensen, Shane T
\cite{deshpande2016estimating}
Tags: \#playereval \#winprob \#team \#lineup \#nba
\begin{enumerate}
\item We instead use a win probability framework for evaluating the impact NBA players have on their teams’ chances of winning. We propose a Bayesian linear regression model to estimate an individual player’s impact, after controlling for the other players on the court. We introduce several posterior summaries to derive rank-orderings of players within their team and across the league.
\item This allows us to identify highly paid players with low impact relative to their teammates, as well as players whose high impact is not captured by existing metrics.
\end{enumerate}
Who is ‘most valuable’? Measuring the player's production of wins in the National Basketball Association
David J. Berri
\cite{berri1999most}
Tags: \#playereval \#team \#nba
\begin{enumerate}
\item The purpose of this inquiry is to answer this question via an econometric model that links the player’s statistics in the National Basketball Association (NBA) to team wins.
\item The methods presented in this discourse take the given NBA data and provide an accurate answer to this question. As noted, if the NBA tabulated a wider range of statistics, this accuracy could be improved. Nevertheless, the picture painted by the presented methods does provide a fair evaluation of each player’s contribution to his team’s relative success or failure. Such evaluations can obviously be utilized with respect to free agent signings, player-for-player trades, the allocation of minutes, and also to determine the impact changes in coaching methods or strategy have had on an individual’s productivity.
\end{enumerate}
On estimating the ability of nba players
Fearnhead, Paul and Taylor, Benjamin Matthew
\cite{fearnhead2011estimating}
Tags: \#playereval \#nba \#lineup
\begin{enumerate}
\item This paper introduces a new model and methodology for estimating the ability of NBA players. The main idea is to directly measure how good a player is by comparing how their team performs when they are on the court as opposed to when they are off it. This is achieved in a such a way as to control for the changing abilities of the other players on court at different times during a match. The new method uses multiple seasons’ data in a structured way to estimate player ability in an isolated season, measuring separately defensive and offensive merit as well as combining these to give an overall rating. The use of game statistics in predicting player ability will be considered
\item To the knowledge of the authors, the model presented here is unique in providing a structured means of updating player abilities between years. One of the most important findings here is that whilst using player game statistics and a simple linear model to infer offensive ability may be okay, the very poor fit of the defensive ratings model suggests that defensive ability depends on some trait not measured by the current range of player game statistics.
\end{enumerate}
Add \cite{macdonald} as a reference? (Adj. PM for NHL players)
A New Look at Adjusted Plus/Minus for Basketball Analysis \newline
Tags: \#playereval \#nba \#lineups \newline
D. Omidiran \newline
\cite{omidiran2011pm}
\begin{enumerate}
\item We interpret the Adjusted Plus/Minus (APM) model as a special case of a general penalized regression problem indexed by the parameter $\lambda$. We provide a fast technique for solving this problem for general values of $\lambda$. We
then use cross-validation to select the parameter $\lambda$ and demonstrate that this choice yields substantially better prediction performance than APM.
\item Paper uses cross-validation to choose $\lambda$ and shows that this choice yields better out-of-sample predictions (a schematic form of the penalized objective is sketched just after this list).
\end{enumerate}
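A schematic form of the $\lambda$-indexed problem (our notation; the paper's exact design matrix and penalty may differ):
\[
\hat{\beta}(\lambda) = \arg\min_{\beta} \sum_i \big( y_i - x_i^{\top}\beta \big)^2 + \lambda\, P(\beta) ,
\]
where $y_i$ is the point differential for stint $i$, $x_i$ encodes which players are on the court (e.g.\ $+1$ for home players and $-1$ for away players), and $P$ is a penalty such as $\|\beta\|_2^2$ or $\|\beta\|_1$; $\lambda=0$ recovers ordinary APM, and $\lambda$ is selected by cross-validation on held-out games.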
Improved NBA adjusted plus-minus using regularization and out-of-sample testing \newline
Joseph Sill \newline
\cite{adjustedpm}
\begin{enumerate}
\item Adjusted +/- (APM) has grown in popularity as an NBA player evaluation technique in recent years. This paper presents a framework for evaluating APM models and also describes an enhancement to APM which nearly doubles its accuracy. APM models are evaluated in terms of their ability to predict the outcome of future games not included in the model’s training data.
\item This evaluation framework provides a principled way to make choices about implementation details. The enhancement is a Bayesian technique called regularization (a.k.a. ridge regression) in which the data are combined with a priori beliefs regarding reasonable ranges for the parameters in order to produce more accurate models (a minimal code sketch of this approach follows this list).
\end{enumerate}
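A minimal sketch of the regularized-APM idea (ours, not the paper's code; the design matrix, dimensions, and noise level below are invented for illustration), using a cross-validated ridge regression:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_stints, n_players = 2000, 300
# Toy stint-level design matrix: +1 if a player is on court for the home
# team, -1 for the away team, 0 otherwise (real data come from play-by-play).
X = rng.choice([-1.0, 0.0, 1.0], size=(n_stints, n_players),
               p=[0.02, 0.96, 0.02])
true_impact = rng.normal(0.0, 1.0, n_players)
y = X @ true_impact + rng.normal(0.0, 8.0, n_stints)  # stint point margins

# Regularized APM: choose the ridge penalty by cross-validation.
model = RidgeCV(alphas=np.logspace(-2, 4, 25), cv=5).fit(X, y)
print("chosen penalty:", model.alpha_)
print("largest estimated impacts:", np.sort(model.coef_)[-5:])
\end{verbatim}
Shrinking the player coefficients toward zero, with the amount of shrinkage chosen on held-out data, is the mechanism the paper credits for the large out-of-sample improvement over unregularized APM.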
Measuring How NBA Players Help Their Teams Win \newline
Article by Rosenbaum on 82games.com \newline
\cite{rosenbaum}
Effects of season period, team quality, and playing time on basketball players' game-related statistics \newline
Sampaio, Jaime and Drinkwater, Eric J and Leite, Nuno M \newline
\cite{sampaio2010effects}
Tags: \#playereval \#team
\begin{enumerate}
\item The aim of this study was to identify within-season differences in basketball players’ game-related statistics according to team quality and playing time. The sample comprised 5309 records from 198 players in the Spanish professional basketball league (2007-2008).
\item Factor analysis with principal components was applied to the game-related statistics gathered from the official box-scores, which limited the analysis to five factors (free-throws, 2-point field-goals, 3-point field-goals, passes, and errors) and two variables (defensive and offensive rebounds). A two-step cluster analysis classified the teams as stronger (69$\pm$8\% winning percentage), intermediate (43$\pm$5\% winning percentage), and weaker teams (32$\pm$5\% winning percentage); individual players were classified based on playing time as important players (28$\pm$4 min) or less important players (16$\pm$4 min). Seasonal variation was analysed monthly in eight periods.
\item A mixed linear model was applied to identify the effects of team quality and playing time within the months of the season on the previously identified factors and game-related statistics. No significant effect of season period was observed. A team quality effect was identified, with stronger teams being superior in terms of 2-point field-goals and passes. The weaker teams were the worst at defensive rebounding (stronger teams: 0.17$\pm$0.05; intermediate teams: 0.17$\pm$0.06; weaker teams: 0.15$\pm$0.03; $P<0.001$). While playing time was significant in almost all variables, errors were the most important factor when contrasting important and less important players, with fewer errors being made by important players. The trends identified can help coaches and players to create performance profiles according to team quality and playing time. However, these performance profiles appear to be independent of season period.
\item Identify effects of strength of team on players' performance
\item Conclusion: There appears to be no seasonal variation in high level basketball performances. Although offensive plays determine success in basketball, the results of the current study indicate that securing more defensive rebounds and committing fewer errors are also important. Furthermore, the identified trends allow the creation of performance profiles according to team quality and playing time during all seasonal periods. Therefore, basketball coaches (and players) will benefit from being aware of these results, particularly when designing game strategies and when taking tactical decisions.
\end{enumerate}
Forecasting NBA Player Performance using a Weibull-Gamma Statistical Timing Model \newline
Douglas Hwang \newline
\cite{hwang2012forecasting} \newline
\begin{enumerate}
\item Uses a Weibull-Gamma statistical timing model
\item Fits a player’s performance over time to a Weibull distribution, and accounts for unobserved heterogeneity by fitting the parameters of the Weibull distribution to a gamma distribution
\item This will help predict performance over the next season, estimate contract value, and quantify the potential ``aging'' effects of a certain player (one common Weibull-gamma parameterization is sketched after this list)
\item Tags: \#forecasting \#playereval \#performance
\end{enumerate}
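One common way to set up a Weibull model with gamma-distributed heterogeneity (a sketch of the general construction, not necessarily the paper's exact specification): conditional on an individual rate $\lambda$, the survival function is $S(t\mid\lambda)=\exp(-\lambda t^{c})$, and with $\lambda\sim\mathrm{Gamma}(r,\alpha)$ across players the marginal survival becomes
\[
S(t) = \int_0^\infty e^{-\lambda t^{c}}\,
\frac{\alpha^{r}\lambda^{r-1}e^{-\alpha\lambda}}{\Gamma(r)}\,d\lambda
= \left(\frac{\alpha}{\alpha+t^{c}}\right)^{r} ,
\]
so the gamma mixing absorbs unobserved player-to-player differences while the Weibull shape $c$ captures duration dependence.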
\subsubsection{Position analysis?}
Using box-scores to determine a position's contribution to winning basketball games
Page, Garritt L and Fellingham, Gilbert W and Reese, C Shane
\cite{page2007using}
\begin{enumerate}
\item Tried to quantify importance of different positions in basketball
\item Used hierarchical Bayesian model at five positions
\item One result is that defensive rebounds from point and shooting guards are important
\item While it is generally recognized that the relative importance of different skills is not constant across different positions on a basketball team, quantification of the differences has not been well studied. 1163 box scores from games in the National Basketball Association during the 1996-97 season were used to study the relationship of skill performance by position and game outcome as measured by point differentials.
\item A hierarchical Bayesian model was fit with individual players viewed as a draw from a population of players playing a particular position: point guard, shooting guard, small forward, power forward, center, and bench. Posterior distributions for parameters describing position characteristics were examined to discover the relative importance of various skills as quantified in box scores across the positions.
\item Results were consistent with expectations, although defensive rebounds from both point and shooting guards were found to be quite important.
\item In summary, the point spread of a basketball game increases if all five positions have more offensive rebounds, more assists, a better field goal percentage, and fewer turnovers than their positional opponents. These results are certainly not surprising. Some trends that were somewhat more surprising were the importance of defensive rebounding by the guard positions and offensive rebounding by the point guard. These results also show the emphasis the NBA places on an all-around small forward. (A generic two-level sketch of this kind of hierarchical model follows this list.)
\end{enumerate}
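A generic two-level sketch of this kind of model (our notation, not the authors' exact likelihood):
\[
y_g = \sum_{p=1}^{6} \beta_p^{\top} x_{gp} + \varepsilon_g ,
\qquad \beta_p \sim \mathcal{N}(\mu_0, \Sigma_0) ,
\]
where $y_g$ is the point differential in game $g$, $x_{gp}$ collects the box-score statistics produced by position $p$ (point guard, shooting guard, small forward, power forward, center, bench), and the position-level coefficient vectors $\beta_p$ receive common priors; comparing components of the posterior for the $\beta_p$ across positions quantifies how much each skill matters at each position.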
\subsubsection{Player production curves}
Effect of position, usage rate, and per game minutes played on NBA player production curves \\
Page, Garritt L and Barney, Bradley J and McGuire, Aaron T \\
\cite{page2013effect}
\begin{enumerate}
\item Production related to games played
\item Production curve modeled using GPs
\item Touches on deterioration of production relative to age
\item Learn how minutes played and usage rate affect production curves
\item One good nugget from conclusion: The general finding from this is that aging trends do impact natural talent evaluation of NBA players. Point guards appear to exhibit a different aging pattern than wings or bigs when it comes to Hollinger’s Game Score, with a slightly later peak and a much steeper decline once that peak is met. The fact that the average point guard spends more of his career improving his game score is not lost on talent evaluators in the NBA, and as such, point guards are given more time on average to perform the functions of a rotation player in the NBA.
\end{enumerate}
Functional data analysis of aging curves in sports \\
Alex Wakim and Jimmy Jin \\
\cite{wakim2014functional} \\
\begin{enumerate}
\item Uses FDA and FPCA to study aging curves in sports (the generic FPCA expansion is sketched after this list)
\item Includes NBA and finds age patterns among NBA players with different scoring abilities
\item It is well known that athletic and physical condition is affected by age. Plotting an individual athlete's performance against age creates a graph commonly called the player's aging curve. Despite the obvious interest to coaches and managers, the analysis of aging curves so far has used fairly rudimentary techniques. In this paper, we introduce functional data analysis (FDA) to the study of aging curves in sports and argue that it is both more general and more flexible compared to the methods that have previously been used. We also illustrate the rich analysis that is possible by analyzing data for NBA and MLB players.
\item In the analysis of MLB data, we use functional principal components analysis (fPCA) to perform functional hypothesis testing and show differences in aging curves between potential power hitters and potential non-power hitters. The analysis of aging curves in NBA players illustrates the use of the PACE method. We show that there are three distinct aging patterns among NBA players and that player scoring ability differs across the patterns. We also show that aging pattern is independent of position.
\end{enumerate}
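For reference, the FPCA representation underlying this kind of analysis expands each player's smoothed performance trajectory as
\[
X_i(t) \approx \mu(t) + \sum_{k=1}^{K} \xi_{ik}\,\phi_k(t) ,
\]
where $\mu$ is the mean aging curve, the $\phi_k$ are eigenfunctions of the covariance of the curves, and the scores $\xi_{ik}$ summarize how player $i$ deviates from the mean; clustering or testing on these scores is one way to arrive at a small number of distinct aging patterns. (Generic FPCA notation, not the authors' exact estimator.)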
Large data and Bayesian modeling—aging curves of NBA players \\
Vaci, Nemanja and Coci{\'c}, Dijana and Gula, Bartosz and Bilali{\'c}, Merim \\
\cite{vaci2019large} \\
\begin{enumerate}
\item Researchers interested in changes that occur as people age are faced with a number of methodological problems, starting with the immense time scale they are trying to capture, which renders laboratory experiments useless and longitudinal studies rather rare. Fortunately, some people take part in particular activities and pastimes throughout their lives, and often these activities are systematically recorded. In this study, we use the wealth of data collected by the National Basketball Association to describe the aging curves of elite basketball players.
\item We have developed a new approach rooted in the Bayesian tradition in order to understand the factors behind the development and deterioration of a complex motor skill. The new model uses Bayesian structural modeling to extract two latent factors, those of development and aging. The interaction of these factors provides insight into the rates of development and deterioration of skill over the course of a player’s life. We show, for example, that elite athletes have different levels of decline in the later stages of their career, which is dependent on their skill acquisition phase.
\item The model goes beyond description of the aging function, in that it can accommodate the aging curves of subgroups (e.g., different positions played in the game), as well as other relevant factors (e.g., the number of minutes on court per game) that might play a role in skill changes. The flexibility and general nature of the new model make it a perfect candidate for use across different domains in lifespan psychology.
\end{enumerate}
\subsection{Predicting winners}
Modeling and forecasting the outcomes of NBA basketball games
Manner, Hans
Tags: \#hothand \#winprob \#team \#multivariate \#timeseries
\cite{manner2016modeling}
\begin{enumerate}
\item This paper treats the problem of modeling and forecasting the outcomes of NBA basketball games. First, it is shown how the benchmark model in the literature can be extended to allow for heteroscedasticity and treat the estimation and testing in this framework. Second, time-variation is introduced into the model by (i) testing for structural breaks in the model and (ii) introducing a dynamic state space model for team strengths (a generic form of such a model is sketched after this list).
\item The in-sample results based on eight seasons of NBA data provide some evidence for heteroscedasticity and a few structural breaks in team strength within seasons. However, there is no evidence for persistent time variation and therefore the hot hand belief cannot be confirmed. The models are used for forecasting a large number of regular season and playoff games and the common finding in the literature that it is difficult to outperform the betting market is confirmed. Nevertheless, it turns out that a forecast combination of model based forecasts with betting odds can outperform either approach individually in some situations.
\end{enumerate}
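A generic form of such a dynamic model (not necessarily the paper's exact specification) is
\[
y_{ij,t} = \theta_{i,t} - \theta_{j,t} + h + \varepsilon_{ij,t} ,
\qquad
\theta_{i,t} = \theta_{i,t-1} + \eta_{i,t} ,
\]
where $y_{ij,t}$ is the margin when team $i$ hosts team $j$ at time $t$, $h$ is a home advantage, and the innovations $\eta_{i,t}$ let team strengths drift; persistent within-season ``hot hand'' effects at the team level would show up as a nonzero innovation variance.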
Basketball game-related statistics that discriminate between teams’ season-long success \newline
~\cite{ibanez2008basketball} \newline
Tags: \#prediction \#winner \#defense \newline
Ibanez, Sampaio, Feu, Lorenzo, Gomez, Ortega,
\begin{enumerate}
\item The aim of the present study was to identify the game-related statistics that discriminate between season-long successful and
unsuccessful basketball teams participating in the Spanish Basketball League (LEB1). The sample included all 145 average
records per season from the 870 games played between the 2000-2001 and the 2005-2006 regular seasons.
\item The following
game-related statistics were gathered from the official box scores of the Spanish Basketball Federation: 2- and 3-point field-goal attempts (both successful and unsuccessful), free-throws (both successful and unsuccessful), defensive and offensive
rebounds, assists, steals, turnovers, blocks (both made and received), and fouls (both committed and received). To control
for season variability, all results were normalized to minutes played each season and then converted to z-scores.
\item The results allowed discrimination between best and worst teams' performances through the following game-related statistics: assists (SC = 0.47), steals (SC = 0.34), and blocks (SC = 0.30). The function obtained correctly classified 82.4\% of the cases. In conclusion, season-long performance may be supported by players' and teams' passing skills and defensive preparation.
\item Our results suggest a number of differences between best and worst teams' game-related statistics, but globally the offensive (assists) and defensive (steals and blocks) actions were the most powerful factors in discriminating between groups. Therefore, game winners and losers are discriminated by defensive rebounding and field-goal shooting, whereas season-long performance is discriminated by players' and teams' passing skills and defensive preparation. Players should be better informed about these results, and it is suggested that coaches pay attention to guards' passing skills, to forwards' stealing skills, and to centres' blocking skills to build and prepare offensive communication and overall defensive pressure.
\end{enumerate}
Simulating a Basketball Match with a Homogeneous Markov Model and Forecasting the Outcome \newline
Strumbelj and Vracar \newline
\#prediction \#winner \newline
~\cite{vstrumbelj2012simulating}
\begin{enumerate}
\item We used a possession-based Markov model to model the progression of a basketball match. The model's transition matrix was estimated directly from NBA play-by-play data and indirectly from the teams' summary statistics. We evaluated both this approach and other commonly used forecasting approaches: logit regression of the outcome, a latent strength rating method, and bookmaker odds. We found that the Markov model approach is appropriate for modelling a basketball match and produces forecasts of a quality comparable to that of other statistical approaches, while giving more insight into basketball. Consistent with previous studies, bookmaker odds were the best probabilistic forecasts.
\item Using summary statistics to estimate Shirley's Markov model for basketball produced a model for a match between two specific teams. The model was used to simulate the match and produce outcome forecasts of a quality comparable to that of other statistical approaches, while giving more insight into basketball. Due to its homogeneity, the model is still limited with respect to what it can simulate, and a non-homogeneous model is required to deal with these issues. As far as basketball match simulation is concerned, more work has to be done, with an emphasis on making the transition probabilities conditional on the point spread and the game time. (A toy simulation in this spirit is sketched after this list.)
\end{enumerate}
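A toy version of the possession-level idea (ours; the transition probabilities below are invented, whereas the paper estimates them from play-by-play data and team summary statistics):
\begin{verbatim}
import numpy as np

# Toy possession-level Markov chain for a match between teams A and B.
states = ["A_ball", "B_ball", "A_scores", "B_scores"]
P = np.array([
    [0.00, 0.55, 0.45, 0.00],   # A's possession: empty trip or A scores
    [0.55, 0.00, 0.00, 0.45],   # B's possession: empty trip or B scores
    [0.00, 1.00, 0.00, 0.00],   # after A scores, B inbounds the ball
    [1.00, 0.00, 0.00, 0.00],   # after B scores, A inbounds the ball
])

def simulate_game(n_steps=220, rng=np.random.default_rng(7)):
    score, s = {"A": 0, "B": 0}, 0          # A starts with the ball
    for _ in range(n_steps):
        s = rng.choice(4, p=P[s])
        if states[s] == "A_scores":
            score["A"] += 2
        elif states[s] == "B_scores":
            score["B"] += 2
    return score

games = [simulate_game() for _ in range(2000)]
print("estimated P(A beats B):",
      np.mean([g["A"] > g["B"] for g in games]))
\end{verbatim}
Repeated simulation of the chain yields outcome probabilities directly, which is what makes the Markov formulation more interpretable than a pure outcome regression.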
Differences in game-related statistics of basketball performance by game location for men's winning and losing teams \\
G{\'o}mez, Miguel A and Lorenzo, Alberto and Barakat, Rub{\'e}n and Ortega, Enrique and Jos{\'e} M, Palao \\
\cite{gomez2008differences} \\
\#location \#winners \#boxscore
\begin{enumerate}
\item The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004–2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were done (one-way analysis of variance and discriminate analysis).
\item The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = –.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.
\end{enumerate}
\subsubsection{NCAA bracket}
Predicting the NCAA basketball tournament using isotonic least squares pairwise comparison model \\
Neudorfer, Ayala and Rosset, Saharon \\
\cite{neudorfer2018predicting}
Identifying NCAA tournament upsets using Balance Optimization Subset Selection \\
Dutta, Shouvik and Jacobson, Sheldon H and Sauppe, Jason J \\
\cite{dutta2017identifying}
Building an NCAA men’s basketball predictive model and quantifying its success
Lopez, Michael J and Matthews, Gregory J
\cite{lopez2015building}
Tags: \#ncaa \#winprob \#outcome \#prediction
\begin{enumerate}
\item This manuscript both describes our novel predictive models and quantifies the possible benefits, with respect to contest standings, of having a strong model. First, we describe our submission, building on themes first suggested by Carlin (1996) by merging information from the Las Vegas point spread with team-based possession metrics. The success of our entry reinforces longstanding themes of predictive modeling, including the benefits of combining multiple predictive tools and the importance of using the best possible data
\end{enumerate}
Introduction to the NCAA men’s basketball prediction methods issue
Glickman, Mark E and Sonas, Jeff \\
\cite{glickman2015introduction} \\
This is a whole issue of JQAS so not so important to include the intro cited here specifically \\
A generative model for predicting outcomes in college basketball
Ruiz, Francisco JR and Perez-Cruz, Fernando
\cite{ruiz2015generative}
Tags: \#ncaa \#prediction \#outcomes \#team
\begin{enumerate}
\item We show that a classical model for soccer can also provide competitive results in predicting basketball outcomes. We modify the classical model in two ways in order to capture both the specific behavior of each National collegiate athletic association (NCAA) conference and different strategies of teams and conferences. Through simulated bets on six online betting houses, we show that this extension leads to better predictive performance in terms of profit we make. We compare our estimates with the probabilities predicted by the winner of the recent Kaggle competition on the 2014 NCAA tournament, and conclude that our model tends to provide results that differ more from the implicit probabilities of the betting houses and, therefore, has the potential to provide higher benefits.
\item In this paper, we have extended a simple soccer model for college basketball. Outcomes at each game are modeled as independent Poisson random variables whose means depend on the attack and defense coefficients of teams and conferences. Our conference-specific coefficients account for the overall behavior of each conference, while the per-team coefficients provide more specific information about each team. Our vector-valued coefficients can capture different strategies of both teams and conferences. We have derived a variational inference algorithm to learn the attack and defense coefficients, and have applied this algorithm to four March Madness Tournaments. We compare our predictions for the 2014 Tournament to the recent Kaggle competition results and six online betting houses. Simulations show that our model identifies weaker but undervalued teams, which results in a positive mean profit in all the considered betting houses. We also outperform the Kaggle competition winner in terms of mean profit. (The scalar skeleton of this kind of model is sketched after this list.)
\end{enumerate}
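The scalar skeleton of this kind of model (shown here only for orientation; the paper uses vector-valued coefficients plus conference terms) treats the points $s_{ij}$ scored by team $i$ against team $j$ as
\[
s_{ij} \sim \mathrm{Poisson}(\lambda_{ij}) ,
\qquad
\log \lambda_{ij} = \mu + \mathrm{att}_i - \mathrm{def}_j ,
\]
with attack and defense coefficients learned from regular-season games and the fitted rates then used to simulate tournament outcomes and betting returns.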
A new approach to bracket prediction in the NCAA Men’s Basketball Tournament based on a dual-proportion likelihood
Gupta, Ajay Andrew
\cite{gupta2015new}
Tags: \#ncaa \#prediction
\begin{enumerate}
\item This paper reviews relevant previous research, and then introduces a rating system for teams using game data from that season prior to the tournament. The ratings from this system are used within a novel, four-predictor probability model to produce sets of bracket predictions for each tournament from 2009 to 2014. This dual-proportion probability model is built around the constraint of two teams with a combined 100\% probability of winning a given game.
\item This paper also performs Monte Carlo simulation to investigate whether modifications are necessary from an expected value-based prediction system such as the one introduced in the paper, in order to have the maximum bracket score within a defined group. The findings are that selecting one high-probability ``upset'' team for one to three late-round games is likely to outperform other strategies, including one with no modifications to the expected value, as long as the upset choice overlaps a large minority of competing brackets while leaving the bracket some distinguishing characteristics in late rounds.
\end{enumerate}
Comparing Team Selection and Seeding for the 2011 NCAA Men's Basketball Tournament
Gray, Kathy L and Schwertman, Neil C
\cite{gray2012comparing}
Tags: \#ncaa \#seeding \#selection
\begin{enumerate}
\item In this paper, we propose an innovative heuristic measure of team success, and we investigate how well the NCAA committee seeding compares to the computer-based placements by Sagarin and the rating percentage index (RPI). For the 2011 tournament, the NCAA committee selection process performed better than those based solely on the computer methods in determining tournament success.
\item This analysis of 2011 tournament data shows that the Selection Committee in 2011 was quite effective in the seeding of the tournament and that basing the seeding entirely on computer ratings was not an advantage since, of the three placements, the NCAA seeding had the strongest correlation with team success points and the fewest upsets. The incorporation of some subjectivity into their seeding appears to be advantageous. The committee is able to make an adjustment, for example, if a key player is injured or unable to play for some reason. From this analysis of the data for the 2011 tournament, there is ample evidence that the NCAA selection committee was proficient in the selection and seeding of the tournament teams.
\end{enumerate}
Joel Sokol's group on LRMC method: \\
\cite{brown2010improved} \\
\cite{kvam2006logistic} \\
\cite{brown2012insights} \\
\subsection{Uncategorized}
Hal Stern in JASA on Brownian motion model for progress of sports scores: \\
\cite{stern1994brownian} \\
Random walk picture of basketball scoring \\
Gabel, Alan and Redner, Sidney \\
\cite{gabel2012random}\\
\begin{enumerate}
\item We present evidence, based on play-by-play data from all 6087 games from the 2006/07--2009/10 seasons of the National Basketball Association (NBA), that basketball scoring is well described by a continuous-time anti-persistent random walk. The time intervals between successive scoring events follow an exponential distribution, with essentially no memory between different scoring intervals.
\item By including the heterogeneity of team strengths, we build a detailed computational random-walk model that accounts for a variety of statistical properties of scoring in basketball games, such as the distribution of the score difference between game opponents, the fraction of game time that one team is in the lead, the number of lead changes in each game, and the season win/loss records of each team.
\item In this work, we focus on the statistical properties of scoring during each basketball game. The scoring data are consistent with the scoring rate being described by a continuous-time Poisson process. Consequently, apparent scoring bursts or scoring droughts arise from Poisson statistics rather than from a temporally correlated process.
\item Our main hypothesis is that the evolution of the score difference between two competing teams can be accounted for by a continuous-time random walk (a toy simulation of this picture is sketched after this list).
\item However, this competitive rat race largely eliminates systematic advantages between teams, so that all that remains, from a competitive standpoint, are small surges and ebbs in performance that arise from the underlying stochasticity of the game.
\end{enumerate}
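A toy continuous-time version of this picture (ours; the scoring rate and point-value mix are invented, and the anti-persistent restoring effect documented in the paper is omitted for brevity):
\begin{verbatim}
import numpy as np

def simulate_margin(rate_per_min=2.2, minutes=48.0, p_home=0.5,
                    rng=np.random.default_rng(3)):
    # Exponential waiting times between scoring events (Poisson process),
    # each event credited to the home team with probability p_home.
    t, margin = 0.0, 0
    while True:
        t += rng.exponential(1.0 / rate_per_min)
        if t > minutes:
            return margin
        points = rng.choice([1, 2, 3], p=[0.2, 0.6, 0.2])
        margin += points if rng.random() < p_home else -points

margins = [simulate_margin() for _ in range(5000)]
print("mean absolute final margin:", np.mean(np.abs(margins)))
\end{verbatim}
Even this memoryless version reproduces the qualitative point that apparent scoring runs can arise from Poisson statistics alone, without any temporally correlated ``momentum''.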
Importance of Free-Throws at Various Stages of Basketball Games \\
\cite{kozar1994importance}
\begin{enumerate}
\item Basketball coaches often refer to their teams' success or failure as a product of their players' performances at the free-throw line. In the present study, play-by-play records of 490 NCAA Division I men's basketball games were analyzed to assess the percentage of points scored from free-throws at various stages of the games.
\item About 20\% of all points were scored from free-throws. Free-throws comprised a significantly higher percentage of total points scored during the last 5 minutes than the first 35 minutes of the game for both winning and losing teams. Also, in the last 5 minutes of 246 games decided by 9 points or less and 244 decided by 10 points or more, winners scored a significantly higher percentage of points from free-throws than did losers. Suggestions for structuring practice conditions are discussed.
\end{enumerate}
Dynamic modeling of performance in basketball \\
\cite{malarranha2013dynamic} \\
\begin{enumerate}
\item The aim of this study was to identify the intra-game variation from four performance indicators that determine the outcome of basketball games, controlling for quality of opposition. All seventy-four games of the Basketball World Championship (Turkey 2010) were analyzed to calculate the performance indicators in eight 5-minute periods. A repeated measures ANOVA was performed to identify differences in time and game outcome for each performance indicator. The quality of opposition was included in the models as a covariate.
\item The effective field goal percentage ($F=14.0$, $p<.001$, $\eta^2=.09$) influenced the game outcome throughout the game, while the offensive rebounds percentage ($F=7.6$, $p<.05$, $\eta^2=.05$) had greater influence in the second half. The offensive ($F=6.3$, $p<.05$, $\eta^2=.04$) and defensive ($F=12.0$, $p<.001$, $\eta^2=.08$) ratings also influenced the outcome of the games. These results may give coaches more accurate information when preparing their teams for competition.
\end{enumerate}
Modeling the offensive-defensive interaction and resulting outcomes in basketball \\
Lamas, Leonardo and Santana, Felipe and Heiner, Matthew and Ugrinowitsch, Carlos and Fellingham, Gilbert \\
\cite{lamas2015modeling} \\
\subsubsection{Sampaio section}
Discriminative power of basketball game-related statistics by level of competition and sex \\
\cite{sampaio2004discriminative} \\
Discriminative game-related statistics between basketball starters and nonstarters when related to team quality and game outcome \\
\cite{sampaio2006discriminative} \\
Discriminant analysis of game-related statistics between basketball guards, forwards and centres in three professional leagues \\
\cite{sampaio2006discriminant} \\
Statistical analyses of basketball team performance: understanding teams’ wins and losses according to a different index of ball possessions \\
\cite{sampaio2003statistical}
Game related statistics which discriminate between winning and losing under-16 male basketball games \\
\cite{lorenzo2010game} \\
Game location influences basketball players' performance across playing positions \\
\cite{sampaio2008game} \\
Identifying basketball performance indicators in regular season and playoff games \\
\cite{garcia2013identifying} \\
\subsection{General references}
Basketball reference: \cite{bballref} \newline
Moneyball: \cite{lewis2004moneyball} \newline
NBA website: \cite{nba_glossary} \newline
Spatial statistics: \cite{diggle2013statistical} and \cite{ripley2005spatial} \newline
Statistics for spatial data: \cite{cressie93} \newline
Efron and Morris on Stein's estimator? \cite{efron1975data} \\
Bill James?: \cite{james1984the-bill} and \cite{james2010the-new-bill}
I think Cleaning the Glass would be a good reference, especially the stats component of its site. The review article should mention somewhere that the growth of basketball analytics in academic articles - and its growth in industry - occurred simultaneously with its use on different basketball stats or fan sites. Places like CTG, hoopshype, etc., which mention and popularize fancier statistics, contributed to the growth of analytics in basketball.
Not sure where or how best to include that in the paper, but thought it was at least worth writing down somewhere.
\subsection{Hot hand?}
\cite{hothand93online}
\bibliographystyle{plain}
\section{Tags}
Data Tags: \\
\#spatial \#tracking \#college \#nba \#intl \#boxscore \#pbp (play-by-play) \#longitudinal \#timeseries \\
Goal Tags: \\
\#playereval \#defense \#lineup \#team \#projection \#behavioral \#strategy \#rest \#health \#injury \#winprob \#prediction \\
Miscellaneous: \\
\#clustering \#coaching \#management \#refs \#gametheory \#intro \#background \\
\section{Summaries}
\subsection{Introduction}
Kubatko et al. “A starting point for analyzing basketball statistics.” \cite{kubatko}
\newline
\citefield{kubatko}{title}
Tags: \#intro \#background \#nba \#boxscore
\begin{enumerate}
\item Basics of the analysis of basketball. Provide a common starting point for future research in basketball
\item Define a general formulation for how to estimate the number of possessions.
\item Provide a common basis for future possession estimation
\item Also discuss other concepts and methods: per-minute statistics, pace, four factors, etc.
\item Contain other breakdowns such as rebound rate, plays, etc
\end{enumerate}
\subsection{Networks/Player performance}
Evaluating Basketball Player Performance via Statistical Network Modeling
Piette, Pham, Anand
\cite{piette2011evaluating}
Tags: \#playereval \#lineup \#nba \#team
\begin{enumerate}
\item Players are nodes, edges are if they played together in the same five-man unit
\item Adapting a network-based algorithm to estimate centrality scores
\end{enumerate}
Quantifying shot quality in the NBA
Chang, Yu-Han and Maheswaran, Rajiv and Su, Jeff and Kwok, Sheldon and Levy, Tal and Wexler, Adam and Squire, Kevin
Tags: \#playereval \#team \#shooting \#spatial \#nba
\cite{chang2014quantifying}
\begin{enumerate}
\item Separately characterize the difficulty of shots and the ability to make them
\item ESQ (Effective Shot Quality) and EFG+ (EFG - ESQ)
\item EFG+ is shooting ability above expectations
\item Addresses problem of confounding two separate attributes that EFG encounters
\begin{itemize}
\item quality of a shot and the ability to make that shot
\end{itemize}
\end{enumerate}
\bibliographystyle{plain}
\section{INTRODUCTION}
Please begin the main text of your article here.
\section{FIRST-LEVEL HEADING}
This is dummy text.
\subsection{Second-Level Heading}
This is dummy text. This is dummy text. This is dummy text. This is dummy text.
\subsubsection{Third-Level Heading}
This is dummy text. This is dummy text. This is dummy text. This is dummy text.
\paragraph{Fourth-Level Heading} Fourth-level headings are placed as part of the paragraph.
\section{ELEMENTS\ OF\ THE\ MANUSCRIPT}
\subsection{Figures}Figures should be cited in the main text in chronological order. This is dummy text with a citation to the first figure (\textbf{Figure \ref{fig1}}). Citations to \textbf{Figure \ref{fig1}} (and other figures) will be bold.
\begin{figure}[h]
\includegraphics[width=3in]{SampleFigure}
\caption{Figure caption with descriptions of parts a and b}
\label{fig1}
\end{figure}
\subsection{Tables} Tables should also be cited in the main text in chronological order (\textbf {Table \ref{tab1}}).
\begin{table}[h]
\tabcolsep7.5pt
\caption{Table caption}
\label{tab1}
\begin{center}
\begin{tabular}{@{}l|c|c|c|c@{}}
\hline
Head 1 &&&&Head 5\\
{(}units)$^{\rm a}$ &Head 2 &Head 3 &Head 4 &{(}units)\\
\hline
Column 1 &Column 2 &Column3$^{\rm b}$ &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
\hline
\end{tabular}
\end{center}
\begin{tabnote}
$^{\rm a}$Table footnote; $^{\rm b}$second table footnote.
\end{tabnote}
\end{table}
\subsection{Lists and Extracts} Here is an example of a numbered list:
\begin{enumerate}
\item List entry number 1,
\item List entry number 2,
\item List entry number 3,\item List entry number 4, and
\item List entry number 5.
\end{enumerate}
Here is an example of a extract.
\begin{extract}
This is an example text of quote or extract.
This is an example text of quote or extract.
\end{extract}
\subsection{Sidebars and Margin Notes}
\begin{marginnote}[]
\entry{Term A}{definition}
\entry{Term B}{definition}
\entry{Term C}{definition}
\end{marginnote}
\begin{textbox}[h]\section{SIDEBARS}
Sidebar text goes here.
\subsection{Sidebar Second-Level Heading}
More text goes here.\subsubsection{Sidebar third-level heading}
Text goes here.\end{textbox}
\subsection{Equations}
\begin{equation}
a = b \ {\rm ((Single\ Equation\ Numbered))}
\end{equation}
Equations can also be multiple lines as shown in Equations 2 and 3.
\begin{eqnarray}
c = 0 \ {\rm ((Multiple\ Lines, \ Numbered))}\\
ac = 0 \ {\rm ((Multiple \ Lines, \ Numbered))}
\end{eqnarray}
\begin{summary}[SUMMARY POINTS]
\begin{enumerate}
\item Summary point 1. These should be full sentences.
\item Summary point 2. These should be full sentences.
\item Summary point 3. These should be full sentences.
\item Summary point 4. These should be full sentences.
\end{enumerate}
\end{summary}
\begin{issues}[FUTURE ISSUES]
\begin{enumerate}
\item Future issue 1. These should be full sentences.
\item Future issue 2. These should be full sentences.
\item Future issue 3. These should be full sentences.
\item Future issue 4. These should be full sentences.
\end{enumerate}
\end{issues}
\section*{DISCLOSURE STATEMENT}
If the authors have nothing to disclose, the following statement will be used: The authors are not aware of any affiliations, memberships, funding, or financial holdings that
might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
Acknowledgements, general annotations, funding.
\section*{LITERATURE\ CITED}
\noindent
Please see the Style Guide document for instructions on preparing your Literature Cited.
The citations should be listed in alphabetical order, with titles. For example:
\section{Introduction}
The source GRS 1758--258 was discovered in the hard X--ray/soft
$\gamma$--ray energy range with the SIGMA/GRANAT coded mask
telescope (Sunyaev et al. 1991). GRS 1758--258 is of particular
interest since, together with the more famous source
1E 1740.7--2942, it is the only persistent hard X--ray emitter ($E>100$ keV) in
the vicinity of the Galactic Center (Goldwurm et al. 1994). Both
sources have peculiar radio counterparts with relativistic jets
(Mirabel et al. 1992a; Rodriguez, Mirabel \& Mart\'i 1992; Mirabel
1994) and might be related to the 511 keV line observed from
the Galactic Center direction (Bouchet et al. 1991). Despite the
precise localization obtained at radio wavelengths, an optical
counterpart of GRS 1758--258 has not been identified
(Mereghetti et al. 1992; Mereghetti, Belloni \& Goldwurm 1994a).
Simultaneous ROSAT and SIGMA observations, obtained in the
Spring of 1993, indicated the presence of a soft excess
(Mereghetti, Belloni \& Goldwurm 1994b). This spectral
component ($E<2$ keV) was weaker in 1990, when the hard X--ray
flux ($E>40$ keV) was in its highest observed state. On the basis of
its hard X--ray spectrum, GRS 1758--258 is generally considered
a black hole candidate (Tanaka \& Lewin 1995; Stella et al.
1995). The possible evidence for a soft spectral component
anticorrelated with the intensity of the hard ($>40$ keV) emission
supports this interpretation.
No detailed studies of GRS 1758--258 in the "classical" X--ray
range have been performed so far. Here we report the first
observations of this source obtained with an imaging instrument
in the $0.5-10$ keV energy range.
\section{Data Analysis and Results}
The observation of GRS 1758--258 took place between 1995
March 29 22:39 UT and March 30 15:38 UT. The ASCA satellite
(Tanaka, Inoue \& Holt 1994) provides simultaneous data in four
coaligned telescopes, equipped with two solid state detectors
(SIS0 and SIS1) and two gas scintillation proportional counters
(GIS2 and GIS3).
We applied stringent screening criteria to reject periods of high
background, and eliminated all the time intervals with the
bright earth within 40 degrees of the pointing direction for the
SIS data (10 degrees for the GIS), resulting in the net exposure
times given in Table 1.
\begin{tabular}{|l|c|c|}
\hline
\multicolumn{3}{|c|}{TABLE 1}\\
\hline
\hline
&Exposure Time (s)&Count Rate (counts/s)\\
SIS0&9,471&5.310$\pm$0.024\\
SIS1&9,455&4.220$\pm$0.022\\
GIS2&12,717&4.507$\pm$0.022\\
GIS3&11,949&5.155$\pm$0.025\\
\hline
\end{tabular}
\subsection{GIS Data}
Figure 1 shows the image obtained with the GIS2 detector. Most
of the detector area is covered by stray light due to the bright
source GX5--1, located outside the field of view, at an off--axis
angle of about 40 arcmin. Fortunately, GRS 1758--258 lies in a
relatively unaffected region of the detector, which allows us to
estimate the contamination from GX5--1 as explained below.
The source counts were extracted from a circle of 6 arcmin
radius centered at the position of GRS 1758--258, and rebinned
in order to have a minimum of 25 counts in each energy
channel. Due to the present uncertainties in the ASCA response
at low energies, we only considered photons in the 0.8--10 keV
range. The background spectrum was extracted from the
corresponding regions of observations of empty fields provided
by the ASCA Guest Observer Facility. The contribution to the
background due to GX5--1 is mostly concentrated in a circular
segment with area $\sim$36 arcmin$^2$ indicated with A in figure
1. Its spectrum was estimated by the difference of regions A
and B, and added to the background. A similar procedure was
followed to extract the GIS3 net spectrum.
Using XSPEC (Version 9.0) we explored several spectral models
by simultaneously fitting the data sets of both GIS instruments.
The best fit was obtained with a power law with photon index
$1.66\pm 0.03$ and column density $N_H=(1.42\pm 0.04)\times 10^{22}$ cm$^{-2}$
(reduced $\chi^2= 1.013$ for 372 d.o.f., errors at 90\% confidence
intervals for a single interesting parameter). Other models based
on a single spectral component (e.g. blackbody, thermal
bremsstrahlung) gave unacceptable results, with the exception
of the Comptonized disk model of Sunyaev \& Titarchuk (1980).
However, in the latter case the limited energy range of the ASCA
data alone does not allow us to place interesting constraints on the
fit parameters.
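For clarity, this best fit corresponds to the standard absorbed power-law form
\[
N(E) \propto E^{-\Gamma}\, e^{-N_H\,\sigma(E)} ,
\]
where $\sigma(E)$ is the photoelectric absorption cross section per hydrogen atom, with the photon index $\Gamma$ and column density $N_H$ given above.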
The GIS instruments have a time resolution of 0.5 s or 62.5 ms,
according to the available telemetry rate. Most of our data had
the higher time resolution. Using a Fourier transform technique,
and after correcting the photon arrival times to the solar system
barycenter, we performed a search for periodicities. No coherent
pulsations in the 0.125--1000 s period range were found. For the
hypothesis of a sinusoidal modulation we can set an upper limit
of $\sim$5\% to the pulsed fraction.
\subsection{SIS Data}
Both SIS instruments were operated in the single chip mode,
which gives a time resolution of 4 s and images of a square
$11\times 11$ arcmin$^2$ region (Figure 2). Most of the SIS data (83\%)
were acquired in "faint mode" and then converted to "bright".
This allows us to minimize the errors due to the echo effects in the
analog electronics and to the uncertainties in the dark frame
value (Otani \& Dotani 1994). The inclusion of the data directly
acquired in bright mode resulted in spectra of lower quality
(significant residuals in the 2--3 keV region). We therefore
concentrated the spectral analysis on the faint mode data. The
source counts (0.6--10 keV) were extracted from circles with a
radius of 3 arcmin, and the resulting energy spectra (1024 PI
energy channels) rebinned in order to have at least 25 counts in
each bin. We subtracted a background spectrum derived during
our observation from an apparently source free region of the
CCDs (see figure 2). This background is higher than that obtained
from the standard observations of empty fields, probably due to
the contamination from GX5--1. It contributes $\sim$4\% of the
extracted counts. We verified that the derived spectral
parameters do not change significantly if we use the blank sky
background file, or even if we completely neglect the
background subtraction.
By fitting together the data from the two SIS we obtained
results similar to those derived with the GIS instruments. In
particular, a power law spectrum gives photon index $\alpha=1.70\pm 0.03$
and $N_H=(1.55\pm 0.03) \times 10^{22}$ cm$^{-2}$, with a reduced $\chi^2$ of
1.031 (872 d.o.f.).
No prominent emission lines are visible in the spectrum of GRS
1758--258 (as already mentioned, some features in the region
around 2 keV are probably due to instrumental problems; they
appear stronger when the bright mode data and the
corresponding response matrix are used). Upper limits on the
possible presence of an iron line were computed by adding a
gaussian line centered at 6.4 keV to the best fit power law
model and varying its parameters (intensity and width) until an
unacceptable increase in the $\chi^2$ was obtained. The 95\% upper
limit on the equivalent width is $\sim$50 eV for a line width of
$\sigma=0.1$ keV and increases for wider lines (up to $\sim$110 eV for
$\sigma=0.5$ keV).
Also in the case of the SIS, a search for periodicities (limited to
period greater than 8 s) resulted only in upper limits similar to
the GIS ones.
\section{Discussion}
The soft X--ray flux observed with ROSAT in 1993 (Mereghetti et
al. 1994b) was higher than that expected from the extrapolation
of the quasi--simultaneous SIGMA measurement ($E>40$ keV),
indicating the presence of a soft spectral component with power
law photon index $\sim$3 below $\sim$2 keV. Clearly, such a
steep, low--energy component is not visible in the present ASCA
data, which are well described by a single flat power law. The
corresponding flux of $4.8\times 10^{-10}$ ergs cm$^{-2}$ s$^{-1}$
(in the 1--10 keV
band, corrected for the absorption) is within the range of values
measured in March--April 1990 (Sunyaev et al. 1991), when the
source was in its highest observed state. This fact is consistent
with the presence of a prominent soft component only when the
hard X--ray flux is at a lower intensity level.
Though a single power law provides an acceptable fit to the
ASCA data, we also explored spectral models consisting of two
different components: a soft thermal emission plus a hard tail.
For instance, with a blackbody plus power law, we obtained a
good fit to both the SIS and GIS data with $kT\sim 0.4-0.5$ keV
and photon index $\sim 1.4-1.5$ ($\chi^2 \simeq 0.98$). Obviously
such a power law must steepen at higher energy to be
consistent with the SIGMA observations. In fact a Sunyaev--Titarchuk
Comptonization model can equally well fit the ASCA
hard tail and provide an adequate spectral steepening to match
the high energy data (see Figure 3). Good results were also
obtained when the soft thermal component was fitted with
models of emission from accretion disks (e.g. Makishima et al.
1986, Stella \& Rosner 1984). In all cases the total flux in the soft
component amounts only to a few percent of the overall (0.1--
300 keV) luminosity. However, the low observed flux, coupled
to the high accretion rates required by the fitted temperatures,
implies an implausibly large distance for GRS 1758--258 and/or
very high inclination angles (note that there is no evidence so
far of eclipses or periodic absorption dips which could hint at a
high-inclination system). A possible alternative solution is to
invoke a significant dilution of the optically thick soft
component by Comptonization in a hot corona. A very rough
estimate shows that, in order to effectively remove photons
from the thermal distribution, a scattering opacity of
$\tau_{es}\sim 2-5$ is required.
Our ASCA observation provides the most accurate measurement
of the absorption toward GRS 1758--258 obtained so far.
Obviously the derived value is slightly dependent on the
adopted spectral model. However, values within at most 10\% of
$1.5\times 10^{22}$ cm$^{-2}$ were obtained for all the models (one or two
components) fitting the data. This column density is consistent
with a distance of the order of the Galactic center and similar to
that of other sources in the galactic bulge (Kawai et al. 1988),
but definitely smaller than that observed with ASCA in 1E
1740.7--2942 (Churazov et al. 1996).
The information on the galactic column density, coupled to the
optical/IR data, can yield some constraints on the possible
companion star of GRS 1758--258 (see Chen, Gehrels \& Leventhal
1994). A candidate counterpart with $I\sim$19 and $K\sim$17
(Mereghetti et al. 1994a) lies within $\sim$2" of the best radio
position (Mirabel et al. 1992b). Other infrared sources present in
the X--ray error circle (10" radius) are fainter than $K\sim 17$
(Mirabel \& Duc 1992). Using an average relation between $N_H$
and optical reddening (Gorenstein 1975), we estimate a value of
$A_V\sim 7$, corresponding to less than one magnitude of
absorption in the K band (Cardelli, Clayton \& Mathis 1989).
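For example, adopting a conversion of order $N_H \simeq 2.2\times 10^{21}\,A_V$ cm$^{-2}$, the fitted column density gives
\[
A_V \simeq \frac{1.5\times 10^{22}}{2.2\times 10^{21}} \simeq 7 ,
\]
the exact value depending on the adopted gas-to-dust relation.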
Thus, for a distance of the order of 10 kpc, the K band absolute
magnitude must be fainter than $M_K\sim 1$. This limit clearly
rules out supergiant or giant companion stars, as well as main
sequence stars earlier than type A (Johnson 1966), thus
excluding the possibility that GRS 1758--258 is in a high mass
binary system.
The flux of GRS 1758--258 measured with the SIS instruments
corresponds to a 1--10 keV luminosity of $4.5\times 10^{36}$
ergs s$^{-1}$ (for a distance of 10 kpc). A reanalysis of archival data from
TTM/MIR, XRT/Spacelab--2 and EXOSAT (Skinner 1991), showed
that GRS 1758--258 had a similar intensity also in 1985 and in
1989. An earlier discovery had been prevented only by
confusion problems with GX5--1, which is much brighter than
GRS 1758--258 below $\sim$20 keV. Subsequent hard X--ray observations
with SIGMA (Gilfanov et al. 1993, Goldwurm et al. 1994)
repeatedly detected GRS 1758--258 with a hard spectrum
extending up to $\sim$300 keV. It is therefore clear that GRS
1758--258, though variable by a factor of $\sim$10 on a
timescale of months, is not a transient source.
\section{Conclusions}
The ASCA satellite has provided the first detailed data on GRS
1758--258 in the 1--10 keV region, allowing us to minimize the
confusion problems caused by the vicinity of GX5--1, that
affected previous observations with non imaging instruments.
The possible black hole nature of GRS 1758--258, inferred from
the high energy data (Sunyaev et al. 1991, Goldwurm et al.
1994), is supported by the ASCA results. The power law
spectrum, extending up to the hard X--ray domain, is similar to
that of Cyg X--1 and other black hole candidates in their low (or
hard) state. Furthermore, our stringent limits on the presence of
periodic pulsations and accurate measurement of interstellar
absorption make the possibility of a neutron star accreting from
a massive companion very unlikely. The lack of iron emission
lines in the SIS data has to be confirmed by more stringent
upper limits to rule out, e.g., the presence of a reflection
component as proposed for Cyg X--1 (Done et al. 1992). For
comparison, the iron line recently observed with ASCA in Cyg X--
1 has an equivalent width of only 10--30 eV (Ebisawa et al.
1996).
The prominent soft excess observed with ROSAT in 1993, when
the hard X--ray flux was in a lower intensity state, was absent
during our observation. The source was in a hard spectral state,
with a possible soft component accounting for $\sim$5\% of the
total luminosity at most. A similar soft component
($kT\sim 0.14$ keV), but contributing a larger fraction of the
flux, has been observed in Cyg X--1 and attributed to emission
from the accretion disk (Balucinska--Church et al. 1995). If the
soft component in GRS 1758--258 originates from the disk,
strong dilution is required. An optically thick hot cloud
embedding the innermost part of the disk is an attractive
hypothesis. To test the viability of this model, a detailed fit to
simultaneous data over a broad energy range, as available, e.g.,
with SAX in the near future, is required.
\clearpage
\section*{Figure Captions\markboth
{FIGURECAPTIONS}{FIGURECAPTIONS}}\list
{Figure \arabic{enumi}:\hfill}{\settowidth\labelwidth{Figure
999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endfigcap\endlist \relax
\def\tablecap{\section*{Table Captions\markboth
{TABLECAPTIONS}{TABLECAPTIONS}}\list
{Table \arabic{enumi}:\hfill}{\settowidth\labelwidth{Table
999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endtablecap\endlist \relax
\def\reflist{\section*{References\markboth
{REFLIST}{REFLIST}}\list
{[\arabic{enumi}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endreflist\endlist \relax
\def\list{}{\rightmargin\leftmargin}\item[]{\list{}{\rightmargin\leftmargin}\item[]}
\let\endquote=\endlist
\makeatletter
\newcounter{pubctr}
\def\@ifnextchar[{\@publist}{\@@publist}{\@ifnextchar[{\@publist}{\@@publist}}
\def\@publist[#1]{\list
{[\arabic{pubctr}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\@nmbrlisttrue\def\@listctr{pubctr}
\setcounter{pubctr}{#1}\addtocounter{pubctr}{-1}}}
\def\@@publist{\list
{[\arabic{pubctr}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\@nmbrlisttrue\def\@listctr{pubctr}}}
\let\endpublist\endlist \relax
\makeatother
\newskip\humongous \humongous=0pt plus 1000pt minus 1000pt
\def\mathsurround=0pt{\mathsurround=0pt}
\def\eqalign#1{\,\vcenter{\openup1\jot \mathsurround=0pt
\ialign{\strut \hfil$\displaystyle{##}$&$
\displaystyle{{}##}$\hfil\crcr#1\crcr}}\,}
\newif\ifdtup
\def\panorama{\global\dtuptrue \openup1\jot \mathsurround=0pt
\everycr{\noalign{\ifdtup \global\dtupfalse
\vskip-\lineskiplimit \vskip\normallineskiplimit
\else \penalty\interdisplaylinepenalty \fi}}}
\def\eqalignno#1{\panorama \tabskip=\humongous
\halign to\displaywidth{\hfil$\displaystyle{##}$
\tabskip=0pt&$\displaystyle{{}##}$\hfil
\tabskip=\humongous&\llap{$##$}\tabskip=0pt
\crcr#1\crcr}}
\relax
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\bar{\partial}{\bar{\partial}}
\def\bar{J}{\bar{J}}
\def\partial{\partial}
\def f_{,i} { f_{,i} }
\def F_{,i} { F_{,i} }
\def f_{,u} { f_{,u} }
\def f_{,v} { f_{,v} }
\def F_{,u} { F_{,u} }
\def F_{,v} { F_{,v} }
\def A_{,u} { A_{,u} }
\def A_{,v} { A_{,v} }
\def g_{,u} { g_{,u} }
\def g_{,v} { g_{,v} }
\def\kappa{\kappa}
\def\rho{\rho}
\def\alpha{\alpha}
\def {\bar A} {\Alpha}
\def\beta{\beta}
\def\Beta{\Beta}
\def\gamma{\gamma}
\def\Gamma{\Gamma}
\def\delta{\delta}
\def\Delta{\Delta}
\def\epsilon{\epsilon}
\def\Epsilon{\Epsilon}
\def\p{\pi}
\def\Pi{\Pi}
\def\chi{\chi}
\def\Chi{\Chi}
\def\theta{\theta}
\def\Theta{\Theta}
\def\mu{\mu}
\def\nu{\nu}
\def\omega{\omega}
\def\Omega{\Omega}
\def\lambda{\lambda}
\def\Lambda{\Lambda}
\def\s{\sigma}
\def\Sigma{\Sigma}
\def\varphi{\varphi}
\def{\cal M}{{\cal M}}
\def\tilde V{\tilde V}
\def{\cal V}{{\cal V}}
\def\tilde{\cal V}{\tilde{\cal V}}
\def{\cal L}{{\cal L}}
\def{\cal R}{{\cal R}}
\def{\cal A}{{\cal A}}
\defSchwarzschild {Schwarzschild}
\defReissner-Nordstr\"om {Reissner-Nordstr\"om}
\defChristoffel {Christoffel}
\defMinkowski {Minkowski}
\def\bigskip{\bigskip}
\def\noindent{\noindent}
\def\hfill\break{\hfill\break}
\def\qquad{\qquad}
\def\bigl{\bigl}
\def\bigr{\bigr}
\def\overline\del{\overline\partial}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def Nucl. Phys. { Nucl. Phys. }
\def Phys. Lett. { Phys. Lett. }
\def Mod. Phys. Lett. { Mod. Phys. Lett. }
\def Phys. Rev. Lett. { Phys. Rev. Lett. }
\def Phys. Rev. { Phys. Rev. }
\def Ann. Phys. { Ann. Phys. }
\def Commun. Math. Phys. { Commun. Math. Phys. }
\def Int. J. Mod. Phys. { Int. J. Mod. Phys. }
\def\partial_+{\partial_+}
\def\partial_-{\partial_-}
\def\partial_{\pm}{\partial_{\pm}}
\def\partial_{\mp}{\partial_{\mp}}
\def\partial_{\tau}{\partial_{\tau}}
\def \bar \del {\bar \partial}
\def {\bar h} { {\bar h} }
\def \bphi { {\bar \phi} }
\def {\bar z} { {\bar z} }
\def {\bar A} { {\bar A} }
\def {\tilde {A }} { {\tilde {A }}}
\def {\tilde {\A }} { {\tilde { {\bar A} }}}
\def {\bar J} {{\bar J} }
\def {\tilde {J }} { {\tilde {J }}}
\def {1\over 2} {{1\over 2}}
\def {1\over 3} {{1\over 3}}
\def \over {\over}
\def\int_{\Sigma} d^2 z{\int_{\Sigma} d^2 z}
\def{\rm diag}{{\rm diag}}
\def{\rm const.}{{\rm const.}}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def$SL(2,\IR)\otimes SO(1,1)^{d-2}/SO(1,1)${$SL(2,\relax{\rm I\kern-.18em R})\otimes SO(1,1)^{d-2}/SO(1,1)$}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}${$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}$}
\def$SO(d-1,2)/ SO(d-1,1)${$SO(d-1,2)/ SO(d-1,1)$}
\def\ghc{ G^c_h }
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def$SL(2,\IR)\otimes SO(1,1)^{d-2}/SO(1,1)${$SL(2,\relax{\rm I\kern-.18em R})\otimes SO(1,1)^{d-2}/SO(1,1)$}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}${$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}$}
\def$SO(d-1,2)/ SO(d-1,1)${$SO(d-1,2)/ SO(d-1,1)$}
\def\ghc{ G^c_h }
\def{\cal M}{{\cal M}}
\def\tilde V{\tilde V}
\def{\cal V}{{\cal V}}
\def\tilde{\cal V}{\tilde{\cal V}}
\def{\cal L}{{\cal L}}
\def{\cal R}{{\cal R}}
\def{\cal A}{{\cal A}}
\begin{document}
\renewcommand{\thesection.\arabic{equation}}}{\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\eeq}[1]{\label{#1}\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\eer}[1]{\label{#1}\end{eqnarray}}
\newcommand{\eqn}[1]{(\ref{#1})}
\begin{titlepage}
\begin{center}
\hfill THU--96/33\\
\hfill September 1996\\
\hfill hep--th/9609165\\
\vskip .8in
{\large \bf COSET MODELS AND DIFFERENTIAL GEOMETRY
\footnote{Contribution to the proceedings of the
{\em Conference on Gauge Theories,
Applied Supersymmetry and Quantum Gravity}, Imperial College, London,
5-10 July 1996 and the e--proceedings
of Summer 96 Theory Institute, {\em Topics in Non-Abelian Duality},
Argonne, IL, 27 June - 12 July 1996. }}
\vskip 0.6in
{\bf Konstadinos Sfetsos
\footnote{e--mail address: sfetsos@fys.ruu.nl}}\\
\vskip .1in
{\em Institute for Theoretical Physics, Utrecht University\\
Princetonplein 5, TA 3508, The Netherlands}\\
\vskip .2in
\end{center}
\vskip .6in
\begin{center} {\bf ABSTRACT } \end{center}
\begin{quotation}\noindent
\noindent
String propagation on a
curved background defines an embedding problem of surfaces
in differential geometry.
Using this, we show that in a wide class of backgrounds the
classical dynamics of the physical degrees of freedom of the string
involves 2--dim
$\s$--models corresponding to coset conformal field theories.
\vskip .2in
\noindent
\end{quotation}
\end{titlepage}
\def1.2{1.2}
\baselineskip 16 pt
\noindent
Coset models have been used in string theory
for the construction of classical vacua,
either as internal theories in string compactification or as
exact conformal field theories representing curved spacetimes.
Our primary aim in this note, based on \cite{basfe}, is to reveal their
usefulness in a different context by
demonstrating that certain
classical aspects of constraint systems are governed by 2--dim
$\s$--models corresponding to some specific
coset conformal field theories.
In particular, we will examine string propagation
on arbitrary curved backgrounds with Lorentzian signature which
defines an embedding problem in differential geometry,
as it was first shown for 4--dim Minkowski space by Lund and
Regge \cite{LuRe}.
Choosing, whenever possible, the temporal gauge one may solve the Virasoro
constraints and hence be left with $D-2$
coupled non--linear differential equations governing the dynamics of
the physical degrees of freedom of the string.
By exploring their integrability properties, and considering
as our Lorentzian
background $D$--dim Minkowski space or the product form
$R\otimes K_{D-1}$, where $K_{D-1}$ is any WZW model
for a semi--simple compact group,
we will establish
a connection with the coset model conformal field theories
$SO(D-1)/SO(D-2)$.
This universal behavior, irrespective of the particular WZW model
$K_{D-1}$, is rather remarkable,
and sheds
new light on the differential geometry of embedding surfaces using
concepts and field variables which so far have been natural
only in conformal field theory.
Let us consider classical propagation of closed strings on a
$D$--dim background that is
the direct product of the real line $R$ (contributing a minus
in the signature matrix)
and a general manifold (with Euclidean signature) $K_{D-1}$.
We will denote $\s^\pm= {1\over 2}(\tau\pm \s)$, where
$\tau$ and $\s$ are the natural time and spatial variables
on the world--sheet $\Sigma$.
Then,
the 2--dim $\s$--model action is given by
\begin{equation}
S= {1\over 2} \int_\Sigma (G_{\mu\nu} + B_{\mu\nu}) \partial_+ y^\mu \partial_- y^\nu
- \partial_+ y^0 \partial_- y^0 ~ , ~~~~~~~~ \mu,\nu =1,\dots , D-1~ ,
\label{smoac}
\end{equation}
where $G$, $B$ are the non--trivial metric
and antisymmetric tensor fields and
are independent of $y^0$.
The conformal gauge, we have implicitly chosen in writing
down (\ref{smoac}),
allows us to further set $y^0=\tau$ (temporal gauge).
Then we are left with the $D-1$ equations
of motion corresponding to the $y^\mu$'s,
as well as with the Virasoro constraints
\begin{equation}
G_{\mu\nu} \partial_\pm y^\mu \partial_\pm y^\nu = 1 ~ ,
\label{cooss}
\end{equation}
which can be used to further reduce
the degrees of freedom by one, thus leaving only the
$D-2$ physical ones.
We also define an angular variable $\theta$ via the relation
\begin{equation}
G_{\mu\nu} \partial_+ y^\mu \partial_- y^\nu = \cos \theta ~ .
\label{angu}
\end{equation}
In the temporal gauge we may restrict our analysis
entirely on $K_{D-1}$ and on
the projection of the string world--sheet $\Sigma$ on the
$y^0=\tau$ hyperplane. The resulting 2--dim surface
$S$ has Euclidean signature with metric given by
the metric $G_{\mu\nu}$ on $K_{D-1}$ restricted on $S$.
Using (\ref{cooss}), (\ref{angu}) we find that the
corresponding line element reads
\begin{equation}
ds^2 = d{\s^+}^2 + d{\s^-}^2 + 2 \cos\theta d\s^+ d\s^- ~ .
\label{dsS2}
\end{equation}
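To see this explicitly, note that on $S$ one has
$dy^\mu = \partial_+ y^\mu d\s^+ + \partial_- y^\mu d\s^-$, so that the induced line
element is
\begin{equation}
ds^2 = G_{\mu\nu} dy^\mu dy^\nu =
G_{\mu\nu} \partial_+ y^\mu \partial_+ y^\nu (d\s^+)^2
+ G_{\mu\nu} \partial_- y^\mu \partial_- y^\nu (d\s^-)^2
+ 2 G_{\mu\nu} \partial_+ y^\mu \partial_- y^\nu d\s^+ d\s^- ~ ,
\end{equation}
whose three coefficients reduce to $1$, $1$ and $2\cos\theta$ by virtue of
(\ref{cooss}) and (\ref{angu}).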
In general, determining the classical
evolution of the string is equivalent to
the problem of determining the 2--dim surface
that it forms as it moves.
Phrased in purely geometrical terms this is equivalent,
in our case, to
the embedding problem of the 2--dim surface $S$
with metric (\ref{dsS2}) into the $(D-1)$--dim space
$K_{D-1}$. The solution requires that a complete set
of $D-1$ vectors tangent and normal to the surface $S$ as functions
of $\s_+$ and $\s_-$ is found.
In our case the 2 natural tangent vectors are
$\{\partial_+ y^\mu, \partial_- y^\mu\}$,
whereas the remaining $D-3$ normal ones will be denoted by
$\{\xi^\mu_\s, \s=3,4,\dots, D-1\}$.
These vectors obey first order partial
differential equations \cite{Eisenhart} that depend, as expected,
on the detailed structure of
$K_{D-1}$. Since we are only interested
in some universal aspects we will restrict ourselves solely to the
corresponding \underline{compatibility} equations.
In general, these involve the
Riemann curvatures for the metrics of the two spaces
$S$ and $K_{D-1}$, as well as the second
fundamental form with components $\Omega^\s_{\pm\pm}$,
$\Omega^\s_{+-}=\Omega^\s_{-+}$ and the third
fundamental form ($\equiv$ torsion) with components
$\mu^{\s\tau}_\pm =-\mu^{\tau\s}_\pm$ \cite{Eisenhart}. It turns out
that the $D-1$ classical equations of motion for \eqn{smoac}
(in the gauge $y^0 = \tau$) and the two
constraints (\ref{cooss}) completely determine the components
of the second fundamental form $\Omega^\s_{+-}$ \cite{basfe}.
In what follows we will also use instead of $\mu_\pm^{\s\tau}$
a modified, by a term that
involves $H_{\mu\nu\rho}=\partial_{[\mu}B_{\nu\rho]}$,
torsion $M_\pm^{\s\tau}$ \cite{basfe}.
Then the compatibility equations
for the remaining components $\Omega^\s_{\pm\pm}$ and
$M_\pm^{\s\tau}$ are \cite{basfe}:
\begin{eqnarray}
&& \Omega^\tau_{++} \Omega^\tau_{--} + \sin\theta \partial_+ \partial_- \theta
= - R^+_{\mu\nu\alpha\beta}
\partial_+ y^\mu \partial_+ y^\alpha \partial_- y^\nu \partial_- y^\beta ~ ,
\label{gc1} \\
&& \partial_{\mp} \Omega^\s_{\pm\pm} - M_\mp^{\tau\s} \Omega^\tau_{\pm\pm}
-{1\over \sin\theta} \partial_\pm\theta \Omega^\s_{\mp\mp}
= R^\mp_{\mu\nu\alpha\beta}
\partial_\pm y^\mu \partial_\pm y^\alpha \partial_\mp y^\beta \xi^\nu_\s ~ ,
\label{gc2} \\
&& \partial_+ M_-^{\s\tau} - \partial_- M_+^{\s\tau}
- M_-^{\rho[\s} M_+^{\tau]\rho}
+ {\cos\theta \over \sin^2\theta} \Omega^{[\s}_{++} \Omega^{\tau]}_{--}
= R^-_{\mu [\beta \alpha]\nu}
\partial_+ y^\mu \partial_- y^\nu \xi^\alpha_\s \xi^\beta_\tau ~ ,
\label{gc3}
\end{eqnarray}
where the curvature tensors and the covariant derivatives $D^\pm_\mu$
are defined using the generalized
connections that include the string torsion
$H_{\mu\nu\rho}$.\footnote{We have written \eqn{gc3}
in a slightly different form compared to the same equation in \cite{basfe}
using the identity $D^-_\mu H_{\nu\alpha\beta} = R^-_{\mu[\nu\alpha\beta]}$.}
Equations (\ref{gc1})--\eqn{gc3}
are generalizations of the
Gauss--Codazzi and Ricci equations for a surface
immersed in Euclidean space.
For $D\geq 5$ there are
${1\over 2} (D-3)(D-4)$ more unknown functions ($\theta$, $\Omega^\s_{\pm\pm}$
and $M_\pm^{\s\tau}$) than equations in \eqn{gc1}--\eqn{gc3}.
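Explicitly, the unknowns are $\theta$ (one function), $\Omega^\s_{\pm\pm}$
($2(D-3)$ functions) and the antisymmetric $M_\pm^{\s\tau}$ ($(D-3)(D-4)$ functions),
whereas \eqn{gc1}, \eqn{gc2} and \eqn{gc3} provide $1$, $2(D-3)$ and
${1\over 2} (D-3)(D-4)$ equations respectively, which accounts for the mismatch of
${1\over 2} (D-3)(D-4)$.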
However, there is an underlying gauge invariance \cite{basfe}
which accounts for the extra (gauge) degrees of freedom
and can be used to eliminate them (gauge fix).
Making further progress with
the embedding system of equations (\ref{gc1})--(\ref{gc3})
as it stands seems a difficult task. This is
due to the presence of terms
depending explicitly on $\partial_\pm y^\mu$ and $\xi^\mu_\s$,
which can only be determined by solving the
actual string evolution equations.
Moreover, a Lagrangian from which
(\ref{gc1})--(\ref{gc3}) can be derived as equations of
motion is also lacking. Having such a description is advantageous in
determining the operator content of the theory and for quantization.
Rather remarkably, all of these problems can be simultaneously
solved by considering for $K_{D-1}$ either flat space with zero torsion or
any WZW model based on a
semi--simple compact group $G$, with $\dim(G)=D-1$.
This is due to the identity
\begin{equation}
R^\pm_{\mu\nu\alpha\beta} = 0 ~ ,
\label{rdho}
\end{equation}
which is valid not only for flat space with zero torsion but also
for all WZW models \cite{zachos}.
Then we completely get rid of the bothersome terms on the right
hand side of (\ref{gc1})--(\ref{gc3}).\footnote{Actually, the same
result is obtained by demanding the weaker condition
$R^-_{\mu\nu\alpha\beta}=R^-_{\mu\alpha\nu\beta}$, but we are not aware of any examples
where these weaker conditions hold.}
It is convenient to
extend the range of definition of
$\Omega^\s_{++}$ and $M_\pm^{\s\tau}$ by appending new components
defined as: $\Omega^2_{++}= \partial_+ \theta$,
$M_+^{\s 2}= \cot \theta \Omega^\s_{++}$ and
$M_-^{\s2} = - \Omega^\s_{--}/\sin\theta$.
Then equations (\ref{gc1})--(\ref{gc3}) can be recast into the
suggestive form
\begin{eqnarray}
&& \partial_- \Omega^i_{++} + M_-^{ij} \Omega^j_{++} = 0 ~ ,
\label{new1} \\
&& \partial_+ M_-^{ij} - \partial_- M_+^{ij} + [M_+,M_-]^{ij} = 0 ~ ,
\label{new2}
\end{eqnarray}
where the new index $i=(2,\s)$.
Equation (\ref{new2}) is a
zero curvature condition for the matrices $M_\pm$ and it is locally
solved by $M_\pm = \Lambda^{-1} \partial_\pm \Lambda$,
where $\Lambda \in SO(D-2)$. Then (\ref{new1}) can be cast into
equations for $Y^i=\Lambda^{i2} \sin \theta$ \cite{basfe}
\begin{equation}
\partial_- \left( {\partial_+ Y^i \over \sqrt{1-\vec Y^2}} \right) = 0~ ,
~~~~~ i = 2,3,\dots ,D-1 ~ .
\label{fiin}
\end{equation}
These equations were derived before in \cite{barba}, while
describing
the dynamics of a free string propagating in $D$--dimensional
{\it flat} space--time. It is remarkable that they remain
unchanged even if the flat $(D-1)$--dim space--like part is replaced
by a curved background corresponding to a general WZW model.
Nevertheless, it should be emphasized that
the actual evolution equations of the normal and tangent
vectors to the surface are certainly different from those
of the flat space free string and can be found in \cite{basfe}.
As we have already mentioned, it would be advantageous if
(\ref{fiin}) (or an equivalent system) could be derived
as classical equations of motion for a 2--dim action of the
form
\begin{equation}
S = {1\over 2\pi \alpha'} \int (g_{ij} + b_{ij})
\partial_+ x^i \partial_- x^j ~ , ~~~~~~~~ i,j = 1,2,\dots, D-2 ~ .
\label{dynsm}
\end{equation}
The above action has a $(D-2)$--dim target space and models only
the non--trivial dynamics of the physical degrees
of freedom of the
string, which itself
propagates on the $D$--dim background corresponding to \eqn{smoac}.
The construction of such an action involves
a non--local change
of variables and is based on
the observation \cite{basfe} that (\ref{fiin})
imply chiral conservation laws, which
are the same as the
equations obeyed by the classical
parafermions for the coset model $SO(D-1)/SO(D-2)$ \cite{BSthree}.
We recall that the classical $\s$--model action
corresponding to a coset $G/H$ is derived from the associated
gauged WZW model and the result is given by
\begin{equation}
S= I_0(g) + {1\over \pi \alpha'} \int
{\rm Tr}(t^a g^{-1} \partial_+ g) M^{-1}_{ab} {\rm Tr}
(t^a \partial_- g g^{-1}) ~ , ~~~~
M^{ab} \equiv {\rm Tr}(t^a g t^b g^{-1}- t^a t^b) ~ ,
\label{dualsmo}
\end{equation}
where $I_0(g)$ is the WZW action for a group element $g\in G$ and
$\{t^A\}$ are representation matrices of the Lie algebra for
$G$ with indices split as $A=(a,\alpha)$, where $a\in H$
and $\alpha\in G/H$.
We have also assumed that a unitary gauge has been chosen
by fixing $\dim(H)$
variables among the total number of $\dim(G)$ parameters
of the group element $g$. Hence, there are
$\dim(G/H)$ remaining variables, which will be denoted by $x^i$.
The natural objects generating infinite dimensional symmetries
in the background \eqn{dualsmo} are the classical parafermions
(we restrict to one chiral sector only) defined in general as \cite{BCR}
\begin{equation}
\Psi_+^\alpha = {i \over \pi \alpha'} {\rm Tr} (t^\alpha f^{-1} \partial_+ f ) ~ ,
~~~~~~~~~ f\equiv h_+^{-1} g h_+ \in G ~ ,
\label{paraf}
\end{equation}
and obeying on shell $\partial_- \Psi_+^\alpha = 0 $.
The group element $h_+\in H$ is given as a path order exponential
using the on shell value of the gauge field $A_+$
\begin{equation}
h_+^{-1} = {\rm P} e^{- \int^{\s^+} A_+}~ , ~~~~~~~~
A_+^a = M^{-1}_{ba} {\rm Tr} (t^b g^{-1}\partial_+ g) ~ .
\label{hphm}
\end{equation}
Next we specialize to the $SO(D-1)/SO(D-2)$ gauged WZW models.
In this case the index $a=(ij)$ and the index $\alpha=(0i)$ with
$i=1,2,\dots , D-2$. Then the
parafermions \eqn{paraf} assume the
form (we drop $+$ as a subscript) \cite{BSthree,basfe}
\begin{eqnarray}
&& \Psi^i = {i\over \pi \alpha'}
{\partial_+ Y^i\over \sqrt{1-\vec Y^2}} =
{i \over \pi \alpha'} {1\over \sqrt{1-\vec X^2}} (D_+X)^j h_+^{ji} ~ ,
\nonumber \\
&& (D_+X)^j = \partial_+ X^j - A_+^{jk} X^k ~ , ~~~~~~
Y^i = X^j (h_+)^{ji}~ .
\label{equff}
\end{eqnarray}
Thus, equation $\partial_- \Psi^i = 0$ is
precisely (\ref{fiin}), whereas \eqn{dualsmo}
provides the action \eqn{dynsm} to our embedding problem.
The relation between the $X^i$'s and the $Y^i$'s in \eqn{equff}
provides
the necessary non--local change of variables that transforms
(\ref{fiin}) into a Lagrangian system of equations.
It is highly non--intuitive
in differential geometry, and only
the correspondence with parafermions makes it natural.
It remains to conveniently parametrize the group element
$g\in SO(D-1)$. In the right coset decomposition with respect to
the subgroup $SO(D-2)$ we may write \cite{BSthree}
\begin{equation}
g = \left( \begin{array} {cc}
1 & 0 \\
& \\
0 & h \\
\end{array}
\right) \cdot
\left( \begin{array} {cc}
b & X^j \\
& \\
- X^i & \delta_{ij} - {1\over b+1} X^i X^j \\
\end{array}\right) ~ ,
\label{H}
\end{equation}
where $h\in SO(D-2)$ and $b \equiv \sqrt{1-\vec X^2}$.
The range of the parameters in the vector $\vec X$ is restricted
by $\vec X^2\leq 1$.
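One may verify directly that the second factor in \eqn{H} is an orthogonal matrix:
the first row has unit norm since $b^2+\vec X^2=1$, it is orthogonal to the remaining
rows because $-bX^i + X^i - X^i \vec X^2/(b+1)=0$, and the scalar product of the
$i$th and $k$th of the remaining rows equals $\delta_{ik}$ since
$1-{2\over b+1}+{1-b^2\over (b+1)^2}=0$.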
A proper gauge fixing is to choose
the group element $h$ in the Cartan torus of $SO(D-2)$
and then use the remaining gauge symmetry to gauge fix
some of the components of the vector $\vec X$.
If \underline{$D=2 N + 3= {\rm odd}$} then we may
cast the orthogonal matrix $h\in SO(2N+1)$ and
the row vector $\vec X$ into the form \cite{basfe}
\begin{eqnarray}
&& h={\rm diagonal}\left(h_1,h_2,\dots,h_N,1\right)~ ,~~~~~
h_i = \pmatrix{
\cos 2\phi_i & \sin 2\phi_i \cr
-\sin 2\phi_i & \cos 2\phi_i \cr} \nonumber \\
&& \vec X =\left(0,X_2,0,X_4,\dots,0,X_{2N},X_{2N+1}\right) ~ .
\label{hdixn}
\end{eqnarray}
On the other hand if \underline{$D=2 N + 2= {\rm even}$}
then $h\in SO(2N)$ can be gauge fixed in a
form similar to the one in \eqn{hdixn} with the 1 removed.
Similarly in the vector $\vec X$ there is no
$X_{2N+1}$ component.
In both cases the total number of
independent variables is $D-2$, as it should be.
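Indeed, for $D=2N+3$ the gauge fixed variables are the $N$ angles $\phi_i$ together
with the $N+1$ non--vanishing components $X_2,X_4,\dots,X_{2N},X_{2N+1}$ of $\vec X$,
i.e. $2N+1=D-2$ in total, whereas for $D=2N+2$ there are $N$ angles and the $N$
components $X_2,X_4,\dots,X_{2N}$, i.e. $2N=D-2$.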
\underline{\em Examples:}
As a first example we consider the Abelian coset $SO(3)/SO(2)$ \cite{BCR}.
In terms of our original problem it arises after solving the
Virasoro constraints for
strings propagating on 4--dim Minkowski space or on the
direct product of the real line $R$ and the WZW model for $SU(2)$.
Using $X_2= \sin 2\theta$ one finds that
\begin{equation}
A_+ = \pmatrix{ 0 & 1\cr -1 & 0 }
(1- \cot^2\theta) \partial_+ \phi ~ ,
\label{gsu2}
\end{equation}
and that the background corresponding to \eqn{dynsm}
has metric \cite{BCR}
\begin{equation}
ds^2 = d\theta^2 + \cot^2\theta d\phi^2 ~ .
\label{S1}
\end{equation}
Using (\ref{equff}), the corresponding Abelian parafermions
$\Psi_\pm = \Psi_2 \pm i\Psi_1$ assume the familiar form
\begin{equation}
\Psi_\pm = (\partial_+ \theta \pm i \cot\theta \partial_+ \phi)
e^{\mp i \phi \pm i \int \cot^2\theta \partial_+ \phi } ~ ,
\label{pasu2}
\end{equation}
up to an overall normalization. An alternative way of seeing the
emergence of the coset
$SO(3)/SO(2)$ is from the original system of
embedding equations \eqn{gc1}--\eqn{gc3} for $D=4$ and zero
curvatures. They just reduce to the classical equations of
motion for the 2--dim $\s$--model corresponding to the metric
\eqn{S1} \cite{LuRe}, as it was observed in \cite{Baso3}.
Our second example is the simplest
non--Abelian coset $SO(4)/SO(3)$ \cite{BSthree}.
In our context it
arises in string
propagation on 5--dim Minkowski space or on the direct
product of the real line $R$ and the WZW model based on
$SU(2)\otimes U(1)$.
Parametrizing $X_2 = \sin 2\theta \cos \omega$
and $X_3 = \sin 2\theta \sin \omega$ one finds that
the $3 \times 3$ antisymmetric matrix for the $SO(3)$
gauge field $A_+$ has independent components given by
\begin{eqnarray}
A^{12}_+ & = & -\left( {\cos 2\theta \over \sin^2\theta \cos^2\omega }
+ \tan^2\omega {\cos^2\theta -\cos^2\phi \cos 2\theta\over
\cos^2\theta \sin^2 \phi} \right)
\partial_+\phi - \cot\phi \tan\omega \tan^2 \theta
\partial_+\omega ~ ,\nonumber \\
A^{13}_+ & = & \tan\omega
{\cos^2\theta -\cos^2\phi \cos 2\theta\over
\cos^2\theta \sin^2 \phi} \partial_+\phi
+ \cot\phi \tan^2 \theta \partial_+\omega ~ ,
\label{expap} \\
A^{23}_+ & = & \cot\phi \tan \omega {\cos 2\theta\over \cos^2\theta}
\partial_+ \phi - \tan^2 \theta \partial_+\omega ~ .
\nonumber
\end{eqnarray}
Then, the
background metric for the action \eqn{dynsm} governing the
dynamics of the 3 physical string degrees of freedom
is \cite{BSthree}
\begin{equation}
ds^2 = d\theta^2 + \tan^2\theta (d\omega + \tan\omega \cot \phi d\phi)^2
+ {\cot^2\theta \over \cos^2\omega} d\phi^2 ~ ,
\label{ds3}
\end{equation}
and the antisymmetric tensor is zero.
The parafermions of the $SO(4)/SO(3)$ coset are non--Abelian and are
given by (\ref{equff}) with some explicit expressions
for the covariant derivatives \cite{basfe}.
In addition to the two examples above, there also
exist explicit results for the coset $SO(5)/SO(4)$
\cite{BShet}.
This would correspond in our context to string
propagation on a 6--dim Minkowski space or
on the background
$R$ times the $SU(2)\otimes U(1)^2$
WZW model.
An obvious extension one could make is to
consider the same embedding problem but with Lorentzian instead of
Euclidean backgrounds representing the ``spatial'' part $K_{D-1}$.
This would necessarily involve $\s$--models for cosets based on
non--compact groups. The case for $D=4$
has been considered in \cite{vega}.
It is interesting to consider supersymmetric
extensions of the present work in connection also with \cite{susyre}.
In addition, formulating
classical propagation of $p$--branes
on curved backgrounds as a geometrical problem of embedding surfaces
(for work in this direction see \cite{kar}) and
finding the $p+1$--dim $\s$--model action (analog of \eqn{dynsm} for
strings ($p=1$)) that governs the
dynamics of the physical degrees of freedom of the $p$--brane
is an interesting open problem.
The techniques we have presented in this note can also be used to
find the Lagrangian description of the symmetric space
sine--Gordon models \cite{Pohlalloi} which
have been described as perturbations of coset conformal field
theories \cite{bapa}.
Hence, the corresponding parafermion variables will play
the key role in such a construction.
Finally, an interesting issue is the quantization of constrained
systems.
Quantization in string theory usually proceeds by quantizing
the unconstrained degrees of freedom and then imposing the
Virasoro constraints
as quantum conditions on the physical states.
However, in the
present framework the physical degrees of freedom should be quantized
directly using the quantization of the associated parafermions.
Quantization of the $SO(3)/SO(2)$ parafermions has been
done in the seminal work of \cite{zafa},
whereas for higher dimensional cosets there
is already some work in the literature \cite{BABA}.
A related problem is also finding a consistent quantum theory for
vortices.
This appears to have been the initial motivation of Lund and Regge
(see \cite{LuRe}).
\bigskip\bigskip
\centerline{\bf Acknowledgments }
\noindent
I would like to thank the organizers of the conferences at Imperial College
and at Argonne National Laboratory for their warm hospitality and for financial support.
This work was also carried out with the financial support
of the European Union Research Program
``Training and Mobility of Researchers'', under contract ERBFMBICT950362.
The work is also supported by the European Commission TMR program
ERBFMRX-CT96-0045.
\newpage
\section{Introduction}
The importance of studying continuous but nowhere differentiable
functions was emphasized a long time ago by Perrin,
Poincar\'e and others (see Refs. \cite{1} and \cite{2}).
It is possible for a continuous function to be sufficiently irregular
so that its graph is a fractal. This observation points to a
connection between the lack of differentiability of such a function and
the dimension of its graph.
Quantitatively one would like to convert the question concerning the lack of
differentiability into one concerning the amount of loss
of differentiability. In other words, one would
look at derivatives of fractional order rather than only those of
integral order and relate them to dimensions.
Indeed some recent papers~\cite{3,4,5,6} indicate a connection
between fractional calculus~\cite{7,8,9}
and fractal structure \cite{2,10} or fractal processes \cite{11,22,23}.
Mandelbrot and Van Ness \cite{11} have
used fractional integrals to formulate fractal processes such as fractional
Brownian motion. In Refs. \cite{4} and \cite{5}
a fractional diffusion equation has been
proposed for the diffusion on fractals.
Also Gl\"ockle and Nonnenmacher \cite{22} have formulated fractional
differential equations for some relaxation processes which are essentially
fractal time \cite{23} processes.
Recently Zaslavsky~\cite{46} showed that the Hamiltonian chaotic dynamics
of particles can be described by a fractional generalization of the
Fokker-Planck-Kolmogorov equation which is defined by two fractional
critical exponents $(\alpha , \beta)$ responsible for the space and
time derivatives of the distribution function, respectively.
However, to our knowledge, the precise nature of the
connection between the dimension of the graph of a fractal curve
and fractional differentiability
properties has not been established.
Irregular functions arise naturally in various
branches of physics. It is
well known that the graphs of projections of Brownian paths
are nowhere differentiable and have
dimension $3/2$. A generalization of Brownian motion called fractional
Brownian motion \cite{2,10} gives rise to graphs having dimension between 1 and 2.
Typical Feynman paths \cite{30,31}, like Brownian paths, are continuous but nowhere
differentiable.
Also, passive scalars advected by a turbulent fluid
\cite{19,20} can have isoscalar surfaces which are highly irregular, in the limit
of the diffusion constant going to zero. Attractors of some dynamical systems
have been shown \cite{15} to be continuous but nowhere differentiable.
All these irregular functions are characterized at every point by a local H\"older
exponent typically lying between 0 and 1.
In the case of functions having the same H\"older exponent $h$ at every
point it is well known that the
box dimension of its graph is $2-h$. Not all functions have the same exponent $h$
at every point; some have a whole range of H\"older exponents.
A set $\{x\vert h(x)=h \}$ may be a fractal set.
In such situations the corresponding functions are multifractal. These kind of
functions also arise in various physical situations, for instance, velocity
field of a turbulent fluid \cite{26} at low viscosity.
Also there exists a
class of problems where one has to solve a partial
differential equation subject to fractal boundary
conditions, e.g. the Laplace equation near a fractal conducting surface.
As noted in reference \cite{39} irregular boundaries may appear, down to
a certain spatial resolution, to be non-differentiable everywhere and/or
may exhibit convolutions over many length scales.
In view of such
problems there is a need to characterize pointwise behavior in a
form that can be readily used.
We consider the Weierstrass function as a prototype example of a function which is
continuous everywhere but differentiable nowhere and has an exponent which is
constant everywhere.
One form of the Weierstrass
function is
\begin{eqnarray}
W_{\lambda}(t) = \sum_{k=1}^{\infty} {\lambda}^{(s-2)k} \sin{\lambda}^kt,\;\;\;\;\;
t \;\;\;{\rm real}.
\end{eqnarray}
For this form, when $\lambda > 1$ it is well known \cite{12} that
$W_{\lambda}(t)$ is nowhere differentiable if $1<s<2$.
This curve has been extensively studied \cite{1,10,13,14} and
its graph is known to have a box dimension $s$, for sufficiently large $\lambda$.
Incidently, the Weierstrass functions are not just mathematical curiosities
but occur at several places. For instance,
the graph of this function is known \cite{10,15} to be a repeller
or attractor of some dynamical
systems.
This kind of function can also be recognized as
the characteristic function of a L\'evy
flight on a one dimensional lattice \cite{16}, which means that such a L\'evy
flight can be considered as a superposition of Weierstrass type functions.
This function has also been used \cite{10} to generate a fractional Brownian signal by multiplying
every term by a random amplitude and randomizing the phase of every term.
The main aim of the present paper is to explore the precise nature of the
connection between fractional differentiability properties of irregular
(non-differentiable) curves and dimensions/ H\"older exponents of
their graphs. A second aim is to provide a possible tool to study
pointwise behavior.
The organization of the paper is as follows.
In section II we motivate and define what we call
local fractional differentiability,
formally and use a local fractional derivative to formulate the Taylor series.
Then in section III we
apply this definition to a specific example, viz., Weierstrass' nowhere
differentiable function and show that this function, at every point, is
locally fractionally differentiable for all orders below $2-s$
and it is not so for orders between $2-s$ and 1, where $s$, $1<s<2$
is the box dimension of the graph of the function. In section IV we prove a
general result showing the relation between local fractional differentiability
of nowhere differentiable functions and the local H\"older exponent/
dimension of their graphs. In section V we demonstrate the use of the local
fractional derivatives (LFD) in unmasking isolated singularities and
in the study of the pointwise behavior of multifractal functions.
In section VI we conclude after pointing out a few possible consequences of
our results.
\section{Fractional Differentiability}
We begin by recalling the Riemann-Liouville definition of the
fractional integral
of a real function, which is
given by \cite{7,9}
\begin{eqnarray}
{{d^qf(x)}\over{[d(x-a)]^q}}={1\over\Gamma(-q)}{\int_a^x{{f(y)}\over{(x-y)^{q+1}}}}dy
\;\;\;{\rm for}\;\;\;q<0,\;\;\;a\;\;{\rm real},\label{def1}
\end{eqnarray}
and of the fractional derivative
\begin{eqnarray}
{{d^qf(x)}\over{[d(x-a)]^q}}={1\over\Gamma(1-q)}{d\over{dx}}{\int_a^x{{f(y)}\over{(x-y)^{q}}}}dy
\;\;\;{\rm for}\;\;\; 0<q<1.\label{def2}
\end{eqnarray}
The case of $q>1$ is of no relevance in this paper.
For future reference we note \cite{7,9}
\begin{eqnarray}
{{d^qx^p}\over {d x^q}} = {\Gamma(p+1) \over {\Gamma(p-q+1)}} x^{p-q}\;\;\;
{\rm for}\;\;\;p>-1.\label{xp}
\end{eqnarray}
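As a simple illustration of Eq. (\ref{xp}), taking $p=1$ and $q=1/2$ gives
\begin{eqnarray}
{{d^{1/2}x}\over {d x^{1/2}}} = {\Gamma(2)\over {\Gamma(3/2)}}x^{1/2}
= 2\sqrt{x\over \pi},\nonumber
\end{eqnarray}
while $p=0$ gives $d^q1/dx^q = x^{-q}/\Gamma(1-q)$, which is non-zero; the latter
fact is used below.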
We also note that the fractional derivative has the property (see Ref.
\cite{7}), viz.,
\begin{eqnarray}
{d^qf(\beta x)\over{d x^q}}={{\beta}^q{d^qf(\beta x)\over{d(\beta x)^q}}}
\end{eqnarray}
which makes it suitable for the study of scaling.
One may note that except in the case of positive integral $q$, the $q$th derivative
will be nonlocal through its dependence
on the lower limit "$a$".
On the other hand we wish to
study local scaling properties and hence we need to introduce the notion
of local fractional differentiability.
Secondly from Eq. (\ref{xp}) it is clear that the fractional derivative of a constant
function is not zero.
These two features play an
important role in defining local fractional differentiability.
We note that changing the lower
limit or adding a constant to a function alters the value of the fractional
derivative. This forces one to choose the lower limit as well as the additive
constant before hand. The most natural choices are as follows.
(1) We subtract, from the function, the value of the function at the point where
fractional differentiability is to be checked. This makes the value of the function
zero at that point, washing out the effect of any constant term.
(2) The natural choice of a lower limit will be
that point, where we intend to examine the fractional differentiability, itself.
This has an advantage in that it preserves local nature of
the differentiability property. With these motivations we now introduce
the following.
\begin{defn} If, for a function $f:[0,1]\rightarrow I\!\!R$, the limit
\begin{eqnarray}
I\!\!D^qf(y) =
{\lim_{x\rightarrow y} {{d^q(f(x)-f(y))}\over{d(x-y)^q}}},\label{defloc}
\end{eqnarray}
exists and is finite, then we say that the {\it local fractional derivative} (LFD)
of order $q$, at $x=y$,
exists.
\end{defn}
\begin{defn}
We define {\it critical order} $\alpha$, at $y$, as
$$
\alpha(y) = \sup \{q \vert {\rm {all\;local\; fractional\; derivatives\; of\; order\; less\; than\;}} q{{\rm\; exist\; at}\;y}\}.
$$
\end{defn}
Incidentally we note that Hilfer \cite{17,18} used a similar notion to extend
Ehrenfest's classification of phase
transition to continuous transitions. However in his work only the singular part of
the free energy was considered. So the first of the above mentioned
condition was automatically
satisfied. Also no lower limit of fractional derivative was considered
and by default it was taken as zero.
In order to see the information contained in the LFD we consider the
fractional Taylor's series with a remainder term for a real function $f$.
Let
\begin{eqnarray}
F(y,x-y;q) = {d^q(f(x)-f(y))\over{[d(x-y)]^q}}.
\end{eqnarray}
It is clear that
\begin{eqnarray}
I\!\!D^qf(y)=F(y,0;q).
\end{eqnarray}
Now, for $0<q<1$,
\begin{eqnarray}
f(x)-f(y)& =& {1\over\Gamma(q)} \int_0^{x-y} {F(y,t;q)\over{(x-y-t)^{-q+1}}}dt\\
&=& {1\over\Gamma(q)}[F(y,t;q) \int (x-y-t)^{q-1} dt]_0^{x-y} \nonumber\\
&&\;\;\;\;\;\;\;\;+ {1\over\Gamma(q)}\int_0^{x-y} {dF(y,t;q)\over{dt}}{(x-y-t)^q\over{q}}dt,
\end{eqnarray}
provided the last term exists. Thus
\begin{eqnarray}
f(x)-f(y)&=& {I\!\!D^qf(y)\over \Gamma(q+1)} (x-y)^q \nonumber\\
&&\;\;\;\;\;\;\;\;+ {1\over\Gamma(q+1)}\int_0^{x-y} {dF(y,t;q)\over{dt}}{(x-y-t)^q}dt,\label{taylor}
\end{eqnarray}
i.e.
\begin{eqnarray}
f(x) = f(y) + {I\!\!D^qf(y)\over \Gamma(q+1)} (x-y)^q + R_1(x,y),\label{taylor2}
\end{eqnarray}
where $R_1(x,y)$ is a remainder given by
\begin{eqnarray}
R_1(x,y) = {1\over\Gamma(q+1)}\int_0^{x-y} {dF(y,t;q)\over{dt}}{(x-y-t)^q}dt
\end{eqnarray}
Equation (\ref{taylor2}) is a fractional Taylor expansion of $f(x)$ involving
only the lowest and the second leading terms. This expansion can be carried
to higher orders provided the corresponding remainder term is well defined.
We note that the local fractional derivative as defined above
(not just the fractional derivative) provides
the coefficient $A$ in the approximation
of $f(x)$ by the function $f(y) + A(x-y)^q/\Gamma(q+1)$, for $0<q<1$,
in the vicinity of $y$.
We further note that the terms
on the RHS of Eq. (\ref{taylor}) are non-trivial and finite only in the case
$q=\alpha$.
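A simple check of Eq. (\ref{taylor2}) is provided by $f(x)=x^{1/2}$, $x\geq 0$, at
$y=0$: Eq. (\ref{xp}) gives $I\!\!D^{1/2}f(0)=\Gamma(3/2)$, so the leading term is
$(\Gamma(3/2)/\Gamma(3/2))x^{1/2}=x^{1/2}$, the remainder vanishes identically and
the critical order at the origin is $1/2$.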
Osler in Ref.\cite{21} has constructed a fractional Taylor
series using usual (not local in the present sense) fractional derivatives.
His results are, however, applicable to analytic functions and cannot be
used for non-differentiable scaling functions directly. Further Osler's
formulation involves terms with negative $q$ also and hence is not suitable
for approximating schemes.
One may further notice that when $q$ is set equal to
one in the above approximation one gets
the equation of the tangent.
It may be recalled that all the curves passing through a point $y$ and having
the same tangent
form an equivalence class (which is modeled by a linear behavior).
Analogously all the functions (curves) with the same critical order $\alpha$
and the same $I\!\!D^{\alpha}$
will form an equivalence class modeled by $x^{\alpha}$ [
If $f$ differs from $x^{\alpha}$ by a logarithmic correction then
terms on RHS of Eq. (\ref{taylor})
do not make sense precisely as in the case of ordinary calculus].
This is how one may
generalize the geometric interpretation of derivatives in terms of tangents.
This observation is useful when one wants to approximate an irregular
function by a piecewise smooth (scaling) function.
To illustrate the definitions of local fractional differentiability and critical order
consider an example of a polynomial of degree $n$ with its graph passing through the
origin and for which the first derivative at the origin
is not zero. Then all the local fractional derivatives of order
less than or equal to one exist at the origin. Also all derivatives
of integer order greater than
one exist, as expected. But local derivatives of any other order,
e.g. between 1 and 2 [see
equations (\ref{xp}) and (\ref{defloc})] do not exist.
Therefore critical order for this function at
$x=0$ is one. In fact, except at a finite number of points
where the function has a
vanishing first derivative, critical order
of a polynomial function will be one, since the linear term is expected to
dominate near these points.
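For instance, for $f(x)=a_1x+a_2x^2$ with $a_1\neq 0$, Eq. (\ref{xp}) gives
\begin{eqnarray}
{{d^qf(x)}\over{d x^q}} = {a_1\over{\Gamma(2-q)}}x^{1-q}
+ {2a_2\over{\Gamma(3-q)}}x^{2-q},\nonumber
\end{eqnarray}
which tends to zero as $x\rightarrow 0$ for $q<1$, tends to $a_1$ for $q=1$, and
diverges for $1<q<2$, so that the critical order at the origin is indeed one.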
\noindent
{\bf Remark}: We would like to point out that
there is a multiplicity of definitions of a fractional derivative.
The use of a Riemann-Liouville
definition, and other equivalent definitions such as Grunwald's
definition, are suitable for our purpose.
The other definitions of fractional derivatives which
do not allow control over both the limits, such as Weyl's definition or the definition
using Fourier transforms, are not suitable since
it would not be possible to retrieve the local nature of
the differentiability property which is essential for the study of
local behavior. Also, an important difference between our work and
the work of \cite{4,22} is that we study the local scaling behavior, whereas those works address asymptotic scaling properties.
\section{Fractional Differentiability of Weierstrass Function}
Consider a form of the Weierstrass function as given above, viz.,
\begin{eqnarray}
W_{\lambda}(t) = \sum_{k=1}^{\infty} {\lambda}^{(s-2)k}
\sin{\lambda}^kt,\;\;\;\;
\lambda>1.
\end{eqnarray}
Note that $W_{\lambda}(0)=0$.
Now
\begin{eqnarray}
{{d^qW_{\lambda}(t)}\over{dt^q}}
&=& {\sum_{k=1}^{\infty} {\lambda}^{(s-2)k}{{d^q\sin({\lambda}^kt)}\over {dt^q}}}\nonumber\\
&=& {\sum_{k=1}^{\infty} {\lambda}^{(s-2+q)k}{{d^q\sin({\lambda}^kt)}\over {d({\lambda}^kt)^q}}}, \nonumber
\end{eqnarray}
provided the right hand side converges uniformly. Using, for $0<q<1$,
\begin{eqnarray}
{{d^q\sin(x)}\over {d x^q}}={{d^{q-1}\cos(x)}\over{d x^{q-1}}}, \nonumber
\end{eqnarray}
we get
\begin{eqnarray}
{{d^qW_{\lambda}(t)}\over{dt^q}}
&=& {\sum_{k=1}^{\infty} {\lambda}^{(s-2+q)k}{{d^{q-1}\cos({\lambda}^kt)}\over {d({\lambda}^kt)^{q-1}}}}\label{a}
\end{eqnarray}
From the second mean value theorem it follows that the fractional integral
of $\cos({\lambda}^kt)$ of order $q-1$ is
bounded uniformly for all values of ${\lambda}^kt$.
This implies that the series on the right
hand side will converge uniformly for $q<2-s$, justifying our action of taking
the fractional derivative operator inside the sum.
Also as $t \rightarrow 0$ for
any $k$ the fractional integral in the summation of equation (\ref{a}) goes to zero.
Therefore it is easy to see from this that
\begin{eqnarray}
I\!\!D^qW_{\lambda}(0) = {\lim_{t\rightarrow 0} {{d^qW_{\lambda}(t)}\over{dt^q}}}=0\;\;\;
{\rm for} \;\;\;q<2-s.
\end{eqnarray}
This shows that the $q${th} local derivative of the Weierstrass function exists and
is continuous, at $t=0$, for $q<2-s$.
To check the fractional differentiability at any other point, say $\tau$,
we use $t'=t-\tau$ and $\widetilde{W} (t' )=W(t'+\tau )-W(\tau)$ so that
$\widetilde{W}(0)=0$. We have
\begin{eqnarray}
\widetilde{W}_{\lambda} (t' ) &=& \sum_{k=1}^{\infty} {\lambda}^{(s-2)k} \sin{\lambda}^k(t' +\tau)-
\sum_{k=1}^{\infty} {\lambda}^{(s-2)k} \sin{\lambda}^k\tau \nonumber\\
&=&\sum_{k=1}^{\infty} {\lambda}^{(s-2)k}(\cos{\lambda}^k\tau \sin{\lambda}^kt' +
\sin{\lambda}^k\tau(\cos{\lambda}^kt' -1)). \label{c}
\end{eqnarray}
Taking the fractional derivative of this with respect to $t'$ and following the
same procedure we can show that the fractional derivative of the Weierstrass
function of order $q<2-s$ exists at all points.
For $q>2-s$, the right hand side of equation (\ref{a}) seems to diverge.
We now prove that the LFD of order $q>2-s$ in fact does not
exist.
We do this by showing that there exists a sequence of
points approaching 0 along which
the limit of the fractional derivative of order $2-s < q <1$ does not exist.
We use the property of the Weierstrass function \cite{10}, viz.,
for each $t' \in [0,1]$ and $0 < \delta \leq {\delta}_0$
there exists $t$ such that $\vert t-t' \vert \leq \delta$ and
\begin{eqnarray}
c{\delta}^{\alpha} \leq \vert W(t)-W(t') \vert , \label{uholder}
\end{eqnarray}
where $c > 0$ and $\alpha=2-s$, provided $\lambda$ is sufficiently large.
We consider the case of $t'=0$ and
$t>0$.
Define $g(t)=W(t)-ct^{\alpha}$.
Now the above mentioned property, along with continuity
of the Weierstrass function assures us a
sequence of points $t_1>t_2>...>t_n>...\geq 0$ such that
$t_n \rightarrow 0$ as $n \rightarrow \infty$ and $g(t_n) = 0$
and $g(t)>0$ on $(t_n,\epsilon)$ for some $\epsilon>0$, for all
$n$ (it is not ruled out that $t_n$ may be zero for finite $n$).
Define
\begin{eqnarray}
g_n(t)&=&0,\;\;\;{\rm if}\;\;\;t\leq t_n, \nonumber\\
&=&g(t),\;\;\; {\rm otherwise}.\nonumber
\end{eqnarray}
Now we have, for $0 <\alpha < q < 1$,
\begin{eqnarray}
{{d^qg_n(t)}\over{d(t-t_n)^q}}={1\over\Gamma(1-q)}{d\over{dt}}{\int_{t_n}^t{{g(y)}\over{(t-y)^{q}}}}dy,\nonumber
\end{eqnarray}
where $t_n \leq t \leq t_{n-1}$. We assume that the left hand side of the above
equation exists for if it does not then we have nothing to prove.
Let
\begin{eqnarray}
h(t)={\int_{t_n}^t{{g(y)}\over{(t-y)^{q}}}}dy.\nonumber
\end{eqnarray}
Now $h(t_n)=0$ and $h(t_n+\epsilon)>0$, for a suitable $\epsilon$, as the integrand is positive.
Due to continuity there must exist an ${\epsilon}'>0$ and ${\epsilon}'<\epsilon $
such that $h(t)$ is increasing on $(t_n,{\epsilon}')$.
Therefore
\begin{eqnarray}
0 \leq {{d^qg_n(t)}\over{d(t-t_n)^q}} {\vert}_{t=t_n},\;\;\;\;n=1,2,3,... .
\end{eqnarray}
This implies that
\begin{eqnarray}
c{{d^qt^{\alpha}}\over{d(t-t_n)^q}} {\vert}_{t=t_n} \leq {{d^qW(t)}\over{d(t-t_n)^q}} {\vert}_{t=t_n},
\;\;\;\;n=1,2,3,... .
\end{eqnarray}
But we know from Eq. (\ref{xp}) that, when $0<\alpha <q<1$,
the left hand side in the above inequality approaches infinity as $t\rightarrow 0$.
This implies that the right hand side of the above inequality does not
exist as $t \rightarrow 0$. This argument can be generalized
for all non-zero $t'$ by
changing the variable $t''=t-t'$.
This concludes the proof.
Therefore the critical order of the Weierstrass function
will be $2-s$ at all points.
\noindent
{\bf Remark}: Schlesinger et al \cite{16} have considered a
L\'evy flight on a one dimensional
periodic lattice where a particle jumps from one lattice site
to other with the probability given by
\begin{eqnarray}
P(x) = {{{\omega}-1}\over{2\omega}} \sum_{j=0}^{\infty}{\omega}^{-j}
[\delta(x, +b^j) + \delta(x, -b^j)],
\end{eqnarray}
where $x$ is the magnitude of the jump, $b$ is the lattice spacing and $b>\omega>1$.
$\delta(x,y)$ is a Kronecker delta.
The characteristic function for $P(x)$ is given by
\begin{eqnarray}
\tilde{P}(k) = {{{\omega}-1}\over{2\omega}} \sum_{j=0}^{\infty}{\omega}^{-j}
\cos(b^jk).
\end{eqnarray}
which is nothing but the Weierstrass cosine function.
For this distribution the L\'evy index is $\log{\omega}/\log{b}$, which can be
identified as the critical order of $\tilde{P}(k)$.
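Indeed, writing ${\omega}^{-j}=b^{(s-2)j}$, i.e. $s=2-\log\omega/\log b$ (so that
$1<s<2$ precisely when $b>\omega>1$), $\tilde{P}(k)$ is, up to a constant prefactor,
a Weierstrass cosine function with $\lambda=b$, and the analysis of this section
(for sufficiently large $b$) gives its critical order at every point as
$2-s=\log\omega/\log b$.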
More generally for the L\'evy distribution with index $\mu$
the characteristic function
is given by
\begin{eqnarray}
\tilde{P}(k) =A \exp{c\vert k \vert^{\mu}}.
\end{eqnarray}
The critical order of this function at $k=0$
also turns out to be same as $\mu$. Thus the L\'evy index can be identified as
the critical order of the characteristic function at $k=0$.
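One way to see this is to expand the exponential:
$\tilde{P}(k)-\tilde{P}(0) = Ac\vert k \vert^{\mu} + O(\vert k \vert^{2\mu})$, so
that near the origin the characteristic function differs from its value at $k=0$ by
a term behaving as $\vert k \vert^{\mu}$, whose critical order at $k=0$ is $\mu$
(for $0<\mu<1$) by Eq. (\ref{xp}).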
\section{Connection between critical order and the box dimension of the curve}
\begin{thm}
Let $f:[0,1]\rightarrow I\!\!R$ be a continuous function.
a) If
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}=0,\;\;\;
{\rm for}\;\; q<\alpha\;\;
,\nonumber
\end{eqnarray}
where $q,\alpha \in (0,1)$,
for all $y \in (0,1)$,
then $dim_Bf(x) \leq 2-\alpha$.
b) If there exists a sequence $x_n \rightarrow y$ as
$n \rightarrow \infty$ such that
\begin{eqnarray}
\lim_{n\rightarrow \infty} {d^q(f(x_n)-f(y)) \over{[d(x_n-y)]^q}}=\pm \infty,\;\;\;
{\rm for}\;\; q>\alpha,\;\;
,\nonumber
\end{eqnarray}
for all $y$,
then $dim_Bf \geq 2-\alpha$.
\end{thm}
\noindent
{\bf Proof}: (a) Without loss of generality assume $y=0$ and $f(0)=0$.
We consider the case of $q<\alpha$.
As $0<q<1$ and $f(0)=0$ we can write \cite{7}
\begin{eqnarray}
f(x)&=&{d^{-q}\over{d x^{-q}}}{d^qf(x)\over{d x^q}}\nonumber\\
&=&{1\over\Gamma(q)}{\int_0^x{{d^qf(y)\over{dy^q}}\over{(x-y)^{-q+1}}}}dy. \label{comp}
\end{eqnarray}
Now
\begin{eqnarray}
\vert f(x)\vert \leq {1\over\Gamma(q)}{\int_0^x{\vert {d^qf(y)\over{dy^q}}\vert
\over{(x-y)^{-q+1}}}}dy. \nonumber
\end{eqnarray}
As, by assumption, for $q<\alpha$,
\begin{eqnarray}
\lim_{x\rightarrow 0}{d^qf(x)\over{d x^q}}=0,\nonumber
\end{eqnarray}
we have, for any $\epsilon > 0$, a $\delta > 0$ such that
$\vert {d^qf(x)/{d x^q}}\vert < \epsilon$ for all $x< \delta$,
\begin{eqnarray}
\vert f(x)\vert &\leq& {\epsilon\over\Gamma(q)}{\int_0^x{dy
\over{(x-y)^{-q+1}}}}\nonumber\\
&=&{\epsilon\over \Gamma(q+1)}x^q.\nonumber
\end{eqnarray}
As a result we have
\begin{eqnarray}
\vert f(x)\vert &\leq& K \vert x\vert ^q, \;\;\;\;{\rm for}\;\;\; x<\delta.\nonumber
\end{eqnarray}
Now this argument can be extended for general $y$ simply by considering
$x-y$ instead of $x$ and $f(x)-f(y)$ instead of $f(x)$. So finally
we get for $q<\alpha$
\begin{eqnarray}
\vert f(x)-f(y)\vert &\leq& K \vert x-y\vert ^q, \;\;\;\;{\rm for}
\;\;\vert x-y \vert < \delta,\label{holder}
\end{eqnarray}
for all $y \in (0,1)$. Hence we have \cite{10}
\begin{eqnarray}
{\rm dim}_Bf(x) \leq 2-\alpha.\nonumber
\end{eqnarray}
b) Now we consider the case $q>\alpha$. If we have
\begin{equation}
\lim_{x_n\rightarrow 0}{d^qf(x_n)\over{dx_n^q}}=\infty, \label{k0}
\end{equation}
then for given $M_1 >0$ and $\delta > 0$ we can find positive integer $N$ such that $|x_n|<\delta$ and
$ {d^qf(x_n)}/{dx_n^q} \geq M_1$ for all $n>N$. Therefore by Eq. (\ref{comp})
\begin{eqnarray}
f(x_n) &\geq& {M_1\over\Gamma(q)}{\int_0^{x_n}{dy
\over{(x_n-y)^{-q+1}}}}\nonumber\\
&=&{M_1\over \Gamma(q+1)}x_n^q\nonumber
\end{eqnarray}
If we choose $\delta=x_N$ then we can say that there exists $x<\delta$
such that
\begin{eqnarray}
f(x) \geq k_1 {\delta}^q. \label{k1}
\end{eqnarray}
If we have
\begin{eqnarray}
\lim_{x_n\rightarrow 0}{d^qf(x_n)\over{dx_n^q}}=-\infty, \nonumber
\end{eqnarray}
then for given $M_2 >0$ we can find a positive integer $N$ such that
$ {d^qf(x_n)}/{dx_n^q} \leq -M_2$ for all $n>N$. Therefore
\begin{eqnarray}
f(x_n) &\leq& {-M_2\over\Gamma(q)}{\int_0^{x_n}{dy
\over{(x_n-y)^{-q+1}}}}\nonumber\\
&=&{-M_2\over \Gamma(q+1)}x_n^q.\nonumber
\end{eqnarray}
Again if we write $\delta=x_N$, there exists $x<\delta$ such that
\begin{eqnarray}
f(x) \leq -k_2 {\delta}^q.\label{k2}
\end{eqnarray}
Therefore by (\ref{k1}) and (\ref{k2}) there exists $x<\delta$ such that, for $q>\alpha$,
\begin{eqnarray}
\vert f(x)\vert &\geq& K \delta^q.\nonumber
\end{eqnarray}
Again for any $y \in (0,1)$ there exists $x$ such that
for $q>\alpha$ and $|x-y|<\delta$
\begin{eqnarray}
\vert f(x)-f(y)\vert &\geq& k \delta^q.\nonumber
\end{eqnarray}
Hence we have \cite{10}
\begin{eqnarray}
{\rm dim}_Bf(x) \geq 2-\alpha.\nonumber
\end{eqnarray}
Notice that part (a) of the theorem above is the generalization
of the statement that $C^1$ functions are locally Lipschitz (hence their
graphs have dimension 1) to the case when the function has a H\"older type
upper bound (hence their dimension is greater than one).
Here the function is required to
have the same critical order throughout the interval. We can weaken this
condition slightly. Since we are dealing with a box dimension which
is finitely stable \cite{10}, we can allow a finite number of points having
different critical order, so that we can divide the set into finitely many parts,
each part having the same critical order.
The example of a polynomial of degree $n$ having critical order one and dimension one is
consistent with the above result, as we can divide the graph of the polynomial
in a finite
number of parts such that at each point in every part the critical order is one.
Using the finite stability of the box dimension, the dimension of the whole curve
will be one.
We can also prove a partial converse of the above theorem.
\begin{thm}
Let $f:[0,1]\rightarrow I\!\!R$ be a continuous function.
a) Suppose
\begin{eqnarray}
\vert f(x)- f(y) \vert \leq c\vert x-y \vert ^{\alpha}, \nonumber
\end{eqnarray}
where $c>0$, $0<\alpha <1$ and $|x-y|< \delta$ for some $\delta >0$.
Then
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}=0,\;\;\;
{\rm for}\;\; q<\alpha,\;\;
\nonumber
\end{eqnarray}
for all $y\in (0,1)$.
b) Suppose that for each $y\in (0,1)$ and for each $\delta >0$ there exists x such that
$|x-y| \leq \delta $ and
\begin{eqnarray}
\vert f(x)- f(y) \vert \geq c{\delta}^{\alpha}, \nonumber
\end{eqnarray}
where $c>0$, $\delta \leq {\delta}_0$ for some ${\delta}_0 >0$ and $0<\alpha<1$.
Then there exists a sequence $x_n \rightarrow y$ as $n\rightarrow \infty$
such that
\begin{eqnarray}
\lim_{n\rightarrow \infty} {d^q(f(x_n)-f(y)) \over{[d(x_n-y)]^q}}=\pm \infty,\;\;\;
{\rm for}\;\; q>\alpha,\;\;
\nonumber
\end{eqnarray}
for all $y$.
\end{thm}
\noindent
{\bf Proof}
a) Assume that there exists a sequence $x_n \rightarrow y$ as
$n \rightarrow \infty$ such that
\begin{eqnarray}
\lim_{n\rightarrow \infty} {d^q(f(x_n)-f(y)) \over{[d(x_n-y)]^q}}=\pm \infty,\;\;\;
{\rm for}\;\; q<\alpha\;\;,\nonumber
\end{eqnarray}
for some $y$. Then by arguments between Eq. (\ref{k0}) and Eq. (\ref{k1}) of the second part of the previous theorem it is a
contradiction.
Therefore
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}={\rm const}\;\;
{\rm or}\;\; 0,\;\;\;
{\rm for}\;\; q<\alpha.
\nonumber
\end{eqnarray}
Now if
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}={\rm const},\;\;\;
{\rm for}\;\; q<\alpha,\;\;
\nonumber
\end{eqnarray}
then we can write
\begin{eqnarray}
{d^q(f(x)-f(y)) \over{[d(x-y)]^q}}=K+\eta(x,y),\;\;\;
\nonumber
\end{eqnarray}
where $K={\rm const}$ and $\eta(x,y) \rightarrow 0$
sufficiently fast as $x\rightarrow y$. Now
taking the $\epsilon$ derivative of both sides,
for sufficiently small $\epsilon$ we get
\begin{eqnarray}
{d^{q+\epsilon}(f(x)-f(y)) \over{[d(x-y)]^{q+\epsilon}}}={{K(x-y)^{-\epsilon}}
\over {\Gamma(1-\epsilon)}} + {d^{\epsilon}{\eta(x,y)}\over{[d(x-y)]^{\epsilon}}}
\;\;\;{\rm for}\;\; q+\epsilon <\alpha. \nonumber
\end{eqnarray}
As $x\rightarrow y$ the right hand side of the above equation goes
to infinity (the term involving $\eta$ does not matter since $\eta$ goes to 0
sufficiently fast)
which again is a contradiction. Hence the proof.
b) The proof follows by the method used in the previous section to show that
the fractional derivative of order greater than $2-\alpha$ of the Weierstrass
function does not exist.
These two theorems give an equivalence between the H\"older exponent and the critical
order of fractional differentiability.
\section{Local Fractional Derivative as a tool to study pointwise regularity of functions}
Motivation for studying pointwise behavior of irregular functions
and its relevance in physical
processes was given in the Introduction.
There are several approaches to
studying the pointwise behavior of functions. Recently wavelet transforms \cite{29,38}
were used for this purpose and have met with some success.
In this section we argue that the LFD is a tool that can be used to characterize
irregular functions and has certain advantages, in aspects explained below, over its
counterpart based on wavelet transforms.
Various authors \cite{27,24} have used the following general definition
of H\"older exponent. The H\"older exponent $\alpha(y)$ of a function $f$
at $y$ is defined as the largest exponent such that there exists a polynomial
$P_n(x)$ of order $n$ that satisfies
\begin{eqnarray}
\vert f(x) - P_n(x-y) \vert = O(\vert x-y \vert^{\alpha}),
\end{eqnarray}
for $x$ in the neighborhood of $y$. This definition is equivalent to
equation (\ref{holder}), for $0<\alpha<1$, the range of interest in this work.
It is clear from theorem I that
LFDs provide an algorithm to calculate H\"older exponents and
dimensions. It may be noted that since there is a clear change
in behavior when order $q$ of the derivative crosses the critical order
of the function
it should be easy to determine the H\"older exponent numerically.
Previous methods using autocorrelations for fractal signals \cite{10}
involve an additional step of finding an autocorrelation.
\subsection{Isolated singularities and masked singularities}
Let us first consider the case of isolated
singularities. We choose the simplest example $f(x)=ax^{\alpha},\;\;\;0<\alpha
<1,\;\;\;x>0$. The critical order at $x=0$ gives the order of
singularity at that point whereas
the value of the LFD $I\!\!D^{q=\alpha}f(0)$, viz.
$a\Gamma(\alpha+1)$, gives the strength of the singularity.
Using LFD we can detect a weaker singularity masked by a stronger singularity.
As demonstrated below, we can estimate and subtract the contribution due to
the stronger singularity from the
function and find out the critical order of the remaining function.
Consider, for example, the function
\begin{eqnarray}
f(x)=ax^{\alpha}+bx^{\beta},\;\;\;\;\;\;0<\alpha <\beta <1,\;\;\;x>0.
\label{masked}
\end{eqnarray}
The LFD of this function at $x=0$ of the order $\alpha$ is
$I\!\!D^{\alpha}f(0)=a\Gamma(\alpha+1)$.
Using this estimate of the stronger singularity we now write
$$
G(x;\alpha)=f(x)-f(0)-{I\!\!D^{\alpha}f(0)\over\Gamma(\alpha+1)}x^{\alpha},
$$
which for the function $f$ in Eq. (\ref{masked}) is
\begin{eqnarray}
{ {d^q G(x;\alpha) }
\over{d x^q}} = {b\Gamma(\beta+1)\over{\Gamma(\beta-q+1)}}x^{\beta-q}.
\end{eqnarray}
Therefore the critical order of the function $G$, at $x=0$, is $\beta$.
Notice that the estimation of the weaker singularity was possible in the
above calculation just because the LFD gave the coefficient of $x^{\alpha}/
{\Gamma(\alpha+1)}$. This suggests that using LFD, one should be able to extract the secondary singularity spectrum
masked by the primary singularity spectrum of strong singularities. Hence one
can gain more insight into the processes giving rise to irregular
behavior. Also, one may note that this procedure can be used to detect
singularities masked by regular polynomial behavior. In this way one can extend
the present analysis beyond the range $0<\alpha<1$, where $\alpha$ is a H\"older
exponent.
A comparison of the two methods of studying pointwise behavior
of functions, one using wavelets and the other using LFD,
shows that characterization of H\"older classes of
functions using LFD is direct and involves fewer assumptions.
The characterization of a H\"older class of functions with
oscillating singularity,
e.g. $f(x)=x^{\alpha}\sin(1/x^{\beta})$ ($x>0$, $0< \alpha <1$ and $\beta>0$),
using wavelets needs two exponents \cite{25}.
Using LFD, owing to theorems I and II, the critical order
directly gives the H\"older exponent for such a function.
It has
been shown in the context of wavelet transforms that
one can detect singularities masked by regular polynomial
behavior \cite{27} by choosing the analyzing wavelet with
its first $n$ (for suitable $n$)
moments vanishing. If one has to extend the wavelet method
for the unmasking of weaker singularities,
one would then require analyzing wavelets with fractional moments vanishing.
Notice that
one may require this condition along with the condition
on the first $n$ moments. Further the class of functions to be analyzed is in
general restricted in these analyses. These restrictions essentially arise
from the asymptotic properties of the wavelets used.
On the other hand, with the truly
local nature of LFD one does not have to bother about the behavior of functions
outside our range of interest.
\subsection{Treatment of multifractal functions}
Multifractal measures have been the object of many investigations
\cite{32,33,34,35,40}. This
formalism has met with many applications. Its importance also stems
from the fact such measures are natural measures to be used in the
analysis of many phenomenon \cite{36,37}. It may however happen that the object
one wants to understand is a function (e.g., a fractal or multifractal signal)
rather than a set or a measure. For instance one would like to
characterize the velocity of fully developed turbulence. We now proceed
with the analysis of such multifractal functions using LFD.
Now we consider the case
of multifractal functions. Since LFD gives the local
and pointwise behavior of the function, conclusions of theorem I will
carry over even in the case of multifractal functions where we have
different H\"older exponents at different points.
Multifractal functions have been defined by Jaffard \cite{24}
and Benzi et al. \cite{28}.
However as noted by Benzi et al. their functions are random in nature and
the pointwise behavior
cannot be studied. Since we are dealing with non-random
functions in this paper,
we shall consider a specific (but non-trivial) example of a function
constructed by Jaffard to illustrate the procedure. This function is a
solution $F$ of the functional equation
\begin{eqnarray}
F(x)=\sum_{i=1}^d {\lambda}_iF(S_i^{-1}(x)) + g(x),
\end{eqnarray}
where $S_i$'s are the affine transformations of the kind
$S_i(x)={\mu}_ix+b_i$ (with $\vert \mu_i \vert < 1$ and $b_i$'s real)
and $\lambda_i$'s
are some real numbers and $g$ is any sufficiently smooth function ($g$ and its
derivatives should have a fast decay). For the sake of illustration
we choose ${\mu}_1={\mu}_2=1/3$, $b_1=0$, $b_2=2/3$,
${\lambda}_1=3^{-\alpha}$, ${\lambda}_2=3^{-\beta}$ ($0<\alpha<\beta<1$) and
\begin{eqnarray}
g(x)&=& \sin(2\pi x),\;\;\;\;\;\;{\rm if}\;\;\;\; x\in [0,1],\nonumber\\
&=&0,\;\;\;\;\;\;\;\;\;{\rm otherwise}. \nonumber
\end{eqnarray}
Such functions are studied in detail in Ref. \cite{24} using wavelet transforms
where it has been shown that the above functional equation (with the
parameters we have chosen)
has a unique solution $F$ and at any point
$F$ either has H\"older exponents ranging from
$\alpha$ to $\beta$ or is smooth. A sequence of points $S_{i_1}(0),\;\;$
$\;S_{i_2}S_{i_1}(0),\;\;$
$\cdots,\;\;\; S_{i_n}\cdotp \cdotp \cdotp S_{i_1}(0), \;\;\cdots$,
where $i_k$ takes values 1 or 2,
tends to a point in $[0,1]$ (in fact to a point of a triadic
Cantor set) and for the values of
${\mu}_i$'s we have chosen this correspondence between sequences and limits
is one to one.
The solution of the above functional equation is given by Ref. \cite{24} as
\begin{eqnarray}
F(x)=\sum_{n=0}^{\infty}\;\;\;\sum_{i_1,\cdots,i_n=1}^2{\lambda}_{i_1}\cdots{\lambda}_{i_n}
g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x)). \label{soln}
\end{eqnarray}
Note that with the above choice of parameters the inner sum in (\ref{soln})
reduces to a single term. Jaffard \cite{24} has shown that
\begin{eqnarray}
h(y)=\liminf_{n\rightarrow \infty}{{\log{({\lambda}_{{i_1}(y)}\cdots{\lambda}_{{i_n}(y)})}}
\over{\log{({\mu}_{{i_1}(y)}\cdots{\mu}_{{i_n}(y)})}}},
\end{eqnarray}
where $\{i_1(y)\cdot\cdot\cdot i_n(y)\}$ is a sequence of integers appearing in
the sum in equation (\ref{soln}) at a point $y$,
and is the local H\"older exponent at $y$.
It is clear that $h_{min}=\alpha$ and
$h_{max}=\beta$. The function $F$ at the points of a triadic Cantor
set has $h(x) \in [\alpha , \beta]$
and at other points it is smooth (where $F$ is as smooth as $g$).
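As a simple illustration of this formula, the constant sequence $i_k=1$ corresponds
to the fixed point $y=0$ of $S_1$, for which the ratio in the definition of $h$
equals $\alpha$ for every $n$, so $h(0)=\alpha$; similarly the constant sequence
$i_k=2$ corresponds to $y=1$ and gives $h(1)=\beta$, while the alternating sequence
$1,2,1,2,\dots$ leads to a Cantor set point with $h=(\alpha+\beta)/2$. Sequences with
other limiting frequencies of $1$'s and $2$'s give the intermediate values in
$[\alpha,\beta]$.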
Now
\begin{eqnarray}
{d^q(F(x)-F(y))\over{[d(x-y)]^q}}&=&\sum_{n=0}^{\infty}\;\;\;\sum_{i_1,\cdots,i_n=1}^2{\lambda}_{i_1}\cdots{\lambda}_{i_n}\nonumber\\
&&\;\;\;\;\;\;\;\;\;{d^q[g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x))-g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(y))]
\over{[d(x-y)]^q}}\nonumber\\
&=&\sum_{n=0}^{\infty}\;\;\;\sum_{i_1,\cdots,i_n=1}^2{\lambda}_{i_1}\cdots{\lambda}_{i_n}
({\mu}_{i_1}\cdots{\mu}_{i_n})^{-q} \nonumber\\
&&\;\;\;\;\;\;\;\;\;\;{d^q[g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x))-g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(y))]
\over{[d(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x-y))]^q}}, \label{fdj}
\end{eqnarray}
provided the RHS is uniformly bounded.
Following the procedure described in section III the fractional
derivative on the RHS can easily be seen to be uniformly bounded and
the series is convergent if $q<\min\{h(x),h(y)\}$.
Further, it vanishes in the limit as $x\rightarrow y$. Therefore, if $q<h(y)$, then $I\!\!D^qF(y)=0$, as
in the case of the Weierstrass function, showing that $h(y)$ is a lower
bound on the critical order.
The procedure of finding an upper bound is technical and lengthy.
It is carried out in the Appendix below.
In this way an intricate analysis for finding the lower bound on the
H\"older exponent has been replaced by a calculation involving only a few steps. This
calculation can easily be generalized to more general functions $g(x)$.
Summarizing, the LFD enables one to calculate the local H\"older exponent even
for the case of multifractal functions. This fact, proved in theorems I and II,
is demonstrated with a concrete illustration.
\section{Conclusion}
In this paper we have introduced the notion of a local fractional derivative
using the Riemann-Liouville formulation (or equivalents such as Gr\"unwald's)
of fractional calculus. This definition was found to appear naturally
in the Taylor expansion (with a remainder) of functions and thus is
suitable for approximating scaling functions. In particular
we have pointed out a possibility of replacing the notion of a
tangent as an equivalence class of curves passing through the same point
and having the same derivative with a more general one.
This more general notion is in terms of an equivalence class of curves
passing through the same point and having the same critical order and
the same LFD.
This generalization has the advantage
of being applicable to non-differentiable functions also.
We have established that (for sufficiently large $\lambda$) the critical order of the
Weierstrass function is related to the box dimension of its graph. If the dimension of
the graph of such a function is $1+\gamma$, the critical order is $1-\gamma$. When
$\gamma$ approaches unity the function becomes more and more irregular and local fractional
differentiability is lost accordingly. Thus there is a direct quantitative connection between the
dimension of the graph and the fractional differentiability property of the function.
This is one of the main conclusions of the present work.
A consequence of our result is that a classification of continuous paths
(e.g., fractional Brownian paths) or
functions according to local fractional differentiability properties is also
a classification according to dimensions (or H\"older exponents).
Also the L\'evy index of a L\'evy flight on a one dimensional
lattice is identified as
the critical order of the characteristic function of the walk. More generally,
the L\'evy index of a L\'evy distribution is identified as
the critical order of its characteristic function at the origin.
We have argued and demonstrated that LFDs are useful for studying isolated singularities and singularities masked by a stronger singularity (not just by
regular behavior). We have further shown that the pointwise
behavior of irregular, fractal or multifractal functions can be studied
using the methods of this paper.
We hope that future study in this direction will make random irregular
functions as well as multivariable irregular functions
amenable to analytic treatment, which is badly needed at this
juncture. Work is in progress in this direction.
\section*{Acknowledgments}
We acknowledge helpful discussions with Dr. H. Bhate and Dr. A. Athavale.
One of the authors (KMK) is grateful to CSIR (India) for financial assistance and the other author
(ADG) is grateful to UGC (India) for financial assistance during the initial stages of the work.
\section{Introduction}
\input{intro}
\section{KPZ equation}
\input{kpze}
\section{Ideal MBE}
\input{idmbe}
\section{Ballistic deposition}
\input{bd}
\section{Summary and discussion}
\input{sd}
\noindent
{\Large{\bf Acknowledgment}}
The author gratefully acknowledges useful correspondence with S. Pal,
J. Krug, H.W. Diehl, and D.P. Landau.
\setcounter{section}{0}
\renewcommand{\thesection}{Appendix \Alph{section}:}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\section{Gaussian theory}
\input{gt}
\section{Perturbation theory}
\input{pt}
\section{Response and correlation functions}
\input{rcf}
\input{mkbibl}
\end{document}
\section{Introduction}
The electric polarisability of the charged pion $\alpha_E$ can be inferred
from the amplitude for low energy Compton scattering
$ \gamma + \pi^+ \rightarrow \gamma + \pi^+ $.
This amplitude cannot be measured
at low energies directly, but can be determined from measurements on
related processes like $\pi N \rightarrow \pi N \gamma$, $ \gamma N
\rightarrow \gamma N \pi$ and $ \gamma \gamma \rightarrow \pi \pi$.
The measured values for $\alpha_E$,
in units of $10^{-4}\; {{\rm fm}}^3$,
are $6.8 \pm 1.4 \pm 1.2$ \cite{Antipov},
$ 20 \pm 12$ \cite{Aibergenov} and $2.2 \pm 1.6$ \cite{Babusci}, respectively.
Alternatively, the polarisability can be predicted theoretically by relating
it to other
quantities which are better known experimentally. In chiral perturbation
theory, it can be shown \cite{Holstein} that the pion polarisability
is given by
\begin{equation}
\alpha_E = \frac{ \alpha \, F_A}{m_{\pi} F_{\pi}^2},
\label{alpha}
\end{equation}
where $F_{\pi}$ is the pion decay
constant and $F_A$ is the axial structure-dependent form factor for radiative
charged pion decay $\pi \rightarrow e \nu \gamma$ \cite{Bryman}.
The latter is often re-expressed in the form
\begin{equation}
F_A \equiv \gamma\,{F_V}\, ,
\end{equation}
because the ratio $\gamma$ can be measured
in radiative pion decay experiments more accurately than $F_A$ itself,
while the corresponding vector form factor $F_V$ is determined to be
\cite{PDG}\footnote{Our definitions of $F_V$ and $F_A$ differ from
those used by the Particle Data Group \cite{PDG} by a factor of two.}
$F_V = 0.0131 \pm 0.0003$
by using the conserved
vector current (CVC) hypothesis to relate $\pi \rightarrow e \nu \gamma$
and $\pi^0 \rightarrow \gamma\gamma$ decays.
$\gamma$ has been measured in three $\pi \rightarrow e \nu \gamma$
experiments, giving the values: $0.25 \pm 0.12$ \cite{lampf};
$0.52 \pm 0.06$ \cite{sin}; and $0.41 \pm 0.23$ \cite{istra}. The weighted
average $ \gamma = 0.46 \pm 0.06$
can be combined with the above equations to give
\begin{equation}
\alpha_E = (2.80 \pm 0.36) \, 10^{-4} \; {{\rm fm}}^3 \; .
\label{chiral}
\end{equation}
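As a quick numerical check of eqs. (\ref{alpha}) and (\ref{chiral}), one may restore
the factors of $\hbar c$ explicitly; the short sketch below reproduces the central value
quoted above from the numbers given in the text.
\begin{verbatim}
# alpha_E = alpha * F_A / (m_pi * F_pi^2), converted to fm^3
hbarc  = 197.327            # MeV fm
alpha  = 1.0/137.036
F_V    = 0.0131
gamma  = 0.46
F_A    = gamma * F_V
m_pi   = 139.57             # MeV
F_pi   = 93.2               # MeV

alpha_E = alpha * F_A / (m_pi * F_pi**2) * hbarc**3
print(alpha_E)              # ~ 2.8e-4 fm^3
\end{verbatim}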
This result is often referred to as the chiral theory prediction for
the pion polarisability \cite{Holstein}. However $\alpha_E$, or
equivalently $F_A$, can be
determined in
other ways. In particular, the latter occurs in the Das-Mathur-Okubo
(DMO) sum rule \cite{Das}
\begin{equation}
I = F_{\pi}^2 \frac{\langle r_{\pi}^2 \rangle }{3} - {F_A}\,,
\label{DMO}
\end{equation}
where
\begin{equation}
I\equiv\int \frac{{\rm d}s}{s} \rho_{V-A}(s) \,,
\label{i}
\end{equation}
with $\rho_{V-A}(s) = \rho_V(s) - \rho_A(s)$ being the difference
in the spectral functions of the vector and axial-vector
isovector current correlators, while
$\langle r_{\pi}^2\rangle $
is the pion mean-square charge radius. Using its
standard value $\langle r_{\pi}^2\rangle = 0.439 \pm 0.008 \; {{\rm fm}}^2$
\cite{Amendolia} and eqs. (\ref{alpha}), (\ref{chiral}) one gets:
\begin{equation}\label{iexp}
I_{DMO}=(26.6 \pm 1.0)\cdot 10^{-3}
\end{equation}
Alternatively, if the integral $I$ is known, eq. (\ref{DMO}) can be rewritten
in the form of a prediction for the polarisability:
\begin{equation}
\alpha_E = \frac{\alpha}{m_{\pi}} \biggl(
\frac{\langle r_{\pi}^2 \rangle }{3}
- \frac{I}{F_{\pi}^2} \biggr).
\label{DMOpredict}
\end{equation}
Recent attempts to analyse this relation have resulted in some contradiction
with the chiral prediction.
Lavelle et al. \cite{Lavelle}
use related QCD sum rules to estimate the integral $I$ and obtain
$\alpha_E = (5.60 \pm 0.50) \, 10^{-4} \; {{\rm fm}}^3$.
Benmerrouche et al. \cite{Bennerrouche} apply certain sum rule
inequalities to obtain a lower bound on the polarisability
(\ref{DMOpredict}) as a function of
${\langle r_{\pi}^2 \rangle }$. Their analysis also
tends to prefer larger $\alpha_E$ and/or smaller
${\langle r_{\pi}^2\rangle }$ values.
In the following we use available experimental data to reconstruct the
hadronic spectral function $\rho_{V-A}(s)$, in order to calculate
the integral
\begin{equation}\label{i0}
I_0(s_0) \equiv \int_{4m_{\pi}^2}^{s_0}
\frac{{\rm d}s}{s} \rho_{V-A}(s)
\end{equation}
for $s_0\simeq M_{\tau}^2$, test the saturation of the DMO sum rule
(\ref{DMO}) and its compatibility with the chiral prediction (\ref{chiral}).
We also test the saturation of the first Weinberg sum rule \cite{Weinberg}:
\begin{eqnarray}
W_1(s_0) \equiv \int_{4m_{\pi}^2}^{s_0}
{\rm d}s \, \rho_{V-A}(s)\;;\;\;\;\;\;\;\;\;\;\;
W_1(s_0)\;\bigl|_{s_0\to\infty} = F_{\pi}^2
\label{w1}
\end{eqnarray}
and use the latter to improve convergence and
obtain a more accurate estimate for the integral $I$:
\begin{eqnarray}\label{i1}
I_1(s_0) & = & \int_{4m_{\pi}^2}^{s_0} \frac{{\rm d}s}{s}
\rho_{V-A}(s) \nonumber \\
& + & {{\beta}\over{s_0}}
\left[ F_{\pi}^{2} - \int_{4m_{\pi}^2}^{s_0} {\rm d}s \, \rho_{V-A}(s)
\right]
\end{eqnarray}
Here the parameter $\beta$ is arbitrary and can be chosen to minimize the
estimated error in $I_1$ \cite{markar}.
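For concreteness, once $\rho_{V-A}(s)$ has been reconstructed as a table of values,
the integrals (\ref{i0}), (\ref{w1}) and (\ref{i1}) can be evaluated along the
following lines (a minimal sketch: the array layout and the simple trapezoidal rule
are our own choices, not a description of the actual analysis).
\begin{verbatim}
import numpy as np

F_PI2 = 0.0932**2         # F_pi^2 in GeV^2  (F_pi = 93.2 MeV)

def sum_rule_integrals(s, rho_vma, s0, beta=1.18):
    # s, rho_vma : tabulated s (GeV^2) and rho_V(s) - rho_A(s)
    # s0         : upper integration limit
    # beta       : free parameter of I_1, chosen to minimize the error
    m  = s <= s0
    I0 = np.trapz(rho_vma[m] / s[m], s[m])        # I_0(s0)
    W1 = np.trapz(rho_vma[m], s[m])               # W_1(s0)
    I1 = I0 + (beta / s0) * (F_PI2 - W1)          # I_1(s0)
    return I0, W1, I1
\end{verbatim}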
Yet another way of reducing the uncertainty in our estimate of $I$ is to
use the Laplace-transformed version of the DMO sum rule \cite{marg}:
\begin{equation}
I_2(M^2) = F_{\pi}^2 \frac{\langle r_{\pi}^2 \rangle}{3} - {F_A}
\end{equation}
with $M^2$ being the Borel parameter in the integral
\begin{eqnarray}
I_2(M^2) &\equiv& \int \frac{{\rm d}s}{s} \exp{\left( \frac{-s}{M^2} \right)} \,
\rho_{V-A}(s) + \frac{F_{\pi}^2}{M^2} \nonumber \\
& - & \frac{C_6 \langle O^6\rangle}{6 M^6} -
\frac{C_8 \langle O^8\rangle }{24 M^8} + \ldots \;.
\label{i2}
\end{eqnarray}
Here $ C_6 \langle O^6\rangle $ and $ C_8 \langle O^8\rangle $
are the four-quark vacuum condensates of dimension 6 and 8, whose values
could be estimated theoretically or taken from previous analyses
\cite{markar,kar}.
All three integrals (\ref{i0}), (\ref{i1}) and (\ref{i2}) obviously reduce
to (\ref{i}) as $s_0, M^2 \to \infty$.
\section{Evaluation of the spectral densities}
Recently ALEPH published a comprehensive and consistent
set of $\tau$ branching fractions \cite{aleph}, where in many cases
the errors are
smaller than previous world averages. We have used these values to
normalize the contributions of specific hadronic final states, while
various available experimental data have been used to determine the
shapes of these contributions. Unless stated otherwise,
each shape was fitted with a single
relativistic Breit-Wigner distribution with appropriately chosen threshold
behaviour.
\subsection{ Vector current contributions.}
A recent comparative study {\cite{eidel}}
of corresponding final states in $\tau$ decays and
$e^+e^-$ annihilation
has found no significant violation of CVC or isospin symmetry.
In order to determine the shapes of the hadronic spectra, we have used
mostly $\tau$ decay data, complemented by $e^+e^-$ data in some
cases.
{{ $\pi^-\pi^0$ :}} ${{\rm BR}}=25.30\pm0.20\%$ \cite{aleph},
and the $s$-dependence was
described by the three interfering resonances $\rho(770)$, $\rho(1450)$
and $\rho(1700)$, with the parameters taken from
\cite{PDG} and \cite{pi2}.
{{ $3\pi^{\pm}\pi^0$ :}} ${{\rm BR}}=4.50\pm0.12\%$, including $\pi^-\omega$
final state \cite{aleph}. The shape was determined by
fitting the spectrum measured by ARGUS \cite{pi4a}.
{{ $\pi^{-}3\pi^0$ :}} ${{\rm BR}}=1.17\pm0.14\%$ \cite{aleph}.
The $s$-dependence
is related to that of the reaction $e^+e^-\to 2\pi^+2\pi^-$. We have fitted
the latter measured by OLYA and DM2 \cite{pi4b}.
{{ $6\pi$ :}} various charge contributions give the overall
${{\rm BR}}=0.13\pm0.06\%$ \cite{aleph}, fairly close to CVC expectations
\cite{eidel}.
{{ $\pi^-\pi^0\eta$ :}} ${{\rm BR}}=0.17\pm0.03\%$ \cite{pi0}.
The $s$-dependence
was determined by fitting the distribution measured by CLEO
\cite{pi0}.
{{ $K^-K^0$ :}} ${{\rm BR}}=0.26\pm0.09\%$ \cite{aleph}.
Again, the fit of the CLEO measurement \cite{cleok} was performed.
\subsection{ Axial current contributions.}
The final states with odd number of pions contribute to the axial-vector
current.
Here, $\tau$ decay is the only source of precise information.
{{ $\pi^-$ :}} ${{\rm BR}}=11.06\pm0.18\%$ \cite{aleph}. The single pion
contribution has a trivial $s$-dependence and hence is
explicitly taken into account in theoretical formulae. The quoted
branching ratio corresponds to $F_{\pi}=93.2$ MeV.
{{ $3\pi^{\pm}$ and $\pi^-2\pi^0$ :}}
${{\rm BR}}=8.90\pm0.20\%$ and ${{\rm BR}}=9.21\pm0.17\%$,
respectively \cite{aleph}. Theoretical models \cite{pi3th}
assume that these two modes are identical in both shape and normalization.
The $s$-dependence has been analyzed in \cite{opal},
where the parameters of two theoretical models describing this decay
have been determined. We have used the average of these two distributions,
with their difference taken as an estimate of the shape uncertainty.
{{ $3\pi^{\pm}2\pi^0$ :}} ${{\rm BR}}=0.50\pm0.09\%$,
including $\pi^-\pi^0\omega$
final state \cite{aleph}. The shape was fitted using the
CLEO measurement \cite{pi5a}.
{{ $5\pi^{\pm}$ and $\pi^-4\pi^0$ :}}
${{\rm BR}}=0.08\pm0.02\%$ and ${{\rm BR}}=0.11\pm0.10\%$,
respectively \cite{aleph}. We have assumed that these two terms have the same
$s$-dependence measured in \cite{pi5b}.
\subsection{ $K{\overline K} \pi$ modes.}
$K{\overline K} \pi$ modes can
contribute to both vector and axial-vector currents, and various theoretical
models cover the widest possible range of predictions \cite{kkpith}.
According to \cite{aleph}, all three $K{\overline K} \pi$ modes
(${\overline K}^0K^0\pi^-, K^-K^0\pi^0$ and
$K^-K^+\pi^-$) add up to BR$({\overline K} K \pi)=0.56\pm0.18\%$, in
agreement with other measurements (see \cite{cleok}).
The measured $s$-dependence suggests that these final
states are dominated by $K^*K$ decays \cite{cleok}.
We have fitted the latter, taking into
account the fact that due to parity constraints, vector and axial-vector
$K^*K$ terms have different threshold behaviour. A parameter
$\xi$ was defined as the portion of $K{\overline K} \pi$ final state with
axial-vector quantum numbers, so that
\begin{eqnarray}
{\rm BR}({\overline K} K\pi)_{V}&=&(1-\xi)\;{\rm BR}({\overline K}K\pi)
\nonumber\\
{\rm BR}({\overline K} K\pi)_{A}&=&\xi\;{\rm BR}({\overline K} K\pi)\,.
\end{eqnarray}
\section{Results and conclusions}
\begin{figure}[htb]
\begin{center}
\epsfig{file=fig1.eps,width=10cm,clip=}
\end{center}
\vspace{1cm}
\caption{\tenrm
Difference of vector and axial-vector hadronic spectral densities.
In figs.1-5: the three curves correspond
to $\xi=0$, $0.5$ and 1 from top to bottom;
the errors originating from the shape variation and those coming
from the errors in the branching fractions are roughly equal and have
been added in quadrature to form the error bars, shown only for $\xi=0.5$.}
\label{fig1}
\end{figure}
\begin{figure}[p]
\begin{center}
\epsfig{file=fig2.eps,width=10cm,clip=}
\end{center}
\vspace{1cm}
\caption{\tenrm
Saturation of the DMO sum rule integral (8).
The thick dashed line is the chiral prediction for the asymptotic value
(6) and the dotted lines show its errors.}
\label{fig2}
\begin{center}
\epsfig{file=fig3.eps,width=10cm,clip=}
\end{center}
\vspace{1cm}
\caption{\tenrm
Saturation of the first Weinberg sum rule (9).
The dashed line shows the expected asymptotic value $F_{\pi}^2$.}
\label{fig3}
\end{figure}
The resulting spectral function is shown in Fig.1.
The results of its integration according to (\ref{i0})
are presented in fig.2 as a function of the upper bound $s_0$.
One can see that as $s_0$
increases, $I_0$ converges towards an asymptotic value which we
estimate to be\footnote{In the following, the first error
corresponds to the quadratic sum of the errors in the branching ratios
and the assumed shapes, while the second one arises from the variation
of $\xi$ in the interval $0.5\pm 0.5$.}
\begin{equation}\label{i0m}
I_0 \equiv I_0(\infty) = ( 27.5 \pm 1.4 \pm 1.2 ) \cdot 10^{-3},
\end{equation}
in good agreement with the chiral value (\ref{iexp}).
The saturation of the Weinberg sum rule (\ref{w1}) is shown in fig.3.
One sees that the expected value $F_{\pi}^2$ is well within the errors,
and $\xi \simeq 0.25$--$0.3$ seems to be preferred. No
significant deviation from this sum rule is expected theoretically
\cite{Floratos}, so we use (\ref{i1}) to calculate our second
estimate of the integral $I$. The results of this
integration are presented in fig.4, with the asymptotic value
\begin{equation}\label{i1m}
I_1 \equiv I_1(\infty) = ( 27.0 \pm 0.5 \pm 0.1 ) \cdot 10^{-3},
\end{equation}
corresponding to $\beta\approx 1.18$.
One sees that the convergence has improved, the errors
are indeed much smaller, and the $\xi$-dependence is very weak.
\begin{figure}[p]
\begin{center}
\epsfig{file=fig4.eps,width=10cm,clip=}
\end{center}
\vspace{1cm}
\caption{\tenrm
Saturation of the modified DMO sum rule integral (10).
The chiral prediction is also shown as in fig.2.}
\label{fig4}
\begin{center}
\epsfig{file=fig5.eps,width=10cm,clip=}
\end{center}
\vspace{1cm}
\caption{\tenrm
The Laplace-transformed sum rule (12) as a function of
the Borel parameter $M^2$, compared
to the chiral prediction.}
\label{fig5}
\end{figure}
Now we use (\ref{i2}) to obtain our third estimate of the spectral
integral. The integration results are plotted against the
Borel parameter $M^2$ in fig.5, assuming standard values for dimension
6 and 8 condensates.
The results are independent
of $M^2$ for $M^2 > 1~{\rm GeV}^2$, indicating that higher order terms
are negligible in this region, and giving
\begin{equation}\label{i2m}
I_2\equiv I_2(\infty) = ( 27.2 \pm 0.4 \pm 0.2 \pm 0.3) \, 10^{-3} \; ,
\end{equation}
where the last error reflects the sensitivity of (\ref{i2}) to the variation
of the condensate values.
One sees that these three numbers (\ref{i0m}) -- (\ref{i2m}) are in
good agreement with each other and with the chiral prediction (\ref{iexp}).
Substitution of our most precise result (\ref{i1m}) into (\ref{DMOpredict})
yields for the standard value of the pion charge radius quoted above:
\begin{equation}\label{alem}
\alpha_E = ( 2.64 \pm 0.36 ) \, 10^{-4} \; {{\rm fm}}^3,
\end{equation}
in good agreement
with (\ref{chiral}) and the smallest of the measured values, \cite{Babusci}.
Note that by substituting a larger value
$\langle r_{\pi}^2\rangle = 0.463 \pm 0.006 \; {{\rm fm}}^2$ \cite{gesh},
one obtains
$\alpha_E = (3.44 \pm 0.30) \, 10^{-4} \; {{\rm fm}}^3$,
about two standard deviations higher than
(\ref{chiral}).
In conclusion, we have used recent precise data
to reconstruct the difference in vector and axial-vector hadronic
spectral densities and to study the saturation of Das-Mathur-Okubo
and the first Weinberg sum rules. Two methods of improving convergence
and decreasing the errors have been used.
Within the present level of accuracy, we have found perfect consistency
between $\tau$ decay data, chiral and QCD sum rules, the standard value
of $\langle r_{\pi}^2\rangle$, the average value of $\gamma$ and the chiral
prediction for $\alpha_E$.
Helpful discussions and correspondence with R. Alemany, R. Barlow, M. Lavelle
and P. Poffenberger are gratefully acknowledged.
\newpage
\section{Introduction}
This paper is focused on a topical problem of heavy flavor physics:
exclusive semileptonic (s.l.) decays of low-lying bottom and charm baryons. Recently,
this field has attracted great interest due to the
experiments carried out by the CLEO Collaboration \cite{CLEO} on the
observation of the heavy-to-light s.l. decay $\Lambda_c^+\to\Lambda e^+\nu_e$.
Also the ALEPH~\cite{ALEPH} and OPAL~\cite{OPAL} Collaborations expect in
the near future to observe the exclusive mode $\Lambda_b\to\Lambda_c\ell\nu$.
In ref. \cite{Aniv} a model for QCD bound states composed of
light and heavy quarks was proposed. This model is
the Lagrangian formulation of the NJL model with separable interaction
\cite{Goldman}, but its advantage is the possibility of
studying baryons as relativistic systems of three quarks.
The framework was developed for light mesons~\cite{Aniv} and
baryons~\cite{Aniv,PSI}, and also for heavy-light hadrons~\cite{Manchester}.
The purpose of the present work is to describe the properties of baryons
containing a single heavy quark within the framework proposed in
ref. \cite{Aniv} and developed in refs. \cite{PSI,Manchester}.
Namely, we report the calculation of observables of semileptonic decays of
bottom and charm baryons: Isgur-Wise functions, asymmetry parameters,
decay rates and distributions.
\section{Model}
Our approach \cite{Aniv} is based on the interaction Lagrangians describing
the transition of hadrons into constituent quarks and {\it vice versa}:
\begin{eqnarray}\label{strong}
{\cal L}_B^{\rm int}(x)=g_B\bar B(x)\hspace*{-0.2cm}\int \hspace*{-0.2cm}
dy_1...\hspace*{-0.2cm}\int \hspace*{-0.2cm}dy_3
\delta\left(x-\frac{\sum\limits_i m_iy_i}{\sum\limits_i m_i}\right)
F\left(\sum\limits_{i<j}\frac{(y_i-y_j)^2}{18}\right)
J_B(y_1,y_2,y_3)+h.c.\nonumber
\end{eqnarray}
with $J_B(y_1,y_2,y_3)$ being the 3-quark current with quantum numbers
of a baryon $B$:
\begin{eqnarray}\label{current}
J_B(y_1,y_2,y_3)=\Gamma_1 q^{a_1}(y_1)q^{a_2}(y_2)C\Gamma_2 q^{a_3}(y_3)
\varepsilon^{a_1a_2a_3}\nonumber
\end{eqnarray}
Here $\Gamma_{1(2)}$ are the Dirac matrices, $C=\gamma^0\gamma^2$ is
the charge conjugation matrix, and $a_i$ are the color indices.
We assume that the momentum distribution of the constituents inside
a baryon is modeled by an effective relativistic vertex function
which depends on the sum of relative coordinates only
$F\left(\frac{1}{18}\sum\limits_{i<j}(y_i-y_j)^2\right)$ in the configuration
space where $y_i$ (i=1,2,3) are the spatial 4-coordinates of quarks with
masses $m_i$, respectively. They are expressed through the center of mass
coordinate $(x)$ and relative Jacobi coordinates $(\xi_1,\xi_2)$. The shape
of vertex function is chosen to guarantee ultraviolet convergence of matrix
elements. At the same time the vertex function is a phenomenological
description of the long distance QCD interactions between quarks and gluons.
In the case of light baryons we shall work in the limit of isospin
invariance by assuming that the masses of the $u$ and $d$ quarks are equal to each other,
$m_u=m_d=m$. Breaking of the unitary SU(3) symmetry is taken into account
via a difference of strange and nonstrange quark masses $m_s-m\neq 0$.
In the case of heavy-light baryonic currents we suppose that the heavy quark mass
is much larger than the light quark masses $(m_Q\gg m_{q_1},m_{q_2})$,
i.e. the heavy quark is located at the center of mass of the heavy-light baryon.
Now we discuss the model parameters. First, there are the baryon-quark
coupling constants and the vertex function in the Lagrangian
${\cal L}_B^{\rm int}(x)$.
The coupling constant $g_B$ is calculated from {\it the compositeness
condition}, which means that the renormalization
constant of the baryon wave function is equal to zero,
$Z_B=1-g_B^2\Sigma^\prime_B(M_B)=0$, with $\Sigma_B$ being the baryon mass
operator.
The vertex function is an arbitrary function except that it should make the
Feynman diagrams ultraviolet finite, as we have mentioned above.
We choose in this paper a Gaussian vertex function for simplicity.
In Minkowski space we write $F(k^2_1+k^2_2)=\exp[(k^2_1+k^2_2)/\Lambda_B^2]$
where $\Lambda_B$ is the Gaussian range parameter which is
related to the size of a baryon. It was found \cite{PSI} that for nucleons
$(B=N)$ the value $\Lambda_N=1.25$ GeV gives a good description of the
nucleon static characteristics (magnetic moments, charge radii)
and also form factors in space-like region of $Q^2$ transfer up to 1 GeV$^2$.
In this work we will use the value $\Lambda_{B_q}\equiv\Lambda_N=1.25$ GeV
for light baryons and consider the value $\Lambda_{B_Q}$ for the heavy-light
baryons as an adjustable parameter.
As far as the quark propagators are concerned we shall use the standard form
of light quark propagator with a mass $m_q$
\begin{eqnarray}\label{Slight}
<0|{\rm T}(q(x)\bar q(y))|0>=
\int{d^4k\over (2\pi)^4i}e^{-ik(x-y)}S_q(k), \,\,\,\,\,\,
S_q(k)={1\over m_q-\not\! k}\nonumber
\end{eqnarray}
and the form
\begin{eqnarray}\label{Sheavy}
S(k+v\bar\Lambda_{\{q_1q_2\}})=
\frac{(1+\not\! v)}{2(v\cdot k+\bar\Lambda_{\{q_1q_2\}}+i\epsilon)}
\nonumber
\end{eqnarray}
for the heavy quark propagator obtained in the heavy quark limit (HQL)
$m_Q\to\infty$. The notation is as follows:
$\bar\Lambda_{\{q_1q_2\}}=M_{\{Qq_1q_2\}}-m_Q$ is the difference between
masses of heavy baryon $M_{\{Qq_1q_2\}}\equiv M_{B_Q}$
and heavy quark $m_Q$ in the HQL, $v$ is the four-velocity of heavy baryon.
It is seen that the value $\bar\Lambda_{\{q_1q_2\}}$
depends on a flavor of light quarks $q_1$ and $q_2$. Neglecting
the SU(2)-isotopic breaking gives three independent parameters:
$\bar\Lambda\equiv\bar\Lambda_{uu}=\bar\Lambda_{dd}=\bar\Lambda_{du}$,
$\bar\Lambda_{s}\equiv\bar\Lambda_{us}=\bar\Lambda_{ds}$, and
$\bar\Lambda_{ss}$.
Of course, the deficiency of such a choice of light
quark propagator is the lack of confinement. This
could be corrected by changing the analytic properties of the propagator.
We leave that to a future study. For the time being we shall
avoid the appearance of unphysical imaginary parts in the Feynman diagrams
by restricting the calculations to the following condition:
the baryon mass must be less than the sum of constituent quark masses
$M_B<\sum\limits_i m_{q_i}$.
In the case of heavy-light baryons the restriction
$M_B<\sum\limits_i m_{q_i}$ trivially gives that the parameter
$\bar\Lambda_{\{q_1q_2\}}$ must be less than the
sum of light quark masses $\bar\Lambda_{\{q_1q_2\}} < m_{q_1}+m_{q_2}$. The
last constraint serves as the upper limit for a choice of parameter
$\bar\Lambda_{\{q_1q_2\}}$.
Parameters $\Lambda_{B_Q}$, $m_s$, $\bar\Lambda$ are fixed in this paper
from the description of data on $\Lambda^+_c\to\Lambda^0+e^+ +\nu_e$ decay.
It is found that $\Lambda_Q$=2.5 GeV, $m_s$=570 MeV and
$\bar\Lambda$=710 MeV.
Parameters $\bar\Lambda_s$ and $\bar\Lambda_{\{ss\}}$ cannot be adjusted
at this moment since the experimental data on the decays of heavy-light
baryons having the strange quarks (one or two) are not available. In this
paper we use $\bar\Lambda_s=$850 MeV and $\bar\Lambda_{\{ss\}}=$1000 MeV.
\section{Results}
In this section we give the numerical results for the observables of
semileptonic decays of bottom and charm baryons:
the baryonic Isgur-Wise functions, decay rates and asymmetry parameters.
We check that the $\xi_1$ and $\xi_2$ functions
satisfy the model-independent Bjorken-Xu inequalities.
Also the description of the $\Lambda^+_c\to\Lambda^0+e^+ +\nu_e$
decay which was recently measured by
CLEO Collaboration \cite{CLEO} is given. In what follows we will use
the following values for CKM matrix elements: $|V_{bc}|$=0.04,
$|V_{cs}|$=0.975.
In our calculations of heavy-to-heavy matrix elements we restrict ourselves
to one variant of the three-quark current for each kind of heavy-light
baryon: {\it Scalar current} for $\Lambda_Q$-type baryons and
{\it Vector current} for $\Omega_Q$-type baryons \cite{Shuryak,Manchester}.
The functions $\zeta$ and $\xi_1$ have the upper limit
$\Phi_0(\omega)=\frac{\ln(\omega+\sqrt{\omega^2-1})}{\sqrt{\omega^2-1}}$.
It is easy to show that $\zeta(\omega)=\xi_1(\omega)=\Phi_0(\omega)$
when $\bar\lambda=0$. The radii of $\zeta$ and $\xi_1$
have the lower bound $1/3$, i.e. $\rho^2_{\zeta}\geq 1/3$ and $\rho^2_{\xi_1}\geq 1/3$.
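The value $1/3$ can be traced to the small-recoil behavior of the limiting function:
since $\zeta$ and $\xi_1$ are normalized to unity at zero recoil and bounded from
above by $\Phi_0$, the expansion
\begin{eqnarray}
\Phi_0(\omega)=\frac{\ln(\omega+\sqrt{\omega^2-1})}{\sqrt{\omega^2-1}}
=1-\frac{\omega-1}{3}+{\cal O}\left((\omega-1)^2\right) \nonumber
\end{eqnarray}
shows that the slope of $\Phi_0$ at zero recoil is exactly $-1/3$.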
Increasing the $\bar\lambda$ value leads to
a suppression of the IW functions in the physical
kinematical region of the variable $\omega$.
The IW-functions $\xi_1$ and $\xi_2$ must satisfy two
model-independent Bjorken-Xu inequalities \cite{Xu}
derived from the Bjorken sum rule for semileptonic $\Omega_b$ decays to
ground and low-lying negative-parity excited charmed baryon states in
the HQL
\begin{eqnarray}
& &1\geq B(\omega)=\frac{2+\omega^2}{3}\xi_1^2(\omega)+
\frac{(\omega^2-1)^2}{3}\xi_2^2(\omega)
+\frac{2}{3}(\omega-\omega^3)\xi_1(\omega)\xi_2(\omega)
\label{ineq1}\\
& &\rho^2_{\xi_1}\geq \frac{1}{3}-\frac{2}{3}\xi_2(1)
\label{ineq2}
\end{eqnarray}
\noindent
The inequality (\ref{ineq2}) for the slope of the $\xi_1$-function
is fulfilled automatically because of $\rho^2_{\xi_1} \geq 1/3$ and
$\xi_2(1) > 0$.
From the inequality (\ref{ineq1})
one finds the upper limit for the function $\xi_1(\omega)$:
$\xi_1(\omega)\leq\sqrt{3/(2+\omega^2)}$.
In Fig.1 we plot the $\zeta$ function in the kinematical region
$1\leq \omega \leq \omega_{max}$.
For a comparison the results of other phenomenological
approaches are drawn. There are data of QCD sum
rule~\cite{Grozin}, IMF models~\cite{Kroll,Koerner2},
MIT bag model~\cite{Zalewski}, a simple quark model (SQM)~\cite{Mark1} and
the dipole formula~\cite{Koerner2}. Our result is close to the result of
QCD sum rules~\cite{Grozin}.
In Table 1 our results for total rates are compared with
the predictions of other phenomenological approaches:
constituent quark model \cite{DESY},
spectator quark model \cite{Singleton}, nonrelativistic
quark model \cite{Cheng}.
\newpage
\begin{center}
{\bf Table 1.} Model Results for Rates of Bottom Baryons
(in $10^{10}$ sec$^{-1}$)\\
\end{center}
\begin{center}
\def1.{1.}
\begin{tabular}{|c|c|c|c|c|} \hline
Process & Ref. \cite{Singleton} & Ref. \cite{Cheng} & Ref. \cite{DESY}
& Our results\\
\hline\hline
$\Lambda_b^0\to\Lambda_c^+ e^-\bar{\nu}_e$ & 5.9 & 5.1 & 5.14 & 5.39
\\
\hline
$\Xi_b^0\to\Xi_c^+ e^-\bar{\nu}_e$ & 7.2 & 5.3 & 5.21& 5.27
\\
\hline
$\Sigma_b^+\to\Sigma_c^{++} e^-\bar{\nu}_e$
& 4.3 & & & 2.23 \\
\hline
$\Sigma_b^{+}\to\Sigma_c^{\star ++} e^-\bar{\nu}_e$
& & & &4.56 \\
\hline
$\Omega_b^-\to\Omega_c^0 e^-\bar{\nu}_e$
& 5.4 & 2.3 & 1.52 & 1.87\\
\hline
$\Omega_b^-\to\Omega_c^{\star 0} e^-\bar{\nu}_e$
& & & 3.41 & 4.01 \\
\hline\hline
\end{tabular}
\end{center}
\vspace*{0.4cm}
Now we consider the heavy-to-light semileptonic modes.
In particular, the process $\Lambda^+_c\to\Lambda^0+e^++\nu_e$, which was recently
investigated by the CLEO Collaboration \cite{CLEO}, is studied in detail.
At the HQL ($m_C\to\infty$), the weak hadronic
current of this process is defined by two form factors $f_1$ and $f_2$
\cite{DESY,Cheng}.
Supposing identical dipole forms of the form factors
(as in the model of K\"{o}rner and Kr\"{a}mer \cite{DESY}),
CLEO found that $R=f_2/f_1=$-0.25$\pm$0.14$\pm$0.08. Our form factors have
different $q^2$ dependence. In other words, the quantity $R=f_2/f_1$
has a $q^2$ dependence in our approach. In Fig.10 we plot the results
for $R$ in the kinematical region $1\leq \omega \leq \omega_{max}$ for
different magnitudes of the $\bar\Lambda$ parameter.
Here $\omega$ is the scalar product of four velocities of
$\Lambda_c^+$ and $\Lambda^0$ baryons.
It is seen that increasing $\bar\Lambda$ leads to an
increase of the ratio $R$. The best fit to the experimental data is achieved
when our parameters are $m_s=$570 MeV, $\Lambda_Q=$2.5 GeV
and $\bar\Lambda=$710 MeV. In this case the $\omega$-dependence of the
form factors $f_1$, $f_2$ and their ratio $R$ are drawn in Fig.11.
Particularly, we get $f_1(q^2_{max})$=0.8, $f_2(q^2_{max})$=-0.18,
$R$=-0.22 at zero recoil ($\omega$=1 or q$^2$=q$^2_{max}$) and
$f_1(0)$=0.38, $f_2(0)$=-0.06, $R$=-0.16 at maximum recoil
($\omega=\omega_{max}$ or $q^2$=0).
One has to remark that our results at $q^2_{max}$ are close to the results
of the nonrelativistic quark model \cite{Cheng}:
$f_1(q^2_{max})$=0.75, $f_2(q^2_{max})$=-0.17, $R$=-0.23.
Also our result for $R$ deviates only weakly from the experimental
data \cite{CLEO} $R=-0.25 \pm 0.14 \pm 0.08$ and the result of
nonrelativistic quark model (Ref. \cite{Cheng}). Our prediction for
the decay rate
$\Gamma(\Lambda^+_c\to\Lambda^0e^+\nu_e)$=7.22$\times$ 10$^{10}$ sec$^{-1}$
and asymmetry parameter $\alpha_{\Lambda_c}$=-0.812 also coincides with the
experimental data $\Gamma_{exp}$=7.0$\pm$ 2.5 $\times$ 10$^{10}$ sec$^{-1}$
and $\alpha_{\Lambda_c}^{exp}$=-0.82$^{+0.09+0.06}_{-0.06-0.03}$ and
the data of Ref. \cite{Cheng} $\Gamma$=7.1 $\times$ 10$^{10}$ sec$^{-1}.$
One has to remark that the success in reproducing the experimental
results is connected with the use of the $\Lambda^0$ three-quark current
in the $SU(3)$-flavor symmetric form.
By analogy, in the nonrelativistic quark model \cite{Cheng} assuming
$SU(3)$ flavor symmetry leads to the flavor-suppression
factor $N_{\Lambda_c\Lambda}=1/\sqrt{3}$ in the matrix element of the
$\Lambda_c^+\to\Lambda^0 e^+\nu_e$ decay. If the $SU(3)$ symmetric
structure of the $\Lambda^0$ hyperon is not taken into account, the
predicted rate for $\Lambda_c^+\to\Lambda^0 e^+\nu_e$ becomes too large
(see the discussion in refs. \cite{DESY,Cheng}).
Finally, in Table 2 we give our predictions for some modes of
semileptonic heavy-to-light transitions.
Also the results of other approaches are tabulated.
\vspace*{0.4cm}
\begin{center}
{\bf Table 2.} Heavy-to-Light Decay Rates (in 10$^{10}$ s$^{-1}$).
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Process & Quantity & Ref.\cite{Singleton} & Ref.\cite{Cheng} &
Ref.\cite{Datta}
& Our & Experiment \\
\hline\hline
$\Lambda_c^+\to\Lambda^0 e^+\nu_e$ & $\Gamma$ & 9.8 & 7.1 &
5.36 & 7.22 & 7.0$\pm$ 2.5 \\
\hline
$\Xi_c^0\to\Xi^- e^+\nu_e$ & $\Gamma$ & 8.5 & 7.4 & & 8.16 & \\
\hline
$\Lambda_b^0\to p e^-\bar\nu_e$ & $\Gamma/|V_{bu}|^2$ & & & 6.48$\times$ 10$^2$ &
7.47$\times$ 10$^2$ &\\
\hline
$\Lambda_c^+\to ne^+\nu_e$ & $\Gamma/|V_{cd}|^2$ & & & &
0.26$\times$ 10$^2$ & \\
\hline\hline
\end{tabular}
\end{center}
\vspace*{.5cm}
\section{Acknowledgements}
We would like to thank J\"{u}rgen K\"{o}rner and Peter Kroll for useful
discussions. This work was supported in part by the INTAS Grant 94-739,
the Heisenberg-Landau Program, by the Russian Fund of
Fundamental Research (RFFR) under contract 96-02-17435-a and the
State Committee of the Russian Federation for
Education (project N 95-0-6.3-67,
Grant Center at St.-Petersburg State University).
\section{INTRODUCTION}
This talk reviews the status of two distinct though related subjects:
our understanding of chiral extrapolations,
and results for weak matrix elements involving light (i.e. $u$, $d$
and $s$) quarks.
The major connection between these subjects
is that understanding chiral extrapolations allows
us to reduce, or at least estimate, the errors in matrix elements.
Indeed, in a number of matrix elements, the dominant errors are those
due to chiral extrapolation and to the use of the quenched approximation.
I will argue that an understanding of chiral extrapolations gives us
a handle on both of these errors.
While understanding errors in detail is a sign of a maturing field,
we are ultimately interested in the results for the matrix elements
themselves.
The phenomenological implications of these results were emphasized
in previous reviews \cite{martinelli94,soni95}.
Here I only note that it is important to calculate matrix elements
for which we know the answer, e.g. $f_\pi/M_\rho$ and $f_K/f_\pi$,
in order to convince ourselves, and others, that our predictions for
unknown matrix elements are reliable.
The reliability of a result for $f_D$, for example, will be gauged
in part by how well we can calculate $f_\pi$.
And it would be a real coup if we were able to show in detail that
QCD indeed explains the $\Delta I=1/2$ rule in $K\to\pi\pi$ decays.
But to me the most interesting part of the enterprise is the possibility
of using the lattice results to calculate quantities which allow us to
test the Standard Model. In this category, the light-quark matrix element
with which we have had the most success is $B_K$.
The lattice result is already used by phenomenological analyses which
attempt to determine the CP violation in the CKM matrix
from the experimental number for $\epsilon$.
I describe below the latest twists in the saga of the lattice
result for $B_K$.
What we would like to do is extend this success to the raft of
$B$-parameters which are needed to predict $\epsilon'/\epsilon$.
There has been some progress this year on the
contributions from electromagnetic penguins,
but we have made no headway towards calculating strong penguins.
I note parenthetically that another input into the prediction of
$\epsilon'/\epsilon$ is the strange quark mass.
The recent work of Gupta and Bhattacharya \cite{rajanmq96}
and the Fermilab group \cite{mackenzie96}
suggest a value of $m_s$ considerably smaller than the
accepted phenomenological estimates, which will substantially
increase the prediction for $\epsilon'/\epsilon$.
Much of the preceding could have been written in 1989, when I
gave the review talk on weak matrix elements \cite{sharpe89}.
How has the field progressed since then?
I see considerable advances in two areas.
First, the entire field of weak matrix elements involving heavy-light
hadrons has blossomed. 1989 was early days in our calculation of the
simplest such matrix elements, $f_D$ and $f_B$. In 1996 a plethora
of quantities are being calculated, and the subject deserves its own
plenary talk \cite{flynn96}.
Second, while the subject of my talks then and now is similar, there has
been enormous progress in understanding and reducing systematic errors.
Thus, whereas in 1989 I noted the possibility of using chiral loops
to estimate quenching errors, we now have a technology (quenched chiral
perturbation theory---QChPT) which allows us to make these estimates.
We have learned that the quenched approximation (QQCD) is most probably
singular in the chiral limit, and there is a growing body of numerical
evidence showing this, although the case is not closed.
The reduction in statistical errors has allowed us to
go beyond simple linear extrapolations in light quark masses,
and thus begin to test the predictions of QChPT.
The increase in computer power has allowed us to study systematically the
dependence of matrix elements on the lattice spacing, $a$.
We have learned how to get more reliable estimates
using lattice perturbation theory \cite{lepagemackenzie}.
And, finally, we have begun to use non-perturbative matching of lattice
and continuum operators, as discussed here by Rossi \cite{rossi96}.
The body of this talk is divided into two parts. In the first,
Secs. \ref{sec:whychi}-\ref{sec:querr}, I focus on chiral extrapolations:
why we need them, how we calculate their expected form,
the evidence for chiral loops, and how we can use them to estimate
quenching errors.
In the second part, Secs. \ref{sec:decayc}-\ref{sec:otherme},
I give an update on results for weak matrix elements.
I discuss $f_\pi/M_\rho$, $f_K/f_\pi$, $B_K$ and a few related $B$-parameters.
Results for structure functions have been reviewed here
by G\"ockeler \cite{gockeler96}.
There has been little progress on semi-leptonic form factors,
nor on flavor singlet matrix elements and scattering lengths,
since last year's talks by Simone \cite{simone95} and Okawa \cite{okawa95}.
\section{WHY DO WE NEED QChPT?}
\label{sec:whychi}
Until recently linear chiral extrapolations
(i.e. $\alpha + \beta m_q$) have sufficed for most quantities.
This is no longer true.
This change has come about for two reasons. First, smaller statistical
errors (and to some extent the use of a larger range of quark masses)
have exposed the inadequacies of linear fits, as already stressed here
by Gottlieb \cite{gottlieb96}.
An example, taken from Ref. \cite{ourspect96}
is shown in Fig. \ref{fig:NDelta}.
The range of quark masses is $m_s/3- 2 m_s$,
typical of that in present calculations.
While $M_\Delta$ is adequately fit by a straight line,
there is definite, though small, curvature in $M_N$.
The curves are the result of a fit using QChPT \cite{sharpebary},
to which I return below.
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig1.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{$M_N$ and $M_\Delta$ versus $M_\pi^2$,
with quenched Wilson fermions, at $\beta=6$, on $32^3\times64$
lattices. The vertical band is the range of estimates for $m_s$.}
\vspace{-0.2truein}
\label{fig:NDelta}
\end{figure}
The second demonstration of the failure of linear extrapolations
has come from studying the octet baryon mass splittings \cite{ourspect96}.
The new feature here is the consideration of baryons
composed of non-degenerate quarks.
I show the results for $(M_\Sigma-M_\Lambda)/(m_s-m_u)$ (with $m_u=m_d$)
in Fig. \ref{fig:sigdel}.
If baryons masses were linear functions of $m_s$ and $m_u$
then the data would lie on a horizontal line.
Instead the results vary by a factor of four.
This is a glaring deviation from linear behavior,
in contrast to the subtle effect shown in Fig. \ref{fig:NDelta}.
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig2.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{Results for $(M_\Sigma-M_\Lambda)/(m_s-m_u)$ (diamonds).
The ``burst'' is the physical result using the linear fit shown.
Crosses are from a global chiral fit.}
\vspace{-0.2truein}
\label{fig:sigdel}
\end{figure}
Once there is evidence of non-linearity, we need to know
the appropriate functional form with which to extrapolate to
the chiral limit. This is where (Q)ChPT comes in.
In the second example, the prediction is \cite{LS,sharpebary}
\begin{eqnarray}
\lefteqn{{M_\Sigma - M_\Lambda \over m_s-m_u} \approx} \nonumber\\
&& - {8 D\over3}
+ d_1 {M_K^3-M_\pi^3 \over m_s-m_u}
+ d'_1 {M_{ss}^3-M_\pi^3 \over m_s-m_u} \nonumber \\
&& + e_1 (m_u + m_d + m_s) + e'_1 (m_u + m_s) \,,
\label{eq:chptsiglam}
\end{eqnarray}
where $M_{ss}$ is the mass of the quenched $\bar ss$ meson.\footnote{%
I have omitted terms which, though more singular in the chiral limit,
are expected to be numerically small for the range of $m_q$ under study.}
QChPT also implies that $|d'_1| < |d_1|$, which is why in
Fig. \ref{fig:sigdel} I have plotted the data versus
$(M_K^3-M_\pi^3)/(m_s-m_u)$. The good news is that the points do lie
approximately on a single curve---which would not be true for a poor
choice of y-axis. The bad news is that the fit to the
$d_1$ term and a constant is not that good.
This example illustrates the benefits which can accrue if one knows
the chiral expansion of the quantity under study.
First, the data collapses onto a single curve, allowing
an extrapolation to the physical quark masses.
And, second, the theoretical input reduces the length of the
extrapolation.
In Fig. \ref{fig:sigdel}, for example, to reach the physical point requires an
extrapolation by less than a factor of 2.
This is much smaller than the ratio of the lightest quark mass to
the physical value of $(m_u+m_d)/2$---a factor of roughly 8.
In fact, in the present example,
the data are not yet accurate enough to distinguish between
the $d$ and $e$ terms in Eq. \ref{eq:chptsiglam}.
A global fit of the QChPT prediction to this and other mass differences
(the results of which are shown in the Figure)
implies that both types of term are present \cite{sharpebary}.
\section{QUENCHED CHIRAL PERTURBATION THEORY}
\label{sec:qchpt}
This brings me to a summary of QChPT.
In QCD, chiral perturbation theory predicts the form of the chiral expansion
for quantities involving one or more light quarks.
The expansion involves terms analytic in $m_q$ and the external momenta,
and non-analytic terms due to pion loops.\footnote{``Pion'' here refers to
any of the pseudo-Goldstone bosons.}
The analytic terms are restricted in form,
though not in magnitude, by chiral symmetry,
while the non-analytic terms are completely predicted given the
analytic terms.
The same type of expansions can be developed in quenched QCD using QChPT.
The method was worked out in Refs. \cite{morel,sharpestcoup,BGI,sharpechbk},
with the theoretically best motivated formulation
being that of Bernard and Golterman \cite{BGI}.
Their method
gives a precise meaning to the quark-flow diagrams which I use below.
They have extended it also to partially quenched theories
(those having both valence and sea quarks but with different masses)
\cite{BGPQ}.
Results are available for pion properties and the condensate \cite{BGI},
$B_K$ and related matrix elements \cite{sharpechbk},
$f_B$, $B_B$ and the Isgur-Wise function \cite{booth,zhang},
baryon masses \cite{sharpebary,LS},
and scattering lengths \cite{BGscat}.
I will not describe technical details, but rather focus on
the aims and major conclusions of the approach.
For a more technical review see Ref. \cite{maartench}.
As I see it, the major aims of QChPT are these. \\
$\bullet\ $
To predict the form of the chiral expansion, which can then be
used to fit and extrapolate the data. This was the approach taken above
for the baryon masses. \\
$\bullet\ $
To estimate the size of the quenching error by comparing the
contribution of the pion loops in QCD and QQCD,
using the coefficients of the chiral fits in QCD (from phenomenology)
and in QQCD (from a fit to the lattice data).
I return to such estimates in Sec. \ref{sec:querr}.
I begin by describing the form of the chiral expansions,
and in particular how they are affected by quenching.
The largest changes are to the non-analytic terms,
i.e. those due to pion loops.
There are two distinct effects.\\
(1) Quenching removes loops which, at the underlying quark level,
require an internal loop.
This is illustrated for baryon masses in Fig. \ref{fig:quarkloops}.
Diagrams of types (a) and (b) contribute in QCD,
but only type (b) occurs in QQCD.
These loops give rise to $M_\pi^3$ terms in the chiral expansions.
Thus for baryon masses, quenching only changes the coefficient of
these terms. In other quantities, e.g. $f_\pi$, they are
removed entirely. \\
(2) Quenching introduces artifacts due to $\eta'$ loops---as
in Fig. \ref{fig:quarkloops}(c).
These are chiral loops because the $\eta'$ remains light in QQCD.
Their strength is determined by the size of the ``hairpin'' vertex,
and is parameterized by $\delta$ and $\alpha_\Phi$
(defined in Sec. \ref{sec:chevidence} below).
\begin{figure}[tb]
\centerline{\psfig{file=fig3.ps,height=2.4truein}}
\vspace{-0.3truein}
\caption{Quark flow diagrams for $M_{\rm bary}$.}
\vspace{-0.2truein}
\label{fig:quarkloops}
\end{figure}
The first effect of quenching is of greater practical importance,
and dominates most estimates of quenching errors.
It does not change the form of the chiral expansion,
only the size of the terms.
The second effect leads to new terms in the chiral expansion,
some of which are singular in the chiral limit.
If these terms are large,
then one can be sure that quenching errors are large.
One wants to work at large enough quark mass so that these new terms are
numerically small. In practice this means keeping $m_q$ above
about $m_s/4-m_s/3$.
I illustrate these general comments using the results of
Labrenz and myself for baryon masses \cite{LS}.
The form of the chiral expansion in QCD is\footnote{%
There are also $M_\pi^4 \log M_\pi$ terms in both QCD and QQCD,
the coefficients of which have not yet been calculated in QQCD.
For the limited range of quark masses used in simulations,
I expect that these terms can be adequately represented by the
$M_\pi^4$ terms, whose coefficients are unknown parameters.}
\begin{equation}
M_{\rm bary} = M_0 + c_2 M_\pi^2 + c_3 M_\pi^3 + c_4 M_\pi^4 + \dots
\end{equation}
with $c_3$ predicted in terms of $g_{\pi NN}$ and $f_\pi$.
In QQCD
\begin{eqnarray}
\lefteqn{
M_{\rm bary}^Q = M_0^Q + c_2^Q M_\pi^2 + c_3^Q M_\pi^3 + c_4^Q M_\pi^4 + }
\nonumber \\
&& \delta \left( c_1^Q M_\pi + c_2^Q M_\pi^2 \log M_\pi\right) + \alpha_\Phi
\tilde c_3^Q M_\pi^3 + \dots
\label{eq:baryqchpt}
\end{eqnarray}
The first line has the same form as in QCD,
although the constants multiplying the analytic terms in the two theories
are unrelated.
$c_3^Q$ is predicted to be non-vanishing, though different from $c_3$.
The second line is the contribution of $\eta'$ loops and is a quenched
artifact.
Note that it is the dominant correction in the chiral limit.
In order to test QChPT in more detail,
I have attempted to fit the expressions outlined above
to the octet and decuplet baryon masses from Ref. \cite{ourspect96}.
There are 48 masses, to be fit in terms of 19 underlying
parameters: the octet and decuplet masses in the chiral limit,
3 constants of the form $c_2^Q$, 6 of the form $c_4^Q$,
6 pion-nucleon couplings, $\delta$ and $\alpha_\Phi$.
I have found a reasonable description of the data with $\delta\approx0.1$,
but the errors are too large to pin down the values
of all the constants \cite{sharpebary}.
Examples of the fit are shown in Figs. \ref{fig:NDelta} and \ref{fig:sigdel}.
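To make the fitting procedure concrete: for fixed $\delta$ (and with the
$\alpha_\Phi \tilde c_3^Q$ piece absorbed into the $M_\pi^3$ coefficient) the
degenerate-quark form of Eq.~\ref{eq:baryqchpt} is linear in the remaining constants,
so one can scan $\delta$ and solve a weighted linear least-squares problem at each
value. The following schematic sketch treats a single baryon channel; the actual fit
couples all channels through common couplings.
\begin{verbatim}
import numpy as np

def fit_channel(Mpi, M, sig, delta):
    # one-channel, degenerate-quark form of the QQCD expansion;
    # for fixed delta the coefficients below enter linearly
    X = np.column_stack([
        np.ones_like(Mpi),                        # M_0^Q
        delta * Mpi,                              # c_1^Q (quenched artifact)
        Mpi**2 + delta * Mpi**2 * np.log(Mpi),    # c_2^Q
        Mpi**3,                                   # c_3^Q (+ alpha_Phi part)
        Mpi**4,                                   # c_4^Q
    ])
    w = 1.0 / sig
    coeff, *_ = np.linalg.lstsq(X * w[:, None], M * w, rcond=None)
    chi2 = np.sum(((X @ coeff - M) / sig) ** 2)
    return coeff, chi2

# scan delta and keep the value that minimizes chi^2, e.g.
# best = min((fit_channel(Mpi, M, sig, d)[1], d) for d in np.linspace(0, 0.3, 31))
\end{verbatim}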
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig4.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{Contributions to $M_N - M_0^Q$ in the global chiral fit.
All quantities in lattice units.
Vertical lines indicate range of estimates of $m_s$.}
\vspace{-0.2truein}
\label{fig:mnchfit}
\end{figure}
Although this is a first, rather crude, attempt at such a fit,
several important lessons emerge.\\
(1) For present quark masses, one needs several terms in the chiral
expansion to fit the data.
This in turn requires that one have high statistics results for a
number of light quark masses. \\
(2) One must check that the ``fit'' is not resulting from
large cancellations between different orders.
The situation for my fit is illustrated by Fig. \ref{fig:mnchfit},
where I show the different contributions to $M_N$.
Note that the relative size of the terms
is not determined by $M_N$, but rather by the fit as a whole.
The most important point is that the $M_\pi^4$ terms are considerably smaller
than those of $O(M_\pi^2)$ up to at least $m_q= m_s$.
The $M_\pi^3$ terms are part of a different series and need not be
smaller than those of $O(M_\pi^2)$.
Similarly the $M_\pi$ terms are the leading quenched artifact, and
should not be compared to the other terms.
Thus the convergence is acceptable for $m_q < m_s$, though it is dubious for
the highest mass point. \\
(3) The artifacts (in particular the $\delta M_\pi$ terms)
can lead to unusual behavior at small $M_\pi$,
as illustrated in the fit to $M_\Delta$ (Fig. \ref{fig:NDelta}). \\
(4) Since the ``$\delta$-terms'' are artifacts of quenching,
and their relative contribution increases as $M_\pi\to0$,
it makes more sense phenomenologically {\em to extrapolate
without including them}. In other words, a better estimate of the
unquenched value for $M_\Delta$ in the chiral limit
can probably be obtained simply using a linear extrapolation in $M_\pi^2$.
This is, however, a complicated issue which needs more thought.\\
(5) The output of the fit includes pion-nucleon couplings whose
values should be compared to more direct determinations. \\
(6) Finally, the fact that a fit can be found at all
gives me confidence to stick my neck out and
proceed with the estimates of quenching errors in baryon masses.
It should be noted, however, that a fit involving only analytic terms,
including up to $M_\pi^6$, can probably not be ruled out.
What of quantities other than baryon masses?
In Sec. \ref{subsec:bkchfit} I discuss fits to $B_K$,
another quantity in which chiral loops survive quenching.
The data is consistent with the non-analytic term predicted by QChPT.
Good data also exists for $M_\rho$.
It shows curvature,
but is consistent with either cubic or quartic terms\cite{sloan96}.
What do we expect from QChPT?
In QCD the chiral expansion for $M_\rho$
has the same form as for baryon masses \cite{rhochpt}.
The QChPT calculation has not been done,
but it is simple to see that the form will be as for baryons,
Eq.~\ref{eq:baryqchpt}, {\em except that $c_3^Q=0$}.
Thus an $M_\pi^3$ term is entirely a quenched
artifact---and a potential window on $\alpha_\Phi$.
What of quantities involving pions, for which there is very good data?
For the most part, quenching simply
removes the non-analytic terms of QCD and replaces them with artifacts
proportional to $\delta$.
The search for these is the subject of the next section.
\section{EVIDENCE FOR $\eta'$ LOOPS}
\label{sec:chevidence}
The credibility of QChPT rests in part on the observation of
the singularities predicted in the chiral limit.
If such quenched artifacts are present,
then we need to study them if only to know at
what quark masses to work in order to avoid them!
What follows in this section is an update of the 1994 review of Gupta
\cite{gupta94}.
The most direct way of measuring $\delta$ is from the $\eta'$ correlator.
If the quarks are degenerate, then the part of the quenched chiral
Lagrangian bilinear in the $\eta'$ is \cite{BGI,sharpechbk}
\begin{eqnarray}
2 {\cal L}_{\eta'} &=& \partial_\mu \eta' \partial^\mu \eta' - M_\pi^2 \eta'^2
\\
&&+ {(N_f/3)} \left( \alpha_\Phi \partial_\mu \eta' \partial^\mu \eta' -
m_0^2 \eta'^2 \right) \,.
\end{eqnarray}
$\delta$ is defined by $\delta = m_0^2/(48 \pi^2 f_\pi^2)$.
In QQCD the terms in the second line must be treated as vertices, and
cannot be iterated. They contribute to the disconnected part of the
$\eta'$ correlator, whereas the first line determines the connected part.
Thus the $\eta'$ is degenerate with the pion, but has additional vertices.
To study this, various groups have looked at the ratio of the
disconnected to connected parts, at $\vec p=0$, whose predicted form
at long times is
\begin{equation}
R(t) = t (N_f/3) (m_0^2 - \alpha_\Phi M_\pi^2)/(2 M_\pi) \,.
\end{equation}
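Schematically, once the disconnected and connected correlators have been measured,
the combination $m_0^2-\alpha_\Phi M_\pi^2$ follows from the slope of a linear fit to
$R(t)$ over a suitable time window; as discussed below, excited-state contamination
also produces a linear-looking rise, so the choice of window (and of smeared sources)
matters. A minimal sketch, with array names of our own choosing:
\begin{verbatim}
import numpy as np

def hairpin_from_R(t, R, Mpi, Nf, tmin, tmax):
    sel = (t >= tmin) & (t <= tmax)
    slope, intercept = np.polyfit(t[sel], R[sel], 1)   # fit R(t) ~ a + b*t
    return 2.0 * Mpi * slope * 3.0 / Nf                # = m_0^2 - alpha_Phi*M_pi^2
\end{verbatim}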
I collect the results in Table \ref{tab:etapres}, converting
$m_0^2$ into $\delta$ using $a$ determined from $m_\rho$.
\begin{table}[tb]
\caption{Results from the quenched $\eta'$ two point function.
$W$ and $S$ denote Wilson and staggered fermions.}
\label{tab:etapres}
\begin{tabular}{ccccc}
\hline
Ref. & Yr. &$\delta$&$\alpha_\Phi$ &$\beta$ W/S \\
\hline
JLQCD\cite{kuramashi94} &94&$0.14 (01)$ & $0.7$ & $5.7$ W \\
OSU\cite{kilcup95}&95& $0.27 (10)$ & $0.6$ & $6.0$ S \\
Rome\cite{masetti96}&96& $\approx 0.15$ & & $5.7$ W \\
OSU\cite{venkat96}& 96&$0.19(05)$ & $0.6$ & $6.0$ S \\
FNAL\cite{thacker96} &96& $< 0.02$ & $>0$ & $5.7$ W \\
\hline
\end{tabular}
\vspace{-0.2truein}
\end{table}
All groups except Ref. \cite{thacker96} report a non-zero value for $\delta$
in the range $0.1-0.3$.
What they actually measure, as illustrated in Fig. \ref{fig:osuR},
is the combination $m_0^2-\alpha_\Phi M_\pi^2$,
which they then extrapolate to $M_\pi=0$.
I have extracted the results for $\alpha_\Phi$ from such plots.
As the figure shows, there is a considerable cancellation between the
$m_0$ and $\alpha_\Phi$ terms at the largest quark masses, which
correspond to $M_\pi \approx 0.8\,$GeV.
This may explain why Ref. \cite{thacker96}
does not see a signal for $\delta$.
\begin{figure}[tb]
\vspace{-0.3truein}
\centerline{\psfig{file=fig5.ps,height=2.4truein}}
\vspace{-0.5truein}
\caption{$a^2(m_0^2-\alpha_\Phi M_\pi^2)$ from the OSU group.}
\vspace{-0.2truein}
\label{fig:osuR}
\end{figure}
Clearly, further work is needed to
sort out the differences between the various groups.
As emphasized by Thacker \cite{thacker96}, this is mainly an issue of
understanding systematic errors.
In particular, contamination from excited states leads to an
apparent linear rise of $R(t)$, and thus to an overestimate of $m_0^2$.
Indeed, the difference between the OSU results last year and this
is the use of smeared sources to reduce such contamination.
This leads to a smaller $\delta$, as shown in Fig. \ref{fig:osuR}.
Ref. \cite{thacker96} also find that $\delta$ decreases as the volume
is increased.
I want to mention that the $\eta'$ correlator has also been studied
in partially quenched theories, with $N_f=-6, -4, -2$,
\cite{masetti96} and $N_f=2, 4$ \cite{venkat96}.
The former work is part of the ``bermion'' program which aims to
extrapolate from negative to positive $N_f$.
For any non-zero $N_f$ the analysis is different than in the quenched
theory, because the hairpin vertices do iterate, and
lead to a shift in the $\eta'$ mass. Indeed $m_{\eta'}$ is reduced (increased)
for $N_f<0$ ($>0$), and both changes are observed!
This gives me more confidence in the results of these groups at $N_f=0$.
The bottom line appears to be
that there is relatively little dependence of $m_0^2$ on $N_f$.
Other ways of obtaining $\delta$ rely on loop effects,
such as that in Fig. \ref{fig:quarkloops}(c).
For quenched pion masses $\eta'$ loops lead to terms which are
singular in the chiral limit \cite{BGI}\footnote{%
The $\alpha_\Phi$ vertex leads to terms proportional
to $M_\pi^2 \log M_\pi$ which are not singular in the chiral limit,
and can be represented approximately by analytic terms.}
\begin{eqnarray}
\lefteqn{{M_{12}^2 \over m_1 + m_2} = \mu^Q \left[
1 - \delta \left\{\log{\widetilde M_{11}^2\over \Lambda^2} \right. \right.}
\nonumber \\
&& \left.\left.
+ {\widetilde M_{22}^2 \over \widetilde M_{22}^2 -\widetilde M_{11}^2 }
\log{\widetilde M_{22}^2 \over \widetilde M_{11}^2} \right\}
+ c_2 (m_1 + m_2) \right]
\label{eq:mpichpt}
\end{eqnarray}
Here $M_{ij}$ is the mass of a pion
composed of a quark of mass $m_i$ and an antiquark of mass $m_j$,
$\Lambda$ is an unknown scale, and $c_2$ an unknown constant.
The tilde is relevant only to staggered fermions, and indicates that it
is the mass of the flavor singlet pion,
and not of the lattice pseudo-Goldstone pion, which appears.
This is important because, at finite lattice spacing,
$\tilde M_{ii}$ does not vanish in the chiral limit,
so there is no true singularity.
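Before turning to the fits, it may help to spell out the fit function. The sketch below evaluates the expression above for degenerate quarks, with a lowest-order ansatz for the flavour-singlet mass $\widetilde M_{ii}^2$ and invented parameter values, simply to exhibit the rise of $M_\pi^2/m_q$ at small quark mass; none of the numbers are from an actual simulation.
\begin{verbatim}
import numpy as np

def mpi2_over_mq(m1, m2, mu_Q, delta, Lambda2, c2, Mtil2_of_m):
    """QChPT form for M_{12}^2/(m1+m2).  Mtil2_of_m(m) returns the
    flavour-singlet pion mass squared for a degenerate pair of mass m."""
    M11, M22 = Mtil2_of_m(m1), Mtil2_of_m(m2)
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(np.isclose(M11, M22), 1.0,
                         M22/(M22 - M11)*np.log(M22/M11))
    return mu_Q*(1.0 - delta*(np.log(M11/Lambda2) + ratio) + c2*(m1 + m2))

# Hypothetical lowest-order relation, with a small O(a^2) singlet shift
mu_Q, a2_shift = 2.5, 0.01
Mtil2 = lambda m: 2.0*mu_Q*m + a2_shift

mq = np.array([0.005, 0.01, 0.02, 0.04])   # quark masses, hypothetical
print(mpi2_over_mq(mq, mq, mu_Q, delta=0.1, Lambda2=1.0, c2=0.5,
                   Mtil2_of_m=Mtil2))
\end{verbatim}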
In his 1994 review, Gupta fit the world's data for
staggered fermions at $\beta=6$ with $m_1=m_2$.
I have updated his plot, including new JLQCD data \cite{yoshie96},
in Fig. \ref{fig:mpibymq}. To set the scale, note that $m_s a \approx 0.024$.
The dashed line is Gupta's fit to Eq. \ref{eq:mpichpt},
giving $\delta=0.085$,
while the solid line includes also an $m_q^2$ term, and gives $\delta=0.13$.
These non-zero values were driven by the results from Kim and Sinclair (KS),
who use quark masses as low as $0.1 m_s$ \cite{kimsinclair},
but they are now supported by the JLQCD results.
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig6.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{Chiral fit to $\log(M_\pi^2/m_q)$ at $\beta=6$.
Some points have been offset for clarity.}
\vspace{-0.2truein}
\label{fig:mpibymq}
\end{figure}
Last year, Mawhinney proposed an alternative
explanation for the increase visible at small $m_q$ \cite{mawhinney95},
namely an offset in the intercept of $M_\pi^2$
\begin{equation}
M_{12}^2 = c_0 + \mu^Q (m_1 + m_2) + \dots
\label{eq:mpimawh}
\end{equation}
In his model, $c_0$ is proportional to the
minimum eigenvalue of the Dirac operator, and thus falls as $1/V$.
This model explains the detailed structure of his results for
$M_\pi^2$ and $\langle \bar\psi\psi \rangle$ at $\beta=5.7$.
It also describes the data of KS, {\em except for volume dependence of $c_0$}.
As Fig. \ref{fig:mpibymq} shows,
the results of KS from $24^3$ and $32^3$ lattices are consistent,
whereas in Mawhinney's model the rise at small $m_q$
should be reduced in amplitude by a factor of $0.4$ on the larger lattice.
The fit in Fig. \ref{fig:mpibymq} is for pions composed of degenerate quarks.
One can further test QChPT by noting that Eq. \ref{eq:mpichpt} is
not simply a function of the average quark mass---there is a predicted
dependence on $m_1-m_2$. In Mawhinney's model, this dependence would
presumably enter only through a $(m_1-m_2)^2$ term, and thus would be
a weaker effect.
JLQCD have extensive data from the range $\beta=5.7-6.4$,
with both $m_1=m_2$ and $m_1\ne m_2$.
They have fit to Eq. \ref{eq:mpichpt}, and thus obtained
$\delta$ as a function of $\beta$.
They find reasonable fits, with $\delta\approx 0.06$ for most $\beta$.
I have several comments on their fits.
First, they have used $M_{ii}$ rather than $\widetilde M_{ii}$,
which leads to an underestimate of $\delta$, particularly at the
smaller values of $\beta$.
Second, their results for the constants, particularly $c_2$, vary rapidly with
$\beta$. One would expect that all dimensionless parameters in the fit
(which are no less physical than, say, $f_\pi/M_\rho$)
should vary smoothly and slowly with $\beta$.
This suggests to me that terms of $O(m_q^2)$ may be needed.
Finally, it would be interesting to attempt a fit to the JLQCD data
along the lines suggested by Mawhinney, but including a $(m_1-m_2)^2$ term.
Clearly more work is needed to establish convincingly that there
are chiral singularities in $M_\pi$.
One should keep in mind that the effects are small,
$\sim 5\%$ at the lightest $m_q$,
so it is impressive that we can study them at all.
Let me mention also some other complications.\\
(1) It will be hard to see the ``singularities'' with staggered fermions
for $\beta<6$. This is because $\widetilde M_{ii} - M_{ii}$ grows
like $a^2$ (at fixed physical quark mass).
Indeed, by $\beta=5.7$ the flavor singlet pion has a
mass comparable to $M_\rho$!
Thus the $\eta'$ is no longer light,
and its loop effects will be suppressed.
In fact, the rise in $M_\pi^2/m_q$ as $m_q\to0$ for $\beta=5.7$ is very
gradual\cite{gottlieb96}, and could be due to the $c_2$ term.\\
(2) It will be hard to see the singularities using Wilson fermions.
This is because we do not know, {\it a priori}, where $m_q$ vanishes,
and, as shown by Mawhinney, it is hard to distinguish
the log divergence of QChPT from an offset in $m_q$.\\
(3) A related log divergence is predicted for $\langle\bar\psi\psi\rangle$,
which has not been seen so far \cite{kimsinclair,mawhinney95}.
It is not clear to me that this is a problem for QChPT, however,
because it is difficult to extract the non-perturbative part of
$\langle\bar\psi\psi\rangle$ from the quadratically divergent perturbative
background.
Two other quantities give evidence concerning $\delta$.
The first uses the ratio of decay constants
\begin{equation}
R_{BG} = f_{12}^2/(f_{11} f_{22}) \,.
\end{equation}
This is designed to cancel the analytic terms proportional
to $m_q$ \cite{BGI}, leaving a non-analytic term proportional to $\delta$.
The latest analysis finds $\delta\approx 0.14$ \cite{guptafpi}.
It is noteworthy that a good fit once again requires
the inclusion of $O(m_q^2)$ terms.
The second quantity is the double difference
\begin{equation}
{\rm ES}2 = (M_\Omega - M_\Delta) - 3 (M_{\Xi^*} - M_{\Sigma^*}) \,,
\end{equation}
which is one measure of the breaking of the equal spacing rule for decuplets.
This is a good window on artifacts due to quenching because
its expansion begins at $O(M_\pi^5)$ in QCD, but contains terms
proportional to $\delta M_\pi^2 \log M_\pi$ in QQCD \cite{LS}.
The LANL group finds that ${\rm ES}2$ differs from zero
by 2-$\sigma$ \cite{ourspect96},
and I find that the data can be fit with $\delta\approx 0.1$
\cite{sharpebary}.
In my view, the preponderance of the evidence suggests a value of
$\delta$ in the range $0.1-0.2$. All extractions are complicated by
the fact that the effects proportional to $\delta$ are small with
present quark masses.
To avoid them, one should use quark masses above $m_s/4-m_s/3$.
This is true not only for the light quark quantities
discussed above, but also for heavy-light quantities such as $f_B$.
This, too, is predicted to be singular as the light quark mass vanishes
\cite{zhang}.
\section{QUENCHING ERRORS}
\label{sec:querr}
I close the first part of the talk by listing, in Table \ref{tab:querr},
a sampling of estimates of quenching errors, defined by
\begin{equation}
{\rm Error}({\rm Qty}) = { [ {\rm Qty}({\rm QCD})- {\rm Qty}({\rm QQCD})]
\over{\rm Qty}({\rm QCD})} \,.
\end{equation}
I make the estimates by taking
the numerator to be the difference between the pion loop contributions
in the full and quenched chiral expansions.
To obtain numerical values I set $\Lambda=m_\rho$ ($\Lambda$ is
the scale occurring in chiral logs), use $f=f^Q=f_K$, and assume
$\delta=0.1$ and $\alpha_\Phi=0$. For the estimates of heavy-light
quantities I set $g'=0$, where $g'$ is an $\eta'$-$B$-$B$ coupling defined
in Ref. \cite{zhang}. These estimates assume that the extrapolation
to the light quark mass is done linearly from $m_q\approx m_s/2$.
For example, $f_{B_d}$ in QQCD is {\em not} the quenched value with the
physical $d$-quark mass (which would contain a large artifact proportional
to $\delta$), but rather the value obtained by linear extrapolation from
$m_s/2$, where the $\delta$ terms are much smaller.
This is an attempt to mimic what is actually done in numerical simulations.
\begin{table}[tb]
\caption{Estimates of quenching errors.}
\label{tab:querr}
\begin{tabular}{ccl}
\hline
Qty. &Ref. & Error \\
\hline
$f_\pi/M_\rho$ &\cite{gassleut,sharpechbk} & $\,\sim 0.1$ \\
$f_K/f_\pi-1$ & \cite{BGI} & $\ 0.4$ \\
$f_{B_s}$ & \cite{zhang} & $\ 0.2$\\
$f_{B_s}/f_{B_d}$ & \cite{zhang} & $\ 0.16$ \\
$B_{B_s}/B_{B_d}$ & \cite{zhang} & $\,-0.04$ \\
$B_K$ ($m_d=m_s$) & \cite{sharpechbk} & $\ 0$ \\
$B_K$ ($m_d\ne m_s$) & \cite{sharpetasi} & $\ 0.05$ \\
$M_\Xi-M_\Sigma$& \cite{sharpebary} & $\ 0.4$ \\
$M_\Sigma-M_N$ & \cite{sharpebary} & $\ 0.3$ \\
$M_\Omega-M_\Delta$& \cite{sharpebary} & $\ 0.3$ \\
\hline
\end{tabular}
\vspace{-0.2truein}
\end{table}
For the first two estimates, I have used the facts that, in QCD,
\cite{gassleut}
\begin{eqnarray}
f_\pi &\approx& f\, [1 - 0.5 L(M_K)]
\,,\\
f_K/f_\pi &\approx& 1 - 0.25 L(M_K) - 0.375 L(M_\eta)
\,,
\end{eqnarray}
(where $L(M) = (M/4\pi f)^2 \log(M^2/\Lambda^2)$, and $f_\pi= 93\,$MeV),
while in QQCD \cite{BGI,sharpechbk}
\begin{eqnarray}
f_\pi &\approx& f^Q \,,\\
{f_K\over f_\pi} &\approx& 1 + {\delta \over 2} \left[
{M_K^2 \over M_{ss}^2 - M_\pi^2} \log {M_{ss}^2 \over M_\pi^2} - 1 \right]
\,.
\end{eqnarray}
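For orientation, the first two entries of Table \ref{tab:querr} can be evaluated directly from these formulas. The minimal sketch below does so with physical meson masses; the numbers come out roughly of the advertised size, though the precise values depend on conventional choices such as whether $f_\pi$ or $f_K$ appears in the loop factors and on the scale in the logarithms.
\begin{verbatim}
import numpy as np

f_K, m_rho = 0.113, 0.770                # GeV; Lambda = m_rho
M_pi, M_K, M_eta = 0.138, 0.495, 0.547   # GeV
M_ss2 = 2*M_K**2 - M_pi**2               # "s sbar" pion mass squared
delta = 0.1

L = lambda M: (M/(4*np.pi*f_K))**2 * np.log(M**2/m_rho**2)

fpi_QCD    = 1 - 0.5*L(M_K)                      # f_pi/f in QCD
fKfpi_QCD  = 1 - 0.25*L(M_K) - 0.375*L(M_eta)    # f_K/f_pi in QCD
fpi_QQCD   = 1.0                                 # f_pi/f^Q in QQCD
fKfpi_QQCD = 1 + 0.5*delta*(M_K**2/(M_ss2 - M_pi**2)
                            * np.log(M_ss2/M_pi**2) - 1)

print("Error(f_pi)          ~ %+.2f" % ((fpi_QCD - fpi_QQCD)/fpi_QCD))
print("Error(f_K/f_pi - 1)  ~ %+.2f"
      % (((fKfpi_QCD-1) - (fKfpi_QQCD-1))/(fKfpi_QCD-1)))
\end{verbatim}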
I have not included the difference of pion loop contributions to $M_\rho$,
since the loop has not been evaluated in QChPT,
and a model calculation suggests that the difference is small
\cite{cohenleinweber}.
Details of the remaining estimates can be found in the references.
Let me stress that these are estimates and not calculations.
What they give is a sense of the effect of quenching on
the contributions of ``pion'' clouds surrounding hadrons---these clouds are
very different in QQCD and QCD!
But this difference in clouds could be cancelled numerically by differences
in the analytic terms in the chiral expansion.
As discussed in Ref. \cite{zhang}, a more conservative view is thus to
treat the estimates as rough upper bounds on the quenching error.
Those involving ratios (e.g. $f_K/f_\pi$)
are probably more reliable since some of the analytic terms do not contribute.
One can also form double ratios for which the error
estimates are yet more reliable
(e.g. $R_{BG}$ and ES2 from the previous section; see also Ref. \cite{zhang}),
but these quantities are of less phenomenological interest.
My aim in making these estimates is to obtain a sense of which
quenched quantities are likely to be more reliable and which less,
and to gauge the possible size of quenching errors.
My conclusion is that the errors could be significant
in a number of quantities, including those involving heavy-light mesons.
One might have hoped that the ratio $f_{B_s}/f_{B_d}$ would have
small quenching errors, but the chiral loops indicate otherwise.
For some other quantities, such as $B_{B_s}/B_{B_d}$ and $B_K$,
the quenching errors are likely to be smaller.
If these estimates work, then it will be worthwhile extending them
to other matrix elements of phenomenological interest, e.g.
$K\to\pi\pi$ amplitudes.
Then, when numerical results in QQCD are obtained,
we have at least a rough estimate of the quenching error in hand.
Do the estimates work? As we will see below,
those for $f_\pi/m_\rho$, $f_K/f_\pi$ and $B_K$ are consistent
with the numerical results obtained to date.
\section{RESULTS FOR DECAY CONSTANTS}
\label{sec:decayc}
For the remainder of the talk I will don the hat of a reviewer,
and discuss the status of results for weak matrix elements.
All results will be quenched, unless otherwise noted.
I begin with $f_\pi/M_\rho$, the results for which are shown
in Figs. \ref{fig:fpi_mrhoW} (Wilson fermions) and \ref{fig:fpi_mrhoCL}
(SW fermions, with tadpole improved $c_{SW}$).
The normalization here is $f_\pi^{\rm expt}=131\,$MeV,
whereas I use $93\,$MeV elsewhere in this talk.
\begin{figure}[tb]
\vspace{-0.6truein}
\centerline{\psfig{file=fig7.ps,height=2.5truein}}
\vspace{-0.6truein}
\caption{$f_\pi/M_\rho$ with quenched Wilson fermions.}
\vspace{-0.2truein}
\label{fig:fpi_mrhoW}
\end{figure}
Consider the Wilson data first. One expects a linear dependence on $a$,
and the two lines are linear extrapolations taken from Ref. \cite{guptafpi}.
The solid line is a fit to all the data,
while the dashed curve excludes the point with largest $a$
(which might lie outside the linear region).
It appears that the quenched result is lower than experiment,
but there is a $5-10\%$ uncertainty.
Improving the fermion action (Fig. \ref{fig:fpi_mrhoCL})
doesn't help much because of
uncertainties in the normalization of the axial current.
For the FNAL data, the upper (lower) points correspond to
using $\alpha_s(\pi/a)$ ($\alpha_s(1/a)$) in the matching factor.
The two sets of UKQCD95 points correspond to different normalization schemes.
Again the results appear to extrapolate to a point below experiment.
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig8.ps,height=3.truein}}
\vspace{-0.6truein}
\caption{$f_\pi/M_\rho$ with quenched SW fermions.}
\vspace{-0.2truein}
\label{fig:fpi_mrhoCL}
\end{figure}
It is disappointing that we have not done better with such a basic quantity.
We need to reduce both statistical errors and normalization uncertainty.
The latter may require non-perturbative methods, or the use of
staggered fermions (where $Z_A=1$).
Note that the chiral-loop estimate is that the quenched result
will undershoot by 12\%,
and this appears correct in sign, and not far off in magnitude.
Results for $(f_K-f_\pi)/f_\pi$ are shown in Fig. \ref{fig:fk_fpi}.
This ratio measures the mass dependence of decay constants.
Chiral loops suggest a 40\% underestimate in QQCD.
The line is a fit to all the Wilson data (including the largest $a$'s),
and indeed gives a result about half of the experimental value.
The new UKQCD results, using tadpole improved SW fermions,
are, by contrast, rising towards the experimental value.
It will take a substantial reduction in statistical errors to sort this out.
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig9.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{$(f_K-f_\pi)/f_\pi$ in quenched QCD.}
\vspace{-0.2truein}
\label{fig:fk_fpi}
\end{figure}
\section{STATUS OF $B_K$: STAGGERED}
\label{sec:bks}
$B_K$ is defined by
\begin{equation}
B_K =
{\langle \bar K| \bar s \gamma_\mu^L d\, \bar s\gamma_\mu^L d | K \rangle
\over
(8/3) \langle \bar K| \bar s \gamma_\mu^L d|0 \rangle
\langle 0 |\bar s\gamma_\mu^L d | K \rangle } \,.
\label{eq:bkdef}
\end{equation}
It is a scale dependent quantity, and I will quote results in
the NDR (naive dimensional regularization) scheme at 2 GeV.
It can be calculated with very small statistical errors,
and has turned out to be a fount of knowledge about systematic errors.
This is true for both staggered and Wilson fermions,
though for different reasons.
There has been considerable progress with both types of fermions
in the last year. I begin with staggered fermions,
which hold the advantage for $B_K$ as they have a remnant chiral symmetry.
Back in 1989, I thought we knew what the
quenched answer was, based on calculations at $\beta=6$ on
$16^3$ and $24^3$ lattices:
$B_K = 0.70(2)$ \cite{sharpe89,ourbkprl}.
I also argued that quenching errors
were likely small (see Table \ref{tab:querr}).
I was wrong on the former, though maybe not on the latter.
By 1993, Gupta, Kilcup and I had found that $B_K$ had a
considerable $a$ dependence \cite{sharpe93}.
Applying Symanzik's improvement program, I argued that the discretization
errors in $B_K$ should be $O(a^2)$, and not $O(a)$.
Based on this, we extrapolated our data quadratically, and quoted
$B_K({\rm NDR},2\,{\rm GeV}) = 0.616(20)(27)$ for the quenched result.
Our data alone, however, were not good enough to distinguish linear and
quadratic dependences.
Last year, JLQCD presented results from a more
extensive study (using $\beta=5.85$, $5.93$, $6$ and $6.2$) \cite{jlqcdbk95}.
Their data strongly favored a linear dependence on $a$.
If correct, this would lead to a value of $B_K$ close to $0.5$.
The only hope for someone convinced of an $a^2$ dependence was
competition between a number of terms.
Faced with this contradiction between numerical data and theory,
JLQCD have done further work on both fronts \cite{aoki96}.
They have added two additional lattice spacings, $\beta=5.7$ and $6.4$,
thus increasing the lever arm. They have also carried out finite volume
studies at $\beta=6$ and $6.4$, finding only a small effect.
Their data are shown in Fig. \ref{fig:jlqcdbk}.
``Invariant'' and ``Landau'' refer to two possible discretizations
of the operators---the staggered fermion operators are spread out over
a $2^4$ hypercube, and one can either make them gauge invariant
by including gauge links, or by fixing to Landau gauge and omitting the links.
The solid (dashed) lines show quadratic (linear) fits to the first five
points. The $\chi^2/{\rm d.o.f.}$ are
\begin{center}
\begin{tabular}{ccc}
\hline
Fit & Invariant & Landau \\
$a$ & 0.86 & 0.67 \\
$a^2$ & 1.80 & 2.21 \\
\hline
\end{tabular}
\end{center}
thus favoring the linear fit, but by a much smaller difference than
last year. If one uses only the first four points then linear
and quadratic fits are equally good.
What has changed since last year is that the new point at $\beta=6.4$
lies above the straight line passing through the next four points.
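To make the comparison concrete, the following sketch carries out the two competing continuum extrapolations on a set of invented data points (these are not the JLQCD numbers); the point is only to illustrate how strongly the extrapolated $a\to0$ value depends on the assumed power of $a$.
\begin{verbatim}
import numpy as np

# Hypothetical B_K versus lattice spacing a (NOT the JLQCD numbers)
a     = np.array([0.70, 0.50, 0.46, 0.40, 0.34])   # in GeV^-1
B_K   = np.array([0.795, 0.702, 0.684, 0.665, 0.645])
sigma = np.array([0.010, 0.008, 0.008, 0.008, 0.012])

def extrapolate(power):
    """Weighted fit B_K(a) = c0 + c1*a**power; returns c0, chi^2/dof."""
    X = np.column_stack([np.ones_like(a), a**power])
    W = np.diag(1.0/sigma**2)
    c = np.linalg.solve(X.T @ W @ X, X.T @ W @ B_K)
    chi2 = np.sum(((B_K - X @ c)/sigma)**2)
    return c[0], chi2/(len(a) - 2)

for p in (1, 2):
    c0, chi2dof = extrapolate(p)
    print("a^%d fit:  B_K(a=0) = %.3f,  chi^2/dof = %.2f"
          % (p, c0, chi2dof))
\end{verbatim}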
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig10.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{JLQCD results for staggered $B_K$.}
\vspace{-0.2truein}
\label{fig:jlqcdbk}
\end{figure}
JLQCD have also checked the theoretical argument using a simpler method
of operator enumeration\cite{aoki96,ishizuka96}.\footnote{
A similar method has also been introduced by Luo \cite{luo96}.}
The conclusion is that there cannot be $O(a)$ corrections to $B_K$,
because there are no operators available with which one
could remove these corrections.
Thus JLQCD use quadratic extrapolation and quote
(for degenerate quarks)
\begin{equation}
B_K({\rm NDR}, 2\,{\rm GeV}) = 0.5977 \pm 0.0064
\pm 0.0166 \,,
\label{eq:jlqcdbk}
\end{equation}
where the first error is statistical, the second due to truncation of
perturbation theory.
This new result agrees with that from 1993 (indeed, the results are
consistent at each $\beta$), but has much smaller errors.
To give an indication of how far things have come, compare our
1993 result at $\beta=6$ with Landau-gauge operators, $0.723(87)$
\cite{sharpe93}, to the corresponding JLQCD result $0.714(12)$.
The perturbative error in $B_K$ arises from truncating the
matching of lattice and continuum operators to one-loop order.
The use of two different lattice operators allows one to estimate this
error without resort to guesswork about the higher order terms in the
perturbative expansion.
The difference between the results from the two operators is
of $O[\alpha(2\,{\rm GeV})^2]$, and thus should remain
finite in the continuum limit.
This is what is observed in Fig. \ref{fig:jlqcdbk}.
I will take Eq. \ref{eq:jlqcdbk} as the best estimate of $B_K$ in QQCD.
The errors are so much smaller than those in previous staggered results
and in the results with Wilson fermions discussed below,
that the global average is
not significantly different from the JLQCD number alone.
The saga is not quite over, however, since one should
confirm the $a^2$ dependence by running at even smaller lattice spacings.
JLQCD intend to run at $\beta=6.6$.
If the present extrapolation holds up, then it shows how one must
beware of keeping only a single term when extrapolating in $a$.
\subsection{Unquenching $B_K$}
To obtain a result for QCD proper, two steps remain:
the inclusion of dynamical quarks, and the use of $m_s\ne m_d$.
The OSU group has made important progress on the first step \cite{osubk96}.
Previous studies (summarized in Ref. \cite{soni95}) found that
$B_K$ was reduced slightly by sea quarks, although the effect was
not statistically significant.
The OSU study, by contrast, finds a statistically significant
increase in $B_K$
\begin{equation}
{ B_K({\rm NDR,2\,GeV},N_f=3) \over B_K({\rm NDR,2\,GeV},N_f=0)}
= 1.05 \pm 0.02 \,.
\label{eq:bkquerr}
\end{equation}
They have improved upon previous work by reducing statistical errors,
and by choosing their point at $N_f=0$ ($\beta=6.05$)
to better match the lattice spacing at $N_f=2$ ($\beta=5.7$, $m_qa=0.01$)
and $4$ ($\beta=5.4$, $m_qa=0.01$).
There are systematic errors in this result
which have yet to be estimated.
First, the dynamical lattices are chosen to have
$m_q^{\rm sea}=m_q^{\rm val} = m_s^{\rm phys}/2$,
and so they are truly unquenched simulations.
But $m_s^{\rm phys}$ is determined by
extrapolating in the valence quark mass alone,
and is thus a partially quenched result.
This introduces an uncertainty in $m_s$ which feeds into the estimate of the
$N_f$ dependence of $B_K$.
Similarly, $a$ is determined by a partially quenched extrapolation,
resulting in an uncertainty in the
matching factors between lattice and continuum operators.
But probably the most important error comes from the possibility
of significant $a$ dependence in the ratio in Eq. \ref{eq:bkquerr}.
The result quoted is for $a^{-1}=2\,$GeV, a lattice spacing at which the
discretization error in the quenched $B_K$ is 15\%.
It is not inconceivable that, say, $B_K$ in QCD has very little dependence
on $a$, in which case the ratio would increase to $\sim 1.2$ in the
continuum limit.
Clearly it is very important to repeat the comparison at a different
lattice spacing.
Despite these uncertainties, I will take the OSU result and error
as the best estimate of the effect of quenching at $a=0$.
I am being less conservative than I might be because a small
quenching error in $B_K$ is consistent with the expectations of QChPT.
A more conservative estimate for the ratio would be $1.05\pm0.15$.
\subsection{$B_K$ for non-degenerate quarks}
\label{subsec:bknondegen}
What remains is to extrapolate from $m_s=m_d\approx m_s^{\rm phys}/2$
to $m_s=m_s^{\rm phys}$ and $m_d=m_d^{\rm phys}$.
This appears difficult because it requires
dynamical quarks with very small masses.
This may not be necessary, however, if one uses ChPT to guide
the extrapolation \cite{sharpetasi}.
The point is that the chiral expansion in QCD is \cite{bijnens,sharpechbk}
\begin{equation}
{B_K\over B} = 1 - \left(3+{\epsilon^2 \over 3}\right) y\ln{y}
+ b y + c y \epsilon^2 \,,
\label{eq:bkchqcd}
\end{equation}
where
\begin{equation}
\epsilon=(m_s-m_d)/(m_s+m_d)\,,\
y = M_K^2/(4 \pi f)^2 ,
\end{equation}
and $B$, $b$ and $c$ are unknown constants.
At this order $f$ can be equally well taken to be $f_\pi$ or $f_K$.
Equation \ref{eq:bkchqcd}
is an expansion in $y$, but is valid for all $\epsilon$.
The idea is to determine $c$ by working at small $\epsilon$,
and then use the formula to extrapolate to $\epsilon=1$.
This ignores corrections of $O(y^2)$, and so the errors
in the extrapolation are likely to be $\sim 25\%$.
Notice that $m_u$ does not enter into Eq. \ref{eq:bkchqcd}.
Thus one can get away with a simulation using only two
different dynamical quark masses, e.g. setting
$m_u=m_d < m_s^{\rm phys}/2$, while holding $m_s +m_d = m_s^{\rm phys}$.
To date, no such calculation has been done.
To make an estimate I use the chiral log alone, i.e. set $c=0$, yielding
\begin{equation}
B_K({\rm non-degen}) = (1.04-1.08) B_K({\rm degen}) \,.
\end{equation}
The range comes from using $f=f_\pi$ and $f_K$, and varying the
scale in the logarithm from $m_\rho$ to $1\,$GeV.
Since the chiral log comes mainly from kaon
and $\eta$ loops \cite{sharpechbk}, I prefer $f=f_K$, which leads to
$1.04-1.05$ for the ratio.
To be conservative I take $1.05\pm0.05$, and assume that the generous
error is large enough to include also the error in the estimate
of the effect of unquenching. This leads to a final estimate of
\begin{equation}
B_K({\rm NDR},{\rm 2\,GeV,QCD}) = 0.66 \pm 0.02 \pm 0.03 \,,
\label{eq:finalqcdbk}
\end{equation}
where the first error is that in the quenched value, the second that
in the estimate of unquenching and using non-degenerate quarks.
Taking the more conservative estimate of the unquenching error (15\%),
and adding it in quadrature with the (5\%) estimate of the error
in accounting for non-degenerate quarks, increases the second error
in Eq. \ref{eq:finalqcdbk} to $0.11$.
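For the record, the arithmetic behind these numbers can be packaged as follows: the non-degeneracy ratio is evaluated directly from the chiral expansion above with $c=0$ and $f=f_K$, and the quoted errors follow from simple quadrature. The only inputs beyond those stated in the text are the physical meson masses.
\begin{verbatim}
import numpy as np

f_K, M_K = 0.113, 0.495                     # GeV (f_pi = 93 MeV convention)
B_K_quenched = 0.5977                       # quenched value, NDR at 2 GeV
unquench = 1.05                             # estimated N_f=3 / N_f=0 ratio

def bk_chiral(eps, y, b=0.0, c=0.0):
    """B_K/B from the chiral expansion quoted above."""
    return 1.0 - (3.0 + eps**2/3.0)*y*np.log(y) + b*y + c*y*eps**2

y = M_K**2 / (4.0*np.pi*f_K)**2
nondegen = bk_chiral(1.0, y) / bk_chiral(0.0, y)   # c = 0: chiral log only
print("non-degenerate / degenerate ratio (c=0): %.3f" % nondegen)

central = B_K_quenched * unquench * 1.05           # second 1.05: non-degeneracy
err_quenched = central * np.hypot(0.0064, 0.0166) / B_K_quenched
err_model    = central * 0.05 / 1.05               # +-0.05 covers both model steps
print("B_K(NDR, 2 GeV, QCD) ~ %.2f +- %.2f +- %.2f"
      % (central, err_quenched, err_model))
\end{verbatim}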
It is customary to quote a result for the renormalization group
invariant quantity
\[
{\widehat{B}_K \over B_K(\mu)} = \alpha_s(\mu)^{-\gamma_0/(2\beta_0)}
\left(1 +
{\alpha_s(\mu) \over 4 \pi} \left[{\beta_1 \gamma_0 -\beta_0\gamma_1
\over 2 \beta_0^2} \right] \right)
\]
in the notation of Ref. \cite{crisafulli}.
Using $\alpha_s(2\,{\rm GeV})=0.3$ and $N_f=3$, I find
$\widehat{B}_K=0.90(3)(4)$, with the last error increasing to
$0.14$ with the more conservative error.
This differs from the result I quoted in Ref. \cite{sharpe93},
because I am here using the 2-loop formula
and a continuum choice of $\alpha_s$.
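The conversion itself is a short piece of arithmetic. The sketch below assumes the standard $N_f=3$ values $\beta_0=9$ and $\beta_1=64$ and the leading anomalous dimension $\gamma_0=4$ of the $\Delta S=2$ operator, and leaves $\gamma_1$ (whose NDR value should be taken from Ref. \cite{crisafulli}) as an input; with $\gamma_1$ set to zero it already reproduces most of the quoted factor.
\begin{verbatim}
import numpy as np

alpha_s, N_f = 0.3, 3
beta0 = 11 - 2*N_f/3                 # = 9
beta1 = 102 - 38*N_f/3               # = 64
gamma0 = 4.0                         # leading anomalous dimension, Delta S=2
gamma1 = 0.0                         # NDR value: take from Ref. [crisafulli]

lo  = alpha_s**(-gamma0/(2*beta0))   # alpha_s^{-2/9} for N_f = 3
nlo = 1 + alpha_s/(4*np.pi) * (beta1*gamma0 - beta0*gamma1)/(2*beta0**2)
B_K = 0.66
print("conversion factor = %.3f,  B_K_hat ~ %.3f" % (lo*nlo, lo*nlo*B_K))
\end{verbatim}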
\subsection{Chiral behavior of $B_K$}
\label{subsec:bkchfit}
Since $B_K$ can be calculated very accurately, it provides a
potential testing ground for (partially) quenched ChPT.
This year, for the first time, such tests have been undertaken,
with results from OSU \cite{osubk96},
JLQCD \cite{aoki96}, and Lee and Klomfass \cite{lee96}.
I note only some highlights.
It turns out that, for $\epsilon=0$,
Eq. \ref{eq:bkchqcd} is valid for all $N_f$ \cite{sharpechbk}.
This is why my estimate of the quenching error for $B_K$ with degenerate
quarks in Table \ref{tab:querr} is zero.
Thus the first test of (P)QChPT is to see
whether the $-3 y\ln y$ term is present.
The OSU group has the most extensive data as a function of $y$,
and indeed observe curvature of the expected sign and magnitude for
$N_f=0,2,4$.
JLQCD also finds reasonable fits to the chiral form,
as long as they allow a substantial dependence of $f$ on lattice spacing.
They also study other $B$ parameters, with similar conclusions.
Not everything works. JLQCD finds that the volume dependence predicted
by the chiral log \cite{sharpechbk} is too small to fit their data.
Fitting to the expected form for $\epsilon\ne0$ in QQCD, they find
$\delta=-0.3(3)$, i.e. of the opposite sign to the other determinations
discussed in Sec. \ref{sec:chevidence}.
Lee and Klomfass have studied the $\epsilon$ dependence with $N_f=2$
(for which there is as yet no PQChPT prediction).
It will be interesting to see how things evolve.
My only comment is that one may need to include $O(y^2)$ terms in the
chiral fits.
\section{STATUS OF $B_K$: WILSON}
\label{sec:bkw}
There has also been considerable progress in the last year in the
calculation of $B_K$ using Wilson and SW fermions.
The challenge here is to account for the effects of the
explicit chiral symmetry breaking in the fermion action.
Success with $B_K$ would give one confidence to attempt
more complicated calculations.
The operator of interest,
\begin{equation}
{\cal O}_{V+A} = \bar s \gamma_\mu d\, \bar s \gamma_\mu d +
\bar s \gamma_\mu \gamma_5 d\, \bar s \gamma_\mu \gamma_5 d
\,,
\end{equation}
can ``mix'' with four other dimension 6 operators
\begin{equation}
{\cal O}_{V+A}^{\rm cont} = Z_{V+A} \left(
{\cal O}_{V+A} + \sum_{i=1}^{4} z_i {\cal O}_i \right) + O(a)
\end{equation}
where the ${\cal O}$ on the r.h.s. are lattice operators.
The ${\cal O}_i$ are listed in Refs. \cite{kuramashi96,talevi96}.
The meaning of this equation is that,
for the appropriate choices of $Z_{V+A}$ and the $z_i$,
the lattice and continuum operators will have the same
matrix elements, up to corrections of $O(a)$.
In particular, while the matrix element of a general four-fermion operator
has the chiral expansion
\begin{eqnarray}
\lefteqn{\langle \bar K | {\cal O} | K \rangle =
\alpha + \beta M_K^2 + \delta_1 M_K^4 +} \\
&& p_{\bar K}\cdot p_{K}
(\gamma + \delta_2 M_K^2 + \delta_3 p_{\bar K}\cdot p_{K} ) +\dots \,,
\end{eqnarray}
chiral symmetry implies that $\alpha=\beta=\delta_1=0$ for
the particular operator ${\cal O}={\cal O}_{V+A}^{\rm cont}$.
Thus, one can test that the $z_i$ are correct by checking that the
first three terms are absent.\footnote{I have ignored chiral logarithms,
which will complicate the analysis, but can probably be neglected given
present errors and ranges of $M_K$.}
Note that the $z_i$ must be known quite accurately because the
terms we are removing are higher order in the chiral
expansion than the terms we are keeping.
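As an illustration of this test, with invented numbers and keeping only the leading terms of the expansion, one can fit the measured matrix elements to the form above and check whether $\alpha$ and $\beta$ come out consistent with zero:
\begin{verbatim}
import numpy as np

# Invented data: (M_K^2, p.p) for each measurement and the matrix element
MK2   = np.array([0.10, 0.10, 0.20, 0.20, 0.30, 0.30])
pdotp = np.array([0.02, 0.05, 0.02, 0.05, 0.02, 0.05])
ME    = np.array([0.061, 0.152, 0.064, 0.158, 0.068, 0.165])
err   = np.full_like(ME, 0.004)

# Fit <Kbar|O|K> = alpha + beta*MK2 + pdotp*(gamma + delta2*MK2)
X   = np.column_stack([np.ones_like(MK2), MK2, pdotp, pdotp*MK2])
W   = np.diag(1.0/err**2)
cov = np.linalg.inv(X.T @ W @ X)
cfs = cov @ (X.T @ W @ ME)
for name, c, e in zip(["alpha", "beta", "gamma", "delta2"],
                      cfs, np.sqrt(np.diag(cov))):
    print("%-6s = %+.3f +- %.3f" % (name, c, e))
# The z_i are acceptable if alpha and beta are consistent with zero.
\end{verbatim}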
Five methods have been used to determine the $z_i$ and $Z_{V+A}$. \\
(1)
One-loop perturbation theory. This fails to give the correct
chiral behavior, even when tadpole improved. \\
(2)
Use (1) plus enforce chiral behavior by adjusting subsets of the
$z_i$ by hand \cite{bernardsoni89}. Different subsets give differing
results, introducing an additional systematic error.
Results for a variety of $a$ were presented by
Soni last year \cite{soni95}.\\
(3)
Use (1) and discard the $\alpha$, $\beta$ and $\gamma$ terms,
determined by doing the calculation at a variety of momenta.
New results using this method come from the LANL group \cite{gupta96}.
Since the $z_i$ are incorrect, there is, however,
an error of $O(g^4)$ in $B_K$. \\
(4)
Non-perturbative matching by imposing continuum normalization conditions
on Landau-gauge quark matrix elements. This approach has been pioneered
by the Rome group, and is reviewed here by Rossi \cite{rossi96}.
The original calculation omitted one operator \cite{donini},
but has now been corrected \cite{talevi96}. \\
(5)
Determine the $z_i$ non-perturbatively by
imposing chiral ward identities on quark matrix elements.
Determine $Z_{V+A}$ as in (4).
This method has been introduced by JLQCD \cite{kuramashi96}.
The methods of choice are clearly (4) and (5),
as long as they can determine the $z_i$ accurately enough.
In fact, both methods work well:
the errors in the non-perturbative results are much smaller
than their difference from the one-loop perturbative values.
And both methods find that the matrix element of ${\cal O}_{V+A}^{\rm cont}$
has the correct chiral behavior, within statistical errors.
What remains to be studied is the uncertainty introduced by the fact that
there are Gribov copies in Landau gauge. Prior experience suggests
that this will be a small effect.
It is not yet clear which, if either, of methods (4) and (5) is preferable
for determining the $z_i$.
As stressed in Ref. \cite{talevi96} the $z_i$ are unique,
up to corrections of $O(a)$.
In this sense, both methods must give the same results.
But they are quite different in detail,
and it may be that the errors are smaller with one method or the other.
It will be interesting to see a comprehensive comparison between them
and also with perturbation theory.
In Fig. \ref{fig:bkw} I collect the results for $B_K$.
All calculations use $m_s=m_d$ and the quenched approximation.
The fact that most of the results agree is a significant success,
given the variety of methods employed.
It is hard to judge which method gives the smallest errors,
because each group uses different ensembles and lattice sizes,
and estimates systematic errors differently.
The errors are larger than with staggered fermions mostly because
of the errors in the $z_i$.
\begin{figure}[tb]
\vspace{-0.1truein}
\centerline{\psfig{file=fig11.ps,height=3.0truein}}
\vspace{-0.6truein}
\caption{Quenched $B_K$ with Wilson fermions.}
\vspace{-0.2truein}
\label{fig:bkw}
\end{figure}
Extrapolating to $a=0$ using the data in Fig. \ref{fig:bkw} would
give a result with a large uncertainty.
Fortunately, JLQCD has found a more accurate approach.
Instead of $B_K$, they consider the ratio of the matrix element
of ${\cal O}_{V+A}^{\rm cont}$ to its vacuum saturation approximant.
The latter differs from the denominator of $B_K$ (Eq. \ref{eq:bkdef})
at finite lattice spacing. The advantage of this choice is
that the $z_i$ appear in both the numerator and denominator,
leading to smaller statistical errors.
The disadvantage is that the new ratio has the wrong chiral behavior at
finite $a$.
It turns out that there is an overall gain, and from their
calculations at $\beta=5.9$, $6.1$ and $6.3$ they find
$B_K({\rm NDR,2\,GeV})=0.63(8)$.
This is the result shown at $a=0$ in Fig. \ref{fig:bkw}.
It agrees with the staggered result, although it has much larger errors.
Nevertheless, it is an important consistency check,
and is close to ruling out the use of a linear extrapolation
in $a$ with staggered fermions.
\section{OTHER MATRIX ELEMENTS}
\label{sec:otherme}
The LANL group \cite{gupta96} has quenched results (at $\beta=6$)
for the matrix elements which determine the dominant part
of the electromagnetic penguin contribution to $\epsilon'/\epsilon$
\begin{eqnarray}
B_7^{I=3/2} &=& 0.58 \pm 0.02 {\rm (stat)} {+0.07 \atop -0.03} \,, \\
B_8^{I=3/2} &=& 0.81 \pm 0.03 {\rm (stat)} {+0.03 \atop -0.02} \,.
\end{eqnarray}
These are in the NDR scheme at 2 GeV.
The second error is from the truncation of the perturbative matching factors.
These numbers lie at or below the lower end of the range used by
phenomenologists.
The LANL group also finds $B_D=0.78(1)$.
There are also new results for $f_\rho$ and $f_\phi$
\cite{guptafpi,yoshie96}, for the pion polarizability \cite{wilcox96},
and for strange quark contributions to magnetic moments \cite{dong96}.
\section{FUTURE DIRECTIONS}
\label{sec:future}
This year has seen the first detailed tests of the predicted
chiral behavior of quenched quantities.
Further work along these lines will help us make better extrapolations,
and improve our understanding of quenching errors.
It is also a warm-up exercise for the use of chiral perturbation theory
in unquenched theories. I have outlined one such application in
Sec. \ref{subsec:bknondegen}. I expect the technique to be
of wide utility given the difficulty in simulating light dynamical
fermions.
As for matrix elements, there has been substantial progress on $B_K$.
It appears that we finally know the quenched result,
thanks largely to the efforts of JLQCD.
At the same time, it is disturbing that the complicated $a$ dependence
has made it so difficult to remove the last 20\% of the errors.
One wonders whether similar complications lie lurking beneath
the relatively large errors in other matrix elements.
The improved results for $B_K$ with Wilson fermions show that
non-perturbative normalization of operators is viable.
My hope is that we can now return to an issue
set aside in 1989: the calculation of $K\to\pi\pi$ amplitudes.
The main stumbling block is the need
to subtract lower-dimension operators.
A method exists for staggered fermions, but the errors have so far
swamped the signal.
With Wilson fermions, one needs a non-perturbative method, and the hope
is that using quark matrix elements in Landau gauge
will do the job \cite{talevi96}.
Work is underway with both types of fermion.
Given the success of the Schr\"odinger functional method at calculating
current renormalizations \cite{wittig96},
it should be tried also for four fermion operators.
Back in 1989, I also described preliminary work
on non-leptonic $D$ decays, e.g. $D\to K\pi$.
Almost no progress has been made since then,
largely because we have been lacking
a good model of the decay amplitude for Euclidean momenta.
A recent proposal by Ciuchini {\em et al.} may fill this gap \cite{ciuchini}.
Enormous computational resources have been used to calculate
matrix elements in (P)QQCD.
To proceed to QCD at anything other than a snail's pace may well
require the use of improved actions.
Indeed, the large discretization errors in quenched staggered $B_K$
already cry out for improvement.
The fact that we know $B_K$ very accurately
will provide an excellent benchmark for such calculations.
Working at smaller values of the cut-off, $1/a$,
alleviates some problems while making others worse.
Subtraction of lower dimension operators becomes simpler,
but the evaluation of mixing with operators of the same dimension
becomes more difficult.
It will be very interesting to see how things develop.
\section*{Acknowledgements}
I am grateful to Peter Lepage for helpful conversations
and comments on the manuscript.
\def\PRL#1#2#3{{Phys. Rev. Lett.} {\bf #1}, #3 (#2)}
\def\PRD#1#2#3{{Phys. Rev.} {\bf D#1}, #3 (#2)}
\def\PLB#1#2#3{{Phys. Lett.} {\bf #1B} (#2) #3}
\def\NPB#1#2#3{{Nucl. Phys.} {\bf B#1} (#2) #3}
\def\NPBPS#1#2#3{{Nucl. Phys.} {\bf B({Proc.Suppl.}) {#1}} (#2) #3}
\def{\em et al.}{{\em et al.}}
\section{Guidelines}
It is well known that the effects on the physics of a field,
due to a much heavier field coupled to the former, are not detectable at
energies comparable to the lighter mass. More precisely the
Appelquist-Carazzone (AC) theorem~\cite{ac} states that for a Green's function
with only light external legs, the effects of the heavy loops
are either absorbable in a redefinition of the bare couplings
or suppressed by powers of $k/M$ where $k$ is the energy scale
characteristic of the Green's function
(presumably comparable to the light mass),
and $M$ is the heavy mass. However, the AC theorem does not allow one
to make any clear prediction when $k$ becomes close to $M$ and, in this
region, one should expect some non-perturbative effect due to the onset of new
physics.
\par
In the following we shall make use of Wilson's renormalization
group (RG) approach to discuss the physics of the light field from the
infrared region up to and beyond the mass of the heavy field.
Incidentally, the RG
technique has already been employed to prove the AC theorem~\cite{girar}.
The RG establishes the flow equations of the various coupling constants of
the theory for any change in the observational energy scale; moreover
the improved RG equations, originally derived by F.J. Wegner
and A. Houghton~\cite{weg},
where the mixing of all couplings (relevant and irrelevant) generated
by the blocking procedure is properly taken into account, should
allow one to handle the non-perturbative features arising
when the heavy mass threshold is crossed.
\par
We shall discuss the simple case of two coupled scalar fields and since
we are interested in the modifications of the parameters
governing the light field, due to the heavy loops, we shall consider
the functional integration of the heavy field only. The action at a given
energy scale $k$ is
\begin{equation}
S_k(\phi,\psi)=\int d^4 x~\left ({1\over 2} \partial_\mu \phi
\partial^\mu \phi+
{1\over 2} W(\phi,\psi) \partial_\mu \psi \partial^\mu \psi+
U(\phi,\psi) \right ),
\label{eq:acteff}
\end{equation}
with polynomial $W$ and $U$
\begin{equation}
U(\phi,\psi)=\sum_{m,n}{{G_{2m,2n} \psi^{2m}\phi^{2n}}\over {(2n)!(2m)!}};
~~~~~~~~~~~~~~~~~~~~
W(\phi,\psi)=\sum_{m,n}{{H_{2m,2n}\psi^{2m}\phi^{2n}}
\over {(2n)!(2m)!}}.
\label{eq:svil}
\end{equation}
Since we want to focus on the light field, which we choose to be $\psi$,
we have simply set to 1 the wave function renormalization of $\phi$.
In the following we shall analyse the symmetric phase of the theory
with vanishing vacuum energy $G_{0,0}=0$.
\par
We do not discuss here the procedure employed~\cite{jan}
to deduce the RG coupled equations for the
couplings in Eq.~\ref{eq:svil}, because it is thoroughly explained in the
quoted reference. Since it is
impossible to handle an infinite set of equations and a truncation in the
sums in Eq.~\ref{eq:svil} is required, we keep in the action
only terms that do not exceed the sixth power in the fields and their
derivatives. Moreover we choose the initial condition for the RG equations
at a fixed ultraviolet scale $\Lambda$
where we set $H_{0,0}=1$, $G_{0,4}=
G_{2,2}=G_{4,0}=0.1$ and
$G_{0,6}=G_{2,4}=G_{4,2}=G_{6,0}=
H_{2,0}=H_{0,2}=0$,
and the flow of the various couplings is determined as a function of
$t=\ln\left(k/\Lambda\right)$, for negative $t$.
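The flow equations themselves are lengthy and are given in Ref.~\cite{jan}; the sketch below is only meant to illustrate how such a truncated coupled system is integrated numerically from $t=0$ downwards. The beta functions shown are schematic placeholders, not the actual Wegner--Houghton expressions, and the threshold factor is a crude device that switches off the heavy loops for $k^2\ll G_{0,2}$; the initial values are likewise placeholders.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy truncation: the two masses and three quartic couplings only.
def flow(t, g):
    G20, G02, G40, G22, G04 = g        # G_{2,0}, G_{0,2}, G_{4,0}, ...
    k2 = np.exp(2.0*t)                 # (k/Lambda)^2
    thr = k2/(k2 + G02)                # heavy field decouples below its mass
    loop = thr/(16.0*np.pi**2)
    return [loop*G22*k2,               # light mass^2
            loop*G04*k2,               # heavy mass^2
            -1.5*loop*G22**2,          # quartics (heavy-loop pieces only)
            -loop*G22*G04,
            -3.0*loop*G04**2]

g0 = [1.0e-13, 1.0e-8, 0.1, 0.1, 0.1]  # placeholder values at k = Lambda
sol = solve_ivp(flow, (0.0, -18.0), g0, rtol=1e-8, atol=1e-20)
print(sol.y[:, -1])                    # couplings at t = -18
\end{verbatim}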
\begin{figure}
\psfig{figure=fig1.ps,height=4.3cm,width=12.cm,angle=90}
\caption{
(a): $G_{0,2}(t)/\Lambda^2$ (curve (1)) and $10^6\cdot
$G_{2,0}(t)/\Lambda^2$ (curve (2)) vs $t=\ln\left(k/\Lambda\right)$.
\break
(b): $G_{2,2}(t)$ (1), $G_{0,4}(t)$ (2),
$G_{4,0}(t)$ (3) vs $t$.
\label{fig:funo}}
\end{figure}
\par
In Fig.~\ref{fig:funo}(a) $G_{0,2}(t)/\Lambda^2$ (curve (1) ) and
$10^6 \cdot G_{2,0}(t)/\Lambda^2$ (curve (2)) are plotted. Clearly the heavy
and the light masses become stable going toward the IR region and their value
at $\Lambda$ has been chosen in such a way that the stable IR values are,
$M\equiv\sqrt {G_{0,2}(t=-18)}\sim 10^{-4}\cdot \Lambda$
and $m\equiv\sqrt{G_{2,0}(t=-18)}\sim 2\cdot 10^{-7}\cdot\Lambda$.
So, in principle, there are three scales:
$\Lambda$, ($t=0$), the heavy mass $M$,
($t\sim -9.2$), the light mass $m$, ($t\sim -16.1$).
In Fig.~\ref{fig:funo}(b) the three renormalizable dimensionless
couplings are shown; the neat change around $t=-9.2$, that is $k \sim M$,
is evident and the curves become flat below this value.
The other four non-renormalizable couplings included in $U$ are
plotted in Fig.~\ref{fig:fdue}(a), in units of $\Lambda$.
Again everything is flat below $M$, and the values of the couplings
in the IR region coincide with their perturbative Feynman-diagram
estimate at the one loop level; it is easy to realize that
they are proportional to $1/M^2$, which, in units of $\Lambda$,
is a big number. Thus the large values in
Fig.~\ref{fig:fdue}(a) are just due to the scale employed and, since
these four couplings for any practical purpose, must
be compared to the energy scale at which they are calculated, it is
physically significant to plot them in units of
the running cutoff $k$:
the corresponding curves are displayed in Fig.~\ref{fig:fdue}(b);
in this case the couplings are strongly suppressed below $M$.
\begin{figure}
\psfig{figure=fig2.ps,height=4.3cm,width=12.cm,angle=90}
\caption{
(a): $G_{6,0}(t)\cdot \Lambda^2$ (1), $G_{0,6}(t)\cdot \Lambda^2$ (2),
$G_{4,2}(t)\cdot \Lambda^2$ (3) and
$G_{2,4}(t)\cdot \Lambda^2$ (4) vs $t$.\break
(b): $G_{6,0}(t)\cdot k^2$ (1), $G_{0,6}(t)\cdot k^2$ (2),
$G_{4,2}(t)\cdot k^2$ (3) and
$G_{2,4}(t)\cdot k^2$ (4) vs $t$.
\label{fig:fdue}}
\end{figure}
\par
It must be remarked that there is no change in the couplings when
the light mass threshold is crossed. This is a consequence of having
integrated the heavy field only: in this case one could check directly
from the equations ruling the coupling constants flow, that
a shift in the
initial value $G_{2,0}(t=0)$ has the only effect
(as long as one remains in the symmetric phase)
of modifying $G_{2,0}(t)$, leaving the other curves unchanged.
Therefore the results obtained are independent of $m$ and
do not change even if $m$ becomes much larger than $M$.
An example of the heavy mass dependence is shown in Fig.~\ref{fig:ftre}(a),
where $G_{6,0}(t)$ is plotted, in units of the running cutoff $k$, for three
different values of $G_{0,2}(t=0)$, which correspond respectively to
$M/\Lambda\sim 2\cdot 10^{-6}$, (1),
$M/\Lambda\sim 10^{-4}$, (2) and
$M/\Lambda\sim 0.33$, (3).
Note, in each curve, the change of slope when the $M$ scale is crossed.
$H_{0,0}=1,~~H_{0,2}=0$ is a constant solution of the corresponding equations
for these two couplings; on the other hand $H_{2,0}$ is not constant
and it is plotted in units of the running cutoff $k$ in Fig.~\ref{fig:ftre}(b),
for the three values of $M$ quoted above.
\begin{figure}
\psfig{figure=fig3.ps,height=4.3cm,width=12.cm,angle=90}
\caption{
(a): $G_{6,0}(t)\cdot k^2$ vs $t$
for $M/\Lambda \sim 2\cdot 10^{-6}$ (1),
$\sim 10^{-4}$ (2),
$\sim 0.33$ (3).\break
(b): $H_{2,0}(t)\cdot k^2$ for the three values of $M/\Lambda$
quoted in (a).
\label{fig:ftre}}
\end{figure}
\par
In conclusion, according to the AC theorem all couplings are constant at low
energies
and a change in the UV physics can only shift their values in the IR region.
Remarkably, for increasing $t$,
no trace of UV physics shows up until one reaches $M$,
that acts as a UV cut-off for the low energy physics.
Moreover, below $M$, no non-perturbative effect appears due to the
non-renormalizable couplings that vanish rapidly in units of $k$.
Their behavior above $M$ is somehow constrained by the
renormalizability condition fixed at $t=0$, as clearly shown in
Fig.~\ref{fig:ftre}(a) (3). Finally, the peak of $H_{2,0}$ at $k\sim M$
in Fig.~\ref{fig:ftre}(b), whose width and
height are practically unchanged in the three examples,
is a signal of non-locality of the theory limited to the region
around $M$.
\section*{Acknowledgments}
A.B. gratefully acknowledges Fondazione A. Della Riccia and INFN
for financial support.
\section*{References}
\section{Introduction}
\label{sec:intro}
\subsection{The Lyman--alpha forest}
\label{sec:laf}
Spectroscopic observations towards quasars show a large number of
intervening absorption systems. This `forest' of lines is numerically
dominated by systems showing only the Lyman--alpha transition ---
these absorbers are called Lyman alpha clouds.
Earlier work suggests that the clouds are large, highly ionized
structures, either pressure confined (eg. Ostriker \& Ikeuchi 1983) or within
cold dark matter structures (eg. Miralda--Escud\'e \& Rees 1993; Cen, Miralda--Escud\'{e} \& Ostriker 1994;
Petitjean \& M\"{u}cket 1995; Zhang, Anninos \& Norman 1995). However, alternative models do
exist: cold, pressure confined clouds (eg. Barcons \& Fabian 1987, but see
Rauch et~al. 1993); various shock mechanisms (Vishniac \& Bust 1987,
Hogan 1987).
Low and medium resolution spectroscopic studies of the forest
generally measure the redshift and equivalent width of each cloud. At
higher resolutions it is possible to measure the redshift ($z$), H~I
column density ($N$, atoms per cm$^2$) and Doppler parameter ($b$,
\hbox{\rm km\thinspace s$^{-1}$}). These are obtained by fitting a Voigt profile to the data
(Rybicki \& Lightman 1979).
Using a list of $N$, $z$ and $b$ measurements, and their associated
error estimates, the number density of the population can be studied.
Most work has assumed that the density is a separable function of $z$,
$N$ and $b$ (Rauch et~al. 1993).
There is a local decrease in cloud numbers near the background quasar
which is normally attributed to the additional ionizing flux in that
region. While this may not be the only reason for the depletion (the
environment near quasars may be different in other respects; the cloud
redshifts may reflect systematic motions) it is expected for the
standard physical models wherever the ionising flux from the quasar
exceeds, or is comparable to, the background.
Since the generally accepted cloud models are both optically thin to
ionizing radiation and highly ionized, it is possible to correct
column densities from the observed values to those that would be seen
if the quasar were more remote. The simplest correction assumes that
the shape of the two incident spectra --- quasar and background ---
are similar. In this case the column density of the neutral fraction
is inversely proportional to the incident ionizing flux.
If the flux from the quasar is known, and the depletion of clouds is
measured from observations, the background flux can be determined. By
observing absorption towards quasars at different redshifts the
evolution of the flux can be measured. Bechtold (1993) summarises
earlier measurements of the ionising flux, both locally and at higher
redshifts.
Recently Loeb \& Eisenstein (1995) have suggested that enhanced clustering near
quasars causes this approach to overestimate the background flux. If
this is the case then an analysis which can also study the evolution
of the effect gives important information. In particular, a decrease
in the inferred flux might be expected after the redshift where the
quasar population appears to decrease. However, if the postulated
clustering enhancement is related to the turn--on of quasars at high
redshift, it may conspire to mask any change in the ionizing
background.
Section \ref{sec:model} describes the model of the population density
in more detail, including the corrections to flux and redshift that
are necessary for a reliable result. The data used are described in
section \ref{sec:data}. In section \ref{sec:errors} the quality of
the fit is assessed and the procedure used to calculate errors in the
derived parameters is explained. Results are given in section
\ref{sec:results} and their implications discussed in section
\ref{sec:discuss}. Section \ref{sec:conc} concludes the paper.
\section{The Model}
\label{sec:model}
\subsection{Population Density}
\label{sec:popden}
The Doppler parameter distribution is not included in the model since
it is not needed to determine the ionizing background from the proximity
effect. The model here assumes that $N$ and $z$ are uncorrelated.
While this is unlikely (Carswell 1995), it should be a good
approximation over the restricted range of column densities considered
here.
The model of the population without Doppler parameters or the
correction for the proximity effect is
\begin{equation} dn(N^\prime,z) = \, A^\prime
(1+z)^{\gamma^\prime}\,(N^\prime)^{-\beta}
\frac{c(1+z)}{H_0(1+2q_0z)^\frac{1}{2}} \ dN^\prime\,dz
\end{equation}
where $H_0$ is the Hubble parameter, $q_0$ is the cosmological
deceleration parameter and $c$ is the speed of light. Correcting for
the ionizing flux and changing from `original' ($N^\prime$) to
`observed' ($N$) column densities, gives
\begin{equation} dn(N,z) = \, A (1+z)^{\gamma^\prime}
\left(\frac{N}{\Delta_F}\right)^{-\beta}
\frac{c(1+z)}{H_0(1+2q_0z)^\frac{1}{2}} \ \frac{dN}{\Delta_F}\,dz
\end{equation}
where
\begin{equation}N = N^\prime \Delta_F \ , \end{equation}
\begin{equation}\Delta_F = \frac{ f_\nu^B }{ f_\nu^B + f_\nu^Q } \ ,
\end{equation}
and $f_\nu^B$ is the background flux, $f_\nu^Q$ is the flux from the
quasar ($4\pi J_\nu(z)$).
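In practice the correction applied to each line is straightforward; a minimal sketch, with arbitrary illustrative fluxes, is given below.
\begin{verbatim}
import numpy as np

def delta_F(f_B, f_Q):
    """Suppression factor Delta_F = f_B/(f_B + f_Q)."""
    return f_B/(f_B + f_Q)

# Illustrative example: quasar ionizing flux at the cloud = 3x background
f_B, f_Q = 1.0, 3.0                  # arbitrary common units
N_obs = 10**13.5                     # observed H I column density (cm^-2)
N_orig = N_obs/delta_F(f_B, f_Q)     # N = N' * Delta_F  =>  N' = N/Delta_F
print("Delta_F = %.2f,  log10(N') = %.2f"
      % (delta_F(f_B, f_Q), np.log10(N_orig)))
\end{verbatim}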
The background flux $J_\nu^B$ may vary with redshift. Here it is
parameterised as a constant, a power law, or two power laws with a
break which is fixed at $z_B=3.25$ (the mid--point of the available
data range). An attempt was made to fit models with $z_B$ as a free
parameter, but the models were too poorly constrained by the data to
be useful.
\begin{eqnarray}
J_\nu(z) &=& 10^{J_{3.25}} \hspace{2.9cm} \mbox{model {\bf B}} \\
J_\nu(z) &=& 10^{J_{3.25}}\left(\frac{1+z}{1+3.25}\right)^{\alpha_1}
\hspace{1.1cm} \mbox{{\bf C}} \\
J_\nu(z) &=& 10^{J_{z_B}}\times\left\{\begin{array}{ll}
\left(\frac{1+z}{1+z_B}\right)^{\alpha_1}&\mbox{$z<z_B$}\\
\left(\frac{1+z}{1+z_B}\right)^{\alpha_2}&\mbox{$z>z_B$}\end{array}\right.
\hspace{0.2cm} \mbox{{\bf D \& E}}
\end{eqnarray}
A large amount of information (figure \ref{fig:nz}) is used to
constrain the model parameters. The high--resolution line lists give
the column density and redshift, with associated errors, for each
line. To calculate the background ionising flux the quasar luminosity
and redshift must be known (table \ref{tab:objects}). Finally, each
set of lines must have observational completeness limits (table
\ref{tab:compl}).
\subsection{Malmquist Bias and Line Blending}
\label{sec:malm}
Malmquist bias is a common problem when fitting models to a population
which increases rapidly at some point (often near an observational
limit). Errors during the observations scatter lines away from the
more populated regions of parameter space and into less populated
areas. Line blending occurs when, especially at high redshifts,
nearby, overlapping lines cannot be individually resolved. This is a
consequence of the natural line width of the clouds and cannot be
corrected with improved spectrographic resolution. The end result is
that weaker lines are not detected in the resultant `blend'. Both
these effects mean that the observed population is not identical to
the `underlying' or `real' distribution.
\subsubsection{The Idea of Data Quality}
To calculate a correction for Malmquist bias we need to understand the
significance of the error estimate since any correction involves
understanding what would happen if the `same' error occurs for
different column density clouds. The same physical cloud cannot be
observed with completely different parameters, but the same
combination of all the complex factors which influence the errors
might affect a line with different parameters in a predictable way.
If this idea of the `quality' of an observation could be quantified it
would be possible to correct for Malmquist bias: rather than the
`underlying' population, one reflecting the quality of the observation
(ie. including the bias due to observational errors) could be fitted
to the data.
For example, if the `quality' of an observation was such that,
whatever the actual column density measured, the error in column
density was the same, then it would be trivial to convolve the
`underlying' model with a Gaussian of the correct width to arrive at
an `observed' model. Fitting the latter to the data would give
parameters unaffected by Malmquist bias. Another example is the case
of galaxy magnitudes. The error in a measured magnitude is a fairly
simple function of source brightness, exposure time, etc., and so it
is possible to correct a flux--limited galaxy survey for Malmquist
bias.
\subsubsection{Using Errors as a Measure of Quality}
It may be possible to describe the quality of a spectrum by the signal
to noise level in each bin. From this one could, for a given line,
calculate the expected error in the equivalent width. The error in
the equivalent width might translate, depending on whether the
absorption line was in the linear or logarithmic portion of the `curve
of growth', to a normal error in either $N$ or $\log(N)$. But in this
idealised case it has been assumed that the spectrum has not been
re--binned, leaving the errors uncorrelated; that the effect of
overlapping, blended lines is unimportant; that there is a sudden
transition from a linear to logarithmic curve of growth; that the
resulting error is well described by a normal distribution. None of
this is likely to be correct and, in any case, the resulting analysis,
with different `observed' populations for every line, would be too
unwieldy to implement, given current computing facilities.
A more pragmatic approach might be possible. A plot of the
distribution of errors with column density (figure~\ref{fig:ndn})
suggests that the errors in $\log(N)$ are of a similar magnitude for a
wide range of lines (although there is a significant correlation
between the two parameters). Could the error in $\log(N)$ be a
sufficiently good indicator of the `quality' of an observation?
\begin{figure}
\epsfxsize=8.5cm
\epsfbox{nn.ps}
\epsfverbosetrue
\caption{The distribution of errors in column density.}
\label{fig:ndn}
\end{figure}
If the number density of the underlying population is $n^\prime(N)\
\hbox{d}\log(N)$ then the observed population density for a line with
error in $\log(N)$ of $\sigma_N$ is:
\begin{equation} n(N)\ \hbox{d}\log(N)
\propto\int_{-\infty}^{\infty}n^\prime\left(N10^x\right)\,\exp\left(\frac{-x^2}{2\sigma_N^2}\right)\,\hbox{d}x\
. \end{equation}
For a power law distribution this can be calculated analytically
and gives an increased probability of seeing lines with larger errors,
as expected. For an underlying population density $N^{-\beta}\
\hbox{d}N$ the increase is $\exp\left((1-\beta)^2(\sigma_N\log
10)^2/2\right)$.
This gives a lower statistical weight to lines with larger errors when
fitting. For this case --- a power law and log--normal errors --- the
weighting is not a function of $N$ directly, which might imply that
any correction would be uniform, with little bias expected for
estimated parameters.
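A quick numerical check of this factor against the convolution integral (with $\log 10$ interpreted as the natural logarithm of 10) is sketched below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta, sigma = 1.5, 0.3

# Observed/underlying enhancement: convolve N^(1-beta) (density per dlogN)
# with the Gaussian error kernel and divide by the kernel normalisation.
weighted = quad(lambda x: 10**(x*(1-beta))*np.exp(-x**2/(2*sigma**2)),
                -10, 10)[0]
kernel   = quad(lambda x: np.exp(-x**2/(2*sigma**2)), -10, 10)[0]

numeric  = weighted/kernel
analytic = np.exp((1-beta)**2*(sigma*np.log(10))**2/2)
print("numeric = %.4f   analytic = %.4f" % (numeric, analytic))
\end{verbatim}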
In practice this correction does not work. This is probably because
the exponential dependence of the correction on $\sigma_N$ makes it
extremely sensitive to the assumptions made in the derivation above.
These assumptions are not correct. For example, it seems that the
correlation between \hbox{$\log( N )$}\ and the associated error is important.
\subsubsection{An Estimation of the Malmquist Bias}
It is possible to do a simple numerical simulation to gauge the
magnitude of the effect of Malmquist bias. A population of ten
million column densities were selected at random from a power law
distribution ($\beta=1.5, \log(N_{\hbox{min}})=10.9,
\log(N_{\hbox{max}})=22.5$) as an `unbiased' sample. Each line was
given an `observed' column density by adding a random error
distributed with a normal or log--normal (for lines where $13.8 <
\log(N) < 17.8$) distribution, with a mean of zero and a standard
deviation in $\log(N)$ of $0.5$. This procedure is a simple
approximation to the type of errors coming from the curve of growth
analysis discussed above, assuming that errors are approximately
constant in $\log(N)$ (figure~\ref{fig:ndn}). The size of the error
is larger than typical, so any inferred change in $\beta$ should be an
upper limit.
Since a power--law distribution diverges as $N\rightarrow0$ a normal
distribution of errors in $N$ would give an infinite number of
observed lines at every column density. This is clearly unphysical
(presumably the errors are not as extended as in a normal distribution
and the population has some low column density limit). Because of
this the `normal' errors above were actually constrained to lie within
3 standard deviations of zero.
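A much reduced version of this simulation is sketched below (Python).
For brevity it uses $10^6$ rather than $10^7$ lines and applies
log--normal scatter of 0.5~dex everywhere, instead of the mixed
normal/log--normal scheme described above, so its numbers are only
indicative; it simply compares the power--law exponent recovered from
the `true' and `observed' samples over the range fitted in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
beta, lo, hi, sigma = 1.5, 10.9, 22.5, 0.5   # values quoted in the text
n = 1_000_000                                # 10^7 in the text; fewer here for speed

# Inverse-transform sample from n(N) dN proportional to N^-beta between 10^lo and 10^hi
a = 1.0 - beta
u = rng.random(n)
N = (10**(a * lo) + u * (10**(a * hi) - 10**(a * lo)))**(1.0 / a)

# 'Observed' column densities: 0.5 dex of log-normal scatter (a simplification)
logN_obs = np.log10(N) + rng.normal(0.0, sigma, n)

def fitted_beta(logN, lo=12.5, hi=16.0, nbins=14):
    """beta from the least-squares slope of log10(counts per bin) vs log10(N)."""
    counts, edges = np.histogram(logN, bins=nbins, range=(lo, hi))
    centres = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    return 1.0 - np.polyfit(centres[keep], np.log10(counts[keep]), 1)[0]

print(fitted_beta(np.log10(N)), fitted_beta(logN_obs))
\end{verbatim}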
\begin{figure}
\epsfxsize=8.5cm
\epsfbox{malm.ps}
\epsfverbosetrue
\caption{A model including Malmquist bias. The bold, solid line is the original sample, the bold, dashed line is the distribution after processing as described in the text. Each curve is shown twice, but the upper right plot has both axes scaled by a factor of 3 and only shows data for $12.5<\log(N)<16$. Reference lines showing the evolution expected for $\beta=1.5$ and $1.45$ (dashed) are also shown (centre).}
\label{fig:malm}
\end{figure}
The results (figure~\ref{fig:malm}) show that Malmquist bias has only
a small effect, at least for the model used here. The main solid line
is the original sample, the dashed line is the observed population.
Note that the results in this paper come from fitting to a sample of
lines with $12.5<\log(N)<16$ (section~\ref{sec:data}) ---
corresponding to the data shown expanded to the upper right of the
figure. Lines with smaller column densities are not shown since that
fraction of the population is affected by the lower density cut--off
in the synthetic data.
A comparison with the two reference lines, showing the slopes for a
population with $\beta=1.5$ or $1.45$ (dashed), indicates that the
expected change in $\beta$ is $\sim0.05$. The population of lines
within the logarithmic region of the curve of growth appears to be
slightly enhanced, but otherwise the two curves are remarkably
similar. The variations at large column densities are due to the
small number of strong lines in the sample.
\subsubsection{Other Approaches}
What other approaches can be used to measure or correct the effects of
Malmquist bias and line blending? Press \& Rybicki (1993) used a completely
different analysis of the Lyman--$\alpha$ forest. Generating and reducing synthetic
data, with a known background and cloud population, would allow us to
assess the effect of blending. Changing (sub--setting) the sample of
lines that is analysed will alter the relative (and, possibly,
absolute) importance of the two effects.
The procedure used by Press \& Rybicki (1993) is not affected by Malmquist bias
or line blending, but it is difficult to adapt to measure the ionizing
background.
Profile fitting to high--resolution data is a slow process, involving
significant manual intervention (we have tried to automate
profile--fitting with little success). An accurate measurement of the
systematic error in the ionizing background would need an order of
magnitude more data than is used here to get sufficiently low error
limits. Even if this is possible --- the analysis would need a
prohibitive amount of CPU time --- it would be sufficient work for a
separate, major paper (we would be glad to supply our software to
anyone willing to try this).
Taking a sub--set of the data is not helpful unless it is less likely
to be affected by the biases described above. One approach might be
to reject points with large errors, or large relative errors, in
column density since these are more affected by Malmquist bias.
However, this would make the observations incomplete in a very poorly
understood way. For example, relative errors are correlated with
column density (as noted above) and so rejecting lines with larger
relative errors would preferentially reject higher column density
lines. There is no sense in trying to measure one bias if doing so
introduces others.
Unlike rejecting lines throughout the sample, changing the completeness
limit does not alter the coverage of the observations (or rather, it
does so in a way that is understood and corrected for within the
analysis). Raising the completeness limits should make line blending
less important since weaker lines, which are most likely to be
blended, are excluded from the fit. Whether it affects the Malmquist
bias depends on the distribution of errors.
For blended lines, which tend to be weak, raising the completeness
limit should increase the absolute value of $\beta$ since the more
populous region of the (hypothetical) power--law population of column
densities will no longer be artificially depleted. The effect on
$\gamma$ is more difficult to assess since it is uncertain whether the
completeness limits are correct at each redshift. If the limits
increase too rapidly with redshift, for example, then raising them
further will reduce blending most at lower redshifts, lowering
$\gamma$. But if they are increasing too slowly then the converse
will be true.
\subsubsection{Conclusions}
Until either profile--fitting is automated, or the method of
Press \& Rybicki (1993) can be modified to include the proximity effect, these
two sources of uncertainty --- Malmquist bias and line blending ---
will continue to be a problem for any analysis of the Lyman--$\alpha$ forest. However,
from the arguments above, it is likely that the effect of Malmquist
bias is small and that, by increasing the completeness limit, we can
assess the magnitude of the effect of line blending.
\subsection{Flux Calculations}
\label{sec:fluxcal}
\subsubsection{Galactic Extinction}
Extinction within our Galaxy reduces the apparent luminosity of the
quasars and so lowers the estimate of the background. Since the
absorption varies with frequency this also alters the observed
spectral slope.
Observed fluxes were corrected using extinction estimates derived from
the H~I measurements of Heiles \& Cleary (1979) for Q2204--573 and
Stark et~al. (1992) for all the other objects. H I column densities were
converted to $E(B-V)$ using the relationships: \begin{eqnarray}
E(B-V)&=&\frac{N_{\hbox{\footnotesize
H~I}}}{5.27\,10^{21}}\quad\hbox{if\ } \frac{N_{\hbox{\footnotesize
H~I}}}{5.27\,10^{21}}<0.1\\ E(B-V)&=&\frac{N_{\hbox{\footnotesize
H~I}}}{4.37\,10^{21}}\quad\hbox{otherwise} \end{eqnarray} where the
first value comes from Diplas \& Savage (1994) and the second value, which
compensates for the presence of H$_2$, is the first scaled by the
ratio of the conversions given in Bohlin, Savage \& Drake (1978). A ratio
$R=A(V)/E(B-V)$ of 3.0 (Lockman \& Savage 1995) was used and variations of
extinction with frequency, $A(\lambda)/A(V)$ were taken from
Cardelli, Clayton \& Mathis (1989).
The correction to the observed index, $\alpha_o$, of the power--law
continuum,
\begin{equation} f_\nu\propto\nu^{-\alpha}\ , \end{equation}
was calculated using
\begin{equation}
\alpha_o=\alpha+\frac{A(V)}{2.5}\frac{\partial}{\partial\ln\nu}\frac{A(\nu)}{A(V)}
\end{equation}
which, using the notation of Cardelli, Clayton \& Mathis (1989), becomes
\begin{equation}
\alpha_o=\alpha+\frac{A(V)}{2.5\,10^6c}\,\nu\ln(10)\,\frac{\partial}{\partial
y}\left(a(x)+\frac{b(x)}{R}\right)\ . \end{equation}
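For reference, the conversion from H~I column density to extinction and
the resulting flux correction amount to only a few lines of code. The
sketch below (Python; the function names and the example values are
illustrative) applies the two--branch $E(B-V)$ relation above with
$R=3.0$; the ratio $A(\lambda)/A(V)$ at the observed wavelength is left
as an input, to be taken from the Cardelli, Clayton \& Mathis (1989)
curve, which is not reproduced here.
\begin{verbatim}
import numpy as np

def ebv_from_nhi(n_hi):
    """E(B-V) from the Galactic H I column density (cm^-2), two-branch relation."""
    ratio = n_hi / 5.27e21
    return ratio if ratio < 0.1 else n_hi / 4.37e21

def deredden_flux(f_obs, n_hi, Alam_over_AV, R=3.0):
    """Correct an observed flux for Galactic extinction.
    Alam_over_AV is A(lambda)/A(V) at the observed wavelength, taken from the
    Cardelli, Clayton & Mathis (1989) curve (not reimplemented here)."""
    A_lam = R * ebv_from_nhi(n_hi) * Alam_over_AV
    return f_obs * 10**(0.4 * A_lam)

# Illustrative call: N(H I) = 2e20 cm^-2, A(lambda)/A(V) ~ 1.3 in the observed blue
print(deredden_flux(1.0, 2e20, 1.3))
\end{verbatim}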
\subsubsection{Extinction in Damped Systems}
\label{sec:dampcor}
Two quasars are known to have damped absorption systems along the line
of sight (Wolfe et~al. 1995). The extinction due to these systems is not
certain, but model {\bf E} includes the corrections listed in
table~\ref{tab:damp}. These have been calculated using the SMC
extinction curve in Pei (1992), with a correction for the
evolution of heavy element abundances taken from Pei \& Fall (1995). The
SMC extinction curve is most suitable for these systems since they do
not appear to have structure at 2220~\AA\ (Boiss\'{e} \& Bergeron 1988), unlike LMC
and Galactic curves.
\begin{table}
\begin{tabular}{lllll}
Object&$\log(N_{\hbox{\footnotesize H I}})$&$z_{\hbox{\footnotesize abs}}$&$A(V)$&$\Delta_\alpha$\\
Q0000--263&21.3&3.39&0.10 &0.078\\
Q2206--199&20.7&1.92&0.14 &0.10\\
$ $&20.4&2.08&0.019 &0.049\\
\end{tabular}
\caption{
The damped absorption systems and associated corrections (at 1450~\AA\ in the quasar's rest--frame) for model {\bf E}.}
\label{tab:damp}
\end{table}
\subsubsection{Absorption by Clouds near the Quasar}
\label{sec:internal}
The amount of ionizing flux from the background quasar incident on a
cloud is attenuated by all the other clouds towards the source. If
one of the intervening clouds has a large column density this can
significantly reduce the extent of the effect of the quasar.
To correct for this the fraction of ionizing photons from the quasar
not attenuated by the intervening H~I and He~II absorption is
estimated before fitting the model. A power--law spectrum is assumed
and the attenuation is calculated for each cloud using the
cross--sections given in Osterbrock (1989). The ratio
$n(\hbox{He~II})/n(\hbox{H~I})$ within the clouds will depend on
several unknown factors (the true energy distribution of the ionizing
flux, cloud density, etc.), but was assumed to be 10 (Sargent et~al. 1980,
Miralda--Escud\'e 1993).
The attenuation is calculated using all the observed intervening
clouds. This includes clouds which are not included in the main fit
because they lie outside the column density limits, or are too close
to the quasar ($\Delta z \leq 0.003$).
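A hedged sketch of this attenuation calculation is given below
(Python). It uses simple hydrogenic approximations to the H~I and He~II
photoionization cross--sections (threshold values of
$6.3\times10^{-18}$ and $1.6\times10^{-18}$~cm$^2$ with a $\nu^{-3}$
fall--off) rather than the Osterbrock (1989) expressions used in the
analysis above, assumes $n(\hbox{He~II})/n(\hbox{H~I})=10$, and counts
the surviving fraction of hydrogen--ionizing photons for a power--law
source. The function names are illustrative and the numbers returned
are only indicative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

NU_H = 3.29e15            # H I ionization threshold (Hz)
SIG_H = 6.3e-18           # cm^2 at the H I threshold
SIG_HE2 = SIG_H / 4.0     # hydrogenic scaling; He II threshold at 4*NU_H

def tau(nu, N_HI, he2_to_h1=10.0):
    """Approximate optical depth of one cloud with H I column N_HI (cm^-2)."""
    x = nu / NU_H
    t = N_HI * SIG_H * x**-3
    if x >= 4.0:
        t += he2_to_h1 * N_HI * SIG_HE2 * (x / 4.0)**-3
    return t

def transmitted_fraction(logN_list, alpha=1.0, x_max=100.0):
    """Fraction of H-ionizing photons from a power-law source (f_nu ~ nu^-alpha)
    surviving the listed intervening H I columns (log10 cm^-2)."""
    cols = [10**l for l in logN_list]
    photons = lambda nu: nu**(-alpha - 1.0)          # photon number spectrum
    attenuated = lambda nu: photons(nu) * np.exp(-sum(tau(nu, N) for N in cols))
    num, _ = quad(attenuated, NU_H, x_max * NU_H, points=[4.0 * NU_H])
    den, _ = quad(photons, NU_H, x_max * NU_H)
    return num / den

print(transmitted_fraction([13.5]))   # ~1: typical forest clouds barely matter
print(transmitted_fraction([18.0]))   # heavily suppressed: a Lyman-limit system
\end{verbatim}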
For most clouds ($\log(N)\sim13.5$) near enough to the quasar to
influence the calculation of the background this correction is
unimportant (less than 1\%). However, large ($\log(N)\sim18$ or
larger) clouds attenuate the flux to near zero. This explains why
clouds with $\Delta_F\sim1$ are apparent close to the QSO in
figure~\ref{fig:prox}.
This relatively sudden change in optical depth at $\log(N)\sim18$ is
convenient since it makes the correction insensitive to any
uncertainties in the calculation (eg. $n(\hbox{He~II})/n(\hbox{H~I})$,
the shape of the incident spectrum, absorption by heavier elements)
--- for most column densities any reasonable model is either
insignificant ($\log(N)<17$) or blocks practically all ionizing
radiation ($\log(N)>19$).
In fact, the simple correction described above is in reasonable
agreement with CLOUDY models, for even the most critical column
densities. A model cloud with a column density of $\log(N)=13.5$ and
constant density was irradiated by an ionizing spectrum based on that
of Haardt \& Madau (1996). Between the cloud and quasar the model included an
additional absorber (constant density, $\log(N)=18$) which modified
the quasar's spectrum. The effect of the absorber (for a range of
heavy element abundances from pure H to primordial to 0.1 solar) on
the ionized fraction of H~I was consistent with an inferred decrease
in the quasar flux of about 80\%. In comparison, the correction
above, using a power--law spectrum with $\alpha=1$, gave a reduction
of 60\% in the quasar flux. These two values are in good agreement,
considering the exponential dependence on column densities and the
uncertainty in spectral shape. At higher and lower absorber column
densities the agreement was even better, as expected.
\subsection{Redshift Corrections}
\label{sec:redcor}
Gaskell (1982) first pointed out a discrepancy between the redshifts
measured from Lyman $\alpha$ and C~IV emission, and those from lower
ionization lines (eg. Mg~II, the Balmer series). Lower ionization
lines have a larger redshift. If the systemic redshift of the quasar
is assumed to be that of the extended emission (Heckman et~al. 1991),
molecular emission (Barvainis et~al. 1994), or forbidden line emission
(Carswell et~al. 1991), then the low ionization lines give a better measure
of the rest--frame redshift.
Using high ionization lines gives a reduced redshift for the quasar,
implies a higher incident flux on the clouds from the quasar, and, for
the same local depletion of lines, a higher estimate of the
background.
Espey (1993) re--analysed the data in Lu, Wolfe \& Turnshek (1991), correcting
systematic errors in the quasar redshifts. The analysis also
considered corrections for optically thick and thin universes and the
differences between the background and quasar spectra, but the
dominant effect in reducing the estimate from 174 to 50~J$_{23}$\ was the
change in the quasar redshifts.
To derive a more accurate estimate of the systemic velocity of the
quasars in our sample we made use of published redshift measurements
of low ionization lines, or measured these where spectra were
available to us. The lines used depended on the redshift and line
strengths in the data, but typically were one or more of
Mg~II$\,2798\,$\AA, O~I$\,1304\,$\AA, and C~II$\,1335\,$\AA.
When no low ionization line observations were available (Q0420--388,
Q1158--187, Q2204--573) we applied a mean correction to the high
ionization line redshifts. These corrections are based on
measurements of the relative velocity shifts between high and low
ionization lines in a large sample of quasars (Espey \& Junkkarinen 1996). They
find a correlation between quasar luminosity and mean velocity
difference ($\Delta_v$) with an empirical relationship given by:
\begin{equation} \Delta_v=\exp(0.66\log L_{1450}-13.72)\
\hbox{\rm km\thinspace s$^{-1}$}\end{equation} where $L_{1450}$ is the rest--frame luminosity
(ergs Hz$^{-1}$ s$^{-1}$) of the quasar at 1450~\AA\ for $q_0=0.5$ and H$_0=100\
\hbox{\rm km\thinspace s$^{-1}$}/\hbox{Mpc}$.
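To make the correction explicit, a small sketch follows (Python). It
assumes that $\log$ in the relation above denotes $\log_{10}$ and that
the velocity shift is converted to a redshift offset as
$\Delta z=\Delta_v(1+z)/c$, which is one common convention; the
function name and the example values are purely illustrative.
\begin{verbatim}
import numpy as np

C_KMS = 2.998e5   # speed of light (km/s)

def systemic_redshift(z_highion, L_1450):
    """Shift a redshift measured from high-ionization lines towards the systemic
    value using the mean offset of Espey & Junkkarinen (1996).
    L_1450: rest-frame luminosity at 1450 A (erg/Hz/s, q0=0.5, H0=100)."""
    dv = np.exp(0.66 * np.log10(L_1450) - 13.72)        # km/s
    return z_highion + dv * (1.0 + z_highion) / C_KMS   # low-ion lines sit at larger z

# Illustrative values: z(high-ion) = 3.10, L_1450 = 3.0e31 erg/Hz/s
print(systemic_redshift(3.10, 3.0e31))
\end{verbatim}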
\section{The Data}
\label{sec:data}
\begin{table*}
\begin{tabular}{lrrrrrrrr}
&&&\multicolumn{2}{c}{$L_\nu(1450)$}&\multicolumn{4}{c}{Typical change in $\log(\hbox{J$_{23}$})$}\\
\hfill Object \hfill&\hfill $z$ \hfill&\hfill $\alpha$ \hfill&\hfill $q_0=0$ \hfill&$\hfill q_0=0.5$ \hfill&\hfill $z$ \hfill&\hfill $f_\nu$ \hfill&\hfill $\alpha$ \hfill&\hfill Total \hfill\\
Q0000--263 & 4.124 & 1.02 & 13.5\ten{31} & 2.8\ten{31} & $-$0.09 & $+$0.02 & $+$0.01 & $-$0.05\\
Q0014+813 & 3.398 & 0.55 & 34.0\ten{31} & 8.6\ten{31} & $-$0.19 & $+$0.33 & $+$0.21 & $+$0.36\\
Q0207--398 & 2.821 & 0.41 & 5.6\ten{31} & 1.7\ten{31} & $-$0.16 & $+$0.03 & $+$0.02 & $-$0.11\\
Q0420--388 & 3.124 & 0.38 & 10.9\ten{31} & 3.0\ten{31} & $-$0.16 & $+$0.04 & $+$0.02 & $-$0.10\\
Q1033--033 & 4.509 & 0.46 & 5.5\ten{31} & 1.0\ten{31} & $-$0.05 & $+$0.12 & $+$0.00 & $+$0.06\\
Q1100--264 & 2.152 & 0.34 & 13.8\ten{31} & 5.3\ten{31} & $-$0.42 & $+$0.19 & $+$0.11 & $-$0.13\\
Q1158--187 & 2.454 & 0.50 & 42.2\ten{31} & 14.4\ten{31} & $-$0.46 & $+$0.09 & $+$0.06 & $-$0.31\\
Q1448--232 & 2.223 & 0.61 & 9.6\ten{31} & 3.5\ten{31} & $-$0.34 & $+$0.28 & $+$0.17 & $+$0.11\\
Q2000--330 & 3.783 & 0.85 & 12.7\ten{31} & 2.9\ten{31} & $-$0.10 & $+$0.16 & $+$0.10 & $+$0.15\\
Q2204--573 & 2.731 & 0.50 & 42.8\ten{31} & 13.3\ten{31} & $-$0.35 & $+$0.06 & $+$0.03 & $-$0.25\\
Q2206--199 & 2.574 & 0.50 & 19.4\ten{31} & 6.3\ten{31} & $-$0.31 & $+$0.05 & $+$0.03 & $-$0.22\\
Mean & 3.081 & 0.56 & 19.1\ten{31} & 5.7\ten{31} & $-$0.24 & $+$0.12 & $+$0.07 & $-$0.04\\
\end{tabular}
\caption{
The systemic redshifts, power law continuum exponents, and rest frame luminosities (ergs Hz$^{-1}$ s$^{-1}$\ at 1450~\AA) for the quasars used. $H_0=100$ \hbox{\rm km\thinspace s$^{-1}$}/Mpc and luminosity scales as $H_0^{-2}$. The final four columns are an estimate of the relative effect of the various corrections in the paper (systemic redshift, and corrections to the flux and spectral slope for reddening).}
\label{tab:objects}
\end{table*}
Objects, redshifts and fluxes are listed in table \ref{tab:objects}.
A total of 1675 lines from 11 quasar spectra were taken from the
literature. Of these, 844 lie within the range of redshifts and
column densities listed in table \ref{tab:compl}, although the full
sample is used to correct for absorption between the quasar and
individual clouds (section~\ref{sec:internal}). The lower column
density limits are taken from the references; upper column densities
are fixed at $\hbox{$\log( N )$}=16$ to avoid the double power law distribution
discussed by Petitjean et~al. (1993). Fluxes are calculated using standard
formulae, assuming a power law spectrum ($f_\nu \propto
\nu^{-\alpha}$), with corrections for reddening. Low ionization line
redshifts were used where possible, otherwise high ionization lines
were corrected using the relation given in section~\ref{sec:redcor}.
Where available, values of $\alpha$ uncorrected for absorption were
used and then corrected for reddening using the relation above. If no $\alpha$ was available, a
value of 0.5 was assumed (Francis 1993).
References and notes on the calculations for each object follow:
\begin{description}
\item[Q0000--263] Line list from Cooke (1994). There is some
uncertainty in the wavelength calibration for these data, but the
error ($\sim30 \hbox{\rm km\thinspace s$^{-1}$}$) is much less than the uncertainty in the quasar
redshift ($\sim900 \hbox{\rm km\thinspace s$^{-1}$}$) which is taken into account in the error
estimate (section~\ref{sec:erress}). Redshift this paper
(section~\ref{sec:redcor}). Flux and $\alpha$ measurements from
Sargent, Steidel \& Boksenberg (1989).
\item[Q0014+813] Line list from Rauch et~al. (1993). Redshift this paper
(section~\ref{sec:redcor}). Flux and $\alpha$ measurements from
Sargent, Steidel \& Boksenberg (1989).
\item[Q0207--398] Line list from Webb (1987). Redshift (O I line)
from Wilkes (1984). Flux and $\alpha$ measurements from
Baldwin et~al. (1995).
\item[Q0420--388] Line list from Atwood, Baldwin \& Carswell (1985). Redshift, flux and
$\alpha$ from Osmer (1979) (flux measured from plot). The redshifts
quoted in the literature vary significantly, so a larger error (0.01)
was used in section~\ref{sec:erress}.
\item[Q1033--033] Line list and flux from Williger et~al. (1994). From their
data, $\alpha=0.78$, without a reddening correction. Redshift this
paper (section~\ref{sec:redcor}).
\item[Q1100--264] Line list from Cooke (1994). Redshift from
Espey et~al. (1989) and $\alpha$ from Tytler \& Fan (1992). Flux measured from
Osmer \& Smith (1977).
\item[Q1158--187] Line list from Webb (1987). Redshift from
Kunth, Sargent \& Kowal (1981). Flux from Adam (1985).
\item[Q1448--232] Line list from Webb (1987). Redshift from
Espey et~al. (1989). Flux and $\alpha$ measured from Wilkes et~al. (1983),
although a wide range of values exist in the literature and so a
larger error (0.6 magnitudes in the flux) was used in
section~\ref{sec:erress}.
\item[Q2000--330] Line list from Carswell et~al. (1987). Redshift this paper
(section~\ref{sec:redcor}). Flux and $\alpha$ measurements from
Sargent, Steidel \& Boksenberg (1989).
\item[Q2204--573] Line list from Webb (1987). Redshift from
Wilkes et~al. (1983). V magnitude from Adam (1985).
\item[Q2206--199] Line list from Rauch et~al. (1993). Redshift this paper
(section~\ref{sec:redcor}). V magnitude from Hewitt \& Burbidge (1989).
\end{description}
\begin{table}
\begin{tabular}{cccccc}
Object&\multispan2{\hfil$N$\hfil}&\multispan2{\hfil$z$\hfil}&Number\\
name&Low&High&Low&High&of lines\\
Q0000--263& 14.00 & 16.00 & 3.1130 & 3.3104 & 62 \\
& & & 3.4914 & 4.1210 & 101 \\
Q0014+813& 13.30 & 16.00 & 2.7000 & 3.3800 & 191 \\
Q0207--398& 13.75 & 16.00 & 2.0765 & 2.1752 & 11 \\
& & & 2.4055 & 2.4878 & 7 \\
& & & 2.6441 & 2.7346 & 6 \\
& & & 2.6852 & 2.7757 & 9 \\
& & & 2.7346 & 2.8180 & 8 \\
Q0420--388& 13.75 & 16.00 & 2.7200 & 3.0800 & 73 \\
Q1033--033& 14.00 & 16.00 & 3.7000 & 3.7710 & 16 \\
& & & 3.7916 & 3.8944 & 21 \\
& & & 3.9191 & 4.0301 & 24 \\
& & & 4.0548 & 4.1412 & 25 \\
& & & 4.1988 & 4.3139 & 30 \\
& & & 4.3525 & 4.4490 & 23 \\
& & & 4.4517 & 4.4780 & 2 \\
Q1100--264& 12.85 & 16.00 & 1.7886 & 1.8281 & 2 \\
& & & 1.8330 & 1.8733 & 8 \\
& & & 1.8774 & 1.9194 & 13 \\
& & & 1.9235 & 1.9646 & 9 \\
& & & 1.9696 & 2.0123 & 10 \\
& & & 2.0189 & 2.0617 & 6 \\
& & & 2.0683 & 2.1119 & 18 \\
Q1158--187& 13.75 & 16.00 & 2.3397 & 2.4510 & 9 \\
Q1448--232& 13.75 & 16.00 & 2.0847 & 2.1752 & 9 \\
Q2000--330& 13.75 & 16.00 & 3.3000 & 3.4255 & 23 \\
& & & 3.4580 & 3.5390 & 15 \\
& & & 3.5690 & 3.6440 & 18 \\
& & & 3.6810 & 3.7450 & 11 \\
Q2204--573& 13.75 & 16.00 & 2.4467 & 2.5371 & 10 \\
& & & 2.5454 & 2.6276 & 12 \\
& & & 2.6441 & 2.7280 & 8 \\
Q2206--199& 13.30 & 16.00 & 2.0864 & 2.1094 & 2 \\
& & & 2.1226 & 2.1637 & 8 \\
& & & 2.1760 & 2.2188 & 5 \\
& & & 2.2320 & 2.2739 & 7 \\
& & & 2.2887 & 2.3331 & 7 \\
& & & 2.3471 & 2.3940 & 10 \\
& & & 2.4105 & 2.4574 & 4 \\
& & & 2.4754 & 2.5215 & 11 \\
\multicolumn{2}{l}{Total: 11 quasars }& & & & 844 \\
\end{tabular}
\caption{Completeness limits.}
\label{tab:compl}
\end{table}
Table~\ref{tab:objects} also gives an estimate of the relative effect
of the different corrections made here. Each row gives the typical
change in $\log(\hbox{J$_{23}$})$ that would be estimated using that
quasar alone, with a typical absorption cloud 2~Mpc from the quasar
($q_0=0.5, H_0=100\,\hbox{\rm km\thinspace s$^{-1}$}/\hbox{Mpc}$). The correction to obtain the
systemic redshift is not necessary for any quasar whose redshift has
been determined using low ionization lines. In such cases the value
given is the expected change if the redshift measurement had not been
available.
Using the systemic redshift always reduces the background estimate,
while correcting for reddening always acts in the opposite sense. The
net result, in the final column of table~\ref{tab:objects}, depends on
the relative strength of these two factors. For most objects the
redshift correction dominates, lowering $\log(\hbox{J$_{23}$})$ by $\sim
0.15$ (a decrease of 30\%), but for four objects the reddening is more
important (Q0014+813, the most reddened, has $B-V = 0.33$; the average
for all other objects is $0.09$).
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{nz0.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{nz1.ps}
}
\epsfverbosetrue
\caption{
The lines in the full sample (left) used to calculate the attenuation
of the quasar flux by intervening clouds and the restricted sample
(right) to which the model was fitted.}
\label{fig:nz}
\end{figure*}
Figure \ref{fig:nz} shows the distribution of column density, $N$, and
redshift, $z$, for the lines in the sample. The completeness limit
was taken from the literature and depends on the quality of the
spectra. There is also a clear trend with redshift: as the number
density increases, weak lines become increasingly difficult to separate
in complex blends, whatever the data quality (see
section~\ref{sec:malm} for a more detailed discussion of line
blending).
\section{Fit Quality and Error Estimates}
\label{sec:errors}
\subsection{The Quality of the Fit}
\label{sec:finalq}
\begin{table}
\begin{tabular}{c@{\hspace{3em}}cc@{\hspace{3em}}cc}
&\multicolumn{2}{c}{Without Inv. Eff.\hfill}&\multicolumn{2}{c}{With Inv. Eff.\hfill}\\
Variable&Statistic&Prob.&Statistic&Prob.\\
$N$ & 1.11 & 0.17 & 1.05 & 0.22 \\
$z$ & 1.12 & 0.16 & 1.05 & 0.22 \\
\end{tabular}
\caption{
The K--S statistics measuring the quality of the fit.}
\label{tab:ks}
\end{table}
Figures \ref{fig:cum1} and \ref{fig:cum2} show the cumulative data and
model for each variable using two models: one includes the proximity
effect (model {\bf B}), one does not (model {\bf A}). The
probabilities of the associated K--S statistics are given in table
\ref{tab:ks}. For the column density plots the worst discrepancy
between model and data occurs at $\log(N)=14.79$. The model with the
proximity effect (to the right) has slightly more high column density
clouds, as would be expected, although this is difficult to see in the
figures (note that the dashed line --- the model --- is the curve that
has changed). In the redshift plots the difference between the two
models is more apparent because the changes are confined to a few
redshifts, near the quasars, rather than, as in the previous figures,
spread across a wide range of column densities. The apparent
difference between model and data is larger for the model that
includes the proximity effect (on the right of figure~\ref{fig:cum2}).
However, this is an optical illusion as the eye tends to measure the
vertical difference between horizontal, rather than diagonal, lines.
In fact the largest discrepancy in the left figure is at $z=3.323$,
shifting to $z=3.330$ when the proximity effect is included. It is
difficult to assess the importance of individual objects in cumulative
plots, but the main difference in the redshift figure occurs near the
redshift of Q0014+813. However, since this is also the case without
the proximity effect (the left--hand figure) it does not seem to be
connected to the unusually large flux correction for this object
(section~\ref{sec:data}).
In both cases --- with and without the proximity effect --- the model
fits the data reasonably well. It is not surprising that including
the proximity effect only increases the acceptability of the fit
slightly, as the test is dominated by the majority of lines which are
not influenced by the quasar. The likelihood ratio test that we use
in section \ref{sec:disevid} is a more powerful method for comparing
two models, but can only be used if the models are already a
reasonable fit (as shown here).
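The shape of this goodness--of--fit calculation can be sketched as
follows (Python). The model CDF here is just a truncated power law in
column density, standing in for the full population model of section
\ref{sec:model}, and the `data' are drawn from that same model; in
practice the observed line list and the fitted model are used. Note
also that the statistic values in table~\ref{tab:ks} cannot be the raw
K--S $D$ (which never exceeds one) and are presumably a rescaled form
of it.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
beta, lo, hi = 1.67, 12.5, 16.0      # best-fit exponent and fitted log(N) range
a = 1.0 - beta

def model_cdf(logN):
    """CDF of log(N) for a truncated power law n(N) dN ~ N^-beta on [10^lo, 10^hi]."""
    return (10**(a * logN) - 10**(a * lo)) / (10**(a * hi) - 10**(a * lo))

# Toy 'observations': 844 column densities drawn from the same model
u = rng.random(844)
logN_obs = np.log10((10**(a * lo) + u * (10**(a * hi) - 10**(a * lo)))**(1.0 / a))

D, p = stats.kstest(logN_obs, model_cdf)
print(D, p)    # small D, high p: the sample is consistent with the model
\end{verbatim}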
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{ks0.n.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{ks1.n.ps}
}
\epsfverbosetrue
\caption{
The cumulative data (solid line) and model (dashed), integrating over
$z$, for the lines in the sample, plotted against column
density (\hbox{$\log( N )$}). The model on the right includes the proximity effect.}
\label{fig:cum1}
\end{figure*}
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{ks0.z.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{ks1.z.ps}
}
\epsfverbosetrue
\caption{
The cumulative data (solid line) and model (dashed), integrating over
$N$, for the lines in the sample, plotted against redshift.
The model on the right includes the proximity effect.}
\label{fig:cum2}
\end{figure*}
\subsection{Sources of Error}
\label{sec:erress}
There are two sources of stochastic uncertainty in the values of
estimated parameters: the finite number of observations and the error
associated with each observation (column densities, redshifts, quasar
fluxes, etc.).
The first source of variation --- the limited information available
from a finite number of observations --- can be assessed by examining
the distribution of the posterior probabilities for each parameter.
This is described in the following section.
The second source of variation --- the errors associated with each
measurement --- can be assessed by repeating the analysis for
simulated sets of data. In theory these errors could have been
included in the model and their contribution would have been apparent
in the posterior distribution. In practice there was insufficient
information or computer time to make a detailed model of the error
distribution.
Instead, ten different sets of line--lists were created. Each was
based on the original, with each new value, $X$, calculated from the
observed value $x$ and error estimate $\sigma_X$: \begin{equation} X =
x + a \sigma_X\ ,\end{equation} where $a$ was selected at random from
a (approximate) normal distribution with zero mean and unit variance.
The redshift (standard error 0.003) and luminosity (standard error 0.2
magnitudes) of each background quasar were also changed. For
Q0420--388 the redshift error was increased to 0.1 and, for
Q1448--232, the magnitude error was increased to 0.6 magnitudes. The
model was fitted to each data set and the most likely values of the
parameters recorded. A Gaussian was fitted to the distribution of
values. In some cases (eg.\ figure~\ref{fig:alphas_d}) a Gaussian
curve may not be the best way to describe the distribution of
measurements. However, since the error in the parameters is dominated
by the small number of data points, rather than the observational
errors, using a different curve will make little difference to the
final results.
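A minimal sketch of this resampling step is given below (Python). The
fitting function is a placeholder (here simply the sample mean of
$\log N$) standing in for the maximum--likelihood fit actually used,
and all names and example numbers are illustrative; the quoted errors
on the quasar redshift and magnitude are those given above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def perturb(logN, sig_logN, z_qso, mag_qso, sig_z=0.003, sig_mag=0.2):
    """One synthetic line list: each value shifted by X = x + a*sigma_X, a ~ N(0,1)."""
    return (logN + rng.standard_normal(logN.size) * sig_logN,
            z_qso + rng.standard_normal() * sig_z,
            mag_qso + rng.standard_normal() * sig_mag)

def scatter_of_fits(fit, logN, sig_logN, z_qso, mag_qso, n_sets=10):
    """Refit n_sets perturbed data sets; summarise the spread of best-fit values."""
    vals = [fit(*perturb(logN, sig_logN, z_qso, mag_qso)) for _ in range(n_sets)]
    return np.mean(vals), np.std(vals, ddof=1)

# Toy usage: the 'fit' is a stand-in for the maximum-likelihood fit of the model
logN = rng.normal(14.0, 0.8, 200)
print(scatter_of_fits(lambda lN, z, m: lN.mean(), logN, 0.3, 3.1, 17.5))
\end{verbatim}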
Since the two sources of stochastic error are not expected to be
correlated they can be combined to give the final distribution of the
parameters. The Gaussian fitted to the variation from measurement
errors is convolved with the posterior distribution of the variable.
The final, normalized distribution is then a good approximation to the
actual distribution of values expected.
This procedure is shown in figures \ref{fig:beta_gamma_d}\ to
\ref{fig:alphas_d}. For each parameter in the model the `raw'
posterior distribution is plotted (thin line and points). The
distribution of values from the synthetic data is shown as a dashed
histogram and the fitted Gaussian is a thin line. The final
distribution, after convolution, is the heavy line. In general the
uncertainties due to a finite data set are the main source of error.
\subsection{Error Estimates from Posterior Probabilities\label{sec:postprob}}
\newcommand{\tb}{{\bf\theta}}
\newcommand{\rb}{{\bf R_\nu}}
\newcommand{\bn}{{b_\nu}}
If $p({\bf y}|\tb)$ is the likelihood of the observations (${\bf y}$), given
the model (with parameters $\tb$), then we need an expression for the
posterior probability of a `parameter of interest', $\eta$. This
might be one of the model parameters, or some function of the
parameters (such as the background flux at a certain redshift):
\begin{equation}\eta = g(\tb)\ .\end{equation}
For example, the value of J$_{23}$\ at a particular redshift for models
{\bf C} to {\bf E} in section~\ref{sec:popden} is a linear function of
several parameters (two or more of $J_{3.25}, J_{z_B}, \alpha_1$, and
$\alpha_2$). To calculate how likely a particular flux is the
probabilities of all the possible combinations of parameter values
consistent with that value must be considered: it is necessary to
integrate over all possible values of $\beta$ and $\gamma^\prime$, and
all values of $J_{z_B}, \alpha_1$, etc. which are consistent with
J$_{23}$(z) having that value.
In other words, to find the posterior distribution of $\eta$,
$\pi(\eta|{\bf y})$, we must marginalise the remaining model parameters:
\begin{equation}\pi(\eta|{\bf y})=\lim_{\gamma \rightarrow 0}
\frac{1}{\gamma} \int_D \pi(\tb|{\bf y})\,d\tb\ ,\end{equation} where D is
the region of parameter space for which $\eta \leq g(\tb) \leq \eta +
\gamma$ and $\pi(\tb|{\bf y}) \propto \pi(\tb) p({\bf y}|\tb)$, the posterior
density of $\tb$ with prior $\pi(\tb)$.
A uniform prior is used here for all parameters (equivalent to normal
maximum likelihood analysis). Explicitly, power law exponents and the
logarithm of the flux have prior distributions which are uniform over
$[-\infty,+\infty]$.
Doing the multi--dimensional integral described above would require a
large (prohibitive) amount of computer time. However, the
log--likelihood can be approximated by a second order series expansion
in $\tb$. This is equivalent to assuming that the other parameters
are distributed as a multivariate normal distribution, and the result
can then be calculated analytically. Leonard, Hsu \& Tsui (1989) show
that this approximation gives the following result when $g(\tb)$ is a
linear function of $\tb$: \begin{equation}\bar{\pi}(\eta|{\bf y}) \propto
\frac{\pi_M(\eta|{\bf y})}{|\rb|^{1/2}(\bn^T\rb^{-1}\bn)^{1/2}}\
,\end{equation} where \begin{eqnarray} \pi_M(\eta|{\bf y}) & = &
\sup_{\tb:g(\tb)=\eta} \pi(\tb|{\bf y})\\&=&\pi(\tb_\eta|{\bf y})\ ,\\ \bn & =
& \left.\frac{\partial g(\tb)}{\partial \tb}\right|_{\tb=\tb_\eta}\
,\\ \rb & = & \left.\frac{\partial^2 \ln
\pi(\tb|{\bf y})}{\partial(\tb\tb^T)} \right|_{\tb=\tb_\eta}\
. \end{eqnarray} The likelihood is maximised with the constraint that
$g(\tb)$ has a particular value. $\rb$ is the Hessian matrix used in
the fitting routine (Press et~al. 1992) and $\bn$ is known (when $\eta$
is the average of the first two of three parameters, for example, $\bn
= (0.5, 0.5, 0)$).
This quickens the calculation enormously. To estimate the posterior
distribution for, say, J$_{23}$, it is only necessary to choose a series
of values and, at each point, find the best fit consistent with that
value. The Hessian matrix, which is returned by many fitting
routines, can then be used --- following the formulae above --- to
calculate an approximation to the integral, giving a value
proportional to the probability at that point. Once this has been
repeated for a range of different values of J$_{23}$\ the resulting
probability distribution can be normalised to give an integral of one.
Note that this procedure is only suitable when $g(\tb)$ is a linear
function of $\tb$ --- Leonard, Hsu \& Tsui (1989) give the expressions needed for
more complex parameters.
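The procedure can be illustrated with a toy model. In the sketch below
(Python; the names are ours) the negative log--posterior is an exactly
quadratic, i.e. Gaussian, function of two parameters, standing in for
the cloud--population likelihood, and the parameter of interest is
their mean; for each trial value of $\eta$ the posterior is maximised
subject to $g(\tb)=\eta$ and the Hessian--based correction above is
applied. In this Gaussian case the result reproduces the exact
marginal distribution.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy negative log-posterior: a correlated Gaussian in two parameters
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
R = np.linalg.inv(cov)              # Hessian of -log posterior, as from a fit routine
theta_hat = np.zeros(2)

def neglogpost(theta):
    d = theta - theta_hat
    return 0.5 * d @ R @ d

b = np.array([0.5, 0.5])            # eta = g(theta) = mean of the two parameters

def profile_density(eta):
    """Approximate marginal density of eta: constrained maximum of the posterior
    times the Hessian correction of Leonard, Hsu & Tsui (1989)."""
    res = minimize(neglogpost, theta_hat,
                   constraints={"type": "eq", "fun": lambda t: b @ t - eta})
    pi_M = np.exp(-res.fun)
    return pi_M / (np.sqrt(np.linalg.det(R)) * np.sqrt(b @ np.linalg.solve(R, b)))

etas = np.linspace(-3.0, 3.0, 61)
dens = np.array([profile_density(e) for e in etas])
dens /= dens.sum() * (etas[1] - etas[0])      # normalise to unit integral
print(etas[np.argmax(dens)], dens.max())      # peak at 0, as for the exact marginal
\end{verbatim}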
\section{Results}
\label{sec:results}
A summary of the results for the different models is given in table
\ref{tab:fpars}. The models are:
\begin{description}
\item[{\bf A}] --- No Proximity Effect. The population model
described in section \ref{sec:model}, but without the proximity
effect.
\item[{\bf B}] --- Constant Background. The population model
described in section \ref{sec:model} with a constant ionising
background.
\item[{\bf C}] --- Power Law Background. The population model
described in section \ref{sec:model} with an ionising background which
varies as a power law with redshift.
\item[{\bf D}] --- Broken Power Law Background. The population model
described in section \ref{sec:model} with an ionising background whose
power law exponent changes at $z_B=3.25$.
\item[{\bf E}] --- Correction for Extinction in Damped Systems. As
{\bf D}, but with a correction for absorption in known damped
absorption systems (section \ref{sec:dampcor}).
\end{description}
In this paper we assume $q_0$ = 0.5 and $H_0$ = 100~\hbox{\rm km\thinspace s$^{-1}$}/Mpc.
\subsection{Population Distribution}
\label{sec:resmpars}
\begin{table*}
\begin{tabular}{crlrlrlrlrlrc}
Model & \multispan2{\hfill$\beta$\hfill} & \multispan2{\hfill$\gamma$\hfill} & \multispan2{\hfill $J_{z_B}$\hfill} & \multispan2{\hfill $\alpha_1$\hfill} & \multispan2{\hfill $\alpha_2$\hfill} & \multispan1{\hfill $z_B$\hfill} & -2 log--likelihood\\
{\bf A} & 1.66 & $\pm0.03$ & 2.7 & $\pm0.3$ & \multicolumn{7}{c}{No background} & 60086.2 \\
{\bf B} & 1.67 & $\pm0.03$ & 2.9 & $\pm0.3$ & $-21.0$ & $\pm0.2$ & & & & & & 60052.8 \\
{\bf C} & 1.67 & $\pm0.03$ & 3.0 & $\pm0.3$ & $-21.0$ & $\pm0.2$ & $-1$ & $\pm3$ & & & & 60052.6 \\
{\bf D} & 1.67 & $\pm0.04$ & 3.0 & $\pm0.3$ & $-20.9$ & $\pm0.3$ & 0 & $+5,-6$ & $-2$ & $\pm4$ & 3.25 & 60052.4 \\
{\bf E} & 1.67 & $\pm0.03$ & 3.0 & $\pm0.3$ & $-20.9$ & $+0.3,-0.2$ & 0 & $+5,-6$ & $-2$ & $+7,-4$ & 3.25 & 60051.4 \\
\end{tabular}
\caption{
The best--fit parameters and expected errors for the models.}
\label{tab:fpars}
\end{table*}
The maximum likelihood `best--fit' values for the parameters are given
in table \ref{tab:fpars}. The quoted errors are the differences (a
single value if the distribution is symmetric) at which the
probability falls by the factor $1/\sqrt{e}$. This is equivalent to a
`$1\sigma$ error' for parameters with normal error distributions.
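For an asymmetric posterior these limits must be located separately on
each side of the peak. A small sketch of the search follows (Python;
the function name and the toy curve are purely illustrative).
\begin{verbatim}
import numpy as np

def peak_and_limits(x, p):
    """Most likely value and the points either side of the peak where the
    tabulated probability p(x) has fallen to p_max/sqrt(e)."""
    i = int(np.argmax(p))
    target = p[i] / np.sqrt(np.e)
    lower = np.interp(target, p[:i + 1], x[:i + 1])        # rising branch
    upper = np.interp(target, p[i:][::-1], x[i:][::-1])    # falling branch
    return x[i], lower, upper

# Toy asymmetric posterior, peaking at x = 2
x = np.linspace(0.0, 10.0, 1001)
p = x**2 * np.exp(-x)
print(peak_and_limits(x, p))   # the limits are asymmetric about the peak
\end{verbatim}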
The observed evolution of the number of clouds per unit redshift is
described in the standard notation found in the literature
\begin{equation}dN/dz = A_0 (1+z)^{\gamma}\ .\end{equation} The
variable used in the maximum likelihood fits here, $\gamma^\prime$,
excludes variations expected from purely cosmological variations and
is related to $\gamma$ by: \begin{equation} \gamma = \left\{
\begin{array}{ll}\gamma^\prime + 1 & \mbox{ if $q_0 = 0$} \\
\gamma^\prime + \frac{1}{2} & \mbox{ if $q_0 = 0.5$ \ .}\end{array}
\right.\end{equation}
Figure \ref{fig:var_d}\ shows the variation in population parameters
for model {\bf D} as the completeness limits are increased in steps of
$\Delta\hbox{$\log( N )$}=0.1$. The number of clouds decreases from 844 to 425
(when the completeness levels have been increased by
$\Delta\hbox{$\log( N )$}=0.5$).
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{beta.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{gamma.ps}
}
\epsfverbosetrue
\caption{The expected probability distribution of the model parameters $\beta$ and $\gamma$ (heavy line) for model {\bf D}. The dashed histogram and Gaussian (thin line) show how the measured value varies for different sets of data. The dash-dot line shows the uncertainty in the parameter because the data are limited. These are combined to give the final distribution (bold). See section \ref{sec:erress} for more details.}
\label{fig:beta_gamma_d}
\end{figure*}
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{nu1_b.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{nu1_d.ps}
}
\epsfverbosetrue
\caption{The expected probability distribution of the log background flux at $z=3.25$ (heavy line) for models {\bf B} (left) and {\bf D}. The uncertainty from the small number of lines near the quasar (line with points) is significantly larger than that from uncertainties in column densities or quasar properties (thin curve). See section \ref{sec:erress} for a full description of the plot.}
\label{fig:nu1_d}
\end{figure*}
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{nu2_d.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{nu3_d.ps}
}
\epsfverbosetrue
\caption{The expected probability distribution of the model parameters $\alpha_1$ and $\alpha_2$ (heavy line) for model {\bf D}. See section \ref{sec:erress} for a full description of the plot.}
\label{fig:alphas_d}
\end{figure*}
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{beta_all.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{gamma_all.ps}
}
\epsfverbosetrue
\caption{The expected probability distribution of the population parameters for model {\bf D}. The top curve is for all data, each lower curve is for data remaining when the column density completeness limits are progressively increased by $\Delta\hbox{$\log( N )$}=0.1$.}
\label{fig:var_d}
\end{figure*}
\subsection{Ionising Background}
\label{sec:resbackg}
Values of the ionising flux parameters are shown in table
\ref{tab:fpars}. The expected probability distributions for models
{\bf B} and {\bf D} are shown in figures \ref{fig:nu1_d} and
\ref{fig:alphas_d}. The background flux relation is described in
section \ref{sec:popden}.
The variables used to describe the variation of the flux with redshift
are strongly correlated. To illustrate the constraints more clearly
the marginalised posterior distribution (section \ref{sec:postprob})
of J$_{23}$\ was calculated at a series of redshifts. These are shown
(after convolution with the combination of Gaussians appropriate for
the uncertainties in the parameters from observational errors) for
model {\bf D} in figure \ref{fig:flux_both}. The distribution at each
redshift is calculated independently. This gives a conservative
representation since the marginalisation procedure assumes that
parameters can take all possible values consistent with the background
at that redshift (the probability that the flux can be low at a
certain redshift, for example, includes the possibility that it is
higher at other redshifts). Figure \ref{fig:flux_both} also compares
the results from the full data set (solid lines and smaller boxes)
with those from the data set with column density completeness limits
raised by $\Delta\log(N)=0.5$ (the same data as the final curves in
figure \ref{fig:var_d}).
Table~\ref{tab:modeld} gives the most likely flux (at probability
$p_m$), an estimate of the `1$\sigma$ error' (where the probability
drops to $p_m/\sqrt{e}$), the median flux, the upper and lower
quartiles, and the 5\% and 95\% limits for model {\bf D} at the
redshifts shown in figure~\ref{fig:flux_both}. It is difficult to
assess the uncertainty in these values. In general the central
measurements are more reliable than the extreme limits. The latter
are more uncertain for two reasons. First, the distribution of
unlikely models is more likely to be affected by assumptions in
section~\ref{sec:postprob} on the normal distribution of secondary
parameters. Second, the tails of the probability distribution are
very flat, making the flux value sensitive to numerical noise.
Extreme limits, therefore, should only be taken as a measure of the
relevant flux magnitude. Most likely and median values are given to
the nearest integer to help others plot our results --- the actual
accuracy is probably lower.
\begin{table}
\begin{tabular}{crrrrrr}
&$z=2$&$z=2.5$&$z=3$&$z=3.5$&$z=4$&$z=4.5$\\
$p_m/\sqrt{e}$ (lower) & 30 & 50 & 60 & 60 & 40 & 30 \\
$p_m$ & 137 & 129 & 118 & 103 & 80 & 63 \\
$p_m/\sqrt{e}$ (upper) &1000 & 400 & 220 & 180 & 160 & 170 \\
5\% & 10 & 30 & 50 & 40 & 30 & 20 \\
25\% & 70 & 80 & 80 & 70 & 60 & 40 \\
50\% & 232 & 172 & 124 & 108 & 87 & 75 \\
75\% &1000 & 400 & 200 & 160 & 100 & 200 \\
95\% &30000&3000 & 400 & 300 & 400 & 600 \\
\end{tabular}
\caption{
The fluxes (J$_{23}$) corresponding to various posterior probabilities for model {\bf D}. See the text for details on the expected errors in these values.}
\label{tab:modeld}
\end{table}
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{flux_bothdy2.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{box_bothy2.ps}
}
\epsfverbosetrue
\caption{The expected probability distribution of the log background flux for model {\bf D}, comparing the results from the full data set with those obtained when the column density completeness limit is raised by $\Delta\log(N)=0.5$ (dashed line, left; larger boxes, right). The box plots show median, quartiles, and 95\% limits.}
\label{fig:flux_both}
\end{figure*}
\section{Discussion}
\label{sec:discuss}
\subsection{Population Parameters}
\label{sec:dispop}
Parameter values for the different models are given in table
\ref{tab:fpars}. They are generally consistent with other estimates
(Lu, Wolfe \& Turnshek 1991; Rauch et~al. 1993). Including the proximity effect
increases $\gamma$ by $\sim0.2$. Although not statistically
significant, the change is in the sense expected, since local
depletions at the higher redshift end of each data set are removed.
Figure \ref{fig:var_d}\ shows the change in population parameters as
the completeness limits for the observations are increased. The most
likely values (curve peaks) of both $\beta$ and $\gamma$ increase as
weaker lines are excluded, although $\gamma$ decreases again for the
last sample.
The value of $\beta$ found here ($\sim 1.7$) is significantly
different from that found by Press \& Rybicki (1993) ($\beta \sim 1.4$) using a
different technique (which is insensitive to Malmquist bias and line
blending). The value of $\beta$ moves still further away as the
column density completeness limits are increased. This is not
consistent with Malmquist bias, which would give a smaller change in
$\beta$ (section~\ref{sec:malm}), but could be a result of either line
blending or a population in which $\beta$ increases with column
density. The latter explanation is also consistent with
Cristiani et~al. (1995) who found a break in the column density distribution
with $\beta$ = 1.10 for log($N$)$ < 14.0$, and $\beta$ = 1.80 above
this value. Later work (Giallongo et~al. 1996, see section~\ref{sec:prevhi})
confirmed this.
Recent work by Hu et~al. (1995), however, using data with better
signal--to--noise and resolution, finds that the distribution of
column densities is described by a single power law ($\beta\sim 1.46$)
until $\hbox{$\log( N )$}\sim 12.3$, when line blending in their sample becomes
significant. It may be that their sample is not
sufficiently large (66 lines with $\log(N)>14.5$, compared with 192
here) to detect a steeper distribution of high column density lines.
The change in $\gamma$ as completeness limits are raised may reflect
the decrease in line blending at higher column densities. This
suggests that the value here is an over--estimate, although the shift
is within the 95\% confidence interval. No estimate is significantly
different from the value of 2.46 found by Press \& Rybicki (1993) (again, using
a method less susceptible to blending problems).
\subsection{The Proximity Effect}
\subsubsection{Is the Proximity Effect Real?}
\label{sec:disevid}
The likelihood ratio statistic (equivalent to the `F test'), comparing
model {\bf A} with any other, indicates that the null hypothesis (that
the proximity effect, as described by the model here, should be
disregarded) can be rejected with a confidence exceeding 99.9\%. Note
that this confirmation is based on the likelihood values in
table~\ref{tab:fpars}. This test is much more powerful than the K--S
test (section~\ref{sec:finalq}) which was only used to see whether the
models were sufficiently good for the likelihood ratio test to be
used.
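The numbers behind this statement follow directly from
table~\ref{tab:fpars}: the difference between the $-2$ log--likelihood
values of models {\bf A} and {\bf B} is 33.4 for one additional free
parameter, which, referred to a $\chi^2$ distribution with one degree
of freedom (the usual asymptotic reference for the likelihood ratio),
corresponds to a probability of order $10^{-8}$. A two--line check
(Python):
\begin{verbatim}
from scipy import stats

delta = 60086.2 - 60052.8                  # -2 ln L (model A) minus -2 ln L (model B)
print(delta, stats.chi2.sf(delta, df=1))   # 33.4, ~7e-9: model A firmly rejected
\end{verbatim}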
To reiterate: if model {\bf A} and model {\bf B} are taken as
competing descriptions of the population of Lyman--$\alpha$\ clouds, then the
likelihood ratio test, which allows for the extra degree of freedom
introduced, strongly favours a description which includes the
proximity effect. The model without the proximity effect is firmly
rejected. This does not imply that the interpretation of the effect
(ie.\ additional ionization by background radiation) is correct, but
it does indicate that the proximity effect, in the restricted,
statistical sense above, is `real' (cf. R\"{o}ser 1995).
If the assumptions behind this analysis are correct, in particular
that the proximity effect is due to the additional ionising flux from
the quasar, then the average value of the background is
100\elim{50}{30}~J$_{23}$\ (model {\bf B}).
If a more flexible model for the background (two power laws) is used
the flux is consistent with a value of 120\elim{110}{50}~J$_{23}$\
(model {\bf D} at $z=3.25$).
\subsubsection{Systematic Errors\label{sec:syserr}}
Five sources of systematic error are discussed here: Malmquist bias
and line blending; reddening by damped absorption systems; increased
clustering of clouds near quasars; the effect of gravitational
lensing.
The constraints on the background given here may be affected by
Malmquist bias and line blending (sections \ref{sec:malm} and
\ref{sec:dispop}). The effects of line blending will be discussed
further in section~\ref{sec:gerry}, where a comparison with a
different procedure suggests that it may cause us to over--estimate
the flux (by perhaps $0.1$ dex). Malmquist bias is more likely to
affect parameters sensitive to absolute column densities than those
which rely only on relative changes in the observed population. So
while this may have an effect on $\beta$, it should have much less
influence on the inferred background value.
Attenuation by intervening damped absorption systems will lower the
apparent quasar flux and so give an estimate for the background which
is too low. This is corrected in model {\bf E}, which includes
adjustments for the known damped systems (section \ref{sec:dampcor},
table~\ref{tab:damp}). The change in the inferred background flux is
insignificant (figure \ref{fig:evoln}, table~\ref{tab:fpars}),
implying that the magnitude of the bias is less than 0.1 dex.
If quasars lie in regions of increased absorption line clustering
(Loeb \& Eisenstein 1995; Yurchenko, Lanzetta \& Webb 1995) then the background flux may be
overestimated by up to 0.5, or even 1, dex.
Gravitational lensing may change the apparent brightness of a quasar
--- in general the change can make the quasar appear either brighter
or fainter. Absorption line observations are made towards the
brightest quasars known (to get good quality spectra). Since there
are more faint quasars than bright ones this will preferentially
select objects which have been brightened by lensing (see the comments
on Malmquist bias in section~\ref{sec:malm}). An artificially high
estimate of the quasar flux will cause us to over--estimate the
background.
Unfortunately, models which assess the magnitude of the increase in
quasar brightness are very sensitive to the model population of
lensing objects. From Pei (1995) an upper limit consistent with
observations is an increase in flux of about 0.5 magnitudes,
corresponding to a background estimate which is too high by a factor
of 1.6 (0.2 dex). The probable effect, however, could be much smaller
(Blandford \& Narayan 1992).
If bright quasars are more likely to be lensed we can make a
rudimentary measurement of the effect by splitting the data into two
separate samples. When fitted with a constant background (model {\bf
B}) the background estimate for the five brightest objects is indeed higher than
that for the remaining six, by 0.1 dex. The errors, however, are
larger (0.3 dex), making it impossible to draw any useful conclusions.
The effects of Malmquist bias, line blending and damped absorption
systems are unlikely to change the results here significantly. Cloud
clustering and gravitational lensing could be more important --- in
each case the background would be over--estimated. The magnitude of
these last two biases is not certain, but cloud clustering seems more
likely to be significant.
\subsubsection{Is there any Evidence for Evolution?}
\label{sec:noevoln}
More complex models allow the background flux to vary with redshift.
If the flux does evolve then these models should fit the data better.
However, there is no significant change in the fit when comparing the
likelihood of models {\bf C} to {\bf E} with that of {\bf B}. Nor are
$\alpha_1$ or $\alpha_2$ significantly different from zero. So there
is no significant evidence for a background which changes with
redshift.
The asymmetries in the wings of the posterior distributions of
$\alpha_1$ or $\alpha_2$ for model {\bf D} (figure \ref{fig:alphas_d})
are a result of the weak constraints on upper limits (see next
section). The box plots in figure \ref{fig:flux_both} illustrate the
range of evolutions that are possible.
\subsubsection{Upper and Lower Limits\label{sec:lims}}
\begin{figure*}
\hbox{
\epsfxsize=8.5cm
\epsfbox{delta_z.ps}
\hfill
\epsfxsize=8.5cm
\epsfbox{delta_d.ps}
}
\epsfverbosetrue
\caption{On the left, absorber redshifts are plotted against $\Delta_F$. On the right $\Delta_F$ is plotted against the distance between cloud and quasar. Note that the correction for the quasar's flux, and hence the upper limit to the estimate of the background, is significant for only a small fraction of the clouds.}
\label{fig:prox}
\end{figure*}
While there is little evidence here for evolution of the background,
the upper limits to the background flux diverge more strongly than the
lower limits at the lowest and highest redshifts. Also, the posterior
probability of the background is extended more towards higher values.
The background was measured by comparing its effect with that of the
quasar. If the background were larger the quasar would have less
effect and the clouds with $\Delta_F < 1$ would not need as large a
correction to the observed column density for them to agree with the
population as a whole. If the background was less strong then the
quasars would have a stronger influence and more clouds would be
affected.
The upper limit to the flux depends on clouds influenced by the
quasar. Figure \ref{fig:prox} shows how $\Delta_F$ changes with
redshift and proximity to the background quasar. From this figure it
is clear that the upper limit is dominated by only a few clouds.
However, the lower limit also depends on clouds near to, but not
influenced by, the quasar. This involves many more clouds. The lower
limit is therefore stronger, more uniform, and less sensitive to the
amount of data, than the upper limit.
Other procedures for calculating the errors in the flux have assumed
that the error is symmetrical (the only apparent exception is
Fern\'{a}ndez--Soto et~al. (1995) who unfortunately had insufficient data to normalize
the distribution). While this is acceptable for $\beta$ and $\gamma$,
whose posterior probability distributions
(figure~\ref{fig:beta_gamma_d}) can be well--approximated by Gaussian
curves, it is clearly wrong for the background
(eg. figure~\ref{fig:flux_both}), especially where there are fewer data
(at the lowest and highest redshifts).
An estimate based on the assumption that the error is normally
distributed will be biased in two ways. First, since the extended
upper bound to the background has been ignored, it will underestimate
the `best' value. Second, since the error bounds are calculated from
the curvature of the posterior distribution at its peak (ie. from the
Hessian matrix) they will not take account of the extended `tails' and
so will underestimate the true range of values. In addition, most
earlier work has calculated errors assuming that the other population
parameters are fixed at their best--fit values. This will also
under--estimate the true error limits. All these biases become more
significant as the amount of data decreases.
The first of these biases also makes the interpretation of the
box--plots (eg. figures \ref{fig:flux_both} and \ref{fig:evoln}) more
difficult. For example, the curves in the left--hand plot in
figure~\ref{fig:flux_both} and the data in table~\ref{tab:modeld} show
that the value of the flux with highest probability at $z=2$ is
$140$~J$_{23}$\ (for model {\bf D}). In contrast the box--plot on the
right shows that the median probability is almost twice as large
($230$~J$_{23}$). Neither plot is `wrong': this is the consequence of
asymmetric error distributions.
\begin{figure*}
\epsfxsize=15.cm
\epsfbox{box_bothe3.ps}
\epsfverbosetrue
\caption{The expected probability distribution of the log background
flux for models {\bf D} (left) and {\bf E} (right, including a
correction for the known damped absorption systems). The box plots
show median, quartiles, and 95\% limits. The shaded area covers the
range of backgrounds described in Fall \& Pei (1995). The lower boundary
is the expected background if all quasars are visible, the higher
fluxes are possible if an increasing fraction of the quasar population
is obscured at higher redshifts. The crosses and arrows mark the
extent of previous measurements from high resolution spectra --- see
the text for more details.}
\label{fig:evoln}
\end{figure*}
\subsection{Comparison with Previous Estimates}
\subsubsection{Earlier High--Resolution Work}
\label{sec:prevhi}
Fern\'{a}ndez--Soto et~al. (1995) fitted high signal--to--noise data towards three
quasars. For $2 < z < 2.7$\ they estimate an ionizing background
intensity of 32~J$_{23}$, with an absolute lower limit (95\% confidence)
of 16~J$_{23}$\ (figure~\ref{fig:evoln}, the leftmost cross). They were
unable to put any upper limit on their results.
Cristiani et~al. (1995) determined a value of 50~J$_{23}$\ using a sample of five
quasars with a lower column density cut--off of log($N$) = 13.3. This
sample was recently extended (Giallongo 1995). They find that the
ionizing background is roughly constant over the range $1.7 < z <
4.0$\ with a value of 50~J$_{23}$\ which they considered a possible
lower limit (figure~\ref{fig:evoln}, the middle lower limit).
While this paper was being refereed Giallongo et~al. (1996) became available,
extending the work above. Using a maximum likelihood analysis with an
unspecified procedure for calculating errors they give an estimate for
the background of $50\pm10$~J$_{23}$. They found no evidence for
evolution with redshift when using a single power law exponent.
Williger et~al. (1994) used a single object, Q1033--033, which is included in
this sample, to give an estimate of $10-30$~J$_{23}$\
(figure~\ref{fig:evoln}, the rightmost cross). The error limits are
smaller than those found here, even though they only use a subset of
this data, which suggests that they have been significantly
underestimated.
If the errors in Williger et~al. (1994) are indeed underestimates then these
measurements are consistent with the results here. However, the
best--fit values are all lower than those found here. This may be, at
least partly, because of the biases discussed in
section~\ref{sec:lims}.
Williger et~al. (1994) used a more direct method than usual to estimate the
background. This gives a useful constraint on the effect of
line blending in the procedures used, and is explored in more detail
below.
\subsubsection{Q1033--033 and Line Blending\label{sec:gerry}}
The measured value of the background, 80\elim{80}{40}~J$_{23}$\ (model
{\bf D} at $z=4$), is larger than an earlier estimate using a subset
of this data (Williger et~al. 1994, Q1033--033, $10-30$~J$_{23}$).
As has already been argued, it is difficult to understand how a
procedure using much less data could have smaller error limits than
the results here, so it is likely that the error was an underestimate
and that the two results are consistent. However, it is interesting
to see if there is also a systematic bias in the analyses used.
The correction for galactic absorption is not very large for this
object (about 20\%). More importantly, the procedures used differ
significantly in how they are affected by blended lines. These are a
problem at the highest redshifts, where the increased Lyman--$\alpha$\ cloud
population density means that it is not always possible to resolve
individual clouds. Williger et~al. (1994) added additional lines ($\hbox{$\log( N )$} =
13.7$, $b$ = 30~\hbox{\rm km\thinspace s$^{-1}$}) to their $z$ = 4.26 spectra of Q1033--033 and
found that between 40\% and 75\% would be missed in the line list.
As the lower column density limit is raised Williger et~al. (1994) find that
the observed value of $\gamma$ also increases. The resulting stronger
redshift evolution would make the deficit of clouds near the quasar
more significant and so give a lower estimate of the background.
Although not significant at the 95\% level, there is an indication
that $\gamma$ also increases with higher column density in this
analysis (section \ref{sec:dispop}, figure \ref{fig:var_d}). While it
is possible that $\gamma$ varies with column density the same
dependence would be expected if line blending is reducing the number
of smaller clouds. To understand how line blending can affect the
estimates, we will now examine the two analyses in more detail.
Line blending makes the detection of lines less likely. Near the
quasar lines are easier to detect because the forest is more sparse.
In the analysis used in this paper the appearance of these `extra'
lines reduces the apparent effect of the quasar. Alternatively, one
can say that away from the quasar line blending lowers $\gamma$.
Both arguments describe the same process and imply that the estimated
background flux is too large.
In contrast, Williger et~al. (1994) take a line--list from a crowded region,
which has too few weak lines and correspondingly more saturated lines,
and reduce the column densities until they agree with a region closer
to the quasar. Since a few saturated lines are less sensitive to the
quasar's flux than a larger number of weaker lines, the effect of this
flux is over--estimated (and poorly determined), making the background
seem less significant and giving a final value for the background flux
which is too small. This method is therefore biased in the opposite
sense to ours and so the true value of the background probably lies
between their estimate and ours.
The comparison with Williger et~al. (1994) gives one estimate of the bias from
line blending. Another can be made by raising the completeness limits
of the data (section~\ref{sec:malm}). This should decrease the number
of weak, blended lines, but also excludes approximately half the data.
In figure \ref{fig:flux_both} the flux estimates from the full data
set are shown together with those from one in which the limits have
been raised by $\Delta\log(N)=0.5$. There is little change in the
lowest reasonable flux, an increase in the upper limits, and an
increase in the `best--fit' values. The flux for $z<3$ is almost
unconstrained by the restricted sample (section~\ref{sec:lims}
explains the asymmetry).
An increase of 0.5 in $\log(N)$ is a substantial change in the
completeness limits. That the lower limits remain constant (to within
$\sim 0.1$ dex) suggests that line blending is not causing the flux to
be significantly over--estimated. The increase in the upper limits is
expected when the number of clouds in the sample decreases
(section~\ref{sec:lims}).
In summary, the total difference between our measurement and that in
Williger et~al. (1994) is 0.7 dex which can be taken as an upper limit on the
effect of line blending. However, a more typical value, from the
constancy of the lower limits when completeness limits are raised, is
probably $\sim0.1$ dex.
\subsubsection{Results from Lower Resolution Spectra}
Bechtold (1994) analysed lower resolution spectra towards 34 quasars
using equivalent widths rather than individual column density
measurements. She derived a background flux of 300~J$_{23}$\
($1.6<z<4.1$), decreasing to 100~J$_{23}$\ when a uniform correction was
applied to correct for non--systemic quasar redshifts. With
low--resolution data a value of $\beta$ is used to change from a
distribution of equivalent widths to column densities. If $\beta$ is
decreased from 1.7 to a value closer to that found for narrower lines
(see section~\ref{sec:dispop}) then the inferred background estimate
could decrease further.
The evolution was not well--constrained ($-7<\alpha<4$). No
distinction was made between the lower and upper constraints on the
flux estimate, and it is likely that the wide range of values reflects
the lack of strong upper constraints which we see in our analysis.
It is not clear to what extent this analysis is affected by line
blending. Certainly the comments above --- that relatively more
clouds will be detected near the quasar --- also apply.
\subsubsection{Lower Redshift Measurements}
The background intensity presented in this paper is much larger than
the 8~J$_{23}$\ upper limit at $z=0$ found by
Vogel et~al. (1995). Kulkarni \& Fall (1993) obtain an even lower value of
0.6\elim{1.2}{0.4}~J$_{23}$\ at $z=0.5$ by analysing the proximity
effect in HST observations. However, even an unevolving flux will
decrease by a factor of $\sim 50$ between $z=2$ and $0$, so such a
decline is not inconsistent with the results given here.
\subsection{What is the Source of the Background?}
\label{sec:source}
\subsubsection{Quasars}
Quasars are likely to provide a significant, if not dominant,
contribution to the extragalactic background. An estimate of the
ionizing background can be calculated from models of the quasar
population. Figure \ref{fig:evoln} shows the constraints from models
{\bf D} and {\bf E} and compares them with the expected evolution of
the background calculated by Fall \& Pei (1995). The background can take
a range of values (the shaded region), with the lower boundary
indicating the expected trend for a dust--free universe and larger
values taking into account those quasars that may be hidden from our
view, but which still contribute to the intergalactic ionizing flux.
The hypothesis that the flux is only from visible quasars (the
unobscured model in Fall \& Pei 1995) is formally rejected at over the
95\% significance level since the predicted evolution is outside the
95\% bar in the box plots at higher redshift.
Although our background estimate excludes a simple quasar--dominated
model based on the observed number of such objects, the analysis here
may give a background flux which is biased (too large) from a
combination of line blending (section~\ref{sec:gerry}) and clustering
around the background quasars. From the comparison with
Williger et~al. (1994), above, there is an upper limit on the correction for
line blending, at the higher redshifts, of 0.7 dex. However, an
analysis of the data when column density completeness limits were
increased by $\Delta\log(N)=0.5$ suggests that a change in the lower
limits here of $\sim 0.1$ dex is more likely. A further change of up
to between 0.5 and 1 dex is possible if quasars lie in regions of
increased clustering (section~\ref{sec:syserr}). These two effects
imply that at the highest redshifts the flux measured here could
reasonably overestimate the real value by $\sim 0.5$ dex. This could
make the measurements marginally consistent with the expected flux
from the observed population of quasars.
There is also some uncertainty in the expected background from quasars
since observations could be incomplete even at the better understood
lower redshifts (e.g.~Goldschmidt et~al. 1992) and while absorption in damped
systems is understood in theory (Fall \& Pei 1993) its effect is
uncertain (particularly because the distribution of high column
density systems is poorly constrained).
The highest flux model (largest population of obscured quasars) from
Fall \& Pei (1995) is consistent with the measurements here (assuming that
the objects used in this paper are not significantly obscured).
\subsubsection{Stars}
The background appears to be stronger than the integrated flux from
the known quasar population. Can star formation at high redshifts
explain the discrepancy?
Recent results from observations of low redshift starbursts
(Leitherer et~al. 1995) suggest that very few ionizing photons ($\leq 3\,$\%)
escape from these systems. If high redshift starbursts are similar in
their properties, then the presence of cool gas in these objects would
similarly limit their contribution to the ionizing background.
However, Madau \& Shull (1996) estimate that if star formation occurs in
Lyman--$\alpha$\ clouds, and a significant fraction of the ionizing photons
($\sim 25\,\%$) escape, then these photons may contribute a
substantial fraction of the ionizing background photons in their
immediate vicinity. As an example, at $z \sim 3$\ they estimate that
$J_\nu \leq 50~\hbox{J$_{23}$}$\ if star formation sets in at
$z\sim3.2$. This flux would dominate the lowest (no correction for
obscuration) quasar background shown in figure \ref{fig:evoln} and
could be consistent with the intensity we estimate for the background
at this redshift, given the possible systematic biases discussed above
and in section~\ref{sec:syserr}.
\section{Conclusions}
\label{sec:conc}
A model has been fitted to the population of Lyman--$\alpha$\ clouds. The model
includes the relative effect of the ionising flux from the background
and nearby quasars (section~\ref{sec:model}).
The derived model parameters for the population of absorbers are
generally consistent with earlier estimates. There is some evidence
that $\beta$, the column density power law population exponent,
increases with column density, but this could also be due to line blending
(section~\ref{sec:dispop}).
The ionising background is estimated to be 100\elim{50}{30}~J$_{23}$\
(model {\bf B}, section~\ref{sec:resbackg}) over the range of
redshifts ($2<z<4.5$) covered by the data. No strong evidence for
evolution in the ionizing background is seen over this redshift range.
In particular, there is no significant evidence for a decline for
$z>3$ (section~\ref{sec:noevoln}). Previous results may have been
biased (too low, with optimistic error limits ---
section~\ref{sec:lims}).
Constraints on the evolution of the background are shown in figure
\ref{fig:evoln}. The estimates are not consistent with the background
flux expected from the observed population of quasars
(section~\ref{sec:source}). However, two effects are likely to be
important. First, both line blending and increased clustering of
clouds near quasars lead to the measured background being
overestimated. Second, a significant fraction of the quasar
population at high redshifts may be obscured. Since their
contribution to the background would then be underestimated this would
imply that current models of the ionizing background are too low.
Both of these would bring the expected and measured fluxes into closer
agreement. It is also possible that gravitational lensing makes the
measurement here an overestimate of the true background.
The dominant source of errors in our work is the limited number of
lines near the background quasar (e.g. figures \ref{fig:nu1_d}\ and
\ref{fig:prox}). Systematic errors are smaller and become important
only if it is necessary to make standard (unobscured quasar) models
for the background consistent with the lower limits presented here.
Further data will therefore make the estimate here more accurate,
although observational data are limited by confusion of the most
numerous lower column density systems ($\hbox{$\log( N )$}< 13.0$) so it will
remain difficult to remove the bias from line blending. An
improvement in the errors for the highest redshift data points, or a
determination of the shape of the ionizing spectrum (e.g. from
He~II/H~I estimates in Lyman--$\alpha$\ clouds) would help in discriminating
between current competing models for the ionizing background.
Finally, a determination of the background strength in the redshift
range $0.5 < z < 2.0$ is still needed.
\section{Acknowledgements}
We would like to thank Yichuan Pei for stimulating discussions and for
making data available to us. Tom Leonard (Dept. of Statistics,
Edinburgh) gave useful comments and guidance on the statistics used in
this paper. We would also like to thank an anonymous referee for
helpful and constructive comments.
\section{Introduction}
\setcounter{footnote}{0}
A still challenging question in strong interaction physics is
the derivation of the
low-energy properties of the spectrum from ``QCD first principles",
due to our limited present skill with non-perturbative physics.
At very low energy, where the ordinary
perturbation theory cannot be applied,
Chiral Perturbation Theory~\cite{GaLeut} and Extended Nambu--Jona-Lasinio
models~\cite{NJL,ENJL}~\footnote{For a recent complete review see
\cite{Miransky}.} give a consistent framework
in terms of a set of parameters that have to be fixed from the data;
yet the bridge between those effective parameters and the basic
QCD degrees of freedom remains largely unsolved. Although lattice QCD
simulations
recently made definite progress~\cite{lattcsb} in that direction,
the consistent treatment of dynamical
unquenched quarks and the chiral symmetry remains a serious problem. \\
In this paper,
we investigate a new, semi-analytical method, to
explore {\it how far}
the basic QCD Lagrangian can provide, in a self-consistent
way,
non-zero
dynamical quark
masses, quark condensates, and pion decay constant,
in the limit of vanishing Lagrangian (current) quark masses. Such a
qualitative picture
of chiral symmetry breakdown (CSB) can be made more quantitative
by applying
a new ``variational mass" approach,
recently developed within the framework of the
anharmonic oscillator \cite{bgn}, and in
the Gross--Neveu (GN) model \cite{gn1,gn2}.
The starting point is very similar to
the ideas developed a long time ago and implemented
in various different forms in refs.\cite{pms}, \cite{delta}.
There, it was advocated
that the convergence of conventional perturbation
theory may be improved by a variational procedure in which
the separation of the
action into ``free" and ``interaction" parts
is made to depend on some set of auxiliary parameters.
The results obtained by expanding to finite order
in this redefined perturbation
series are optimal in regions of the space of auxiliary
parameters where they are least sensitive to these parameters.
Recently there appeared
strong evidence that this optimized perturbation
theory may indeed lead to a rigorously convergent
series of approximations even in strong coupling cases
\cite{JONE}. \\
An essential novelty,
however, in \cite{bgn}--\cite{gn2} and the present paper,
is that our construction combines in a specific manner
the renormalization group (RG) invariance
with the properties of an analytically continued, arbitrary mass parameter
$m$.
This, at least in a certain approximation to be motivated,
allows us to reach
{\it infinite} order of
the variational-perturbative expansion, which is therefore presumably optimal,
provided it converges. Our main results are a set of
non-perturbative
ans\"atze for the relevant CSB quantities,
as functions of the variational mass $m$,
which can be
studied for extrema and optimized.
Quite essentially, our construction also provides a simple and
consistent treatment of the renormalization, reconciling
the variational approach with the inherent infinities
of quantum field theory and the RG properties.
Before proceeding, let us note that there exists
a quite radically different attitude towards CSB in QCD, advocating
that the responsible mechanism is most probably the non-perturbative
effects due to the {\em instanton} vacuum~\cite{Callan},
or even more directly related to confinement~\cite{Cornwall}.
However,
even if the instanton picture of CSB is on general grounds well motivated,
and many fruitful ideas have been developed in that
context\footnote{See e.g
ref.~\cite{Shuryak} for a review and original references.},
as far as we are aware there is at present no
sufficiently rigorous or compelling evidence for it, derived from ``first
principle".
In any event, it is certainly of interest to
investigate quantitatively
the ``non-instantonic" contribution to CSB, and we hope
that our method is a more consistent step in that direction.
\section{Dynamical quark masses}
In what follows we only consider the $SU(n_f)_L \times SU(n_f)_R$
part of the chiral symmetry, realized by the QCD Lagrangian
in the absence of quark mass terms,
and for
$n_f =2$ or $n_f = 3$ as physically relevant applications.
Following the treatment of the anharmonic oscillator~\cite{bgn} and its
generalization to the GN model~\cite{gn1,gn2},
let us consider the following modification of the usual QCD Lagrangian,
\begin{equation}
L_{QCD} \to L^{free}_{QCD}(g_0 =0, m_0=0)
-m_0 \sum^{n_f}_{i=1} \overline{q}_i q_i
+ L^{int}_{QCD}(g^2_0 \to x g^2_0)
+x \; m_0 \sum^{n_f}_{i=1} \overline{q}_i q_i\;,
\label{xdef}
\end{equation}
where $L^{int}_{QCD}$ designates the ordinary QCD interaction terms, and
$x$ is a convenient ``interpolating" expansion parameter.
This is formally equivalent
to substituting everywhere in the bare Lagrangian,
\begin{equation} m_0 \to m_0\; (1-x); ~~~~g^2_0 \to g^2_0\; x,
\label{substitution}
\end{equation}
and therefore
in any perturbative (bare) quantity as well,
calculated in terms of $m_0$ and $g^2_0$.
Since the original massless QCD
Lagrangian is recovered for $x \to 1$, $m_0$ is to be considered as
an {\it arbitrary } mass parameter
after substitution (\ref{substitution}). One expects
to optimize physical quantities with
respect to $m_0$ at different, possibly arbitrary
orders of the expansion parameter $x$,
eventually approaching a stable limit, i.e {\it flattest}
with respect to $m_0$,
at sufficiently high order in $x$. \\
However,
before accessing any physical quantity
of interest for such an optimization, the theory should be
renormalized, and there is an unavoidable mismatch
between the expansion in $x$, as
introduced above, and the ordinary perturbative expansion as dictated by the
mass and coupling counterterms.
Moreover,
it is easy to see that
at any finite order in the $x$ expansion,
one always
recovers a trivial result in the limit $ m \to 0$ (equivalently
$x\to 1$), which
is the limit
in which to identify non-zero
order parameters of CSB.
These
problems can be circumvented by
advocating a specific ansatz, which resums
the (reorganized) perturbation series in $x$ and is such that
the limit $x \to 1$
no longer gives
a trivial zero mass gap.
As was shown in detail in ref.~\cite{gn2}, the ansatz for the dynamical
mass is
most easily derived by following the steps\footnote{See
also ref.~\cite{jlk} for
a detailed derivation in the QCD context.}:\\
{\it i}) Consider first the general
solution for the running mass,
given as
\begin{equation}
m(\mu^{'}) = m(\mu )\;\; {\exp\left\{ -\int^{g(\mu^{'})}_{g(\mu )}
dg {\gamma_m(g) \over {\beta(g)}} \right\} }
\label{runmass}
\end{equation}
in terms of the effective coupling $g(\mu)$, whose RG evolution is given
as $\mu dg(\mu)/d\mu \equiv \beta(g)$, and $\gamma_m(g) \equiv
-(\mu/m)d(m(\mu))/d\mu$.
Solving (\ref{runmass}) imposing the
``fixed point" boundary condition:
\begin{equation}
M \equiv m(M),
\label{RGBC}
\end{equation}
at two-loop RG order we obtain, after some algebra
(we use the normalization
$\beta(g) = -b_0 g^3 -b_1 g^5 -\cdots$,
$\gamma_m(g) = \gamma_0 g^2 +\gamma_1 g^4
+\cdots$):
\begin{equation}
M_2 = \bar m \;\;
\displaystyle{f^{-\frac{ \gamma_0}{2b_0}}\;\; \Bigl[\frac{ 1 +\frac{b_1}{b_0}
\bar g^2 f^{-1}}{ 1+\frac{b_1}{b_0}\bar g^2} \Bigr]^{ -\frac{\gamma_1}{
2 b_1}
+\frac{\gamma_0}{2 b_0} } }\;,
\label{MRG2}
\end{equation}
where
$\bar m \equiv m(\bar\mu)$, $\bar g \equiv g(\bar\mu)$
($\bar \mu \equiv \mu \sqrt{4 \pi} e^{-\gamma_E/2}$), and
$f \equiv \bar g^2/g^2(M_2)$ satisfies
\begin{equation}
f = \displaystyle{ 1 +2b_0 \bar g^2 \ln \frac{M_2}{\bar \mu }
+\frac{b_1}{b_0} \bar g^2
\ln \Bigl[\frac{ 1 +\frac{b_1}{b_0} \bar g^2 f^{-1}}{
1 +\frac{b_1}{b_0} \bar g^2 }\;f\;\Bigr] }\; ;
\label{f2def}
\end{equation}
(note in (\ref{MRG2}) and (\ref{f2def})
the recursivity in both $f$ and $M_2$).
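(As an illustrative aside, not part of the original derivation: at one-loop order,
i.e. setting $b_1=\gamma_1=0$, the pair (\ref{MRG2})--(\ref{f2def}) collapses to
\[
M_1 = \bar m\; f^{-\gamma_0/(2b_0)}\;,\qquad
f = 1 + 2 b_0\,\bar g^2 \ln\frac{M_1}{\bar\mu}\;,
\]
which is nothing but $M_1 = \bar m\,[\,g^2(M_1)/\bar g^2\,]^{\gamma_0/(2b_0)}$ with the familiar
one-loop running $1/g^2(M_1)=1/\bar g^2+2b_0\ln(M_1/\bar\mu)$, evaluated self-consistently
at the scale $\mu'=M_1$.)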
The necessary
non-logarithmic perturbative corrections to those pure RG results
are then consistently
included as
\begin{equation}
M^P_2 \equiv M_2 \;\Bigl(1 +{2\over 3}\gamma_0 {\bar g^2\over f}
+{K \over{(4 \pi^2)^2}}{\bar g^4\over f^2}+{\cal O}(g^6)\;\Bigr)\;,
\label{Mpole}
\end{equation}
where
the complicated
two-loop coefficient $K$ was calculated
exactly in ref.~\cite{Gray}.
Equation~(\ref{Mpole}) defines the (infrared-convergent, gauge-invariant)
pole mass~\cite{Tarrach}
$M^P_2$, in terms of the
$\overline{MS}$ mass at two-loop order,
and can be
shown~\cite{jlk} to resum the leading (LL)
{\it and} next-to-leading logarithmic (NLL)
dependence in $\bar m$ to all
orders. \\
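As a purely illustrative aside (not in the original text), the coupled relations
(\ref{MRG2})--(\ref{Mpole}) are easily solved numerically by a fixed-point iteration.
The minimal Python sketch below uses placeholder values for
$b_0,b_1,\gamma_0,\gamma_1,K,\bar g^2,\bar m,\bar\mu$; the actual coefficients should be
taken from the conventions quoted in the text for a given $n_f$.
\begin{verbatim}
import math

# Placeholder RG inputs (illustrative only; substitute the actual
# b0, b1, gamma0, gamma1 and K for the chosen n_f and scheme).
b0, b1 = 0.061, 0.003          # beta(g)    = -b0 g^3 - b1 g^5 - ...
gam0, gam1 = 0.051, 0.004      # gamma_m(g) =  gam0 g^2 + gam1 g^4 + ...
K = 12.4                       # two-loop non-logarithmic coefficient
gbar2 = 3.0                    # \bar g^2 at the reference scale \bar mu
mbar, mubar = 0.5, 1.0         # \bar m and \bar mu (arbitrary units)

M, f = mbar, 1.0
for _ in range(200):           # iterate the two-loop relations for M_2 and f
    r = (1 + (b1/b0)*gbar2/f) / (1 + (b1/b0)*gbar2)
    f = 1 + 2*b0*gbar2*math.log(M/mubar) + (b1/b0)*gbar2*math.log(r*f)
    M = mbar * f**(-gam0/(2*b0)) * r**(gam0/(2*b0) - gam1/(2*b1))

# add the non-logarithmic corrections to obtain the pole mass
Mpole = M*(1 + (2/3)*gam0*gbar2/f + K/(4*math.pi**2)**2*(gbar2/f)**2)
print(M, f, Mpole)
\end{verbatim}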
{\it ii})
Perform in expressions (\ref{MRG2}), (\ref{f2def}), (\ref{Mpole})
the substitution
$
\bar m \to \bar m v $,
and integrate the resulting expression, denoted by $M^P_2(v)$,
according to
\begin{equation}
\frac{1}{2i\pi} \;\oint \frac{dv}{v}\;e^v M^P_2(v)\;,
\label{contgen}
\end{equation}
where the contour is around the negative real $v$ axis. \\
In \cite{gn2} it was shown that the previous steps correspond
(up to a specific renormalization scheme (RS) change,
allowed on general grounds from RG properties) to a resummation
of the $x$ series as generated from the substitution
(\ref{substitution})\footnote{$v$ is related to the original
expansion parameter $x$ as $x = 1-v/q$, $q$ being the order of the expansion.}.
Moreover this is in fact the only way of rendering compatible the
above $x$ expansion and the ordinary perturbative one, thus obtaining
finite results.
Actually the resummation coincides with the exact result in the
large-$N$ limit of the GN model. Now, since the
summation can be formally
extended to arbitrary RG orders~\cite{gn2}, including consistently as many
arbitrary perturbative correction terms as known in a given theory,
in the QCD case we make the assumption that it
gives an adequate ``trial ansatz",
to be subsequently optimized in a way
to be specified next.
After appropriate rescaling of the basic parameters, $\bar g$
and $\bar m$, by introducing the RG-invariant basic scale $\Lambda_{\overline{MS}}$~
\cite{Lambda}
(at two-loop order),
and the convenient scale-invariant dimensionless ``mass" parameter
\begin{equation}
m''\equiv \displaystyle{(\frac{\bar m}{ \Lambda_{\overline{MS}}}) \;
2^{C}\;[2b_0 \bar g^2]^{-\gamma_0/(2b_0)}
\;\left[1+\frac{b_1}{b_0}\bar g^2\right]^{
\gamma_0/(2 b_0)-\gamma_1/(2 b_1)}}
\; ,
\label{msec2def}
\end{equation}
we end up with the following dynamical mass ansatz:
\begin{equation}
{ M^P_2 (m^{''})\over \Lambda_{\overline{MS}}}
= {2^{-C} \over{2 i \pi}} \oint dy {e^{\;y/m^{''}}
\over{F^A(y) [C + F(y)]^B}} {\left(1 +{{\cal M}_{1}\over{F(y)}}
+{{\cal M}_{2}\over{F^2(y)}} \right)},
\label{contour7}
\end{equation}
where $y \equiv m'' v$, and
$F$ is defined as
\begin{equation}
F(y) \equiv \ln [y] -A \; \ln [F(y)] -(B-C)\; \ln [C +F(y)],
\label{Fdef}
\end{equation}
with $A =\gamma_1/(2 b_1)$, $B =\gamma_0/(2 b_0)-\gamma_1/(2 b_1)$,
$C = b_1/(2b^2_0)$, in terms of the RG coefficients
\cite{betagamma}.
Finally the perturbative corrections in (\ref{contour7})
are simply given as
${\cal M}_{1} =(2/3)(\gamma_0/2b_0)$ and ${\cal M}_{2} = K/(2b_0)^2$. \\
Observe in
fact that, were we in a simplified QCD world, where there would be
{\em no} non-logarithmic perturbative contributions (i.e. such that
${\cal M}_{1} = {\cal M}_{2} = \cdots = 0$ in (\ref{contour7})),
the latter ansatz would
then resum exactly the $x$ variational expansion.
In that case, (\ref{contour7}) would have a very simple
behaviour near the origin $m'' \to 0$. Indeed, it is easy to see
that (\ref{Fdef}) admits an expansion
$
F(y) \simeq C^{(B-C)/A}\;y^{1/A}$ for $y \to 0$,
which immediately implies that (\ref{contour7}) would
give a simple pole at $y \to 0$,
with a residue giving $M_2 = (2C)^{-C}\;\Lambda_{\overline{MS}} $.
Moreover one can always
choose
an appropriate renormalization scheme in which $
b_2$ and $\gamma_2$ are set to zero, as well as all
higher order coefficients, so that there are no other corrections
to the simple above relation.
Now, in the realistic world, ${\cal M}_1$, ${\cal M}_2$, etc can
presumably not be neglected.
We can nevertheless expand
(\ref{contour7}) near $m'' \to 0$ for any known
non-zero ${\cal M}_{i}$,
using
\begin{equation}
\label{hankel}
\frac{1}{2i \pi} \oint dy e^{y/m^{''}} y^\alpha =
\frac{(m^{''})^{1+\alpha}}{\Gamma[-\alpha]}\; ,
\end{equation}
and the resulting Laurent expansion in $(m'')^{1/A}$ may be
analysed for extrema and optimized at different, in principle
arbitrary $(m'')^{1/A}$ orders. An important point, however, is
that the
perturbative corrections do depend on the RS choice, as is well known.
Since the pure RG behaviour in (\ref{contour7})
already gives the order of magnitude, $M \simeq {\rm const} \times \Lambda_{\overline{MS}}$,
we can hope that
a perturbative but optimized treatment of the remaining corrections
is justified.
In other words we shall perform an
``optimized perturbation" with respect to $m''$
around the non-trivial fixed point
of the RG solution. \\
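(A short remark added for clarity, not in the original text: formula (\ref{hankel}) is just
Hankel's contour representation of the reciprocal Gamma function,
\[
\frac{1}{2i\pi}\,\oint dt\; e^{t}\, t^{\alpha} \;=\; \frac{1}{\Gamma[-\alpha]}\;,
\]
with the same contour around the negative real axis; the change of variable $y=m''\,t$ then
gives (\ref{hankel}) directly. In particular, non-negative integer powers $\alpha$ give no
contribution, since $1/\Gamma[-\alpha]=0$ there.)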
To take into account this RS freedom, we first introduce
in (\ref{contour7}) an arbitrary scale parameter $a$,
from $\bar \mu \to a\; \bar \mu$.
Accordingly the
perturbative coefficients ${\cal M}_{i}$ in (\ref{contour7}) take a
logarithmic dependence in $a$, simply
fixed order by order from (\ref{MRG2})--(\ref{Mpole})
and the requirement that (\ref{contour7})
differs from the original $\overline{MS}$ expression only by higher order terms.
The $a$-dependence will eventually exhibit a non-trivial structure of
extrema and we shall also
optimize the result with respect to
$a$\footnote{This procedure indeed gave very good results~\cite{gn2}
in the GN model, where in particular for low values of $N$ the
optimal values found, $a_{opt}$, are quite different from 1.}.
Actually
there are other possible changes of renormalization prescriptions
affecting expression (\ref{contour7}) in addition to the $a$ dependence,
which may be
taken into account as well. More precisely, the second coefficient
of $\gamma_m(g)$, $\gamma_1$, does depend on the RS choice,
even in MS schemes~\cite{Collins}.
As it turns out, this additional RS freedom is
very welcome in our case: in fact,
the previous picture is
invalidated, due to the occurrence of extra branch cuts
in the $y$ plane at $Re[y_{cut}] > 0$,
as given by the zeros of $dy/dF$ from
(\ref{Fdef}) (in addition
to the original cut on the negative real $y$ axis).
This prevents using
the expansion near the origin, eq.~(\ref{hankel}),
since it would lead to
ambiguities of ${\cal O}(\exp(Re[y]/m''))$
for $m'' \to 0$\footnote{
The origin of those singularities is rather
similar to the ambiguities related
to
renormalons~\cite{renormalons}.
An essential difference, however, is that the present
singularities occur in the analytic continuation of a mass parameter
rather than a coupling constant, and
it is possible to move those singularities away by an
appropriate RS change,
as we discuss next. See ref.~\cite{jlk} for an extended discussion.}.
The specific contour around the negative real axis
was suggested by the known properties of the large
$N$ limit of the GN model, and it is not surprising
if the analytic structure
is more complicated in QCD.
However, the nice
point is that the actual positions of those cuts
do depend on the RS, via
$A(\gamma_1)$ in (\ref{Fdef}).
Defining
$
\gamma^{'}_1 \equiv \gamma_1 +\Delta \gamma_1$,
we can choose
$Re[y_{cut}] \simeq 0$ for $\Delta\gamma_1 \simeq$ 0.00437 (0.00267) for
$n_f =$ 2 ($n_f =$ 3), respectively.
We therefore consider~\cite{jlk} the general RS change
\begin{equation}
m' = \bar m\;(1+B_1 \bar g^2 +B_2 \bar g^4)\;;\;\;\;g^{'2} = \bar g^2\;
(1 +A_1 \bar g^2 +A_2 \bar g^4)\;
\label{RSchange}
\end{equation}
(implying $\Delta\gamma_1 = 2b_0 B_1 -\gamma_0 A_1$),
and optimize with respect
to this new arbitrariness\footnote{
We also impose a further RS choice,
$
b^{'}_2 = 0$,
$\gamma^{'}_2 = 0$,
which fixes $A_2$, $B_2$ in (\ref{RSchange}) and
guarantees that our two-loop convention for $\Lambda_{\overline{MS}}$
remains unaffected. Note, however, that
(\ref{RSchange}) implies $\Lambda_{\overline{MS}} \to \Lambda_{\overline{MS}} \; \exp\{\frac{A_1}{2b_0}\}\equiv
\Lambda'$. In what
follows we express the results in terms of the original $\Lambda_{\overline{MS}}$.}.
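(For completeness, a short check of the relation $\Delta\gamma_1 = 2b_0 B_1 - \gamma_0 A_1$
quoted above, which is not spelled out in the text: from $m' = \bar m\,(1+B_1\bar g^2+\cdots)$
and $\mu\, d\bar g^2/d\mu = -2 b_0 \bar g^4+\cdots$ one finds
$\gamma'_m \equiv -(\mu/m')\,dm'/d\mu = \gamma_0\,\bar g^2 + (\gamma_1 + 2b_0 B_1)\,\bar g^4+\cdots$;
re-expressing this in terms of $g'^2 = \bar g^2(1+A_1\bar g^2+\cdots)$, i.e.
$\bar g^2 = g'^2(1-A_1 g'^2+\cdots)$, gives $\gamma'_1 = \gamma_1 + 2b_0 B_1 - \gamma_0 A_1$.)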
However one soon realizes that
our extension of the ``principle of minimal sensitivity" (PMS)~\cite{pms}
defines a rather complicated optimization
problem.
Fortunately, we can study this problem within some approximations,
which we believe are legitimate.
Since
the ansatz
(\ref{contour7}) (with the above RS change understood, to make
it consistent) would indeed
be optimal with respect
to $m^{''}$ for {\em vanishing} perturbative non-logarithmic
corrections, ${\cal M}_{i} =0$,
we shall assume that the
expansion for small $m''$ is as close as possible to an optimum,
and
define the $m^{''} \to 0$ limit by some relatively crude but standard
approximation,
avoiding numerical optimization
with respect to $m^{''}$.
The approximation we are looking for is
not unique: given (\ref{contour7}), one could construct
different
approximations leading to a finite limit
for $m'' \to 0$~\cite{gn2}. Here
we shall only demonstrate
the feasibility of our program in the simplest possible
realization.
In fact, since we shall anyhow optimize with respect to the RS dependence
we assume that it largely
takes into
account this non-uniqueness due to higher order uncertainties.
Pad\'e approximants are known to greatly improve perturbative
results~\cite{pade}
and often have the effect of smoothing the RS dependence.
We thus take a simple Pad\'e approximant
which by construction restores a simple pole for $F \to 0$
(i.e. $m'' \to 0$) in
(\ref{contour7}), and gives
\begin{equation}
{M^{Pad\acute{e}}(a,\Delta\gamma_1,B_1,m''\to 0) = \Lambda_{\overline{MS}}\;
(2C)^{-C} \;a\; \exp\{\frac{A_1}{2b_0}\}\;\left[
1 -{{\cal M}^2_{1}(a, \Delta\gamma_1, B_1)\over{{\cal M}_{2}(a,
\Delta\gamma_1, B_1)}}\right] }
\label{Mpade}
\end{equation}
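(A brief reconstruction, added for clarity and not taken verbatim from the original: one
natural way to obtain the bracket in (\ref{Mpade}) is to replace the correction factor
$1+{\cal M}_1/F+{\cal M}_2/F^2$ of (\ref{contour7}) by its $[1/1]$ Pad\'e approximant in the
variable $1/F$,
\[
\frac{1+\left({\cal M}_1-{\cal M}_2/{\cal M}_1\right)/F}{1-\left({\cal M}_2/{\cal M}_1\right)/F}\;,
\]
which reproduces the series to the order shown and stays finite for $F\to 0$ (i.e. $m''\to 0$),
where it tends to $1-{\cal M}^2_1/{\cal M}_2$, the factor appearing in (\ref{Mpade}).)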
We have performed a rather systematic study of the possible extrema
of (\ref{Mpade}) for arbitrary
$a$, $B_1$ (with $\Delta\gamma_1$
fixed so that the extra cuts start at $ Re[y] \simeq 0 $).
We obtain the flattest such extrema for $a \simeq 2.1$, $B_1 \simeq 0.1$,
which leads to the result
\begin{equation}
M^{Pad\acute{e}}_{opt}(m''\to 0) \simeq 2.97\;\Lambda_{\overline{MS}}(2)\;
\label{Mnum}
\end{equation}
for $n_f=2$. Similarly, we obtain $M^{Pad\acute{e}}_{opt}(m''\to 0)
\simeq 2.85 \Lambda_{\overline{MS}}(3)$
for $n_f=3$.
Note that these values of the dynamical quark masses, if
they are to be consistent
with the expected
range~\cite{Miransky}
of $M_{dyn}\simeq$ 300-400 MeV, call for relatively low
$\Lambda_{\overline{MS}} \simeq $ 100-150 MeV, which
is indeed supported by our results in the next section.
\section{Composite operators and $F_\pi$}
We shall now generalize the ansatz
(\ref{contour7})
for the pion decay constant $F_\pi$.
The main idea is to
do perturbation theory around the same RG evolution solution
with the non-trivial fixed point, as specified by the function $F$ in
(\ref{Fdef}),
with perturbative correction terms obviously specific to $F_\pi$.
A definition
of $F_\pi$ suiting all our purposes is~\cite{GaLeut,derafael}
\begin{equation}
i\;\int d^4q e^{iq.x} \langle 0 \vert T\;A^i_\mu(x) A^k_\nu(0) \vert 0
\rangle =
\delta^{ik} g_{\mu \nu} F^2_\pi +{\cal O}(p_\mu p_\nu)
\label{Fpidef}
\end{equation}
where the axial vector current $A^i_\mu \equiv
(\bar q \gamma_\mu
\gamma_5\lambda^i q)/2$ (the $\lambda^i$'s are Gell-Mann $SU(3)$ matrices
or Pauli matrices for $n_f =3$, $n_f=2$, respectively). Note that according
to (\ref{Fpidef}), $F_\pi$ is to be considered as an order
parameter of CSB~\cite{Stern}. \\
The perturbative expansion of (\ref{Fpidef}) for $m \neq 0$ is available
to the three-loop order, as it can be easily
extracted
from the very similar contributions to the electroweak
$\rho$-parameter, calculated at two loops in \cite{abdel}
and three loops in \cite{Avdeev}.
The appropriate generalization of
(\ref{contour7}) for $F_\pi$
now takes the form
\begin{eqnarray}
& \displaystyle{{F^2_\pi \over{\Lambda_{\overline{MS}}^2}} = (2b_0)\;
{2^{-2 C} a^2\over{2 i \pi}} \oint {dy\over y}\; y^2 {e^{y/m^{''}}}
\frac{1}{F^{\;2 A-1} [C + F]^{\;2 B}} }
\; \times \nonumber \\
& \displaystyle{ {\delta_{\pi }
\left(1 +{\alpha_{\pi}(a)\over{F}}+{\beta_{\pi}(a)
\over{F^2}}
\;+\cdots \right)} }
\label{Fpiansatz}
\end{eqnarray}
in terms of $F(y)$ defined by eq.~(\ref{Fdef}) and where
$\delta_\pi$, $\alpha_\pi(1)$ and $\beta_\pi(1)$,
whose complicated expressions
will be given elsewhere~\cite{jlk},
are fixed by matching the perturbative $\overline{MS}$
expansion in a way to be specified next.
Formula (\ref{Fpiansatz})
necessitates some comments: apart from the obvious changes in the powers of
$F$, $y$, etc,
dictated by dimensional analysis,
note that the perturbative expansion of the (composite operator)
$\langle A_\mu A_\nu \rangle$ in (\ref{Fpidef})
starts at one-loop, but zero $g^2$ order.
This leads to the extra $2b_0 F$ factor in (\ref{Fpiansatz}),
corresponding to an expansion
starting at ${\cal O}(1/g^2)$\footnote{
The ${\cal O}(1/g^2)$
first-order term cancels anyhow
after the necessary subtraction discussed
below.}.
Another difference is that
the perturbative expansion
of (\ref{Fpidef}) is ambiguous
due to remaining divergences
after mass and coupling
constant renormalization. Accordingly it necessitates additional
subtractions which, within our framework,
are nothing but the usual
renormalization procedure for a composite operator, which is (perturbatively)
well-defined~\cite{Collins}.
The only consequence is that,
after a consistent treatment
of the subtracted terms (i.e respecting RG invariance),
the unambiguous determination of the $1/F^n$ perturbative
terms in (\ref{Fpiansatz})
necessitates the knowledge of the $(n+1)$ order
of the ordinary perturbative expansion.
The nice thing, however, is that the subtracted terms only affect the
values of $\alpha_\pi$ and $\beta_\pi$, but not the
{\em form} of the ansatz (\ref{Fpiansatz}), as soon as the order of the
variational-perturbative expansion is larger than 1~\cite{gn2}.
The consistency of
our formalism is checked by noting that the re-expansion of
(\ref{Fpiansatz})
do reproduce correctly the LL and NLL dependence in $\bar m$ of the
perturbative expansion of the composite operator to all orders.
The
analyticity range with respect to $\Delta\gamma_1$, discussed in section
2, remains valid for
(\ref{Fpiansatz})
as well, since the branch cuts are determined by the very same
relation (\ref{Fdef}). We can thus proceed to a
numerical optimization with respect to the RS dependence, along the
same line as the mass case in section 2. Using an appropriate Pad\'e
approximant form to define the $F \to 0$ ($m'' \to 0$) limit,
we obtain the optimal
values as
\begin{equation}
F^{Pad\acute{e}}_{\pi ,opt}(m'' \to 0)
\simeq 0.55\;\Lambda_{\overline{MS}}(2)\;\;\;(0.59\;\Lambda_{\overline{MS}}(3)\;)\;,
\end{equation}
for $n_f =$ 2 (3). With $F_\pi \simeq 92$ MeV,
this gives $\Lambda_{\overline{MS}} \simeq $ 157 (168) MeV, for $n_f =$ 3 (2).
\section{$\langle \bar q q \rangle$ ansatz}
As is well known~\cite{Collins,Miransky},
$\langle \bar q q \rangle$ is not RG-invariant, while
$m \langle \bar q q \rangle$ is; this is thus the relevant quantity
to consider
for applying our RG-invariant construction.
A straightforward generalization of the
derivation in section 3 leads to the ansatz
\begin{eqnarray}
{\bar m \langle \bar q q\rangle \over{\Lambda_{\overline{MS}}^4}} =(2b_0)
{2^{-4 C} a^4\over{2 i \pi}} \oint {dy\over y} {e^{y/m^{''}} y^4
\over{(F)^{4 A-1} [C + F]^{4 B
}}} {
\delta_{\langle \bar q q\rangle}
\left(1 +{\alpha_{\langle \bar q q\rangle}(a)\over{F(y)}}
\;\right)}
\label{qqansatz}
\end{eqnarray}
where again the coefficients
$\delta_{\langle\bar q q\rangle}$ and
$\alpha_{\langle\bar q q\rangle}(1)$ are obtained from matching the
ordinary perturbative expansion after a subtraction,
and will be given explicitly elsewhere~\cite{jlk}.
The perturbative expansion, known up to two-loop order
\cite{Spiridonov,jlk}
implies that one only knows unambiguously the first
order correction ${\cal O}(1/F)$ in (\ref{qqansatz}),
as previously discussed.
Apart from that, (\ref{qqansatz})
has all the expected properties (RG invariance, resumming LL
and NLL dependence etc),
but a clear inconvenience
is that $\langle\bar q q\rangle$ cannot be directly accessed, being
screened by tiny explicit symmetry breaking effects due to $m \neq 0$.
This is of course a well-known problem, not specific to our construction.
However, it is not clear how to consistently include explicit
symmetry breaking effects
within our framework.
As amply discussed, in (\ref{qqansatz})
$m^{''}$ is an arbitrary parameter, destined to reach its
chiral limit $m^{''} \to 0$.
Accordingly, $\bar m \to 0$
for $m'' \to 0$,
so that one presumably expects only to recover a trivial result,
$\bar m \langle\bar q q\rangle \to 0$ for $m'' \to 0$. This is
actually the case:
although
(\ref{qqansatz}) potentially gives a non-trivial result in the
chiral limit, namely the simple
pole residue ($\equiv 2b_0(2C)^{-C} \;\delta_{\langle \bar q q\rangle}
\;\alpha_{\langle \bar q q\rangle}(a)$,
upon neglecting unknown higher-order
purely perturbative corrections), when we require
extrema of
this expression with respect to RS changes,
using for the $m'' \to 0$ limit
a Pad\'e approximant similar to the one for $F_\pi$,
we do {\em not} find
non-zero extrema.
Such a result is not conclusive regarding
the actual value of $\langle\bar q q \rangle(\bar\mu)$,
but it may be considered
a consistency cross-check
of our formalism. \\
On the other hand, we should mention that our basic expression
(\ref{qqansatz}) {\em does} possess non-trivial extrema for some
$m''_{opt} \neq 0$. These
we however refrain from interpreting in a more quantitative way
since, within our framework, we cannot
give to $m''\times \Lambda_{\overline{MS}}$ the meaning of a true, explicit quark mass
(whose
input we in principle need in order to extract a $\langle \bar q q\rangle
$ value
from (\ref{qqansatz})).
At least, it strongly indicates that it should be possible
to extract $\langle \bar q q\rangle$ in the chiral limit,
by introducing in a consistent way
a small explicit symmetry-breaking mass,
$-m_{0,exp} \bar q_i q_i$, to the basic Lagrangian (\ref{xdef}).
\section{Summary}
In this paper we have shown that
the variational expansion in arbitrary $m''$, as developed
in the context of the GN model~\cite{gn2}, can be formally
extended to the QCD case, apart from the complication due to the
presence of extra singularities, which can be however removed
by appropriate RS change.
As a result we
obtain in the chiral limit non-trivial relationships between
$\Lambda_{\overline{MS}}$ and the dynamical masses and order parameters, $F_\pi$,
$\bar m \langle\bar q q\rangle$.
The resulting
expressions in a generalized RS have been numerically optimized, using
a well-motivated Pad\'e approximant form, due to the complexity of the
full optimization problem. The optimal values obtained for $M_q$ and $F_\pi$
are quite encouraging, while for $\langle\bar q q\rangle$
they are quantitatively
not conclusive,
due to the inherent screening of this quantity by an explicit
mass term in the limit $m \to 0$.
A possible extension to include consistently explicit breaking mass terms
in our formalism is explored in ref.~\cite{jlk}.
\vskip .5 cm
{\large \bf Acknowledgements} \\
We are grateful to Eduardo de Rafael for
valuable remarks and discussions.
J.-L. K. also thanks Georges Grunberg, Heinrich Leutwyler, Jan Stern
and Christof Wetterich for useful discussions.
(C.A) is grateful to the theory group of Imperial College
for their hospitality.\\
\section*{Table of Contents}
\noindent \S0 - INTRODUCTION
\medskip
\noindent \S1 - THEIR PROTOTYPE IS ${\frak K}^{tr}_{\langle\lambda_n:n< \omega
\rangle}$, NOT ${\frak K}^{tr}_\lambda$!
\medskip
\noindent \S2 - ON STRUCTURES LIKE $(\prod\limits_n
\lambda_n,E_m)_{m<\omega}$, $\eta E_m\nu =:\eta(m) = \nu(m)$
\medskip
\noindent \S3 - REDUCED TORSION FREE GROUPS; NON-EXISTENCE OF UNIVERSALS
\medskip
\noindent \S4 - BELOW THE CONTINUUM THERE MAY BE UNIVERSAL STRUCTURES
\medskip
\noindent \S5 - BACK TO ${\frak K}^{rs(p)}$, REAL NON-EXISTENCE RESULTS
\medskip
\noindent \S6 - IMPLICATIONS BETWEEN THE EXISTENCE OF UNIVERSALS
\medskip
\noindent \S7 - NON-EXISTENCE OF UNIVERSALS FOR TREES WITH SMALL DENSITY
\medskip
\noindent \S8 - UNIVERSALS IN SINGULAR CARDINALS
\medskip
\noindent \S9 - METRIC SPACES AND IMPLICATIONS
\medskip
\noindent \S10 - ON MODULES
\medskip
\noindent \S11 - OPEN PROBLEMS
\medskip
\noindent REFERENCES
\eject
\section{Introduction}
{\bf Context.}\hspace{0.15in} In this paper, model theoretic notions (like
superstable, elementary classes) appear in the introduction but not in the
paper itself (so the reader does not need to know them). Only naive set
theory and basic facts on Abelian groups (all in \cite{Fu}) are necessary
for understanding the paper. The basic definitions are reviewed at the end
of the introduction. On the history of the problem of the existence of
universal members, see Kojman, Shelah \cite{KjSh:409}; for more direct
predecessors see Kojman, Shelah \cite{KjSh:447}, \cite{KjSh:455} and
\cite{Sh:456}, but we do not rely on them. For other advances see
\cite{Sh:457}, \cite{Sh:500} and D\v{z}amonja, Shelah \cite{DjSh:614}.
Lately \cite{Sh:622} continues this paper.
\medskip
A class ${\frak K}$ is a class of structures with an embeddability notion.
If not said otherwise, an embedding is a one-to-one function preserving
atomic relations and their negations. If ${\frak K}$ is a class and
$\lambda$ is a cardinal, then ${\frak K}_\lambda$ stands for the collection
of all members of ${\frak K}$ of cardinality $\lambda$.\\
We similarly define ${\frak K}_{\leq\lambda}$.
A member $M$ of ${\frak K}_\lambda$ is universal if every $N\in {\frak
K}_{\le \lambda}$ embeds into $M$. An example is $M=:\bigoplus\limits_\lambda
{\Bbb Q}$, which is universal in ${\frak K}_\lambda$ if ${\frak K}$ is
the class of all torsion-free Abelian groups, under usual embeddings.
We give some motivation to the present paper by a short review of the above
references. The general thesis in these papers, as well as the present one
is:
\begin{Thesis}
\label{0.1}
General Abelian groups and trees with $\omega+1$ levels behave in universality
theorems like stable non-superstable theories.
\end{Thesis}
The simplest example of such a class is the class ${\frak K}^{tr} =:$ trees
$T$ with $(\omega+1)$-levels, i.e. $T\subseteq {}^{\omega\ge}\alpha$ for some
$\alpha$, with the relations $\eta E^0_n\nu =: \eta\restriction n=\nu
\restriction n\ \&\ \lg(\eta)\geq n$. For ${\frak K}^{tr}$ we know
that $\mu^+<\lambda={\rm cf}(\lambda)
<\mu^{\aleph_0}$ implies there is no universal for ${\frak K}^{tr}_\lambda$
(by \cite{KjSh:447}). Classes as ${\frak K}^{rtf}$ (defined in the title),
or ${\frak K}^{rs(p)}$ (reduced separable Abelian $p$-groups) are similar
(though they are not elementary classes) when we consider pure embeddings
(by \cite{KjSh:455}). But it is no less natural to consider usual embeddings
(remembering that the (Abelian) groups under consideration are reduced). The
problem is that the invariants have been defined using divisibility, and so
under non-pure embeddings they seem to be erased.
Then in \cite{Sh:456} the non-existence of universals is proved restricting
ourselves to $\lambda>2^{\aleph_0}$ and $(< \lambda)$-stable groups
(see there). These restrictions hurt the generality of the theorem; because
of the first requirement we lose some cardinals. The second requirement
changes the class to one which is not established among Abelian group
theorists (though to me it looks natural).
Our aim was to eliminate those requirements, or show that they are necessary.
Note that the present paper is mainly concerned with results in
ZFC, but they have roots in ``difficulties" in extending independence results
thus providing a case for the
\begin{Thesis}
\label{0.2}
Even if you do not like independence results you had better look at them, as you
will not even consider your desirable ZFC results when they are camouflaged
by the litany of the many independence results one can prove.
\end{Thesis}
Of course, independence has interest {\em per se}; still for a given problem in
general a solution in ZFC is for me preferable to an independence result. But
if it gives a method of forcing (so relevant to a series of problems) the
independence result is preferable (of course, I assume there are no other
major differences; the depth of the proof would be of first importance to me).
As occurs often in my papers lately, quotations of {\bf pcf} theory appear.
This paper is also a case of
\begin{Thesis}
\label{0.3}
Assumptions of the failure of GCH at singulars (more generally $pp\lambda>
\lambda^+$) are ``good", ``helpful" assumptions; i.e. traditionally uses of
GCH proliferate mainly not from conviction but because one can prove many theorems
assuming $2^{\aleph_0}=\aleph_1$ but very few from $2^{\aleph_0}>\aleph_1$,
whereas assuming $2^{\beth_\omega}>\beth^+_\omega$ is helpful in proving theorems.
\end{Thesis}
Unfortunately, most results are only almost in ZFC as they use extremely weak
assumptions from {\bf pcf}, assumptions whose independence is not known. So
practically it is not tempting to try to remove them as they may be
true, and it is unreasonable to try to prove independence results before
independence results on {\bf pcf} advance.
In \S1 we give an explanation of the earlier difficulties: the problem
(of the existence of universals for
${\frak K}^{rs(p)}$) is not like looking for ${\frak K}^{tr}$ (trees with
$\omega+1$ levels) but for ${\frak K}^{tr}_{\langle\lambda_\alpha:\alpha<
\omega\rangle}$ where
\begin{description}
\item[($\oplus$)] $\lambda^{\aleph_0}_n<\lambda_{n+1}<\mu$, $\lambda_n$
are regular and $\mu^+<\lambda=\lambda_\omega={\rm cf}(\lambda)<\mu^{\aleph_0}$
and ${\frak K}^{tr}_{\langle\lambda_n:n<\omega\rangle}$ is
\[\{T:T\mbox{ a tree with $\omega+1$ levels, in level $n< \omega$ there are
$\lambda_n$ elements}\}.\]
\end{description}
We also consider ${\frak K}^{tr}_{\langle\lambda_\alpha:\alpha\leq\omega
\rangle}$, which is defined similarly but the level $\omega$ of $T$ is
required to have $\lambda_\omega$ elements.
\noindent For ${\frak K}^{rs(p)}$ this is proved fully, for ${\frak K}^{rtf}$
this is proved for the natural examples.
\medskip
In \S2 we define two such basic examples: one is ${\frak K}^{tr}_{\langle
\lambda_\alpha:\alpha \le \omega \rangle}$, and the second is
${\frak K}^{fc}_{\langle \lambda_\alpha:\alpha\leq\omega\rangle}$.
The first is a tree with $\omega+1$ levels;
in the second we have slightly less restrictions. We have $\omega$ kinds of
elements and a function from the $\omega$-th-kind to the $n$th kind. We can
interpret a tree $T$ as a member of the second example: $P^T_\alpha = \{x:x
\mbox{ is of level }\alpha\}$ and
\[F_n(x) = y \quad\Leftrightarrow \quad x \in P^T_\omega\ \&\ y \in
P^T_n\ \&\ y <_T x.\]
For the second we recapture the non-existence theorems.
But this is not one of the classes we considered originally.
In \S3 we return to ${\frak K}^{rtf}$ (reduced torsion free Abelian groups)
and prove the non-existence of universal ones in $\lambda$ if $2^{\aleph_0}
< \mu^+<\lambda={\rm cf}(\lambda)<\mu^{\aleph_0}$ and an additional very weak set
theoretic assumption (the consistency of its failure is not known).
\medskip
\noindent Note that (it will be proved in \cite{Sh:622}):
\begin{description}
\item[($\otimes$)] if $\lambda<2^{\aleph_0}$ then ${\frak K}^{rtf}_\lambda$
has no universal members.
\end{description}
\noindent Note: if $\lambda=\lambda^{\aleph_0}$ then ${\frak K}^{tr}_\lambda$
has a universal member, and so does ${\frak K}^{rs(p)}_\lambda$ (see \cite{Fu}), but not
${\frak K}^{rtf}_\lambda$ (see \cite[Ch IV, VI]{Sh:e}).
\noindent We have noted above that for ${\frak K}^{rtf}_\lambda$ requiring
$\lambda\geq 2^{\aleph_0}$ is reasonable: we can prove (i.e. in ZFC) that
there is no universal member. What about ${\frak K}^{rs(p)}_\lambda$? By \S1
we should look at ${\frak K}^{tr}_{\langle\lambda_i:i\le\omega\rangle}$,
$\lambda_\omega=\lambda<2^{\aleph_0}$, $\lambda_n<\aleph_0$.
In \S4 we prove the consistency of the existence of universals for
${\frak K}^{tr}_{\langle\lambda_i:i \le\omega\rangle}$
when $\lambda_n\leq \omega$, $\lambda_\omega=\lambda< 2^{\aleph_0}$ but of
cardinality
$\lambda^+$; this is not the original problem but it seems to be a reasonable
variant, and more seriously, it shoots down the hope to use the present
methods of proving non-existence of universals. Anyhow this is
${\frak K}^{tr}_{\langle \lambda_i:i \le\omega\rangle}$ not
${\frak K}^{rs(p)}_{\lambda_\omega}$, so we proceed to reduce this problem to
the previous one under a mild variant of MA. The intentions are to deal with
``there is universal of cardinality $\lambda$" in D\v{z}amonja Shelah
\cite{DjSh:614}.
The reader should remember that the consistency of e.g.
\begin{quotation}
{\em
\noindent $2^{\aleph_0}>\lambda>\aleph_0$ and there is no $M$ such that $M\in
{\frak K}^{rs(p)}$ is of cardinality $<2^{\aleph_0}$ and universal for
${\frak K}^{rs(p)}_\lambda$
}
\end{quotation}
is much easier to obtain, even in a wider context (just add many Cohen reals).
\medskip
As in \S4 the problem for ${\frak K}^{rs(p)}_\lambda$ was reasonably resolved
for $\lambda<2^{\aleph_0}$ (and for $\lambda = \lambda^{\aleph_0}$, see
\cite{KjSh:455}), we now, in \S5 turn to $\lambda>2^{\aleph_0}$ (and
$\mu,\lambda_n$)
as in $(\oplus)$ above.
As in an earlier proof we use $\langle C_\delta: \delta\in S\rangle$ guessing
clubs for $\lambda$ (see references or later here), so $C_\delta$ is a subset
of $\delta$ (so the invariant depends on the representation of $G$ but this
disappears when we divide by a suitable ideal on $\lambda$).
What we do is: rather than trying to code a subset of
$C_\delta$ (for $\bar G=\langle G_i:i<\lambda\rangle$ a representation or
filtration of the structure $G$ as the union of an increasing continuous
sequence of structures of smaller cardinality)
by an element of $G$, we do it, say, by some set $\bar x=\langle x_t:t\in
{\rm Dom}(I)\rangle$, $I$ an ideal on ${\rm Dom}(I)$ (really by $\bar x/I$). At first
glance if ${\rm Dom}(I)$ is infinite we cannot list {\em a priori} all possible
such sequences for a candidate $H$ for being a universal member, as their
number is $\ge\lambda^{\aleph_0}=\mu^{\aleph_0}$. But we can find a family
\[{\cal F}\subseteq\{\langle x_t:t\in A\rangle:\ A\subseteq{\rm Dom}(I),\ A\notin
I,\ x_t\in\lambda\}\]
of cardinality $<\mu^{\aleph_0}$ such that for any $\bar{x}=\langle x_t:t\in
{\rm Dom}(I)\rangle$, for some $\bar y\in {\cal F}$ we have $\bar y=\bar
x\restriction {\rm Dom}(\bar y)$.
\medskip
As in \S3 there is such ${\cal F}$ except when some set theoretic statement
related to {\bf pcf} holds. This statement is extremely strong, also in the
sense that we do not know how to prove its consistency at present. But
again, it seems unreasonable to try to prove its consistency before the
{\bf pcf} problem was dealt with. Of course, we may try to improve the
combinatorics to avoid the use of this statement, but are naturally
discouraged by the possibility that the {\bf pcf} statement can be proved in
ZFC; thus we would retroactively get the non-existence of universals in ZFC.
\medskip
In \S6, under weak {\bf pcf} assumptions, we prove: if there is a universal
member in ${\frak K}^{fc}_\lambda$ then there is one in
${\frak K}^{rs(p)}_\lambda$; so making the connection between the
combinatorial structures and the algebraic ones closer.
\medskip
In \S7 we give other weak {\bf pcf} assumptions which suffice to prove
non-existence of universals in ${\frak K}^x_{\langle\lambda_\alpha:\alpha
\le\omega\rangle}$ (with $x$ one of the ``legal'' values):
$\max{\rm pcf}\{\lambda_n:n<\omega\}=\lambda$ and ${\cal P}(
\{\lambda_n:n<\omega\})/J_{<\lambda}\{\lambda_n:n<\omega\}$ is an infinite
Boolean Algebra (and $(\oplus)$ holds, of course).
\medskip
In \cite{KjSh:409}, for singular $\lambda$ results on non-existence of
universals (there on orders) can be gotten from these weak {\bf pcf}
assumptions.
\medskip
In \S8 we get parallel results from, in general, more complicated assumptions.
\medskip
In \S9 we turn to a closely related class: the class of metric spaces with
(one to one) continuous embeddings, similar results hold for it. We also
phrase a natural criterion for deducing the non-existence of universals
from one class to another.
\medskip
In \S10 we deal with modules and in \S11 we discuss the open problems of
various degrees of seriousness.
The sections are written in the order the research was done.
\begin{notation}
\label{0.4}
Note that we deal with trees with $\omega+1$ levels rather than, say, with
$\kappa+1$, and related situations, as those cases are quite popular. But
inherently the proofs of \S1-\S3, \S5-\S9 work for $\kappa+1$ as well (in
fact, {\bf pcf} theory is stronger).
\noindent For a structure $M$, $\|M\|$ is its cardinality.
\noindent For a model, i.e. a structure, $M$ of cardinality $\lambda$,
where $\lambda$ is regular uncountable, we say that $\bar M$ is a
representation (or filtration) of $M$ if $\bar M=\langle M_i:i<\lambda\rangle$
is an increasing continuous sequence of submodels of cardinality $<\lambda$
with union $M$.
\noindent For a set $A$, we let $[A]^\kappa = \{B:B \subseteq A \mbox{ and }
|B|=\kappa\}$.
\noindent For a set $C$ of ordinals,
$${\rm acc}(C)=\{\alpha\in C: \alpha=\sup(\alpha \cap C)\}, \mbox{(set of
accumulation points)}$$
$$
{\rm nacc}(C)=C\setminus {\rm acc}(C) \ (=\mbox{ the set of non-accumulation points}).
$$
We usually use $\eta$, $\nu$, $\rho$ for sequences of ordinals; let
$\eta\vartriangleleft\nu$ mean that $\eta$ is an initial segment of $\nu$.
Let ${\rm cov}(\lambda, \mu, \theta, \sigma)= \min\{|{\cal P}|: {\cal P}\subseteq
[\lambda]^{<\mu}$, and for every $A\in [\lambda]^{<\theta}$ for some $\alpha<
\sigma$ and $B_i\in {\cal P}$ for $i< \alpha$ we have $A\subseteq
\bigcup\limits_{i< \alpha} B_i\}$.
\noindent Remember that for an ordinal $\alpha$, e.g. a natural number,
$\alpha=\{\beta:\beta<\alpha\}$.
\end{notation}
\begin{notation}
\noindent ${\frak K}^{rs(p)}$ is the class of (Abelian) groups which are
$p$-groups (i.e. $(\forall x\in G)(\exists n)[p^nx = 0]$) reduced (i.e. have
no divisible non-zero subgroups) and separable (i.e. every cyclic pure
subgroup is a direct summand). See \cite{Fu}.
\noindent For $G\in{\frak K}^{rs(p)}$ define a norm $\|x\|=\inf\{2^{-n}:
p^n \mbox{ divides } x\}$. Now every $G\in {\frak K}^{rs(p)}$ has a
basic subgroup $B=\bigoplus\limits_{\scriptstyle n<\omega\atop\scriptstyle
i<\lambda_n} {\Bbb Z} x^n_i$, where $x^n_i$ has order $p^{n+1}$, and every
$x\in G$ can be represented as $\sum\limits_{\scriptstyle n<\omega\atop
\scriptstyle i<\lambda_n} a^n_ix^n_i$, where for each $n$, $w_n(x)=\{i<
\lambda_n:a^n_ix^n_i\ne 0\}$ is finite.
\noindent ${\frak K}^{rtf}$ is the class of Abelian groups which are reduced
and torsion free (i.e. $G \models nx = 0$, $n>0$\qquad
$\Rightarrow\qquad x = 0$).
\noindent For a group $G$ and $A\subseteq G$ let $\langle A\rangle_G$ be the
subgroup of $G$ generated by $A$, we may omit the subscript $G$ if
clear from the context.
\noindent Group will mean an Abelian group, even if not stated
explicitly.
\noindent Let $H\subseteq_{pr} G$ mean that $H$ is a pure subgroup of
$G$.
\noindent Let $nG=\{nx: x\in G\}$ and let $G[n]=\{x\in G: nx=0\}$.
\end{notation}
\begin{notation}
${\frak K}$ will denote a class of structures with the same
vocabulary, with a notion of embeddability, equivalently a notion
$\leq_{{\frak K}}$ of submodel.
\end{notation}
\section{Their prototype is ${\frak K}^{tr}_{\langle \lambda_n:n<\omega
\rangle}$ not ${\frak K}^{tr}$!}
If we look for a universal member in ${\frak K}^{rs(p)}_\lambda$, thesis
\ref{0.1} suggests to us to think it is basically ${\frak K}^{tr}_\lambda$
(trees with $\omega+1$ levels, i.e. ${\frak K}^{tr}_{\lambda}$ is our
prototype), a way followed in \cite{KjSh:455}, \cite{Sh:456}. But, as
explained in the introduction, this does not give an answer for the case
of usual embeddings for the family of all such groups. Here we show
that for this case the thesis should be corrected. More concretely,
the choice of the prototype means the choice of what we expect is the
division of the possible classes. That is for a family of classes a
choice of a prototype assert that we believe that they all behave in
the same way.
We show that looking for a universal member $G$ in ${\frak K}^{rs(p)}_\lambda$
is like looking for it among the $G$'s with density $\le\mu$
($\lambda,\mu$, as usual, are as in $(\oplus)$ from \S0). For
${\frak K}^{rtf}_\lambda$ we get weaker results, which still cover the examples
usually constructed, thus showing that the restrictions in \cite{KjSh:455} (to
pure embeddings) and in \cite{Sh:456} (to $(<\lambda)$-stable groups) were
natural.
\begin{Proposition}
\label{1.1}
Assume that $\mu=\sum\limits_{n<\omega}\lambda_n=\lim\sup\limits_n\lambda_n$,
$\mu\le\lambda\le\mu^{\aleph_0}$, and $G$ is a reduced separable
$p$-group such that
\[|G|=\lambda\quad\mbox{ and }\quad\lambda_n(G)=:\dim((p^n G)[p]/
(p^{n+1}G)[p])\le\mu\]
(this is a vector space over ${\Bbb Z}/p {\Bbb Z}$, hence the dimension is
well defined). \\
{\em Then} there is a reduced separable $p$-group $H$ such that
$|H|=\lambda$, $H$ extends $G$ and $(p^nH)[p]/(p^{n+1}H)[p]$ is a
group of dimension $\lambda_n$ (so if $\lambda_n\geq \aleph_0$, this
means cardinality $\lambda_n$).
\end{Proposition}
\begin{Remark}
\label{1.1A}
So for $H$ the invariants from \cite{KjSh:455} are trivial.
\end{Remark}
\proof (See Fuchs \cite{Fu}). We can find $z^n_i$ (for
$n<\omega$, $i<\lambda_n(G)\le\mu$) such that:
\begin{description}
\item[(a)] $z^n_i$ has order $p^n$,
\item[(b)] $B=\sum\limits_{n,i}\langle z^n_i \rangle_G$ is a direct sum,
\item[(c)] $B$ is dense in $G$ in the topology induced by the norm
\[\|x\|=\min\{2^{-n}:p^n \mbox{ divides } x \mbox{ in } G\}.\]
\end{description}
For each $n<\omega$ and $i<\lambda_n(G)$ ($\le\mu$) choose $\eta^n_i\in
\prod\limits_{m<\omega}\lambda_m$, pairwise distinct, such that for $(n^1,i^1)
\neq (n^2,i^2)$, for some $n(*)$, for every $n$ we have:
\[\lambda_n \ge \lambda_{n(*)}\qquad \Rightarrow\qquad \eta^{n^1}_{i^1}(n)
\neq \eta^{n^2}_{i^2}(n).\]
Let $H$ be generated by $G$, $x^m_i$ ($i<\lambda_m$, $m<\omega$),
$y^{n,k}_i$ ($i<\lambda_n$, $n<\omega$, $n\le k<\omega)$ freely except for:
\begin{description}
\item[($\alpha$)] the equations of $G$,
\item[($\beta$)] $y^{n,n}_i = z^n_i$,
\item[($\gamma$)] $py^{n,k+1}_i - y^{n,k}_i = x^k_{\eta^n_i(k)}$,
\item[($\delta$)] $p^{n+1}x^n_i = 0$,
\item[($\varepsilon$)] $p^{k+1}y^{n,k}_i = 0$.
\end{description}
Now check. \hfill$\square_{\ref{1.1}}$
\begin{Definition}
\label{1.2}
\begin{enumerate}
\item ${\bf t}$ denotes a sequence $\langle t_i:i<\omega\rangle$, $t_i$ a
natural number $>1$.
\item For a group $G$ we define
\[G^{[{\bf t}]}=\{x\in G:\bigwedge_{j<\omega}[x\in (\prod_{i<j} t_i)
G]\}.\]
\item We can define a semi-norm $\|-\|_{{\bf t}}$ on $G$
\[\|x\|_{{\bf t}}=\inf\{2^{-i}:x\in (\prod_{j<i} t_j)G\}\]
and so the semi-metric
\[d_{{\bf t}}(x,y)=\|x-y\|_{{\bf t}}.\]
\end{enumerate}
\end{Definition}
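\noindent Note (a reformulation which is implicit in what follows): $d_{{\bf t}}(x,y)=0$
if and only if $x-y\in G^{[{\bf t}]}$, so $d_{{\bf t}}$ is a metric (and
$\|-\|_{{\bf t}}$ a norm) if and only if $G^{[{\bf t}]}=\{0\}$.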
\begin{Remark}
\label{1.2A}
So, if $\|-\|_{{\bf t}}$ is a norm, $G$ has a completion under $\|-\|_{{\bf t}}$,
which we call the $\|-\|_{{\bf t}}$-completion; if ${\bf t}=\langle i!:i<\omega
\rangle$ we refer to $\|-\|_{{\bf t}}$ as the $\Bbb Z$-adic norm, and it induces
the $\Bbb Z$-adic topology, so we can speak of the $\Bbb Z$-adic completion.
\end{Remark}
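\noindent For another illustration (again only as an example): if ${\bf t}$ is
constantly $p$, i.e. $t_i=p$ for every $i<\omega$, then $\prod\limits_{j<i}t_j=p^i$,
so
\[G^{[{\bf t}]}=\bigcap_{i<\omega}p^iG,\qquad
\|x\|_{{\bf t}}=\inf\{2^{-i}:x\in p^iG\},\]
and the induced topology is the $p$-adic topology on $G$.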
\begin{Proposition}
\label{1.3}
Suppose that
\begin{description}
\item[($\otimes_0$)] $\mu=\sum\limits_n\lambda_n$ and $\mu\le\lambda\le
\mu^{\aleph_0}$ for simplicity, $2<2\cdot\lambda_n\le\lambda_{n+1}$ (maybe
$\lambda_n$ is finite!),
\item[($\otimes_1$)] $G$ is a torsion free group, $|G|=\lambda$, and
$G^{[{\bf t}]}=\{0\}$,
\item[($\otimes_2$)] $G_0\subseteq G$, $G_0$ is free and $G_0$ is
${\bf t}$-dense in $G$ (i.e. in the topology induced by the metric $d_{{\bf t}}$),
where ${\bf t}$ is a sequence of primes.
\end{description}
{\em Then} there is a torsion free group $H$, $G\subseteq H$, $H^{[{\bf t}]}
=\{0\}$, $|H|=\lambda$ and, under $d_{{\bf t}}$, $H$ has density $\mu$.
\end{Proposition}
\proof Let $\{x_i:i<\lambda\}$ be a basis of $G_0$. Let $\eta_i\in
\prod\limits_{n<\omega} \lambda_n$ for $i<\mu$ be distinct such that
$\eta_i(n+1)\geq \lambda_n$ and
\[i\ne j\qquad \Rightarrow\qquad (\exists m)(\forall n)[m \le n \quad
\Rightarrow\quad \eta_i(n) \ne \eta_j(n)].\]
Let $H$ be generated by
\[G,\ \ x^m_i \mbox{ (for $i<\lambda_m$, $m<\omega$), }\ y^n_i \mbox{
(for $i<\mu$, $n<\omega$)}\]
freely except for
\begin{description}
\item[(a)] the equations of $G$,
\item[(b)] $y^0_i = x_i$,
\item[(c)] $t_n\, y^{n+1}_i + y^n_i = x^n_{\eta_i(n)}$.
\end{description}
\medskip
\noindent{\bf Fact A}\hspace{0.15in} $H$ extends $G$ and is torsion free.
\noindent [Why? As $H$ can be embedded into the divisible hull of $G$.]
\medskip
\noindent{\bf Fact B}\hspace{0.15in} $H^{[{\bf t}]}= \{0\}$.
\proof Let $K$ be a countable pure subgroup of $H$ such that $K^{[{\bf t}]}\ne
\{0\}$. Now without loss of generality $K$ is generated by
\begin{description}
\item[(i)] $K_1\subseteq G\cap\big[\mbox{the $d_{{\bf t}}$--closure of }\langle x_i:
i\in I\rangle_G\big]$, where $I$ is a countably infinite subset of $\lambda$
and $K_1\supseteq\langle x_i:i\in I\rangle_G$,
\item[(ii)] $y^m_i$, $x^n_j$ for $i\in I$, $m<\omega$ and $(n,j)\in
J$, where $J\subseteq \omega\times \lambda$ is countable and
\[i\in I,\ n<\omega\qquad\Rightarrow\qquad (n,\eta_i(n))\in J.\]
\end{description}
Moreover, the equations holding among those elements are deducible from the
equations of the form
\begin{description}
\item[(a)$^-$] equations of $K_1$,
\item[(b)$^-$] $y^0_i=x_i$ for $i \in I$,
\item[(c)$^-$] $t_n\,y^{n+1}_i+y^n_i=x^n_{\eta_i(n)}$ for $i\in I,n<\omega$.
\end{description}
\noindent We can find $\langle k_i:i<\omega\rangle$ such that
\[[n\ge k_i\ \&\ n\ge k_j\ \&\ i \ne j\qquad \Rightarrow\qquad \eta_i(n)\ne
\eta_j(n)].\]
Let $y \in K\setminus\{0\}$. Then for some $j$, $y\notin (\prod\limits_{i<j}
t_i)G$, so for some finite $I_0\subseteq I$ and finite $J_0\subseteq J$ and
\[y^* \in\langle\{x_i:i\in I_0\}\cup\{x^n_\alpha:(n,\alpha)\in J_0\}
\rangle_K\]
we have $y-y^*\in(\prod\limits_{i<j} t_i) G$. Without loss of generality $J_0
\cap\{(n,\eta_i(n)):i\in I,\ n\ge k_i\}=\emptyset$. Now there is a
homomorphism $\varphi$ from $K$ into the divisible hull $K^{**}$ of
\[K^* = \langle\{x_i:i\in I_0\}\cup\{x^n_j:(n,j)\in J_0\}\rangle_G\]
such that ${\rm Rang}(\varphi)/K^*$ is finite. This is enough.
\medskip
\noindent{\bf Fact C}\hspace{0.15in} $H_0=:\langle x^n_i:n<\omega,i<\lambda_n
\rangle_H$ is dense in $H$ by $d_{{\bf t}}$.
\proof Straightforward, as each $x_i$ is in the $d_{{\bf t}}$-closure of $H_0$ inside $H$.
\medskip
Noting then that we can increase the dimension easily, we are done.
\hfill$\square_{\ref{1.3}}$
\section{On structures like $(\prod\limits_n \lambda_n,E_m)_{m<\omega}$,
$\eta E_m \nu =: \eta(m)=\nu(m)$}
\begin{Discussion}
\label{2.1}
We discuss the existence of universal members in cardinality $\lambda$,
$\mu^+<\lambda<\mu^{\aleph_0}$, for certain classes of groups. The claims
in \S1 indicate that the problem is similar not to the problem of the
existence of a universal member in ${\frak K}^{tr}_\lambda$ (the class of
trees with $\lambda$ nodes, $\omega+1$ levels) but to the one where the first
$\omega$ levels, are each with $<\mu$ elements. We look more carefully and
see that some variants are quite different.
The major concepts and Lemma (\ref{2.3}) are similar to those of \S3, but
easier. Since detailed proofs are given in \S3, here we give somewhat
shorter proofs.
\end{Discussion}
\begin{Definition}
\label{2.2}
For a sequence $\bar\lambda=\langle\lambda_i:i\le\delta\rangle$ of cardinals
we define:
\begin{description}
\item[(A)] ${\frak K}^{tr}_{\bar \lambda}=\{T:\,T$ is a tree with $\delta
+1$ levels (i.e. a partial order such that for $x\in T$,
${\rm lev}_T(x)=:{\rm otp}(\{y:y<x\})$ is an ordinal $\le\delta$) such that
${\rm lev}_i(T)=:\{x\in T:{\rm lev}_T(x)=i\}$ has cardinality $\le\lambda_i\}$,
\item[(B)] ${\frak K}^{fc}_{\bar\lambda}=\{M:\,M=(|M|,P_i,F_i)_{i\le\delta}$,
$|M|$ is the disjoint union of $\langle P_i:i\le\delta\rangle$, $F_i$ is a
function from $P_\delta$ to $P_i$, $\|P_i\|\le\lambda_i$, and
$F_\delta$ is the identity (so can be omitted)$\}$,
\item[(C)] If $[i\le\delta\quad \Rightarrow\quad \lambda_i=\lambda]$ then we
write $\lambda$, $\delta+1$ instead of
$\langle\lambda_i:i\le\delta\rangle$.
\end{description}
\end{Definition}
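\noindent A side remark on the connection between the two classes (given only as
motivation): a tree $T\in{\frak K}^{tr}_{\bar\lambda}$ naturally gives a structure
in ${\frak K}^{fc}_{\bar\lambda}$, namely
\[M_T=\big(T,\ {\rm lev}_i(T),\ F_i\big)_{i\le\delta},\qquad
F_i(x)=\mbox{the unique }y\le x\mbox{ with }{\rm lev}_T(y)=i\quad
(x\in{\rm lev}_\delta(T)),\]
whenever the set of predecessors of each node is well ordered (the usual convention
for trees), since then each $x$ of level $\delta$ has exactly one predecessor of
each level $i<\delta$.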
\begin{Definition}
\label{2.2A}
Embeddings for ${\frak K}^{tr}_{\bar\lambda}$, ${\frak K}^{fc}_{\bar\lambda}$
are defined naturally: for ${\frak K}^{tr}_{\bar\lambda}$ embeddings preserve
$x<y$, $\neg x<y$, ${\rm lev}_T(x)=\alpha$; for ${\frak K}^{fc}_{\bar\lambda}$
embeddings are defined just as for models.
If $\delta^1=\delta^2=\delta$ and $[i<\delta\quad\Rightarrow\quad\lambda^1_i
\le\lambda^2_i]$ and $M^\ell\in{\frak K}^{fc}_{\bar\lambda^\ell}$ (or $T^\ell
\in{\frak K}^{tr}_{\bar\lambda^\ell}$) for $\ell=1,2$, then an embedding of
$M^1$ into $M^2$ ($T^1$ into $T^2$) is defined naturally.
\end{Definition}
\begin{Lemma}
\label{2.3}
Assume $\bar\lambda=\langle\lambda_i:i\le\delta\rangle$ and $\theta$, $\chi$
satisfy (for some $\bar C$):
\begin{description}
\item[(a)] $\lambda_\delta$, $\theta$ are regular, $\bar C=\langle C_\alpha:
\alpha\in S\rangle$, $S\subseteq\lambda=:\lambda_\delta$, $C_\alpha\subseteq
\alpha$, for every club $E$ of $\lambda$ for some $\alpha$ we have $C_\alpha
\subseteq E$, $\lambda_\delta<\chi\le |C_\alpha|^\theta$ and ${\rm otp}(C_\alpha)
\ge\theta$,
\item[(b)] $\lambda_i\le\lambda_\delta$,
\item[(c)] there are $\theta$ pairwise disjoint sets $A\subseteq\delta$
such that $\prod\limits_{i\in A}\lambda_i\ge\lambda_\delta$.
\end{description}
{\em Then}
\begin{description}
\item[($\alpha$)] there is no universal member in ${\frak K}^{fc}_{\bar
\lambda}$;\quad moreover
\item[($\beta$)] if $M_\alpha\in {\frak K}^{fc}_{\bar\lambda}$ or even
$M_\alpha\in {\frak K}^{fc}_{\lambda_\delta}$ for $\alpha<\alpha^*<\chi$
{\em then} some $M\in {\frak K}^{fc}_{\bar\lambda}$ cannot be embedded into
any $M_\alpha$.
\end{description}
\end{Lemma}
\begin{Remark}
\label{2.3A}
Note that clause $(\beta)$ is relevant to our discussion in \S1: the
non-universality is preserved even if we increase the density and,
moreover, it is witnessed even by non-embeddability into many models.
\end{Remark}
\proof Let $\langle A_\varepsilon:\varepsilon<\theta\rangle$ be as in clause
(c) and let $\eta^\varepsilon_\alpha\in\prod\limits_{i\in A_\varepsilon}
\lambda_i$ for $\alpha<\lambda_\delta$ be pairwise distinct. We fix $M_\alpha
\in {\frak K}^{fc}_{\lambda_\delta}$ for $\alpha<\alpha^*<\chi$.
\noindent For $M\in {\frak K}^{fc}_{\bar\lambda}$, say $M=(|M|,P^M_i,
F^M_i)_{i\le\delta}$, let $\bar M=\langle M_\alpha: \alpha<
\lambda_\delta\rangle$ be a representation (=filtration) of $M$; for
$\alpha\in S$, $x\in P^M_\delta$, let
\[\begin{ALIGN}
{\rm inv}(x,C_\alpha;\bar M)=\big\{\beta\in C_\alpha:&\mbox{for some }\varepsilon
<\theta\mbox{ and } y\in M_{\min(C_\alpha\setminus (\beta+1))}\\
&\mbox{we have }\ \bigwedge\limits_{i\in A_\varepsilon} F^M_i(x)=F^M_i(y)\\
&\mbox{\underbar{but} there is no such } y\in M_\beta\big\}.
\end{ALIGN}\]
\[{\rm Inv}(C_\alpha,\bar M)=\{{\rm inv}(x,C_\alpha,\bar M):x\in P^M_\delta\}.\]
\[{\rm INv}(\bar M,\bar C)=\langle{\rm Inv}(C_\alpha,\bar M):\alpha\in S\rangle.\]
\[{\rm INV}(\bar M,\bar C)={\rm INv}(\bar M,\bar C)/{\rm id}^a(\bar C).\]
Recall that
\[{\rm id}^a(\bar C)=\{T\subseteq\lambda:\mbox{ for some club $E$ of
$\lambda$ for no $\alpha\in T$ is $C_\alpha\subseteq E$}\}.\]
The rest should be clear (for more details see proofs in \S3), noticing
\begin{Fact}
\label{2.3B}
\begin{enumerate}
\item ${\rm INV}(\bar M,\bar C)$ is well defined, i.e. if $\bar M^1$, $\bar M^2$
are representations (=filtrations) of $M$ then ${\rm INV}(\bar M^1,\bar
C)={\rm INV}(\bar M^2,\bar C)$.
\item ${\rm Inv}(C_\alpha,\bar M)$ has cardinality $\le\lambda$.
\item ${\rm inv}(x,C_\alpha;\bar M)$ is a subset of $C_\alpha$ of cardinality
$\le \theta$.
\end{enumerate}
\end{Fact}
\hfill$\square_{\ref{2.3}}$
\begin{Conclusion}
\label{2.4}
If $\mu=\sum\limits_{n<\omega}\lambda_n$ and $\lambda^{\aleph_0}_n<
\lambda_{n+1}$ and $\mu^+<\lambda_\omega={\rm cf}(\lambda_\omega)<\mu^{\aleph_0}$,
{\em then} in ${\frak K}^{fc}_{\langle\lambda_\alpha:\alpha\le\omega\rangle}$
there is no universal member, and even in ${\frak K}^{fc}_{\langle
\lambda_\omega:\alpha\le\omega\rangle}$ there is no member which is universal
for ${\frak K}^{fc}_{\langle\lambda_\alpha:\alpha\le\omega\rangle}$.
\end{Conclusion}
\proof Should be clear or see the proof in \S3.
\hfill$\square_{\ref{2.4}}$
\section{Reduced torsion free groups: Non-existence of universals}
We try to choose torsion free reduced groups and define invariants so that
in an extension to another such group $H$ something survives. To this end
it is natural to stretch ``reduced'' close to its limit.
\begin{Definition}
\label{3.1}
\begin{enumerate}
\item ${\frak K}^{tf}$ is the class of torsion free (abelian) groups.
\item ${\frak K}^{rtf}=\{G\in {\frak K}^{tf}:{\Bbb Q}$ is not embeddable into
$G$ (i.e. $G$ is reduced)$\}$.
\item ${\bf P}^*$ denotes the set of primes.
\item For $x\in G$, ${\bf P}(x,G)=:\{p\in{\bf P}^*: \bigwedge\limits_n x\in
p^n G\}$.
\item ${\frak K}^x_\lambda=\{G\in{\frak K}^x:\|G\|=\lambda\}$.
\item If $H\in {\frak K}^{rtf}_\lambda$, we say that $\bar H$ is a representation
(or filtration) of $H$ if $\bar H=\langle
H_\alpha:\alpha<\lambda\rangle$ is increasing continuous,
$H=\bigcup\limits_{\alpha<\lambda} H_\alpha$,
and each $H_\alpha$ is a subgroup of $H$ (so $H_\alpha\in {\frak K}^{rtf}$) of
cardinality $<\lambda$.
\end{enumerate}
\end{Definition}
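\noindent Two simple illustrations of clause (4) (just as examples): for
$G={\Bbb Z}$ and $x\ne 0$ we have ${\bf P}(x,G)=\emptyset$, while for the
localization
\[G={\Bbb Z}_{(q)}=\{a/b\in{\Bbb Q}: q\nmid b\}\]
(which is reduced and torsion free) we have ${\bf P}(x,G)={\bf P}^*\setminus\{q\}$
for every $x\ne 0$, since every prime $p\ne q$ is invertible in ${\Bbb Z}_{(q)}$.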
\begin{Proposition}
\label{3.2}
\begin{enumerate}
\item If $G\in {\frak K}^{rtf}$, $x\in G\setminus\{0\}$, $Q\cup{\bf P}(x,G)
\subsetneqq{\bf P}^*$, $G^+$ is the group generated by $G,y,y_{p,\ell}$ ($\ell
<\omega$, $p\in Q$) freely, except for the equations of $G$ and
\[y_{p,0}=y,\quad py_{p,\ell+1}=y_{p,\ell}\quad \mbox{ and }\quad
y_{p,\ell}=z\mbox{ when } z\in G,p^\ell z=x\]
{\em then} $G^+\in {\frak K}^{rtf}$, $G\subseteq_{pr}G^+$ (pure extension).
\item If $G_i\in {\frak K}^{rtf}$ ($i<\alpha$) is $\subseteq_{pr}$-increasing
{\em then} $G_i\subseteq_{pr}\bigcup\limits_{j<\alpha}G_j\in{\frak K}^{rtf}$
for every $i<\alpha$.
\end{enumerate}
\end{Proposition}
The proof of the following lemma introduces a method quite central to this
paper.
\begin{Lemma}
\label{3.3}
Assume that
\begin{description}
\item[$(*)^1_\lambda$] $2^{\aleph_0}+\mu^+<\lambda={\rm cf}(\lambda)<
\mu^{\aleph_0}$,
\item[$(*)^2_\lambda$] for every $\chi<\lambda$, there is $S\subseteq
[\chi]^{\le\aleph_0}$, such that:
\begin{description}
\item[(i)] $|S|<\lambda$,
\item[(ii)] if $D$ is a non-principal ultrafilter on $\omega$ and $f:D
\longrightarrow\chi$ {\em then} for some $a\in S$ we have
\[\bigcap \{X\in D:f(X)\in a\}\notin D.\]
\end{description}
\end{description}
{\em Then}
\begin{description}
\item[($\alpha$)] in ${\frak K}^{rtf}_\lambda$ there is no universal
member (under usual embeddings (i.e. not necessarily pure)),
\item[($\beta$)] moreover, \underbar{for any} $G_i\in {\frak
K}^{rtf}_\lambda$, for $i<i^*<\mu^{\aleph_0}$, \underbar{there is} $G\in
{\frak K}^{rtf}_\lambda$ not embeddable into any one of the $G_i$'s.
\end{description}
\end{Lemma}
Before we prove \ref{3.3} we consider the assumptions of \ref{3.3} in
\ref{3.4}, \ref{3.5}.
\begin{Claim}
\label{3.4}
\begin{enumerate}
\item In \ref{3.3} we can replace $(*)^1_\lambda$ by
\begin{description}
\item[$(**)^1_\lambda$ (i)] $2^{\aleph_0}<\mu<\lambda={\rm cf}(\lambda)<
\mu^{\aleph_0}$,
\item[\qquad(ii)] there is $\bar C=\langle C_\delta:\delta\in S^*\rangle$
such that $S^*$ is a stationary subset of $\lambda$, each $C_\delta$ is
a subset of $\delta$ with ${\rm otp}(C_\delta)$ divisible by $\mu$,
$C_\delta$ closed in $\sup(C_\delta)$ (which is normally $\delta$, but
not necessarily so) and
\[(\forall\alpha)[\alpha\in {\rm nacc}(C_\delta)\quad \Rightarrow\quad {\rm cf}(\alpha)
>2^{\aleph_0}]\]
(where ${\rm nacc}$ stands for ``non-accumulation points''),
and such that $\bar C$ guesses clubs of $\lambda$ (i.e. for every club $E$ of
$\lambda$, for some $\delta\in S^*$ we have $C_\delta\subseteq E$) and
$[\delta\in S^*\quad \Rightarrow\quad {\rm cf}(\delta)=\aleph_0]$.
\end{description}
\item In $(*)^1_\lambda$ and in $(*)^2_\lambda$, without loss of generality
$(\forall\theta<\mu)[\theta^{\aleph_0}<\mu]$ and ${\rm cf}(\mu)=\aleph_0$.
\end{enumerate}
\end{Claim}
\proof \ \ \ 1) This is what we actually use in the proof (see below).
\noindent 2) Replace $\mu$ by $\mu'=\min\{\mu_1:\mu^{\aleph_0}_1\ge\mu$
(equivalently $\mu^{\aleph_0}_1=\mu^{\aleph_0}$)$\}$.
\hfill$\square_{\ref{3.4}}$
Compare to, say, \cite{KjSh:447}, \cite{KjSh:455}; the new assumption is
$(*)^2_\lambda$; note that it is a very weak assumption, and in fact it may
well be that it always holds.
\begin{Claim}
\label{3.5}
Assume that $2^{\aleph_0}<\mu<\lambda<\mu^{\aleph_0}$ and $(\forall \theta<
\mu)[\theta^{\aleph_0}<\mu]$ (see \ref{3.4}(2)).
Then each of the following is a sufficient condition for $(*)^2_\lambda$:
\begin{description}
\item[($\alpha$)] $\lambda<\mu^{+\omega_1}$,
\item[($\beta$)] if ${\frak a}\subseteq{\rm Reg}\cap\lambda\setminus\mu$ and
$|{\frak a}|\le 2^{\aleph_0}$ then we can find $h:{\frak a}\longrightarrow
\omega$ such that:
\[\lambda>\sup\{\max{\rm pcf}({\frak b}):{\frak b}\subseteq {\frak a}\mbox{
countable, and $h\restriction {\frak b}$ constant}\}.\]
\end{description}
\end{Claim}
\proof Clause $(\alpha)$ implies Clause $(\beta)$: just use any
one-to-one function $h:{\rm Reg}\cap\lambda\setminus\mu\longrightarrow\omega$.
\smallskip
Clause $(\beta)$ implies (by \cite[\S6]{Sh:410} + \cite[\S2]{Sh:430}) that
for $\chi<\lambda$ there is $S\subseteq [\chi]^{\aleph_0}$, $|S|<\lambda$
such that for every $Y\subseteq\chi$, $|Y|=2^{\aleph_0}$, we can find
$Y_n$ such that $Y=\bigcup\limits_{n<\omega} Y_n$ and $[Y_n]^{\aleph_0}
\subseteq S$. (Remember: $\mu>2^{\aleph_0}$.) Without loss of generality
(as $2^{\aleph_0} < \mu < \lambda$):
\begin{description}
\item[$(*)$] $S$ is downward closed.
\end{description}
So if $D$ is a non-principal ultrafilter on $\omega$ and $f:D\longrightarrow
\chi$ then letting $Y={\rm Rang}(f)$ we can find $\langle Y_n:n<\omega\rangle$ as
above. Let $h:D\longrightarrow\omega$ be defined by $h(A)=\min\{n:f(A)\in
Y_n\}$. So
\[X\subseteq D\ \ \&\ \ |X|\le\aleph_0\ \ \&\ \ h\restriction X
\mbox{ constant }\Rightarrow\ f''(X)\in S\quad\mbox{(remember
$(*)$)}.\]
Now for each $n$, for some countable $X_n\subseteq D$ (possibly finite or
even empty) we have:
\[h \restriction X_n\ \mbox{ is constantly } n,\]
\[\ell <\omega \ \&\ (\exists A\in D)(h(A)=n\ \&\ \ell\notin A)
\Rightarrow (\exists B\in X_n)(\ell\notin B).\]
Let $A_n=:\bigcap\{A:A\in X_n\}=\bigcap\{A:A\in D\mbox{ and } h(A)=n\}$.
If the desired conclusion fails, then $\bigwedge\limits_{n<\omega}A_n\in D$.
So
\[(\forall A)[A\in D\quad \Leftrightarrow\quad\bigvee_{n<\omega} A\supseteq
A_n].\]
So $D$ is generated by $\{A_n:n<\omega\}$ but then $D$ cannot be a
non-principal ultrafilter.
\hfill$\square_{\ref{3.5}}$
\begin{Remark}
The case when $D$ is a principal ultrafilter is trivial.
\end{Remark}
\proof of Lemma \ref{3.3} Let $\bar C=\langle C_\delta:\delta\in S^*\rangle$
be as in $(**)^1_\lambda$ (ii) from \ref{3.4} (for \ref{3.4}(1) its
existence is obvious; for \ref{3.3} use \cite[VI, old III 7.8]{Sh:e}).
Suppose that $\bar A=\langle A_\delta:\delta\in S^*\rangle$, where $A_\delta
\subseteq{\rm nacc}(C_\delta)$ has order type $\omega$ (such $A_\delta$'s will
be chosen later), and let $\eta_\delta$ enumerate $A_\delta$ increasingly.
$G_0$ be freely generated by $\{x_i:i<\lambda\}$.
Let $R$ be
\[\begin{ALIGN}
\big\{\bar a: &\bar a=\langle a_n:n<\omega\rangle\mbox{ is a sequence of
pairwise disjoint subsets of } {\bf P}^*,\\
&\mbox{with union }{\bf P}^* \mbox{ for simplicity, such that}\\
&\mbox{for infinitely many }n,\ a_n\ne\emptyset\big\}.
\end{ALIGN}\]
Let $G$ be a group generated by
\[G_0 \cup \{y^{\alpha,n}_{\bar a},z^{\alpha,n}_{\bar a,p}:\ \alpha<\lambda,
\ \bar a\in R,\ n<\omega,\ p \mbox{ prime}\}\]
freely except for:
\begin{description}
\item[(a)] the equations of $G_0$,
\item[(b)] $pz^{\alpha,n+1}_{\bar a,p}=z^{\alpha,n}_{\bar a,p}$ when
$p\in a_n$, $\alpha<\lambda$,
\item[(c)] $z^{\delta,0}_{\bar a,p}=y^{\delta,n}_{\bar a}-
x_{\eta_\delta(n)}$ when $p\in a_n$ and $\delta\in S^*$.
\end{description}
Now $G\in {\frak K}^{rtf}_\lambda$ by inspection.
\medskip
\noindent Before continuing the proof of \ref{3.3} we present a definition
and some facts.
\begin{Definition}
\label{3.7}
For a representation $\bar H$ of $H\in {\frak K}^{rtf}_\lambda$, and $x\in H$,
$\delta\in S^*$ let
\begin{enumerate}
\item ${\rm inv}(x,C_\delta;\bar H)=:\{\alpha\in C_\delta:$ for some $Q\subseteq
{\bf P}^*$, there is $y\in H_{\min[C_\delta\setminus(\alpha+1)]}$ such that
$Q\subseteq{\bf P}(x-y,H)$ but for no $y\in H_\alpha$ is $Q\subseteq{\bf P}(x-y,H)\}$
(so ${\rm inv}(x,C_\delta;\bar H)$ is a subset of $C_\delta$ of cardinality $\le
2^{\aleph_0}$).
\item ${\rm Inv}^0(C_\delta,\bar H)=:\{{\rm inv}(x,C_\delta;\bar H):x\in
\bigcup\limits_i H_i\}$.
\item ${\rm Inv}^1(C_\delta,\bar H)=:\{a:a\subseteq C_\delta$ countable and for
some $x\in H$, $a\subseteq{\rm inv}(x,C_\delta;\bar H)\}$.
\item ${\rm INv}^\ell(\bar H,\bar C)=:{\rm Inv}^\ell(H,\bar H,\bar C)=:\langle
{\rm Inv}^\ell(C_\delta;\bar H):\delta\in S^*\rangle$ for $\ell\in\{0,1\}$.
\item ${\rm INV}^\ell(H,\bar C)=:{\rm INv}^\ell(H,\bar H,\bar C)/{\rm id}^a(\bar C)$,
where
\[{\rm id}^a(\bar C)=:\{T\subseteq\lambda:\mbox{ for some club $E$ of
$\lambda$ for no $\delta\in T$ is }C_\delta\subseteq E\}.\]
\item If $\ell$ is omitted, $\ell = 0$ is understood.
\end{enumerate}
\end{Definition}
\begin{Fact}
\label{3.8}
\begin{enumerate}
\item ${\rm INV}^\ell(H,\bar C)$ is well defined.
\item The $\delta$-th component of ${\rm INv}^\ell(\bar H,\bar C)$ is a family
of $\le\lambda$ subsets of $C_\delta$ each of cardinality $\le 2^{\aleph_0}$
and if $\ell=1$ each member is countable and the family is closed under
subsets.
\item {\em If} $G_i\in{\frak K}^{rtf}_\lambda$ for $i<i^*$, $i^*<
\mu^{\aleph_0}$, $\bar G^i=\langle\bar G_{i,\alpha}:\alpha<\lambda\rangle$
is a representation of $G_i$,
{\em then} we can find $A_\delta\subseteq{\rm nacc}(C_\delta)$ of order type
$\omega$ such that: $i<i^*$, $\delta\in S^*\qquad \Rightarrow$\qquad for
no $a$ in the $\delta$-th component of ${\rm INv}^\ell(G_i,\bar G^i,\bar C)$ do
we have $|a \cap A_\delta|\ge\aleph_0$.
\end{enumerate}
\end{Fact}
\proof Straightforward. (For (3) note ${\rm otp}(C_\delta)\ge\mu$, so there
are $\mu^{\aleph_0}>\lambda$ pairwise almost disjoint subsets of
$C_\delta$ each
of cardinality $\aleph_0$ and every $A\in{\rm Inv}(C_\delta,\bar G^i)$
disqualifies at most $2^{\aleph_0}$ of them.)
\hfill$\square_{\ref{3.8}}$
\begin{Fact}
\label{3.9}
Let $G$ be as constructed above for $\langle A_\delta:\delta\in
S^*\rangle,A_\delta\subseteq{\rm nacc}(C_\delta)$, ${\rm otp}(A_\delta)=\omega$
(where $\langle A_\delta:\delta\in S^*\rangle$ are chosen as in \ref{3.8}(3)
for the sequence $\langle G_i:i<i^* \rangle$ given for proving
\ref{3.3}, see $(\beta)$ there).
\noindent Assume $G \subseteq H\in {\frak K}^{rtf}_\lambda$ and $\bar H$ is a
filtration of $H$. {\em Then}
\[B=:\big\{\delta\in S^*:A_\delta\mbox{ has infinite intersection with some }
a\in{\rm Inv}(C_\delta,\bar H)\big\}=\lambda\ \mod\ {\rm id}^a(\bar C).\]
\end{Fact}
\proof We assume otherwise and derive a contradiction. For $\alpha
<\lambda$ let $S_\alpha\subseteq [\alpha]^{\le \aleph_0}$, $|S_\alpha|<\lambda$,
be as guaranteed by $(*)^2_\lambda$.
Let $\chi>2^\lambda$, ${\frak A}_\alpha\prec (H(\chi),\in,<^*_\chi)$
for $\alpha<\lambda$ increasing continuous, $\|{\frak A}_\alpha\|<\lambda$,
$\langle {\frak A}_\beta:\beta\le\alpha\rangle\in {\frak A}_{\alpha+1}$,
${\frak A}_\alpha\cap\lambda$ an ordinal and:
\[\langle S_\alpha:\alpha<\lambda\rangle,\ G,\ H,\ \bar C,\ \langle
A_\delta:\delta\in S^* \rangle,\ \bar H,\ \langle x_i, y^\delta_{\bar a},
z^{\delta,n}_{\bar a,p}:\;i,\delta,\bar a,n,p \rangle\]
all belong to ${\frak A}_0$ and $2^{\aleph_0}+1\subseteq {\frak A}_0$.
Then $E=\{\delta<\lambda:{\frak A}_\delta \cap\lambda=\delta\}$ is a club
of $\lambda$. Choose $\delta\in S^* \cap E\setminus B$ such that $C_\delta
\subseteq E$. (Why can we? Because all nonstationary subsets of $\lambda$
belong to ${\rm id}^a(\bar C)$, in particular $\lambda\setminus E$, and also
$\lambda\setminus S^*$ and $B$ belong to it, while $\lambda\notin {\rm id}^a(\bar
C)$.) Remember that $\eta_\delta$ enumerates $A_\delta$ (in
increasing order). For each $\alpha\in A_\delta$ (so $\alpha\in E$
hence ${\frak A}_\alpha \cap \lambda=\alpha$ but $\bar H\in {\frak
A}_\alpha$ hence $H\cap {\frak A}_\alpha= H_\alpha$) and
$Q\subseteq{\bf P}^*$ choose, if possible, $y_{\alpha,Q}\in H_\alpha$ such that:
\[Q\subseteq{\bf P}(x_\alpha-y_{\alpha,Q},H).\]
Let $I_\alpha=:\{Q\subseteq{\bf P}^*:y_{\alpha,Q}$ well defined$\}$. Note (see
\ref{3.4} $(**)^1_\lambda$ and remember $\eta_\delta(n)\in A_\delta\subseteq
{\rm nacc}(C_\delta)$) that ${\rm cf}(\alpha)>2^{\aleph_0}$ (by (ii) of
\ref{3.4} $(**)^1_\lambda$) and hence for some $\beta_\alpha<\alpha$,
\[\{ y_{\alpha,Q}:Q\in I_\alpha\}\subseteq H_{\beta_\alpha}.\]
Now:
\begin{description}
\item[$\otimes_1$] $I_\alpha$ is a downward closed family of subsets of
${\bf P}^*$, and ${\bf P}^*\notin I_\alpha$, for $\alpha
\in A_\delta$.
\end{description}
[Why? For the first assertion see the definition; for the second note that
$H$ is reduced.]
\begin{description}
\item[$\otimes_2$] $I_\alpha$ is closed under unions of two members (hence
is an ideal on ${\bf P}^*$).
\end{description}
[Why? If $Q_1,Q_2\in I_\alpha$ then (as $x_\alpha\in G\subseteq H$ witnesses
this):
\[\begin{ALIGN}
({\cal H}(\chi),\in,<^*_\chi)\models &(\exists x)(x\in H\ \&\
Q_1\subseteq{\bf P}(x- y_{\alpha,Q_1},H)\ \&\\
&Q_2\subseteq{\bf P}(x-y_{\alpha,Q_2},H)).
\end{ALIGN}\]
All the parameters are in ${\frak A}_\alpha$ so there is $y\in
{\frak A}_\alpha\cap H$ such that
\[Q_1\subseteq{\bf P}(y-y_{\alpha,Q_1},H)\quad\mbox{ and }\quad Q_2\subseteq
{\bf P}(y-y_{\alpha,Q_2},H).\]
By algebraic manipulations (as $x_\alpha-y=(x_\alpha-y_{\alpha,Q_1})-(y-y_{\alpha,Q_1})$),
\[Q_1\subseteq {\bf P}(x_\alpha-y_{\alpha,Q_1},H),\ Q_1\subseteq{\bf P}(y-y_{\alpha,
Q_1},H)\quad\Rightarrow\quad Q_1\subseteq{\bf P}(x_\alpha-y,H);\]
similarly for $Q_2$. So $Q_1\cup Q_2\subseteq{\bf P}(x_\alpha-y,H)$ and hence
(as $y\in{\frak A}_\alpha\cap H=H_\alpha$) $Q_1\cup Q_2\in I_\alpha$.]
\begin{description}
\item[$\otimes_3$] If $\Gamma\subseteq\omega$ is infinite and $\bar Q=\langle
Q_n:n\in\Gamma\rangle$ is a sequence of pairwise disjoint subsets of ${\bf P}^*$,
then for some $n\in\Gamma$ we have $Q_n\in I_{\eta_\delta(n)}$.
\end{description}
[Why? Otherwise let $a_n$ be $Q_n$ if $n\in \Gamma$, and $\emptyset$
if $n\in \omega\setminus \Gamma$, and let $\bar a=\langle a_n: n<
\omega\rangle$. Now
$n\in\Gamma\quad\Rightarrow\quad\eta_\delta(n)\in
{\rm inv}(y^{\delta, 0}_{\bar a},C_\delta;\bar H)$ and hence
\[A_\delta\cap{\rm inv}(y^{\delta, 0}_{\bar a},C_\delta;\bar
H)\supseteq\{\eta_\delta(n):n\in \Gamma\},\]
which is infinite, contradicting the choice of $A_\delta$.]
\begin{description}
\item[$\otimes_4$] for all but finitely many $n$ the Boolean algebra
${\cal P}({\bf P}^*)/I_{\eta_\delta(n)}$ is finite.
\end{description}
[Why? If not, then by the second part of $\otimes_1$, for each $n$ there are
infinitely many non-principal ultrafilters $D$ on ${\bf P}^*$ disjoint to
$I_{\eta_\delta(n)}$, so for $n<\omega$ we can find an ultrafilter $D_n$
on ${\bf P}^*$ disjoint to $I_{\eta_\delta(n)}$, distinct from $D_m$ for
$m<n$. Thus we can find $\Gamma\in [\omega]^{\aleph_0}$ and $Q_n\in D_n$
for $n\in\Gamma$ such that $\langle Q_n:n\in\Gamma\rangle$ are pairwise
disjoint (as $Q_n\in D_n$ clearly $|Q_n|=\aleph_0$). Why? Look: if $B_n
\in D_0\setminus D_1$ for $n\in\omega$ then
\[(\exists^\infty n)(B_n \in D_n)\quad\mbox{ or }\quad(\exists^\infty n)
({\bf P}^*\setminus B_n \in D_n),\]
etc. Let $Q_n=\emptyset$ for $n\in\omega\setminus\Gamma$, now $\bar Q=
\langle Q_n:n<\omega\rangle$ contradicts $\otimes_3$.]
\begin{description}
\item[$\otimes_5$] If the conclusion (of \ref{3.9}) fails, then for no
$\alpha\in A_\delta$ is ${\cal P}({\bf P}^*)/I_\alpha$ finite.
\end{description}
[Why? If not, choose such an $\alpha$ and $Q^*\subseteq{\bf P}^*$, $Q^*
\notin I_\alpha$ such that $I=I_\alpha\restriction Q^*$ is a maximal
ideal on $Q^*$. So $D=:{\cal P}(Q^*)\setminus I$ is a non-principal
ultrafilter. Remember $\beta=\beta_\alpha<\alpha$ is such that
$\{y_{\alpha,Q}:Q\in I_\alpha\}\subseteq H_\beta$. Now, $H_\beta\in
{\frak A}_{\beta+1}$, $|H_\beta|<\lambda$. Hence $(*)^2_\lambda$ from
\ref{3.3} (note that it does not matter whether we consider an ordinal
$\chi<\lambda$ or a cardinal $\chi<\lambda$, or any other set of
cardinality $< \lambda$) implies that there is $S_{H_\beta}\in
{\frak A}_{\beta+1}$, $S_{H_\beta}\subseteq [H_\beta]^{\le \aleph_0}$,
$|S_{H_\beta}|<\lambda$ as there. Now it does not matter if we deal with
functions from an ultrafilter on $\omega$ \underbar{or} an ultrafilter on
$Q^*$. We define $f:D\longrightarrow H_\beta$ as follows: for $U\in D$ we
let $f(U)=y_{\alpha,Q^* \setminus U}$. (Note: $Q^*\setminus U\in I_\alpha$,
hence $y_{\alpha,Q^* \setminus U}$ is well defined.) So, by the choice of
$S_{H_\beta}$ (see (ii) of $(*)^2_\lambda$), for some countable $f'
\subseteq f$, $f'\in {\frak A}_{\beta+1}$ and $\bigcap\{U:U\in{\rm Dom}(f')\}
\notin D$ (reflect for a minute). Let ${\rm Dom}(f')=\{U_0,U_1,\ldots\}$.
Then $\bigcup\limits_{n<\omega}(Q^*\setminus U_n)\notin I_\alpha$. But, as
in the proof of $\otimes_2$, since
\[\langle y_{\alpha,Q^* \setminus U_n}:n<\omega\rangle\in
{\frak A}_{\beta+1}\subseteq {\frak A}_\alpha,\]
we have $\bigcup\limits_{n<\omega}(Q^*\setminus U_n)\in I_\alpha$, an
easy contradiction.]
Now $\otimes_4$, $\otimes_5$ together give a contradiction, which proves Fact
\ref{3.9}. Finally, Fact \ref{3.9} together with the choice of the $A_\delta$'s in
\ref{3.8}(3) shows that $G$ cannot be embedded into any of the $G_i$'s, which gives
clause $(\beta)$ (and hence clause $(\alpha)$) of \ref{3.3}.
\hfill$\square_{\ref{3.3}}$
\begin{Remark}
\label{3.10}
We can deal similarly with $R$-modules, $|R|<\mu$, \underbar{if} $R$ has
infinitely many prime ideals. Also the treatment of
${\frak K}^{rs(p)}_\lambda$ is similar to the one for modules over
rings with one prime.
\noindent Note: if we replace ``reduced'' by
\[x\in G \setminus\{0\}\quad \Rightarrow\quad (\exists p\in{\bf P}^*)(x\notin
pG)\]
then here we could have defined
\[{\bf P}(x,H)=:\{p\in {\bf P}^*:x\in pH\}\]
and the proof would go through with no difference (e.g. choose a fixed
partition $\langle {\bf P}^*_n: n< \omega\rangle$ of ${\bf P}^*$ into infinite
sets, and let ${\bf P}'(x, H)=\{n: x\in pH\mbox{ for every }p\in
{\bf P}^*_n\}$). Now the groups are less divisible.
\end{Remark}
\begin{Remark}
\label{3.11}
We can get that the groups are slender; in fact, the construction already gives this.
\end{Remark}
\section{Below the continuum there may be universal structures}
Both in \cite{Sh:456} (where we deal with universality for $(<\lambda)$-stable
(Abelian) groups, like ${\frak K}^{rs(p)}_\lambda$) and in \S3, we restrict
ourselves to $\lambda>2^{\aleph_0}$, a restriction which does not appear
in \cite{KjSh:447}, \cite{KjSh:455}. Is this restriction necessary? In this
section we shall show that at least to some extent, it is.
We first show under MA that for $\lambda<2^{\aleph_0}$, any $G\in
{\frak K}^{rs(p)}_\lambda$ can be embedded into a ``nice'' one; our aim
is to reduce the consistency of ``there is a universal in
${\frak K}^{rs(p)}_\lambda$'' to ``there is a universal in
${\frak K}^{tr}_{\langle\aleph_0:n<\omega\rangle\char 94\langle \lambda
\rangle}$''. Then we proceed to
prove the consistency of the latter. Actually a weak form of MA suffices.
\begin{Definition}
\label{4.2}
\begin{enumerate}
\item $G\in {\frak K}^{rs(p)}_\lambda$ is {\em tree-like} if:
\begin{description}
\item[(a)] we can find a basic subgroup $B=
\bigoplus\limits_{\scriptstyle i<\lambda_n\atop\scriptstyle n<\omega}
{\Bbb Z} x^n_i$, where
\[\lambda_n=\lambda_n(G)=:\dim\left((p^nG)[p]/(p^{n+1}G)[p]\right)\]
(see Fuchs \cite{Fu}) such that: ${\Bbb Z} x^n_i \cong {\Bbb Z}/p^{n+1}
{\Bbb Z}$ and
\begin{description}
\item[$\otimes_0$] every $x\in G$ has, for some $k<\omega$, the form
\[\sum\{a^n_i p^{n-k} x^n_i:n\in [k,\omega)\mbox{ and }
i<\lambda_n\}\]
where $a^n_i\in{\Bbb Z}$ and
\[n<\omega\quad \Rightarrow\quad w_n[x]=:\{i:a^n_i\, p^{n-k}x^n_i\neq 0\}
\mbox{ is finite}\]
\end{description}
(this applies to any $G\in {\frak K}^{rs(p)}_\lambda$ we considered so far;
we write $w_n[x]=w_n[x,\bar Y]$ when $\bar Y=\langle x^n_i:n,i\rangle$).
Moreover
\item[(b)] $\bar Y=\langle x^n_i:n,i\rangle$ is tree-like inside $G$,
which means
that we can find $F_n:\lambda_{n+1}\longrightarrow\lambda_n$ such that
letting $\bar F=\langle F_n:n<\omega\rangle$, $G$ is generated by some
subset of
$\Gamma(G,\bar Y,\bar F)$ where:
\[\hspace{-0.5cm}\begin{ALIGN}
\Gamma(G,\bar Y,\bar F)=\big\{x:&\mbox{for some }\eta\in\prod\limits_{n
<\omega}\lambda_n, \mbox{ for each } n<\omega \mbox{ we have}\\
&F_n(\eta(n+1))=\eta(n)\mbox{ and }x=\sum\limits_{n\ge k}
p^{n-k}x^n_{\eta(n)}\big\}.
\end{ALIGN}\]
\end{description}
\item $G\in {\frak K}^{rs(p)}_\lambda$ is {\em semi-tree-like} if above
we replace (b) by
\begin{description}
\item[(b)$'$] we can find a set $\Gamma\subseteq\{\eta:\eta$ is a partial
function from $\omega$ to $\sup\limits_{n<\omega} \lambda_n$ with
$\eta(n)< \lambda_n\}$ such that:
\begin{description}
\item[($\alpha$)] $\eta_1\in\Gamma,\ \eta_2\in\Gamma,\ \eta_1(n)=\eta_2(n)
\quad \Rightarrow\quad\eta_1\restriction n=\eta_2\restriction n$,
\item[($\beta$)] for $\eta\in\Gamma$ and $n\in{\rm Dom}(\eta)$, there is
\[y_{\eta,n}=\sum \{p^{m-n}x^m_{\eta(m)}:m \in {\rm Dom}(\eta)\mbox{ and }
m \ge n\}\in G,\]
\item[($\gamma$)] $G$ is generated by
\[\{x^n_i:n<\omega,i<\lambda_n\}\cup\{y_{\eta,n}:\eta\in\Gamma,n\in
{\rm Dom}(\eta)\}.\]
\end{description}
\end{description}
\item $G\in {\frak K}^{rs(p)}_\lambda$ is {\em almost tree-like} if
in (b)$'$ we add
\begin{description}
\item[($\delta$)] for some $A\subseteq\omega$ for every $\eta\in\Gamma$,
${\rm Dom}(\eta)=A$.
\end{description}
\end{enumerate}
\end{Definition}
\begin{Proposition}
\label{4.3}
\begin{enumerate}
\item Suppose $G\in {\frak K}^{rs(p)}_\lambda$ is almost tree-like, as
witnessed by $A\subseteq\omega$, $\lambda_n$ (for $n<\omega$), $x^n_i$ (for
$n\in A$, $i<\lambda_n$), and if $n_0<n_2$ are successive members of $A$,
$n_0<n<n_2$ then $\lambda_n\ge\lambda_{n_0}$ or just
\[\lambda_n\ge|\{\eta(n_0):\eta\in\Gamma\}|.\]
{\em Then} $G$ is tree-like (possibly with other witnesses).
\item If in \ref{4.2}(3) we just demand $\eta\in\Gamma\quad\Rightarrow
\quad\bigvee\limits_{n<\omega}{\rm Dom}(\eta)\setminus n=A\setminus n$;
then changing the $\eta$'s and the $y_{\eta,n}$'s we can regain the
``almost tree-like".
\end{enumerate}
\end{Proposition}
\proof 1) For every pair of successive members $n_0<n_2$ of $A$ and every
\[\alpha\in S_{n_0}=:\{\alpha:(\exists\eta)[\eta\in\Gamma\ \&\ \eta(n_0)
=\alpha]\},\]
choose ordinals $\gamma(n_0,\alpha,\ell)$ for $\ell\in (n_0,n_2)$ such that
\[\gamma(n_0,\alpha_1,\ell)=\gamma(n_0,\alpha_2,\ell)\quad\Rightarrow
\quad\alpha_1=\alpha_2.\]
We change the basis by replacing, for $\alpha\in S_{n_0}$, the set $\{x^{n_0}_\alpha\}\cup
\{x^\ell_{\gamma(n_0,\alpha,\ell)}:\ell\in (n_0,n_2)\}$ (note: $n_0<n_2$
but possibly $n_0+1=n_2$) by:
\[\begin{ALIGN}
\biggl\{ x^{n_0}_\alpha + px^{n_0+1}_{\gamma(n_0,\alpha,n_0+1)},
&x^{n_0+1}_{\gamma(n_0,\alpha,n_0+1)} +
px^{n_0+2}_{\gamma(n_0,\alpha,n_0+2)},\ldots, \\
&x^{n_2-2}_{\gamma(n_0,\alpha,n_2-2)} +
px^{n_2-1}_{\gamma(n_0,\alpha,n_2-1)},
x^{n_2-1}_{\gamma(n_0,\alpha,n_2-1)} \biggr\}.
\end{ALIGN}\]
2) For $\eta\in \Gamma$ let $n(\eta)=\min\{ n: n\in A\cap{\rm Dom}(\eta)$
and ${\rm Dom}(\eta)\setminus n=A\setminus n\}$, and let
$\Gamma_n=\{\eta\in \Gamma: n(\eta)=n\}$ for $n\in A$. We choose by
induction on $n< \omega$ the objects $\nu_\eta$ for $\eta\in \Gamma_n$
and $\rho^n_\alpha$ for $\alpha< \lambda_n$ such that: $\nu_\eta$ is a
function with domain $A$, $\nu_\eta\restriction (A\setminus
n(\eta))=\eta\restriction (A\setminus n(\eta))$ and
$\nu_\eta\restriction (A\cap n(\eta))= \rho^n_{\eta(n)}$,
$\nu_\eta(n)< \lambda_n$ and $\rho^n_\alpha$ is a function with domain
$A\cap n$, $\rho^n_\alpha(\ell)< \lambda_\ell$ and $\rho^n_\alpha
\restriction (A\cap \ell) = \rho^\ell_{\rho^n_\alpha(\ell)}$ for
$\ell\in A\cap n$. There are no problems and $\{\nu_\eta: \eta\in
\Gamma_n\}$ is as required.
\hfill$\square_{\ref{4.3}}$
\begin{Theorem}[MA]
\label{4.1}
Let $\lambda<2^{\aleph_0}$. Any $G\in {\frak K}^{rs(p)}_\lambda$ can be
embedded into some $G'\in {\frak K}^{rs(p)}_\lambda$ with countable
density which is tree-like.
\end{Theorem}
\proof By \ref{4.3} it suffices to get $G'$ ``almost tree-like'' together with
$A\subseteq\omega$ satisfying \ref{4.3}(1). The ability to make $A$
thin helps in proving Fact E below. By \ref{1.1}, without loss of generality
$G$ has a base (i.e. a dense subgroup) of the form
$B=\bigoplus\limits_{\scriptstyle n<\omega\atop\scriptstyle i<\lambda_n}
{\Bbb Z} x^n_i$, where ${\Bbb Z} x^n_i\cong{\Bbb Z}/p^{n+1}{\Bbb Z}$ and
$\lambda_n=\aleph_0$ (in fact $\lambda_n$ can be $g(n)$ for any unbounded $g\in
{}^\omega\omega$, by algebraic manipulations; this will be
useful if we consider the forcing from \cite[\S2]{Sh:326}).
Let $B^+$ be the extension of $B$ by $y^{n,k}_i$ ($k<\omega$, $n<\omega$,
$i<\lambda_n$) generated freely except for $py^{n,k+1}_i=y^{n,k}_i$ (for
$k<\omega$), $y^{n,\ell}_i=p^{n-\ell}x^n_i$ for $\ell\le n$, $n<\omega$,
$i<\lambda_n$. So $B^+$ is a divisible $p$-group; let $G^+ =:
B^+\bigoplus\limits_B
G$. Let $\{z^0_\alpha:\alpha<\lambda\}\subseteq G[p]$ be a basis of $G[p]$
over $\{p^n x^n_i:n,i<\omega\}$ (as a vector space over ${\Bbb Z}/p{\Bbb
Z}$; i.e. the two sets are disjoint and their union is a basis); remember
$G[p]=\{x\in G:px=0\}$. So we can find $z^k_\alpha\in G$ (for $\alpha<
\lambda$, $k<\omega$ and $k\ne 0$) such that
\[pz^{k+1}_\alpha-z^k_\alpha=\sum_{i\in w(\alpha,k)} a^{k,\alpha}_i
x^k_i,\]
where $w(\alpha,k)\subseteq\omega$ is finite (reflect on the Abelian group
theory).
We define a forcing notion $P$ as follows: a condition $p \in P$ consists
of (in brackets are explanations of intentions):
\begin{description}
\item[(a)] $m<\omega$, $M\subseteq m$,
\end{description}
[$M$ is intended as $A\cap\{0,\ldots,m-1\}$]
\begin{description}
\item[(b)] a finite $u\subseteq m\times\omega$ and $h:u\longrightarrow
\omega$ such that $h(n,i)\ge n$,
\end{description}
[our extensions will not be pure, but still we want the group produced
to be reduced; now we add some $y^{n,k}_i$'s, and $h$ tells us how many]
\begin{description}
\item[(c)] a subgroup $K$ of $B^+$:
\[K=\langle y^{n,k}_i:(n,i)\in u,k<h(n,i)\rangle_{B^+},\]
\item[(d)] a finite $w\subseteq\lambda$,
\end{description}
[$w$ is the set of $\alpha<\lambda$ on which we give information]
\begin{description}
\item[(e)] $g:w\rightarrow m + 1$,
\end{description}
[$g(\alpha)$ tells at which level $m'\le m$ we ``start to think'' about $\alpha$]
\begin{description}
\item[(f)] $\bar\eta=\langle\eta_\alpha:\alpha\in w\rangle$ (see (i)),
\end{description}
[of course, $\eta_\alpha$ is the intended $\eta_\alpha$ restricted to $m$ and
the set of all $\eta_\alpha$ forms the intended $\Gamma$]
\begin{description}
\item[(g)] a finite $v\subseteq m\times\omega$,
\end{description}
[this approximates the set of indices of the new basis]
\begin{description}
\item[(h)] $\bar t=\{t_{n,i}:(n,i)\in v\}$ (see (j)),
\end{description}
[approximates the new basis]
\begin{description}
\item[(i)] $\eta_\alpha\in {}^M\omega$, $\bigwedge\limits_{\alpha\in w}
\bigwedge\limits_{n\in M} (n,\eta_\alpha(n))\in v$,
\end{description}
[toward guaranteeing clause $(\delta)$ of \ref{4.2}(3) (see \ref{4.3}(2))]
\begin{description}
\item[(j)] $t_{n,i}\in K$ and ${\Bbb Z} t_{n,i} \cong {\Bbb Z}/p^n
{\Bbb Z}$,
\item[(k)] $K=\bigoplus\limits_{(n,i)\in v} ({\Bbb Z} t_{n,i})$,
\end{description}
[so $K$ is an approximation to the new basic subgroup]
\begin{description}
\item[(l)] if $\alpha\in w$, $g(\alpha)\le\ell\le m$ and $\ell\in M$ then
\[z^\ell_\alpha-\sum\{p^{n-\ell}\,t_{n,\eta_\alpha(n)}:\ell\le n\in
{\rm Dom}(\eta_\alpha)\}\in p^{m-\ell}(K+G),\]
\end{description}
[this is a step toward guaranteeing that the full difference (when
${\rm Dom}(\eta_\alpha)$ is possibly infinite) will be in the closure of
$\bigoplus\limits_{\scriptstyle n\in [i,\omega)\atop\scriptstyle
i<\omega} {\Bbb Z} x^n_i$].
We define the order by:
\noindent $p \le q$ \qquad if and only if
\begin{description}
\item[$(\alpha)$] $m^p\le m^q$, $M^q \cap m^p = M^p$,
\item[$(\beta)$] $u^p\subseteq u^q$, $h^p\subseteq h^q$,
\item[$(\gamma)$] $K^p\subseteq_{pr} K^q$,
\item[$(\delta)$] $w^p\subseteq w^q$,
\item[$(\varepsilon)$] $g^p\subseteq g^q$,
\item[$(\zeta)$] $\eta^p_\alpha\trianglelefteq\eta^q_\alpha$ for $\alpha\in w^p$
(i.e. $\eta^p_\alpha$ is an initial segment of $\eta^q_\alpha$),
\item[$(\eta)$] $v^p\subseteq v^q$,
\item[$(\theta)$] $t^p_{n,i}=t^q_{n,i}$ for $(n,i)\in v^p$.
\end{description}
\medskip
\noindent{\bf A Fact}\hspace{0.15in} $(P,\le)$ is a partial order.
\medskip
\noindent{\em Proof of the Fact:}\ \ \ Trivial.
\medskip
\noindent{\bf B Fact}\hspace{0.15in} $P$ satisfies the c.c.c. (it is even
$\sigma$-centered).
\medskip
\noindent{\em Proof of the Fact:}\ \ \ It suffices to observe the following.
Suppose that
\begin{description}
\item[$(*)$(i)] $p,q \in P$,
\item[\quad(ii)] $M^p=M^q$, $m^p=m^q$, $h^p=h^q$, $u^p=u^q$, $K^p=K^q$,
$v^p=v^q$, $t^p_{n,i}=t^q_{n,i}$,
\item[\quad(iii)] $\langle\eta^p_\alpha:\alpha\in w^p\cap w^q\rangle =
\langle\eta^q_\alpha:\alpha\in w^p\cap w^q\rangle$,
\item[\quad(iv)] $g^p\restriction (w^p \cap w^q)=g^q \restriction(w^p
\cap w^q)$.
\end{description}
Then the conditions $p,q$ are compatible (in fact have an upper bound with
the same common parts): take the common values (in (ii)) or the union (for
(iii)).
\medskip
\noindent{\bf C Fact}\hspace{0.15in} For each $\alpha<\lambda$ the set
${\cal I}_\alpha=:\{p\in P:\alpha\in w^p\}$ is dense (and open).
\medskip
\noindent{\em Proof of the Fact:}\ \ \ For $p\in P$ let $q$ be like $p$
except that:
\[w^q=w^p\cup\{\alpha\}\quad\mbox{ and }\quad g^q(\beta)=\left\{
\begin{array}{lll}
g^p(\beta) &\mbox{if}& \beta\in w^p \\
m^p &\mbox{if}& \beta=\alpha,\ \beta\notin w^p.
\end{array}\right.\]
\medskip
\noindent{\bf D Fact}\hspace{0.15in} For $n<\omega$, $i<\omega$ the
following set is a dense subset of $P$:
\[{\cal J}^*_{(n,i)}=\{p\in P:x^n_i\in K^p\ \&\ (\forall k<m^p)\,[(\{k\}\times
\omega)\cap u^p \mbox{ has }>m^p\mbox{ elements}]\}.\]
\medskip
\noindent{\em Proof of the Fact:}\ \ \ Should be clear.
\medskip
\noindent{\bf E Fact}\hspace{0.15in} For each $m<\omega$ the set ${\cal J}_m
=:\{p\in P:m^p\ge m\}$ is dense in $P$.
\medskip
\noindent{\em Proof of the Fact:}\ \ \ Let $p\in P$ be given such that $m^p
<m$. Let $w^p=\{\alpha_0,\ldots,\alpha_{r-1}\}$ be without repetitions;
we know that in $G$, $pz^0_{\alpha_\ell}=0$ and $\{z^0_{\alpha_\ell}:
\ell<r\}$ is independent $\mod\ B$, hence also in $K+G$ the set
$\{z^0_{\alpha_\ell}:\ell<r\}$ is independent $\mod\ K$. Clearly
\begin{description}
\item[(A)] $pz^{k+1}_{\alpha_\ell}=z^k_{\alpha_\ell}\mod\ K$
for $k\in [g(\alpha_\ell),m^p)$, hence
\item[(B)] $p^{m^p}z^{m^p}_{\alpha_\ell}=z^{g(\alpha_\ell)}_{\alpha_\ell}
\mod\ K$.
\end{description}
Remember
\begin{description}
\item[(C)] $z^{m^p}_{\alpha_\ell}=\sum\{a^{k,\alpha_\ell}_i p^{k-m^p} x^k_i:
k \ge m^p,i\in w(\alpha_\ell,k)\}$,
\end{description}
and so, in particular, (from the choice of $z^0_{\alpha_\ell}$)
\[p^{m^p+1}z^{m^p}_{\alpha_\ell}=0\quad\mbox{ and }\quad
p^{m^p}z^{m^p}_{\alpha_\ell}\ne 0.\]
For $\ell<r$ and $n\in [m^p,\omega)$ let
\[s^n_\ell=:\sum\big\{a^{k,\alpha_\ell}_i p^{k-m^p} x^k_i:k \ge m^p
\mbox{ but }k<n\mbox{ and } i\in w(\alpha_\ell,k)\big\}.\]
But $p^{k-m^p}x^k_i = y^{k,m^p}_i$, so
\[s^n_\ell=\sum\big\{a^{k,\alpha_\ell}_i y^{k,m^p}_i:k\in [m^p,n)
\mbox{ and }i\in w(\alpha_\ell,k)\big\}.\]
Hence, for some $m^*>m,m^p$ we have: $\{p^m\,s^{m^*}_\ell:\ell<r\}$ is
independent in $G[p]$ over $K[p]$ and therefore in $\langle x^k_i:k\in
[m^p,m^*],i<\omega\rangle$. Let
\[s^*_\ell=\sum\big\{a^{k,\alpha_\ell}_i\,p^k x^k_i:k\in [m^p,m^*)\mbox{ and }i\in
w(\alpha_\ell,k)\big\}.\]
Then $\{ s^*_\ell:\ell<r\}$ is independent in
\[B^+_{[m^p,m^*)}=\langle y^{k,m^*-1}_i:k\in [m^p,m^*)\mbox{ and }i<\omega
\rangle.\]
Let $i^*<\omega$ be such that: $w(\alpha_\ell,k)\subseteq\{0,\ldots,i^*-1\}$
for $k\in [m^p,m^*)$ and $\ell<r$. Let us start to define $q$:
\[\begin{array}{c}
m^q=m^*,\quad M^q=M^p\cup\{m^*-1\},\quad w^q=w^p,\quad g^q=g^p,\\
u^q=u^p\cup ([m^p,m^*)\times\{0,\ldots,i^*-1\}),\\
h^q\mbox{ is } h^p\mbox{ on } u^p\mbox{ and }h^q(k,i)=m^*-1\mbox{
otherwise},\\
K^q\mbox{ is defined appropriately, let } K'=\langle x^n_i:n\in [m^p,m^*),
i<i^*\rangle.
\end{array}\]
Complete $\{s^*_\ell:\ell<r\}$ to $\{s^*_\ell:\ell<r^*\}$, a basis of $K'[p]$,
and choose $\{t_{n,i}:(n,i)\in v^*\}$ such that: $[p^mt_{n,i}=0\ \
\Leftrightarrow\ \ m>n]$, and for $\ell<r$
\[p^{m^*-1-\ell}t_{m^*-1,\ell} = s^*_\ell.\]
The rest should be clear.
\medskip
The generic gives a variant of the desired result: an almost tree-like basis
(with the restrictions coming from $M$ and $g$), but by \ref{4.3} we can finish.
\hfill$\square_{\ref{4.1}}$
\begin{Conclusion}
[MA$_\lambda$($\sigma$-centered)]
\label{4.4}
For $(*)_0$ to hold it suffices that $(*)_1$ holds where
\begin{description}
\item[$(*)_0$] in ${\frak K}^{rs(p)}_\lambda$, there is a universal
member,
\item[$(*)_1$] in ${\frak K}^{tr}_{\bar\lambda}$ there is a universal
member, where:
\begin{description}
\item[(a)] $\lambda_n=\aleph_0$, $\lambda_\omega=\lambda$, $\ell g(\bar\lambda)
=\omega+1$\qquad \underbar{or}
\item[(b)] $\lambda_\omega=\lambda$, $\lambda_n\in [n,\omega)$, $\ell g
(\bar \lambda)=\omega+1$.
\end{description}
\end{description}
\end{Conclusion}
\begin{Remark}
\label{4.4A}
Any sequence $\langle\lambda_n:n<\omega\rangle$ with $\lambda_n<\omega$ which is
unbounded suffices.
\end{Remark}
\proof For case (a) - by \ref{4.1}.
\noindent For case (b) - the same proof. \hfill$\square_{\ref{4.4}}$
\begin{Theorem}
\label{4.5}
Assume $\lambda<2^{\aleph_0}$ and
\begin{description}
\item[(a)] there are $A_i\subseteq\lambda$, $|A_i|=\lambda$ for
$i<2^\lambda$ such that $i\ne j \Rightarrow |A_i \cap A_j| \le \aleph_0$.
\end{description}
Let $\bar\lambda=\langle \lambda_\alpha:\alpha\le\omega\rangle$, $\lambda_n
= \aleph_0$, $\lambda_\omega=\lambda$.
\noindent{\em Then} there is $P$ such that:
\medskip
\begin{description}
\item[$(\alpha)$] $P$ is a c.c.c. forcing notion,
\item[$(\beta)$] $|P|=2^\lambda$,
\item[$(\gamma)$] in $V^P$, there is $T\in {\frak K}^{tr}_{\bar\lambda}$
into which every $T' \in ({\frak K}^{tr}_{\bar \lambda})^V$ can be embedded.
\end{description}
\end{Theorem}
\proof Let $\bar T=\langle T_i:i<2^\lambda\rangle$ list the trees $T$ of
cardinality $\le\lambda$ satisfying
\[{}^{\omega >}\omega\subseteq T \subseteq {}^{\omega \ge} \omega\quad
\mbox{ and }\quad T\cap {}^\omega\omega\mbox{ has cardinality $\lambda$, for
simplicity.}\]
Let $T_i\cap {}^\omega\omega=\{\eta^i_\alpha:\alpha\in A_i \}$.
We shall force $\rho_{\alpha,\ell}\in {}^\omega\omega$ for $\alpha<
\lambda$, $\ell<\omega$, and for each $i<2^\lambda$ a function $g_i:A_i
\longrightarrow\omega$ such that: there is an automorphism $f_i$ of
$({}^{\omega>}\omega,\triangleleft)$ which induces an embedding of $T_i$
into $\left(({}^{\omega>}\omega)\cup \{\rho_{\alpha,g_i(\alpha)}:\alpha\in
A_i\},\triangleleft\right)$. The conditions $p\in P$ will be approximations to these objects.
\noindent A condition $p\in P$ consists of:
\begin{description}
\item[(a)] $m<\omega$ and a finite subset $u$ of ${}^{m \ge}\omega$, closed
under initial segments such that $\langle\rangle\in u$,
\item[(b)] a finite $w\subseteq 2^\lambda$,
\item[(c)] for each $i\in w$, a finite function $g_i$ from $A_i$ to
$\omega$,
\item[(d)] for each $i\in w$, an automorphism $f_i$ of
$(u,\triangleleft)$,
\item[(e)] a finite $v\subseteq\lambda\times\omega$,
\item[(f)] for $(\alpha,n)\in v$, $\rho_{\alpha,n}\in u\cap
({}^m\omega)$,
\end{description}
such that
\begin{description}
\item[(g)] if $i\in w$ and $\alpha\in{\rm Dom}(g_i)$ then:
\begin{description}
\item[$(\alpha)$] $(\alpha,g_i(\alpha))\in v$,
\item[$(\beta)$] $\eta^i_\alpha\restriction m\in u$,
\item[$(\gamma)$] $f_i(\eta^i_\alpha\restriction m)=\rho_{\alpha,
g_i(\alpha)}$,
\end{description}
\item[(h)] $\langle\rho_{\alpha,n}:(\alpha,n)\in v\rangle$ is with no
repetition (all of length $m$),
\item[(i)] for $i\in w$, $\langle\eta^i_\alpha\restriction m:\alpha\in
{\rm Dom}(g_i)\rangle$ is with no repetition.
\end{description}
The order on $P$ is: $p \le q$ if and only if:
\begin{description}
\item[$(\alpha)$] $u^p \subseteq u^q$, $m^p\le m^q$,
\item[$(\beta)$] $w^p \subseteq w^q$,
\item[$(\gamma)$] $f^p_i \subseteq f^q_i$ for $i\in w^p$,
\item[$(\delta)$] $g^p_i \subseteq g^q_i$ for $i\in w^p$,
\item[$(\varepsilon)$] $v^p \subseteq v^q$,
\item[$(\zeta)$] $\rho^p_{\alpha,n}\trianglelefteq\rho^q_{\alpha,n}$,
when $(\alpha,n) \in v^p$,
\item[$(\eta)$] if $i\ne j\in w^p$ then for every $\alpha\in A_i\cap
A_j\setminus ({\rm Dom}(g^p_i)\cap {\rm Dom}(g^p_j))$ we have $g^q_i(\alpha)\ne
g^q_j(\alpha)$ (whenever both are defined).
\end{description}
\medskip
\noindent{\bf A Fact}\hspace{0.15in} $(P,\le)$ is a partial order.
\medskip
\noindent{\em Proof of the Fact:}\ \ \ Trivial.
\medskip
\noindent{\bf B Fact}\hspace{0.15in} For $i<2^\lambda$ the set $\{p:i\in
w^p\}$ is dense in $P$.
\medskip
\noindent{\em Proof of the Fact:}\ \ \ If $p\in P$, $i\in 2^\lambda
\setminus w^p$, define $q$ like $p$ except $w^q=w^p\cup\{i\}$,
${\rm Dom}(g^q_i)=\emptyset$.
\medskip
\noindent{\bf C Fact}\hspace{0.15in} If $p\in P$, $m_1\in
(m^p,\omega)$, $\eta^*\in u^p$, $m^*<\omega$, $i\in w^p$, $\alpha\in A_i
\setminus{\rm Dom}(g^p_i)$ {\em then} we can find $q$ such that $p\le q\in
P$, $m^q>m_1$, $\eta^* \char 94\langle m^*\rangle\in u^q$ and $\alpha\in
{\rm Dom}(g^q_i)$ and $\langle\eta^j_\beta\restriction m^q:j\in w^q$ and $\beta\in
{\rm Dom}(g^q_j)\rangle$ is with no repetition, more exactly
$\eta^{j(1)}_{\beta_1}\restriction m^q= \eta^{j(2)}_{\beta_2}\restriction
m^q \Rightarrow \eta^{j(1)}_{\beta_1}=\eta^{j(2)}_{\beta_2}$.
\medskip
\noindent{\em Proof of the Fact:}\ \ \ Let $n_0\le m^p$ be maximal such that
$\eta^i_\alpha\restriction n_0 \in u^p$. Let $n_1<\omega$ be minimal such
that $\eta^i_\alpha\restriction n_1\notin\{\eta^i_\beta\restriction n_1:
\beta\in{\rm Dom}(g^p_i)\}$ and moreover the sequence
\[\langle\eta^j_\beta\restriction n_1:j\in w^p\ \&\ \beta\in{\rm Dom}(g^p_j)\
\ \mbox{ or }\ \ j=i\ \&\ \beta=\alpha\rangle\]
is with no repetition. Choose a natural number $m^q>m^p+1,n_0+1,n_1+2$ and
let $k^*=:3+\sum\limits_{j\in w^p}|{\rm Dom}(g^p_j)|$. Choose $u^q\subseteq
{}^{m^q\ge}\omega$ such that:
\begin{description}
\item[(i)] $u^p\subseteq u^q\subseteq {}^{m^q\ge}\omega$, $u^q$ is downward
closed,
\item[(ii)] for every $\eta\in u^q$ such that $\ell g(\eta)<m^q$, for
exactly $k^*$ numbers $k$, $\eta\char 94\langle k\rangle\in u^q\setminus
u^p$,
\item[(iii)] $\eta^j_\beta\restriction\ell\in u^q$ when $\ell\le m^q$
and $j\in w^p$, $\beta\in{\rm Dom}(g^p_j)$,
\item[(iv)] $\eta^i_\alpha\restriction\ell\in u^q$ for $\ell\le m^q$,
\item[(v)] $\eta^*\char 94\langle m^* \rangle\in u^q$.
\end{description}
Next choose $\rho^q_{\beta,n}$ (for pairs $(\beta,n)\in v^p)$ such that:
\[\rho^p_{\beta,n}\trianglelefteq\rho^q_{\beta,n}\in u^q\cap {}^{m^q}
\omega.\]
For each $j\in w^p$ separately extend $f^p_j$ to an automorphism $f^q_j$
of $(u^q,\triangleleft)$ such that for each $\beta\in{\rm Dom}(g^p_j)$ we have:
\[f^q_j(\eta^j_\beta\restriction m^q)=\rho^q_{\beta,g^p_j(\beta)}.\]
This is possible, as for each $\nu\in u^p$, and $j\in w^p$, we can
separately define
\[f^q_j\restriction\{\nu':\nu\triangleleft\nu'\in u^q\ \mbox{ and }\ \nu'
\restriction (\ell g(\nu)+1)\notin u^p\}\]
whose range is
\[\{\nu':f^p_j(\nu)\triangleleft \nu'\in u^q\ \mbox{ and }\ \nu'
\restriction (\ell g(\nu)+1)\notin u^p\}.\]
The point is: by Clause (ii) above those two sets are isomorphic and for
each $\nu$ at most one $\rho^p_{\beta,n}$ is involved (see Clause (h) in the
definition of $p \in P$). Next let $w^q=w^p$, $g^q_j=g^p_j$ for $j\in w^p
\setminus\{i\}$, $g^q_i\restriction{\rm Dom}(g^p_i)=g^p_i$, $g^q_i(\alpha)=
\min(\{n:(\alpha,n)\notin v^p\})$, ${\rm Dom}(g^q_i)={\rm Dom}(g^p_i)\cup\{\alpha\}$,
and $\rho^q_{\alpha,g^q_i(\alpha)}=f^q_i(\eta^i_\alpha\restriction m^q)$ and
$v^q=v^p\cup\{(\alpha,g^q_i(\alpha))\}$.
\medskip
\noindent{\bf D Fact}\hspace{0.15in} $P$ satisfies the c.c.c.
\medskip
\noindent{\em Proof of the Fact:}\ \ \ Assume $p_\varepsilon\in P$ for
$\varepsilon<\omega_1$. By Fact C, without loss of generality each
\[\langle\eta^j_\beta\restriction m^{p_\varepsilon}:j\in w^{p_\varepsilon}
\mbox{ and }\beta\in{\rm Dom}(g^{p_\varepsilon}_j)\rangle\]
is with no repetition. Without loss of generality, for all $\varepsilon
<\omega_1$
\[U_\varepsilon=:\big\{\alpha<2^\lambda:\alpha\in w^{p_\varepsilon}\mbox{ or }
\bigvee_{i\in w^{p_\varepsilon}}[\alpha\in{\rm Dom}(g^{p_\varepsilon}_i)]\mbox{ or }
\bigvee_k(\alpha,k)\in v^{p_\varepsilon}\big\}\]
has the same number of elements and for $\varepsilon\ne\zeta<\omega_1$,
there is a unique one-to-one order preserving function from $U_\varepsilon$
onto $U_\zeta$ which we call ${\rm OP}_{\zeta,\varepsilon}$, which also maps
$p_\varepsilon$ to $p_\zeta$ (so $m^{p_\zeta}=m^{p_\varepsilon}$; $u^{p_\zeta}
=u^{p_\varepsilon}$; ${\rm OP}_{\zeta,\varepsilon}(w^{p_\varepsilon})=
w^{p_\zeta}$; if $i\in w^{p_\varepsilon}$, $j={\rm OP}_{\zeta,\varepsilon}(i)$,
then $f_i\circ{\rm OP}_{\varepsilon,\zeta}\equiv f_j$; and {\em if\/} $\beta=
{\rm OP}_{\zeta,\varepsilon}(\alpha)$ and $\ell<\omega$ {\em then}
\[(\alpha,\ell)\in v^{p_\varepsilon}\quad\Leftrightarrow\quad (\beta,\ell)
\in v^{p_\zeta}\quad\Rightarrow\quad\rho^{p_\varepsilon}_{\alpha,\ell}=
\rho^{p_\zeta}_{\beta,\ell}).\]
Also this mapping is the identity on $U_\zeta\cap U_\varepsilon$ and
$\langle U_\zeta:\zeta<\omega_1\rangle$ is a $\triangle$-system.
Let $w=:w^{p_0}\cap w^{p_1}$. As $i\ne j\ \Rightarrow\ |A_i\cap A_j|\le
\aleph_0$, without loss of generality
\begin{description}
\item[$(*)$] if $i\ne j\in w$ then
\[U_{\varepsilon}\cap (A_i\cap A_j)\subseteq w.\]
\end{description}
We now start to define $q\ge p_0,p_1$. Choose $m^q$ such that $m^q\in
(m^{p_\varepsilon},\omega)$ and
\[\begin{array}{ll}
m^q>\max\big\{\ell g(\eta^{i_0}_{\alpha_0}\cap\eta^{i_1}_{\alpha_1})+1:&
i_0\in w^{p_0},\ i_1 \in w^{p_1},\ {\rm OP}_{1,0}(i_0)=i_1,\\
\ &\alpha_0\in{\rm Dom}(g^{p_0}_{i_0}),\ \alpha_1\in{\rm Dom}(g^{p_1}_{i_1}),\\
\ &{\rm OP}_{1,0}(\alpha_0)=\alpha_1\big\}.
\end{array}\]
Let $u^q\subseteq {}^{m^q\ge}\omega$ be such that:
\begin{description}
\item[(A)] $u^q\cap\left({}^{m^{p_0}\ge}\omega\right)=u^q\cap\left(
{}^{m^{p_1}\ge}\omega\right)=u^{p_0}=u^{p_1}$,
\item[(B)] for each $\nu\in u^q$, $m^{p_0}\le\ell g(\nu)<m^q$, for exactly
two numbers $k<\omega$, $\nu\char 94 \langle k\rangle\in u^q$,
\item[(C)] $\eta^i_\alpha\restriction\ell\in u^q$ for $\ell\le m^q$
\underbar{when}: $i\in w^{p_0}$, $\alpha\in{\rm Dom}(g^{p_0}_i)$ \underbar{or}
$i\in w^{p_1}$, $\alpha\in{\rm Dom}(g^{p_1}_i)$.
\end{description}
[Possible as $\{\eta^i_\alpha\restriction m^{p_\varepsilon}:i\in
w^{p_\varepsilon},\alpha\in{\rm Dom}(g^{p_\varepsilon}_i)\}$ is with no
repetitions (the first line of the proof).]
Let $w^q=:w^{p_0}\cup w^{p_1}$ and $v^q=:v^{p_0}\cup v^{p_1}$ and for
$i \in w^q$
\[g^q_i=\left\{\begin{array}{lll}
g^{p_0}_i &\mbox{\underbar{if}}& i\in w^{p_0}\setminus w^{p_1},\\
g^{p_1}_i &\mbox{\underbar{if}}& i\in w^{p_1}\setminus w^{p_0},\\
g^{p_0}_i \cup g^{p_1}_i &\mbox{\underbar{if}}& i\in w^{p_0}\cap w^{p_1}.
\end{array}\right.\]
Next choose $\rho^q_{\alpha,\ell}$ for $(\alpha,\ell)\in v^q$ as follows.
Let $\nu_{\alpha,\ell}$ be $\rho^{p_0}_{\alpha,\ell}$ if defined,
$\rho^{p_1}_{\alpha,\ell}$ if defined (no contradiction). If $(\alpha,\ell)
\in v^q$ choose $\rho^q_{\alpha,\ell}$ as any $\rho$ such that:
\begin{description}
\item[$\otimes_0$] $\nu_{\alpha,\ell}\triangleleft\rho\in u^q\cap
{}^{(m^q)}\omega$.
\end{description}
But not all choices are O.K., as we need to be able to define $f^q_i$ for
$i\in w^q$. A possible problem will arise only when $i\in w^{p_0}\cap
w^{p_1}$. Specifically we need just (remember that $\langle
\rho^{p_\varepsilon}_{\alpha,\ell}:(\alpha,\ell)\in v^{p_\varepsilon}
\rangle$ are pairwise distinct by clause (h) of the Definition of $p\in P$):
\begin{description}
\item[$\otimes_1$] if $i_0\in w^{p_0},(\alpha_0,\ell)=(\alpha_0,
g_{i_0}(\alpha_0)),\alpha_0\in{\rm Dom}(g^{p_0}_{i_0})$, $i_1={\rm OP}_{1,0}(i_0)$
and $\alpha_1={\rm OP}_{1,0}(\alpha_0)$ and $i_0=i_1$
{\em then} $\ell g(\eta^{i_0}_{\alpha_0}\cap\eta^{i_1}_{\alpha_1})=\ell g
(\rho^q_{\alpha_0,\ell}\cap\rho^q_{\alpha_1,\ell})$.
\end{description}
We can, of course, demand $\alpha_0\neq \alpha_1$ (otherwise the
conclusion of $\otimes_1$ is trivial).
Our problem is expressible for each pair $(\alpha_0,\ell),(\alpha_1,\ell)$
separately: first, the problem lies in defining the
$\rho^q_{\alpha,\ell}$'s, and second, if $(\alpha'_0,\ell')$,
$(\alpha'_1,\ell')$ is another such pair then $\{(\alpha_0,\ell),
(\alpha_1,\ell)\}$, $\{(\alpha'_0,\ell'),(\alpha'_1,\ell')\}$ are either
disjoint or equal. Now for a given pair $(\alpha_0,\ell),(\alpha_1,\ell)$ how
many $i_0=i_1$ do we have? Necessarily $i_0\in w^{p_0}\cap w^{p_1}=w$. But
if $i'_0\ne i''_0$ are like that then $\alpha_0\in A_{i'_0}\cap
A_{i''_0}$, contradicting $(*)$ above because $\alpha_0\neq
\alpha_1={\rm OP}_{1,0}(\alpha_0)$. So there is at most one candidate
$i_0=i_1$, so there is no problem to satisfy $\otimes_1$. Now we can define
$f^q_i$ (i$\in w^q$) as in the proof of Fact C.
\medskip
The rest should be clear. \hfill$\square_{\ref{4.4}}$
\begin{Conclusion}
\label{4.6}
Suppose $V\models GCH$, $\aleph_0<\lambda<\chi$ and $\chi^\lambda=\chi$.
Then for some c.c.c. forcing notion $P$ of cardinality $\chi$, not collapsing
cardinals nor changing cofinalities, in $V^P$:
\begin{description}
\item[(i)] $2^{\aleph_0}=2^\lambda=\chi$,
\item[(ii)] ${\frak K}^{tr}_\lambda$ has a universal family of cardinality
$\lambda^+$,
\item[(iii)] ${\frak K}^{rs(p)}_\lambda$ has a universal family of cardinality
$\lambda^+$.
\end{description}
\end{Conclusion}
\proof First use a preliminary forcing $Q^0$ of Baumgartner \cite{B},
adding $\langle A_\alpha:\alpha<\chi\rangle$, $A_\alpha\in
[\lambda]^\lambda$, $\alpha\neq\beta\quad \Rightarrow\quad |A_\alpha\cap
A_\beta|\le\aleph_0$ (we can have $2^{\aleph_0}=\aleph_1$ here, or
$[\alpha\ne \beta\quad \Rightarrow\quad A_\alpha\cap A_\beta$ finite], but
not both). Next use an FS iteration $\langle P_i,\dot{Q}_i:i<\chi\times
\lambda^+\rangle$ such that each forcing from \ref{4.4} appears and each
forcing as in \ref{4.5} appears. \hfill$\square_{\ref{4.6}}$
\begin{Remark}
\label{4.7}
We would like to have that there is a universal member in
${\frak K}^{rs(p)}_\lambda$; this sounds very reasonable but we did not try.
In our framework, the present result shows the limitations of the ZFC results
which the methods applied in the previous sections can give.
\end{Remark}
\section{Back to ${\frak K}^{rs(p)}$, real non-existence results}
By \S1 we know that if $G$ is an Abelian group with set of elements
$\lambda$, $C\subseteq\lambda$, then for an element $x\in G$ the distance
from $\{y:y<\alpha\}$ for $\alpha\in C$ does not code an appropriate invariant.
If we have infinitely many such distance functions, e.g.\ if we have infinitely
many primes, we can use more complicated invariants related to $x$, as in \S3.
But if we have one prime, this approach does not help.
If one element fails, can we use infinitely many? A countable subset $X$ of
$G$ can code a countable subset of $C$:
\[\{\alpha\in C:\mbox{ closure}(\langle X\rangle_G)\cap\alpha\nsubseteq
\sup(C\cap\alpha)\},\]
but this seems silly: we use heavily the fact that $C$ has many countable
subsets (in particular $>\lambda$) and $\lambda$ has at least as many.
However, what if $C$ has a small family (say of cardinality $\le\lambda$ or
$<\mu^{\aleph_0}$) of countable subsets such that every subset of cardinality,
say continuum, contains one? Well, we need more: we need to catch a countable subset
for which the invariant defined above is infinite (necessarily it is at most
of cardinality $2^{\aleph_0}$, and because of \S4 we are not trying any more
to deal with $\lambda\le 2^{\aleph_0}$). The set theory needed is expressed
by $T_J$ below, and various ideals also defined below, and the result itself
is \ref{5.7}.
Of course, we can deal with other classes like torsion free reduced groups,
as they have the characteristic non-structure property of unsuperstable
first order theories; but the relevant ideals will vary: the parallel to
$I^0_{\bar\mu}$ for $\bigwedge\limits_n \mu_n=\mu$, $J^2_{\bar\mu}$ seems
to be always O.K.
\begin{Definition}
\label{5.1}
\begin{enumerate}
\item For $\bar\mu=\langle\mu_n:n<\omega\rangle$ let $B_{\bar\mu}$ be
\[\bigoplus\{K^n_\alpha:n<\omega,\alpha<\mu_n\},\qquad K^n_\alpha=
\langle{}^* t^n_\alpha\rangle_{K^n_\alpha}\cong {\Bbb Z}/p^{n+1} {\Bbb Z}.\]
Let $B_{\bar\mu\restriction n}=\bigoplus\{K^m_\alpha:\alpha<\mu_m,m<n\}
\subseteq B_{\bar\mu}$ (they are in ${\frak K}^{rs(p)}_{\le\sum\limits_{n}
\mu_n}$). Let $\hat B$ be the $p$-torsion completion of $B$ (i.e. the completion
under the norm $\|x\|=\min\{2^{-n}:p^n\mbox{ divides }x\}$); in particular
$\hat B_{\bar\mu}$ denotes the $p$-torsion completion of $B_{\bar\mu}$.
\item Let $I^1_{\bar \mu}$ be the ideal on $\hat B_{\bar \mu}$
generated by $I^0_{\bar \mu}$, where
\[\begin{array}{ll}
I^0_{\bar\mu}=\big\{A\subseteq\hat B_{\bar\mu}:&\mbox{for every large
enough }n,\\
&\mbox{for no }y\in\bigoplus\{K^m_\alpha:m\le n\mbox{ and }\alpha<\mu_m\}\\
&\mbox{but }y\notin\bigoplus\{K^m_\alpha:m<n\mbox{ and }\alpha<\mu_m\}
\mbox{ we have}:\\
&\mbox{for every }m\mbox{ for some }z\in\langle A\rangle\mbox{ we have:}\\
&p^m\mbox{ divides }z-y\big\}.
\end{array}\]
(We may write $I^0_{\hat B_{\bar\mu}}$, but the ideal depends also on
$\langle\bigoplus\limits_{\alpha<\mu_n} K^n_\alpha:n<\omega\rangle$ not
just on $\hat B_{\bar\mu}$ itself).
\item For $X,A\subseteq\hat B_{\bar\mu}$,
\[\mbox{ recall }\ \ \langle A\rangle_{\hat
B_{\bar\mu}}=\big\{\sum\limits_{n<n^*} a_ny_n:y_n
\in A,\ a_n\in{\Bbb Z}\mbox{ and } n^*\in{\Bbb N}\big\},\]
\[\mbox{ and let }\ \ c\ell_{\hat B_{\bar\mu}}(X)=\{x:(\forall
n)(\exists y\in X)(x-y\in p^n \hat B_{\bar \mu})\}.\]
\item Let $J^1_{\bar \mu}$ be the ideal which $J^{0.5}_{\bar \mu}$
generates, where
\[\begin{array}{ll}
J^{0.5}_{\bar\mu}=\big\{A\subseteq\prod\limits_{n<\omega}\mu_n:
&\mbox{for some }n<\omega\mbox{ for no }m\in [n,\omega)\\
&\mbox{and }\beta<\gamma<\mu_m\mbox{ do we have}:\\
&\mbox{for every }k\in [m,\omega)\mbox{ there are }\eta,\nu\in A\mbox{
such}\\
&\mbox{that:}\ \ \eta(m)=\beta,\,\nu(m)=\gamma,\,\eta\restriction m=
\nu\restriction m\\
&\mbox{and }\eta\restriction (m,k)=\nu\restriction (m,k)\big\}.
\end{array}\]
\item
\[\begin{array}{rr}
J^0_{\bar\mu}=\{A\subseteq\prod\limits_{n<\omega}\mu_n:
&\mbox{for some }n<\omega\mbox{ and } k,\mbox{ the mapping }\eta\mapsto
\eta \restriction n\\
&\mbox{is }(\le k)\mbox{-to-one }\}.
\end{array}\]
\item $J^2_{\bar\mu}$ is the ideal of nowhere dense subsets of
$\prod\limits_n\mu_n$ (under the following natural topology: a
neighbourhood of $\eta$ is $U_{\eta,n}=\{\nu:\nu\restriction n=\eta
\restriction n\}$ for some $n$).
\item $J^3_{\bar \mu}$ is the ideal of meagre subsets of $\prod\limits_n
\mu_n$, i.e. subsets which are included in a countable union of members of
$J^2_{\bar \mu}$.
\end{enumerate}
\end{Definition}
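To fix the intuition behind these ideals, here is a minimal illustration (our
own, not used later), under the assumption that $\mu_n\ge 2$ for infinitely many
$n$ (as in \ref{5.5}(3) below): for every $\eta\in\prod\limits_{n<\omega}\mu_n$
\[\{\eta\}\in J^0_{\bar\mu}\subseteq J^2_{\bar\mu},\qquad\mbox{ while }\qquad
U_{\eta,n}\notin J^2_{\bar\mu}\mbox{ for every }n<\omega;\]
indeed $\{\eta\}$ is closed with empty interior (every basic neighbourhood
$U_{\eta,n}$ contains some $\nu\ne\eta$), whereas $U_{\eta,n}$ is a non-empty
open set, hence not nowhere dense.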
\begin{Observation}
\label{5.2}
\begin{enumerate}
\item $I^0_{\bar\mu}$, $J^0_{\bar\mu}$, $J^{0.5}_{\bar\mu}$ are
$(<\aleph_1)$-based, i.e. for $I^0_{\bar \mu}$: if $A\subseteq\hat
B_{\bar\mu}$, $A\notin I^0_{\bar\mu}$ then there is a countable $A_0
\subseteq A$ such that $A_0\notin I^0_{\bar\mu}$.
\item $I^1_{\bar\mu}$, $J^0_{\bar\mu}$, $J^1_{\bar\mu}$,
$J^2_{\bar\mu}$, $J^3_{\bar\mu}$ are ideals, $J^3_{\bar\mu}$ is
$\aleph_1$-complete.
\item $J^0_{\bar\mu}\subseteq J^1_{\bar\mu}\subseteq J^2_{\bar\mu}
\subseteq J^3_{\bar\mu}$.
\item There is a function $g$ from $\prod\limits_{n<\omega}\mu_n$ into
$\hat B_{\bar\mu}$ such that for every $X\subseteq\prod\limits_{n<\omega}
\mu_n$:
\[X\notin J^1_{\bar\mu}\quad \Rightarrow\quad g''(X)\notin
I^1_{\bar\mu}.\]
\end{enumerate}
\end{Observation}
\proof E.g. 4)\ \ Let $g(\eta)=\sum\limits_{n<\omega}p^n({}^*t^n_{\eta(n)})$;
this is a well defined element of $\hat B_{\bar\mu}$, as each term is divisible
by $p^n$ (so the partial sums form a Cauchy sequence) and $p\cdot g(\eta)=0$.
Let $X \subseteq\prod\limits_{n<\omega}\mu_n$, $X\notin J^1_{\bar\mu}$.
Assume $g''(X)\in I^1_{\bar\mu}$, so for some $\ell^*$ and $A_\ell
\subseteq\hat B_{\bar\mu}$, ($\ell<\ell^*$) we have $A_\ell\in I^0_{\bar\mu}$,
and $g''(X)\subseteq\bigcup\limits_{\ell<\ell^*} A_\ell$, so $X=
\bigcup\limits_{\ell<\ell^*} X_\ell$, where
\[X_\ell=:\{\eta\in X:g(\eta)\in A_\ell\}.\]
As $J^1_{\bar \mu}$ is an ideal, for some $\ell<\ell^*$, $X_\ell\notin
J^1_{\bar\mu}$. So by the definition of $J^1_{\bar\mu}$, for some infinite
$\Gamma\subseteq\omega$ for each $m\in\Gamma$ we have $\beta_m<\gamma_m<
\mu_m$ and for every $k\in [m,\omega)$ we have $\eta_{m,k},\nu_{m,k}$, as
required in the definition of $J^1_{\bar \mu}$. So $g(\eta_{m,k}),
g(\nu_{m,k}) \in A_\ell$ (for $m\in \Gamma$, $k\in (m,\omega)$). Now
\[p^m({}^* t^m_{\gamma_m} - {}^* t^m_{\beta_m})=g(\nu_{m,k})-g(\eta_{m,k})
\mod\ p^k \hat B_{\bar \mu},\]
but $g(\nu_{m,k})-g(\eta_{m,k})\in\langle A_\ell\rangle_{\hat B_{\bar\mu}}$.
Hence
\[(\exists z\in\langle A_\ell\rangle_{\hat B_{\bar\mu}})[p^m({}^* t^m_{\gamma_m}
-{}^* t^m_{\beta_m})=z\mod\ p^k \hat B_{\bar\mu}],\]
as this holds for each $k$, $p^m({}^* t^m_{\gamma_m}-{}^* t^m_{\beta_m})\in
c \ell(\langle A_\ell\rangle_{\hat B_{\bar\mu}})$.
Since $\Gamma$ is infinite, this contradicts $A_\ell
\in I^0_{\bar \mu}$. \hfill$\square_{\ref{5.2}}$
\begin{Definition}
\label{5.3}
Let $I\subseteq{\cal P}(X)$ be downward closed (and for simplicity $\{\{x\}:
x\in X\}\subseteq I$). Let $I^+={\cal P}(X)\setminus I$. Let
\[\begin{array}{ll}
{\bf U}^{<\kappa}_I(\mu)=:\min\big\{|{\cal P}|:&{\cal P}\subseteq [\mu]^{<\kappa},
\mbox{ and for every } f:X\longrightarrow\mu\mbox{ for some}\\
&Y\in {\cal P},\mbox{ we have }\{x\in X:f(x)\in Y\}\in
I^+\big\}.
\end{array}\]
Instead of $<\kappa^+$ in the superscript of ${\bf U}$ we write $\kappa$. If
$\kappa>|{\rm Dom}(I)|^+$, we omit it (since then its value does not matter).
\end{Definition}
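As a trivial sanity check on this definition (our own illustration, under the
extra assumption $X\notin I$, so that $I^+\ne\emptyset$): if $\mu<\kappa$ then
${\bf U}^{<\kappa}_I(\mu)=1$, witnessed by ${\cal P}=\{\mu\}$, since then $\{x\in
X:f(x)\in\mu\}=X\in I^+$ for every $f:X\longrightarrow\mu$; and if
$\kappa>|{\rm Dom}(I)|$ then ${\bf U}^{<\kappa}_I(\mu)\le\mu^{|{\rm Dom}(I)|}$, as the
family $\{{\rm Rang}(f):f:X\longrightarrow\mu\}$ works.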
\begin{Remark}
\label{5.4}
\begin{enumerate}
\item If $2^{<\kappa}+|{\rm Dom}(I)|^{<\kappa}\le\mu$ we can find a family $F$ of
partial functions from ${\rm Dom}(I)$ to $\mu$ such that:
\begin{description}
\item[(a)] $|F|={\bf U}^{<\kappa}_I(\mu)$,
\item[(b)] $(\forall f:X\longrightarrow\mu)(\exists Y\in I^+)[f\restriction
Y \in F]$.
\end{description}
\item Such cardinal functions (like ${\bf U}^{<\kappa}_I(\mu)$) are investigated in {\bf
pcf} theory (\cite{Sh:g}, \cite[\S6]{Sh:410}, \cite[\S2]{Sh:430},
\cite{Sh:513}).
\item If $I\subseteq J\subseteq {\cal P}(X)$, then ${\bf U}^{<\kappa}_I(\mu)\le
{\bf U}^{<\kappa}_J(\mu)$, hence by \ref{5.2}(3), and the above
\[{\bf U}^{<\kappa}_{J^0_{\bar\mu}}(\mu)\le {\bf U}^{<\kappa}_{J^1_{\bar\mu}}(\mu)
\le {\bf U}^{<\kappa}_{J^2_{\bar\mu}}(\mu)\le
{\bf U}^{<\kappa}_{J^3_{\bar\mu}}(\mu)\]
and by \ref{5.2}(4) we have ${\bf U}^{<\kappa}_{I^1_{\bar \mu}}(\mu)\leq
{\bf U}^{<\kappa}_{J^1_{\bar \mu}}(\mu).$
\item On ${\rm IND}'_\theta(\bar\kappa)$ (see \ref{5.5A} below) see \cite{Sh:513}.
\end{enumerate}
\end{Remark}
\begin{Definition}
\label{5.5A}
${\rm IND}'_\theta(\langle\kappa_n:n<\omega\rangle)$ means that for every model
$M$ with universe $\bigcup\limits_{n<\omega}\kappa_n$ and $\le\theta$
functions, for some $\Gamma\in [\omega]^{\aleph_0}$ and $\eta\in
\prod\limits_{n<\omega}\kappa_n$ we have:
\[n\in\Gamma\quad\Rightarrow\quad\eta(n)\notin c\ell_M\{\eta(\ell):\ell
\ne n\}.\]
\end{Definition}
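Unwinding the definition: $\neg{\rm IND}'_\theta(\langle\kappa_n:n<\omega\rangle)$
says that some model $M$ as above satisfies: for every $\eta\in
\prod\limits_{n<\omega}\kappa_n$ the set $\{n<\omega:\eta(n)\notin c\ell_M
\{\eta(\ell):\ell\ne n\}\}$ is finite. This is essentially the form in which the
negation is used in the proof of \ref{5.5}(2) below.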
\begin{Remark}
Actually if $\theta\geq \aleph_0$, this implies that we can fix
$\Gamma$, hence replacing $\langle \kappa_n: n< \omega\rangle$ by an
infinite subsequence we can have $\Gamma=\omega$.
\end{Remark}
\begin{Theorem}
\label{5.5}
\begin{enumerate}
\item If $\mu_n\rightarrow (\kappa_n)^2_{2^\theta}$ and ${\rm IND}'_\theta(\langle
\kappa_n:n<\omega\rangle)$ {\em then} $\prod\limits_{n<\omega}\mu_n$ is not
the union of $\le\theta$ sets from $J^1_{\bar \mu}$.
\item If $\theta=\theta^{\aleph_0}$ and $\neg{\rm IND}'_\theta(\langle\mu_n:
n<\omega\rangle)$ then $\prod\limits_{n<\omega}\mu_n$ is the union of
$\le\theta$ members of $J^1_{\bar\mu}$.
\item If $\lim\sup\limits_n \mu_n$ is $\ge 2$, then $\prod\limits_{n<\omega}
\mu_n\notin J^3_{\bar\mu}$ (so also the other ideals defined above are not
trivial by \ref{5.2}(3), (4)).
\end{enumerate}
\end{Theorem}
\proof 1)\ \ Suppose $\prod\limits_{n<\omega}\mu_n$ is
$\bigcup\limits_{i<\theta} X_i$, and each $X_i\in J^1_{\bar\mu}$. We define
for each $i<\theta$ and $n<k<\omega$ a two-place relation $R^{n,k}_i$ on
$\mu_n$:
\qquad $\beta R^{n,k}_i \gamma$ if and only if
\qquad there are $\eta,\nu\in X_i$
such that
\[\eta\restriction [0,n)=\nu\restriction [0,n)\quad\mbox{and }\
\eta\restriction (n,k)=\nu\restriction (n,k)\quad\mbox{and }\ \eta(n)
=\beta,\ \nu(n)=\gamma.\]
Note that $R^{n,k}_i$ is symmetric and
\[n<k_1<k_2\ \&\ \beta R^{n,k_2}_i \gamma\quad \Rightarrow\quad \beta
R^{n,k_1}_i \gamma.\]
As $\mu_n\rightarrow (\kappa_n)^2_{2^\theta}$, we can find $A_n\in
[\mu_n]^{\kappa_n}$ and a truth value ${\bf t}^{n,k}_i$ such that for all
$\beta<\gamma$ from $A_n$, the truth value of $\beta R^{n,k}_i\gamma$ is
${\bf t}^{n,k}_i$. If for some $i$ the set
\[\Gamma_i=:\{n<\omega:\mbox{ for every }k\in (n,\omega)\mbox{ we have }
{\bf t}^{n,k}_i=\mbox{ true}\}\]
is infinite, we get a contradiction to ``$X_i\in J^1_{\bar \mu}$", so for
some $n(i)<\omega$ we have $n(i)=\sup(\Gamma_i)$.
For each $n<k<\omega$ and $i<\theta$ we define a partial function
$F^{n,k}_i$ from $\prod\limits_{\scriptstyle \ell<k,\atop\scriptstyle\ell
\ne n} A_\ell$ into $A_n$:
\begin{quotation}
\noindent $F^{n,k}_i(\alpha_0,\ldots,\alpha_{n-1},\alpha_{n+1},\ldots,\alpha_{k-1})$ is
the first $\beta\in A_n$ such that for some $\eta\in X_i$ we have
\[\begin{array}{c}
\eta\restriction [0,n)=\langle\alpha_0,\ldots,\alpha_{n-1}\rangle,\quad
\eta(n)=\beta,\\
\eta\restriction (n,k)=\langle\alpha_{n+1},\ldots,\alpha_{k-1}\rangle.
\end{array}\]
\end{quotation}
So as ${\rm IND}'_\theta(\langle\kappa_n:n<\omega\rangle)$ there is $\eta=\langle
\beta_n:n<\omega\rangle\in\prod\limits_{n<\omega} A_n$ such that for
infinitely many $n$, $\beta_n$ is not in the closure of $\{\beta_\ell:\ell
<\omega,\,\ell\ne n\}$ by the $F^{n,k}_i$'s. As $\eta\in\prod\limits_{n<
\omega} A_n\subseteq\prod\limits_{n<\omega}\mu_n=\bigcup\limits_{i<\theta}
X_i$, necessarily for some $i<\theta$, $\eta\in X_i$. Let $n\in(n(i),\omega)$
be such that $\beta_n$ is not in the closure of
$\{\beta_\ell:\ell<\omega\mbox{ and }\ell\neq n\}$
and let $k>n$ be such that ${\bf t}^{n,k}_i=\mbox{ false}$. Now $\gamma=:
F^{n,k}_i(\beta_0,\ldots,\beta_{n-1},\beta_{n+1},\ldots,\beta_{k-1})$ is well
defined and $\le\beta_n$ (as $\beta_n$ exemplifies that there is such $\beta$) and
is $\ne \beta_n$ (by the choice of $\langle\beta_\ell:\ell<\omega\rangle$), so
by the choice of $n(i)$ (so of $n$, $k$ and earlier of ${\bf t}^{n, k}_i$
and of $A_n$) we get a
contradiction to ``$\gamma<\beta_n$ are from $A_n$''.
\noindent 2)\ \ Let $M$ be an algebra with universe $\sum\limits_{n<\omega}
\mu_n$ and $\le\theta$ functions (say $F^n_i$ for $i<\theta$, $n<\omega$,
$F^n_i$ is $n$-place) exemplifying $\neg{\rm IND}'_\theta(\langle\mu_n:n<\omega
\rangle)$. Let
\[\Gamma=:\{\langle(k_n,i_n):n^*\le n<\omega\rangle:n^*<\omega\mbox{ and }
\bigwedge_n n<k_n<\omega\mbox{ and }i_n<\theta\}.\]
For $\rho=\langle(k_n,i_n):n^*\le n<\omega\rangle\in\Gamma$ let
\[\begin{ALIGN}
A_\rho=:&\big\{\eta\in\prod\limits_{n<\omega}\mu_n:\mbox{for every }n
\in [n^*,\omega)\mbox{ we have}\\
&\qquad\eta(n)=F^{k_n-1}_{i_n}\left(\eta(0),\ldots,\eta(n-1),\eta(n+1),
\ldots,\eta(k_n)\right)\big\}.
\end{ALIGN}\]
So, by the choice of $M$, $\prod\limits_{n<\omega}\mu_n=\bigcup\limits_{\rho
\in\Gamma} A_\rho$. On the other hand, it is easy to check that $A_\rho\in
J^1_{\bar \mu}$. \hfill$\square_{\ref{5.5}}$
\begin{Theorem}
\label{5.6}
If $\mu=\sum\limits_{n<\omega}\lambda_n$, $\lambda^{\aleph_0}_n<\lambda_{n+1}$
and $\mu<\lambda={\rm cf}(\lambda)<\mu^{+\omega}$\\
then ${\bf U}^{\aleph_0}_{I^0_{\langle\lambda_n:n<\omega\rangle}}(\lambda)=
\lambda$ and even ${\bf U}^{\aleph_0}_{J^3_{\langle \lambda_n:n<\omega\rangle}}
(\lambda)=\lambda$.
\end{Theorem}
\proof See \cite[\S6]{Sh:410}, \cite[\S2]{Sh:430}, and \cite{Sh:513}
for considerably more.
\begin{Lemma}
\label{5.7}
Assume $\lambda>2^{\aleph_0}$ and
\begin{description}
\item[$(*)$(a)] $\prod\limits_{n<\omega}\mu_n<\mu$ and $\mu^+<\lambda=
{\rm cf}(\lambda)<\mu^{\aleph_0}$,
\item[\ \ (b)] $\hat B_{\bar\mu}\notin I^0_{\bar\mu}$ and $\limsup\limits_n\mu_n$
is infinite,
\item[\ \ (c)] ${\bf U}^{\aleph_0}_{I^0_{\bar\mu}}(\lambda)=\lambda$
(note $I^0_{\bar \mu}$ is not required to be an ideal).
\end{description}
{\em Then} there is no universal member in ${\frak K}^{rs(p)}_\lambda$.
\end{Lemma}
\proof Let $S\subseteq\lambda$, $\bar C=\langle C_\delta:\delta\in S\rangle$
guesses clubs of $\lambda$, chosen as in the proof of \ref{3.3} (so $\alpha
\in{\rm nacc}(C_\delta)\ \Rightarrow\ {\rm cf}(\alpha)>2^{\aleph_0}$). Instead of
defining the relevant invariant we prove the theorem directly, but we could
define it, somewhat cumbersomely (like \cite[III,\S3]{Sh:e}).
Assume $H\in {\frak K}^{rs(p)}_\lambda$ is a pretender to universality;
without loss of generality with the set of elements of $H$ equal to
$\lambda$.
Let $\chi=\beth_7(\lambda)^+$, ${\bar{\frak A}}=\langle {\frak A}_\alpha:
\alpha<\lambda\rangle$ be an increasing continuous sequence of elementary
submodels of $({\cal H}(\chi),\in,<^*_\chi)$, ${\bar {\frak A}}\restriction
(\alpha+1)\in {\frak A}_{\alpha+1}$, $\|{\frak A}_\alpha\|<\lambda$, ${\frak
A}_\alpha\cap\lambda$ an ordinal, ${\frak A}=\bigcup\limits_{\alpha<\lambda}
{\frak A}_\alpha$ and $\{H,\langle\mu_n:n<\omega\rangle,\mu,\lambda\}\in
{\frak A}_0$, so $B_{\bar \mu},\hat B_{\bar \mu} \in {\frak A}_0$ (where $\bar
\mu=\langle\mu_n:n<\omega\rangle$, of course).
For each $\delta\in S$, let ${\cal P}_\delta=:[C_\delta]^{\aleph_0}\cap
{\frak A}$. Choose $A_\delta\subseteq C_\delta$ of order type $\omega$
almost disjoint from each $a\in {\cal P}_\delta$, and from $A_{\delta_1}$ for
$\delta_1\in\delta\cap S$; its existence should be clear as $\lambda< \mu^{\aleph_0}$. So
\begin{description}
\item[$(*)_0$] every countable $A\in {\frak A}$ is almost disjoint to
$A_\delta$.
\end{description}
By \ref{5.2}(2), $I^0_{\bar\mu}$ is $(<\aleph_1)$-based so by \ref{5.4}(1) and
the assumption (c) we have
\begin{description}
\item[$(*)_1$] for every $f:\hat B_{\bar\mu}\longrightarrow\lambda$ for some
countable $Y\subseteq \hat B_{\bar\mu}$, $Y\notin I^0_{\bar\mu}$, we have
$f\restriction Y\in {\frak A}$
\end{description}
(remember $(\prod\limits_{n<\omega}\mu_n)^{\aleph_0}=\prod\limits_{n<\omega}
\mu_n$).
\noindent Let $B$ be $\bigoplus\{G^n_{\alpha,i}:n<\omega,\alpha<\lambda,\,i<
\sum\limits_{k<\omega}\mu_k\}$, where
\[G^n_{\alpha,i}=\langle x^n_{\alpha,i}\rangle_{G^n_{\alpha,i}}\cong{\Bbb
Z}/p^{n+1}{\Bbb Z}.\]
\noindent
So $B$, $\hat B$, $\langle(n,\alpha,i,x^n_{\alpha,i}):n<\omega,\alpha<\lambda,
i<\sum\limits_{k<\omega}\mu_k\rangle$ are well defined. Let $G$ be the
subgroup of $\hat B$ generated by:
\[\begin{ALIGN}
B\cup\big\{x\in\hat B: &\mbox{for some }\delta\in S,\, x\mbox{ is in the
closure of }\\
&\bigoplus\{G^n_{\alpha,i}:n<\omega,i<\mu_n,\alpha\mbox{ is the }n\mbox{th
element of } A_\delta\}\big\}.
\end{ALIGN}\]
As $\prod\limits_{n<\omega}\mu_n<\mu<\lambda$, clearly $G\in {\frak
K}^{rs(p)}_\lambda$, without loss of generality the set of elements of $G$ is
$\lambda$ and let $h:G\longrightarrow H$ be an embedding. Let
\[E_0=:\{\delta<\lambda:({\frak A}_\delta,h \restriction \delta,\;G
\restriction\delta)\prec({\frak A},h,G)\},\]
\[E=:\{\delta<\lambda:{\rm otp}(E_0\cap\delta)=\delta\}.\]
They are clubs of $\lambda$, so for some $\delta\in S$, $C_\delta\subseteq E$
(and $\delta\in E$ for simplicity). Let $\eta_\delta$ enumerate $A_\delta$
increasingly.
There is a natural embedding $g = g_\delta$ of $B_{\bar \mu}$ into $G$:
\[g({}^* t^n_i) = x^n_{\eta_\delta(n),i}.\]
Let $\hat g_\delta$ be the unique extension of $g_\delta$ to an embedding of
$\hat B_{\bar\mu}$ into $G$; those embeddings are pure (in fact $\hat g''_\delta
(\hat B_{\bar\mu})\setminus g''_\delta(B_{\bar\mu})\subseteq G\setminus (G\cap
{\frak A}_\delta)$). So $h\circ\hat g_\delta$ is an embedding of $\hat B_{\bar
\mu}$ into $H$, not necessarily pure but still an embedding, so the distance
function can become smaller but not zero, and
\[h\circ\hat g_\delta(\hat B_{\bar\mu})\setminus h\circ g_\delta(B_{\bar\mu})
\subseteq H\setminus {\frak A}_\delta.\]
Remember $\hat B_{\bar\mu}\subseteq {\frak A}_0$ (as it belongs to ${\frak
A}_0$ and has cardinality $\prod\limits_{n<\omega}\mu_n<\lambda$ and
$\lambda\cap {\frak A}_0$ is an ordinal). By $(*)_1$ applied to
$f=h\circ\hat g_\delta$ there is a countable $Y \subseteq \hat B_{\bar \mu}$
such that $Y \notin I^0_{\bar\mu}$ and $f \restriction Y \in {\frak
A}$. But from $f \restriction Y$ we shall below reconstruct, inside
${\frak A}$, some countable sets not almost disjoint from $A_\delta$,
in contradiction to $(*)_0$ above.
As $Y\notin I^0_{\bar \mu}$ we can find an infinite
$S^*\subseteq\omega$ and, for $n\in
S^*$, elements $z_n\in\bigoplus\limits_{\alpha<\mu_n} K^n_\alpha\setminus\{0\}$ and
$y_{n,\ell}\in\hat B_{\bar\mu}$ (for $\ell<\omega$) such that:
\begin{description}
\item[$(*)_2$] $z_n+y_{n,\ell}\in\langle Y\rangle_{\hat B_{\bar\mu}}$,\qquad
and
\item[$(*)_3$] $y_{n,\ell}\in p^\ell\,\hat B_{\bar\mu}$.
\end{description}
Without loss of generality $pz_n=0\ne z_n$, hence $p\,y_{n,\ell}=0$. Let
\[\nu_\delta(n)=:\min(C_\delta\setminus (\eta_\delta(n)+1)),\quad z^*_n=
(h\circ\hat g_\delta)(z_n)\quad\mbox{ and }\quad y^*_{n,\ell}=(h\circ\hat
g_\delta)(y_{n,\ell}).\]
Now clearly $\hat g_\delta(z_n)=g_\delta(z_n)=x^n_{\eta_\delta(n),i}\in
G\restriction\nu_\delta(n)$, hence $(h\circ\hat g_\delta)(z_n)\notin H
\restriction\eta_\delta(n)$, that is $z^*_n\notin
H\restriction\eta_\delta(n)$.
So $z^*_n\in H_{\nu_\delta(n)}\setminus H_{\eta_\delta(n)}$ belongs to
the $p$-adic closure of ${\rm Rang}(f\restriction Y)$. As $H$, $G$, $h$ and
$f\restriction Y$ belong to ${\frak A}$, also $K$, the closure of
${\rm Rang}(f\restriction Y)$ in $H$ by the $p$-adic topology belongs to
${\frak A}$, and clearly $|K|\leq 2^{\aleph_0}$, hence
\[A^*=\{\alpha\in C_\delta: K\cap H_{\min(C_\delta\setminus
(\alpha+1))}\setminus H_\alpha \mbox{ is not empty}\}\]
is a subset of $C_\delta$ of cardinality $\leq 2^{\aleph_0}$ which
belongs to ${\frak A}$, hence $[A^*]^{\aleph_0}\subseteq {\frak A}$
but $A_\delta\subseteq A^*$ so $A_\delta\in {\frak A}$, a contradiction.
\hfill$\square_{\ref{5.7}}$
\section{Implications between the existence of universals}
\begin{Theorem}
\label{6.1}
Let $\bar n=\langle n_i:i<\omega\rangle$, $n_i\in [1,\omega)$. Remember
\[J^2_{\bar n}=\{A\subseteq\prod_{i<\omega} n_i:A \mbox{ is nowhere
dense}\}.\]
Assume $\lambda\ge 2^{\aleph_0}$, $T^{\aleph_0}_{J^3_{\bar n}}(\lambda)=
\lambda$ or just $T^{\aleph_0}_{J^2_{\bar n}}(\lambda)=\lambda$ for every
such $\bar n$, and
\[n<\omega\quad\Rightarrow\quad\lambda_n\le\lambda_{n+1}\le\lambda_\omega
=\lambda\quad\mbox{ and}\]
\[\lambda\le\prod_{n<\omega}\lambda_n\quad\mbox{ and }\quad\bar\lambda=
\langle\lambda_i:i\le\omega\rangle.\]
\begin{enumerate}
\item If in ${\frak K}^{fc}_{\bar \lambda}$ there is a universal member
{\em then} in ${\frak K}^{rs(p)}_\lambda$ there is a universal member.
\item If in ${\frak K}^{fc}_\lambda$ there is a universal member for
${\frak K}^{fc}_{\bar \lambda}$
{\em then} in
\[{\frak K}^{rs(p)}_{\bar\lambda}=:\{G\in {\frak K}^{rs(p)}_\lambda:\lambda_n
(G)\le\lambda_n\}\]
there is a universal member (even for ${\frak K}^{rs(p)}_\lambda$).
\end{enumerate}
($\lambda_n(G)$ were defined in \ref{1.1}).
\end{Theorem}
\begin{Remark}
\begin{enumerate}
\item Similarly for ``there are $M_i\in {\frak K}_{\lambda_1}$ ($i<\theta$)
with $\langle M_i:i<\theta\rangle$ being universal for ${\frak K}_\lambda$''.
\item The parallel of \ref{1.1} holds for ${\frak K}^{fc}_\lambda$.
\item By \S5 only the case $\lambda$ singular or $\lambda=\mu^+\ \&\
{\rm cf}(\mu)=\aleph_0\ \& \ (\forall \alpha<
\mu)(|\alpha|^{\aleph_0}<\mu)$
is of interest for \ref{6.1}.
\end{enumerate}
\end{Remark}
\proof 1)\ \ By \ref{1.1}, (2) $\Rightarrow$ (1).
More elaborately, by part (2) of \ref{6.1} below there is $H\in {\frak
K}^{rs(p)}_{\bar \lambda}$ which is universal in ${\frak
K}^{rs(p)}_{\bar \lambda}$. Clearly $|H|=\lambda$ so $H\in {\frak
K}^{rs(p)}_\lambda$, hence for proving part (1) of \ref{6.1} it
suffices to prove that $H$ is a universal member of ${\frak
K}^{rs(p)}_\lambda$. So let $G\in {\frak K}^{rs(p)}_\lambda$, and we
shall prove that it is embeddable into $H$. By \ref{1.1} there is $G'$
such that $G\subseteq G'\in {\frak K}^{rs(p)}_{\bar \lambda}$. By the
choice of $H$ there is an embedding $h$ of $G'$ into $H$. So
$h\restriction G$ is an embedding of $G$ into $H$, as required.
\noindent 2)\ \ Let $T^*$ be a universal member of ${\frak K}^{fc}_{\bar
\lambda}$ (see \S2) and let $P_\alpha = P^{T^*}_\alpha$.
Let $\chi>2^\lambda$. Without loss of generality $P_n=\{n\}\times
\lambda_n$, $P_\omega=\lambda$. Let
\[B_0=\bigoplus\{G^n_t:n<\omega,t\in P_n \},\]
\[B_1=\bigoplus \{G^n_t: n< \omega\mbox{ and }t\in P_n\},\]
where $G^n_t\cong {\Bbb Z}/p^{n+1}{\Bbb Z}$, $G^n_t$ is generated by
$x^n_t$. Let ${\frak B}\prec ({\cal H}(\chi),\in,<^*_\chi)$, $\|{\frak B}\|=
\lambda$, $\lambda+1\subseteq {\frak B}$, $T^*\in {\frak B}$, hence
$B_0$, $B_1\in {\frak B}$ and $\hat B_0, \hat B_1\in {\frak B}$ (the
torsion completions of $B_0$, $B_1$). Let $G^* =\hat B_1\cap {\frak B}$.
Let us prove that $G^*$ is universal for ${\frak K}^{rs(p)}_{\bar
\lambda}$ (by \ref{1.1} this suffices).
Let $G \in {\frak K}^{rs(p)}_{\lambda}$, so by \ref{1.1} without loss of
generality $B_0 \subseteq G\subseteq\hat B_0$. We define $R$:
\[\begin{ALIGN}
R=\big\{\eta:&\eta\in\prod\limits_{n<\omega}\lambda_n\mbox{ and for
some }
x\in G\mbox{ letting }\\
&x=\sum\{a^n_i\,p^{n-k}\,x^n_i:n<\omega,i\in w_n(x)\}\mbox{ where }\\
&w_n(x)\in [\lambda_n]^{<\aleph_0},a^n_i\,p^{n-k}\,x^n_i\ne 0\mbox{
we have }\\
&\bigwedge\limits_n\eta(n)\in w_n(x)\cup \{\ell:
\ell+|w_n(x)|\leq n\}\big\}.
\end{ALIGN}\]
Lastly let $M =:(R\cup\bigcup\limits_{n<\omega}\{n\}\times\lambda_n,\,P_n,\,
F_n)_{n<\omega}$ where $P_n=\{n\}\times\lambda_n$ and $F_n(\eta)=(n,\eta(n))$,
so clearly $M\in {\frak K}^{fc}_{\bar\lambda}$. Consequently, there is an
embedding $g:M\longrightarrow T^*$, so $g$ maps $\{n\}\times\lambda_n$ into
$P^{T^*}_n$ and $g$ maps $R$ into $P^{T^*}_\omega$. Let $g(n,\alpha)=
(n,g_n(\alpha))$ (i.e. this defines $g_n$). Clearly $g\restriction (\cup
P^M_n)=g\restriction (\bigcup\limits_n\{n\}\times\lambda_n)$ induces an
embedding $g^*$ of $B_0$ to $B_1$ (by mapping the generators into the
generators).
\noindent The problem is why:
\begin{description}
\item[$(*)$] if $x=\sum\{a^n_i\,p^{n-k}\,x^n_i:n<\omega,i\in w_n(x)\}\in G$
{\em then} $g^*(x)=\sum\{a^n_i\,p^{n-k}\,g^*(x^n_i):n<\omega,i\in w_n(x)\}\in
G^*$.
\end{description}
As $G^*=\hat B_1\cap {\frak B}$, and $2^{\aleph_0}+1\subseteq {\frak B}$, it is
enough to prove $\langle g^{\prime\prime}(w_n(x)):n<\omega\rangle\in
{\frak B}$. Now for
notational simplicity assume $\bigwedge\limits_n [|w_n(x)|\ge n+1]$ (we can add an
element of $G^*\cap {\frak B}$ or just repeat the arguments). For each $\eta
\in\prod\limits_{n<\omega} w_n(x)$ we know that
\[g(\eta)=\langle g(\eta(n)):n<\omega\rangle\in T^*\quad\mbox{ hence is in }
{\frak B}\]
(as $T^*\in {\frak B}$, $|T^*|\le\lambda$). Now by assumption there is
$A\subseteq\prod\limits_{n<\omega} w_n(x)$ which is not nowhere dense
such that $g
\restriction A\in {\frak B}$, hence for some $n^*$ and $\eta^* \in
\prod\limits_{\ell<n^*}w_\ell(x)$, $A$ is dense above $\eta^*$ (in
$\prod\limits_{n<\omega} w_n(x)$). Hence
\[\langle\{\eta(n):\eta\in A\}:n^* \le n<\omega\rangle=\langle w_n(x):n^*\le
n<\omega\rangle,\]
but the former is in ${\frak B}$ as $A\in {\frak B}$, and from the latter the
desired conclusion follows. \hfill$\square_{\ref{6.1}}$
\section{Non-existence of universals for trees with small density}
For simplicity we deal below with the case $\delta=\omega$, but the proof
works in general (as for ${\frak K}^{fr}_{\bar\lambda}$ in \S2). Section 1
hinted that we should look at ${\frak K}^{tr}_{\bar\lambda}$ not only for the case
$\bar\lambda=\langle\lambda:\alpha\le\omega\rangle$ (i.e. ${\frak
K}^{tr}_\lambda$), but in particular for
\[\bar\lambda=\langle\lambda_n:n<\omega\rangle\char 94\langle\lambda\rangle,
\qquad \lambda^{\aleph_0}_n<\lambda_{n+1}<\mu<\lambda={\rm cf}(\lambda)<
\mu^{\aleph_0}.\]
Here we get for this class (embeddings are required to preserve levels)
results stronger than the ones we got for the classes of Abelian groups we
have considered.
\begin{Theorem}
\label{7.1}
Assume that
\begin{description}
\item[(a)] $\bar\lambda=\langle\lambda_\alpha:\alpha\le\omega\rangle$,
$\lambda_n<\lambda_{n+1}<\lambda_\omega$, $\lambda=\lambda_\omega$,
all are regulars,
\item[(b)] $D$ is a filter on $\omega$ containing cobounded sets,
\item[(c)] ${\rm tcf}(\prod \lambda_n /D)=\lambda$ (indeed, we mean $=$, we could
just use $\lambda\in{\rm pcf}_D(\{\lambda_n:n<\omega\})$),
\item[(d)] $(\sum\limits_{n<\omega}\lambda_n)^+<\lambda<\prod\limits_{n<
\omega}\lambda_n$.
\end{description}
{\em Then} there is no universal member in ${\frak K}^{tr}_{\bar\lambda}$.
\end{Theorem}
\proof We first notice that there is a sequence $\bar P=\langle P_\alpha:
\sum\limits_{n<\omega}\lambda_n<\alpha<\lambda\rangle$ such that:
\begin{enumerate}
\item $|P_\alpha|<\lambda$,
\item $a\in P_\alpha\quad\Rightarrow\quad a$ is a closed subset of $\alpha$
of order type $\leq\sum\limits_{n<\omega}\lambda_n$,
\item $a\in\bigcup\limits_{\alpha<\lambda} P_\alpha\ \&\ \beta\in{\rm nacc}(a)
\quad \Rightarrow\quad a\cap\beta\in P_\beta$,
\item For all club subsets $E$ of $\lambda$, there are stationarily many
$\delta$ for which there is an $a\in\bigcup\limits_{\alpha<\lambda} P_\alpha$
such that
\[{\rm cf}(\delta)=\aleph_0\ \&\ a\in P_\delta\ \&\ {\rm otp}(a)=\sum_{n<\omega}
\lambda_n\ \&\ a\subseteq E.\]
\end{enumerate}
[Why? If $\lambda=(\sum\limits_{n<\omega}\lambda_n)^{++}$, then it is the
successor of a regular, so we use \cite[\S4]{Sh:351}, i.e.
\[\{\alpha<\lambda:{\rm cf}(\alpha)\le(\sum_{n<\omega}\lambda_n)\}\]
is the union of $(\sum\limits_{n<\omega}\lambda_n)^+$ sets with squares.\\
If $\lambda>(\sum\limits_{n<\omega}\lambda_n)^{++}$, then we can use
\cite[\S1]{Sh:420}, which guarantees that there is a stationary $S\in
I[\lambda]$.]
We can now find a sequence
\[\langle f_\alpha,g_{\alpha,a}:\alpha<\lambda,a\in P_\alpha\rangle\]
such that:
\begin{description}
\item[(a)] $\bar f=\langle f_\alpha:\alpha<\lambda\rangle$ is a
$<_D$-increasing cofinal sequence in $\prod\limits_{n<\omega}\lambda_n$,
\item[(b)] $g_{\alpha,a}\in\prod\limits_{n<\omega}\lambda_n$,
\item[(c)] $\bigwedge\limits_{\beta<\alpha} f_\beta<_D g_{\alpha,a}<_D
f_{\alpha+1}$,
\item[(d)] $\lambda_n>|a|\ \&\ \beta\in{\rm nacc}(a)\quad \Rightarrow\quad
g_{\beta,a\cap\beta}(n)<g_{\alpha,a}(n)$.
\end{description}
[How? Choose $\bar f$ by ${\rm tcf}(\prod\limits_{n<\omega}\lambda_n/D)=\lambda$.
Then choose $g$'s by induction, possibly throwing out some of the
$f$'s; this is from \cite[II, \S1]{Sh:g}.]
Let $T\in {\frak K}^{tr}_{\bar \lambda}$.
We introduce for $x\in{\rm lev}_\omega(T)$ and $\ell<\omega$ the notation
$F^T_\ell(x) = F_\ell(x)$ to denote the unique member of ${\rm lev}_\ell(T)$ which
is below $x$ in the tree order of $T$.
\noindent For $a\in\bigcup\limits_{\alpha<\lambda} P_\alpha$, let $a=\{
\alpha_{a,\xi}:\xi<{\rm otp}(a)\}$ be an increasing enumeration. We shall consider
two cases. In the first one, we assume that the following statement $(*)$
holds. In this case, the proof is easier, and maybe $(*)$ always holds for
some $D$, but we do not know this at present.
\begin{description}
\item[{{$(*)$}}] There is a partition $\langle A_n:n < \omega \rangle$ of
$\omega$ into sets not disjoint to any member of $D$.
\end{description}
In this case, let for $n\in\omega$, $D_n$ be the filter generated by $D$ and
$A_n$. Let for $a\in\bigcup\limits_{\alpha<\lambda} P_\alpha$ with ${\rm otp}(a)=
\sum\limits_{n<\omega}\lambda_n$, and for $x\in{\rm lev}_\omega(T)$,
\[{\rm inv}(x,a,T)=:\langle\xi_n(x,a,T):n<\omega\rangle,\]
where
\[\begin{ALIGN}
\xi_n(x,a,T)=:\min\big\{\xi<{\rm otp}(a)\!:&\mbox{ for some }m<\omega
\mbox{ we have }\\
& \langle F^T_\ell(x):
\ell<\omega\rangle<_{D_n} g_{\alpha', a'}\mbox{ where }\\
&\alpha'=\alpha_{a,\omega\xi+m}\mbox{ and } a'=
a\cap\alpha'\big\}.
\end{ALIGN}\]
Let
\[{\rm INv}(a,T)=:\{{\rm inv}(x,a,T):x\in T\ \&\ {\rm lev}_T(x)=\omega\},\]
\[\begin{ALIGN}
{\rm INV}(T)=:\big\{c:&\mbox{for every club } E\subseteq\lambda,\mbox{ for some }
\delta\mbox{ and } a\in\bigcup\limits_{\alpha<\lambda} P_\alpha\\
&\mbox{we have }{\rm otp}(a)=\sum\lambda_n\ \&\ a\subseteq E\ \&\ a\in P_\delta\\
&\mbox{and for some } x\in T\mbox{ of } {\rm lev}_T(x)=\omega,\ c={\rm inv}(x,a,T)
\big\}.
\end{ALIGN}\]
(Alternatively, we could have looked at the function giving each $a$ the value
${\rm INv}(a,T)$, and then divide by a suitable club guessing ideal as in
the proof in \S3, see Definition \ref{3.7}.)
\noindent Clearly
\medskip
\noindent{\bf Fact}:\hspace{0.15in} ${\rm INV}(T)$ has cardinality $\le\lambda$.
\medskip
The main point is the following
\medskip
\noindent{\bf Main Fact}:\hspace{0.15in} If ${\bf h}:T^1\longrightarrow T^2$ is
an embedding, {\em then\/}
\[{\rm INV}(T^1)\subseteq{\rm INV}(T^2).\]
\medskip
\noindent{\em Proof of the {\bf Main Fact} under $(*)$}\ \ \ We define for $n
\in\omega$
\[E_n=:\big\{\delta<\lambda_n:\,\delta>\bigcup_{\ell<n}\lambda_\ell\mbox{ and
}\left(\forall x\in{\rm lev}_n(T^1)\right)({\bf h}(x)<\delta \Leftrightarrow x<\delta)
\big\}.\]
We similarly define $E_\omega$, so $E_n$ ($n\in\omega$) and $E_\omega$ are
clubs (of $\lambda_n$ and $\lambda$ respectively). Now suppose $c\in{\rm INV}(T_1)
\setminus{\rm INV}(T_2)$. Without loss of generality $E_\omega$ is (also) a club of
$\lambda$ which exemplifies that $c\notin{\rm INV}(T_2)$. For $h\in
\prod\limits_{n<\omega}\lambda_n$, let
\[h^+(n)=:\min(E_n\setminus h(n)),\quad\mbox{ and }\quad\beta[h]=\min\{\beta
<\lambda:h<f_\beta\}.\]
(Note that $h<f_{\beta[h]}$, not just $h<_D f_{\beta[h]}$.) For a sequence
$\langle h_i:i<i^*\rangle$ of functions from $\prod\limits_{n<\omega}
\lambda_n$, we use $\langle h_i:i<i^* \rangle^+$ for $\langle h^+_i:i<i^*
\rangle$. Now let
\[E^*=:\big\{\delta<\lambda:\mbox{if }\alpha<\delta\mbox{ then }
\beta[f^+_\alpha]<\delta\mbox{ and }\delta\in{\rm acc}(E_\omega)\big\}.\]
Thus $E^*$ is a club of $\lambda$. Since $c\in{\rm INV}(T_1)$, there is $\delta<
\lambda$ and $a\in P_\delta$ such that for some $x\in{\rm lev}_\omega(T_1)$ we have
\[a\subseteq E^*\ \&\ {\rm otp}(a)=\sum_{n<\omega}\lambda_n\ \&\ c={\rm inv}(x,a,T_1).\]
Let for $n\in\omega$, $\xi_n=:\xi_n(x,a,T_1)$, so $c=\langle\xi_n:n<\omega
\rangle$. Also let for $\xi<\sum\limits_{n<\omega}\lambda_n$, $\alpha_\xi=:
\alpha_{a,\xi}$, so $a=\langle\alpha_\xi:\xi<\sum\limits_{n<\omega}\lambda_n
\rangle$ is an increasing enumeration. Now fix an $n<\omega$ and consider
${\bf h}(x)$. Then we know that for some $m$
\begin{description}
\item[($\alpha$)] $\langle F^{T_1}_\ell(x):\ell<\omega\rangle<_{D_n}
g_{\alpha',a\cap\alpha'}$ where $\alpha'=\alpha_{\omega\xi_n+m}$\qquad and
\item[($\beta$)] for no $\xi<\xi_n$ is there such an $m$.
\end{description}
Now let us look at $F^{T_1}_\ell(x)$ and $F^{T_2}_\ell({\bf h}(x))$. They are
not necessarily equal, but
\begin{description}
\item[($\gamma$)] $\min(E_\ell\setminus F^{T_1}_\ell(x))=\min(E_\ell
\setminus F^{T_2}_\ell({\bf h}(x)))$
\end{description}
(by the definition of $E_\ell$). Hence
\begin{description}
\item[($\delta$)] $\langle F^{T_1}_\ell(x):\ell<\omega\rangle^+=\langle
F^{T_2}_\ell({\bf h}(x)):\ell<\omega\rangle^+$.
\end{description}
Now note that by the choice of $g$'s
\begin{description}
\item[($\varepsilon$)] $(g_{\alpha_\varepsilon,a\cap\alpha_\varepsilon})^+
<_{D_n} g_{\alpha_{\varepsilon+1},a\cap\alpha_{\varepsilon+1}}$.
\end{description}
\relax From $(\delta)$ and $(\varepsilon)$ it follows that $\xi_n({\bf h}(x),a,
T^2)=\xi_n(x,a,T^1)$. Hence $c\in{\rm INV}(T^2)$.
\hfill$\square_{\mbox{Main Fact}}$
\medskip
Now it clearly suffices to prove:
\medskip
\noindent{\bf Fact A:}\hspace{0.15in} For each $c=\langle\xi_n:n<\omega
\rangle\in {}^\omega(\sum\limits_{n<\omega}\lambda_n)$ we can find a $T\in
{\frak K}^{tr}_{\bar\lambda}$ such that $c\in{\rm INV}(T)$.
\medskip
\noindent{\em Proof of the Fact A in case $(*)$ holds}\ \ \ For each $a\in
\bigcup\limits_{\delta<\lambda} P_\delta$ with ${\rm otp}(a)=\sum\limits_{n\in
\omega}\lambda_n$ we define $x_{c,a}=:\langle x_{c,a}(\ell):\ell<\omega
\rangle$ by:
\begin{quotation}
if $\ell\in A_n$, then $x_{c,a}(\ell)=\alpha_{a,\omega\xi_n+\delta}$.
\end{quotation}
Let
\[T=\bigcup_{n<\omega}\prod_{\ell<n}\lambda_\ell\cup\big\{x_{c,a}:a\in
\bigcup_{\delta<\lambda} P_\delta\ \&\ {\rm otp}(a)=\sum_{n<\omega}\lambda_n
\big\}.\]
We order $T$ by $\triangleleft$.
It is easy to check that $T$ is as required. \hfill$\square_A$
\medskip
Now we are left to deal with the case that $(*)$ does not hold. Let
\[{\rm pcf}(\{\lambda_n:n<\omega\})=\{\kappa_\alpha:\alpha\le\alpha^*\}\]
be an enumeration in increasing order so in particular
\[\kappa_{\alpha^*}=\max{\rm pcf}(\{\lambda_n:n<\omega\}).\]
Without loss of generality $\kappa_{\alpha^*}=\lambda$ (by throwing out some
elements if necessary) and $\lambda\cap{\rm pcf}(\{\lambda_n:n<\omega\})$ has no
last element (this appears explicitly in \cite{Sh:g}, but is also
straightforward from the pcf theorem). In particular, $\alpha^*$ is a limit
ordinal. Hence, without loss of generality
\[D=\big\{A\subseteq\omega:\lambda>\max{\rm pcf}\{\lambda_n:n\in\omega\setminus A\}
\big\}.\]
Let $\langle {\frak a}_{\kappa_\alpha}:\alpha\le\alpha^*\rangle$ be a
generating sequence for ${\rm pcf}(\{\lambda_n:n<\omega\})$, i.e.
\[\max{\rm pcf}({\frak a}_{\kappa_\alpha})=\kappa_\alpha\quad\mbox{ and }\quad
\kappa_\alpha\notin{\rm pcf}(\{\lambda_n:n<\omega\}\setminus {\frak
a}_{\kappa_\alpha}).\]
(The existence of such a sequence follows from the pcf theorem). Without
loss of generality,
\[{\frak a}_{\kappa_{\alpha^*}} = \{ \lambda_n:n < \omega\}.\]
Now note
\begin{Remark}
If ${\rm cf}(\alpha^*)=\aleph_0$, then $(*)$ holds.
\end{Remark}
Why? Let $\langle\alpha(n):n<\omega\rangle$ be a strictly increasing cofinal
sequence in $\alpha^*$. Let $\langle B_n:n<\omega\rangle$ partition $\omega$
into infinite pairwise disjoint sets and let
\[A_\ell=:\big\{k<\omega:\bigvee_{n\in B_\ell}[\lambda_k\in {\frak
a}_{\kappa_{\alpha(n)}}\setminus\bigcup_{m<n} {\frak a}_{\kappa_{\alpha(m)}}]
\big\}.\]
To check that this choice of $\langle A_\ell:\ell<\omega\rangle$ works, recall
that for all $\alpha$ we know that ${\frak a}_{\kappa_\alpha}$ does not belong to
the ideal generated by $\{{\frak a}_{\kappa_\beta}: \beta<\alpha\}$ and use
the pcf calculus. \hfill$\square$
Now let us go back to the general case, assuming ${\rm cf}(\alpha^*)>\aleph_0$.
Our problem is the possibility that
\[{\cal P}(\{\lambda_n:n<\omega\})/J_{<\lambda}[\{\lambda_n:n<\omega\}]\]
is finite. Let now $A_\alpha=:\{n:\lambda_n\in {\frak a}_\alpha\}$, and
\[\begin{array}{lll}
J_\alpha &=: &\big\{A\subseteq\omega:\max{\rm pcf}\{\lambda_\ell:\ell\in A\}<
\kappa_\alpha\big\},\\
J'_\alpha &=: &\big\{A\subseteq\omega:\max{\rm pcf}(\{\lambda_\ell:\ell\in A\}\cap
{\frak a}_{\kappa_\alpha})<\kappa_\alpha\big\}.
\end{array}\]
We define for $T\in {\frak K}^{tr}_{\bar\lambda}$, $x\in{\rm lev}_\omega(T)$,
$\alpha<\alpha^*$ and $a\in\bigcup\limits_{\delta<\lambda} P_\delta$:
\[\begin{ALIGN}
\xi^*_\alpha(x,a,T)=:\min\big\{\xi:&\bigvee_m [\langle F^T_\ell(x):\ell<
\omega\rangle <_{J'_\alpha} g_{\alpha', a'}\mbox{ where }\\
&\qquad \alpha'=\alpha_{a,\omega \xi+m}\mbox{ and }a'=a\cap
\alpha'\big\}.
\end{ALIGN}
\]
Let
\[{\rm inv}_\alpha(x,a,T)=:\langle\xi^*_{\alpha+n}(x,a,T):n<\omega\rangle,\]
\[{\rm INv}(a,T)=:\big\{{\rm inv}_\alpha(x,a,T):x\in T\ \&\ \alpha<\alpha^* \ \&\
{\rm lev}_T(x)=\omega\big\},\]
and
\[\begin{ALIGN}
{\rm INV}(T)=\big\{c:&\mbox{for every club } E^*\mbox{ of }\lambda\mbox{ for some
} a\in\bigcup\limits_{\delta<\lambda} P_\delta\\
&\mbox{with }{\rm otp}(a)=\sum\limits_{n<\omega}\lambda_n\mbox{ for arbitrarily
large }\alpha<\alpha^*,\\
&\mbox{there is }x\in{\rm lev}_\omega(T)\mbox{ such that }{\rm inv}_\alpha(x,a,T)=
c\big\}.
\end{ALIGN}\]
As before, the point is to prove the Main Fact.
\medskip
\noindent{\em Proof of the {\bf Main Fact} in general}\ \ \ Suppose ${\bf h}:
T^1\longrightarrow T^2$ and $c\in{\rm INV}(T^1)\setminus{\rm INV}(T^2)$. Let $E'$ be
a club of $\lambda$ which witnesses that $c\notin{\rm INV}(T^2)$. We define
$E_n,E_\omega$ as before, as well as $E^*$ ($\subseteq E_\omega\cap
E'$). Now let us choose $a\in
\bigcup\limits_{\delta<\lambda} P_\delta$ with $a\subseteq E^*$ and ${\rm otp}(a)=
\sum\limits_{n<\omega}\lambda_n$. So $a=\{\alpha_{a,\xi}:\xi<\sum\limits_{n<
\omega}\lambda_n\}$, which we shorten as $a=\{\alpha_\xi:\xi<\sum\limits_{n<
\omega}\lambda_n\}$. For each $\xi<\sum\limits_{n<\omega}\lambda_n$, as
before, we know that
\[(g_{\alpha_\xi,a\cap\alpha_\xi})^+<_{D}g_{\alpha_{\xi+1},a\cap
\alpha_{\xi+1}}.\]
Therefore, there are $\beta_{\xi,\ell}<\alpha^*$ ($\ell<\ell_\xi$) such that
\[\{\ell:g^+_{\alpha_\xi,a\cap\alpha_\xi}(\ell)\ge g_{\alpha_{\xi+1},a\cap
\alpha_{\xi+1}}(\ell)\}\subseteq\bigcup_{\ell<\ell_\xi} A_{\beta_{\xi,\ell}}.\]
Let $c=\langle\xi_n:n<\omega\rangle$ and let
\[\Upsilon=\big\{\beta_{\xi,\ell}:\mbox{for some } n\mbox{ and } m\mbox{ we
have }\xi=\omega\xi_n+m\ \&\ \ell<\omega\big\}.\]
Thus $\Upsilon\subseteq\alpha^*$ is countable. Since ${\rm cf}(\alpha^*)>\aleph_0$,
the set $\Upsilon$ is bounded in $\alpha^*$. Now we know that $c$ appears as
an invariant for $a$ and arbitrarily large $\delta<\alpha^*$, for some
$x_{a,\delta}\in{\rm lev}_\omega(T_1)$. If $\delta>\sup(\Upsilon)$, $c\in{\rm INV}(T^2)$
is exemplified by $a,\delta,{\bf h}(x_{a,\delta})$, just as before.
\hfill$\square$
\medskip
We still have to prove that every $c=\langle\xi_n:n<\omega\rangle$ appears as
an invariant; i.e. the parallel of Fact A.
\medskip
\noindent{\em Proof of Fact A in the general case:}\ \ \ Define for each $a\in
\bigcup\limits_{\delta<\lambda} P_\delta$ with ${\rm otp}(a)=\sum\limits_{n<\omega}
\lambda_n$ and $\beta<\alpha^*$
\[x_{c,a,\beta}=\langle x_{c,a,\beta}(\ell):\ell<\omega\rangle,\]
where
\[x_{c,a,\beta}(\ell)=\left\{
\begin{array}{lll}
\alpha_{a,\omega\xi_n+\delta} &\mbox{\underbar{if}} &\lambda_\ell\in {\frak
a}_{\beta+k}\setminus\bigcup\limits_{k'<k} {\frak a}_{\beta+k'}\\
0 &\mbox{\underbar{if}} &\lambda_\ell\notin {\frak a}_{\beta+k}\mbox{ for
any } k<\omega.
\end{array}\right.\]
Form the tree as before. Now for any club $E$ of $\lambda$, we can find $a\in
\bigcup\limits_{\delta<\lambda} P_\delta$ with ${\rm otp}(a)=\sum\limits_{n<\omega}
\lambda_n$, $a\subseteq E$ such that $\langle x_{c,a,\beta}:\beta<\alpha^*
\rangle$ shows that $c\in{\rm INV}(T)$. \hfill$\square_{\ref{7.1}}$
\begin{Remark}
\begin{enumerate}
\item Clearly, this proof shows not only that there is no one $T$ which is
universal for ${\frak K}^{tr}_{\bar\lambda}$, but that any sequence of
$<\prod\limits_{n<\omega}\lambda_n$ trees will fail. This occurs generally in
this paper, as we have tried to mention in each particular case.
\item The case ``$\lambda<2^{\aleph_0}$'' is included in the theorem, though
for the Abelian group application the assumption $\bigwedge\limits_{n<\omega}
\lambda^{\aleph_0}_n<\lambda_{n+1}$ is necessary.
\end{enumerate}
\end{Remark}
\begin{Remark}
\label{7.1A}
\begin{enumerate}
\item If $\mu^+< \lambda={\rm cf}(\lambda)<\chi<\mu^{\aleph_0}$ and
$\chi^{[\lambda]}<\mu^{\aleph_0}$
(or at least $T_{{\rm id}^a(\bar C)}(\chi)<\mu^{\aleph_0}$) we can get the results
for ``no $M \in {\frak K}^x_\chi$ is universal for ${\frak K}^x_\lambda$", see
\S8 (and \cite{Sh:456}).
\end{enumerate}
\end{Remark}
\noindent We can below (and subsequently in \S8) use $J^3_{\bar m}$ as in \S6.
\begin{Theorem}
\label{7.2}
Assume that $2^{\aleph_0}<\lambda_0$, $\bar\lambda=\langle\lambda_n:n<\omega
\rangle\char 94\langle\lambda\rangle$, $\mu=\sum\limits_{n<\omega}\lambda_n$,
$\lambda_n<\lambda_{n+1}$, $\mu^+<\lambda={\rm cf}(\lambda)<\mu^{\aleph_0}$.\\
\underbar{If}, for simplicity, $\bar m=\langle m_i:i<\omega\rangle=\langle
\omega:i<\omega\rangle$ (actually $m_i\in [2,\omega]$ or even $m_i\in
[2,\lambda_0)$, $\lambda_0<\lambda$ are O.K.) and ${\bf U}^{<\mu}_{J^2_{\bar m}}
(\lambda)=\lambda$ (remember
\[J^2_{\bar m}=\{A\subseteq\prod_{i<\omega} m_i:A \mbox{ is nowhere dense}\}\]
and definition \ref{5.3}),\\
\underbar{then} in ${\frak K}^{tr}_{\bar\lambda}$ there is no universal
member.
\end{Theorem}
\proof 1)\ \ Let $S\subseteq\lambda$, $\bar C=\langle C_\delta:\delta\in S
\rangle$ be a club guessing sequence on $\lambda$ with ${\rm otp}(C_\delta)\ge\sup
\lambda_n$. We assume that we have ${\bar{\frak A}}=\langle {\frak A}_\alpha:
\alpha<\lambda\rangle$, $J^2_{\bar m}$, $T^*\in {\frak A}_0$ ($T^*$ is a
candidate for the universal), $\bar C=\langle C_\delta:\delta\in S\rangle\in
{\frak A}_\alpha$, ${\frak A}_\alpha\prec({\cal H}(\chi),\in,<^*_\chi)$, $\chi=
\beth_7(\lambda)^+$, $\|{\frak A}_\alpha\|<\lambda$, ${\frak A}_\alpha$
increasingly continuous, $\langle {\frak A}_\beta:\beta\le\alpha\rangle\in
{\frak A}_{\alpha+1}$, ${\frak A}_\alpha\cap\lambda$ is an ordinal, ${\frak A}
=\bigcup\limits_{\alpha<\lambda}{\frak A}_\alpha$ and
\[E=:\{\alpha:{\frak A}_\alpha\cap\lambda=\alpha\}.\]
Note: $\prod\bar m\subseteq {\frak A}$ (as $\prod\bar m\in {\frak A}$ and
$|\prod\bar m|=2^{\aleph_0}$).
\noindent NOTE: By ${\bf U}^{<\mu}_{J^2_{\bar m}}(\lambda)=\lambda$,
\begin{description}
\item[$(*)$] \underbar{if} $x_\eta\in{\rm lev}_\omega(T^*)$ for $\eta\in\prod\bar
m$
\underbar{then} for some $A\in (J^2_{\bar m})^+$ the set $\langle(\eta,
x_\eta):\eta\in A\rangle$ belongs to ${\frak A}$. But then for some $\nu\in
\bigcup\limits_k\prod\limits_{i<k} m_i$, the set $A$ is dense above $\nu$ (by
the definition of $J^2_{\bar m}$) and hence: if the mapping $\eta\mapsto
x_\eta$ is continuous then $\langle x_\rho:\nu\triangleleft\rho\in\prod\bar m
\rangle\in {\frak A}$.
\end{description}
For $\delta\in S$ such that $C_\delta \subseteq E$ we let
\[\begin{ALIGN}
P^0_\delta=P^0_\delta({\frak A})=\biggl\{\bar x:&\bar x=\langle x_\rho:\rho
\in t\rangle\in {\frak A}\mbox{ and } x_\rho\in{\rm lev}_{\ell g(\rho)}T^*,\\
&\mbox{the mapping }\rho\mapsto x_\rho\mbox{ preserves all of the
relations:}\\
&\ell g(\rho)=n,\rho_1\triangleleft\rho_2, \neg(\rho_1\triangleleft\rho_2),
\neg(\rho_1=\rho_2),\\
&\rho_1\cap\rho_2=\rho_3\mbox{ (and so }\ell g(\rho_1\cap\rho_2)=n\mbox{ is
preserved});\\
&\mbox{and } t\subseteq\bigcup\limits_{\alpha\le\omega}\prod\limits_{i<
\alpha} m_i\biggr\}.
\end{ALIGN}\]
Assume $\bar x=\langle x_\rho:\rho\in t\rangle\in P^0_\delta$. Let
\[{\rm inv}(\bar x,C_\delta,T^*,{\bar {\frak A}})=:\big\{\alpha\in C_\delta:
(\exists\rho\in{\rm Dom}(\bar x))(x_\rho\in {\frak A}_{\min(C_\delta
\setminus (\alpha+1))}\setminus {\frak A}_\alpha\big)\}.\]
Let
\[\begin{array}{l}
{\rm Inv}(C_\delta,T^*,\bar{\frak A})=:\\
\big\{a:\mbox{for some }\bar x\in P^0_\delta,\ \ a\mbox{ is a
countable subset of }{\rm inv}(\bar x,C_\delta,T^*,\bar{\frak A})\big\}.
\end{array}\]
Note: ${\rm inv}(\bar x,C_\delta,T^*,\bar{\frak A})$ has cardinality at most continuum,
so ${\rm Inv}(C_\delta,T^*,\bar{\frak A})$ is a family of $\le 2^{\aleph_0}\times
|{\frak A}|=\lambda$ countable subsets of $C_\delta$.
We continue as before. Let $\alpha_{\delta,\varepsilon}$ be the
$\varepsilon$-th member of $C_\delta$ for $\varepsilon<\sum\limits_{n<\omega}
\lambda_n$. So as $\lambda<\mu^{\aleph_0}$ and $\mu>2^{\aleph_0}$,
clearly $\lambda<{\rm cf}([\lambda]^{\aleph_0},\subseteq)$ (equivalently
$\lambda<{\rm cov}(\mu,\mu,\aleph_1,2)$), hence we can find limit ordinals $\gamma_n\in
(\bigcup\limits_{\ell<n}\lambda_\ell,\lambda_n)$ such that
for each $\delta\in S$ and $a\in{\rm Inv}(C_\delta,T^*,\bar{\frak A})$ we have:
$\{\gamma_n+\ell:n<\omega\mbox{ and }\ell<m_n\}\cap a$ is bounded in $\mu$.
Now we can find $T$ such that ${\rm lev}_n(T)=\prod\limits_{\ell<n}\lambda_\ell$
and
\[\begin{ALIGN}
{\rm lev}_\omega(T)=\big\{\bar\beta:
&\bar\beta=\langle\beta_\ell:\ell<\omega
\rangle,\mbox{ and for some }\delta \in S,\mbox{ for every }\ell<\omega\\
&\mbox{we have }\beta_\ell\in\{\alpha_{\delta,\gamma_\ell+m}:m<m_\ell\}
\big\}.
\end{ALIGN} \]
So, if $T^*$ is universal there is an embedding $f:T\longrightarrow T^*$,
and hence
\[E'=\{\alpha\in E:{\frak A}_\alpha\mbox{ is closed under } f\mbox{ and }
f^{-1}\}\]
is a club of $\lambda$. By the choice of $\bar C$ for some $\delta\in S$ we
have $C_\delta\subseteq E'$. Now use $(*)$ with $x_\eta=f(\bar\beta^{\delta,
\eta})$, where $\bar\beta^{\delta,\eta}=\langle\alpha_{\delta,\gamma_\ell+
\eta(\ell)}:\ell<\omega\rangle\in{\rm lev}_\omega(T)$. Thus we get $A\in (J^2_{\bar m})^+$ such that
$\{(\eta,x_\eta):\eta\in A\}\in{\frak A}$, there is $\nu\in\bigcup\limits_k
\prod\limits_{i<k} m_i$ such that $A$ is dense above $\nu$, hence as $f$ is
continuous, $\langle(\eta,x_\eta):\nu\triangleleft\eta\in\prod\bar m\rangle
\in {\frak A}$. So $\langle x_\eta:\eta\in\prod\bar m,\nu\trianglelefteq\eta
\rangle\in P^0_\delta({\frak A})$, and hence the set
\[\{\alpha_{\delta,\gamma_\ell+m}:\ell\in [\ell g(\nu),\omega)\mbox{ and } m<
m_\ell\}\cup\{\alpha_{\delta,\gamma_i+\nu(i)}:i<\ell g(\nu)\}\]
is ${\rm inv}(\bar x,C_\delta,T^*,{\frak A})$. Hence
\[a=\{\alpha_{\delta,\gamma_\ell}\!:\ell\in [\ell g(\nu),\omega)\}\in{\rm Inv}(
C_\delta,T^*,{\frak A}),\]
contradicting
``$\{\alpha_{\delta,\gamma_\ell}:\ell<\omega\}$ has finite intersection with
any $a\in{\rm Inv}(C_\delta,T^*,{\frak A})$''. \hfill$\square_{\ref{7.2}}$
\begin{Remark}
\label{7.3}
We can a priori fix a set of $\aleph_0$ candidates and say more on their order
of appearance, so that ${\rm Inv}(\bar x,C_\delta,T^*,{\bar{\frak A}})$ has order
type $\omega$. This makes it easier to phrase a true invariant, i.e. $\langle
(\eta_n,t_n):n<\omega\rangle$ is as above, $\langle\eta_n:n<\omega\rangle$
lists ${}^{\omega >}\omega$ with no repetition, $\langle t_n\cap {}^\omega
\omega:n<\omega\rangle$ are pairwise disjoint. If $x_\rho\in{\rm lev}_\omega(T^*)$
for $\rho\in {}^\omega\omega$ and $\bar T^*=\langle T^*_\zeta:\zeta<\lambda
\rangle$ is a representation of $T^*$, then
\[\begin{array}{l}
{\rm inv}(\langle x_\rho:\rho\in {}^\omega\omega\rangle,C_\delta,\bar T^*)=\\
\big\{\alpha\in C_\delta:\mbox{for some } n,\ (\forall\rho)[\rho\in t_n\cap
{}^\omega\omega\quad\Rightarrow\quad x_\rho\in T^*_{\min(C_\delta\setminus
(\alpha+1))}\setminus T^*_\alpha]\big\}.
\end{array}\]
\end{Remark}
\begin{Remark}
\label{7.4}
If we have $\Gamma\in (J^2_{\bar m})^+$, $\Gamma$ non-meagre, $J=J^2_{\bar m}
\restriction\Gamma$ and ${\bf U}^2_J(\lambda)<\lambda^{\aleph_0}$ then we
can weaken the cardinal assumptions to:
\[\bar\lambda=\langle\lambda_n:n<\omega\rangle\char 94\langle\lambda\rangle,
\qquad \mu=\sum_n\lambda_n,\qquad\lambda_n<\lambda_{n+1},\]
\[\mu^+<\lambda={\rm cf}(\lambda)\qquad\mbox{ and }\qquad {\bf U}^2_J(\lambda)<{\rm cov}(\mu,
\mu,\aleph_1,2) (\mbox{see }\ref{0.4}).\]
The proof is similar.
\end{Remark}
\section{Universals in singular cardinals}
In \S3, \S5, \ref{7.2}, we can in fact deal with ``many'' singular cardinals
$\lambda$. This is done by proving a stronger assertion on some regular
$\lambda$. Here ${\frak K}$ is a class of models.
\begin{Lemma}
\label{8.1}
\begin{enumerate}
\item There is no universal member in ${\frak K}_{\mu^*}$ if for some $\lambda
<\mu^*$, $\theta\ge 1$ we have:
\begin{description}
\item[\mbox{$\otimes_{\lambda,\mu^*,\theta}[{\frak K}]$}] not only is there no
universal member in ${\frak K}_\lambda$, but if we assume:
\[\langle M_i:i<\theta\rangle\ \mbox{ is given, }\ \|M_i\|\le\mu^*<\prod_n
\lambda_n,\ M_i\in {\frak K},\]
then there is a structure $M$ from ${\frak K}_\lambda$ (in some cases
of a simple form) not embeddable in any $M_i$.
\end{description}
\item Assume
\begin{description}
\item[$\otimes^\sigma_1$] $\langle\lambda_n:n<\omega\rangle$ is given,
$\lambda^{\aleph_0}_n<\lambda_{n+1}$,
\[\mu=\sum_{n<\omega}\lambda_n<\lambda={\rm cf}(\lambda)\leq
\mu^*<\prod_{n<\omega}\lambda_n\]
and $\mu^+<\lambda$ or at least there is a club guessing $\bar C$ as
in $(**)^1_\lambda$ (ii) of \ref{3.4} for
$(\lambda,\mu)$.
\end{description}
\underbar{Then} there is no universal member in ${\frak K}_{\mu^*}$ (and
moreover $\otimes_{\lambda,\mu^*,\theta}[{\frak K}]$ holds) in the following
cases
\begin{description}
\item[$\otimes_2$(a)] for torsion free groups, i.e. ${\frak K}={\frak
K}^{rtf}_{\bar\lambda}$ if ${\rm cov}(\mu^*,\lambda^+,\lambda^+,\lambda)<
\prod\limits_{n<\omega}\lambda_n$ (see notation \ref{0.4} on ${\rm cov}$),
\item[\quad(b)] for ${\frak K}={\frak K}^{tcf}_{\bar\lambda}$,
\item[\quad(c)] for ${\frak K}={\frak K}^{tr}_{\bar\lambda}$ as in \ref{7.2}, if
${\rm cov}({\bf U}_{J^3_{\bar m}}(\mu^*),\lambda^+,\lambda^+,\lambda)<\prod\limits_{n<
\omega}\lambda_n$,
\item[\quad(d)] for ${\frak K}^{rs(p)}_{\bar\lambda}$: like case (c) (for
appropriate ideals), replacing $tr$ by $rs(p)$.
\end{description}
\end{enumerate}
\end{Lemma}
\begin{Remark}
\label{8.1A}
\begin{enumerate}
\item For \ref{7.2} as $\bar m=\langle\omega:i<\omega\rangle$ it is clear that
the subtrees $t_n$ are isomorphic. We can use $m_i\in [2,\omega)$, and use
coding; anyhow it is immaterial since ${}^\omega \omega,{}^\omega 2$ are
similar.
\item We can also vary $\bar\lambda$ in \ref{8.1} $\otimes_2$, case (c).
\item We can replace ${\rm cov}$ in $\otimes_2$(a),(c) by
\[\sup\{{\rm pp}_{\Gamma(\lambda)}(\chi):{\rm cf}(\chi)=\lambda,\ \lambda<\chi\le
{\bf U}_{J^3_{\bar m}}(\mu^*)\}\]
(see \cite[5.4]{Sh:355}, \ref{2.3}).
\end{enumerate}
\end{Remark}
\proof Should be clear, e.g.\\
{\em Proof of Part 2), Case (c)}\ \ \ Let $\langle T_i:i<i^* \rangle$ be
given, $i^*<\prod\limits_{n<\omega}\lambda_n$ such that
\[\|T_i\|\le\mu^*\quad\mbox{ and }\quad\mu^\otimes=:{\rm cov}({\bf U}_{J^3_{\bar
m}}(\mu^*),\lambda^+,\lambda^+,\lambda)<\prod_{n<\omega}\lambda_n.\]
By \cite[5.4]{Sh:355} and ${\rm pp}$ calculus (\cite[2.3]{Sh:355}), $\mu^\otimes=
{\rm cov}(\mu^\otimes,\lambda^+,\lambda^+,\lambda)$. Let
$\chi=\beth_7(\lambda)^+$. For $i<i^*$ choose ${\frak B}_i\prec ({\cal H}(\chi),\in,
<^*_\chi)$, $\|{\frak B}_i\|=\mu^\otimes$, $T_i\in {\frak B}_i$, $\mu^\otimes
+1\subseteq {\frak B}_i$. Let $\langle Y_\alpha: \alpha<\mu^\otimes\rangle$ be
a family of subsets of $T_i$ exemplifying the Definition of $\mu^\otimes=
{\rm cov}(\mu^\otimes,\lambda^+,\lambda^+,\lambda)$.\\
Given $\bar x=\langle x_\eta:\eta\in {}^\omega\omega\rangle$ with $x_\eta\in
{\rm lev}_\omega(T_i)$ and $\eta\mapsto x_\eta$ continuous (in our case this means
$\ell g(\eta_1\cap\eta_2)=\ell g(x_{\eta_1}\cap x_{\eta_2})$, where $\eta_1\cap\eta_2=:\max
\{\rho:\rho\triangleleft\eta_1\ \&\ \rho\triangleleft\eta_2\}$), then for some
$\eta\in {}^{\omega>}\omega$,
\[\langle x_\rho:\eta\triangleleft\rho\in {}^\omega\omega\rangle\in{\frak
B}_i.\]
So given $\left<\langle x^\zeta_\eta:\eta\in {}^\omega\omega\rangle:\zeta<
\lambda\right>$, $x^\zeta_\eta\in{\rm lev}_\omega(T_i)$ we can find $\langle (\alpha_j,\eta_j):j<j^*<\lambda\rangle$ such that:
\[\bigwedge_{\zeta<\lambda}\bigvee_j\langle x^\zeta_\eta:\eta_j\triangleleft
\eta\in {}^\omega\omega\rangle\in Y_{\alpha_j}.\]
Closing $Y_\alpha$ enough we can continue as usual. \hfill$\square_{\ref{8.1}}$
\section{Metric spaces and implications}
\begin{Definition}
\label{9.1}
\begin{enumerate}
\item ${\frak K}^{mt}$ is the class of metric spaces $M$ (i.e. $M=(|M|,d)$,
$|M|$ is the set of elements, $d$ is the metric, i.e. a two-place function
from $|M|$ to ${\Bbb R}^{\geq 0}$ such that
$d(x,y)=0\quad\Leftrightarrow\quad x=y$ and
$d(x,z)\le d(x,y)+d(y,z)$ and $d(x,y) = d(y,x)$).
An embedding $f$ of $M$ into $N$ is a one-to-one function from $|M|$ into
$|N|$ which is continuous, i.e. such that:
\begin{quotation}
\noindent if in $M$, $\langle x_n:n<\omega\rangle$ converges to $x$
\noindent then in $N$, $\langle f(x_n):n<\omega\rangle$ converges to $f(x)$.
\end{quotation}
\item ${\frak K}^{ms}$ is defined similarly but ${\rm Rang}(d)\subseteq\{2^{-n}:n
<\omega\}\cup\{0\}$ and instead of the triangular inequality we require
\[d(x,y)=2^{-i},\qquad d(y,z)=2^{-j}\quad \Rightarrow\quad d(x,z) \le
2^{-\min\{i-1,j-1\}}.\]
\item ${\frak K}^{tr[\omega]}$ is like ${\frak K}^{tr}$ but $P^M_\omega=|M|$,
embeddings preserve $x\;E_n\;y$ (not necessarily its negation) and are
one-to-one (and remember $\bigwedge\limits_n
x\;E_n\;y\quad\Rightarrow\quad x \restriction n = y \restriction n$).
\item ${\frak K}^{mt(c)}$ is the class of semi-metric spaces $M=(|M|,d)$,
which means that for the constant $c\in\Bbb R^+$ the triangular inequality is
weakened to $d(x,z)\le cd(x,y)+cd(y,z)$ with embedding as in \ref{9.1}(1)
(so for $c=1$ we get ${\frak K}^{mt}$).
\item ${\frak K}^{mt[c]}$ is the class of pairs $(A,d)$ such that $A$ is a
non-empty set, $d$ a two-place symmetric function from $A$ to ${\Bbb
R}^{\ge 0}$ such that $[d(x,y)=0\ \Leftrightarrow\ x=y]$ and
\[d(x_0,x_n)\le c\sum\limits_{\ell<n} d(x_\ell,x_{\ell+1})\ \ \mbox{ for any
$n<\omega$ and $x_0,\ldots,x_n\in A$.}\]
\item ${\frak K}^{ms(c)}$, ${\frak K}^{ms[c]}$ are defined analogously.
\item ${\frak K}^{rs(p),\mbox{pure}}$ is defined like ${\frak K}^{rs(p)}$ but
the embeddings are pure.
\end{enumerate}
\end{Definition}
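\noindent [A trivial observation, recorded only for the reader's orientation and not used later:
taking $n=2$ in \ref{9.1}(5) gives $d(x_0,x_2)\le c\,(d(x_0,x_1)+d(x_1,x_2))$, so every member of
${\frak K}^{mt[c]}$ is a member of ${\frak K}^{mt(c)}$ (with the same notion of embedding); by
\ref{9.3}(3) below, for $c>1$ the two classes are in fact equivalent in the sense of \ref{9.2A}.]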
\begin{Remark}
There are, of course, other notions of embeddings; isometric embeddings if $d$
is preserved, co-embeddings if the image of an open set is open, bi-continuous
means an embedding which is also a co-embedding. The isometric embedding is the
weakest notion; its case is essentially equivalent to the ${\frak
K}^{tr}_{\lambda}$ case (as in \ref{9.3}(3)); for the open case there
is a universal member: the
discrete space. A universal member for ${\frak K}^{mt}_\lambda$ under bi-continuous
embeddings exists in cardinality $\lambda^{\aleph_0}$, see \cite{Ko57}.
\end{Remark}
\begin{Definition}
\label{9.1A}
\begin{enumerate}
\item ${\rm Univ}^0({\frak K}^1,{\frak K}^2)=\{(\lambda,\kappa,\theta):$ there are
$M_i\in {\frak K}^2_\kappa$ for $i<\theta$ such that any $M\in {\frak
K}^1_\lambda$ can be embedded into some $M_i\}$. We may omit $\theta$
if it is 1. We may omit the superscript 0.
\item ${\rm Univ}^1({\frak K}^1,{\frak K}^2)=\{(\lambda,\kappa,\theta):$ there are
$M_i\in {\frak K}^2_\kappa$ for $i<\theta$ such that any $M\in {\frak
K}^1_\lambda$ can be represented as the union of $<\lambda$ sets $A_\zeta$
($\zeta<\zeta^*<\lambda$) such that each $M\restriction A_\zeta$ is a
$\leq_{{\frak K}^1}$-submodel of $M$ and can be embedded into some $M_i\}$.
\item If above ${\frak K}^1={\frak K}^2$ we write it just once; (naturally we
usually assume ${\frak K}^1 \subseteq {\frak K}^2$).
\end{enumerate}
\end{Definition}
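\noindent [Just to unwind the definition: $(\lambda,\kappa,1)\in{\rm Univ}^0({\frak K})$ means that
there is a single $M\in{\frak K}_\kappa$ into which every member of ${\frak K}_\lambda$ can be
embedded; in particular $(\lambda,\lambda,1)\in{\rm Univ}({\frak K})$ says exactly that ${\frak
K}_\lambda$ has a universal member, which is how the notation is used in the open problems at the
end of the paper.]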
\begin{Remark}
\begin{enumerate}
\item We prove our theorems for ${\rm Univ}^0$; we can get parallel results for
${\rm Univ}^1$.
\item Many previous results of this paper can be rephrased using a pair of
classes.
\item We can make \ref{9.2} below deal with pairs and/or function $H$ changing
cardinality.
\item ${\rm Univ}^\ell$ has the obvious monotonicity properties.
\end{enumerate}
\end{Remark}
\begin{Proposition}
\label{9.2}
\begin{enumerate}
\item Assume ${\frak K}^1,{\frak K}^2$ have the same models as their members
and every embedding for ${\frak K}^2$ is an embedding for ${\frak K}^1$.\\
Then ${\rm Univ}({\frak K}^2)\subseteq{\rm Univ}({\frak K}^1)$.
\item Assume there is for $\ell=1,2$ a function $H_\ell$ from ${\frak K}^\ell$
into ${\frak K}^{3-\ell}$ such that:
\begin{description}
\item[(a)] $\|H_1(M_1)\|=\|M_1\|$ for $M_1\in {\frak K}^1$,
\item[(b)] $\|H_2(M_2)\|=\|M_2\|$ for $M_2\in {\frak K}^2$,
\item[(c)] if $M_1\in {\frak K}^1$, $M_2\in {\frak K}^2$, $H_1(M_1)\in {\frak
K}^2$ is embeddable into $M_2$ \underbar{then} $M_1$ is embeddable into
$H_2(M_2)\in {\frak K}^1$.
\end{description}
\underbar{Then}\quad ${\rm Univ}({\frak K}^2)\subseteq{\rm Univ}({\frak K}^1)$.
\end{enumerate}
\end{Proposition}
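\noindent [Why does (2) hold? A short check, using only clauses (a)--(c): if $\langle
M_i:i<\theta\rangle$, $M_i\in{\frak K}^2_\kappa$, witnesses $(\lambda,\kappa,\theta)\in{\rm
Univ}({\frak K}^2)$, then by clause (b) each $H_2(M_i)\in{\frak K}^1$ has cardinality $\kappa$; and
for any $M\in{\frak K}^1_\lambda$, by clause (a) $H_1(M)\in{\frak K}^2$ has cardinality $\lambda$,
hence is embeddable into some $M_i$, so by clause (c) $M$ is embeddable into $H_2(M_i)$. Thus
$\langle H_2(M_i):i<\theta\rangle$ witnesses $(\lambda,\kappa,\theta)\in{\rm Univ}({\frak K}^1)$.]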
\begin{Definition}
\label{9.2A}
We say ${\frak K}^1\le {\frak K}^2$ if the assumptions of \ref{9.2}(2)
hold. We say ${\frak K}^1\equiv {\frak K}^2$ if ${\frak K}^1\le {\frak K}^2
\le {\frak K}^1$ (so larger means with fewer cases of universality).
\end{Definition}
\begin{Theorem}
\label{9.3}
\begin{enumerate}
\item The relation ``${\frak K}^1\le {\frak K}^2$'' is a quasi-order
(i.e. transitive and reflexive).
\item If $({\frak K}^1,{\frak K}^2)$ are as in \ref{9.2}(1) then ${\frak K}^1
\le {\frak K}^2$ (use $H_1 = H_2 =$ the identity).
\item For $c_1>1$ we have ${\frak K}^{mt(c_1)}\equiv {\frak K}^{mt[c_1]}\equiv
{\frak K}^{ms[c_1]}\equiv {\frak K}^{ms(c_1)}$.
\item ${\frak K}^{tr[\omega]} \le {\frak K}^{rs(p)}$.
\item ${\frak K}^{tr[\omega]} \le {\frak K}^{tr(\omega)}$.
\item ${\frak K}^{tr(\omega)} \le {\frak K}^{rs(p),\mbox{pure}}$.
\end{enumerate}
\end{Theorem}
\proof 1)\ \ Check.\\
2)\ \ Check. \\
3)\ \ Choose $n(*)<\omega$ large enough and ${\frak K}^1,{\frak K}^2$ any two
of the four. We define $H_1,H_2$ as follows. $H_1$ is the identity. For $(A,d)
\in{\frak K}^\ell$ let $H_\ell((A,d))=(A,d^{[\ell]})$ where $d^{[\ell]}(x,y)=
\inf\{1/(n+n(*)):2^{-n}\ge d(x,y)\}$ (the result is not necessarily a metric
space, $n(*)$ is chosen so that the semi-metric inequality holds). The point
is to check clause (c) of \ref{9.2}(2); so assume $f$ is a function which
${\frak K}^2$-embeds $H_1((A_1,d_1))$ into $(A_2,d_2)$; but
\[H_1((A_1,d_1))=(A_1,d_1),\quad H_2((A_2,d_2))=(A_2,d^{[2]}_2),\]
so it is enough to check that $f$ is a function which ${\frak K}^1$-embeds
$(A_1,d^{[1]}_1)$ into $(A_2,d^{[2]}_2)$ i.e. it is one-to-one (obvious) and
preserves limit (check).\\
4)\ \ For $M=(A,E_n)_{n<\omega}\in {\frak K}^{tr[\omega]}$, without loss of
generality $A\subseteq {}^\omega\lambda$ and
\[\eta E_n\nu\qquad\Leftrightarrow\qquad\eta\in A\ \&\ \nu\in A\ \&\
\eta\restriction n=\nu\restriction n.\]
Let $B=\{\eta\restriction n:\eta\in A\mbox{ and } n<\omega\}$. We define
$H_1(M)$ as the (Abelian) group generated by
\[\{x_\eta:\eta\in A\cup B\}\cup\{y_{\eta,n}:\eta\in A,n<\omega\}\]
freely except
\[\begin{array}{rcl}
p^{n+1}x_\eta=0 &\quad\mbox{\underbar{if}}\quad &\eta\in B, \ell g(\eta)=n\\
y_{\eta,0}=x_\eta &\quad\mbox{\underbar{if}}\quad &\eta\in A\\
py_{\eta,n+1}-y_{\eta,n}=x_{\eta\restriction n} &\quad\mbox{\underbar{if}}\quad
&\eta\in A, n<\omega\\
p^{n+1}y_{\eta, n}=0 &\quad\mbox{\underbar{if}}\quad &\eta\in A, n<\omega.
\end{array}\]
For $G\in {\frak K}^{rs(p)}$ let $H_2(G)$ be $(A,E_n)_{n<\omega}$ with:
\[A = G,\quad xE_ny\quad\underbar{iff}\quad G\models\mbox{``}p^n\mbox{ divides
}(x-y)\mbox{''.}\]
$H_2(G)\in {\frak K}^{tr[\omega]}$ as ``$G$ is separable'' implies $(\forall
x)(x \ne 0\ \Rightarrow\ (\exists n)[x\notin p^nG])$. Clearly clauses
(a), (b) of
\ref{9.2}(2) hold. As for clause (c), assume $(A,E_n)_{n<\omega}
\in {\frak K}^{tr[\omega]}$. As only the isomorphism type counts without loss
of generality $A\subseteq {}^\omega\lambda$. Let $B=\{\eta\restriction n:n<
\omega,\ \eta\in A\}$ and $G=H_1((A,E_n)_{n<\omega})$ be as above. Suppose that
$f$ embeds $G$ into some $G^*\in {\frak K}^{rs(p)}$, and let $(A^*,E^*_n)_{n<
\omega}$ be $H_2(G^*)$. We should prove that $(A,E_n)_{n<\omega}$ is
embeddable into $(A^*,E^*_n)$.\\
Let $f^*:A\longrightarrow A^*$ be defined by $f^*(\eta)=f(x_\eta)\in A^*$. Clearly $f^*$ is
one-to-one from $A$ into $A^*$; if $\eta E_n \nu$ then $\eta\restriction n=\nu
\restriction n$, hence $G \models$ ``$p^n$ divides $(x_\eta-x_\nu)$'', hence
$G^*\models$ ``$p^n$ divides $(f(x_\eta)-f(x_\nu))$'', so
$(A^*,E^*_n)_{n<\omega}\models f^*(\eta)\; E^*_n\; f^*(\nu)$. \hfill$\square_{\ref{9.3}}$
\begin{Remark}
\label{9.3A}
In \ref{9.3}(4) we can prove ${\frak K}^{tr[\omega]}_{\bar\lambda}\le{\frak
K}^{rs(p)}_{\bar\lambda}$.
\end{Remark}
\begin{Theorem}
\label{9.4}
\begin{enumerate}
\item ${\frak K}^{mt} \equiv {\frak K}^{mt(c)}$ for $c \ge 1$.
\item ${\frak K}^{mt} \equiv {\frak K}^{ms[c]}$ for $c > 1$.
\end{enumerate}
\end{Theorem}
\proof 1)\ \ Let $H_1:{\frak K}^{mt}\longrightarrow {\frak K}^{mt(c)}$ be the
identity. Let $H_2:{\frak K}^{mt(c)}\longrightarrow {\frak K}^{mt}$ be defined
as follows:\\
$H_2((A,d))=(A,d^{mt})$, where
\[\begin{array}{l}
d^{mt}(y,z)=\\
\inf\big\{\sum\limits_{\ell<n} d(x_\ell,x_{\ell+1}):n<\omega\ \&\
x_\ell\in A\mbox{ (for $\ell\le n$)}\ \&\ x_0=y\ \&\ x_n=z\big\}.
\end{array}\]
Now
\begin{description}
\item[$(*)_1$] $d^{mt}$ is a two-place function from $A$ to ${\Bbb
R}^{\ge 0}$,
is symmetric, $d^{mt}(x,x)=0$ and it satisfies the triangular inequality.
\end{description}
This is true even on ${\frak K}^{mt(c)}$, but here also
\begin{description}
\item[$(*)_2$] $d^{mt}(x,y) = 0 \Leftrightarrow x=y$.
\end{description}
[Why? As by the Definition of ${\frak K}^{mt[c]},d^{mt}(x,y)\ge{\frac 1c}
d(x,y)$. Clearly clauses (a), (b) of \ref{9.2}(2) hold.]\\
Next,
\begin{description}
\item[$(*)_3$] If $M,N\in {\frak K}^{mt}$, $f$ is an embedding (for ${\frak
K}^{mt}$) of $M$ into $N$ then $f$ is an embedding (for ${\frak K}^{mt[c]}$)
of $H_1(M)$ into $H_1(N)$
\end{description}
[why? as $H_1(M)=M$ and $H_1(N)=N$],
\begin{description}
\item[$(*)_4$] If $M,N\in {\frak K}^{mt[c]}$, $f$ is an embedding (for
${\frak K}^{mt[c]}$) of $M$ into $N$ then $f$ is an embedding (for ${\frak
K}^{mt}$) of $H_2(M)$ into $H_2(N)$
\end{description}
[why? as $H^*_\ell$ preserves $\lim\limits_{n\to\infty} x_n=x$ and
$\lim\limits_{n\to\infty} x_n\ne x$].
So two applications of \ref{9.2} give the equivalence. \\
2)\ \ We combine $H_2$ from the proof of (1) and the proof of \ref{9.3}(3).
\hfill$\square_{\ref{9.4}}$
\begin{Definition}
\label{9.6}
\begin{enumerate}
\item If $\bigwedge\limits_n\mu_n=\aleph_0$ let
\[\hspace{-0.5cm}\begin{array}{ll}
J^{mt}=J^{mt}_{\bar\mu}=\big\{A\subseteq\prod\limits_{n<\omega}\mu_n:&
\mbox{for every } n \mbox{ large enough, } \\
\ & \mbox{ for every }\eta\in\prod\limits_{\ell
<n}\mu_\ell\\
\ &\mbox{the set }\{\eta'(n):\eta\triangleleft\eta'\in A\}\mbox{ is finite}
\big\}.
\end{array}\]
\item Let $T=\bigcup\limits_{\alpha\le\omega}\prod\limits_{n<\alpha}\mu_n$,
$(T,d^*)$ be a metric space such that
\[\prod_{\ell<n}\mu_\ell\cap\mbox{closure}\left(\bigcup_{m<n}\prod_{\ell<m}
\mu_\ell\right)=\emptyset;\]
now
\[\begin{ALIGN}
I^{mt}_{(T,d^*)}=:\big\{A\subseteq\prod\limits_{n<\omega}\mu_n:&\mbox{ for
some }n,\mbox{ the closure of } A\mbox{ (in $(T,d^*)$)}\\
&\mbox{ is disjoint to }\bigcup\limits_{m\in [n,\omega)}\prod\limits_{\ell
<m} \mu_\ell\big\}.
\end{ALIGN}\]
\item Let $H\in {\frak K}^{rs(p)}$, $\bar H=\langle H_n:n<\omega\rangle$, $H_n
\subseteq H$ pure and closed, $n<m\ \Rightarrow\ H_n\subseteq H_m$ and $\bigcup\limits_{n<\omega} H_n$ is dense in $H$. Let
\[\begin{array}{ll}
I^{rs(p)}_{H,\bar H}=:\big\{A\subseteq H:&\mbox{for some } n\mbox{ the closure
of }\langle A\rangle_H\mbox{ intersected with}\\
&\bigcup\limits_{\ell<\omega}H_\ell\mbox{ is included in }H_n\big\}.
\end{array}\]
\end{enumerate}
\end{Definition}
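\noindent [As an illustration of clause (1) only (not used later): if $g\in\prod\limits_{n<\omega}
\mu_n$ and $A_k=\{\eta\in\prod\limits_{n<\omega}\mu_n:(\forall n\ge k)\ \eta(n)\le g(n)\}$, then
$A_k\in J^{mt}_{\bar\mu}$, since for $n\ge k$ the set $\{\eta'(n):\eta\triangleleft\eta'\in A_k\}$
is included in the finite set $\{\xi:\xi\le g(n)\}$; hence $\{\eta:\eta\le^* g\}=
\bigcup\limits_{k<\omega}A_k$ belongs to the $\sigma$-ideal generated by $J^{mt}_{\bar\mu}$, which
is in line with \ref{9.7}(2) below.]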
\begin{Proposition}
\label{9.5}
Suppose that $2^{\aleph_0}<\mu$ and $\mu^+<\lambda={\rm cf}(\lambda)<
\mu^{\aleph_0}$ and
\begin{description}
\item[$(*)_\lambda$] ${\bf U}_{J^{mt}_{\bar\mu}}(\lambda)=\lambda$ or at least
${\bf U}_{J^{mt}_{\bar\mu}}(\lambda)<\lambda^{\aleph_0}$ for some $\bar\mu=\langle
\mu_n:n<\omega\rangle$ such that $\prod\limits_{n<\omega}\mu_n<\lambda$.
\end{description}
Then ${\frak K}^{mt}_\lambda$ has no universal member.
\end{Proposition}
\begin{Proposition}
\label{9.7}
\begin{enumerate}
\item $J^{mt}$ is $\aleph_1$-based.
\item The minimal cardinality of a set which is not in the
$\sigma$-ideal generated by $J^{mt}$ is ${\frak b}$.
\item $I^{mt}_{(T,d^*)},I^{rs(p)}_{H,\bar H}$ are $\aleph_1$-based.
\item $J^{mt}$ is a particular case of $I^{mt}_{(T,d^*)}$ (i.e. for some
choice of $(T,d^*)$).
\item $I^0_{\bar \mu}$ is a particular case of $I^{rs(p)}_{H,\bar H}$.
\end{enumerate}
\end{Proposition}
\proof of \ref{9.5}. Let
$$
\begin{array}{ll}
T_\alpha=\{(\eta, \nu)\in{}^\alpha\lambda\times {}^\alpha(\omega+
1):& \mbox{ for every }n\mbox{ such that }n+1< \alpha\\
\ & \mbox{ we have
}\nu(n)< \omega\}
\end{array}
$$
and let $T=\bigcup\limits_{\alpha\le
\omega}T_\alpha$. We define on $T$ the relation $\le_T$:
\[(\eta_1,\nu_1)\le_T(\eta_2,\nu_2)\quad\mbox{ iff }\quad\eta_1\trianglelefteq
\eta_2\ \&\ \nu_1\trianglelefteq\nu_2.\]
We define a metric:\\
if $(\eta_1,\nu_1)\ne(\eta_2,\nu_2)\in T$ and $(\eta,\nu)$ is their maximal
common initial segment and $(\eta,\nu)\in T$ then necessarily $\alpha=
\ell g((\eta,\nu))<\omega$ and we let:
\begin{quotation}
\noindent if $\eta_1(\alpha)\ne\eta_2(\alpha)$ then
\[d\left((\eta_1,\nu_1),(\eta_2,\nu_2)\right)=2^{-\sum\{\nu(\ell):\ell<
\alpha\}},\]
if $\eta_1(\alpha)=\eta_2(\alpha)$ (so $\nu_1(\alpha)\ne\nu_2(\alpha)$) then
\[d\left((\eta_1,\nu_1),(\eta_2,\nu_2)\right)=2^{-\sum\{\nu(\ell):\ell<\alpha
\}}\times 2^{-\min\{\nu_1(\alpha),\nu_2(\alpha)\}}.\]
\end{quotation}
Now, for every $S\subseteq\{\delta<\lambda:{\rm cf}(\delta)=\aleph_0\}$, and $\bar
\eta=\langle\eta_\delta:\delta\in S\rangle$, $\eta_\delta\in {}^\omega
\delta$, $\eta_\delta$ increasing, let $M_{\bar\eta}$ be $(T,d)\restriction A_{\bar
\eta}$, where
\[A_{\bar\eta}=\bigcup_{n<\omega} T_n\cup\{(\eta_\delta,\nu):\delta\in S,\;\nu
\in {}^\omega\omega\}.\]
The rest is as in previous cases (note that $\langle(\eta\char 94\langle
\alpha \rangle,\nu\char 94\langle n\rangle):n<\omega\rangle$ converges to
$(\eta\char 94\langle\alpha\rangle,\nu\char 94\langle\omega\rangle)$
and even if $(\eta\char 94\langle \alpha\rangle, \nu\char 94\langle
n\rangle)\leq (\eta_n, \nu_n)\in T_\omega$ then $\langle(\eta_n,
\nu_n): n<\omega\rangle$ converges to $(\eta\char 94 \langle
\alpha\rangle, \nu\char 94\langle \omega\rangle)$).
\hfill$\square_{\ref{9.5}}$
\begin{Proposition}
\label{9.8}
If ${\rm IND}_{\chi'}(\langle\mu_n:n<\omega\rangle)$, then $\prod\limits_{n<\omega}
\mu_n$ is not the union of $\le\chi$ members of $I^0_{\bar\mu}$ (see
Definition \ref{5.5A} and Theorem \ref{5.5}).
\end{Proposition}
\proof Suppose that $A_\zeta=\{\sum\limits_{n<\omega}
p^nx^n_{\alpha_n}:\langle \alpha_n:n<\omega\rangle\in X_\zeta\}$ and
$\alpha_n<\mu_n$ are such that if $\sum
p^nx^n_{\alpha_n}\in A_\zeta$ then for infinitely many $n$ for every
$k<\omega$ there is $\langle \beta_n:n<\omega\rangle$,
\[(\forall\ell<k)[\alpha_\ell=\beta_\ell\ \ \Leftrightarrow\ \ \ell=n]\qquad
\mbox{ and }\qquad\sum_{n<\omega}p^nx^n_{\beta_n}\in A_\zeta\ \ \mbox{ (see
\S5).}\]
This clearly follows. \hfill$\square_{\ref{9.8}}$
\section{On Modules}
Here we present the straightforward generalization of the one-prime case, i.e. Abelian
reduced separable $p$-groups. This will be expanded in \cite{Sh:622}
(including the proof of \ref{10.4new}).
\begin{Hypothesis}
\label{10.1}
\begin{description}
\item[(A)] $R$ is a ring, $\bar{\frak e}=\langle{\frak e}_n:n<\omega\rangle$, where
${\frak e}_n$ is a definition of an additive subgroup of $R$-modules by an
existential positive formula (finitary or infinitary), decreasing with $n$; we
write ${\frak e}_n(M)$ for this additive subgroup, and ${\frak e}_\omega(M)=
\bigcap\limits_n {\frak e}_n(M)$.
\item[(B)] ${\frak K}$ is the class of $R$-modules.
\item[(C)] ${\frak K}^*\subseteq {\frak K}$ is a class of $R$-modules, which
is closed under direct summands, direct limits, and for which there is $M^*$, $x^*
\in M^*$, $M^*=\bigoplus\limits_{\ell\le n} M^*_\ell\oplus M^{**}_n$, $M^*_n
\in {\frak K}$, $x^*_n\in {\frak e}_n(M^*_n)\setminus {\frak e}_{n+1}(M^*)$,
$x^*-\sum\limits_{\ell<n} x^*_\ell\in {\frak e}_n(M^*)$.
\end{description}
\end{Hypothesis}
\begin{Definition}
\label{10.2}
For $M_1,M_2\in {\frak K}$, we say $h$ is a $({\frak K},\bar {\frak
e})$-homomorphism from $M_1$ to $M_2$ if it is a homomorphism and it maps $M_1
\setminus {\frak e}_\omega(M_1)$ into $M_2\setminus {\frak
e}_\omega(M_2)$;
we say $h$ is an $\bar {\frak e}$-pure homomorphism if for each $n$ it
maps $M_1\setminus {\frak e}_n(M_1)$ into $M_2\setminus {\frak
e}_n(M_2)$.
\end{Definition}
\begin{Definition}
\label{10.3}
\begin{enumerate}
\item Let $H_n \subseteq H_{n+1} \subseteq H$, $\bar H=\langle H_n:n<\omega
\rangle$, $c\ell$ is a closure operation on $H$, $c\ell$ is a function from
${\cal P}(H)$ to itself and
\[X \subseteq c \ell(X) = c \ell(c \ell(X)).\]
Define
\[I_{H,\bar H,c\ell}=\big\{A\subseteq H:\mbox{for some }k<\omega\mbox{ we have
} c\ell(A)\cap\bigcup_{n<\omega} H_n\subseteq H_k\big\}.\]
\item We can replace $\omega$ by any regular $\kappa$ (so $\bar H=\langle H_i:i<
\kappa\rangle$).
\end{enumerate}
\end{Definition}
\begin{Claim}
\label{10.4new}
Assume $|R|+\mu^+< \lambda = {\rm cf}(\lambda)< \mu^{\aleph_0}$, then for
every $M\in {\frak K}_\lambda$ there is $N\in {\frak K}_\lambda$ with
no $\bar {\frak e}$-pure homomorphism from $N$ into $M$.
\end{Claim}
\begin{Remark}
In the interesting cases $c\ell$ has infinitary character.\\
The applications here are for $\kappa=\omega$. For the theory, ${\rm pcf}$
is nicer for higher $\kappa$.
\end{Remark}
\section{Open problems}
\begin{Problem}
\begin{enumerate}
\item If $\mu^{\aleph_0}\ge\lambda$ then any $(A,d)\in {\frak K}^{mt}_\lambda$
can be embedded into some $M'\in {\frak K}^{mt}_\lambda$ with density
$\le\mu$.
\item If $\mu^{\aleph_0}\ge\lambda$ then any $(A,d)\in {\frak K}^{ms}_\lambda$
can be embedded into some $M'\in {\frak K}^{ms}_\lambda$ with density
$\le\mu$.
\end{enumerate}
\end{Problem}
\begin{Problem}
\begin{enumerate}
\item Find other inclusions among the ${\rm Univ}({\frak K}^x)$, or show the consistency of
non-inclusions (see \S9).
\item Is ${\frak K}^1\le {\frak K}^2$ the right partial order? (see \S9).
\item By forcing reduce consistency of ${\bf U}_{J_1}(\lambda)>\lambda+
2^{\aleph_0}$ to that of ${\bf U}_{J_2}(\lambda)>\lambda+2^{\aleph_0}$.
\end{enumerate}
\end{Problem}
\begin{Problem}
\begin{enumerate}
\item The cases with the weak ${\rm pcf}$ assumptions: can they be resolved in
ZFC? (The ${\rm pcf}$ problems are another matter.)
\item Use \cite{Sh:460}, \cite{Sh:513} to get ZFC results for large enough
cardinals.
\end{enumerate}
\end{Problem}
\begin{Problem}
If $\lambda^{\aleph_0}_n<\lambda_{n+1}$, $\mu=\sum\limits_{n<\omega}
\lambda_n$, $\lambda=\mu^+<\mu^{\aleph_0}$ can $(\lambda,\lambda,1)$ belong to
${\rm Univ}({\frak K})$? For ${\frak K}={\frak K}^{tr},{\frak K}^{rs(p)},{\frak
K}^{trf}$?
\end{Problem}
\begin{Problem}
\begin{enumerate}
\item If $\lambda=\mu^+$, $2^{<\mu}=\lambda<2^\mu$, can $(\lambda,\lambda,1)
\in {\rm Univ}({\frak K}^{\mbox{or}})$, where ${\frak K}^{\mbox{or}}$ is the class of linear orders?
\item Similarly for $\lambda=\mu^+$, $\mu$ singular, strong limit, ${\rm cf}(\mu)=
\aleph_0$, $\lambda<\mu^{\aleph_0}$.
\item Similarly for $\lambda=\mu^+$, $\mu=2^{<\mu}=\lambda^+ <2^\mu$.
\end{enumerate}
\end{Problem}
\begin{Problem}
\begin{enumerate}
\item Analyze the existence of universal member from ${\frak
K}^{rs(p)}_\lambda$, $\lambda<2^{\aleph_0}$.
\item \S4 for many cardinals, i.e. is it consistent that:
$2^{\aleph_0}> \aleph_\omega$ and for every $\lambda< 2^{\aleph_0}$
there is a universal member of ${\frak K}^{rs(p)}_\lambda$?
\end{enumerate}
\end{Problem}
\begin{Problem}
\begin{enumerate}
\item If there are $A_i\subseteq\mu$ for $i<2^{\aleph_0}$, $|A_i\cap A_j|<
\aleph_0$, $2^\mu=2^{\aleph_0}$, find a forcing adding $S\subseteq [{}^\omega
\omega]^\mu$ universal for $\{(B, \vartriangleleft):
{}^{\omega>}\omega \subseteq B\subseteq {}^{\omega\geq}\omega, |B|\leq
\lambda\}$ under (level preserving) natural embedding.
\end{enumerate}
\end{Problem}
\begin{Problem}
For simple countable $T$ and $\kappa=\kappa^{<\kappa}<\lambda\le\chi$,
force the existence of a universal model for $T$ in $\lambda$ with still $\kappa=\kappa^{<
\kappa}$ but $2^\kappa=\chi$.
\end{Problem}
\begin{Problem}
Make \cite[\S4]{Sh:457}, \cite[\S1]{Sh:500} work for a larger class of
theories than the simple ones.
\end{Problem}
See on some of these problems \cite{DjSh:614}, \cite{Sh:622}.
\bibliographystyle{lit-plain}
\section{Introduction}
Recently much attention has been given to lower dimensional gauge theories.
Such remarkable results as the chiral symmetry breaking \cite{1}, quantum
Hall effect \cite{2}, spontaneously broken Lorentz invariance by the
dynamical generation of a magnetic field \cite{3}, and the connection
between non-perturbative effects in low-energy strong interactions and QCD$%
_{2}$ \cite{4}, show the broad range of applicability of these theories.
In particular, 2+1 dimensional gauge theories with fractional statistics
-anyon systems \cite{4a}- have been extensively studied. One main reason for
such an interest has been the belief that a strongly correlated electron
system in two dimensions can be described by an effective field theory of
anyons \cite{5}, \cite{5a}. Besides, it has been claimed that anyons could
play a basic role in high-T$_{C}$ superconductivity \cite{5a}-\cite{6b}. It
is known \cite{a} that a charged anyon system in two spatial dimensions can
be modeled by means of a 2+1 dimensional Maxwell-Chern-Simons (MCS) theory.
An important feature of this theory is that it violates parity and
time-reversal invariance. However, at present no experimental evidence of P
and T violation in high-T$_{C}$ superconductivity has been found. It should
be pointed out, nevertheless, that it is possible to construct more
sophisticated P and T invariant anyonic models\cite{6a}. In any case,
whether linked to high-T$_{C}$ superconductivity or not, the anyon system is
an interesting theoretical model in its own right.
The superconducting behavior of anyon systems at $T=0$ has been investigated
by many authors \cite{6}-\cite{15a}. Crucial to the existence of anyon
superconductivity at $T=0$ is the exact cancellation between the bare and
induced Chern-Simons terms in the effective action of the theory.
Although a general consensus exists regarding the superconductivity of anyon
systems at zero temperature, a similar consensus at finite temperature is
yet to be achieved \cite{8}-\cite{16}. Some authors (see ref. \cite{9}) have
concluded that the superconductivity is lost at $T\neq 0$, based upon the
appearance of a temperature-dependent correction to the induced Chern-Simons
coefficient that is not cancelled out by the bare term. In ref. \cite{11} it
is argued, however, that this finite temperature correction is numerically
negligible at $T<200$ $K$, and that the main reason for the lack of a
Meissner effect is the development of a pole $\sim \left( \frac{1}{{\bf k}%
^{2}}\right) $ in the polarization operator component $\Pi _{00}$ at $T\neq
0 $. There, it is discussed how the existence of this pole leads to a so
called partial Meissner effect with a constant magnetic field penetration
throughout the sample that appreciably increases with temperature. On the
other hand, in ref. \cite{8}, it has been independently claimed that the
anyon model cannot superconduct at finite temperature due to the existence
of a long-range mode, found inside the infinite bulk at $T\neq 0$. The long
range mode found in ref. \cite{8} is also a consequence of the existence of
a pole $\sim \left( \frac{1}{{\bf k}^{2}}\right) $ in the polarization
operator component $\Pi _{00}$ at $T\neq 0$.
The apparent lack of superconductivity at temperatures greater than zero has
been considered as a discouraging property of anyon models. Nevertheless, it
may be still premature to disregard the anyons as a feasible solution for
explaining high-T$_{c}$ superconductivity, at least if the reason
sustaining such a belief is the absence of the Meissner effect at finite
temperature. As it was shown in a previous paper \cite{16}, the lack of a
Meissner effect, reported in ref. \cite{11} for the case of a half-plane
sample as a partial Meissner effect, is a direct consequence of the omission
of the sample boundary effects in the calculations of the minimal solution
for the magnetic field within the sample. To understand this remark we must
take into account that the results of ref. \cite{11} were obtained by
finding the magnetization in the bulk due to an externally applied magnetic
field at the boundary of a half-plane sample. However, in doing so, a
uniform magnetization was assumed and therefore the boundary effects were
indeed neglected. Besides, in ref. \cite{11} the field equations were solved
considering only one short-range mode of propagation for the magnetic field,
while as has been emphasized in our previous letter \cite{16}, there is a
second short-range mode whose qualitative contribution to the solutions of
the field equations cannot be ignored.
In the present paper we study the effects of the sample's boundaries in the
magnetic response of the anyon fluid at finite temperature. This is done by
considering a sample shaped as an infinite strip. When a constant and
homogeneous external magnetic field, which is perpendicular to the sample
plane, is applied at the boundaries of the strip, two different magnetic
responses, depending on the temperature values, can be identified. At
temperatures smaller than the fermion energy gap inherent to the
many-particle MCS model ($T\ll \omega _{c}$), the system exhibits a Meissner
effect. In this case the magnetic field cannot penetrate the bulk farther
than a very short distance ($\overline{\lambda }\sim 10^{-5}cm$ for electron
densities characteristic of the high-T$_{c}$ superconductors and $T\sim 200$
$K$). On the other hand, as it is natural to expect from a physical point of
view, when the temperatures are larger than the energy gap ($T\gg \omega
_{c} $) the Meissner effect is lost. In this temperature region a periodic
inhomogeneous magnetic field is present within the bulk.
These results, together with those previously reported in ref. \cite{16},
indicate that, contrary to some authors' belief, the superconducting
behavior (more precisely, the Meissner effect), found in the charged anyon
fluid at $T=0$, does not disappear as soon as the system is heated.
As it is shown below, the presence of boundaries can affect the dynamics of
the system in such a way that the mode that accounts for a homogeneous field
penetration \cite{8} cannot propagate in the bulk. Although these results
have been proved for two types of samples, the half-plane \cite{16} and the
infinite strip reported in this paper, we conjecture that similar effects
should also exist in other geometries.
Our main conclusion is that the magnetic behavior of the anyon fluid is not
just determined by its bulk properties, but it is essentially affected by
the sample boundary conditions. The importance of the boundary conditions in
2+1 dimensional models has been previously stressed in ref.\cite{b}.
The plan for the paper is as follows. In Sec. 2, for completeness as well as
for the convenience of the reader, we define the many-particle 2+1
dimensional MCS model used to describe the charged anyon fluid, and briefly
review its main characteristics. In Sec. 3 we study the magnetic response in
the self-consistent field approximation of a charged anyon fluid confined to
an infinite-strip, finding the analytical solution of the MCS field
equations that satisfies the boundary conditions. The fermion contribution
in this approximation is given by the corresponding polarization operators
at $T\neq 0$ in the background of a many-particle induced Chern-Simons
magnetic field. Using these polarization operators in the low temperature
approximation ($T\ll \omega _{c}$), we determine the system's two London
penetration depths. Taking into account that the boundary conditions are not
enough to completely determine the magnetic field solution within the
sample, an extra physical condition, the minimization of the system
free-energy density, is imposed. This is done in Sec. 4. In this section we
prove that even though the electromagnetic field has a long-range mode of
propagation in the charged anyon fluid at $T\neq 0$ \cite{8}, a constant and
uniform magnetic field applied at the sample's boundaries cannot propagate
through this mode. The explicit temperature dependence at $T\ll \omega _{c}$
of all the coefficients appearing in the magnetic field solution, and of the
effective London penetration depth are also found. In Sec. 5, we discuss how
the superconducting behavior of the charged anyon fluid disappears at
temperatures larger than the energy gap ($T\gg \omega _{c}$). Sec. 6
contains the summary and discussion.
\section{MCS Many-Particle Model}
The Lagrangian density of the 2+1 dimensional non-relativistic charged MCS
system is
\begin{equation}
{\cal L}=-\frac{1}{4}F_{\mu \nu }^{2}-\frac{N}{4\pi }\varepsilon ^{\mu \nu
\rho }a_{\mu }\partial _{\nu }a_{\rho }+en_{e}A_{0}+i\psi ^{\dagger
}D_{0}\psi -\frac{1}{2m}\left| D_{k}\psi \right| ^{2}+\psi ^{\dagger }\mu
\psi \eqnum{2.1}
\end{equation}
where $A_{\mu }$ and $a_{\mu }$ represent the electromagnetic and the
Chern-Simons fields respectively. The role of the Chern-Simons fields is
simply to change the quantum statistics of the matter field; thus, they do
not have an independent dynamics. $\psi $ represents non-relativistic
spinless fermions. $N$ is a positive integer that determines the magnitude
of the Chern-Simons coupling constant. The charged character of the system is implemented
by introducing a chemical potential $\mu $; $n_{e}$ is a background
neutralizing `classical' charge density, and $m$ is the fermion mass. We
will consider throughout the paper the metric $g_{\mu \nu }$=$(1,-%
\overrightarrow{1})$. The covariant derivative $D_{\nu }$ is given by
\begin{equation}
D_{\nu }=\partial _{\nu }+i\left( a_{\nu }+eA_{\nu }\right) ,\qquad \nu
=0,1,2 \eqnum{2.2}
\end{equation}
It is known that to guarantee the neutrality of the system in the presence of a
nonzero fermion density $\left( n_{e}\neq 0\right)$, a
nontrivial background of Chern-Simons magnetic field $\left( \overline{b}=%
\overline{f}_{21}\right) $ is generated. The Chern-Simons background field
is obtained as the solution of the mean field Euler-Lagrange equations
derived from (2.1)
\begin{mathletters}
\begin{equation}
-\frac{N}{4\pi }\varepsilon ^{\mu \nu \rho }f_{\nu \rho }=\left\langle
j^{\mu }\right\rangle \eqnum{2.3}
\end{equation}
\begin{equation}
\partial _{\nu }F^{\mu \nu }=e\left\langle j^{\mu }\right\rangle
-en_{e}\delta ^{\mu 0} \eqnum{2.4}
\end{equation}
considering that the system formed by the electron fluid and the background
charge $n_{e}$ is neutral
\end{mathletters}
\begin{equation}
\left\langle j^{\mu }\right\rangle -n_{e}\delta ^{\mu 0}=0 \eqnum{2.5}
\end{equation}
In eq. (2.5) $\left\langle j^{0}\right\rangle $ is the fermion density of
the many-particle fermion system
\begin{equation}
\left\langle j^{0}\right\rangle =\frac{\partial \Omega }{\partial \mu },
\eqnum{2.6}
\end{equation}
$\Omega $ is the fermion thermodynamic potential.
In this approximation it is found from (2.3)-(2.5) that the Chern-Simons
magnetic background is given by
\begin{equation}
\overline{b}=\frac{2\pi n_{e}}{N} \eqnum{2.7}
\end{equation}
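For the reader's convenience, the elementary step behind (2.7) is the following (assuming the
convention $\varepsilon ^{012}=1$, so that $\varepsilon ^{0\nu \rho }f_{\nu \rho }=-2\overline{b}$):
the $\mu =0$ component of eq. (2.3), combined with the neutrality condition (2.5), gives
\[
\frac{N}{2\pi }\overline{b}=\left\langle j^{0}\right\rangle =n_{e},
\]
which is eq. (2.7).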
Then, the unperturbed one-particle Hamiltonian of the matter field
represents a particle in the background of the Chern-Simons magnetic field
$\overline{b}$,
\begin{equation}
H_{0}=-\frac{1}{2m}\left[ \left( \partial _{1}+i\overline{b}x_{2}\right)
^{2}+\partial _{2}^{2}\right] \eqnum{2.8}
\end{equation}
In (2.8) we considered the background Chern-Simons potential, $\overline{a}%
_{k}$, $(k=1,2)$, in the Landau gauge
\begin{equation}
\overline{a}_{k}=\overline{b}x_{2}\delta _{k1} \eqnum{2.9}
\end{equation}
The eigenvalue problem defined by the Hamiltonian (2.8) with periodic
boundary conditions in the $x_{1}$-direction: $\Psi \left( x_{1}+L,\text{ }%
x_{2}\right) =$ $\Psi \left( x_{1},\text{ }x_{2}\right) $,
\begin{equation}
H_{0}\Psi _{nk}=\epsilon _{n}\Psi _{nk},\qquad n=0,1,2,...\text{ }and\text{ }%
k\in {\cal Z} \eqnum{2.10}
\end{equation}
has eigenvalues and eigenfunctions given respectively by
\begin{equation}
\epsilon _{n}=\left( n+\frac{1}{2}\right) \omega _{c}\qquad \eqnum{2.11}
\end{equation}
\begin{equation}
\Psi _{nk}=\frac{\overline{b}^{1/4}}{\sqrt{L}}\exp \left( -2\pi
ikx_{1}/L\right) \varphi _{n}\left( x_{2}\sqrt{\overline{b}}-\frac{2\pi k}{L%
\sqrt{\overline{b}}}\right) \eqnum{2.12}
\end{equation}
where $\omega _{c}=\overline{b}/m$ is the cyclotron frequency and $\varphi
_{n}\left( \xi \right) $ are the orthonormalized harmonic oscillator wave
functions.
Note that the energy levels $\epsilon _{n}$ are degenerates (they do not
depend on $k$). Then, for each Landau level $n$ there exists a band of
degenerate states. The cyclotron frequency $\omega _{c}$ plays here the role
of the energy gap between occupied Landau levels. It is easy to prove that
the filling factor, defined as the ratio between the density of particles $%
n_{e}$ and the number of states per unit area of a full Landau level, is
equal to the Chern-Simons coupling constant $N$. Thus, because we are considering that $N$ is a
positive integer, we have in this MCS theory $N$ completely filled Landau
levels. Once this ground state is established, it can be argued immediately
\cite{6}, \cite{6b}, \cite{10a}, \cite{15}, that at $T=0$ the system will be
confined to a filled band, which is separated by an energy gap from the free
states; therefore, it is natural to expect that at $T=0$ the system should
superconduct. This result is already a well established fact on the basis of
Hartree-Fock analysis\cite{6} and Random Phase Approximation \cite{6b},\cite
{10a}. The case at $T\neq 0$ is more controversial since thermal
fluctuations, occurring in the many-particle system, can produce significant
changes. As we will show in this paper, the presence in this theory of a
natural scale, the cyclotron frequency $\omega _{c}$, is crucial for the
existence of a phase at $T\ll \omega _{c}$, on which the system, when
confined to a bounded region, still behaves as a superconductor.
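As a quick check of the filling-factor statement made above (a sketch, using only eqs. (2.7), (2.9)
and (2.12)): for the gauge (2.9) the states (2.12) of a given Landau level are centered at
$x_{2}=2\pi k/L\overline{b}$, so in a large area $L\times L_{2}$ the admissible values of $k$ number
$\overline{b}LL_{2}/2\pi $, i.e. each Landau level accommodates $\overline{b}/2\pi $ states per unit
area. Hence the filling factor is
\[
\nu =\frac{n_{e}}{\overline{b}/2\pi }=\frac{2\pi n_{e}}{\overline{b}}=N,
\]
where the last equality uses eq. (2.7).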
The fermion thermal Green's function in the presence of the background
Chern-Simons field $\overline{b}$
\begin{equation}
G\left( x,x^{\prime }\right) =-\left\langle T_{\tau }\psi \left( x\right)
\overline{\psi }\left( x^{\prime }\right) \right\rangle \eqnum{2.13}
\end{equation}
is obtained by solving the equation
\begin{equation}
\left( \partial _{\tau }-\frac{1}{2m}\overline{D}_{k}^{2}-\mu \right)
G\left( x,x^{\prime }\right) =-\delta _{3}\left( x-x^{\prime }\right)
\eqnum{2.14}
\end{equation}
subject to the requirement of antiperiodicity under the imaginary time
translation $\tau \rightarrow \tau +\beta $ ($\beta $ is the inverse
absolute temperature). In (2.14) we have introduced the notation
\begin{equation}
\overline{D}_{k}=\partial _{k}+i\overline{a}_{k} \eqnum{2.15}
\end{equation}
The Fourier transform of the fermion thermal Green's function (2.13)
\begin{equation}
G\left( p_{4},{\bf p}\right) =\int\limits_{0}^{\beta }d\tau \int d{\bf x}%
G\left( \tau ,{\bf x}\right) e^{i\left( p_{4}\tau -{\bf px}\right) }
\eqnum{2.16}
\end{equation}
can be expressed in terms of the orthonormalized harmonic oscillator wave
functions $\varphi _{n}\left( \xi \right) $ as \cite{Efrain}
\begin{eqnarray}
G\left( p_{4},{\bf p}\right) &=&\int\limits_{0}^{\infty }d\rho
\int\limits_{-\infty }^{\infty }dx_{2}\sqrt{\overline{b}}\exp -\left(
ip_{2}x_{2}\right) \exp -\left( ip_{4}+\mu -\frac{\overline{b}}{2m}\right)
\rho \nonumber \\
&&\sum\limits_{n=0}^{\infty }\varphi _{n}\left( \xi \right) \varphi
_{n}\left( \xi ^{\prime }\right) t^{n} \eqnum{2.17}
\end{eqnarray}
where $t=\exp \frac{\overline{b}}{m}\rho $, $\xi =\frac{p_{1}}{\sqrt{%
\overline{b}}}+\frac{x_{2}\sqrt{\overline{b}}}{2}$, $\xi ^{\prime }=\frac{%
p_{1}}{\sqrt{\overline{b}}}-\frac{x_{2}\sqrt{\overline{b}}}{2}$ and $%
p_{4}=(2n+1)\pi /\beta $ are the discrete frequencies $(n=0,1,2,...)$
corresponding to fermion fields.
\section{Linear Response in the Infinite Strip}
\subsection{Effective Theory at $\mu \neq 0$ and $T\neq 0$}
In ref.\cite{8} the effective current-current interaction of the MCS model
was calculated to determine the independent components of the magnetic
interaction at finite temperature in a sample without boundaries, i.e., in
the free space. These authors concluded that the pure Meissner effect
observed at zero temperature is certainly compromised by the appearance of a
long-range mode at $T\neq 0$. Our main goal in the present paper is to
investigate the magnetic response of the charged anyon fluid at finite
temperature for a sample that confines the fluid within some specific
boundaries. As we prove henceforth, the confinement of the system to a
bounded region (a condition which is closer to the experimental situation
than the free-space case) is crucial for the realization of the Meissner
effect inside the charged anyon fluid at finite temperature.
Let us investigate the linear response of a charged anyon fluid at finite
temperature and density to an externally applied magnetic field in the
specific case of an infinite-strip sample. The linear response of the medium
can be found under the assumption that the quantum fluctuations of the gauge
fields about the ground-state are small. In this case the one-loop fermion
contribution to the effective action, obtained after integrating out the
fermion fields, can be evaluated up to second order in the gauge fields. The
effective action of the theory within this linear approximation \cite{8},%
\cite{11} takes the form
\begin{equation}
\Gamma _{eff}\,\left( A_{\nu },a_{\nu }\right) =\int dx\left( -\frac{1}{4}%
F_{\mu \nu }^{2}-\frac{N}{4\pi }\varepsilon ^{\mu \nu \rho }a_{\mu }\partial
_{\nu }a_{\rho }+en_{e}A_{0}\right) +\Gamma ^{\left( 2\right) } \eqnum{3.1}
\end{equation}
\[
\Gamma ^{\left( 2\right) }=\int dx\Pi ^{\nu }\left( x\right) \left[ a_{\nu
}\left( x\right) +eA_{\nu }\left( x\right) \right] +\int dxdy\left[ a_{\mu
}\left( x\right) +eA_{\mu }\left( x\right) \right] \Pi ^{\mu \nu }\left(
x,y\right) \left[ a_{\nu }\left( y\right) +eA_{\nu }\left( y\right) \right]
\]
Here $\Gamma ^{\left( 2\right) }$ is the one-loop fermion contribution to
the effective action in the linear approximation. The operators $\Pi _{\nu }$
and $\Pi _{\mu \nu }$ are calculated using the fermion thermal Green's
function in the presence of the background field $\overline{b}$ (2.17). They
represent the fermion tadpole and one-loop polarization operators
respectively. Their leading behaviors for static $\left( k_{0}=0\right) $
and slowly $\left( {\bf k}\sim 0\right) $ varying configurations in the
frame ${\bf k}=(k,0)$ take the form
\begin{equation}
\Pi _{k}\left( x\right) =0,\;\;\;\Pi _{0}\left( x\right) =-n_{e},\;\;\;\Pi
_{\mu \nu }=\left(
\begin{array}{ccc}
{\it \Pi }_{{\it 0}}+{\it \Pi }_{{\it 0}}\,^{\prime }\,k^{2} & 0 & {\it \Pi }%
_{{\it 1}}k \\
0 & 0 & 0 \\
-{\it \Pi }_{{\it 1}}k & 0 & {\it \Pi }_{\,{\it 2}}k^{2}
\end{array}
\right) , \eqnum{3.2}
\end{equation}
The independent coefficients: ${\it \Pi }_{{\it 0}}$, ${\it \Pi }_{{\it 0}%
}\,^{\prime }$, ${\it \Pi }_{{\it 1}}$ and ${\it \Pi }_{\,{\it 2}}$ are
functions of $k^{2}$, $\mu $ and $\overline{b}$. In order to find them we
just need to calculate the $\Pi _{\mu \nu }$ Euclidean components: $\Pi
_{44} $, $\Pi _{42}$ and $\Pi _{22}$. In the Landau gauge these Euclidean
components are given by\cite{11},
\begin{mathletters}
\begin{equation}
\Pi _{44}\left( k,\mu ,\overline{b}\right) =-\frac{1}{\beta }%
\sum\limits_{p_{4}}\int\frac{d{\bf p}}{\left( 2\pi \right) ^{2}}G\left( p\right)
G\left( p-k\right) , \eqnum{3.3}
\end{equation}
\begin{equation}
\Pi _{4j}\left( k,\mu ,\overline{b}\right) =\frac{i}{2m\beta }%
\sum\limits_{p_{4}}\int\frac{d{\bf p}}{\left( 2\pi \right) ^{2}}\left\{ G\left(
p\right) \cdot D_{j}^{-}G\left( p-k\right) +D_{j}^{+}G\left( p\right) \cdot
G\left( p-k\right) \right\} , \eqnum{3.4}
\end{equation}
\end{mathletters}
\begin{eqnarray}
\Pi _{jk}\left( k,\mu ,\overline{b}\right) &=&\frac{1}{4m^{2}\beta }%
\sum\limits_{p_{4}}\int\frac{d{\bf p}}{\left( 2\pi \right) ^{2}}\left\{
D_{k}^{-}G\left( p\right) \cdot D_{j}^{-}G\left( p-k\right)
+D_{j}^{+}G\left( p\right) \cdot D_{k}^{+}G\left( p-k\right) \right.
\nonumber \\
&&\left. +D_{j}^{+}D_{k}^{-}G\left( p\right) \cdot G\left( p-k\right)
+G\left( p\right) \cdot D_{j}^{-}D_{k}^{+}G\left( p-k\right) \right\}
\nonumber \\
&&-\frac{1}{2m}\Pi _{4}, \eqnum{3.5}
\end{eqnarray}
where the notation
\begin{eqnarray}
D_{j}^{\pm }G\left( p\right) &=&\left[ ip_{j}\mp \frac{\overline{b}}{2}%
\varepsilon ^{jk}\partial _{p_{k}}\right] G\left( p\right) , \nonumber \\
D_{j}^{\pm }G\left( p-k\right) &=&\left[ i\left( p_{j}-k_{j}\right) \mp
\frac{\overline{b}}{2}\varepsilon ^{jk}\partial _{p_{k}}\right] G\left(
p-k\right) , \eqnum{3.6}
\end{eqnarray}
was used.
Using (3.3)-(3.5) after summing in $p_{4}$, we found that, in the $k/\sqrt{%
\overline{b}}\ll 1$ limit, the polarization operator coefficients ${\it \Pi }%
_{{\it 0}}$, ${\it \Pi }_{{\it 0}}\,^{\prime }$, ${\it \Pi }_{{\it 1}}$ and $%
{\it \Pi }_{\,{\it 2}}$ are
\[
{\it \Pi }_{{\it 0}}=\frac{\beta \overline{b}}{8\pi {\bf k}^{2}}%
\sum_{n}\Theta _{n},\;\qquad {\it \Pi }_{{\it 0}}\,^{\prime }=\frac{2m}{\pi
\overline{b}}\sum_{n}\Delta _{n}-\frac{\beta }{8\pi }\sum_{n}(2n+1)\Theta
_{n},
\]
\[
{\it \Pi }_{{\it 1}}=\frac{1}{\pi }\sum_{n}\Delta _{n}-\frac{\beta \overline{%
b}}{16\pi m}\sum_{n}(2n+1)\Theta _{n},\qquad {\it \Pi }_{\,{\it 2}}=\frac{1}{%
\pi m}\sum_{n}(2n+1)\Delta _{n}-\frac{\beta \overline{b}}{32\pi m^{2}}%
\sum_{n}(2n+1)^{2}\Theta _{n},
\]
\begin{equation}
\Theta _{n}=%
\mathop{\rm sech}%
\,^{2}\frac{\beta (\epsilon _{n}/2-\mu )}{2},\qquad \Delta _{n}=(e^{\beta
(\epsilon _{n}/2-\mu )}+1)^{-1} \eqnum{3.7}
\end{equation}
The leading contributions of the one-loop polarization operator coefficients
(3.7) at low temperatures $\left( T\ll \omega _{c}\right) $ are
\begin{equation}
{\it \Pi }_{{\it 0}}=\frac{2\beta \overline{b}}{\pi }e^{-\beta \overline{b}%
/2m},\qquad {\it \Pi }_{{\it 0}}\,^{\prime }=\frac{mN}{2\pi \overline{b}}%
{\it \Lambda },\qquad {\it \Pi }_{{\it 1}}=\frac{N}{2\pi }{\it \Lambda }%
,\quad {\it \Pi }_{\,{\it 2}}=\frac{N^{2}}{4\pi m}{\it \Lambda },\qquad {\it %
\Lambda }=\left[ 1-\frac{2\beta \overline{b}}{m}e^{-\beta \overline{b}%
/2m}\right] \eqnum{3.8}
\end{equation}
and at high temperatures $\left( T\gg \omega _{c}\right) $ are
\begin{equation}
{\it \Pi }_{{\it 0}}=\frac{m}{2\pi }\left[ \tanh \frac{\beta \mu }{2}%
+1\right] ,\qquad {\it \Pi }_{{\it 0}}\,^{\prime }=-\frac{\beta }{48\pi }%
\mathop{\rm sech}%
\!^{2}\!\,\left( \frac{\beta \mu }{2}\right) ,\qquad {\it \Pi }_{{\it 1}}=%
\frac{\overline{b}}{m}{\it \Pi }_{{\it 0}}\,^{\prime },\qquad {\it \Pi }_{\,%
{\it 2}}=\frac{1}{12m^{2}}{\it \Pi }_{{\it 0}} \eqnum{3.9}
\end{equation}
In these expressions $\mu $ is the chemical potential and $m=2m_{e}$ ($m_{e}$
is the electron mass). These results are in agreement with those found in
refs.\cite{8},\cite{14}.
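A small consistency check of the limits above may be useful: in the zero-temperature limit $\beta
\rightarrow \infty $ the factor $\beta \overline{b}\,e^{-\beta \overline{b}/2m}$ appearing in (3.8)
goes to zero, so that
\[
{\it \Pi }_{{\it 0}}\rightarrow 0,\qquad {\it \Lambda }\rightarrow 1,\qquad {\it \Pi }_{{\it 1}}
\rightarrow \frac{N}{2\pi },\qquad {\it \Pi }_{\,{\it 2}}\rightarrow \frac{N^{2}}{4\pi m},
\]
in agreement with the statement, used repeatedly below, that the Debye screening coefficient
${\it \Pi }_{{\it 0}}$ vanishes at $T=0$.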
\subsection{MCS Linear Equations}
To find the response of the anyon fluid to an externally applied magnetic
field, one needs to use the extremum equations derived from the effective
action (3.1). This formulation is known in the literature as the
self-consistent field approximation\cite{11}. In solving these equations we
confine our analysis to gauge field configurations which are static and
uniform in the y-direction. Within this restriction we take a gauge in which
$A_{1}=a_{1}=0$.
The Maxwell and Chern-Simons extremum equations are respectively,
\begin{equation}
\partial _{\nu }F^{\nu \mu }=eJ_{ind}^{\mu } \eqnum{3.10a}
\end{equation}
\begin{equation}
-\frac{N}{4\pi }\varepsilon ^{\mu \nu \rho }f_{\nu \rho }=J_{ind}^{\mu }
\eqnum{3.10b}
\end{equation}
Here, $f_{\mu \nu }$ is the Chern-Simons gauge field strength tensor,
defined as $f_{\mu \nu }=\partial _{\mu }a_{\nu }-\partial _{\nu }a_{\mu }$,
and $J_{ind}^{\mu }$ is the current density induced by the anyon system at
finite temperature and density. Their different components are given by
\begin{equation}
J_{ind}^{0}\left( x\right) ={\it \Pi }_{{\it 0}}\left[ a_{0}\left( x\right)
+eA_{0}\left( x\right) \right] +{\it \Pi }_{{\it 0}}\,^{\prime }\partial
_{x}\left( {\cal E}+eE\right) +{\it \Pi }_{{\it 1}}\left( b+eB\right)
\eqnum{3.11a}
\end{equation}
\begin{equation}
J_{ind}^{1}\left( x\right) =0,\qquad J_{ind}^{2}\left( x\right) ={\it \Pi }_{%
{\it 1}}\left( {\cal E}+eE\right) +{\it \Pi }_{\,{\it 2}}\partial _{x}\left(
b+eB\right) \eqnum{3.11b}
\end{equation}
in the above expressions we used the following notation: ${\cal E}=f_{01}$, $%
E=F_{01}$, $b=f_{12}$ and $B=F_{12}$. Eqs. (3.11) play the role in the anyon
fluid of the London equations in BCS superconductivity. When the induced
currents (3.11) are substituted in eqs. (3.10) we find, after some
manipulation, the set of independent differential equations,
\begin{equation}
\omega \partial _{x}^{2}B+\alpha B=\gamma \left[ \partial _{x}E-\sigma
A_{0}\right] +\tau \,a_{0}, \eqnum{3.12}
\end{equation}
\begin{equation}
\partial _{x}B=\kappa \partial _{x}^{2}E+\eta E, \eqnum{3.13}
\end{equation}
\begin{equation}
\partial _{x}a_{0}=\chi \partial _{x}B \eqnum{3.14}
\end{equation}
The coefficients appearing in these differential equations depend on the
components of the polarization operators through the relations
\[
\omega =\frac{2\pi }{N}{\it \Pi }_{{\it 0}}\,^{\prime },\quad \alpha =-e^{2}%
{\it \Pi }_{{\it 1}},\quad \tau =e{\it \Pi }_{{\it 0}},\quad \chi =\frac{%
2\pi }{eN},\quad \sigma =-\frac{e^{2}}{\gamma }{\it \Pi }_{{\it 0}},\quad
\eta =-\frac{e^{2}}{\delta }{\it \Pi }_{{\it 1}},
\]
\begin{equation}
\gamma =1+e^{2}{\it \Pi }_{{\it 0}}\,^{\prime }-\frac{2\pi }{N}{\it \Pi }_{%
{\it 1}},\quad \delta =1+e^{2}{\it \Pi }_{\,{\it 2}}-\frac{2\pi }{N}{\it \Pi
}_{{\it 1}},\quad \kappa =\frac{2\pi }{N\delta }{\it \Pi }_{\,{\it 2}}.
\eqnum{3.15}
\end{equation}
Distinctive of eq. (3.12) is the presence of the nonzero coefficients $%
\sigma $ and $\tau $. They are related to the Debye screening in the two
dimensional anyon thermal ensemble. A characteristic of this 2+1 dimensional
model is that the Debye screening disappears at $T=0$, even if the chemical
potential is different from zero. Note that $\sigma $ and $\tau $ link the
magnetic field to the zero components of the gauge potentials, $A_{0}$ and $%
a_{0}$. As a consequence, these gauge potentials will play a nontrivial role
in finding the magnetic field solution of the system.
\subsection{Field Solutions and Boundary Conditions}
Using eqs.(3.12)-(3.14) one can obtain a higher order differential equation
that involves only the electric field,
\begin{equation}
a\partial _{x}^{4}E+d\partial _{x}^{2}E+cE=0, \eqnum{3.16}
\end{equation}
Here, $a=\omega \kappa $, $d=\omega \eta +\alpha \kappa -\gamma -\tau \kappa
\chi $, and $c=\alpha \eta -\sigma \gamma -\tau \eta \chi $.
Solving (3.16) we find
\begin{equation}
E\left( x\right) =C_{1}e^{-x\xi _{1}}+C_{2}e^{x\xi _{1}}+C_{3}e^{-x\xi
_{2}}+C_{4}e^{x\xi _{2}}, \eqnum{3.17}
\end{equation}
where
\begin{equation}
\xi _{1,2}=\left[ -d\pm \sqrt{d^{2}-4ac}\right] ^{\frac{1}{2}}/\sqrt{2a}
\eqnum{3.18}
\end{equation}
When the low density approximation $n_{e}\ll m^{2}$ is considered (this
approximation is in agreement with the typical values $n_{e}=2\times
10^{14}cm^{-2}$, $m_{e}=2.6\times 10^{10}cm^{-1}$ found in high-T$_{C}$
superconductivity), we find that, for $N=2$ and at temperatures lower than
the energy gap $\left( T\ll \omega _{c}\right) $, the inverse length scales
(3.18) are given by the following real functions
\begin{equation}
\xi _{1}\simeq e\sqrt{\frac{m}{\pi }}\left[ 1+\left( \frac{\pi ^{2}n_{e}^{2}%
}{m^{3}}\right) \beta \exp -\left( \frac{\pi n_{e}\beta }{2m}\right) \right]
\eqnum{3.19}
\end{equation}
\begin{equation}
\xi _{2}\simeq e\sqrt{\frac{n_{e}}{m}}\left[ 1-\left( \frac{\pi n_{e}}{m}%
\right) \beta \exp -\left( \frac{\pi n_{e}\beta }{2m}\right) \right]
\eqnum{3.20}
\end{equation}
These two inverse length scales correspond to two short-range
electromagnetic modes of propagation. These results are consistent with
those obtained in ref. \cite{8} using a different approach. If the masses of
the two massive modes, obtained in ref. \cite{8} by using the
electromagnetic thermal Green's function for static and slowly varying
configurations, are evaluated in the range of parameters considered above,
it can be shown that they reduce to eqs. (3.19), (3.20).
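As a quick orientation on the scales involved (a simple consequence of eqs. (3.19) and (3.20),
quoted here only for later comparison): in the zero-temperature limit the exponential corrections
drop out and
\[
\xi _{1}\rightarrow e\sqrt{\frac{m}{\pi }},\qquad \xi _{2}\rightarrow e\sqrt{\frac{n_{e}}{m}},
\qquad \mbox{i.e.}\qquad \lambda _{2}=1/\xi _{2}\rightarrow \sqrt{\frac{m}{n_{e}e^{2}}},
\]
which coincides with the length $\overline{\lambda }_{0}$ introduced below in eq. (4.8).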
The solutions for $B$, $a_{0}$ and $A_{0}$, can be obtained using eqs.
(3.13), (3.14), (3.17) and the definition of $E$ in terms of $A_{0,}$
\begin{equation}
B\left( x\right) =\gamma _{1}\left( C_{2}e^{x\xi _{1}}-C_{1}e^{-x\xi
_{1}}\right) +\gamma _{2}\left( C_{4}e^{x\xi _{2}}-C_{3}e^{-x\xi
_{2}}\right) +C_{5} \eqnum{3.21}
\end{equation}
\begin{equation}
a_{0}\left( x\right) =\chi \gamma _{1}\left( C_{2}e^{x\xi
_{1}}-C_{1}e^{-x\xi _{1}}\right) +\chi \gamma _{2}\left( C_{4}e^{x\xi
_{2}}-C_{3}e^{-x\xi _{2}}\right) +C_{6} \eqnum{3.22}
\end{equation}
\begin{equation}
A_{0}\left( x\right) =\frac{1}{\xi _{1}}\left( C_{1}e^{-x\xi
_{1}}-C_{2}e^{x\xi _{1}}\right) +\frac{1}{\xi _{2}}\left( C_{3}e^{-x\xi
_{2}}-C_{4}e^{x\xi _{2}}\right) +C_{7} \eqnum{3.23}
\end{equation}
In the above formulas we introduced the notation $\gamma _{1}=\left( \xi
_{1}^{2}\kappa +\eta \right) /\xi _{1}$, $\gamma _{2}=\left( \xi
_{2}^{2}\kappa +\eta \right) /\xi _{2}$.
In obtaining eq. (3.16) we have taken the derivative of eq. (3.12).
Therefore, the solution of eq. (3.16) belongs to a wider class than the one
corresponding to eqs. (3.12)-(3.14). To exclude redundant solutions we must
require that they satisfy eq. (3.12) as a supplementary condition. In this
way the number of independent unknown coefficients is reduced to six, which
is the number corresponding to the original system (3.12)-(3.14). The extra
unknown coefficient is eliminated substituting the solutions (3.17), (3.21),
(3.22) and (3.23) into eq. (3.12) to obtain the relation
\begin{equation}
e{\it \Pi }_{{\it 1}}C_{5}=-{\it \Pi }_{{\it 0}}\left( C_{6}+eC_{7}\right)
\eqnum{3.24}
\end{equation}
Eq. (3.24) has an important meaning, it establishes a connection between the
coefficients of the long-range modes of the zero components of the gauge
potentials $(C_{6}+eC_{7})$ and the coefficient of the long-range mode of
the magnetic field $C_{5}$. Note that if the induced Chern-Simons
coefficient ${\it \Pi }_{{\it 1}}$, or the Debye screening coefficient ${\it %
\Pi }_{{\it 0}}$ were zero, there would be no link between $C_{5}$ and $%
(C_{6}+eC_{7})$. This relation between the long-range modes of $B$, $A_{0}$
and $a_{0}$ can be interpreted as a sort of Aharonov-Bohm effect, which
occurs in this system at finite temperature. At $T=0$, we have ${\it \Pi }_{%
{\it 0}}=0$, and the effect disappears.
Up to this point no boundary has been taken into account. Therefore, it is
easy to understand that the magnetic long-range mode associated with the
coefficient $C_{5}$, must be identified with the one found in ref. \cite{8}
for the infinite bulk using a different approach. However, as it is shown
below, when a constant and uniform magnetic field is perpendicularly applied
at the boundaries of a two-dimensional sample, this mode cannot propagate
(i.e. $C_{5}\equiv 0$) within the sample. This result is crucial for the
existence of Meissner effect in this system.
In order to determine the unknown coefficients we need to use the boundary
conditions. Henceforth we consider that the anyon fluid is confined to the
strip $-\infty <y<\infty $ with boundaries at $x=-L$ and $x=L$. The external
magnetic field will be applied from the vacuum at both boundaries ($-\infty
<x\leq -L$, $\;L\leq x<\infty $).
The boundary conditions for the magnetic field are $B\left( x=-L\right)
=B\left( x=L\right) =\overline{B}$ ($\overline{B}$ constant). Because no
external electric field is applied, the boundary conditions for this field
are, $E\left( x=-L\right) =E\left( x=L\right) =0$. Using them and assuming $%
L\gg \lambda _{1}$, $\lambda _{2}$ ($\lambda _{1}=1/\xi _{1}$, $\lambda
_{2}=1/\xi _{2}$), we find the following relations that give $C_{1,2,3,4}$
in terms of $C_{5}$,
\begin{equation}
C_{1}=Ce^{-L\xi _{1}},\quad C_{2}=-C_{1},\quad C_{3}=-Ce^{-L\xi _{2}},\quad
C_{4}=-C_{3},\quad C=\frac{C_{5}-\overline{B}}{\gamma _{1}-\gamma _{2}}
\eqnum{3.25}
\end{equation}
\section{Stability Condition for the Infinite-Strip Sample}
After using the boundary conditions, we can see from (3.25) that they were
not sufficient to find the coefficient $C_{5}$. In order to totally
determine the system magnetic response we have to use another physical
condition from where $C_{5}$ can be found. Since, obviously, any meaningful
solution have to be stable, the natural additional condition to be
considered is the stability equation derived from the system free energy.
With this goal in mind we start from the free energy of the infinite-strip
sample
\[
{\cal F}=\frac{1}{2}\int\limits_{-L^{\prime }}^{L^{\prime
}}dy\int\limits_{-L}^{L}dx\left\{ \left( E^{2}+B^{2}\right) +\frac{N}{\pi }%
a_{0}b-{\it \Pi }_{{\it 0}}\left( eA_{0}+a_{0}\right) ^{2}\right.
\]
\begin{equation}
\left. -{\it \Pi }_{{\it 0}}\,^{\prime }\left( eE+{\cal E}\right) ^{2}-2{\it %
\Pi }_{{\it 1}}\left( eA_{0}+a_{0}\right) \left( eB+b\right) +{\it \Pi }_{\,%
{\it 2}}\left( eB+b\right) ^{2}\right\} \eqnum{4.1}
\end{equation}
where $L$ and $L^{\prime }$ determine the two lengths of the sample.
Using the field solutions (3.17), (3.21)-(3.23) with coefficients (3.25), it
is found that the leading contribution to the free-energy density ${\it f}=%
\frac{{\cal F}}{{\cal A}}$ ,\ (${\cal A}=4LL^{\prime }$ being the sample
area) in the infinite-strip limit $(L\gg \lambda _{1}$, $\lambda _{2}$, $%
L^{\prime }\rightarrow \infty )$ is given by
\begin{equation}
f=C_{5}^{2}-{\it \Pi }_{{\it 0}}\left( C_{6}+eC_{7}\right) ^{2}+e^{2}{\it %
\Pi }_{\,{\it 2}}C_{5}^{2}-2e{\it \Pi }_{{\it 1}}\left( C_{6}+eC_{7}\right)
C_{5} \eqnum{4.2}
\end{equation}
Taking into account the constraint equation (3.24), the free-energy density
(4.2) can be written as a quadratic function in $C_{5}$. Then, the value of $%
C_{5}$ is found, by minimizing the corresponding free-energy density
\begin{equation}
\frac{\delta {\it f}}{\delta C_{5}}=\left[ {\it \Pi }_{{\it 0}}+e^{2}{\it %
\Pi }_{{\it 1}}^{\,}\,^{2}+e^{2}{\it \Pi }_{{\it 0}}{\it \Pi }_{\,{\it 2}%
}\right] \frac{C_{5}}{{\it \Pi }_{{\it 0}}}=0, \eqnum{4.3}
\end{equation}
to be $C_{5}=0$.
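For the reader's convenience we display the intermediate step (a short algebraic check, using only
(3.24) and (4.2)): eliminating $C_{6}+eC_{7}=-\left( e{\it \Pi }_{{\it 1}}/{\it \Pi }_{{\it 0}}
\right) C_{5}$ from (4.2) gives
\[
f=\left[ 1+e^{2}{\it \Pi }_{\,{\it 2}}+\frac{e^{2}{\it \Pi }_{{\it 1}}^{\,2}}{{\it \Pi }_{{\it 0}}}
\right] C_{5}^{2},
\]
from which eq. (4.3) follows; since for the low-temperature coefficients (3.8) the bracket is
positive, the minimum of the free-energy density is indeed attained at $C_{5}=0$.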
This result implies that the long-range mode cannot propagate within the
infinite-strip when a uniform and constant magnetic field is perpendicularly
applied at the sample's boundaries.
We want to point out the following fact. The same property of the finite
temperature polarization operator component $\Pi _{00}$ that produces
the long-range mode in the infinite bulk is also
responsible, when combined with the boundary conditions, for the
non-propagation of this mode in the bounded sample.
nonvanishing of ${\it \Pi }_{{\it 0}}$ at $T\neq 0$ (or equivalently, the
presence of a pole $\sim 1/k^{2}$ in $\Pi _{00}$ at $T\neq 0$) guarantees
the existence of a long-range mode in the infinite bulk \cite{8}. On the
other hand, however, once ${\it \Pi }_{{\it 0}}$ is different from zero, we
can use the constraint (3.24) to eliminate $C_{6}+eC_{7}$ in favor of $C_{5}$
in the free-energy density of the infinite strip. Then, as we
have just proved, the only stable solution of this boundary-value problem,
which is in agreement with the boundary conditions, is $C_{5}=0$.
Consequently, no long-range mode propagates in the bounded sample.
In the zero temperature limit $\left( \beta \rightarrow \infty \right) $,
because ${\it \Pi }_{{\it 0}}=0$, it follows directly from
(3.24) that $C_{5}=0$ and no long-range mode propagates.
At $T\neq 0$, taking into account that $C_{5}=0$ along with eq. (3.25) in
the magnetic field solution (3.21), we can write the magnetic field
penetration as
\begin{equation}
B\left( x\right) =\overline{B}_{1}\left( T\right) \left( e^{-(x+L)\xi
_{1}}+e^{\left( x-L\right) \xi _{1}}\right) +\overline{B}_{2}\left( T\right)
\left( e^{-(x+L)\xi _{2}}+e^{\left( x-L\right) \xi _{2}}\right) \eqnum{4.4}
\end{equation}
where,
\begin{equation}
\overline{B}_{1}\left( T\right) =\frac{\gamma _{1}}{\gamma _{1}-\gamma _{2}}%
\overline{B},\text{ \qquad \quad }\overline{B}_{2}\left( T\right) =\frac{%
\gamma _{2}}{\gamma _{2}-\gamma _{1}}\overline{B} \eqnum{4.5}
\end{equation}
For densities $n_{e}\ll m^{2}$, the coefficients $\overline{B}_{1}$ and $%
\overline{B}_{2}$ can be expressed, in the low temperature approximation $%
\left( T\ll \omega _{c}\right) $, as
\begin{equation}
B_{1}\left( T\right) \simeq -\frac{\left( \pi n_{e}\right) ^{3/2}}{m^{2}}%
\left[ 1/m+\frac{5}{2}\beta \exp -\left( \frac{\pi n_{e}\beta }{2m}\right)
\right] \overline{B},\qquad \eqnum{4.6}
\end{equation}
\begin{equation}
B_{2}\left( T\right) \simeq \left[ 1+\frac{5\pi n_{e}}{2m^{2}}\sqrt{\pi n_{e}%
}\beta \exp -\left( \frac{\pi n_{e}\beta }{2m}\right) \right] \overline{B}
\eqnum{4.7}
\end{equation}
Hence, in the infinite-strip sample the applied magnetic field is totally
screened within the anyon fluid on two different scales, $\lambda _{1}=1/\xi
_{1}$ and $\lambda _{2}=1/\xi _{2}$. At $T=200K$, for the density value
considered above, the penetration lengths are given by $\lambda _{1}\simeq
0.6\times 10^{-8}cm$ and $\lambda _{2}\simeq 0.1\times 10^{-4}cm$ .
Moreover, taking into account that $\xi _{1}$ increases with the temperature
while $\xi _{2}$ decreases (see eqs. (3.19)-(3.20)), and that $B_{1}\left(
T\right) <0$ while $B_{2}\left( T\right) >0$, it can be shown that the
effective penetration length $\overline{\lambda }$ (defined as the distance $%
x$ where the magnetic field falls down to a value $B\left( \overline{\lambda
}\right) /\overline{B}=e^{-1}$) increases with the temperature as
\begin{equation}
\overline{\lambda }\simeq \overline{\lambda }_{0}\left( 1+\overline{\kappa }%
\beta \exp -\frac{1}{2}\overline{\kappa }\beta \right) \eqnum{4.8}
\end{equation}
where $\overline{\lambda }_{0}=\sqrt{m/n_{e}e^{2}}$ and $\overline{\kappa }%
=\pi n_{e}/m$. At $T=200K$ the effective penetration length is $\overline{%
\lambda }\sim 10^{-5}cm$.
It is timely to note that the presence of explicit (proportional to $N$) and
induced (proportional to ${\it \Pi }_{{\it 1}}$) Chern-Simons terms in the
anyon effective action (3.1) is crucial to obtain the Meissner solution
(4.4). If the Chern-Simons interaction is disconnected ($N\rightarrow \infty
$ and ${\it \Pi }_{{\it 1}}=0$), then $a=0,$ $d=1+e^{2}{\it \Pi }_{{\it 0}%
}{}^{\prime }\neq 0$ and $c=e^{2}{\it \Pi }_{{\it 0}}\,\neq 0$ in eq.
(3.16). In that case the solution of the field equations within the sample
is $E=0$, $B=\overline{B}$. That is, we regain QED in (2+1) dimensions,
which does not exhibit any superconducting behavior.
\section{High Temperature Non-Superconducting Phase}
We have just found that the charged anyon fluid confined to an infinite
strip exhibits the Meissner effect at temperatures lower than the energy gap
$\omega _{c}$. It is natural to expect that this superconducting behavior
should disappear at temperatures larger than the energy gap, where the
thermal fluctuations of the electrons can populate the free states lying
beyond the gap. As a consequence, the charged anyon fluid should not be a
perfect conductor at $T\gg \omega _{c}$. A signal of such a transition can
be found by studying the magnetic response of the system at those
temperatures.
As can be seen from the magnetic field solution (4.4), the real character of
the inverse length scales (3.18) is crucial for the realization of the
Meissner effect. At temperatures much lower than the energy gap this is
indeed the case, as can be seen from eqs. (3.19) and (3.20).
In the high temperature $\left( T\gg \omega _{c}\right) $ region the
polarization operator coefficients are given by eq. (3.9). Using this
approximation together with the assumption $n_{e}\ll m^{2}$, we can
calculate the coefficients $a$, $c$ and $d$ that define the behavior of the
inverse length scales,
\begin{equation}
a\simeq \pi ^{2}{\it \Pi }_{{\it 0}}{}^{\prime }{\it \Pi }_{\,{\it 2}}
\eqnum{5.1}
\end{equation}
\begin{equation}
c\simeq e^{2}{\it \Pi }_{{\it 0}}{} \eqnum{5.2}
\end{equation}
\begin{equation}
d\simeq -1 \eqnum{5.3}
\end{equation}
Substituting (5.1)-(5.3) into eq. (3.18) we obtain that the inverse
length scales in the high-temperature limit are given by
\begin{equation}
\xi _{1}\simeq e\sqrt{m/2\pi }\left( \tanh \frac{\beta \mu }{2}+1\right) ^{%
\frac{1}{2}} \eqnum{5.4}
\end{equation}
\begin{equation}
\xi _{2}\simeq i\left[ 24\sqrt{\frac{2m}{\beta }}\cosh \frac{\beta \mu }{2}%
\left( \tanh \frac{\beta \mu }{2}+1\right) ^{-\frac{1}{2}}\right]
\eqnum{5.5}
\end{equation}
The fact that $\xi _{2}$ becomes imaginary at temperatures larger than the
energy gap, $\omega _{c}$, implies that the term $\gamma _{2}\left(
C_{4}e^{x\xi _{2}}-C_{3}e^{-x\xi _{2}}\right) $ in the magnetic field
solution (3.21) ceases to have a damping behavior, giving rise to a periodic
inhomogeneous penetration. Therefore, the fluid does not exhibit a Meissner
effect at those temperatures since the magnetic field will not be totally
screened. This corroborates our initial hypothesis that at $T\gg \omega _{c}$
the anyon fluid is in a new phase in which the magnetic field can penetrate
the sample.
We expect that a critical temperature of the order of the energy gap ($T\sim
\omega _{c}$) separates the superconducting phase $\left( T\ll \omega
_{c}\right) $ from the non-superconducting one $\left( T\gg \omega
_{c}\right) $. Nevertheless, the temperature approximations (3.8) and (3.9)
are not suitable to perform the calculation needed to find the phase
transition temperature. The field solutions in this new non-superconducting
phase is currently under investigation. The results will be published
elsewhere.
\section{Concluding Remarks}
In this paper we have investigated the magnetic response at finite
temperature of a charged anyon fluid confined to an infinite strip. The
charged anyon fluid was modeled by a (2+1)-dimensional MCS theory in a
many-particle ($\mu \neq 0$, $\overline{b}\neq 0$) ground state. The
particle energy spectrum of the theory exhibits a band structure given by
different Landau levels separated by an energy gap $\omega _{c}$, which is
proportional to the background Chern-Simons magnetic field $\overline{b}$.
We found that the energy gap $\omega _{c}$ defines a scale that separates
two phases: a superconducting phase at $T\ll \omega _{c}$, and a
non-superconducting one at $T\gg \omega _{c}$.
The total magnetic screening in the superconducting phase is characterized
by two penetration lengths corresponding to two short-range eigenmodes of
propagation of the electromagnetic field within the anyon fluid. The
existence of a Meissner effect at finite temperature is the consequence of
the fact that a third electromagnetic mode, of a long-range nature, which is
present at finite temperature in the infinite bulk \cite{8}, does not
propagate within the infinite strip when a uniform and constant magnetic
field is applied at the boundaries. This is a significant property since the
samples used to test the Meissner effect in high-$T_{c}$ superconductors are
bounded.
It is noteworthy that the existence at finite temperature of a Debye
screening (${\it \Pi }_{{\it 0}}\,\neq 0$) gives rise to a sort of
Aharonov-Bohm effect in this system with Chern-Simons interaction ($N$
finite, ${\it \Pi }_{{\it 1}}\neq 0$). When ${\it \Pi }_{{\it 0}}\,\neq 0$,
the field combination $a_{0}+eA_{0}$ becomes physical because it enters the
field equations on the same footing as the electric and magnetic fields
(see eq. (3.12)). A direct consequence of this fact is that the coefficient
$C_{5}$, associated with the long-range mode of the magnetic field, is linked
to the coefficients $C_{6}$ and $C_{7}$ of the zero components of the
potentials (see eq. (3.24)).
When $T=0$, since ${\it \Pi }_{{\it 0}}\,=0$ and ${\it \Pi }_{{\it 1}}\neq 0$%
, eq. (3.24) implies $C_{5}=0$. That is, at zero temperature the long-range
mode is absent. This is the well known Meissner effect of the anyon fluid at
$T=0$. When $T\neq 0$, eq. (3.24) alone is not enough to determine the value
of $C_{5}$, since it is given in terms of $C_{6}$ and $C_{7}$ which are
unknown. However, when eq. (3.24) is taken together with the field
configurations that satisfy the boundary conditions for the infinite-strip
sample (eqs. (3.17), (3.21)-(3.23) and (3.25)), and with the sample
stability condition (4.3), we obtain that $C_{5}=0$. Thus, the combined
action of the boundary conditions and the Aharonov-Bohm effect expressed by
eq. (3.24) accounts for the total screening of the magnetic field in the
anyon fluid at finite temperature.
Finally, at temperatures large enough ($T\gg \omega _{c}$) to excite the
electrons beyond the energy gap, we found that the superconducting behavior
of the anyon fluid is lost. This result was obtained by studying the nature of
the characteristic lengths (3.18) in this high-temperature approximation. We
showed that in this temperature region the characteristic length $\xi _{2}$
becomes imaginary (eq. (5.5)), which means that a totally damped solution for
the magnetic field no longer exists, and hence the magnetic field
penetrates the sample.
\begin{quote}
Acknowledgments
\end{quote}
The authors are very grateful for stimulating discussions with Profs. G.
Baskaran, A. Cabo, E.S. Fradkin, Y. Hosotani and J. Strathdee. We would also
like to thank Prof. S. Randjbar-Daemi for kindly bringing the publication of
ref. \cite{b} to our attention. Finally, it is a pleasure for us to thank
Prof. Yu Lu for his hospitality during our stay at the ICTP, where part of
this work was done. This research has been supported in part by the National
Science Foundation under Grant No. PHY-9414509.
\section{Introduction}
\bigskip
Over the past decade, beginning with the measurement of the nucleon
spin--polarized structure function $g_{1}(x,Q^2)$ by the
EMC \cite{emc88} at CERN and most recently with the
spin--structure function $g_2(x,Q^2)$ in the E143
experiment \cite{slac96} at SLAC, a wealth of information has been
gathered on the spin--polarized structure functions of the nucleon
and their corresponding sum rules (see in addition
\cite{smc93}, \cite{smc94a}, \cite{smc94b}, \cite{slac93},
\cite{slac95a}, \cite{slac95b}).
Initially the analysis of these experiments cast doubt
on the non--relativistic quark model \cite{kok79}
interpretations regarding the spin content of the proton.
By now it is firmly established
that the quark helicity of the nucleon is much smaller than the
predictions of that model, however, many questions remain
to be addressed concerning the spin structure.
As a result there have been numerous investigations within
models for the nucleon in an effort to determine the manner in which
the nucleon spin is distributed among its constituents.
One option is to study
the axial current matrix elements of the
nucleon such as
$\langle N |{\cal A}_{\mu}^{i}|N\rangle = 2\Delta q_{i}S_{\mu}$,
which, for example, provide information on
the nucleon axial singlet charge
\begin{eqnarray}
g_{A}^{0}&=&\langle N |{\cal A}_{3}^{0}|N\rangle\
= \left( \Delta u + \Delta d + \Delta s\right)=
\Gamma_{1}^{p} (Q^2) + \Gamma_{1}^{n} (Q^2) \ .
\end{eqnarray}
Here $\Delta q$ are the axial charges of the quark constituents and
$\Gamma_{1}^{N} (Q^2)=\int_{0}^{1} dx g_{1}^{N}(x,Q^2)$ is the
first moment of the longitudinal nucleon spin structure function,
$g_1^N\left(x,Q^{2}\right)$.
Of course, it is more illuminating to
directly compute the longitudinal and transverse
nucleon spin--structure functions, $g_{1}\left(x,Q^2\right)$
and $g_{T}(x,Q^2)=g_{1}(x,Q^2)+g_{2}(x,Q^2)$, respectively as
functions of the Bjorken variable $x$.
We will calculate these structure functions within the
Nambu--Jona--Lasinio (NJL) \cite{Na61} chiral soliton model \cite{Re88}.
Chiral soliton models are unique both in being
the first effective models of hadronic physics to shed light
on the so called ``proton--spin crisis" by predicting a singlet
combination in accord with the data \cite{br88}, and in predicting a
non--trivial strange quark content to the axial vector current of the
nucleon \cite{br88}, \cite{pa89}, \cite{jon90}, \cite{blo93};
about $10-30\%$ of the down quarks
(see \cite{wei96a} and \cite{ell96} for reviews).
However, while the leading moments of these
structure functions have been calculated within chiral soliton
models, from the Skyrme model \cite{Sk61}, \cite{Ad83} and its various
vector--meson extensions, to
models containing explicit quark degrees of freedom such as the
(NJL) model \cite{Na61},
the nucleon spin--structure functions
themselves have not been investigated in these models.
Soliton model calculations of structure functions
were, however, performed in
Friedberg-Lee \cite{frie77} and color-dielectric \cite{nil82} models.
In addition, structure functions have extensively been studied
within the framework of effective quark models such as the
bag--model \cite{Ch74}, and the Center of Mass
bag model \cite{So94}.
These models are confining by construction but they
neither contain non--perturbative pseudoscalar fields nor are they
chirally symmetric\footnote{In the
cloudy bag model the contribution of the pions to structure
functions has at most been treated perturbatively
\cite{Sa88}, \cite{Sc92}.}.
To this date it is fair to say that
many of the successes of low--energy effective models rely on
the incorporation of chiral symmetry and its spontaneous
symmetry breaking (see for e.g. \cite{Al96}).
In this article we therefore present our
calculation of the polarized spin structure functions in the NJL
chiral soliton model \cite{Re89}, \cite{Al96}.
Since, in particular, the static axial properties of the nucleon are
dominated by the valence quark contribution in this model, it is
legitimate to focus on the valence quarks.
At the outset it is important to note that a major difference
between the chiral soliton models and models previously employed
to calculate structure functions is the form of the nucleon
wave--function. In the latter the nucleon wave--function is a
product of Dirac spinors while in the former the nucleon appears as
a collectively excited (topologically) non--trivial meson
configuration.
As in the original bag model study \cite{Ja75}
of structure functions for
localized field configurations, the structure functions are most
easily accessible when the current operator is at most quadratic in
the fundamental fields and the propagation of the interpolating
field can be regarded as free.
Although the latter approximation is
well justified in the Bjorken limit the former condition is
difficult to satisfy in soliton models where mesons
are fundamental fields ({\it e.g.} the Skyrme
model \cite{Sk61}, \cite{Ad83},
the chiral quark model of ref. \cite{Bi85} or the chiral bag model
\cite{Br79}).
Such model Lagrangians typically possess all orders of the fundamental
pion field. In that case the current operator is not confined to
quadratic order and the calculation of the hadronic tensor
(see eq. (\ref{deften}) below) requires drastic approximations.
In this respect the chirally invariant NJL model
is preferred because it is entirely defined in terms of quark
degrees of freedom and formally the current
possesses the structure as in a non--interacting
model. This makes the evaluation of the hadronic tensor
feasible.
Nevertheless after bosonization
the hadronic currents
are uniquely defined
functionals of the solitonic meson fields.
The paper is organized as follows: In section 2 we give a brief
discussion of the standard operator product expansion (OPE) analysis
to establish the connection between the effective models for the
baryons at low energies and the quark--parton model description.
In section 3 we briefly review the NJL chiral soliton.
In section 4 we extract the polarized structure
functions from the hadronic tensor, eq. (\ref{had})
exploiting the ``valence quark approximation".
Section 5 displays the results of the spin--polarized
structure functions calculated in the NJL chiral soliton model
within this approximation and compare this result with a recent
low--renormalization point parametrization \cite{Gl95}.
In section 6 we use Jaffe's prescription \cite{Ja80} to impose
proper support for the structure
function within the interval $x\in \left[0,1\right]$.
Subsequently the structure functions
are evolved \cite{Al73}, \cite{Al94}, \cite{Ali91}
from the scale characterizing the NJL--model to the
scale associated with the experimental data. Section 7 serves to
summarize these studies and to propose further explorations.
In appendix A we list explicit analytic expressions for the isoscalar
and isovector polarized structure functions.
Appendix B summarizes details on the evolution of the twist--3
structure function, ${\overline{g}}_2\left(x,Q^2\right)$.
\bigskip
\section{DIS and the Chiral Soliton}
\bigskip
It has been a long--standing effort to establish the connection
between the chiral soliton picture of the baryon, which essentially
views baryons as mesonic lumps, and the quark parton model, which
regards baryons as composites of almost non--interacting, point--like
quarks. While the former has been quite successful in describing
static properties of the nucleon, the latter, being firmly
established within the context of deep inelastic scattering (DIS),
has been employed extensively to calculate the short distance or
perturbative processes within QCD.
In fact this connection can be made through the OPE.
The discussion begins with the hadronic tensor for electron--nucleon
scattering,
\begin{eqnarray}
W_{\mu\nu}(q)=\frac{1}{4\pi}\int d^4 \xi \
{\rm e}^{iq\cdot\xi}
\langle N |\left[J_\mu(\xi),J^{\dag}_\nu(0)\right]|N\rangle\ ,
\label{deften}
\end{eqnarray}
where $J_\mu={\bar q}(\xi)\gamma_\mu {\cal Q} q(\xi)$ is the
electromagnetic
current, ${\cal Q}=\left(\frac{2}{3},\frac{-1}{3}\right)$ is the (two
flavor) quark charge matrix and $|N\rangle$ refers to the
nucleon state. In the DIS regime the OPE enables one to
express the product of these
currents in terms of the forward Compton scattering
amplitude $T_{\mu\nu}(q)$ of a virtual photon
from a nucleon
\begin{eqnarray}
T_{\mu\nu}(q)=i\int d^4 \xi \
{\rm e}^{iq\cdot\xi}
\langle N |T\left(J_\mu(\xi)J^{\dag}_\nu(0)\right)|N\rangle\ ,
\label{im}
\end{eqnarray}
by an expansion on the light cone $\left(\xi^2 \rightarrow 0\right)$
using a set of renormalized local
operators \cite{muta87}, \cite{rob90}. In the Bjorken
limit the influence of these operators is determined
by the twist, $\tau$ or the light cone singularity of their coefficient
functions. Effectively this becomes a power series in the inverse
of the Bjorken variable $x=-q^{2}/2P\cdot q$, with $P_\mu$ being the
nucleon momentum:
\begin{eqnarray}
T_{\mu\nu}(q)\ =\sum_{n,i,\tau}
\left(\frac{1}{x}\right)^{n}\ e_{\mu\nu}^{i}\left(q,P,S\right)\
C^{n}_{\tau,i}(Q^2/\mu^2,\alpha_s(\mu^2)){\cal O}^{n}_{\tau,i}(\mu^2)
(\frac{1}{Q^2})^{\frac{\tau}{2}\ - 1}\ .
\label{series}
\end{eqnarray}
Here the index $i$ runs over all scalar matrix
elements, ${\cal O}_{\tau,i}^{n}(\mu^2)$, with the
same Lorentz structure (characterized by
the tensor, $e_{\mu\nu}^{i}$). Furthermore,
$S^{\mu}$ is the spin
of the nucleon,
$\left(S^2=-1\ , S\cdot P\ =0\right)$ and $Q^2=-q^2 > 0$.
As is evident, higher twist contributions
are suppressed by powers of $1/{Q^2}$.
The coefficient functions,
$C^{n}_{\tau,i}(Q^2/\mu^2,\alpha_s(\mu^2))$
are target independent and in principle include all
QCD radiative corrections. Their $Q^2$
variation is determined from the solution of the renormalization
group equations and logarithmically diminishes at large $Q^2$.
On the other hand the reduced--matrix elements,
${\cal O}_{\tau,i}^{n}(\mu^2)$,
depend only on the renormalization
scale $\mu^2$ and reflect the non--perturbative properties
of the nucleon \cite{ans95}.
The optical theorem states that the hadronic tensor is
given in terms of the imaginary part of the virtual Compton scattering
amplitude, $W_{\mu\nu}=\frac{1}{2\pi}{\rm Im}\ T_{\mu\nu}$.
From the analytic properties of $T_{\mu\nu}(q)$,
together with eq. (\ref{series}) an infinite set of sum rules
result for the form factors, ${\cal W}_{i}\left(x,Q^2\right)$,
which are defined via the Lorentz covariant decomposition
$W_{\mu\nu}(q)=e_{\mu\nu}^{i}{\cal W}_i\left(x,Q^2\right)$.
These sum rules read
\begin{eqnarray}
\int^{1}_{0}dx\ x^{n-1}\ {\cal W}_{i}\left(x,Q^2\right)&=&
\sum _{\tau}\
C^{n}_{\tau,i}\left(Q^2/\mu^2,\alpha_s(\mu^2)\right)
{\cal O}_{\tau,i}^{n}(\mu^2)
(\frac{1}{Q^2})^{\frac{\tau}{2}\ - 1}\ .
\label{ope}
\end{eqnarray}
In the
{\em impulse approximation}
(i.e. neglecting radiative
corrections) \cite{Ja90,Ji90,Ja91}
one can directly sum the OPE gaining direct access
to the structure functions in terms of the reduced matrix elements
${\cal O}_{\tau,i}^{n}(\mu^2)$.
When calculating the renormalization--scale dependent
matrix elements, ${\cal O}_{\tau,i}^{n}(\mu^2)$
within QCD, $\mu^2$ is an arbitrary parameter adjusted to ensure
rapid convergence of the perturbation series.
However, given the difficulties of obtaining a
satisfactory description of the nucleon
as a bound--state in the $Q^2$ regime of DIS processes
it is customary to calculate these
matrix elements in models at a
low scale $\mu^2$ and subsequently evolve these results
to the relevant DIS momentum region of the data
employing, for example, the
Altarelli--Parisi evolution \cite{Al73}, \cite{Al94}.
In this context, the scale, $\mu^2 \sim \Lambda_{QCD}^{2}$,
characterizes the non--perturbative regime
where it is possible to formulate a nucleon
wave--function from which structure functions
are computed.
Here we will utilize the NJL chiral--soliton model to
calculate the spin--polarized nucleon structure functions
at the scale, $\mu^2$, subsequently evolving
the structure functions according to the Altarelli--Parisi scheme.
This establishes the connection between chiral soliton and the
parton models. In addition we compare the structure functions
calculated in the NJL model to a parameterization of spin structure
function \cite{Gl95} at a scale commensurate with our model.
\bigskip
\section{The Nucleon State in the NJL Model}
\bigskip
The Lagrangian of the NJL model reads
\begin{eqnarray}
{\cal L} = \bar q (i\partial \hskip -0.5em / - m^0 ) q +
2G_{\rm NJL} \sum _{i=0}^{3}
\left( (\bar q \frac {\tau^i}{2} q )^2
+(\bar q \frac {\tau^i}{2} i\gamma _5 q )^2 \right) .
\label{NJL}
\end{eqnarray}
Here $q$, $m^0$ and $G_{\rm NJL}$ denote the quark field, the
current quark mass and a dimensionful coupling constant, respectively.
When integrating out the gluon fields from QCD a current--current
interaction remains, which is mediated by the gluon propagator.
Replacing this gluon propagator by a local contact interaction and
performing the appropriate Fierz--transformations yields the
Lagrangian (\ref{NJL}) in leading order of $1/N_c$ \cite{Re90},
where $N_c$ refers to the number of color degrees of freedom. It is
hence apparent that the interaction term in eq. (\ref{NJL}) is a
remnant of the gluon fields, so that gluonic effects are implicitly included
in the model described by the Lagrangian (\ref{NJL}).
Application of functional bosonization techniques \cite{Eb86} to the
Lagrangian (\ref{NJL}) yields the mesonic action
\begin{eqnarray}
{\cal A}&=&{\rm Tr}_\Lambda\log(iD)+\frac{1}{4G_{\rm NJL}}
\int d^4x\ {\rm tr}
\left(m^0\left(M+M^{\dag}\right)-MM^{\dag}\right)\ ,
\label{bosact} \\
D&=&i\partial \hskip -0.5em /-\left(M+M^{\dag}\right)
-\gamma_5\left(M-M^{\dag}\right)\ .
\label{dirac}
\end{eqnarray}
The composite scalar ($S$) and pseudoscalar ($P$) meson fields
are contained in $M=S+iP$ and appear as quark--antiquark bound
states. The NJL model embodies the approximate chiral symmetry of QCD
and has to be understood as an effective (non--renormalizable) theory
of the low--energy quark flavor dynamics.
For regularization, which is indicated by the cut--off
$\Lambda$, we will adopt the proper--time scheme \cite{Sch51}.
The free parameters of the model are the current quark mass $m^0$,
the coupling constant $G_{\rm NJL}$ and the cut--off $\Lambda$.
Upon expanding ${\cal A}$ to quadratic order in $M$ these parameters are
related to the pion mass, $m_\pi=135{\rm MeV}$ and pion decay constant,
$f_\pi=93{\rm MeV}$. This leaves one undetermined parameter which we
choose to be the vacuum expectation value $m=\langle M\rangle$. For
apparent reasons $m$ is called the constituent quark mass. It is
related to $m^0$, $G_{\rm NJL}$ and $\Lambda$ via the gap--equation,
{\it i.e.} the equation of motion for the scalar field $S$\cite{Eb86}. The
occurrence of this vacuum expectation value reflects the spontaneous
breaking of chiral symmetry and causes the pseudoscalar fields to
emerge as (would--be) Goldstone bosons.
As the NJL model soliton has exhaustively been discussed in
recent review articles \cite{Al96}, \cite{Gok96}
we only present those features,
which are relevant for the computation of the structure functions
in the valence quark approximation.
The chiral soliton is given by the hedgehog configuration
of the meson fields
\begin{eqnarray}
M_{\rm H}(\mbox{\boldmath $x$})=m\ {\rm exp}
\left(i\mbox{\boldmath $\tau$}\cdot{\hat{\mbox{\boldmath $x$}}}
\Theta(r)\right)\ .
\label{hedgehog}
\end{eqnarray}
In order to compute the functional trace in eq. (\ref{bosact}) for this
static configuration we express the
Dirac operator (\ref{dirac}) as $D=i\gamma_0(\partial_t-h)$,
where
\begin{eqnarray}
h=\mbox{\boldmath $\alpha$}\cdot\mbox{\boldmath $p$}+m\
{\rm exp}\left(i\gamma_5\mbox{\boldmath $\tau$}
\cdot{\hat{\mbox{\boldmath $x$}}}\Theta(r)\right)\
\label{hamil}
\end{eqnarray}
is the corresponding Dirac Hamiltonian. We denote the eigenvalues
and eigenfunctions of $h$ by $\epsilon_\mu$ and $\Psi_\mu$,
respectively. Explicit expressions for these wave--functions are
displayed in appendix A. In the proper time regularization scheme
the energy functional of the NJL model is found to be \cite{Re89,Al96},
\begin{eqnarray}
E[\Theta]=
\frac{N_C}{2}\epsilon_{\rm v}
\left(1+{\rm sgn}(\epsilon_{\rm v})\right)
&+&\frac{N_C}{2}\int^\infty_{1/\Lambda^2}
\frac{ds}{\sqrt{4\pi s^3}}\sum_\nu{\rm exp}
\left(-s\epsilon_\nu^2\right)
\nonumber \\* && \hspace{1.5cm}
+\ m_\pi^2 f_\pi^2\int d^3r \left(1-{\rm cos}\Theta(r)\right) ,
\label{efunct}
\end{eqnarray}
with $N_C=3$ being the number of color degrees of freedom.
The subscript ``${\rm v}$" denotes the valence quark level. This state
is the distinct level bound in the soliton background, {\it i.e.}
$-m<\epsilon_{\rm v}<m$. The chiral angle, $\Theta(r)$, is
obtained by self--consistently extremizing $E[\Theta]$ \cite{Re88}.
States possessing good spin and isospin quantum numbers are
generated by rotating the hedgehog field
\cite{Ad83}
\begin{eqnarray}
M(\mbox{\boldmath $x$},t)=
A(t)M_{\rm H}(\mbox{\boldmath $x$})A^{\dag}(t)\ ,
\label{collrot}
\end{eqnarray}
which introduces the collective coordinates $A(t)\in SU(2)$. The
action functional is expanded \cite{Re89} in the angular velocities
\begin{eqnarray}
2A^{\dag}(t)\dot A(t)=
i\mbox{\boldmath $\tau$}\cdot\mbox{\boldmath $\Omega$} \ .
\label{angvel}
\end{eqnarray}
In particular the valence quark wave--function receives a first
order perturbation
\begin{eqnarray}
\Psi_{\rm v}(\mbox{\boldmath $x$},t)=
{\rm e}^{-i\epsilon_{\rm v}t}A(t)
\left\{\Psi_{\rm v}(\mbox{\boldmath $x$})
+\frac{1}{2}\sum_{\mu\ne{\rm v}}
\Psi_\mu(\mbox{\boldmath $x$})
\frac{\langle \mu |\mbox{\boldmath $\tau$}\cdot
\mbox{\boldmath $\Omega$}|{\rm v}\rangle}
{\epsilon_{\rm v}-\epsilon_\mu}\right\}=:
{\rm e}^{-i\epsilon_{\rm v}t}A(t)
\psi_{\rm v}(\mbox{\boldmath $x$}).
\label{valrot}
\end{eqnarray}
Here $\psi_{\rm v}(\mbox{\boldmath $x$})$ refers to the spatial part
of the body--fixed valence quark wave--function with the rotational
corrections included. Nucleon states $|N\rangle$ are obtained
by canonical quantization of the collective coordinates, $A(t)$. By
construction these states live in the Hilbert space of a rigid rotator.
The eigenfunctions are Wigner $D$--functions
\begin{eqnarray}
\langle A|N\rangle=\frac{1}{2\pi}
D^{1/2}_{I_3,-J_3}(A)\ ,
\label{nwfct}
\end{eqnarray}
with $I_3$ and $J_3$ being respectively the isospin and spin
projection quantum numbers of the nucleon.
\bigskip
\section{Polarized Structure Functions in the NJL model}
\bigskip
The starting point for computing nucleon structure functions
is the hadronic tensor, eq. (\ref{deften}). The polarized structure
functions are extracted from its antisymmetric
piece, $W^{(A)}_{\mu\nu}=(W_{\mu\nu}-W_{\nu\mu})/2i$.
Lorentz invariance implies that the
antisymmetric portion, characterizing polarized
lepton--nucleon scattering, can be decomposed into
the polarized structure functions,
$g_1(x,Q^2)$ and $g_2(x,Q^2)$,
\begin{eqnarray}
W^{(A)}_{\mu\nu}(q)=
i\epsilon_{\mu\nu\lambda\sigma}\frac{q^{\lambda}M_N}{P\cdot q}
\left\{g_1(x,Q^2)S^{\sigma}+
\left(S^{\sigma}-\frac{q\cdot S}{q\cdot p}P^{\sigma}\right)
g_2(x,Q^2)\right\}\ ,
\label{had}
\end{eqnarray}
again, $P_\mu$ refers to the nucleon momentum and $Q^2=-q^2$.
The tensors multiplying the structure functions
in eq. (\ref{had})
should be identified with the Lorentz tensors $e_{\mu\nu}^{i}$
in (\ref{series}).
Contracting $W^{(A)}_{\mu\nu}$ with the longitudinal
$\Lambda^{\mu\nu}_{L}$ and transverse
$\Lambda^{\mu\nu}_{T}$ projection operators \cite{ans95},
\begin{eqnarray}
\Lambda^{\mu\nu}_{L}&=&\frac{2}{b}\left\{2P\cdot q\,x\,S_{\lambda}+
\frac{1}{q\cdot S}\left[(q\cdot S)^{2}-
\left(\frac{P\cdot q}{M}\right)^2\right] q_{\lambda}\right\}\ P_\tau \
\epsilon^{\mu\nu\lambda\tau },
\label{proj1}\\
\Lambda^{\mu\nu}_{T}&=&\frac{2}{b}
\left\{\left[\left(\frac{P\cdot q}{M}\right)^2+2P\cdot q\,x\right]
S_\lambda + \left(q\cdot S\right)q_\lambda\right\}\ P_\tau \
\epsilon^{\mu\nu\lambda\tau }
\label{projT}
\end{eqnarray}
and choosing the pertinent polarization,
yields the longitudinal component
\begin{eqnarray}
g_L(x,Q^2)=g_1(x,Q^2)\ ,
\end{eqnarray}
as well as the transverse combination
\begin{eqnarray}
g_T(x,Q^2)=g_1(x,Q^2) + g_2(x,Q^2)\ .
\end{eqnarray}
Also, $b=-4M\left\{\left(\frac{P\cdot q}{M}\right)^2 + 2P\cdot q\,x-
\left(q\cdot{S}\right)^2\right\}$. In the Bjorken limit, which
corresponds to the kinematical regime
\begin{eqnarray}
q_0=|\mbox{\boldmath $q$}| - M_N x
\quad {\rm with}\quad
|\mbox{\boldmath $q$}|\rightarrow \infty \ ,
\label{bjlimit}
\end{eqnarray}
the antisymmetric component of the hadronic tensor
becomes \cite{Ja75},
\begin{eqnarray}
W^{(A)}_{\mu\nu}(q)&=&\int \frac{d^4k}{(2\pi)^4} \
\epsilon_{\mu\rho\nu\sigma}\ k^\rho\
{\rm sgn}\left(k_0\right) \ \delta\left(k^2\right)
\int_{-\infty}^{+\infty} dt \ {\rm e}^{i(k_0+q_0)t}
\nonumber \\* &&
\times \int d^3x_1 \int d^3x_2 \
{\rm exp}\left[-i(\mbox{\boldmath $k$}+\mbox{\boldmath $q$})\cdot
(\mbox{\boldmath $x$}_1-\mbox{\boldmath $x$}_2)\right]
\nonumber \\* &&
\times \langle N |\left\{
{\bar \Psi}(\mbox{\boldmath $x$}_1,t){\cal Q}^2\gamma^\sigma\gamma^{5}
\Psi(\mbox{\boldmath $x$}_2,0)+
{\bar \Psi}(\mbox{\boldmath $x$}_2,0){\cal Q}^2\gamma^\sigma\gamma^{5}
\Psi(\mbox{\boldmath $x$}_1,t)\right\}| N \rangle \ ,
\label{stpnt}
\end{eqnarray}
where $\epsilon_{\mu\rho\nu\sigma}\gamma^\sigma \gamma^5$
is the
antisymmetric combination of $\gamma_\mu\gamma_\rho\gamma_\nu$.
The matrix element between the nucleon states is to be taken in
the space of the collective coordinates, $A(t)$ (see eqs.
(\ref{collrot}) and (\ref{nwfct})) as the object in curly brackets
is an operator in this space. In deriving the expression (\ref{stpnt})
the {\it free} correlation function for the intermediate quark
fields has been assumed\footnote{Adopting a dressed correlation will
cause corrections starting at order twist--4 in QCD \cite{Ja96}.}
after applying Wick's theorem to the product of quark currents in eq.
(\ref{deften}) \cite{Ja75}. The use of the {\it free} correlation
function is justified because in the Bjorken limit (\ref{bjlimit})
the momentum, $k$, of the intermediate quark state is highly off--shell;
the intermediate quark fields therefore carry very large momenta and are
not sensitive to momenta typical for the soliton configuration. This
procedure reduces the commutator $[J_\mu(\mbox{\boldmath $x$}_1,t),
J^{\dag}_\nu(\mbox{\boldmath $x$}_2,0)]$ of the quark currents in
the definition (\ref{deften}) to objects which are merely bilinear
in the quark fields.
Accordingly, the intermediate quark states are taken to be massless,
{\it cf.} eq. (\ref{stpnt}).
Since the NJL model is originally defined in terms of quark degrees of
freedom, quark bilinears as in eq. (\ref{stpnt}) can be computed
from the functional
\begin{eqnarray}
\hspace{-1cm}
\langle {\bar q}(x){\cal Q}^{2} q(y) \rangle&=&
\int D{\bar q} Dq \ {\bar q}(x){\cal Q}^{2} q(y)\
{\rm exp}\left(i \int d^4x^\prime\ {\cal L}\right)
\nonumber \\*
&=&\frac{\delta}{i\delta\alpha(x,y)}\int D{\bar q} Dq \
{\rm exp}\left(i\int d^4x^\prime d^4y^\prime
\left[\delta^4(x^\prime - y^\prime ){\cal L} \right. \right.
\nonumber \\* && \hspace{5cm}
\left. \left.
+\ \alpha(x^\prime,y^\prime){\bar q}(x^\prime){\cal Q}^{2}
q(y^\prime)\right] \right)\Big|_{\alpha(x,y)=0}\ .
\label{gendef}
\end{eqnarray}
The introduction of the bilocal source $\alpha(x,y)$ facilitates
the functional bosonization after which eq. (\ref{gendef})
takes the form
\begin{eqnarray}
\frac{\delta}{\delta\alpha(x,y)}{\rm Tr}_{\Lambda}{\rm log}
\left(\delta^4(x-y)D+\alpha(x,y){\cal Q}^{2})\right)
\Big|_{\alpha(x,y)=0}\ \ .
\label{gendef1}
\end{eqnarray}
The operator $D$ is defined in eq. (\ref{dirac}).
The correlation $\langle {\bar q}(x){\cal Q}^2 q(y) \rangle$ depends on
the angle between $\mbox{\boldmath $x$}$ and $\mbox{\boldmath $y$}$.
Since in general the functional (\ref{gendef}) involves quark states
of all angular momenta ($l$) a technical difficulty arises because
this angular dependence has to be treated numerically. The major
purpose of the present paper is to demonstrate that polarized
structure functions can indeed be computed from a chiral soliton.
With this in mind we will adopt the valence quark approximation
where the quark configurations in (\ref{gendef}) are restricted to the
valence quark level. Accordingly the valence quark wave--function
(\ref{valrot}) is substituted into eq. (\ref{stpnt}). Then only quark
orbital angular momenta up to $l=2$ are relevant. From a physical
point of view this approximation is justified for moderate constituent
quark masses ($m\approx400{\rm MeV}$) because in that parameter
region the soliton properties are dominated by their valence quark
contributions \cite{Al96}, \cite{Gok96}. In particular this is the
case for the axial properties of the nucleon.
In the next step the polarized structure functions, $g_1(x,\mu^2)$
and $g_T(x,\mu^2)$, are extracted according to eqs. (\ref{proj1})
and (\ref{projT}). In the remainder of this section we will omit
explicit reference to the scale $\mu^2$.
We choose the frame such that the nucleon is
polarized along the
positive--$\mbox{\boldmath $z$}$ and positive--$\mbox{\boldmath $x$}$
directions in the longitudinal and transverse cases, respectively.
Note also that this implies
the choice ${\mbox{\boldmath $q$}}=q\hat{\mbox{\boldmath $z$}}$.
When extracting the structure functions the integrals
over the time coordinate in eq. (\ref{stpnt}) can readily be done yielding the conservation
of energy for forward and backward moving intermediate quarks. Carrying
out the integrals over $k_0$ and $k=|\mbox{\boldmath $k$}|$ gives for
the structure functions
\begin{eqnarray}
\hspace{-1cm}
g_1(x)&=&-N_C\frac{M_N}{\pi}
\langle N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}|\int d\Omega_{\mbox{\boldmath
$k$}} k^2
\Bigg\{\tilde\psi_{\rm v}^{\dag}(\mbox{\boldmath $p$})
\left(1-\mbox{\boldmath $\alpha$}\cdot
{\hat{\mbox{\boldmath $k$}}}\right)\gamma^5\Gamma
\tilde\psi_{\rm v}(\mbox{\boldmath $p$})
\Big|_{k=q_0+\epsilon_{\rm v}}
\nonumber \\* && \hspace{3cm}
+\tilde\psi_{\rm v}^{\dag}(-\mbox{\boldmath $p$})
\left(1-\mbox{\boldmath $\alpha$}\cdot
{\hat{\mbox{\boldmath $k$}}}\right)\gamma^5\Gamma
\tilde\psi_{\rm v}(-\mbox{\boldmath $p$})
\Big|_{k=q_0-\epsilon_{\rm v}}
\Bigg\} |N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}\rangle\ ,
\label{valg1}\\
\hspace{-1cm}
g_{T}(x)&=&g_1(x)+g_2(x)
\nonumber \\*
&=&-N_C\frac{M_N}{\pi} \langle N,\frac{1}{2}\hat{\mbox{\boldmath $x$}} |
\int d\Omega_{\mbox{\boldmath $k$}} k^2
\Bigg\{\tilde\psi_{\rm v}^{\dag}(\mbox{\boldmath $p$})
\left(\mbox{\boldmath $\alpha$}\cdot
{\hat{\mbox{\boldmath $k$}}}\right)\gamma^5\Gamma
\tilde\psi_{\rm v}(\mbox{\boldmath $p$})
\Big|_{k=q_0+\epsilon_{\rm v}}
\nonumber \\* && \hspace{3cm}
+\tilde\psi_{\rm v}^{\dag}(-\mbox{\boldmath $p$})
\left(\mbox{\boldmath $\alpha$}\cdot
{\hat{\mbox{\boldmath $k$}}}\right)\gamma^5\Gamma
\tilde\psi_{\rm v}(-\mbox{\boldmath $p$})
\Big|_{k=q_0-\epsilon_{\rm v}}
\Bigg\} |N,\frac{1}{2}\hat{\mbox{\boldmath $x$}} \rangle\ ,
\label{valgt}
\end{eqnarray}
where $\mbox{\boldmath $p$}=\mbox{\boldmath $k$}+\mbox{\boldmath $q$}$
and $\Gamma =\frac{5}{18}\,{\bf 1} +\frac{1}{6}D_{3i}\tau_{i}$
with $D_{ij}=\frac{1}{2}\
{\rm tr}\left(\tau_{i}A(t)\tau_{j}A^{\dagger}\right)$ being the
adjoint representation of the collective
rotation {\it cf.} eq. (\ref{collrot}).
The second entry
in the states labels the spin orientation.
$N_C$ appears as a multiplicative factor
because the functional trace (\ref{gendef1}) includes the color
trace as well. Furthermore the Fourier transform of the
valence quark wave--function
\begin{eqnarray}
\tilde\psi_{\rm v}(\mbox{\boldmath $p$})=\int \frac{d^3x}{4\pi}\
\psi_{\rm v}(\mbox{\boldmath $x$})\
{\rm exp}\left(i\mbox{\boldmath $p$}\cdot
\mbox{\boldmath $x$}\right)
\label{ftval}
\end{eqnarray}
has been introduced. Also, note that the wave--function $\psi_{\rm v}$
contains an implicit dependence on the collective coordinates through
the angular velocity $\mbox{\boldmath $\Omega$}$, {\it cf.}
eq. (\ref{valrot}).
The dependence of the wave--function
$\tilde\psi(\pm\mbox{\boldmath $p$})$ on the integration variable
${\hat{\mbox{\boldmath $k$}}}$ is only implicit.
In the Bjorken
limit the integration variables may then be changed to \cite{Ja75}
\begin{eqnarray}
k^2 \ d\Omega_{\mbox{\boldmath $k$}} =
p dp\ d\Phi\ , \qquad p=|\mbox{\boldmath $p$}|\ ,
\label{intdp}
\end{eqnarray}
where $\Phi$ denotes the azimuth--angle between
$\mbox{\boldmath $q$}$ and $\mbox{\boldmath $p$}$.
The lower bound for the $p$--integral is attained when
$\mbox{\boldmath $k$}$ and $\mbox{\boldmath $q$}$ are anti--parallel;
$p^{\rm min}_\pm=|M_N x\mp \epsilon_{\rm v}|$
for $k=-\left(q_0\pm\epsilon_{\rm v}\right)$,
respectively. Since the wave--function
$\tilde\psi(\pm\mbox{\boldmath $p$})$ acquires its dominant
support for $p\le M_N$ the integrand is different from
zero only when $\mbox{\boldmath $q$}$ and $\mbox{\boldmath $k$}$
are anti--parallel. We may therefore take
${\hat{\mbox{\boldmath $k$}}}=-{\hat{\mbox{\boldmath $z$}}}$.
This is nothing but the light--cone description for computing
structure functions \cite{Ja91}. Although expected, this result is
non--trivial and will only come out in models which have a current
operator which, as in QCD, is formally identical to the one of
non--interacting quarks. The valence quark state possesses positive
parity yielding
$\tilde\psi(-\mbox{\boldmath $p$})=\gamma_0
\tilde\psi(\mbox{\boldmath $p$})$.
With this we arrive at the expression for the isoscalar
and isovector parts of the
polarized structure function in the valence quark approximation,
\begin{eqnarray}
\hspace{-.5cm}
g^{I=0}_{1,\pm}(x)&=&-N_C\frac{5\ M_N}{18\pi}
\langle N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}|
\int^\infty_{M_N|x_\mp|}p dp \int_0^{2\pi}d\Phi\
\nonumber \\* && \hspace{4cm}\times
\tilde\psi_{\rm v}^{\dag}(\mbox{\boldmath $p$}_\mp)
\left(1\pm\alpha_3\right)\gamma^5\tilde\psi_{\rm v}(\mbox{\boldmath $p$}_\mp)
|N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}\rangle
\label{g10} \\
\hspace{-.5cm}
g^{I=1}_{1,\pm}(x)&=&-N_C\frac{M_N}{6\pi}
\langle N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}| D_{3i}
\int^\infty_{M_N|x_\mp|}p dp \int_0^{2\pi}d\Phi\
\nonumber \\* && \hspace{4cm}\times
\tilde\psi_{\rm v}^{\dag}(\mbox{\boldmath $p$}_\mp)\tau_i
\left(1\pm\alpha_3\right)\gamma^5\tilde\psi_{\rm v}(\mbox{\boldmath $p$}_\mp)
|N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}\rangle\ ,
\label{g11}\\
\hspace{-.5cm}
g^{I=0}_{T,\pm}(x)&=&-N_C\frac{5\ M_N}{18\pi}
\langle N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}|
\int^\infty_{M_N|x_\mp|}p dp \int_0^{2\pi}d\Phi\
\nonumber \\* && \hspace{4cm}\times
\tilde\psi_{\rm v}^{\dag}(\mbox{\boldmath $p$}_\mp)
\alpha_3\gamma^5\tilde\psi_{\rm v}(\mbox{\boldmath $p$}_\mp)
|N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}\rangle
\label{gt0}\ , \\
\hspace{-.5cm}
g^{I=1}_{T,\pm}(x)&=&-N_C\frac{M_N}{6\pi}
\langle N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}| D_{3i}
\int^\infty_{M_N|x_\mp|}p dp \int_0^{2\pi}d\Phi\
\nonumber \\* && \hspace{4cm}\times
\tilde\psi_{\rm v}^{\dag}(\mbox{\boldmath $p$}_\mp)\tau_i
\alpha_3\gamma^5\tilde\psi_{\rm v}(\mbox{\boldmath $p$}_\mp)
|N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}\rangle\ ,
\label{gt1}
\end{eqnarray}
where $x_{\pm}=x\pm\epsilon_{\rm v}/{M_N}$ and
${\rm cos}(\Theta^\pm_p)={M_N}x_\pm/{p}$.
The complete structure functions are given by
\begin{eqnarray}
g_{1}(x)&=&g^{I=0}_{1,+}(x)+g^{I=1}_{1,+}(x)
-\left(g^{I=0}_{1,-}(x)-g^{I=1}_{1,-}(x)\right)
\label{gone} \\* \hspace{-1cm}
g_{T}(x)&=&g^{I=0}_{T,+}(x)+g^{I=1}_{T,+}(x)
-\left(g^{I=0}_{T,-}(x)-g^{I=1}_{T,-}(x)\right)\ .
\label{gtran}
\end{eqnarray}
Note also, that we have made explicit the isoscalar
$\left(I=0\right)$
and isovector $\left(I=1\right)$ parts.
The wave--function implicitly depends
on $x$ because
$\tilde\psi_{\rm v}(\mbox{\boldmath $p$}_\pm)=
\tilde\psi_{\rm v}(p,\Theta^\pm_p,\Phi)$
where the polar--angle, $\Theta^\pm_p$, between $\mbox{\boldmath $p$}_\pm$
and $\mbox{\boldmath $q$}$ is fixed for a given value of the Bjorken
scaling variable $x$.
Turning to the evaluation of the nucleon matrix elements defined
above we first note that the Fourier transform of the wave--function
is easily obtained because the angular parts are tensor spherical
harmonics in both coordinate and momentum spaces. Hence, only the
radial part requires numerical treatment.
Performing straightforwardly
the azimuthal integrations in eqs. (\ref{g10}) and (\ref{g11})
reveals that the surviving isoscalar part of the longitudinal structure
function, $g_{1}^{I=0}$, is linear in the angular velocity,
$\mbox{\boldmath $\Omega$}$. It is this part which is associated with the
proton--spin puzzle. Using the standard quantization condition,
$\mbox{\boldmath $\Omega$} =\mbox{\boldmath $J$}/\ \alpha^2$,
where $\alpha^2$ is the moment of inertia of the soliton
and further noting that
the ${\hat{\mbox{\boldmath $z$}}}$--direction is distinct,
the required nucleon matrix elements are
$\langle N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}|
J_{z}|N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}\rangle=\frac{1}{2}$.
Thus, $g_1^{I=0}$ is identical for all nucleon states.
Choosing a symmetric ordering \cite{Al93}, \cite{Sch95} for
the non--commuting operators,
$D_{ia}J_j\rightarrow \frac{1}{2}\left\{D_{ia},J_j \right\}$
we find that the nucleon matrix elements associated with the
cranking portion of the isovector piece, $\langle
N,\pm\frac{1}{2}\hat{\mbox{\boldmath $z$}}|\left\{D_{3y},J_x
\right\}|N,\pm\frac{1}{2}\hat{\mbox{\boldmath $z$}}\rangle$, vanish.
With this ordering we avoid the occurrence of PCAC--violating pieces in
the axial current.
The surviving terms stem solely from the classical part of the
valence quark wave--function,
$\Psi_{\rm v}\left({\mbox{\boldmath $x$}}\right)$ in
combination with the collective Wigner--D function, $D_{3z}$. Again
singling out the ${\hat{\mbox{\boldmath $z$}}}$--direction,
the nucleon matrix elements become \cite{Ad83}
\begin{eqnarray}
\langle
N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}|D_{3z}|N,\frac{1}{2}\hat{\mbox{\boldmath $z$}}
\rangle = -\frac{2}{3} i_3\ ,
\label{matz}
\end{eqnarray}
where $i_3=\pm\frac{1}{2}$ is the nucleon isospin.
For the transverse structure function, the surviving piece of the
isoscalar contribution is again linear in the angular velocities.
The transversally polarized nucleon gives rise to
the matrix elements,
$\langle N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}
|J_{x}|N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}\rangle=\frac{1}{2}$.
Again choosing symmetric ordering for terms
arising from the cranking contribution, the nucleon matrix elements
$\langle
N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}|\left\{D_{3y},J_y
\right\}|N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}\rangle$
and $\langle
N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}|\left\{D_{33},J_y
\right\}|N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}\rangle$ vanish.
As in the longitudinal case, there is a surviving isovector contribution
stemming solely from the classical part of the
valence quark wave--function, $\Psi_{\rm v}({\mbox{\boldmath $x$}})$
in combination with the collective Wigner--D function, $D_{3x}$.
Now singling out the $\hat{\mbox{\boldmath $x$}}$--direction
the relevant nucleon matrix elements become \cite{Ad83},
\begin{eqnarray}
\langle
N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}|D_{3x}
|N,\frac{1}{2}\hat{\mbox{\boldmath $x$}}\rangle
= -\frac{2}{3} i_3\ .
\label{matx}
\end{eqnarray}
Explicit expressions in terms of the valence quark
wave functions, entering eqs. (\ref{gone}) and (\ref{gtran}), for
$g^{I=0}_{1,\pm}(x)$, $g^{I=1}_{1,\pm}(x)$, $g^{I=0}_{2,\pm}(x)$
and $g^{I=1}_{2,\pm}(x)$ are listed in
appendix A.
Using the expressions given in the appendix A
it is straightforward to verify the Bjorken sum rule \cite{Bj66}
\begin{eqnarray}
\Gamma_1^{p}-\Gamma_1^{n}&=&\int_{0}^{1} dx\ \left(g_{1}^{p}(x)-
g_{1}^{n}(x)\right)=g_{A}/6\ ,
\label{bjs}
\end{eqnarray}
the Burkhardt--Cottingham sum rule \cite{bur70}
\begin{eqnarray}
\Gamma_2^{p}&=&\int_{0}^{1} dx\ g_{2}^{p}(x)=0\ ,
\label{bcs}
\end{eqnarray}
as well as the axial singlet charge
\begin{eqnarray}
\Gamma_1^{p}+\Gamma_1^{n}&=&\int_{0}^{1} dx\ \left(g_{1}^{p}(x)+
g_{1}^{n}(x)\right)=g_A^{0}\ ,
\label{gas}
\end{eqnarray}
in this model calculation when the moment of inertia
$\alpha^2$, as well as the axial charges $g_A^0$ and $g_A$, are
confined to their dominating valence quark pieces.
We have used
\begin{eqnarray}
g_A&=&-\frac{N_C}{3}\int d^3 r
{\bar\psi}_{\rm v}^{\dagger}(\mbox{\boldmath $r$})\gamma_3
\gamma_5\tau_3 \psi_{\rm v}(\mbox{\boldmath $r$})
\label{gaval} \\
g_A^0&=&\frac{N_C}{\alpha_{\rm v}^2}
\int d^3 r{\bar\psi}_{\rm v}^{\dagger}(\mbox{\boldmath $r$})\gamma_3
\gamma_5\psi_{\rm v}(\mbox{\boldmath $r$}) \ .
\label{ga0val}
\end{eqnarray}
to verify the Bjorken sum rule as well as the axial singlet charge.
This serves as an analytic check on our treatment.
Here $\alpha_{\rm v}^2$ refers to the valence quark contribution
to the moment of inertia, {\it i.e.}
$\alpha_{\rm v}^2=(1/2)\sum_{\mu\ne{\rm v}}
|\langle\mu|\tau_3|{\rm v}\rangle|^2/(\epsilon_\mu-\epsilon_{\rm v})$.
The restriction to the valence quark piece is required by consistency
with the Adler sum rule in the calculation of the unpolarized
structure functions in this approximation \cite{wei96}.
\bigskip
\section{Numerical Results}
\bigskip
In this section we display the results of the spin--polarized
structure functions calculated from eqs. (\ref{g1zro}--\ref{gton})
for constituent quark masses of $m=400{\rm MeV}$ and $450{\rm MeV}$.
In addition to checking the above mentioned sum rules
see eqs. (\ref{bjs})--(\ref{gas}),
we have numerically calculated the
first moment of $g_{1}^{p}(x,\mu^{2})$\footnote{Which in
this case amounts to the Ellis--Jaffe sum rule \cite{Ja74}
since we have omitted the strange degrees of freedom. A careful
treatment of symmetry breaking effects indicates that the role of the
strange quarks is
less important than originally assumed \cite{jon90,Li95}.}
\begin{eqnarray}
\Gamma_1^{p}&=&\int_{0}^{1} dx\ g_{1}^{p}(x)\ ,
\label{ejs}
\end{eqnarray}
and the
Efremov--Leader--Teryaev (ELT) sum rule \cite{Ef84}
\begin{eqnarray}
\Gamma_{\rm ELT}&=&\int_{0}^{1} dx\ x\left(g_{1}^{p}(x)
+2g_{2}^{n}(x)\right)\ .
\label{elts}
\end{eqnarray}
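For completeness, the moments discussed here are simple one--dimensional
integrals of the (projected) structure functions. A minimal numerical sketch,
with purely illustrative placeholder shapes instead of the actual model
output, could read as follows:
\begin{verbatim}
import numpy as np

# Placeholder shapes on a Bjorken-x grid; in practice these arrays would
# hold the model structure functions (they are NOT the NJL predictions).
x   = np.linspace(1e-4, 1.0, 2000)
g1p = x * (1.0 - x) ** 3
g2n = -0.5 * x * (1.0 - x) ** 3

Gamma1_p  = np.trapz(g1p, x)                    # first moment of g_1^p
Gamma_ELT = np.trapz(x * (g1p + 2.0 * g2n), x)  # ELT moment defined above
print("Gamma_1^p :", Gamma1_p)
print("Gamma_ELT :", Gamma_ELT)
\end{verbatim}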
We summarize the results for the sum rules in table 1.
When comparing these results with the experimental data one observes
two short--comings, which are already known from studies of the
static properties in this model. First, the axial charge
$g_A\approx 0.73$ comes
out too low as the experimental value is $g_A=1.25$. It has
recently been speculated that a different ordering of the collective
operators $D_{ai}J_j$ ({\it cf.} section 4) may fill the gap
\cite{Wa93,Gok96}. However, since such an ordering unfortunately gives
rise to PCAC violating contributions to the axial current \cite{Al93}
and furthermore inconsistencies with $G$--parity may occur in
the valence quark approximation \cite{Sch95} we will not pursue
this issue any further at this time. Second, the predicted axial singlet
charge $g_A^0\approx 0.6$ is approximately twice as large
as the number extracted from experiment\footnote{Note
that this analysis assumes $SU(3)$ flavor symmetry, which, of course,
is not manifest in our two flavor model.} $0.27\pm0.04$\cite{ell96}.
This can be
traced back to the valence quark approximation as there are direct
and indirect contributions to $g_A^0$ from both the polarized
vacuum and the valence quark level. Before canonical quantization
of the collective coordinates one finds a sum of valence
and vacuum pieces
\begin{eqnarray}
g_A^0=2\left(g_{\rm v}^0+g_{\rm vac}^0\right)\Omega_3
=\frac{g_{\rm v}^0+g_{\rm vac}^0}
{\alpha^2_{\rm v}+\alpha^2_{\rm vac}} \ .
\label{ga0val1}
\end{eqnarray}
Numerically the vacuum piece is negligible, {\it i.e.}
$g_{\rm vac}^0/g_{\rm v}^0\approx 2\%$. Canonical quantization
subsequently involves the moment of inertia
$\alpha^2=\alpha^2_{\rm v}+\alpha^2_{\rm vac}$, which also has
valence and vacuum pieces. In this case, however, the vacuum
part is not so small: $\alpha^2_{\rm vac}/\alpha^2\approx25\%$.
Hence the full treatment of the polarized vacuum will drastically
improve the agreement with the empirical value for $g_A^0$.
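A rough estimate of the size of this effect follows directly from the
numbers quoted above: with $g_{\rm vac}^0/g_{\rm v}^0\approx 0.02$ and
$\alpha^2_{\rm vac}/\alpha^2\approx 0.25$, eq. (\ref{ga0val1}) suggests
\begin{eqnarray}
g_A^0\Big|_{\rm full}&\approx&
\left(1+0.02\right)\left(1-0.25\right)\,
\frac{g_{\rm v}^0}{\alpha^2_{\rm v}}
\approx 0.77\times 0.6\approx 0.46\ ,
\nonumber
\end{eqnarray}
which indeed moves the prediction considerably closer to the empirical value.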
On the other hand our model calculation nicely reproduces the
Ellis--Jaffe sum rule since the empirical value is $0.136$.
Note that this comparison is legitimate since neither the
derivation of this sum rule nor our model imply strange quarks.
While the vanishing Burkhardt--Cottingham sum rule can be
shown analytically in this model, the small value for the
Efremov--Leader--Teryaev sum rule is a numerical prediction.
Recently, it has been demonstrated \cite{So94} that the ELT
sum rule (\ref{elts}), which is derived within the parton model,
neither vanishes in the Center of Mass bag model\cite{So94}
nor is supported by the
SLAC E143 data \cite{slac96}. This is also the case for our
NJL--model calculation, as can be seen from table 1.
In figure 1 we display the spin structure functions
$g_{1}^{p}(x,\mu^{2})$ and $g_{2}^{p}(x,\mu^{2})$ along with the
twist--2 piece, $g_{2}^{WW(p)}\left(x,\mu^{2}\right)$ and twist--3
piece, ${\overline{g}}_{2}^{p}\left(x,\mu^{2}\right)$. The actual
value for $\mu^2$ will be given in the proceeding section in the
context of the evolution procedure. We observe that the structure
functions $g_{2}^{p}(x,\mu^{2})$ and $g_{2}^{WW(p)}(x,\mu^{2})$ are
well localized in the interval $0\le x\le1$, while for $g_1^{p}$ about
$0.3\%$ of the first moment,
$\Gamma_1^{p}=\int_{0}^{1} dx\ g_{1}^{p}(x,\mu^2)$
comes from the region $x > 1$.
The polarized structure function $g_1^{p}(x,\mu^2)$ exhibits a
pronounced maximum at $x\approx0.3$ which is smeared out when the
constituent quark mass increases. This can be understood as follows:
In our chiral soliton model the constituent mass serves as a coupling
constant of the quarks to the chiral field (see eqs. (\ref{bosact})
and (\ref{hamil})).
The valence quark
becomes more strongly bound as the constituent quark mass increases.
In this case the lower components of the valence quark
wave--function increase and relativistic effects
become more important resulting in a broadening of the maximum.
With regard to the Burkhardt--Cottingham sum rule the polarized
structure function $g_2^{p}(x,\mu^2)$ possesses a node. Apparently
this node appears at approximately the same value of the Bjorken
variable $x$ as the maximum of $g_1^{p}(x,\mu^2)$. Note also that
the distinct twist contributions to $g_2^{p}(x,\mu^2)$
by construction diverge as ${\rm ln}\left(x\right)$ as
$x\to0$, while their sum stays finite (see section 6 for details).
As the results displayed in figure 1 are the central issue of
our calculation it is of great interest to compare them with the
available data. As for all effective low--energy models of the
nucleon, the predicted results are at a lower scale $Q^2$ than
the experimental data. In order to carry out a sensible comparison
either the model results have to be evolved upward or the QCD
renormalization group equations have to be used to extract structure
functions at a low--renormalization point. For the combination
$xg_1(x)$ a parametrization of the empirical structure function
is available at a low scale \cite{Gl95}\footnote{These authors also
provide a low scale parametrization of quark distribution functions.
However, these refer to the distributions of perturbatively interacting
partons. Distributions for the NJL--model constituent quarks could
in principle be extracted from eqs. (\ref{g10})--(\ref{gt1}). It is
important to stress that these distributions may not be compared
to those of ref. \cite{Gl95} because the associated quark fields
are different in nature.}. In that study the experimental high $Q^2$
data are evolved to the low--renormalization point $\mu^2$, which is
defined as the lowest $Q^2$ satisfying the positivity constraint
between the polarized and unpolarized structure functions. In a
next--to--leading order calculation those authors found
$\mu^2=0.34{\rm GeV}^2$ \cite{Gl95}. In figure 2 we compare our
results for two different constituent quark masses with that
parametrization. We observe that our predictions reproduce gross
features like the position of the maximum. This agreement is the
more pronounced the lower the constituent quark mass is, {\it i.e.} the
agreement improves as the applicability of the valence quark
approximation becomes more justified. Unfortunately, such a
parametrization is currently not available for the transverse
structure function $g_T(x)$ (or $g_2(x)$). In order to nevertheless
be able to compare our corresponding results with the (few) available
data we will apply leading order evolution techniques to the structure
functions calculated in the valence quark approximation to the
NJL--soliton model. This will be subject of the following section.
\bigskip
\section{Projection and Evolution}
\bigskip
One notices that our baryon states are not momentum eigenstates, which
causes the structure functions (see figures 1 and 2) not to vanish exactly
for $x>1$, although the contributions from that region are very small. This
short--coming, common to the low--energy effective models, is due to the
localized field configuration, so that the nucleon does not furnish a
representation of the Poincar\'{e} group. The most feasible procedure to cure
this problem is to apply Jaffe's prescription \cite{Ja80},
\begin{eqnarray}
f(x)\longrightarrow \tilde f(x)=
\frac{1}{1-x}f\left(-{\rm log}(1-x)\right)
\label{proj}
\end{eqnarray}
to project any structure function $f(x)$ onto the interval
$[0,1]$. In view of the kinematic regime of DIS this
prescription, which was
derived in a Lorentz invariant fashion within the 1+1 dimensional
bag model, is a reasonable approximation. It is important to
note that in the NJL model the unprojected nucleon wave--function
(including the cranking piece\footnote{Which in fact yields the
leading order to the Adler sum rule,
$F_1^{\nu p}\ - F_1^{{\bar \nu}p}$ \cite{wei96} rather than being a
correction.}, see eq. (\ref{valrot})) is anything but a product of
Dirac--spinors. In this context, techniques such as
Peierls--Yoccoz\cite{Pei57} (which does not completely enforce
proper support \cite{Sig90}, $0\le x\le1$ nor restore Lorentz
invariance, see \cite{Ard93}) appear to be infeasible. Thus,
given the manner in which the nucleon
arises in chiral--soliton models Jaffe's projection
technique is quite well suited.
It is also important to note
that, by construction, sum rules are not affected by this
projection, {\it i.e.}
$\int_0^\infty dxf(x)=
\int_0^1 dx \tilde f(x)$. Accordingly the sum--rules of the
previous section remain intact.
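As a check of this property, the projection (\ref{proj}) is easily
implemented numerically; the following minimal Python sketch (with a toy
input shape rather than the model structure function) verifies that the
zeroth moment is unchanged:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def project(f):
    # Jaffe's prescription: map f, defined on [0, infinity),
    # onto a function with support on [0, 1]
    return lambda x: f(-np.log(1.0 - x)) / (1.0 - x)

# toy unprojected shape with a small tail beyond x = 1 (illustrative only)
f = lambda x: x * np.exp(-3.0 * x)
f_tilde = project(f)

m0_raw,  _ = quad(f, 0.0, np.inf)
m0_proj, _ = quad(f_tilde, 0.0, 1.0)
print("moment before projection:", m0_raw)
print("moment after  projection:", m0_proj)   # identical by construction
\end{verbatim}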
With regard to the evolution of the spin--polarized structure functions,
applying the OPE analysis of section 2 Jaffe and Ji showed
that to leading order in $1/Q^{2}$,
$g_1(x,Q^2)$ receives only a leading order twist--2
contribution, while $g_2(x,Q^2)$ possesses contributions
from both twist--2 and twist--3 operators;
the twist--3 portion coming from
spin--dependent gluonic--quark correlations \cite{Ja90},\cite{Ji90}
(see also, \cite{ko79} and \cite{sh82}).
In the {\em impulse approximation}
\cite{Ja90}, \cite{Ji90}
these leading contributions are given by
\begin{eqnarray}
\hspace{-2cm}
\lim_{Q^2\to\infty}
\int_{0}^{1} dx\ x^{n} g_{1}(x,Q^2)&=&\frac{1}{2}\sum _{i}\
{\cal O}_{2,i}^{n}\ \ ,\ \ n=0,2,4,\ldots\ ,
\label{ltc1} \\
\lim_{Q^2\to\infty}
\int_{0}^{1} dx\ x^{n}\ g_{2}(x,Q^2)&=&-\frac{n}{2\ (n+1)}
\sum_{i} \left\{ {\cal O}_{2,i}^{n}
-{\cal O}_{3,i}^{n} \right\},\ n=2,4,\ldots\ .
\label{ltc2}
\end{eqnarray}
Note that there is no sum rule for the first
moment, $\Gamma_{2}(Q^2)=\int_{0}^{1}\ dx g_{2}(x,Q^2)$, \cite{Ja90}.
Some time ago Wandzura and Wilczek \cite{wan77}
proposed that $g_2(x,Q^2)$ was given in terms of $g_1(x,Q^2)$,
\begin{eqnarray}
g_{2}^{WW}(x,Q^2)=-\ g_{1}(x,Q^2)+\ \int_{x}^{1}\frac{dy}{y}\ g_{1}(y,Q^2)
\label{ww}
\end{eqnarray}
which follows immediately from eqs. (\ref{ltc1}) and (\ref{ltc2})
by neglecting the twist--3 portion in the sum in
(\ref{ltc2}). One may reformulate
this argument to extract the twist--3 piece
\begin{eqnarray}
{\overline{g}}_{2}(x,Q^2)\ =\ g_{2}(x,Q^2)\ -\ g_{2}^{WW}(x,Q^2)\ ,
\end{eqnarray}
since,
\begin{eqnarray}
\int_{0}^{1} dx\ x^{n}\ {\overline{g}}_{2}(x,Q^2)=\frac{n}{2\ (n+1)}
\sum_{i} {\cal O}_{3,i}^{n}\ \ , \ n=2,4,\ \ldots \ .
\end{eqnarray}
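Numerically, the Wandzura--Wilczek piece is obtained from $g_1$ by a single
one--dimensional integral. A minimal sketch, with a purely illustrative toy
shape for $g_1$ rather than the model result, could read:
\begin{verbatim}
import numpy as np

# g2^WW(x) = -g1(x) + int_x^1 (dy/y) g1(y), evaluated on a grid.
x  = np.linspace(1e-3, 1.0, 2000)
g1 = np.sqrt(x) * (1.0 - x) ** 3        # toy shape, NOT the NJL prediction

def g2_ww(x, g1):
    integrand = g1 / x
    seg  = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)  # trapezoids
    tail = np.concatenate(([seg.sum()], seg.sum() - np.cumsum(seg)))
    return -g1 + tail                   # tail[i] = int_{x[i]}^1 dy g1(y)/y

g2ww = g2_ww(x, g1)
# Burkhardt-Cottingham-type check: first moment of g2^WW vanishes
print("int dx g2^WW =", np.trapz(g2ww, x))   # ~ 0 up to discretization
\end{verbatim}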
In the NJL model, as in the bag model, there are no explicit gluon
degrees of freedom; nevertheless, twist--3
contributions to $g_2(x,\mu^2)$ exist in both models. In contrast to the bag
model, where the bag boundary simulates the quark--gluon and
gluon--gluon correlations \cite{So94}, in the NJL model the
gluon degrees of freedom, having been ``integrated'' out,
leave correlations characterized by the four--point quark
coupling $G_{\rm NJL}$. This is the source of the twist--3
contribution to $g_2(x,\mu^2)$, which is shown in figure 1.
For $g_{1}\left(x,Q^2\right)$ and the twist--2 piece
$g_2^{WW}\left(x,Q^2\right)$
we apply the leading
order (in $\alpha_{QCD}(Q^2)$) Altarelli--Parisi
equations \cite{Al73} to evolve
the structure functions from the model
scale, $\mu^2$, to that
of the experiment $Q^2$, by iterating
\begin{eqnarray}
g(x,t+\delta{t})=g(x,t)\ +\ \delta t\frac{dg(x,t)}{dt}\ ,
\end{eqnarray}
where $t={\rm log}\left(Q^2/\Lambda_{QCD}^2\right)$.
The explicit expression for the evolution differential
equation is given by the convolution integral,
\begin{eqnarray}
\frac{d g(x,t)}{dt}&=&\frac{\alpha(t)}{2\pi}
g(x,t)\otimes P_{qq}(x)
\nonumber \\* \hspace{1cm}
&=&\frac{\alpha(t)}{2\pi}
C_{R}(F)\int^1_{x}\ \frac{dy}{y}P_{qq}\left(y\right)
g\left(\frac{x}{y},t\right)
\label{convl}
\end{eqnarray}
where the quantity
$P_{qq}\left(z\right)=\left(\frac{1+z^2}{1-z}\right)_{+}$
represents the probability for a quark to emit a gluon such that it
retains the fraction $z$ of its momentum. Furthermore
$C_{R}(F)=\frac{N_{c}^{2}-1}{2N_{c}}$ is the quark color Casimir
($=4/3$ for $N_c=3$ colors),
$\alpha_{QCD}=\frac{4\pi}{\beta\log\left(Q^2/ \Lambda^2\right)}$
and $\beta=11-\frac{2}{3}n_f$ for $n_f$ flavors.
Employing the ``+'' prescription \cite{Al94} yields
\begin{eqnarray}
\frac{d\ g(x,t)}{dt}&=&\frac{2C_{R}(F)}{9\ t}
\left\{\ \left(x + \frac{x^{2}}{2}+2\log(1-x)\right)g(x,t)
\right.
\nonumber \\*&& \hspace{1cm}
\left.
+\ \int^{1}_{x}\ dy
\left(\frac{1+y^2}{1-y}\right)
\left[\frac{1}{y}\ g\left(\frac{x}{y},t\right)-g(x,t)\right]\
\right\}\ .
\label{evol}
\end{eqnarray}
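For illustration, the iteration described above is straightforward to set up
numerically. The following sketch (which is not part of the model
calculation; the $x$--grid, the choice $n_f=3$, the value
$\Lambda_{QCD}=0.2\,{\rm GeV}$ and the toy input distribution are assumptions
made purely for demonstration) performs the Euler steps of eq.~(\ref{evol}):
\begin{verbatim}
import numpy as np

# Sketch of the leading-order non-singlet evolution, eq. (evol), iterated
# with the Euler step g(x,t+dt) = g(x,t) + dt*dg/dt.  The x-grid, n_f=3,
# Lambda_QCD = 0.2 GeV and the toy input distribution are illustrative
# assumptions only; they are not the model input of this paper.

C_R = 4.0 / 3.0              # quark color Casimir C_R(F)
LAMBDA2 = 0.2**2             # Lambda_QCD^2 in GeV^2 (assumed)

def dg_dt(x_grid, g, t, ny=400):
    """Right-hand side of eq. (evol) evaluated on a fixed x-grid."""
    out = np.empty_like(g)
    for i, x in enumerate(x_grid):
        # "virtual" (local) term
        local = (x + 0.5*x**2 + 2.0*np.log(1.0 - x)) * g[i]
        # real-emission integral over y in (x,1), midpoint rule
        h = (1.0 - x) / ny
        y = x + h*(np.arange(ny) + 0.5)
        gxy = np.interp(x / y, x_grid, g)            # g(x/y, t)
        local += h*np.sum((1.0 + y**2)/(1.0 - y) * (gxy/y - g[i]))
        out[i] = 2.0*C_R/(9.0*t) * local
    return out

def evolve(x_grid, g, mu2, Q2, steps=200):
    t, t1 = np.log(mu2/LAMBDA2), np.log(Q2/LAMBDA2)
    dt = (t1 - t)/steps
    for _ in range(steps):
        g = g + dt*dg_dt(x_grid, g, t)
        t += dt
    return g

x = np.linspace(1e-3, 1.0 - 1e-3, 200)
g1_model = x**0.5*(1.0 - x)**3                      # toy input (assumption)
g1_evolved = evolve(x, g1_model, mu2=0.4, Q2=3.0)   # scales as in the text
\end{verbatim}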
As discussed in section 2 the initial value for integrating the
differential equation is given by the scale $\mu^2$ at which the model is
defined. It should be emphasized that this scale essentially is a
new parameter of the model. For a given constituent quark mass we fit
$\mu^2$ to maximize the agreement of the predictions with the
experimental data for the previously calculated \cite{wei96} unpolarized
structure functions for (anti)neutrino--proton scattering:
$F_2^{\nu p}-F_2^{\overline{\nu} p}$. For the constituent quark mass
$m=400{\rm MeV}$ we have obtained $\mu^2\approx0.4{\rm GeV}^2$.
One certainly wonders whether for such a low scale the restriction to
first order in $\alpha_{QCD}$ is reliable. There are two answers. First,
the studies in this section aim at showing that the required evolution
indeed improves the agreement with the experimental data and, second,
in the bag model it has recently been shown \cite{St95} that a
second order evolution just increases $\mu^2$ without significantly
changing the evolved data. In figure 3 we compare the
unevolved, projected, structure function
$g_1^{p}\left(x,\mu^{2}\right)$ with the one
evolved from $\mu^{2}=0.4{\rm GeV}^2$ to $Q^2=3.0{\rm GeV}^2$.
Also shown are the data from the E143-collaboration at
SLAC\cite{slac95a}. Furthermore in
figure 3 we compare the projected, unevolved structure
function $g_2^{WW(p)}\left(x,\mu^{2}\right)$ as well as the one evolved
to $Q^2=5.0{\rm GeV}^2$ with the data from the recent E143-collaboration
at SLAC\cite{slac96}.
As expected we observe that the evolution enhances the structure
function at low $x$, thereby improving the agreement with the
experimental data. This shift of strength towards small $x$ is a general feature
of the projection and evolution process and presumably not very
sensitive to the prescription applied here. In particular, choosing
an alternative projection technique may easily be compensated by
an appropriate variation of the scale $\mu^2$.
While the evolution of the structure function
$g_{1}\left(x,Q^2\right)$ and the twist--2 piece
$g_2^{WW}\left(x,Q^2\right)$ from $\mu^2$ to $Q^2$ can be performed
straightforwardly using the ordinary Altarelli--Parisi equations
this is not the case with the twist--3 piece
${\overline{g}}_{2}(x,Q^2)$.
As the twist--3 quark and quark--gluon operators mix,
the number of independent operators contributing
to the twist--3 piece increases
with $n$, where $n$ refers to the $n^{\underline{\rm th}}$ moment\cite{sh82}.
We apply an approximation (see appendix B) suggested in \cite{Ali91},
where it is demonstrated
that in the $N_c\to \infty$ limit the quark
operators of twist--3 decouple from the evolution equation
for the quark--gluon operators of the same twist, resulting
in a unique evolution scheme.
This scheme is particularly suited for the NJL--chiral soliton model,
as the soliton picture for baryons is based on $N_c\rightarrow \infty$
arguments\footnote{This scheme has also been employed by Song\cite{So94}
in the Center of Mass bag model.}.
In figure 4 we compare the projected, unevolved structure function
${\overline{g}}_{2}^{p}(x,\mu^2)$ with the one evolved to $Q^2=5.0{\rm GeV}^2$
using the scheme suggested in \cite{Ali91}. In addition
we reconstruct $g_2^{p}\left(x,Q^2\right)$ at $Q^2=3.0{\rm GeV}^2$ from
$g_2^{WW(p)}\left(x,Q^2\right)$ and ${\overline{g}}_{2}(x,Q^2)$
and compare it with the recent SLAC data\cite{slac96}
for $g_2^{p}\left(x,Q^2\right)$. As is evident our model
calculation of $g_2^{p}\left(x,Q^2\right)$,
built up from its twist--2 and twist--3 pieces,
agrees reasonably well
with the data although the experimental errors are quite large.
\bigskip
\section{Summary and Outlook}
\bigskip
In this paper we have presented the calculation of the polarized
nucleon structure functions $g_1\left(x,Q^2\right)$ and
$g_2\left(x,Q^2\right)$ within a model which is
based on chiral symmetry and its spontaneous breaking. Specifically
we have employed the NJL chiral soliton model which reasonably
describes the static properties of the nucleon \cite{Al96},
\cite{Gok96}. In this model the
current operator is formally identical to the one in a
non--interacting relativistic quark model, while the quark
fields become functionals of the chiral soliton upon bosonization.
This feature enables one to calculate the hadronic tensor.
From this hadronic tensor we have then extracted the polarized
structure functions within the valence quark approximation. As the
explicit occupation of the valence quark level yields the
major contribution (about 90\%) to the associated static quantities
like the axial charge, this presumably is a justified approximation.
When cranking corrections are included this share may be reduced
depending on whether or not the full moment of inertia is substituted.
It needs to be stressed that in contrast to {\it e.g.} bag models
the nucleon wave--function arises as a collective excitation of
a non--perturbative meson field configuration. In particular, the
incorporation of chiral symmetry leads to the distinct feature that
the pion field cannot be treated perturbatively. Because of the
hedgehog structure of this field one starts with grand spin symmetric
quark wave--functions rather than direct products of spatial-- and
isospinors as in the bag model. On top of these grand spin
wave--functions one has to include cranking corrections to generate
states with the correct nucleon quantum numbers. Not only are these
corrections sizable but even more importantly one would not be able
to make any prediction on the flavor singlet combination of the
polarized structure functions without them. The structure functions
obtained in this manner are, of course, characterized by the scale
of the low--energy effective model. We have confirmed this issue by
obtaining a reasonable agreement of the model predictions for the
structure function $g_1$ of the proton with the low--renormalization
point parametrization of ref \cite{Gl95}. In general this scale of
the effective model essentially represents an intrinsic parameter of
a model. For the NJL--soliton model we have previously determined
this parameter from the behavior of the unpolarized structure
functions under the Altarelli--Parisi evolution \cite{wei96}. Applying
the same procedure to the polarized structure functions calculated in
the NJL model yields good agreement with the data extracted from
experiment, although the error bars on $g_1\left(x,Q^2\right)$ are
still sizable. In particular, the good agreement at low $x$ indicates
that to some extent gluonic effects are already incorporated in the
model. This can be understood by noting that the quark fields, which
enter our calculation, are constituent quarks. They differ from the
current quarks by a mesonic cloud which contains gluonic components.
Furthermore, the existence of gluonic effects in the model would not
be astonishing because we had already observed from the non--vanishing
twist--3 part of $g_2\left(x,Q^2\right)$, which in the OPE is
associated with the quark--gluon interaction, that the model contains
the main features allocated to the gluons.
There is a wide avenue for further studies in this model. Of course,
one would like to incorporate the effects of the polarized vacuum,
although one expects from the results on the static axial properties
that their direct contributions are negligible. It may be more
illuminating to include the strange quarks within the valence quark
approximation. This extension of the model seems to be demanded by
the analysis of the proton spin puzzle. Technically two changes will
occur. First, the collective matrix elements will be more complicated than
in eqs. (\ref{matz}) and (\ref{matx}) because the nucleon wave--functions
will be distorted $SU(3)$ $D$--functions in the presence of flavor
symmetry breaking \cite{Ya88,wei96a}. Furthermore the valence quark
wave--function (\ref{valrot}) will contain an additional correction
due to different non--strange and strange constituent quark masses
\cite{We92}. When these corrections are included direct information
will be obtained on the contributions of the strange quarks to polarized
nucleon structure functions. In particular the previously developed
generalization to three flavors \cite{We92} allows one to
consistently include the effects of flavor symmetry breaking.
\bigskip
\acknowledgements
This work is supported in part by the Deutsche
Forschungsgemeinschaft (DFG) under contract Re 856/2-2.
LG is grateful for helpful comments by G. R. Goldstein.
\bigskip
\bigskip
\section{Introduction}
After the observation of the top quark signal at the Tevatron,
the mechanism of the spontaneous electroweak symmetry breaking remains the
last untested property of the Standard Model ($\cal{SM}${}).
Although the recent global fits to the precision electroweak data from LEP,
SLC and Tevatron seem to indicate a preference to a light Higgs boson
$m_H=149^{+148}_{-82}$~GeV, $m_H<450$~GeV ($95\%C.L.$) \cite{LEPEWWG},
it is definitely premature to exclude the heavy Higgs scenario. The reason is
that a restrictive upper bound for $m_H$ is dominated by the result on
$A_{LR}$, which differs significantly from the $\cal{SM}${} predictions \cite{ALR}.
Without $A_{LR}$ the upper bound on $m_H$ becomes larger than 600~GeV \cite{ALR},
which is not far in the logarithmic scale from the value of the order of
1~TeV, where perturbation theory breaks down. In order to estimate the
region of applicability of the perturbation theory, the leading two--loop
${\cal O}(g^4 m_H^4/M_W^4)$ electroweak corrections have recently been under
intense study. In particular, the high energy weak--boson scattering
\cite{scattering}, corrections to the heavy Higgs line shape
\cite{GhinculovvanderBij}, corrections to the partial widths of the Higgs
boson
decay to pairs of fermions \cite{fermi_G,fermi_DKR} and intermediate vector
bosons \cite{vector_G,vector_FKKR} at two--loops have been calculated.
All these calculations resort at least partly to numerical methods. Even for
the two--loop renormalization constants in the Higgs scalar sector of the $\cal{SM}${}
\cite{GhinculovvanderBij,MDR} complete analytical expressions are not known.
In this paper we present our analytic results for these two--loop
renormalization constants, evaluated in the on--mass--shell renormalization
scheme in terms of transcendental functions $\zeta(3)$ and the maximal value
of the Clausen function $\mbox{Cl}(\pi/3)$.
\section{Lagrangian and renormalization}
The part of the $\cal{SM}${} Lagrangian describing the Higgs scalar sector of the $\cal{SM}${}
in terms of the bare quantities is given by:
\begin{eqnarray}
&&{\cal L} = \frac{1}{2}\partial_\mu H_0\partial^\mu H_0
+ \frac{1}{2}\partial_\mu z_0\partial^\mu z_0
+ \partial_\mu w^+_0\partial^\mu w^-_0
\nonumber\\
&& -\, \frac{{m_H^2}_0}{2v_0^2}\left(w_0^+w_0^- + \frac{1}{2}z_0^2
+ \frac{1}{2}H_0^2 + v_0 H_0 + \frac{1}{2}\delta v^2
\right)^2.
\label{lagrangian}
\end{eqnarray}
Here the tadpole counterterm $\delta v^2$ is chosen in such a way that
the Higgs field vacuum expectation value is equal to $v_0$
\begin{equation}
v_0 = \frac{2\, {M_W}_0}{g}
\end{equation}
to all orders.
Renormalized fields are given by
\begin{equation}
H_0 = \sqrt{Z_H}\,H, \quad z_0 = \sqrt{Z_z}\,z, \quad
w_0 = \sqrt{Z_w}\,w.
\end{equation}
At two--loop approximation the wave function renormalization constants,
tadpole and mass counterterms take the form
\begin{eqnarray}
\sqrt{Z} &=& 1 + \frac{g^2}{16\pi^2}\,\delta Z^{(1)}
+ \frac{g^4}{(16\pi^2)^2}\,\delta Z^{(2)};
\nonumber \\
\delta v^2 &=& \frac{1}{16\pi^2}\,{\delta v^2}^{(1)}
+ \frac{g^2}{(16\pi^2)^2}\, {\delta v^2}^{(2)};
\label{constants} \\
{M_W}_{0} &=& M_W + \frac{g^2}{16\pi^2}\, \delta M_W^{(1)}
+ \frac{g^4}{(16\pi^2)^2}\, \delta M_W^{(2)};
\nonumber \\
{m_H^2}_{0} &=& m_H^2 + \frac{g^2}{16\pi^2}\, {\delta m_H^2}^{(1)}
+ \frac{g^4}{(16\pi^2)^2}\, {\delta m_H^2}^{(2)}.
\nonumber
\end{eqnarray}
Since the weak coupling constant $g$ is not renormalized at leading order in
$m_H^2$, the $W$--boson mass counterterm is related to the Nambu--Goldstone
wave function renormalization constant by the Ward identity ${M_W}_0=Z_w M_W$.
In the on--mass--shell renormalization scheme all the counterterms are fixed
uniquely by the requirement that the pole position of the Higgs and
$W$--boson
propagators coincide with their physical masses and the residue of the
Higgs boson pole is normalized to unity.
The one--loop counterterms equivalent to those used in
\cite{GhinculovvanderBij,MDR} are given by
\begin{eqnarray}
{\delta v^2}^{(1)}&=&
m_H^{2}{\xi^\epsilon_H}\,\Biggl\{-{\frac {6}{\epsilon}}+3
-\, \epsilon \biggl(\frac{3}{2}+\frac{\pi^2}{8}\biggr)
\Biggr\};
\nonumber
\\
\frac{{\delta m_H^2}^{(1)}}{m_H^2}&=&
\frac{m_H^{2}{\xi^\epsilon_H}}{M_W^2}
\,\Biggl\{-{\frac {3}{\epsilon}}+3-{\frac {3\,\pi\,\sqrt {3}}{8}}
+\, \epsilon \biggl(-3+\frac{\pi^2}{16}+\frac{3\pi\sqrt{3}}{8}
+\frac{3\sqrt{3}C}{4}-\frac{3\pi\sqrt{3}\log{3}}{16}\biggr)
\Biggr\};
\nonumber
\\
\delta Z_H^{(1)}&=&{\frac {m_H^{2}{\xi^\epsilon_H}\,}{M_W^{2}}}
\Biggl\{\frac{3}{4}-{\frac {\pi\,\sqrt {3}}{8}}
+\, \epsilon \biggl(-\frac{3}{4}+\frac{3\pi\sqrt{3}}{32}
+\frac{\sqrt{3}C}{4}-\frac{\pi\sqrt{3}\log{3}}{16}\biggr)
\Biggr\};
\label{zh}
\\
\frac{\delta M_W^{(1)}}{M_W}&=&\delta Z_w^{(1)}=
-{\frac {m_H^{2}{\xi^\epsilon_H}}{16\,M_W^{2}}}\Biggl\{1
-\frac{3}{4}\epsilon\Biggr\}.
\nonumber
\end{eqnarray}
Here the dimension of space--time is taken to be $d=4+\epsilon$ and
\begin{equation}
\xi^\epsilon_H=e^{\gamma\,\epsilon/2}\,
\left(\frac{m_H}{2\,\pi}\right)^\epsilon.
\end{equation}
In contrast to papers \cite{GhinculovvanderBij,MDR} we prefer not to keep the
one--loop
counterterms of ${\cal O}(\epsilon)$ order, {\it i.e.} unlike in the
conventional on--mass--shell scheme used in \cite{GhinculovvanderBij,MDR},
we require that the one-loop normalization conditions are fulfilled only in
the limit $\epsilon\to 0$, where the counterterms of the order
${\cal O}(\epsilon)$ do not contribute. Such a modified on--mass--shell scheme
is just as consistent as the conventional one or
the standard scheme of minimal dimensional renormalization, which assumes
only the subtraction of pole terms at $\epsilon=0$ \cite{BR}.
(Moreover, in general one cannot subtract all the nonsingular
${\cal O}(\epsilon)$
terms in the Laurent expansion in $\epsilon$, as they are not polynomial
in external momenta.) The ${\cal O}(\epsilon)$ one--loop counterterms
considered in \cite{GhinculovvanderBij,MDR} can indeed combine with the
$1/\epsilon$ terms at two--loop order to give finite contributions, but these
contributions are completely canceled by the additional finite parts of the
two--loop counterterms, fixed through the renormalization conditions in the
on--mass--shell renormalization scheme.
The reason is that after the inclusion of the one--loop counterterms all the
subdivergences are canceled and only the overall divergence remains, which
is to be canceled by the two--loop counterterms. Taking into account the finite
contributions coming from the combination of the ${\cal O}(\epsilon)$ one--loop
counterterms with the $1/\epsilon$ overall divergence merely redefines the finite
parts of the two--loop counterterms. An obvious advantage of this modified
on--mass--shell scheme is that the lower loop counterterms, once calculated,
can be used directly in higher loop calculations, while in the conventional
on--mass--shell scheme for an $l$--loop calculation one needs to recalculate the
one-loop counterterms to include all the terms up to
${\cal O}(\epsilon^{l-1})$,
two-loop counterterms to include ${\cal O}(\epsilon^{l-2})$ terms and so on.
\section{Analytic integration}
The calculation of the Higgs and $W$--boson (or Nambu--Goldstone $w$, $z$
bosons) two--loop self energies, needed to evaluate
the renormalization constants (\ref{constants}), reduces to the evaluation
of the two--loop massive scalar integrals
\begin{eqnarray}
&&J(k^2;
n_1\, m_1^2,n_2\, m_2^2,n_3\, m_3^2,n_4\, m_4^2,n_5\, m_5^2)
= -\frac{1}{\pi^4}\int\,D^{(d)}P\,D^{(d)}Q \,\biggl(P^2-m_1^2\biggr)^{-n_1}\\
&& \times
\biggl((P+k)^2-m_2^2\biggr)^{-n_2}
\biggl((Q+k)^2-m_3^2\biggr)^{-n_3}\biggl(Q^2-m_4^2\biggr)^{-n_4}
\biggl((P-Q)^2-m_5^2\biggr)^{-n_5}
\nonumber
\end{eqnarray}
and their derivatives $J'$ at $k^2=m_H^2$ or at $k^2=0$.
The most difficult part is the calculation of the all--massive scalar master
integral corresponding to the topology shown in Fig.~1.
\vspace*{0.4cm}
\setlength{\unitlength}{1cm}
\begin{picture}(15,3)
\put(5,0){\epsfig{file=a.eps,height=3cm}}
\end{picture}
\begin{center}
\parbox{6in}{\small\baselineskip=12pt Fig.~1.
The two loop all--massive master graph. Solid line represents Higgs bosons.
}
\end{center}
\vspace*{0.4cm}
This integral has a discontinuity that is an elliptic integral, resulting from
integration over the phase space of three massive particles, and is not
expressible in terms of polylogarithms. However one can show
\cite{ScharfTausk} that on--shell $k^2=m_H^2$ or at the threshold
$k^2=9 m_H^2$ this is not the case. We use the dispersive method
\cite{Broadhurst,BaubergerBohm} to evaluate
this finite integral on the mass shell:
\begin{equation}
m_H^2\, J(k^2;m_H^2,m_H^2,m_H^2,m_H^2,m_H^2)=
\sigma_a(k^2/m_H^2)+\sigma_b(k^2/m_H^2),
\label{master}
\end{equation}
where $\sigma_{a,b}$ correspond to the dispersive integrals
calculated, respectively, from the two-- and three--particle discontinuities,
which are themselves reduced to one--dimensional integrals.
The $\tanh^{-1}$ functions entering $\sigma_{a,b}$ can be removed
by integrating by parts either in the dispersive integral \cite{Broadhurst},
or in the discontinuity integral \cite{BaubergerBohm}. By interchanging the
order of integrations the latter representation gives the three--particle cut
contribution $\sigma_b$ as a single integral of logarithmic functions
\cite{BaubergerBohm}. After some rather heavy analysis we obtain at
$k^2=m_H^2$:
\begin{eqnarray}
\sigma_a(1)&=&\int_0^1 dy \, \frac{8}{y^{4}-y^{2}+1}
\log \left({\frac {\left (y^{2}+1\right )^{2}}{y^{4}+y^{2}+1}}\right)
\left [{\frac {\left (y^{4}-1\right )\log (y)}{y}}
-{\frac {\pi\,y}{\sqrt {3}}}\right ] \nonumber \\
&=&{\frac {17}{18}}\,\zeta(3)-{\frac {10}{9}}\,\pi \,C+\pi ^{2}\,\log {2}
-{\frac {4}{9}}\,\pi ^{2}\,\log {3},
\\
\sigma_b(1)&=&\int_0^1 dy \, 2\,
\log \left({\frac {y^{2}+y+1}{y}}\right) \nonumber \\
&&\left [{\frac {\log (y+1)}{y}}
+{\frac {{\frac {\pi\,}{\sqrt {3}}}\left (y^{2}-3\,y+1\right )
-2\,\left (y^{2}-1\right )
\log (y)}{y^{4}+y^{2}+1}}-{\frac {\log (y)}{y+1}}\right ] \nonumber\\
&=&{\frac {1}{18}}\zeta(3)+{\frac {4}{9}}\,\pi \,C-\pi ^{2}\,\log {2}
+{\frac {4}{9}}\pi ^{2}\,\log {3}.
\end{eqnarray}
Here
$
C = \mbox{Cl}(\pi/3) = \mathop{\mathrm{Im}} \,\mbox{Li}_2\left(\exp\left(i\pi/3\right)\right)=
1.01494\: 16064\: 09653\: 62502\dots
$
As a result we find
\begin{eqnarray}
m_H^2\, J(m_H^2;m_H^2,m_H^2,m_H^2,m_H^2,m_H^2)&=&
\zeta(3)-{\frac {2}{3}}\,\pi\,C \nonumber \\
&=&-0.92363\: 18265\: 19866\: 53695 \dots
\label{HHHHH}
\end{eqnarray}
The numerical value is in agreement with the one calculated using the
momentum expansion \cite{DavydychevTausk}, and with the numerical values
given in \cite{MDR,Adrian}.
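The one--dimensional representations for $\sigma_{a,b}$ also allow for a quick
independent numerical check of (\ref{HHHHH}). The following sketch (a simple
cross--check with an off--the--shelf quadrature routine, not used in the actual
calculation) evaluates the integrals quoted above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# Numerical cross-check of the on-shell master integral (HHHHH):
# sigma_a(1) + sigma_b(1) should equal zeta(3) - (2/3)*pi*Cl(pi/3).
# Purely a verification sketch of the one-dimensional dispersive integrals.

C = 1.0149416064096536          # Cl(pi/3)
s3 = np.sqrt(3.0)

def integrand_a(y):
    pref = 8.0 / (y**4 - y**2 + 1.0)
    logf = np.log((y**2 + 1.0)**2 / (y**4 + y**2 + 1.0))
    brack = (y**4 - 1.0) * np.log(y) / y - np.pi * y / s3
    return pref * logf * brack

def integrand_b(y):
    logf = 2.0 * np.log((y**2 + y + 1.0) / y)
    brack = (np.log(y + 1.0) / y
             + (np.pi / s3 * (y**2 - 3.0*y + 1.0)
                - 2.0 * (y**2 - 1.0) * np.log(y)) / (y**4 + y**2 + 1.0)
             - np.log(y) / (y + 1.0))
    return logf * brack

sigma_a, _ = quad(integrand_a, 0.0, 1.0, limit=200)
sigma_b, _ = quad(integrand_b, 0.0, 1.0, limit=200)

closed_form = zeta(3) - 2.0 * np.pi * C / 3.0      # = -0.923631826...
print(sigma_a + sigma_b, closed_form)
\end{verbatim}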
Given the value (\ref{HHHHH}), the simplest way to calculate the derivative
of the integral (\ref{master}) is to use Kotikov's method of differential
equations \cite{Kotikov,MDR}
\begin{eqnarray}
m_H^4\, J'(m_H^2;m_H^2,m_H^2,m_H^2,m_H^2,m_H^2)&=&
{\frac {2}{3}}\,\pi\,C-\zeta(3)-{\frac {\pi^{2}}{9}}
\nonumber \\
&=&-0.17299\: 08847\: 12284\: 42069 \dots
\end{eqnarray}
All the other two--loop self energy scalar integrals contain ``light''
particles (Nambu--Goldstone or $W$, $Z$--bosons) and some of them are
IR divergent in the limit $M_{W,Z}\to 0$. In principle, one can
calculate these IR divergent integrals in Landau gauge, where masses
of Nambu--Goldstone bosons are equal to zero and IR divergences are
represented as (double) poles at $\epsilon=0$ \cite{MDR}. However, in order to
have an additional check of the cancellation in the final answer of all the
IR divergent $\log(M_{W,Z}^2)$--terms, we work in 't~Hooft--Feynman gauge.
For the infra--red finite integrals the correct answer in the leading order in
$m_H^2$ is obtained just by
setting $M_{W,Z}=0$. We agree with the results for these integrals given in
\cite{MDR,ScharfTausk}. The two--loop IR divergent integrals correspond
to the topologies shown in the Fig.~2, which contain ``massless''
propagators squared.
\vspace*{0.4cm}
\setlength{\unitlength}{1cm}
\begin{picture}(15,3)
\put(1,0){\epsfig{file=b.eps,height=3cm}}
\put(10,0){\epsfig{file=c.eps,height=3cm}}
\end{picture}
\begin{center}
\parbox{6in}{\small\baselineskip=12pt Fig.~2.
The two--loop IR divergent graphs. Dashed line represents ``light''
particles.}
\end{center}
\vspace*{0.4cm}
The relevant technique to handle these diagrams follows from the so--called
asymptotic operation method \cite{Tkachov}.
According to the recipe of the As--operation, the formal Taylor
expansion in the small mass $M_{W}$ entering a propagator (or its powers) should
be accompanied by adding terms containing the
$\delta$--function or its derivatives. These additional terms counterbalance
the infra--red singularities arising in the formal expansion of the propagator.
In our case we have
\begin{eqnarray}
\frac{1}{(P^{2} - M_{W}^{2})^{2}} & = & \frac{1}{(P^{2})^{2}}
+ 2\frac{M_{W}^{2}}{(P^{2})^{3}} + ... \nonumber \\
&+& C_{1}(M_{W})\delta^{(d)}(P) + C_{2}(M_{W})\partial^{2}\delta^{(d)}(P)+ ...
\end{eqnarray}
Here the first coefficient functions $C_{i}(M_{W})$ read
\begin{eqnarray}
C_{1}(M_{W}) & = & \int D^{(d)}P \frac{1}{(P^{2} - M_{W}^{2})^{2}}
\sim {\cal O}(M_{W}^{0}), \\
C_{2}(M_{W}) & = & \frac{1}{2d}\int D^{(d)}P
\frac{P^{2}}{(P^{2} - M_{W}^{2})^{2}}
\sim {\cal O}(M_{W}^{2}). \nonumber
\end{eqnarray}
This equality is to be understood in the following sense \cite{Tkachov}.
One should integrate both parts of the equation multiplied by a test function
and then take the limit $d \rightarrow 4$. If one keeps in the expansion all
terms up to order $M_{W}^{2 n}$, the resulting expression will represent a
correct expansion of the initial integral to order $o(M_{W}^{2 n})$.
To obtain the leading contribution to the diagrams in Fig.~2,
it suffices just to take the first term of the Taylor expansion
and, correspondingly, the first ``counterterm'' $C_{1}\delta^{(d)} (P)$.
Finally, the combination of Mellin--Barnes representation and Kotikov's
method gives the following answer, corresponding to the first graph in
Fig.~2 neglecting the terms of order ${\cal O}(M_W^2/m_H^2)$:
\begin{eqnarray}
m_H^2\, J(m_H^2;2\,M_W^2,M_W^2,0,M_W^2,m_H^2)&=&
{\xi^{2\epsilon}_H}\Biggl ({\frac {2\,i\,\pi}{\epsilon}}-{\frac {i\,\pi}{2}}
+\frac{2}{\epsilon}\,\log (\frac{M_W^2}{m_H^2})
\\
&&-\frac{1}{2}+{\frac {5\,\pi^{2}}{6}}
-\log (\frac{M_W^2}{m_H^2})
+\frac{1}{2}\log^{2} (\frac{M_W^2}{m_H^2})\Biggr );
\nonumber \\
m_H^4\, J'(m_H^2;2\,M_W^2,M_W^2,0,M_W^2,m_H^2)&=&
{\xi^{2\epsilon}_H}\Biggl (-{\frac {2\,i\,\pi}{\epsilon}}+2\,i\,\pi
-\frac{2}{\epsilon}\left(1\,+\,\log (\frac{M_W^2}{m_H^2})\right) \\
&&+2-{\frac {5\,\pi^{2}}{6}}
+\log (\frac{M_W^2}{m_H^2})
-\frac{1}{2}\log^{2} (\frac{M_W^2}{m_H^2})\Biggr ).
\nonumber
\end{eqnarray}
The integral diverges as $1/\epsilon$, while if we set $M_W=0$ from the
very beginning it would diverge as $1/\epsilon^2$. The integral corresponding
to the second graph in the Fig.~2 up to ${\cal O}(M_W^2/m_H^2)$ is
\begin{eqnarray}
J(m_H^2;2\,M_W^2,0,0,M_W^2,m_H^2)&=&
{\xi^{2\epsilon}_H}\Biggl ({\frac {2}{\epsilon^{2}}}
+\frac{1}{\epsilon}\left(2\,\log (\frac{M_W^2}{m_H^2})-1\right) \\
&&+\frac{1}{2}-{\frac {\pi^{2}}{12}}
-\log (\frac{M_W^2}{m_H^2})
+\frac{1}{2}\log^{2} (\frac{M_W^2}{m_H^2})\Biggr) \nonumber
\end{eqnarray}
The two--loop vacuum integrals needed to evaluate the tadpole counterterm
${\delta v^2}^{(2)}$ have been calculated in \cite{vanderBijVeltman}.
\section{Results}
The analytic results for the two--loop renormalization constants are:
\begin{eqnarray}
\delta {v^2}^{(2)}&=&
\frac{m_H^{4}{\xi^{2\epsilon}_H}}{16\,M_W^2}
\left (
{\frac {72}{\epsilon^{2}}}
+{\frac {36\,\pi\,\sqrt {3}-84}{\epsilon}}
-162-3\,\pi^{2}+60\,\sqrt {3}C
\right );
\label{dv2}
\\
\frac{{\delta m_H^2}^{(2)}}{m_H^2}&=&
\frac{m_H^{4}{\xi^{2\epsilon}_H}}{64\,M_W^4}\Biggl (
{\frac {576}{\epsilon^{2}}}
+\frac {144\,\pi\,\sqrt {3}-1014}{\epsilon}
\nonumber \\
&&+{\frac {99}{2}}-252\,\zeta(3)
+87\,\pi^{2}
-219\,\pi\,\sqrt {3}
\nonumber \\
&&+156\,\pi\,C
+204\,\sqrt {3}C
\Biggr);
\label{mh2}
\\
\delta Z_H^{(2)}&=&\frac{m_H^{4}{\xi^{2\epsilon}_H}}{64\,M_W^4}
\Biggl ( {\frac {3}{\epsilon}} - {\frac {75}{4}} - 126\,\zeta(3)
\nonumber \\
&&+{\frac {25\,\pi^{2}}{2}}-76\,\pi\,\sqrt {3}+78\,\pi\,C+108\,\sqrt {3}C
\Biggr );
\label{zh2}
\\
\frac{\delta M_W^{(2)}}{M_W}&=&\delta Z_w^{(2)}=
\frac{m_H^{4}{\xi^{2\epsilon}_H}}{64\,M_W^4}\left (
{\frac {3}{\epsilon}}-\frac{3}{8}-{\frac {\pi^{2}}{6}}+{\frac {3\,
\pi\,\sqrt {3}}{2}}-6\,\sqrt {3}C
\right ).
\label{zw2}
\end{eqnarray}
For comparison we have also calculated these counterterms following the
renormalization scheme \cite{GhinculovvanderBij,fermi_G} and keeping the
${\cal O}(\epsilon)$ terms and found complete agreement with their (partly
numerical) results. In this scheme the renormalization constants
(\ref{dv2}), (\ref{mh2}) look a bit more complicated due to the presence of
the additional $\pi \sqrt{3}\log{3}$ terms
\begin{eqnarray}
\delta {v^2}^{(2)}&=&
\frac{m_H^{4}{\xi^{2\epsilon}_H}}{16\,M_W^2}
\Biggl (
{\frac {72}{\epsilon^{2}}}
+{\frac {36\,\pi\,\sqrt {3}-84}{\epsilon}}
+90-12\,\pi^{2}-12\,\sqrt {3}C
\\
&&-36\,\pi\,\sqrt{3}+18\,\pi\,\sqrt{3}\,\log{3}
\Biggr );
\label{dv2_e}\nonumber
\\
\frac{{\delta m_H^2}^{(2)}}{m_H^2}&=&
\frac{m_H^{4}{\xi^{2\epsilon}_H}}{64\,M_W^4}\Biggl (
{\frac {576}{\epsilon^{2}}}
+\frac {144\,\pi\,\sqrt {3}-1014}{\epsilon}
\nonumber \\
&&+{\frac {2439}{2}}-252\,\zeta(3)
+63\,\pi^{2}
-363\,\pi\,\sqrt {3}
\nonumber \\
&&+156\,\pi\,C
-84\,\sqrt {3}C+72\,\pi\sqrt{3}\,\log{3}
\Biggr).
\label{mh2_e}
\end{eqnarray}
The wave function renormalization constants $Z_{H,w}$ are identical in these
two schemes.
As an example of a physical quantity for which all the schemes should give
the same result, we consider the two--loop heavy Higgs correction to the
fermionic Higgs width \cite{fermi_G,fermi_DKR}. The correction is given by the
ratio
\begin{eqnarray}
\frac{Z_H}{{M_W^2}_0/M_W^2}&=&
1 + 2\frac{g^2}{16\,\pi^2}
\left(\delta Z_H^{(1)}-\frac{\delta M_W^{(1)}}{M_W} \right)
\nonumber \\
&&+ \frac{g^4}{(16\,\pi^2)^2}\Biggl[
2\,\frac{\delta M_W^{(1)}}{M_W}\,
\left (\frac{\delta M_W^{(1)}}{M_W}-\delta Z_H^{(1)}\right )
+\left (\delta Z_H^{(1)}-\frac{\delta M_W^{(1)}}{M_W}\right )^{2}
\nonumber \\
&&+2\,\delta Z_H^{(2)}-2\,\frac{\delta M_W^{(2)}}{M_W}
\Biggr].
\end{eqnarray}
Substituting (\ref{zh}), (\ref{zh2})--(\ref{zw2}) we find
\begin{eqnarray}
&&\frac{Z_H}{{M_W^2}_0/M_W^2}=
1\, +\, \frac{1}{8}
\frac{g^2}{16\, \pi^2}\frac{m_H^2}{M_W^2}\left( 13 - 2 \pi \sqrt{3}\right)
\\
&&+\, \frac{1}{16}\biggl(\frac{g^2}{16\, \pi^2}\frac{m_H^2}{ M_W^2}\biggr)^2
\left(3-63\,\zeta(3)-{\frac {169\,\pi\,\sqrt {3}}{4}}
+{\frac {85\,\pi^{2}}{12}}+39\,\pi\,C+57\,\sqrt {3}\,C\right).
\nonumber
\end{eqnarray}
Again, we find complete agreement with the numerical result \cite{fermi_G}
and exact agreement with the result \cite{fermi_DKR}, taking into account
that their numeric constant $K_5$ is just minus our integral (\ref{HHHHH}).
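For convenience we also quote the numerical size of the coefficients in the
last equation; the trivial evaluation below (not part of the derivation) gives
approximately $0.26$ for the one--loop and $-0.51$ for the two--loop
coefficient:
\begin{verbatim}
import math

# Numerical size of the heavy-Higgs correction coefficients quoted above.
zeta3 = 1.2020569031595943
C = 1.0149416064096536            # Cl(pi/3)
pi, s3 = math.pi, math.sqrt(3.0)

c1 = (13.0 - 2.0*pi*s3) / 8.0                              # ~ 0.26
c2 = (3.0 - 63.0*zeta3 - 169.0*pi*s3/4.0
      + 85.0*pi**2/12.0 + 39.0*pi*C + 57.0*s3*C) / 16.0    # ~ -0.51
print(c1, c2)
\end{verbatim}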
\section*{Acknowledgments}
G.J. is grateful to J.J.~van~der~Bij and A.~Ghinculov for valuable
discussions. This work was supported in part by the Alexander von Humboldt
Foundation and the Russian Foundation for Basic Research grant 96-02-19-464.
\section*{Introduction}
Let $Y$ be a connected simplicial complex. Suppose that
$\pi$ acts freely and simplicially on $Y$ so that $X=Y/\pi$ is a finite
simplicial complex. Let ${\mathcal F}$ a finite subcomplex of $Y$, which
is a fundamental domain
for the action of $\pi$ on $Y$.
We assume that $\pi$ is amenable. The F\o{}lner criterion for amenability
of $\pi$ enables one to get, cf. \cite{Ad}, a {\em regular exhaustion}
$\big\{Y_{m}\big\}^{\infty}_
{m=1}$, that is a sequence of finite subcomplexes of $Y$
such that
(1) $Y_{m}$ consists of $N_{m}$ translates $g.{\mathcal F}$ of
${\mathcal F}$ for
$g\in\pi$.
(2) $\displaystyle Y=\bigcup^{\infty}_{m=1}Y_{m}\;$.
(3) If $\dot{N}_{m,\delta}$ denotes the number of translates of
${\mathcal F}$
which have distance (with respect to the word metric in $\pi$) less than or
equal to $\delta$ from a translate of ${\mathcal F}$ having a
non-empty intersection with the topological boundary $\partial{Y}_{m}$ of
$Y_{m}$ (we identify here $g.{\mathcal F}$ with
$g$)
then, for every $\delta > 0$,
$$
\lim_{m\rightarrow\infty}\;\frac{{\dot{N}}_{m,\delta}}{N_{m}}=0.
$$
One of our main results is
\begin{theorem}[Amenable Approximation Theorem]
$\;$Let $Y$ be a connected simplicial complex.
Suppose that $\pi$ is amenable and that $\pi$ acts freely and simplicially
on $Y$ so that $X=Y/\pi$ is a finite simplicial complex.
Let $\big\{Y_{m}\big\}^{\infty}_
{m=1}$ be a regular exhaustion of $Y$. Then
$$
\lim_{m\rightarrow\infty}\;\frac{b^{j}(Y_{m})}{N_{m}}=b_{(2)}^{j}(Y:\pi)
\;\;\mbox{ for all }\;\;j\ge 0.
$$
$$
\lim_{m\rightarrow\infty}\;\frac{b^{j}(Y_{m},
\partial Y_{m})}{N_{m}}=b_{(2)}^{j}(Y:\pi)
\;\;\mbox{ for all }\;\;j\ge 0.
$$
\end{theorem}
Here $b^{j}(Y_{m})$ denotes the $j^{th}$ Betti number of $Y_m$,
$b^{j}(Y_{m}, \partial Y_{m})$ denotes the $j^{th}$ Betti number of
$Y_m$ relative its boundary $\partial Y_m$ and $b_{(2)}^{j}(Y:\pi)$
denotes the $j$th $L^2$ Betti number of $Y$.
(See the next section for the definition
of the $L^2$ Betti numbers of a manifold)
\noindent{\bf Remarks.} This theorem proves the main conjecture in the
introduction of
an earlier paper \cite{DM}. The combinatorial techniques of this paper
contrast
with the heat kernel approach used in \cite{DM}.
Under the assumption dim $H^{k}(Y)<\infty$, a special case of the
Amenable Approximation Theorem above
is obtained by combining proofs of
Eckmann \cite{Ec} and Cheeger-Gromov \cite{CG}.
The assumption dim $H^{k}(Y)<\infty$ is very restrictive and essentially
says
that $Y$ is a fiber bundle over a $B\pi$ with fiber a space with
finite fundamental group. Cheeger-Gromov use this to
show that the Euler
characteristic of a finite $B\pi$, where $\pi$ contains an infinite
amenable normal subgroup, is zero. Eckmann proved the same result
in the special
case when $\pi$ itself is an infinite amenable group.
There is a standing conjecture that any normal
covering space of a finite simplicial complex is of determinant class
(cf. section 4 for the definition of determinant class and for a
more detailed discussion of what follows).
Let $M$ be a smooth compact manifold, and $X$ a triangulation of $M$.
Let $\widetilde M$ be any normal covering space of $M$, and $Y$ be the
triangulation of $\widetilde M$ which projects down to $X$.
Then on $\widetilde M$, there are two notions of determinant class,
one analytic and the other combinatorial. Using results of
Efremov \cite{E}, Gromov and Shubin
\cite{GS}, one observes as in \cite{BFKM} that the
combinatorial and analytic
notions of determinant class coincide.
It was proved in \cite{BFK}, using estimates of L\"uck \cite{L}, that any
{\em residually finite} normal
covering space of a finite simplicial complex is of determinant class, which
gave evidence supporting the conjecture.
Our interest in this conjecture stems from work on $L^2$
torsion \cite{CFM}, \cite{BFKM}. The $L^2$ torsion is a well defined
element in the determinant line of the reduced $L^2$ cohomology, whenever
the covering space is of determinant class. Our next main theorem
says that any {\em amenable} normal
covering space of a finite simplicial complex is of determinant class,
which gives further evidence supporting the conjecture.
\begin{theorem}[Determinant Class Theorem]
$\;$Let $Y$ be a connected simplicial complex.
Suppose that $\pi$ is amenable and that $\pi$ acts freely and simplicially
on $Y$ so that $X=Y/\pi$ is a finite simplicial complex. Then $Y$
is of determinant class.
\end{theorem}
The paper is organized as follows. In the first section, some
preliminaries on $L^2$
cohomology and amenable groups are
presented. In section 2, the main abstract approximation theorem is
proved. We
essentially use the combinatorial analogue of the principle of not feeling
the boundary (cf. \cite{DM}) in Lemma 2.3 and a finite dimensional result
in \cite{L}, to prove this theorem.
Section 3 contains the proof of the Amenable Approximation Theorem and
some related
approximation theorems.
In section 4, we prove that an arbitrary {\em amenable} normal covering
space of a finite simplicial complex is of determinant class.
The second author warmly thanks Shmuel Weinberger for some useful
discussions. This paper has been inspired by L\"uck's work \cite{L} on
residually finite groups.
\section{Preliminaries}
Let $\pi$ be a finitely generated discrete group and ${\mathcal U}(\pi)$
be the von Neumann algebra
generated by the action of $\pi$ on $\ell^{2}(\pi)$ via the left regular
representation. It is the weak (or strong) closure of the complex group
algebra of
$\pi$, ${\mathbb C}(\pi)$ acting on $\ell^2(\pi)$ by left translation.
The left regular representation is then a unitary representation
$\rho:\pi\rightarrow{\mathcal U}(\pi)$. Let ${\text{Tr}}_{{\mathcal U}(\pi)}$
be the faithful normal trace on
${\mathcal U}(\pi)$ defined by the inner product
${\text{Tr}}_{{\mathcal U}(\pi)}(A) \equiv
(A\delta_e,\delta_e)$ for
$A\in{\mathcal U}(\pi)$ and where $\delta_e\in\ell^{2}(\pi)$ is given by
$\delta_e(e)=1$ and
$\delta_e(g)=0$ for $g\in\pi$ and $g\neq e$.
Let $Y$ be a simplicial complex, and $|Y|_j$ denote the set of all
$j$-simplices
in $Y$. Regarding the orientation of simplices, we use the following
convention. For
each $j$-simplex $\sigma \in |Y|_j$, we identify $\sigma$ with any other
$j$-simplex
which is obtained by an {\em even} permutation of the vertices in $\sigma$,
whereas
we identify $-\sigma$ with any other $j$-simplex
which is obtained by an {\em odd} permutation of the vertices in $\sigma$.
Suppose that
$\pi$ acts freely and simplicially on $Y$ so that $X=Y/\pi$ is a finite
simplicial complex. Let ${\mathcal F}$ be a finite subcomplex of $Y$, which
is a fundamental domain
for the action of $\pi$ on $Y$. Consider the Hilbert space of square
summable cochains on $Y$,
$$
C^j_{(2)}(Y) = \Big\{f\in C^j(Y): \sum_{\sigma\; a\; j-simplex}|f(\sigma)|^2
<\infty \Big\}
$$
Since $\pi$ acts freely on $Y$, we see that there is an isomorphism of
Hilbert $\ell^2(\pi)$ modules,
$$
C^j_{(2)}(Y) \cong C^j(X)\otimes\ell^2(\pi)
$$
Here $\pi$ acts trivially on $C^j(X)$
and via the left regular representation on $\ell^2(\pi)$. Let
$$
d_{j}:C^{j}_{(2)}(Y)\rightarrow C^{j+1}_{(2)}(Y)
$$
denote the coboundary operator. It is clearly a bounded linear operator.
Then the (reduced) $L^2$ cohomology groups of $Y$
are defined to be
$$
H^j_{(2)}(Y) = \frac{\mbox{ker}(d_j)}{\overline{\mbox{im}(d_{j-1})}}.
$$
Let ${d_j}^*$ denote the Hilbert space adjoint of
$d_{j}$. One defines the combinatorial Laplacian
$\Delta_{j} : C^{j}_{(2)}(Y) \rightarrow C^{j}_{(2)}(Y)$ as
$\Delta_j = d_{j-1}(d_{j-1})^{*}+(d_{j})^{*}d_{j}$.
By the Hodge decomposition theorem in this context, there is an isomorphism
of Hilbert $\ell^2(\pi)$ modules,
$$
H^j_{(2)}(Y)\;\; \cong\;\; \mbox{ker} (\Delta_j).
$$
Let $P_j: C^{j}_{(2)}(Y)\rightarrow \mbox{ker} \Delta_j$ denote the
orthogonal projection to the kernel of the Laplacian. Then the $L^2$
Betti numbers $b_{(2)}^j(Y:\pi)$ are defined as
$$
b_{(2)}^j(Y:\pi) = {\text{Tr}}_{{\mathcal U}(\pi)}(P_j).
$$
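As a simple illustration, which will not be needed in the sequel, consider
$Y={\mathbb R}$ triangulated with vertices at the integers and $\pi={\mathbb Z}$
acting by translation, so that $X=Y/\pi$ is a circle with one vertex and one
edge. Then $C^0_{(2)}(Y)\cong\ell^2({\mathbb Z})$ and
$$
(\Delta_0 f)(n) = 2f(n)-f(n+1)-f(n-1),
$$
which under the Fourier transform $\ell^2({\mathbb Z})\cong L^2(S^1)$ becomes
multiplication by $2-2\cos\theta$. Its kernel in $\ell^2({\mathbb Z})$ is
trivial, so $b_{(2)}^0(Y:{\mathbb Z})=0$, even though every finite connected
subcomplex of $Y$ has $b^0=1$.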
In addition, let $\Delta_j^{(m)}$
denote the
Laplacian on the finite dimensional cochain space $C^j(Y_m)$ or
$C^j(Y_m,\partial Y_m)$. We do use the same notation for the two Laplacians
since all proofs work equally well for either case.
Let $D_j(\sigma, \tau) = \left< \Delta_j \delta_\sigma, \delta_\tau\right>$
denote the matrix coefficients of the Laplacian $\Delta_j$ and
${D_j^{(m)}}(\sigma, \tau) = \left< \Delta_j^{(m)} \delta_\sigma,
\delta_\tau\right>$
denote the matrix coefficients of the Laplacian $\Delta_j^{(m)}$. Let
$d(\sigma,\tau)$ denote the
{\em distance} between $\sigma$ and $\tau$ in the natural simplicial
metric on $Y$, and
$d_m(\sigma,\tau)$ denote the
{\em distance} between $\sigma$ and $\tau$ in the natural simplicial
metric on $Y_m$. This distance (cf. \cite{Elek}) is defined as follows.
Simplexes $\sigma$ and $\tau$ are one step apart, $d(\sigma,\tau)=1$, if
they have equal dimensions,
$\dim \sigma = \dim \tau =j$, and there exists either a simplex of
dimension $j-1$ contained in both $\sigma$ and $\tau$ or a simplex of
dimension $j+1$ containing both $\sigma$ and $\tau$. The distance between
$\sigma$ and $\tau$ is equal to $k$
if there exists a finite sequence $\sigma = \sigma_0, \sigma_1, \ldots ,
\sigma_k = \tau$, $d(\sigma_i,\sigma_{i+1})=1$ for $i=0,\ldots,k-1$, and
$k$ is the minimal length of such a sequence.
Then one has the following, which is an easy
generalization of Lemma 2.5 in \cite{Elek} and follows immediately from
the definition of combinatorial Laplacians and finiteness of the
complex $X=Y/\pi$.
\begin{lemma} $D_j(\sigma, \tau) = 0$ whenever $d(\sigma,\tau)>1$ and
$ {D_j^{(m)}}(\sigma, \tau) = 0$ whenever $d_m(\sigma,\tau)>1$.
There is also a positive constant $C$ independent of $\sigma,\tau$ such that
$|D_j(\sigma, \tau)|\le C$ and $|{D_j^{(m)}}(\sigma, \tau)|\le C$.
\end{lemma}
Let $D_j^k(\sigma, \tau) = \left< \Delta_j^k \delta_\sigma,
\delta_\tau\right>$
denote the matrix coefficient of the $k$-th power of the Laplacian,
$\Delta_j^k$, and
$D_j^{(m)k}(\sigma, \tau) = \left< \left(\Delta_j^{(m)}\right)^k
\delta_\sigma, \delta_\tau\right>$
denote the matrix coefficient of the $k$-th power of the Laplacian,
$\Delta_j^{(m)k}$. Then
$$D_j^k(\sigma, \tau) = \sum_{\sigma_1,\ldots\sigma_{k-1} \in |Y|_j}
D_j(\sigma, \sigma_1)\ldots D_j(\sigma_{k-1}, \tau)$$
and
$$D_j^{(m)k}(\sigma, \tau) = \sum_{\sigma_1,\ldots\sigma_{k-1} \in |Y_m|_j}
{D_j^{(m)}}(\sigma, \sigma_1)\ldots {D_j^{(m)}}(\sigma_{k-1}, \tau).$$
Then the following lemma follows easily from Lemma 1.1.
\begin{lemma} Let $k$ be a positive integer. Then $D_j^k(\sigma, \tau) = 0$
whenever $d(\sigma,\tau)>k$ and
$D_j^{(m)k}(\sigma, \tau) = 0$ whenever $d_m(\sigma,\tau)>k$.
There is also a positive constant $C$ independent of $\sigma,\tau$ such that
$|D_j^k(\sigma, \tau)|\le C^k$ and $|D_j^{(m)k}(\sigma, \tau)|\le C^k$.
\end{lemma}
Since $\pi$ commutes with the Laplacian $\Delta_j^k$, it follows that
\begin{equation}\label{inv}
D_j^k(\gamma\sigma, \gamma\tau) = D_j^k(\sigma, \tau)
\end{equation}
for all $\gamma\in \pi$ and for all $\sigma, \tau \in |Y|_j$. The
{\em von Neumann
trace} of $\Delta_j^k$ is by definition
\begin{equation} \label{vNt}
{\text{Tr}}_{{\mathcal U}(\pi)}(\Delta_j^k) =
\sum_{\sigma\in |X|_j} D_j^k(\tilde{\sigma}, \tilde{\sigma}),
\end{equation}
where $\tilde{\sigma}$ denotes an arbitrarily chosen lift of $\sigma$ to
$Y$. The trace is well-defined in view of (\ref{inv}).
\subsection{Amenable groups}
Let $d_1$ be the word metric on $\pi$. Recall the following
characterization of amenability due
to F\o{}lner, see also \cite{Ad}.
\begin{definition} A discrete group $\pi$ is said to be {\em amenable}
if there is a sequence
of finite subsets $\big\{\Lambda_{k}\big\}^{\infty}_{k=1}$ such that for
any fixed $\delta>0$
$$
\lim_{k\rightarrow\infty}\;\frac{\#\{\partial_{\delta}\Lambda_{k}\}}{\#
\{\Lambda_{k}\}}=0
$$
where $\partial_{\delta}\Lambda_{k}=
\{\gamma\in\pi:d_1(\gamma,\Lambda_{k})<\delta$
and $d_1(\gamma,\pi-\Lambda_{k})<\delta\}$ is a $\delta$-neighborhood of
the boundary of $\Lambda_{k}$. Such a sequence
$\big\{\Lambda_{k}\big\}^{\infty}_
{k=1}$ is called a {\em regular sequence} in $\pi$. If in addition
$\Lambda_{k}\subset\Lambda_{k+1}$ for all $k\geq 1$ and
$\displaystyle\bigcup^
{\infty}_{k=1}\Lambda_{k}=\pi$, then the sequence
$\big\{\Lambda_{k}\big\}^{\infty}_
{k=1}$ is called a {\em regular exhaustion} in $\pi$.
\end{definition}
Examples of amenable groups are:
\begin{itemize}
\item[(1)]Finite groups;
\item[(2)] Abelian groups;
\item[(3)] nilpotent groups and solvable groups;
\item[(4)] groups of subexponential growth;
\item[(5)] subgroups, quotient groups and extensions of amenable groups;
\item[(6)] the union of an increasing family of amenable groups.
\end{itemize}
Free groups and fundamental groups of closed negatively curved manifolds are
{\em not} amenable.
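As a concrete illustration of the definition, for $\pi={\mathbb Z}$ (with the
word metric associated to the generator $1$) one may take
$\Lambda_{k}=\{-k,\ldots,k\}$; then
$\partial_{\delta}\Lambda_{k}\subset\{m\in{\mathbb Z}: k-\delta<|m|<k+\delta\}$,
so that
$$
\frac{\#\{\partial_{\delta}\Lambda_{k}\}}{\#\{\Lambda_{k}\}}
\leq\frac{4\delta+2}{2k+1}\longrightarrow 0
\;\;\mbox{ as }\;\; k\rightarrow\infty,
$$
and $\big\{\Lambda_{k}\big\}^{\infty}_{k=1}$ is a regular exhaustion in
${\mathbb Z}$.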
Let $\pi$ be a finitely generated amenable discrete group, and
$\big\{\Lambda_{m}\big\}^{\infty}_{m=1}$ a regular exhaustion in $\pi$.
Then it defines a regular exhaustion $\big\{Y_m\big\}^{\infty}_{m=1}$ of
$Y$.
Let $\{P_j(\lambda):\lambda\in[0,\infty)\}$ denote the right continuous
family of spectral projections of the Laplacian $\Delta_j$.
Since $\Delta_j$ is $\pi$-equivariant, so are $P_j(\lambda) =
\chi_{[0,\lambda]}(\Delta_j)$,
for $\lambda\in [0,\infty)$. Let
$F:[0,\infty)\rightarrow[0,\infty)$ denote the spectral
density function,
$$
F(\lambda)={\text{Tr}}_{{\mathcal U}(\pi)}(P_j(\lambda)).
$$
Observe that the $j$-th $L^{2}$ Betti number of $Y$ is also given by
$$
b_{(2)}^j(Y:\pi)=F(0).
$$
We have the spectral density function for every dimension $j$ but we do not
indicate explicitly this dependence. All our arguments are performed with a
fixed value of $j$.
Let $E_{m}(\lambda)$ denote the number of eigenvalues $\mu$ of
$\Delta_j^{(m)}$ satisfying $\mu\leq\lambda$ and which are counted with
multiplicity. We may sometimes omit the subscript $j$ on $\Delta_j^{(m)}$
and $\Delta_j$ to simplify the notation.
We next make the following definitions,
$$
\begin{array}{lcl}
F_{m}(\lambda)& = &\displaystyle\frac{E_{m}(\lambda)}{N_{m}}\;\\[+10pt]
\overline{F}(\lambda) & = & \displaystyle\limsup_{m\rightarrow\infty}
F_{m}(\lambda) \;\\[+10pt]
\mbox{\underline{$F$}}(\lambda) & = & \displaystyle\liminf_{m\rightarrow
\infty}F_{m}(\lambda)\; \\[+10pt]
\overline{F}^{+}(\lambda) & = & \displaystyle\lim_{\delta\rightarrow +0}
\overline{F}(\lambda+\delta) \;\\[+10pt]
\mbox{\underline{$F$}}^{+}(\lambda) & = & \displaystyle\lim_{\delta
\rightarrow +0}\mbox{\underline{$F$}}(\lambda+\delta).
\end{array}
$$
\section{Main Technical Theorem}
Our main technical result is
\begin{theorem} Let $\pi$ be a countable, amenable group.
In the notation of section 1, one has
\begin{itemize}
\item[(1)]$\;F(\lambda)=\overline{F}^{+}(\lambda)=
\mbox{\underline{$F$}}^{+}(\lambda)$.
\item[(2)] $\;\overline{F}$ and {\underline{$F$}} are right
continuous at zero and we have the equalities
\begin{align*}
\overline{F}(0) & =\overline{F}^{+}(0)=F(0)=\mbox{\underline{$F$}}
(0)=\mbox{\underline{$F$}}^{+}(0) \\
\displaystyle & =\lim_{m\rightarrow\infty}F_{m}(0)=\lim_{m\rightarrow
\infty}\;\frac{E_{m}(0)}{N_{m}}\;.
\end{align*}
\item[(3)] $\;$Suppose that $0<\lambda<1$. Then there is a
constant $K>1$ such
that
$$
F(\lambda)-F(0)\leq-a\;\frac{\log K^{2}}{\log\lambda}\;.
$$
\end{itemize}
\end{theorem}
To prove this Theorem, we will first prove a number of preliminary lemmas.
\begin{lemma} There exists a positive number $K$ such that the operator
norms of
$\Delta_j$ and of $\Delta_j^{(m)}$ for all $m=1,2\ldots$
are smaller than $K^2$.
\end{lemma}
\begin{proof}
The proof is similar to that in \cite{L}, Lemma 2.5 and uses Lemma 1.1
together with uniform local finiteness of $Y$. More precisely we use the
fact that the number of $j$-simplexes in $Y$ at distance at most one from a
$j$-simplex $\sigma$ can be bounded \emph{independently} of $\sigma$,
say
$\#\{\tau \in |Y|_j : d(\tau, \sigma) \leq 1\} \leq b$.
\emph{A fortiori} the same is true (with the same constant $b$) for $Y_m$
for all $m$. We now estimate the $\ell^2$ norm of $\Delta \kappa$ for a
cochain $\kappa = \sum_\sigma a_\sigma \sigma$ (having identified a
simplex $\sigma$ with the dual cochain). Now
$$
\Delta \kappa =
\sum_\sigma \left ( \sum_\tau D(\sigma, \tau ) a_\tau \right ) \sigma
$$
so that
$$
\sum_\sigma \left ( \sum_\tau D(\sigma, \tau) a_\tau \right )^2 \leq
\sum_\sigma \left ( \sum_{d(\sigma, \tau )\leq 1} D(\sigma, \tau )^2 \right
) \left ( \sum_{d(\sigma, \tau )\leq 1} a_\tau^2 \right )
\leq C^2 b \sum_\sigma \sum_{d(\sigma, \tau ) \leq 1} a_\tau^2,
$$
where we have used Lemma 1.1 and the Cauchy--Schwarz inequality. In the last
sum above, for every simplex $\sigma$, $a_{\sigma}^2$ appears at most
$b$ times. This proves that $\|\Delta \kappa \|^2
\leq C^2 b^2 \|\kappa \|^2$.
Identical estimate holds (with the same proof) for $\Delta^{(m)}$
which yields the lemma if we set $K=\sqrt{C b}$.
\end{proof}
Observe that $\Delta_j$ can be regarded as a matrix with entries in
${\mathbb Z} [\pi]$, since
by definition, the coboundary operator $d_j$ is a matrix with entries in
${\mathbb Z} [\pi]$, and
so is its adjoint $d_j^*$ as it is equal to the simplicial boundary
operator.
There is a natural trace for matrices with entries in
${\mathbb Z} [\pi]$, viz.
$$
{\text{Tr}}_{{\mathbb Z} [\pi]}(A)= \sum_i {\text{Tr}}_{{\mathcal U}(\pi)}(A_{i,i}).
$$
\begin{lemma} $\;$Let $\pi$ be an amenable group and
let $p(\lambda) = \sum_{r=0}^d a_r \lambda^r$ be a polynomial. Then,
$$
{\text{Tr}}_{{\mathbb Z} [\pi]}(p(\Delta_j))=
\lim_{m\rightarrow\infty}\frac{1}{N_{m}}\;
{\text{Tr}}_{{\mathbb C}}\Big(p\Big(\Delta_j^{(m)}\Big)\Big).
$$
\end{lemma}
\begin{proof} First observe that if $\sigma\in |Y_m|_j$ is such that
$d(\sigma , \partial Y_m) > k$, then Lemma 1.2 implies that
$$
D_j^k(\sigma, \sigma) = \left<\Delta_j^k \delta_\sigma, \delta_\sigma\right>
= \left<\Delta_j^{(m)k} \delta_\sigma, \delta_\sigma\right> =
D_j^{(m)k}(\sigma, \sigma).
$$
By (\ref{inv}) and (\ref{vNt})
$$
{\text{Tr}}_{{\mathbb Z} [\pi]}(p(\Delta_j))= \frac{1}{N_m} \sum_{\sigma\in |Y_m|_j}
\left< p(\Delta_j)\sigma , \sigma \right>.
$$
Therefore we see that
$$
\left| {\text{Tr}}_{{\mathbb Z} [\pi]}(p(\Delta_j)) - \frac{1}{N_{m}}\;
{\text{Tr}}_{{\mathbb C}}\Big(p\Big(\Delta_j^{(m)}\Big)\Big)\right| \le
$$
$$
\frac{1}{N_{m}} \, \sum_{r=0}^d \, |a_r|
\sum_{
\begin{array}{lcl} & \sigma\in |Y_m|_j \\
& d(\sigma, \partial Y_m) \leq d
\end{array}}
\, \left( D^r(\sigma, \sigma) + D^{(m)r}(\sigma, \sigma)\right).
$$
Using Lemma 1.2, we see that there is a positive constant $C$ such that
$$
\left| {\text{Tr}}_{{\mathbb Z} [\pi]}(p(\Delta_j)) - \frac{1}{N_{m}}\;
{\text{Tr}}_{{\mathbb C}}\Big(p\Big(\Delta_j^{(m)}\Big)\Big)\right| \le
2\, \frac{\dot{N}_{m,d}}{N_{m}} \, \sum_{r=0}^d \, |a_r| \, C^r.
$$
The proof of the lemma is completed by taking the limit as
$m\rightarrow\infty$.
\end{proof}
We next recall the following abstract lemmata of L\"uck \cite{L}.
\begin{lemma} $\;$Let $p_{n}(\mu)$ be a sequence of polynomials
such that for
the characteristic function of the interval $[0,\lambda]$,
$\chi_{[0,\lambda]}
(\mu)$, and an appropriate real number $L$,
$$
\lim_{n\rightarrow\infty}p_{n}(\mu)=\chi_{[0,\lambda]}(\mu)\;\;\mbox{ and }
\;\;|p_{n}(\mu)|\leq L
$$
holds for each $\mu\in[0,||\Delta_j||^{2}]$. Then
$$
\lim_{n\rightarrow\infty}{\text{Tr}}_{{\mathbb Z} [\pi]}(p_{n}(\Delta_j))=F(\lambda).
$$\end{lemma}
\begin{lemma} $\;$Let $G:V\rightarrow W$ be a linear map of
finite dimensional
Hilbert spaces $V$ and $W$. Let $p(t)=\det(t-G^{*}G)$ be the characteristic
polynomial of $G^{*}G$. Then $p(t)$ can be written as $p(t)=t^{k}q(t)$
where $q(t)$ is a polynomial with $q(0)\neq 0$. Let $K$ be a real number,
$K\geq\max\{1,||G||\}$ and $C>0$ be a positive constant with
$|q(0)|\geq C>0$.
Let $E(\lambda)$ be the number of eigenvalues
$\mu$ of $G^{*}G$, counted with multiplicity,
satisfying
$\mu\leq\lambda$. Then for $0<\lambda<1$,
the following
estimate is satisfied.
$$
\frac{ E(\lambda)- E(0)}{\dim_{{\mathbb C}}V}\leq
\frac{-\log C}{\dim_{{\mathbb C}}V(-\log\lambda)}+
\frac{\log K^{2}}{-\log\lambda}\;.
$$\end{lemma}
\begin{proof}[Proof of theorem 2.1]
Fix $\lambda\geq 0$ and define for $n\geq 1$ a continuous function
$f_{n}:{\mathbb R}\rightarrow {\mathbb R}$ by
$$
f_{n}(\mu)=\left\{\begin{array}{lcl}
1+\frac{1}{n} & \mbox{ if } & \mu\leq\lambda\\[+7pt]
1+\frac{1}{n}-n(\mu-\lambda) & \mbox{ if } &
\lambda\leq\mu\leq\lambda+\frac{1}{n} \\[+7pt]
\frac{1}{n} & \mbox{ if } & \lambda+\frac{1}{n}\leq \mu
\end{array}\right.
$$
Then clearly $\chi_{[0,\lambda]}(\mu)<f_{n+1}(\mu)<f_{n}(\mu)$ and $f_{n}
(\mu)\rightarrow\chi_{[0,\lambda]}(\mu)$ as $n\rightarrow\infty$ for all
$\mu\in[0,\infty)$. For each $n$, choose a polynomial $p_{n}$ such that
$\chi_{[0,\lambda]}(\mu)<p_{n}(\mu)<f_{n}(\mu)$ holds for all
$\mu\in[0,K^{2}]$.
We can always find such a polynomial by a sufficiently close approximation of
$f_{n+1}$. Hence
$$
\chi_{[0,\lambda]}(\mu)<p_{n}(\mu)<2
$$
and
$$
\lim_{n\rightarrow\infty}p_{n}(\mu)=\chi_{[0,\lambda]}(\mu)
$$
for all $\mu\in [0,K^{2}]$. Recall that $E_{m}(\lambda)$ denotes the number
of eigenvalues $\mu$ of $\Delta_j^{(m)}$ satisfying $\mu\leq\lambda$
and counted with multiplicity. Note that
$||\Delta_j^{(m)} || \leq K^{2}$
by Lemma 2.2.
$$
\begin{array}{lcl}
\displaystyle\frac{1}{N_{m}}\;
{\text{Tr}}_{{\mathbb C}}\big(p_{n}(\Delta_j^{(m)})\big)
&=&\displaystyle \frac{1}{N_{m}}\sum_{\mu\in [0,K^2]}p_{n}(\mu)\\[+12pt]
\displaystyle & =&
\displaystyle\frac{ E_{m}(\lambda)}{N_{m}}+\frac{1}{N_{m}}\left\{
\sum_{\mu\in [0,\lambda ]}(p_{n}(\mu)-1)+\sum_{\mu\in (\lambda ,
\lambda + 1/n]}p_{n}(\mu)\right.\\[+12pt]
\displaystyle& &\displaystyle \hspace*{.5in}\left.+\;\sum_{\mu\in (\lambda
+ 1/n, K^2]}p_{n}(\mu)\right\}
\end{array}
$$
Hence, we see that
\begin{equation} \label{A}
F_{m}(\lambda)=\frac{ E_{m}(\lambda)}{N_{m}}
\leq\frac{1}{N_{m}}\;{\text{Tr}}_{{\mathbb C}}\big(p_{n}(\Delta_j^{(m)})\big).
\end{equation}
In addition,
$$
\begin{array}{lcl}
\displaystyle\frac{1}{N_{m}}\;
{\text{Tr}}_{{\mathbb C}}\big(p_{n}(\Delta_j^{(m)})\big)& \leq &
\displaystyle\frac{ E_{m}(\lambda)}{N_{m}}
+\;\frac{1}{N_{m}}\sup\{p_{n}(\mu)-1:\mu\in[0,\lambda]\}\;
E_{m}(\lambda) \\[+16pt]
\displaystyle &+&\displaystyle\;\frac{1}{N_{m}}\sup\{p_{n}(\mu):
\mu\in[\lambda,\lambda+1/n]\}\;
(E_{m}(\lambda+1/n)-E_{m}(\lambda)) \\[+16pt]
\displaystyle &+&\displaystyle\;\frac{1}{N_{m}}\sup\{p_{n}(\mu):
\mu\in[\lambda+1/n,\;K^{2}]\}\;
(E_{m}(K^{2})-E_{m}(\lambda+1/n)) \\[+16pt]
\displaystyle &\leq &\displaystyle\frac{ E_{m}(\lambda)}{N_{m}}+
\frac{ E_{m}(\lambda)}{nN_{m}}+
\frac{(1+1/n) (E_{m}(\lambda+1/n)-E_{m}(\lambda))}{N_{m}}
\\[+16pt]
\displaystyle & &\displaystyle\hspace*{.5in}+\;
\frac{(E_{m}(K^{2})-E_{m}(\lambda+1/n))}
{nN_{m}} \\[+16pt]
\displaystyle &\leq &\displaystyle
\frac{ E_{m}(\lambda+1/n)}{N_{m}}+\frac{1}{n}\;
\frac{ E_{m}(K^{2})}{N_{m}} \\[+16pt]
\displaystyle &\leq & \displaystyle F_{m}(\lambda+1/n)+\frac{a}{n}
\end{array}
$$
since $E_m(K^2)=\dim C^j(Y_m) \leq aN_m$ for a positive constant
$a$ independent
of $m$.
It follows that
\begin{equation} \label{B}
\frac{1}{N_{m}}\;{\text{Tr}}_{{\mathbb C}}\big(p_{n}(\Delta_j^{(m)})\big)\leq F_{m}
(\lambda+1/n)+\frac{a}{n}.
\end{equation}
Taking the limit inferior in (\ref{B}) and the limit superior in (\ref{A}),
as $m\rightarrow\infty$, we get that
\begin{equation} \label{C}
{\overline{F}}(\lambda)\leq {\text{Tr}}_{{\mathbb Z} [\pi]}\big(p_{n}(\Delta_j)\big)
\leq\mbox{\underline{$F$}}(\lambda+1/n)+\frac{a}{n}.
\end{equation}
Taking the limit as $n\rightarrow\infty$ in (\ref{C}) and
using Theorem 2.4, we see that
$$
{\overline{F}}(\lambda)\leq F(\lambda)
\leq\mbox{\underline{$F$}}^{+}(\lambda).
$$
For all $\varepsilon>0$ we have
$$
F(\lambda)\leq\mbox{\underline{$F$}}^{+}(\lambda)\leq\mbox{\underline{$F$}}
(\lambda+\varepsilon)\leq {\overline{F}}(\lambda+\varepsilon)
\leq F(\lambda+\varepsilon).
$$
Since $F$ is right continuous, we see that
$$
F(\lambda)={\overline{F}}^{+}(\lambda)=\mbox{\underline{$F$}}^{+}
(\lambda)
$$
proving the first part of theorem 2.1.
Next we apply theorem 2.5 to $\Delta_j^{(m)}$. Let $p_{m}(t)$ denote the
characteristic polynomial of $\Delta_j^{(m)}$ and
$p_{m}(t)=t^{r_{m}}q_{m}(t)$ where
$q_{m}(0)\neq 0$.
The matrix describing
$\Delta_j^{(m)}$ has integer entries. Hence $p_{m}$ is a polynomial
with integer
coefficients and $|q_{m}(0)|\geq 1$. By Lemma 2.2 and Theorem 2.5 there
are constants $K$ and $C=1$ independent of $m$, such that
$$
\frac{F_{m}(\lambda)-F_{m}(0)}{a}\leq\frac{\log K^{2}}
{-\log\lambda}
$$
That is,
\begin{equation}\label{D}
F_{m}(\lambda)\leq F_{m}(0)-\frac{a\log K^{2}}{\log\lambda}.
\end{equation}
Taking limit inferior in (\ref{D}) as $m\rightarrow\infty$ yields
$$
\mbox{\underline{$F$}}(\lambda)\leq\mbox{\underline{$F$}}(0)
-\frac{a\log K^{2}}{\log\lambda}.
$$
Passing to the limit as $\lambda\rightarrow +0$, we get that
$$
\mbox{\underline{$F$}}(0)=\mbox{\underline{$F$}}^{+}(0)
\qquad \qquad
\mbox{and } \qquad\qquad {\overline{F}}(0)={\overline{F}}^{+}(0).
$$
We have seen already that ${\overline{F}}^{+}(0)=F(0)=\mbox{\underline{$F$}}
(0)$, which proves part ii) of Theorem 2.1. Since $\displaystyle-
\frac{a\log K^{2}}{\log\lambda}$ is right continuous in $\lambda$,
$$
{\overline{F}}^{+}(\lambda)\leq F(0)-\frac{a\log K^{2}}{\log\lambda}.
$$
Hence part iii) of Theorem 2.1 is also proved.
\end{proof}
We will need the following lemma in the proof of Theorem 0.2 in
the last section. We
follow the proof of Lemma 3.3.1 in \cite{L}.
\begin{lemma}
$$
\int_{0+}^{K^2}\left\{\frac{F(\lambda)-F(0)}{\lambda}\right\} d\lambda\le
\liminf_{m\to\infty}
\int_{0+}^{K^2}\left\{\frac{F_m(\lambda)-F_m(0)}{\lambda}\right\} d\lambda
$$
\end{lemma}
\begin{proof}
By Theorem 2.1, and the monotone convergence theorem, one has
\begin{align*}
\int_{0+}^{K^2}\left\{\frac{F(\lambda)-F(0)}{\lambda}\right\}d\lambda
& = \int_{0+}^{K^2}\left\{\frac{\underline{F}(\lambda)-
\underline{F}(0)}{\lambda}\right\}d\lambda\\
& = \int_{0+}^{K^2}\liminf_{m\to\infty}\left\{\frac{F_m(\lambda)-
F_m(0)}{\lambda}\right\}d\lambda \\
& = \int_{0+}^{K^2}\lim_{m\to\infty}\left(\inf\left\{\frac{F_n(\lambda)-
F_n(0)}{\lambda}|n \ge m\right\}\right)d\lambda \\
& = \lim_{m\to\infty}\int_{0+}^{K^2}\inf\left\{\frac{F_n(\lambda)-
F_n(0)}{\lambda}|n \ge m\right\}d\lambda \\
& \le \liminf_{m\to\infty} \int_{0+}^{K^2}\left\{\frac{F_m(\lambda)-
F_m(0)}{\lambda}\right\}d\lambda.
\end{align*}
\end{proof}
\section{Proofs of the main theorems}
In this section, we will prove the Amenable Approximation Theorem (Theorem
0.1)
of the introduction. We will also prove some related spectral results.
\begin{proof}[Proof of Theorem 0.1 (Amenable Approximation Theorem)]
Observe that
\begin{align*}
\frac{b^j(Y_{m})}{N_{m}} & =
\frac{\dim_{\mathbb C}\Big(\ker(\Delta_j^{(m)})\Big)}{N_{m}} \\
&=F_{m}(0).
\end{align*}
Also observe that
\begin{align*}
b_{(2)}^j(Y:\pi) & = \dim_{\pi}\Big(\ker(\Delta_j)\Big)\\
&=F(0).
\end{align*}
Therefore Theorem $0.1$ follows from Theorem 2.1 after taking the
limit as $m\to\infty$.
\end{proof}
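As a toy numerical illustration of Theorem 0.1 (not needed for any of the
arguments), take $Y={\mathbb R}$ triangulated by the integers with
$\pi={\mathbb Z}$, so that $b_{(2)}^{0}(Y:{\mathbb Z})=0$, and let $Y_m$ be the
path on the vertices $-m,\ldots,m$, i.e.\ $N_m=2m$ translates of the
fundamental domain. The following sketch computes $b^{0}(Y_m)/N_m$ from the
combinatorial $0$--Laplacian:
\begin{verbatim}
import numpy as np

# Toy check of Theorem 0.1 for Y = R (vertices at the integers), pi = Z:
# b^0(Y_m)/N_m should tend to b_(2)^0(Y:Z) = 0.  Purely illustrative.

def path_laplacian(n_vertices):
    """Combinatorial 0-Laplacian of a path with n_vertices vertices."""
    A = np.zeros((n_vertices, n_vertices))
    for i in range(n_vertices - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

for m in (5, 50, 500):
    L0 = path_laplacian(2*m + 1)                   # Y_m = path on -m,...,m
    b0 = np.sum(np.linalg.eigvalsh(L0) < 1e-10)    # = number of components = 1
    print(m, b0 / (2*m))                           # 1/(2m) -> 0
\end{verbatim}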
Suppose that $M$ is a compact Riemannian manifold and let $\Omega^{j}_{(2)}
(\widetilde{M})$ denote the Hilbert space of square integrable $j$-forms on
a normal covering space $\widetilde{M}$, with transformation group $\pi$.
The Laplacian ${\widetilde{\Delta}}_{j}:\Omega^{j}_{(2)}
(\widetilde{M})\rightarrow\Omega^{j}_{(2)}(\widetilde{M})$ is essentially
self-adjoint and has a spectral decomposition
$\{{\widetilde P}_{j}(\lambda):\lambda\in
[0,\infty)\}$ where each ${\widetilde P}_{j}(\lambda)$ has finite
von Neumann trace.
The associated von Neumann spectral density function,
${\widetilde F}(\lambda)$ is
defined as
$$
{\widetilde F}:[0,\infty)\rightarrow [0,\infty),\;\;\;
{\widetilde F}(\lambda)=
{\text{Tr}}_{{\mathcal U}(\pi)}
({\widetilde P}_{j}(\lambda)).
$$
Note that ${\widetilde F}(0)=b_{(2)}^{j}(\widetilde{M}:\pi)$ and that
the spectrum of
$\widetilde{\Delta}_{j}$ has a gap at zero if and only if there is a
$\lambda>0$ such that
$$
{\widetilde F}(\lambda)={\widetilde F}(0).
$$
Suppose that $\pi$ is an amenable group. Fix a triangulation
$X$ on $M$. Then the normal cover $\widetilde{M}$ has an induced
triangulation
$Y$. Let $Y_{m}$ be a subcomplex of $Y$ such that
$\big\{Y_{m}\big\}^{\infty}_{m=1}$
is a regular exhaustion of $Y$. Let $\Delta^{(m)}_j:C^{j}(Y_{m},
{\mathbb C})\rightarrow C^{j}(Y_{m},{\mathbb C})$ denote the
combinatorial Laplacian, and
let $E^{(m)}_j(\lambda)$ denote
the number of eigenvalues $\mu$ of $\Delta^{(m)}_j$ which are less than or
equal to $\lambda$. Under the hypotheses above we prove the following.
\begin{theorem}[Gap criterion] $\;$The spectrum of ${\widetilde{\Delta}}_j$
has a gap at zero if and only if there is a $\lambda>0$ such that
$$
\lim_{m\rightarrow\infty}\;
\frac{E^{(m)}_j(\lambda)-E^{(m)}_j(0)}{N_{m}}=0.
$$\end{theorem}
\begin{proof}
Let $\Delta_j:C^{j}_{(2)}(Y)\rightarrow
C^{j}_{(2)}(Y)$ denote the combinatorial Laplacian acting on $L^2$
j-cochains on $Y$.
Then by \cite{GS}, \cite{E}, the von Neumann spectral density function $F$
of the combinatorial Laplacian $\Delta_j$ and the von Neumann
spectral density
function $\widetilde F$ of the
analytic Laplacian ${\widetilde{\Delta}}_j$ are dilatationally
equivalent, that is, there are constants $C>0$ and $\varepsilon>0$
independent
of $\lambda$ such that for all $\lambda\in(0,\varepsilon)$,
\begin{equation} \label{star}
F(C^{-1}\lambda)\leq {\widetilde F}(\lambda)\leq F(C\lambda).
\end{equation}
Observe that $\frac{E^{(m)}_j(\lambda)}{N_{m}} = F_m(\lambda)$.
Therefore the theorem follows from the dilatational equivalence (\ref{star})
and Theorem 2.1.
\end{proof}
There is a standing conjecture that the Novikov-Shubin invariants of a
closed manifold are positive (see \cite{E}, \cite{ES} and \cite{GS}
for their definition). The next theorem gives evidence supporting
this conjecture, at least in the case of amenable fundamental groups.
\begin{theorem}[Spectral density estimate] $\;$There are constants $C>0$
and $\varepsilon>0$ independent of $\lambda$, such that for all $\lambda\in
(0,\varepsilon)$
$$
{\widetilde F}(\lambda)-{\widetilde F}(0)\leq\frac{C}{-\log(\lambda)}\;.
$$\end{theorem}
\begin{proof}
By the dilatational equivalence (\ref{star}) established in the proof of
Theorem 3.1, ${\widetilde F}(\lambda)\leq F(C\lambda)$ for all sufficiently
small $\lambda>0$ and, letting $\lambda\rightarrow 0^{+}$ in (\ref{star}),
${\widetilde F}(0)=F(0)$. Parts i) and iii) of Theorem 2.1 then yield
$$
{\widetilde F}(\lambda)-{\widetilde F}(0)\leq F(C\lambda)-F(0)\leq
-\frac{a\log K^{2}}{\log (C\lambda)},
$$
and the right-hand side is bounded by a constant multiple of
$1/(-\log\lambda)$ for all sufficiently small $\lambda$.
\end{proof}
\section{On the determinant class conjecture}
There is a standing conjecture that any normal
covering space of a finite simplicial complex is of determinant class.
Our interest in this conjecture stems from our work on $L^2$
torsion \cite{CFM}, \cite{BFKM}. The $L^2$ torsion is a well defined
element in the determinant line of the reduced $L^2$ cohomology, whenever
the covering space is of determinant class. In this section, we use
the results
of section 2 to prove that any {\em amenable} normal
covering space of a finite simplicial complex is of determinant class.
Recall that a covering space $Y$ of a finite simplicial complex $X$
is said to be of {\em determinant class} if, for $0 \le j \le n,$
$$ - \infty < \int^1_{0^+} \log \lambda d F (\lambda),$$
where $F(\lambda)$ denotes the von Neumann
spectral density function of the combinatorial Laplacian $\Delta_j$
as in Section 2.
Suppose that $M$ is a compact Riemannian manifold and let $\Omega^{j}_{(2)}
(\widetilde{M})$ denote the Hilbert space of square integrable $j$-forms on
a normal covering space $\widetilde{M}$, with transformation group $\pi$.
The Laplacian ${\widetilde{\Delta}}_{j}:\Omega^{j}_{(2)}
(\widetilde{M})\rightarrow\Omega^{j}_{(2)}(\widetilde{M})$ is essentially
self-adjoint and the associated von Neumann spectral density function,
${\widetilde F}(\lambda)$ is
defined as in section 3.
Note that ${\widetilde F}(0)=b_{(2)}^{j}(\widetilde{M}:\pi)$.
Then $\widetilde{M}$ is said to be of {\em analytic-determinant class}
if, for $0 \le j \le n,$
$$ - \infty < \int^1_{0^+} \log \lambda d {\widetilde F} (\lambda),$$
where ${\widetilde F}(\lambda)$ denotes the von Neumann spectral density
function
of the analytic Laplacian ${\widetilde{\Delta}}_{j}$ as above. By results
of
Gromov and Shubin \cite{GS},
the condition that $\widetilde{M}$ is of analytic-determinant class
is independent of the choice of Riemannian metric on $M$.
Fix a triangulation
$X$ on $M$. Then the normal cover $\widetilde{M}$ has an induced
triangulation
$Y$. Then $\widetilde{M}$ is said to be of {\em combinatorial-determinant
class}
if $Y$ is of determinant class. Using results of Efremov \cite{E} and
Gromov and Shubin \cite{GS}, one sees that
the condition that $\widetilde{M}$ is of combinatorial-determinant class
is independent of the choice of triangulation on $M$.
Using again results of \cite{E} and \cite{GS}, one observes as in
\cite{BFKM}
that the combinatorial and analytic
notions of determinant class coincide, that is
$\widetilde{M}$ is of combinatorial-determinant class if and only if
$\widetilde{M}$ is of analytic-determinant class. The appendix of \cite{BFK}
contains a proof that every residually finite covering of a compact manifold
is of determinant class. Their proof is based on L\"uck's
approximation of von Neumann spectral density functions \cite{L}. Since
an analogous approximation holds in our setting (cf. Section 2),
we can apply the argument of \cite{BFK} to prove Theorem 0.2.
\begin{proof}[Proof of Theorem 0.2 (Determinant Class Theorem)]
Recall that the \emph{normalized} spectral density functions
$$
F_{m} (\lambda) = \frac 1 {N_m} E_j^{(m)} (\lambda)
$$
are right continuous.
Observe that $F_{m}(\lambda)$ are step functions and
denote by
${\det}' \Delta_j^{(m)}$ the modified determinant of $\Delta_j^{(m)}$,
i.e. the product
of all {\em nonzero} eigenvalues of $\Delta_j^{(m)}$. Let $a_{m, j}$
be the
smallest nonzero eigenvalue and $b_{m, j}$ the largest eigenvalue of
$\Delta_j^{(m)}$. Then, for any $a$ and $b$, such that
$0 < a < a_{m, j}$ and
$b > b_{m,j}$,
\begin{equation}\label{one}
\frac 1 {N_m} \log {\det}' \Delta_j^{(m)} = \int_a^b \log \lambda d F_{m}
(\lambda).
\end{equation}
Integration by parts transforms the Stieltjes integral
$\int_a^b \log \lambda \, d F_{m}(\lambda)$ as follows; the boundary term at
$\lambda=a$ vanishes because $F_{m}(a)=F_{m}(0)$ whenever $0<a<a_{m,j}$.
\begin{equation}\label{two}
\int_a^b \log \lambda d F_{m} (\lambda) = (\log b) \big( F_{m} (b)
- F_{m} (0) \big) - \int_a^b \frac {F_{m} (\lambda) - F_{m} (0)} \lambda d
\lambda.
\end{equation}
As before, $F(\lambda)$ denotes the spectral density function of the operator
$\Delta_j$ for a fixed $j$.
Recall that $F(\lambda)$ is continuous to the right in $\lambda$. Denote
by ${\det}'_\pi\Delta_j$ the modified Fuglede-Kadison determinant
(cf. \cite{FK})
of $\Delta_j$,
that is, the Fuglede-Kadison determinant of $\Delta_j$ restricted to the
orthogonal
complement of its kernel. It is given by the
following Lebesgue-Stieltjes integral,
$$
\log {\det}^\prime_\pi \Delta_j = \int^{K^2}_{0^+} \log
\lambda d F (\lambda)
$$
with $K$ as in Lemma 2.2, i.e. $ || \Delta_j || < K^2$,
where $||\Delta_j||$ is the operator norm of $\Delta_j$.
Integrating by parts, one obtains
\begin{align} \label{three}
\log {\det}^\prime_\pi (\Delta_j) & = \log K^2 \big( F(K^2) - F(0) \big)
\nonumber\\
& + \lim_{\epsilon \rightarrow 0^+}
\Big\{(- \log \epsilon) \big( F (\epsilon)
-F(0) \big) - \int_\epsilon^{K^2} \frac {F (\lambda) -
F (0)} \lambda d \lambda
\Big\}.
\end{align}
Using the fact that $ \liminf_{\epsilon \rightarrow 0^+} (- \log \epsilon)
\big( F (\epsilon) - F (0) \big) \ge 0$
(in fact, this limit exists and is zero)
and $\frac {F (\lambda) - F (0)}
\lambda \ge 0$ for $\lambda > 0,$ one sees that
\begin{equation}\label{four}
\log {\det}^\prime_\pi (\Delta_j) \ge
( \log K^2) \big(F (K^2) - F(0) \big) -
\int_{0^+}^{K^2}
\frac {F(\lambda) - F (0)} \lambda d \lambda.
\end{equation}
We now complete the proof of Theorem 0.2.
The main ingredient is the estimate of $\log {\det}'_\pi(\Delta_j)$ in terms
of
$\log {{\det}}^\prime \Delta_j^{(m)}$ combined with the fact that
$\log {\det}'
\Delta_j^{(m)}\ge 0$ as the determinant $\det' \Delta_j^{(m)}$ is a
positive integer.
By Lemma 2.2, there exists a positive number $K$, $1 \le K < \infty$,
such that, for $m \ge 1$,
$$
|| \Delta_j^{(m)} || \le K^2 \quad {\text{and}}\quad || \Delta_j ||
\le K^2.$$
By Lemma 2.6,
\begin{equation}\label{five}
\int_{0^+}^{K^2} \frac {F (\lambda) - F (0)} \lambda d \lambda \le
\liminf_{m \rightarrow \infty} \int_{0^+}^{K^2} \frac
{F_{m} (\lambda) - F_{m} (0)} \lambda d \lambda.
\end{equation}
Combining (\ref{one}) and (\ref{two}) with the inequality
$\log {\det}' \Delta_j^{(m)}
\ge 0$, we obtain
\begin{equation}\label{six}
\int_{0^+}^{K^2} \frac {F_{m} (\lambda) - F_{m} (0)} \lambda d \lambda
\le (\log K^2) \big(F_{m} (K^2) - F_{m} (0)\big).
\end{equation}
From (\ref{four}), (\ref{five}) and (\ref{six}), we conclude that
\begin{equation}\label{seven}
\log {\det}'_\pi \Delta_j \ge (\log K^2) \big( F(K^2) - F(0) \big)
- \liminf_{m \rightarrow \infty}(\log K^2) \big( F_{m} (K^2) - F_{m}
(0) \big).
\end{equation}
Now Theorem 2.1 yields
$$
F (\lambda) = \lim_{\epsilon \rightarrow 0^+} \liminf_{m \rightarrow
\infty}
F_{m} (\lambda + \epsilon)
$$
and
$$
F (0) = \lim_{m \rightarrow \infty} F_{m} (0).
$$
The last two equalities combined with (\ref{seven}) imply that
$\log {\det}'_\pi \Delta_j \ge 0$.
Since this is true for all $j=0,1,\ldots,\dim Y$,
$Y$ is of determinant class.
\end{proof}
\chapter{Introduction} \label{intro_chapt}
\proverb{No time like the present.}
\section{Subject of this thesis} \label{thesis_subject}
Over the past thirty years, there have been significant advances in
the area of natural language interfaces to databases (\textsc{Nlidb}\xspace{s}).
\textsc{Nlidb}\xspace{s} allow users to access information stored in databases by
typing requests expressed in natural language (e.g.\ English). (The
reader is referred to \cite{Perrault}, \cite{Copestake}, and
\cite{Androutsopoulos1995} for surveys of \textsc{Nlidb}\xspace{s}.\footnote{The
project described in this thesis began with an extensive survey of
\textsc{Nlidb}\xspace{s}. The results of this survey were reported in
\cite{Androutsopoulos1995}.}) Most of the existing \textsc{Nlidb}\xspace{s} were
designed to interface to ``snapshot'' database systems, which provide
very limited facilities for manipulating time-dependent data.
Consequently, most \textsc{Nlidb}\xspace{s} also provide very limited support for the
notion of time. In particular, they were designed to answer questions
that refer mainly to the present (e.g.\ \pref{intro:1} --
\pref{intro:3}), and do not support adequately the mechanisms that
natural language uses to express time. For example, very few (if any)
temporal adverbials (\qit{in 1991}, \qit{after 5:00pm}, etc.) and verb
forms (simple past, past continuous, past perfect, etc.) are typically
allowed, and their semantics are usually over-simplified or ignored.
\begin{examps}
\item What is the salary of each engineer? \label{intro:1}
\item Who is at site 4? \label{intro:2}
\item Which generators are in operation? \label{intro:3}
\end{examps}
The database community is becoming increasingly interested in
\emph{temporal} database systems. These are intended to store and
manipulate in a principled manner information not only about the
present, but also about the past and the future (see \cite{Tansel3} and
\cite{tdbsglossary} for an introduction, and \cite{Bolour1983},
\cite{McKenzie2}, \cite{Stam}, \cite{Soo}, \cite{Kline1993},
and \cite{Tsotras1996} for bibliographies). When
interfacing to temporal databases, it becomes crucial for \textsc{Nlidb}\xspace{s} to
interpret correctly the temporal linguistic mechanisms (verb tenses,
temporal adverbials, temporal subordinate clauses, etc.) of questions
like \pref{intro:4} -- \pref{intro:6}.
\begin{examps}
\item What was the salary of each engineer while ScotCorp was
building bridge 5? \label{intro:4}
\item Did anybody leave site 4 before the chief engineer had inspected
the control room? \label{intro:5}
\item Which systems did the chief engineer inspect on Monday after the
auxiliary generator was in operation? \label{intro:6}
\end{examps}
In chapter \ref{comp_chapt}, I argue that previous approaches to
natural language interfaces for \emph{temporal} databases (\textsc{Nlitdb}\xspace{s})
are problematic, mainly because they ignore important time-related
linguistic phenomena, and/or they assume idiosyncratic temporal
database systems. This thesis develops a principled framework for
constructing English \textsc{Nlitdb}\xspace{s}, drawing on research in linguistic
theories of time, temporal logics, and temporal databases.
\section{Some background}
This section introduces some ideas from \textsc{Nlidb}\xspace{s}, linguistic theories
of time, temporal logics, and temporal databases. Ideas from the four
areas will be discussed further in following chapters.
\subsection{Natural language interfaces to databases}
\label{domain_config}
Past work on \textsc{Nlidb}\xspace{s} has shown the benefits of using the abstract
architecture of figure \ref{pipeline_fig}. The natural language
question is first parsed and analysed semantically by a linguistic
front-end, which translates the question into an intermediate meaning
representation language (typically, some form of logic). The generated
intermediate language expression captures formally what the system
understands to be the meaning of the natural language question,
without referring to particular database constructs. The intermediate
language expression is then translated into a database language
(usually \textsc{Sql}\xspace \cite{Ullman} \cite{Melton1993}) that is supported by
the underlying \emph{database management system} (\textsc{Dbms}\xspace; this is the
part of the database system that manipulates the information in the
database). The resulting database language expression specifies what
information needs to be retrieved in terms of database constructs. The
\textsc{Dbms}\xspace retrieves this information by evaluating the database language
expression, and the obtained information is reported back to the user.
\begin{figure}
\hrule
\medskip
\begin{center}
\includegraphics[scale=.7]{pipeline_arch}
\caption{Abstract architecture of many modern NLIDBs}
\label{pipeline_fig}
\end{center}
\hrule
\end{figure}
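To make this pipeline concrete, the following sketch shows the kind of
database language expression that the final stage might produce for a
question like \pref{intro:1}; the relation and attribute names are purely
hypothetical, and serve only to illustrate what a database-specific
translation could look like in \textsc{Sql}\xspace.
\begin{verbatim}
-- Hypothetical schema: employees(employee, position, salary).
-- A possible SQL translation of "What is the salary of each engineer?":
SELECT employee, salary
FROM   employees
WHERE  position = 'engineer';
\end{verbatim}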
Most \textsc{Nlidb}\xspace{s} can only handle questions referring to a particular
knowledge-domain (e.g.\ questions about train departures, or about the
employees of a company), and need to be configured before they can be
used in a new domain. The configuration typically includes
``teaching'' the \textsc{Nlidb}\xspace words that can be used in the new domain, and
linking basic expressions of the formal intermediate language to
database constructs (see section 6 of \cite{Androutsopoulos1995}).
The architecture of figure \ref{pipeline_fig} has proven to have
several advantages (see sections 5.4 and 6 of
\cite{Androutsopoulos1995}), like modularity (e.g.\ the
linguistic front-end is shielded from database-level issues), and
\textsc{Dbms}\xspace portability (the same linguistic front-end can be used with
\textsc{Dbms}\xspace{s} that support different database languages). This thesis
examines how this architecture can be used to construct
\textsc{Nlitdb}\xspace{s}.
\subsection{Tense and aspect theories} \label{tense_aspect_intro}
In English, temporal information can be conveyed by verb forms (simple
past, past continuous, present perfect, etc.), nouns (\qit{beginning},
\qit{predecessor}, \qit{day}), adjectives (\qit{earliest}, \qit{next},
\qit{annual}), adverbs (\qit{yesterday}, \qit{twice}), prepositional
phrases (\qit{at 5:00pm}, \qit{for two hours}), and subordinate
clauses (\qit{while tank 4 was empty}), to mention just some of the
available temporal mechanisms. A linguistic theory of time must
account for the ways in which these mechanisms are used (e.g.\ specify
what is the temporal content of each verb form, how temporal
adverbials or subordinate clauses affect the meaning of the overall
sentences, etc.). The term ``tense and aspect theories'' is often used
in the literature to refer to theories of this kind. (The precise
meanings of ``tense'' and ``aspect'' vary from one theory to the
other; see \cite{Comrie} and \cite{Comrie2} for some discussion.
Consult chapter 5 of \cite{Kamp1993} for an extensive introduction to
tense and aspect phenomena.)
It is common practice in tense and aspect theories to classify
natural language expressions or situations described by natural
language expressions into \emph{aspectual classes}. (The term
\qit{Aktionsarten} is often used to refer to these
classes.) Many aspectual classifications are similar to
Vendler's taxonomy \cite{Vendler}, that distinguishes between
\emph{state} verbs, \emph{activity} verbs, \emph{accomplishment}
verbs, and \emph{achievement} verbs.\footnote{According to Mourelatos
\cite{Mourelatos1978}, a similar taxonomy was developed independently
in \cite{Kenny1963}, where Kenny notes that his classification is
similar to the distinction between \emph{kineseis} and \emph{energiai}
introduced by Aristotle in \textit{Metaphysics},
$\Theta$.1048\textit{b}, 18--36.} For example, \qit{to run} (as in
\qit{John ran.}) is said to be an activity verb, \qit{to know} (as in
\qit{John knows the answer.}) a state verb, \qit{to build} (as in
\qit{John built a house.}) an accomplishment verb, and \qit{to find}
(as in \qit{Mary found the treasure.}) an achievement verb.
Vendler's intuition seems to be that activity verbs describe actions
or changes in the world. For example, in \qit{John ran.} there is a
running action in the world. In contrast, state verbs do not refer to
any actions or changes. In \qit{John knows the answer.} there is no
change or action in the world. Accomplishment verbs are similar to
activity verbs, in that they denote changes or actions. In the case
of accomplishment verbs, however, the action or change has an inherent
``climax'', a point that has to be reached for the action or change to
be considered complete. In \qit{build a house} the climax is the
point where the whole of the house has been built. If the building
stops before the whole of the house has been built, the building
action is incomplete. In contrast, the action of the activity verb
\qit{to run} (with no object, as in \qit{John ran.}) does not seem to
have any climax. The runner can stop at any time without the running
being any more or less complete. If, however, \qit{to run} is used
with an object denoting a precise distance (e.g.\ \qit{to run a
mile}), then the action \emph{does} have a climax: the point where
the runner completes the distance. In this case, \qit{to run} is an
accomplishment verb. Finally, achievement verbs, like \qit{to find},
describe instantaneous events. In \qit{Mary found the treasure.} the
actual finding is instantaneous (according to Vendler, the time during
which Mary was searching for the treasure is not part of the actual
finding). In contrast, in \qit{John built a house.} (accomplishment
verb) the actual building action may have lasted many years.
Aspectual taxonomies are invoked to account for semantic differences
in similar sentences. The so-called ``imperfective paradox''
\cite{Dowty1977} \cite{Lascarides} is a well-known example (various
versions of the imperfective paradox have been proposed; see
\cite{Kent}). The paradox is that if the answer to a question like
\pref{taintro:1} is affirmative, then the answer to the
non-progressive \pref{taintro:2} must also be affirmative. In
contrast, an affirmative answer to \pref{taintro:3} does not
necessarily imply an affirmative answer to \pref{taintro:4} (John may
have abandoned the repair before completing it). The \textsc{Nlitdb}\xspace must
incorporate some account for this phenomenon. If the \textsc{Nlitdb}\xspace generates
an affirmative response to \pref{taintro:1}, there must be some
mechanism to guarantee that the \textsc{Nlitdb}\xspace's answer to \pref{taintro:2}
will also be affirmative. No such mechanism is needed in
\pref{taintro:3} and \pref{taintro:4}.
\begin{examps}
\item Was IBI ever advertising a new computer? \label{taintro:1}
\item Did IBI ever advertise a new computer? \label{taintro:2}
\item Was J.Adams ever repairing engine 2? \label{taintro:3}
\item Did J.Adams ever repair engine 2? \label{taintro:4}
\end{examps}
The difference between \pref{taintro:1} -- \pref{taintro:2} and
\pref{taintro:3} -- \pref{taintro:4} can be accounted for by
classifying \qit{to advertise} as an activity, \qit{to repair} as an
accomplishment, and by stipulating that: (i) the simple past of an
accomplishment requires the climax to have been reached; (ii) the past
continuous of an accomplishment or activity, and the simple past of an
activity impose no such requirement. Then, the fact that an
affirmative answer to \pref{taintro:3} does not necessarily imply an
affirmative answer to \pref{taintro:4} is accounted for by the fact
that \pref{taintro:4} requires the repair to have been completed,
while \pref{taintro:3} merely requires the repair to have been ongoing
at some past time. In contrast \pref{taintro:2} does not require any
climax to have been reached; like \pref{taintro:1}, it simply requires
the advertising to have been ongoing at some past time. Hence, an
affirmative answer to \pref{taintro:1} implies an affirmative answer
to \pref{taintro:2}. It will become clear in chapter
\ref{linguistic_data} that aspectual taxonomies pertain to the
semantics of almost all temporal linguistic mechanisms.
\subsection{Temporal logics} \label{temp_log_intro}
Time is an important research topic in logic, and many formal
languages have been proposed to express temporal information
\cite{VanBenthem} \cite{Gabbay1994b}. One of the simplest approaches
is to use the traditional first-order predicate logic, introducing
time as an extra argument of each predicate. \pref{tlogi:1} would be
represented as \pref{tlogi:2}, where $t$ is a time-denoting variable,
$\prec$ stands for temporal precedence, $\sqsubseteq$ for temporal
inclusion, and $now$ is a special term denoting the present
moment. The answer to \pref{tlogi:1} would be affirmative iff
\pref{tlogi:2} evaluates to true, i.e.\ iff there is a time $t$, such
that $t$ precedes the present moment, $t$ falls within 1/10/95, and
tank 2 contained water at $t$. (Throughout this thesis, I use ``iff''
as a shorthand for ``if and only if''.)
\begin{examps}
\item Did tank 2 contain water (some time) on 1/10/95? \label{tlogi:1}
\item $\exists t \; contain(tank2, water, t) \land t \prec now \land
t \sqsubseteq \mathit{1/10/95}$ \label{tlogi:2}
\end{examps}
An alternative approach is to employ \emph{temporal operators}, like
Prior's $P$ (past) and $F$ (future) \cite{Prior}. In that
approach, formulae are evaluated with respect to particular times. For
example, $contain(tank2, water)$ would be true at a time $t$ iff
tank 2 contained water at $t$. Assuming that $\phi$ is a formula,
$P\phi$ is true at a time $t$ iff there is a time $t'$,
such that $t'$ precedes $t$, and $\phi$ is true at $t'$. Similarly,
$F\phi$ is true at $t$ iff there is a $t'$, such that $t'$ follows
$t$, and $\phi$ is true at $t'$. \qit{Tank 2 contains water.}
can be expressed as $contain(tank2, water)$, \qit{Tank 2 contained
water.} as $P\, contain(tank2, water)$, \qit{Tank 2 will contain
water.} as $F \, contain(tank2, water)$, and \qit{Tank 2 will have
contained water.} as $F \, P \, contain(tank2, water)$.
Additional operators can be introduced, to capture the semantics
of temporal adverbials, temporal subordinate clauses,
etc. For example, an $On$ operator could be introduced, with the
following semantics: if $\phi$ is a formula and $\kappa$ specifies a
day (e.g.\ the day 1/10/95), then $On[\kappa, \phi]$ is true at a
time $t$ iff $t$ falls within the day specified by $\kappa$, and
$\phi$ is true at $t$. Then, \pref{tlogi:1} could be represented as
\pref{tlogi:4}.
\begin{examps}
\item $P \;\; On[\mathit{1/10/95}, contain(tank2, water)]$ \label{tlogi:4}
\end{examps}
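(Spelling out the semantics of the two operators given above: \pref{tlogi:4}
is true at the present moment iff there is a time $t'$ such that
$t' \prec now$, $t'$ falls within 1/10/95, and $contain(tank2, water)$ is
true at $t'$; these are exactly the truth conditions of \pref{tlogi:2}.)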
The intermediate representation language of this thesis, called \textsc{Top}\xspace,
adopts the operators approach (\textsc{Top}\xspace stands for ``language with
Temporal OPerators''). Temporal operators have also been used in
\cite{Dowty1982}, \cite{Lascarides}, \cite{Richards}, \cite{Kent},
\cite{Crouch2}, \cite{Pratt1995}, and elsewhere.
Unlike logics designed to be used in systems that reason about what
changes or remains the same over time, what can or will happen, what
could or would have happened, or how newly arrived information fits
within already known facts or assumptions (e.g.\ the situation
calculus of \cite{McCarthy1969}, the event calculus of
\cite{Kowalski1986}, and the logics of \cite{Allen1983},
\cite{Allen1984}, and \cite{McDermott1982} -- see \cite{Vila1994} for a
survey), \textsc{Top}\xspace is not intended to be used in reasoning. I provide no
inference rules for \textsc{Top}\xspace, and this is why I avoid calling \textsc{Top}\xspace a
logic. \textsc{Top}\xspace is only a formal language, designed to facilitate the
systematic mapping of temporal English questions to formal expressions
(this mapping is not a primary consideration in the above mentioned
logics). The answers to the English questions are not generated by
carrying out reasoning in \textsc{Top}\xspace, but by translating the \textsc{Top}\xspace
expressions to database language expressions, which are then evaluated
by the underlying \textsc{Dbms}\xspace. The definition of \textsc{Top}\xspace will be given in
chapter \ref{TOP_chapter}, where other ideas from temporal logics will
also be discussed.
\subsection{Temporal databases} \label{tdbs_general}
In the \emph{relational model} \cite{Codd1970}, currently the dominant
database model, information is stored in \emph{relations}.
Intuitively, relations can be thought of as tables, consisting of rows
(called \emph{tuples}) and columns (called \emph{attributes}). For
example, the $salaries$ relation below shows the present salaries of
the current employees of a company. In the case of $salaries$,
whenever the salary of an employee is changed, or whenever an employee
leaves the company, the corresponding tuple is modified or deleted.
Hence, the database ``forgets'' past facts, and does not contain
enough information to answer questions like \qit{What was the salary
of T.Smith on 1/1/1992?}.
\adbtable{2}{|c|c|}{$salaries$}
{$employee$ & $salary$ }
{$J.Adams$ & $17000$ \\
$T.Smith$ & $19000$ \\
\ \dots & \ \dots
}
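A minimal sketch of the destructive update involved (the new salary value is
invented for illustration): when the salary of T.Smith changes, the old
value is simply overwritten and lost.
\begin{verbatim}
-- T.Smith gets a new salary; the previous salary is no longer recoverable
UPDATE salaries
SET    salary = 21000
WHERE  employee = 'T.Smith';
\end{verbatim}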
It is certainly true that traditional database models and languages
\emph{can} be used, and indeed \emph{have} been used, to store temporal
information.
(This has led several researchers to question the need for special
temporal support in database systems; see \cite{Davies1995} for some
discussion.) For example, two extra attributes ($\mathit{from}$ and
$\mathit{to}$) could be added to $salaries$ (as in $salaries2$) to
\emph{time-stamp} its tuples, i.e.\ to show when each employee had the
corresponding salary.
\adbtable{4}{|c|c|c|c|}{$salaries2$}
{$employee$ & $salary$ & $from$ & $to$ }
{$J.Adams$ & $17000$ & $1/1/88$ & $5/5/90$ \\
$J.Adams$ & $18000$ & $6/5/90$ & $9/8/91$ \\
$J.Adams$ & $21000$ & $10/8/91$ & $27/3/93$ \\
\ \dots & \ \dots & \ \dots & \ \dots \\
$T.Smith$ & $17000$ & $1/1/89$ & $1/10/90$ \\
$T.Smith$ & $21000$ & $2/10/90$ & $23/5/92$ \\
\ \dots & \ \dots & \ \dots & \ \dots
}
The lack of special temporal support in traditional database models
and languages, however, complicates the task of expressing time-related
data manipulations in a database language. We may want, for example,
to compute from $salaries2$ a new relation $same\_salaries$ that shows
the times when J.Adams and T.Smith had the same salary, along with
their common salary:
\adbtable{3}{|c|c|c|}{$same\_salaries$}
{$salary$ & $from$ & $to$ }
{$17000$ & $1/1/89$ & $5/5/90$ \\
$21000$ & $10/8/91$ & $23/5/92$ \\
\ \dots & \ \dots & \ \dots
}
That is, for every tuple of J.Adams in $salaries2$, we need to check
if the period specified by the $\mathit{from}$ and $\mathit{to}$
values of that tuple overlaps the period specified by the
$\mathit{from}$ and $\mathit{to}$ values of a tuple for T.Smith which
has the same $salary$ value. If they overlap, we need to compute the
intersection of the two periods. This cannot be achieved easily in the
present version of \textsc{Sql}\xspace (the dominant database language for
relational databases \cite{Ullman} \cite{Melton1993}), because \textsc{Sql}\xspace
currently does not have any special commands to check if two periods
overlap, or to compute the intersection of two periods (in fact, it
does not even have a period datatype).
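For concreteness, the following sketch shows how this computation could be
attempted in plain \textsc{Sql-92}\xspace. The attribute names \texttt{from\_date} and
\texttt{to\_date} are assumed instead of $\mathit{from}$ and $\mathit{to}$
(the latter clash with reserved \textsc{Sql}\xspace keywords), and both are taken to be of
type \texttt{DATE}; the overlap test and the intersection of the two periods
have to be spelled out by hand.
\begin{verbatim}
-- salaries2(employee, salary, from_date, to_date)
SELECT a.salary,
       CASE WHEN a.from_date > b.from_date        -- later of the two starts
            THEN a.from_date ELSE b.from_date END AS from_date,
       CASE WHEN a.to_date < b.to_date            -- earlier of the two ends
            THEN a.to_date ELSE b.to_date END AS to_date
FROM   salaries2 AS a, salaries2 AS b
WHERE  a.employee  = 'J.Adams'
  AND  b.employee  = 'T.Smith'
  AND  a.salary    = b.salary
  AND  a.from_date <= b.to_date                   -- the two periods overlap
  AND  b.from_date <= a.to_date;
\end{verbatim}
Even this simple case requires a self-join and explicit \texttt{CASE}
expressions; nothing in the query reflects the fact that each pair of date
attributes jointly encodes a time period.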
As a further example, the approach of adding a $\mathit{from}$ and a
$\mathit{to}$ attribute to every relation allows relations like $rel1$ and
$rel2$ to be formed. Although $rel1$ and $rel2$ contain
different tuples, they represent the same information.
\vspace{-9mm}
\begin{center}
\begin{tabular}{lr}
\dbtable{4}{|c|c|c|c|}{$rel1$}
{$employee$ & $salary$ & $from$ & $to$}
{$G.Foot$ & $17000$ & $1/1/88$ & $9/5/88$ \\
$G.Foot$ & $17000$ & $10/5/88$ & $9/5/93$ \\
$G.Foot$ & $18000$ & $10/5/93$ & $1/3/94$ \\
$G.Foot$ & $18000$ & $2/3/94$ & $11/2/95$ \\
$G.Foot$ & $17000$ & $12/2/95$ & \ $31/3/96$
}
&
\dbtable{4}{|c|c|c|c|}{$rel2$}
{$employee$ & $salary$ & $from$ & $to$}
{$G.Foot$ & $17000$ & $1/1/88$ & $31/5/89$ \\
$G.Foot$ & $17000$ & $1/6/89$ & $10/8/92$ \\
$G.Foot$ & $17000$ & $11/8/92$ & $9/5/93$ \\
$G.Foot$ & $18000$ & $10/5/93$ & $11/2/95$ \\
$G.Foot$ & $17000$ & $12/2/95$ & \ $31/3/96$
}
\end{tabular}
\end{center}
Checking if the two relations represent the same information is not
easy in the current \textsc{Sql}\xspace version. This task would be greatly
simplified if \textsc{Sql}\xspace provided some mechanism to ``normalise''
relations, by merging tuples that apart from their $from$ and $to$
values are identical (tuples of this kind are called
\emph{value-equivalent}). In our example, that mechanism would turn
both $rel1$ and $rel2$ into $rel3$. To check that $rel1$ and
$rel2$ contain the same information, one would check that the
normalised forms of the two relations are the same.
\adbtable{4}{|c|c|c|c|}{$rel3$}
{$employee$ & $salary$ & $from$ & $to$}
{$G.Foot$ & $17000$ & $1/1/88$ & $9/5/93$ \\
$G.Foot$ & $18000$ & $10/5/93$ & $11/2/95$ \\
$G.Foot$ & $17000$ & $12/2/95$ & \ $31/3/96$
}
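Purely for illustration, if coalesced versions of the two relations (called,
say, \texttt{rel1\_norm} and \texttt{rel2\_norm}) had somehow been computed,
the comparison itself would be easy to express as two set differences that
must both be empty; it is the coalescing step that current \textsc{Sql}\xspace offers no
direct support for.
\begin{verbatim}
-- rel1_norm and rel2_norm are assumed to hold the normalised relations
SELECT * FROM rel1_norm
EXCEPT
SELECT * FROM rel2_norm;   -- must return no rows

SELECT * FROM rel2_norm
EXCEPT
SELECT * FROM rel1_norm;   -- must return no rows
\end{verbatim}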
Numerous temporal versions of \textsc{Sql}\xspace and the relational model have been
proposed (e.g.\ \cite{Clifford2}, \cite{Ariav1986},
\cite{Tansel}, \cite{Snodgrass}, \cite{Navathe1988}, \cite{Gadia1988},
\cite{Lorentzos1988}; see \cite{McKenzie} for a summary of some of the
proposals). These add special temporal facilities to \textsc{Sql}\xspace (e.g.\
predicates to check if two periods overlap, functions to compute
intersections of periods, etc.), and often special types of relations
to store time-varying information (e.g.\ relations that force
value-equivalent tuples to be merged automatically). Until
recently there was little consensus on how temporal support should be
added to \textsc{Sql}\xspace and the relational model (or other database languages
and models), with every researcher in the field adopting
his/her own temporal database language and model. Perhaps as a result
of this, very few temporal \textsc{Dbms}\xspace{s} have been implemented (these are
mostly early prototypes; see \cite{Boehlen1995c}).
This thesis adopts \textsc{Tsql2}\xspace, a recently proposed temporal extension of
\textsc{Sql-92}\xspace that was designed by a committee comprising most leading
temporal database researchers. (\textsc{Sql-92}\xspace is the latest \textsc{Sql}\xspace standard
\cite{Melton1993}. \textsc{Tsql2}\xspace is defined in \cite{TSQL2book}. An earlier
definition of \textsc{Tsql2}\xspace can be found in \cite{Tsql2Sigmod}.) \textsc{Tsql2}\xspace and the
version of the relational model on which \textsc{Tsql2}\xspace is based will be
presented in chapter \ref{tdb_chapter}, along with some modifications
that were introduced to them for the purposes of this thesis. Until
recently, there was no implemented \textsc{Dbms}\xspace supporting \textsc{Tsql2}\xspace. A prototype
system capable of evaluating \textsc{Tsql2}\xspace queries, however, now
exists. (This system is called \textsc{TimeDB}. See
\cite{Boehlen1995c} for a brief technical description of
\textsc{TimeDB}. \textsc{TimeDB} actually supports \textsc{Atsql2}, a
variant of \textsc{Tsql2}\xspace. See \cite{Boehlen1996} for some information on
\textsc{Atsql2}.)
Researchers in temporal databases distinguish between \emph{valid
time} and \emph{transaction time}.\footnote{I adopt the consensus
terminology of \cite{tdbsglossary}. A third term, \emph{user-defined
time}, is also employed in the literature to refer to temporal
information that is stored in the database without the \textsc{Dbms}\xspace
treating it in any special way.} The valid time of some information
is the time when that information was true in the \emph{world}. The
transaction time of some information is the time when the
\emph{database} ``believed'' that information. In this
thesis, I ignore the transaction-time dimension. I assume that the
natural language questions will always refer to the information that
the database currently believes to be true. Questions like
\pref{dbi:5}, where \qit{on 2/1/95} specifies a transaction time other
than the present, will not be considered.
\begin{examps}
\item According to what the database believed on 2/1/95, what was the
salary of J.Adams on 1/1/89? \label{dbi:5}
\end{examps}
\section{Contribution of this thesis} \label{contribution}
As mentioned in section \ref{thesis_subject}, most existing \textsc{Nlidb}\xspace{s}
were designed to interface to snapshot database systems. Although
there have been some proposals on how to build \textsc{Nlidb}\xspace{s} for temporal
databases, in chapter \ref{comp_chapt} I argue that these proposals
suffer from one or more of the following: (i) they ignore
important English temporal mechanisms, or assign to them
over-simplified semantics, (ii) they lack clearly defined meaning
representation languages, (iii) they do not provide complete
descriptions of the mappings from natural language to meaning
representation language, or (iv) from meaning representation language
to database language, (v) they adopt idiosyncratic and often not
well-defined temporal database models or languages, (vi) they do not
demonstrate that their ideas are implementable. In this thesis, I
develop a principled framework for constructing English \textsc{Nlitdb}\xspace{s},
attempting to avoid pitfalls (i) -- (vi). Building on the architecture
of figure \ref{pipeline_fig}:
\begin{itemize}
\item I explore temporal linguistic phenomena that are likely to
appear in English questions to \textsc{Nlitdb}\xspace{s}. Drawing on existing
linguistic theories of time, I formulate an account for many of
these phenomena that is simple enough to be embodied in practical
\textsc{Nlitdb}\xspace{s}.
\item Exploiting ideas from temporal logics, I define a temporal
meaning representation language (\textsc{Top}\xspace), which I use to represent the
semantics of English questions.
\item I show how \textsc{Hpsg}\xspace \cite{Pollard1} \cite{Pollard2}, currently a
highly regarded linguistic theory, can be modified to incorporate
the tense and aspect account of this thesis, and to map a wide range
of English questions involving time to appropriate \textsc{Top}\xspace
expressions.
\item I present and prove the correctness of a mapping that translates
\textsc{Top}\xspace expressions to \textsc{Tsql2}\xspace queries.
\end{itemize}
This way, I establish a sound route from English questions involving time
to a general-purpose temporal database language; this route can act as a
principled framework for constructing \textsc{Nlitdb}\xspace{s}. To ensure that this framework is
workable:
\begin{itemize}
\item I demonstrate how it can be employed to implement a prototype
\textsc{Nlitdb}\xspace, using the \textsc{Ale}\xspace grammar development system
\cite{Carpenter1992} \cite{Carpenter1994} and Prolog
\cite{Clocksin1994} \cite{Sterling1994}. I configure the prototype \textsc{Nlitdb}\xspace
for a hypothetical air traffic control domain, similar to that of
\cite{Sripada1994}.
\end{itemize}
Unfortunately, during most of the work of this thesis no \textsc{Dbms}\xspace
supported \textsc{Tsql2}\xspace. As mentioned in section \ref{tdbs_general}, a
prototype \textsc{Dbms}\xspace (\textsc{TimeDB}) that supports a version of \textsc{Tsql2}\xspace
(\textsc{Atsql2}) was announced recently. Although it would be
obviously very interesting to link the \textsc{Nlitdb}\xspace of this thesis to
\textsc{TimeDB}, there is currently very little documentation on
\textsc{TimeDB}. The task of linking the two systems is further
complicated by the fact that both adopt their own versions of \textsc{Tsql2}\xspace
(\textsc{TimeDB} supports \textsc{Atsql2}, and the \textsc{Nlitdb}\xspace of this
thesis adopts a slightly modified version of \textsc{Tsql2}\xspace, to be discussed in
chapter \ref{tdb_chapter}). One would have to bridge the differences
between the two \textsc{Tsql2}\xspace versions. Due to shortage of time, I made no
attempt to link the \textsc{Nlitdb}\xspace of this thesis to \textsc{TimeDB}. The
\textsc{Tsql2}\xspace queries generated by the \textsc{Nlitdb}\xspace are currently not executed, and
hence no answers are produced.
Although several issues (summarised in section \ref{to_do}) remain to
be addressed, I am confident that this thesis will prove valuable to
both those wishing to implement \textsc{Nlitdb}\xspace{s} for practical applications,
and those wishing to carry out further research on \textsc{Nlitdb}\xspace{s},
because: (a) it is essentially the first in-depth exploration of time-related
problems the \textsc{Nlitdb}\xspace designer has to face, from the linguistic
level down to the database level, (b) it proposes a clearly defined
framework for building \textsc{Nlitdb}\xspace{s} that addresses a great number of
these problems, and (c) it shows how this framework was used to
implement a prototype \textsc{Nlitdb}\xspace on which more elaborate \textsc{Nlitdb}\xspace{s} can
be based.
Finally, I note that: (i) the work of this thesis is one of the first
to use \textsc{Tsql2}\xspace, and one of the first to generate feedback to the
\textsc{Tsql2}\xspace designers (a number of obscure points and possible improvements in
the definition of \textsc{Tsql2}\xspace were revealed during this project; these were
reported in \cite{Androutsopoulos1995b}); (ii) the prototype \textsc{Nlitdb}\xspace
of this thesis is currently one of the very few \textsc{Nlidb}\xspace{s} (at least
among \textsc{Nlidb}\xspace{s} whose grammar is publicly documented) that adopt
\textsc{Hpsg}\xspace.\footnote{See also \cite{Cercone1993}. A version of
the \textsc{Hpsg}\xspace grammar of this thesis, stripped of its temporal mechanisms,
was used in \cite{Seldrup1995} to construct a \textsc{Nlidb}\xspace for snapshot
databases.}
\section{Issues that will not be addressed} \label{no_issues}
To allow the work of this thesis to be completed within the available
time, the following issues were not considered.
\paragraph{Updates:} This thesis focuses on
\emph{questions}. Natural language requests to \emph{update} the
database (e.g.\ \pref{noiss:1}) are not considered
(see \cite{Davidson1983} for work on natural language updates.)
\begin{examps}
\item Replace the salary of T.Smith for the period 1/1/88 to 5/5/90
by 17000. \label{noiss:1}
\end{examps}
Assertions like \pref{noiss:2} will be treated as yes/no
questions, i.e.\ \pref{noiss:2} will be treated in the same way as
\pref{noiss:3}.
\begin{examps}
\item On 1/1/89 the salary of T.Smith was 17000. \label{noiss:2}
\item Was the salary of T.Smith 17000 on 1/1/89? \label{noiss:3}
\end{examps}
\paragraph{Schema evolution:} This term refers to cases where the
\emph{structure}, not only the \emph{contents}, of the database changes
over time (new relations are created, old ones are deleted, attributes are added or
removed from relations, etc.; see \cite{McKenzie1990}). Schema evolution
is not considered in this thesis. The structure of the
database is assumed to be static, although the information in the
database may change over time.
\paragraph{Modal questions:} Modal questions ask if
something could have happened, or could never have happened, or will
necessarily happen, or can possibly happen. For example,
\qit{Could T.Smith have been an employee of IBI in 1985?} does not ask
if T.Smith was an IBI employee in 1985, but if it would have
been possible for T.Smith to be an IBI employee at that time. Modal
questions are not examined in this thesis (see \cite{Mays1986} and
\cite{Lowden1991} for related work).
\paragraph{Future questions:} A temporal database may contain
predictions about the future. At some company, for example, it may
have been decided that T.Smith will retire two years from the present,
and that J.Adams will replace him. These decisions may have been
recorded in the company's database. In that context, one may want to
submit questions referring to the future, like \qit{When will T.Smith
retire?} or \qit{Who will replace T.Smith?}. To simplify the
linguistic data that the work of this thesis had to address, future
questions were not considered. The database may contain information
about the future, but the framework of this thesis does not currently
allow this information to be accessed through natural
language. Further work could extend the framework of this thesis to
handle future questions as well (see section \ref{to_do}).
\paragraph{Cooperative responses:} In many cases, it is helpful
for the user if the \textsc{Nlidb}\xspace reports more information than what the
question literally asks for. In the dialogue below (from
\cite{Johnson1985}), for example, the system has reasoned that the
user would be interested to know about the United flight, and has
included information about that flight in its answer although this
was not requested.
\begin{examps}
\item Do American Airlines have a night flight to Dallas? \label{noiss:4}
\item \sys{No, but United have flight 655.} \label{noiss:5}
\end{examps}
In other cases, the user's requests may be based on false
presumptions. \pref{noiss:4a}, for example, presumes that there is a
flight called BA737. If this is not true, it would be useful if the
\textsc{Nlidb}\xspace could generate a response like \pref{noiss:4b}.
\begin{examps}
\item Does flight BA737 depart at 5:00pm? \label{noiss:4a}
\item \sys{Flight BA737 does not exist.} \label{noiss:4b}
\end{examps}
The term \emph{cooperative responses} \cite{Kaplan1982} is used to
refer to responses like \pref{noiss:5} and \pref{noiss:4b}. The
framework of this thesis includes no mechanism to generate cooperative
responses. During the work of this thesis, however, it became clear
that such a mechanism is particularly important in questions to
\textsc{Nlitdb}\xspace{s}, and hence a mechanism of this kind should be added (this
will be discussed further in section \ref{to_do}).
\paragraph{Anaphora:}
Pronouns (\qit{she}, \qit{they}, etc.), possessive determiners
(\qit{his}, \qit{their}), and some noun phrases (\qit{the project},
\qit{these people}) are used anaphorically, to refer to contextually
salient entities. The term \emph{nominal anaphora} is frequently used
to refer to this phenomenon (see \cite{Hirst1981} for an overview of
nominal anaphora, and \cite{Hobbs1986} for methods that can be used to
resolve pronoun anaphora). Verb tenses and other temporal expressions
(e.g.\ \qit{on Monday}) are often used in a similar anaphoric manner
to refer to contextually salient times (this will be discussed in
section \ref{temporal_anaphora}). The term \emph{temporal anaphora}
\cite{Partee1984} is used in that case. Apart from a temporal
anaphoric phenomenon related to noun phrases like \qit{the sales
manager} (to be discussed in section \ref{noun_anaphora}), for which
support is provided, the framework of this thesis currently provides
no mechanism to resolve anaphoric expressions (i.e.\ to determine the
entities or times these expressions refer to). Words introducing
nominal anaphora (e.g.\ pronouns) are not allowed, and (excluding the
phenomenon of section \ref{noun_anaphora}) temporal anaphoric
expressions are treated as denoting any possible referent (e.g.\
\qit{on Monday} is taken to refer to any Monday).
\paragraph{Elliptical sentences:} Some \textsc{Nlidb}\xspace{s} allow
elliptical questions to be submitted as follow-ups to previous
questions (e.g.\ \qit{What is the salary of J.Adams?}, followed
by \qit{His address?}; see section 4.6 of
\cite{Androutsopoulos1995} for more examples). Elliptical questions
are not considered in this thesis.
\section{Outline of the remainder of this thesis}
The remainder of this thesis is organised as follows:
Chapter 2 explores English temporal mechanisms, delineating
the set of linguistic phenomena that this thesis attempts to
support. Drawing on existing ideas from tense and aspect theories, an
account for these phenomena is formulated that is suitable to the
purposes of this thesis.
Chapter 3 defines formally \textsc{Top}\xspace, discussing how it can be used to
represent the semantics of temporal English expressions, and how
it relates to other existing temporal representation
languages.
Chapter 4 provides a brief introduction to \textsc{Hpsg}\xspace, and discusses how
\textsc{Hpsg}\xspace can be modified to incorporate the tense and aspect account of
this thesis, and to map English questions involving time to appropriate
\textsc{Top}\xspace expressions.
Chapter 5 defines the mapping from \textsc{Top}\xspace to \textsc{Tsql2}\xspace, and proves its
correctness (parts of this proof are given in appendix
\ref{trans_proofs}). It also discusses the modifications to \textsc{Tsql2}\xspace that
are adopted in this thesis.
Chapter 6 describes the architecture of the prototype \textsc{Nlitdb}\xspace,
provides information about its implementation, and explains which
additional modules would have to be added if the system were to be
used in real-life applications. Several sample English questions
directed to a hypothetical temporal database of an airport are shown,
discussing the corresponding output of the prototype \textsc{Nlitdb}\xspace.
Chapter 7 discusses previous proposals in the area of \textsc{Nlitdb}\xspace{s},
comparing them to the framework of this thesis.
Chapter 8 summarises and proposes directions for further research.
\chapter{The Linguistic Data and an Informal Account} \label{linguistic_data}
\proverb{There is a time for everything.}
\section{Introduction}
This chapter explores how temporal information is conveyed in English,
focusing on phenomena that are relevant to \textsc{Nlitdb}\xspace{s}.
There is a wealth of temporal English mechanisms (e.g.\ verb tenses,
temporal adverbials, temporal adjectives, etc.), and it would be
impossible to consider all of those in this thesis. Hence,
several English temporal mechanisms will be ignored, and
simplifying assumptions will be introduced in some of the mechanisms
that will be considered. One of the goals of this chapter is to specify
exactly which linguistic phenomena this thesis attempts to
support. For the phenomena that will be supported, a further goal is
to provide an informal account of how they will be treated.
Although this chapter draws on existing tense and aspect theories, I
stress that it is in no way an attempt to formulate an improved tense
and aspect theory. The aim is more practical: to explore how
ideas from existing tense and aspect theories can be integrated into
\textsc{Nlitdb}\xspace{s}, in a way that leads to provably implementable systems.
\section{Aspectual taxonomies} \label{asp_taxes}
As mentioned in section \ref{tense_aspect_intro}, many tense and
aspect theories employ aspectual classifications, which are often
similar to Vendler's distinction between states (e.g.\ \qit{to know},
as in \qit{John knows the answer.}), activities (e.g.\ \qit{to run},
as in \qit{John ran.}), accomplishments (e.g.\ \qit{to build}, as in
\qit{John built a house.}), and achievements (e.g.\ \qit{to find}, as
in \qit{Mary found the treasure.}).
Vendler proposes a number of linguistic tests to determine the
aspectual classes of verbs. For example, according to Vendler,
activity and accomplishment verbs can appear in the progressive (e.g.\
\qit{John is running}, \qit{John is building a house}), while state
and achievement verbs cannot (*\qit{John is knowing the answer.},
*\qit{Mary is finding the treasure}). Activity verbs are said to
combine felicitously with \qit{for~\dots} adverbials specifying
duration (\qit{John ran for two minutes.}), but sound odd with
\qit{in~\dots} duration adverbials (?\qit{John ran in two minutes.}).
Accomplishment verbs, in contrast, combine felicitously with
\qit{in~\dots} adverbials (\qit{John built a house in two weeks.}),
but sound odd with \qit{for~\dots} adverbials (?\qit{John built a
house for two weeks.}). Finally, according to Vendler state verbs
combine felicitously with \qit{for~\dots} adverbials (e.g.\ \qit{John
knew the answer for ten minutes (but then forgot it).}), while
achievement verbs sound odd with \qit{for~\dots} adverbials
(?\qit{Mary found the treasure for two hours.}).
The exact nature of the objects classified by Vendler is unclear. In
most cases, Vendler's wording suggests that his taxonomy classifies
verbs. However, some of his examples (e.g.\ the fact that \qit{to run}
with no object is said to be an activity, while \qit{to run a mile} is
said to be an accomplishment) suggest that the natural language
expressions being classified are not always verbs, but sometimes
larger syntactic constituents (perhaps verb phrases). In other cases,
Vendler's arguments suggest that the objects being classified are not
natural language expressions (e.g.\ verbs, verb phrases), but world
situations denoted by natural language expressions. According to
Vendler, \qit{Are you smoking?} ``asks about an activity'', while
\qit{Do you smoke?} ``asks about a state''. In this case, the terms
``activity'' and ``state'' seem to refer to types of situations in the
world, rather than types of natural language expressions. (The first
question probably asks if somebody is actually smoking at the present
moment. The second one has a \emph{habitual} meaning: it asks if
somebody has the habit of smoking. Vendler concludes that habits
``are also states in our sense''.)
Numerous variants of Vendler's taxonomy have been proposed. These
differ in the number of aspectual classes they assume, the names of
the classes, the nature of the objects being classified, and the
properties assigned to each class. Vlach \cite{Vlach1993}
distinguishes four aspectual classes of sentences, and assumes that
there is a parallel fourfold taxonomy of world situations. Moens
\cite{Moens} distinguishes between ``states'', ``processes'',
``culminated processes'', ``culminations'', and ``points'', commenting
that his taxonomy does not classify real world situations, but ways
people use to describe world situations. Parsons \cite{Parsons1989}
distinguishes three kinds of ``eventualities'' (``states'',
``activities'', and ``events''), treating eventualities as entities in
the world. Lascarides \cite{Lascarides} classifies propositions
(functions from time-periods to truth values), distinguishing between
``state'', ``process'', and ``event'' propositions.
\section{The aspectual taxonomy of this thesis} \label{aspectual_classes}
Four aspectual classes are employed in this thesis: \emph{states},
\emph{activities}, \emph{culminating activities}, and \emph{points}.
(Culminating activities and points correspond to Vendler's
``accomplishments'' and ``achievements'' respectively. Similar terms
are used in \cite{Moens} and \cite{Blackburn1994}.) These aspectual
classes correspond to ways of \emph{viewing world situations} that
people seem to use: a situation can be viewed as involving no change
or action (state view), as an instantaneous change or action (point
view), as a change or action with no climax (activity view), or as a
change or action with a climax (culminating activity view).
(Throughout this thesis, I use ``situation'' to refer collectively to
elements of the world that other authors call ``events'',
``processes'', ``states'', etc.) Determining which view the speaker
has in mind is important to understand what the speaker means. For
example, \qit{Which tanks contained oil?} is typically uttered with a
state view. When an \qit{at \dots} temporal adverbial (e.g.\ \qit{at
5:00pm}) is attached to a clause uttered with a state view, the
speaker typically means that the situation of the clause simply holds
at the time of the adverbial. There is normally no implication that
the situation starts or stops holding at the time of the adverbial.
For example, in \qit{Which tanks contained oil at 5:00pm?} there is
normally no implication that the tanks must have started or stopped
containing oil at 5:00pm. In contrast, \qit{Who ran to the station?}
is typically uttered with a culminating activity view. In this case,
an \qit{at \dots} adverbial usually specifies the time when the
situation starts or is completed. \qit{Who ran to the station at
5:00pm?}, for example, probably asks for somebody who started
running to the station or reached it at 5:00pm.
Some linguistic markers seem to signal which view the speaker has in
mind. For example, the progressive usually signals a state view (e.g.\
unlike \qit{Who ran to the station at 5:00pm?}, \qit{Who was running
to the station at 5:00pm?} is typically uttered with a state view; in
this case, the running is simply ongoing at 5:00pm, it does not start
or finish at 5:00pm). Often, however, there are no such explicit
markers. The processes employed in those cases by hearers to determine
the speaker's view are not yet fully understood. In an \textsc{Nlitdb}\xspace,
however, where questions refer to a restricted domain,
reasonable guesses can be made by observing that in each domain, each
verb tends to be associated mainly with one particular view. Certain
agreements about how situations are to be viewed (e.g.\ that some
situations are to be treated as instantaneous -- point view) will also
have been made during the design of the database. These agreements
provide additional information about how the situations of the various
verbs are viewed in each domain.
More precisely, the following approach is adopted in this thesis.
Whenever the \textsc{Nlitdb}\xspace is configured for a new application domain, the
base form of each verb is assigned to one of the four aspectual
classes, using criteria to be discussed in section
\ref{aspect_criteria}. These criteria are intended to detect the view
that is mainly associated with each verb in the particular domain
that is being examined. Following \cite{Dowty1986}, \cite{Moens},
\cite{Vlach1993}, and others, aspectual class is treated as a property
of not only verbs, but also verb phrases, clauses, and
sentences. Normally, all verb forms will inherit the aspectual classes
of the corresponding base forms. Verb phrases, clauses, or sentences
will normally inherit the aspectual classes of their main verb
forms. Some linguistic mechanisms (e.g.\ the progressive or some
temporal adverbials), however, may cause the aspectual class of a verb
form to differ from that of the base form, or the aspectual class of a
verb phrase, clause, or sentence to differ from that of its main verb
form. The aspectual class of each verb phrase, clause, or sentence is
intended to reflect the view that users typically have in mind when
using that expression in the particular domain.
In the case of a verb like \qit{to run}, that typically involves a
culminating activity view when used with an expression that specifies
a destination or specific distance (e.g.\ \qit{to run to the
station/five miles}), but an activity view when used on its own, it
will be assumed that there are two different homonymous verbs \qit{to
run}. One has a culminating activity base form, and requires a
complement that specifies a destination or specific distance. The
other has an activity base form, and requires no such complement. A
similar distinction would be introduced in the case of verbs whose
aspectual class depends on whether the verb's object denotes a
countable or a mass entity (e.g.\ \qit{to drink a bottle of wine} vs.\
\qit{to drink wine}; see \cite{Mourelatos1978}).
Similarly, when a verb can be used in a domain with both habitual and
non-habitual meanings (e.g. \qit{BA737 (habitually) departs from
Gatwick.} vs.\ \qit{BA737 (actually) departed from Gatwick five
minutes ago.}), a distinction will be made between a homonym with a
habitual meaning, and a homonym with a non-habitual
meaning.\footnote{When discussing sentences with multiple readings, I often use
parenthesised words (e.g.\ \qit{(habitually)}) to indicate which
reading is being considered.} The base forms of habitual homonyms
are classified as states. (This agrees with Vendler, Vlach
\cite{Vlach1993}, and Moens and Steedman \cite{Moens2}, who all
classify habituals as states.) The aspectual classes of non-habitual
homonyms depend on the verb and the application domain. Approaches
that do not postulate homonyms are also possible (e.g.\ claiming that
\qit{to run} is an activity which is transformed into a culminating
activity by \qit{the station}). The homonyms method, however, leads to
a more straightforward treatment in the \textsc{Hpsg}\xspace grammar of chapter
\ref{English_to_TOP} (where the base form of each homonym is mapped to
a different sign).
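To make the homonyms method more concrete, the following rough sketch
(Python pseudocode; it is purely illustrative, all names are
hypothetical, and it is not the \textsc{Hpsg}\xspace lexicon of chapter
\ref{English_to_TOP}) shows one way of recording domain-dependent
homonym entries, each carrying the aspectual class of its base
form. The \qit{to depart} entries reflect the airport domain of
section \ref{aspect_examples}.
\begin{verbatim}
# Illustrative only: each homonym is a separate lexicon entry with its own
# base-form aspectual class; the subcategorisation label selects the homonym.
LEXICON = {
    ("run", "destination/distance complement"): "culminating activity",
    ("run", "no complement"):                   "activity",
    ("depart", "habitual"):                     "state",
    ("depart", "non-habitual"):                 "point",  # domain-dependent
}

def base_form_class(verb, subcat):
    """Return the aspectual class of the homonym selected by subcat."""
    return LEXICON[(verb, subcat)]

print(base_form_class("run", "no complement"))   # activity
\end{verbatim}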
In the rest of this thesis, I refer to verbs whose base forms are
classified as states, activities, culminating activities, or points as
\emph{state verbs}, \emph{activity verbs}, \emph{culminating activity
verbs}, and \emph{point verbs}.
\section{Criteria for classifying base verb forms} \label{aspect_criteria}
This section discusses the criteria that determine the
aspectual class of a verb's base form in a particular \textsc{Nlitdb}\xspace
domain. Three criteria are employed, and they are
applied in the order of figure \ref{decision_tree}.
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.5]{decision_tree}
\caption{Determining the aspectual class of a verb's base form}
\label{decision_tree}
\end{center}
\hrule
\end{figure}
\subsection{The simple present criterion} \label{simple_present_criterion}
The first criterion distinguishes state verbs (verbs whose base forms
are states) from point, activity, and culminating activity verbs. If
the simple present of a verb can be used (in the particular domain) in
single-clause questions with non-futurate meanings, the verb is a
state one; otherwise it is a point, activity, or culminating activity
verb. For example, in domains where \pref{crit:1} and \pref{crit:2}
are possible, \qit{to contain} and \qit{to own} are state verbs.
\begin{examps}
\item Does any tank contain oil? \label{crit:1}
\item Which employees own a car? \label{crit:2}
\end{examps}
Some clarifications are needed. First, the simple present sometimes
refers to something that is scheduled to happen. For example,
\pref{crit:2.7} could refer to a scheduled assembling (in that case,
\pref{crit:2.7} is very similar to \pref{crit:2.7.2}). I consider this
meaning of \pref{crit:2.7} futurate. Hence, this use of
\pref{crit:2.7} does not constitute evidence that \qit{to assemble}
is a state verb.
\begin{examps}
\item When does J.Adams assemble engine 5? \label{crit:2.7}
\item When will J.Adams assemble engine 5? \label{crit:2.7.2}
\end{examps}
Second, in reporting contexts the simple present of verbs whose base
forms I would not want to classify as states can be used
with a non-futurate meaning. For example, in a context where the speaker
reports events as they happen, \pref{crit:2.8} is possible. (This use
of the simple present is unlikely in \textsc{Nlitdb}\xspace questions.)
\begin{examps}
\item J.Adams arrives. He moves the container. He fixes the engine.
\label{crit:2.8}
\end{examps}
The simple present criterion examines questions directed to an \textsc{Nlitdb}\xspace,
not sentences from other contexts. Hence, \pref{crit:2.8} does not
constitute evidence that \qit{to arrive}, \qit{to move}, and \qit{to
fix} are state verbs.
The reader is reminded that when verbs have both habitual and
non-habitual meanings, I distinguish between habitual and non-habitual
homonyms (section \ref{aspectual_classes}). Ignoring
scheduled-to-happen meanings (that do not count for the simple present
criterion), \pref{crit:3} and \pref{crit:4} can only have
habitual meanings.
\begin{examps}
\item Which flight lands on runway 2? \label{crit:3}
\item Does any doctor smoke? \label{crit:4}
\end{examps}
\pref{crit:3} asks for a flight that
habitually lands on runway 2, and \pref{crit:4} for doctors that are
smokers. That is, \pref{crit:3} and \pref{crit:4} can only be
understood as involving the habitual homonyms of \qit{to land} and
\qit{to smoke}. (In contrast, \pref{crit:5} and \pref{crit:6} can be
understood with non-habitual meanings, i.e.\ as involving the
non-habitual homonyms.)
\begin{examps}
\item Which flight is landing on runway 2? \label{crit:5}
\item Is any doctor smoking? \label{crit:6}
\end{examps}
Therefore, in domains where \pref{crit:3} and \pref{crit:4} are
possible, the habitual \qit{to land} and \qit{to smoke} are state
verbs. \pref{crit:3} and \pref{crit:4} do not constitute evidence that
the non-habitual \qit{to land} and \qit{to smoke} are state verbs.
\subsection{The point criterion} \label{point_criterion}
The second criterion, the \emph{point criterion}, distinguishes point
verbs from activity and culminating activity ones (state verbs will
have already been separated by the simple present criterion; see
figure \ref{decision_tree}). The point criterion is based on the fact
that some verbs will be used to describe kinds of world situations
that are modelled in the database as being always instantaneous. If a
verb describes situations of this kind, its base form should be
classified as point; otherwise, it should be classified as activity or
culminating activity.
In section \ref{aspect_examples}, for example, I consider a
hypothetical airport database. That database does not distinguish
between the times at which a flight starts or stops entering an
airspace sector. Entering a sector is modelled as instantaneous.
Also, in the airport domain \qit{to enter} is only
used to refer to flights entering sectors. Consequently, in that
domain \qit{to enter} is a point verb. If \qit{to enter} were also
used to refer to, for example, groups of passengers entering planes,
and if situations of this kind were modelled in the database as
non-instantaneous, one would have to distinguish between two
homonyms \qit{to enter}, one used with flights entering sectors, and
one with passengers entering planes. The first would be a point verb;
the second would not.
The person applying the criterion will often have to decide exactly
what is or is not part of the situations described by the verbs. The
database may store, for example, the time-points at which a flight
starts to board, finishes boarding, starts to taxi to a runway,
arrives at the runway, and leaves the ground. Before classifying the
non-habitual \qit{to depart}, one has to decide exactly what is or is
not part of departing. Is boarding part of departing,
i.e.\ is a flight departing when it is boarding? Is taxiing to a
runway part of departing? Or does departing include only the time at
which the flight actually leaves the ground? If a flight starts to
depart when it starts to board, and finishes departing when it leaves
the ground, then the base form of \qit{to depart} should not be
classified as point, because the database does not treat departures as
instantaneous (it distinguishes between the beginning of the boarding
and the time when the flight leaves the ground). If, however,
departing starts when the front wheels of the aircraft leave the
ground and finishes when the rear wheels leave the ground, the base
form of \qit{to depart}
\emph{should} be classified as point, because the database does not
distinguish the two times. In any case, the user should be aware of
what \qit{to depart} is taken to mean.
The point criterion is similar to claims in
\cite{Vendler}, \cite{Singh}, \cite{Vlach1993},
and elsewhere that achievement (point) verbs denote instantaneous
situations.
\subsection{The imperfective paradox criterion} \label{ip_criterion}
The third criterion distinguishes activity from
culminating activity verbs (state and point verbs will have already
been separated by the point and simple present criteria). The
criterion is based on the imperfective paradox (section
\ref{tense_aspect_intro}). Assertions containing the past
continuous and simple past of the verbs, like \pref{crit:20} --
\pref{crit:23}, are considered.
\begin{examps}
\item John was running. \label{crit:20}
\item John ran. \label{crit:21}
\item John was building a house. \label{crit:22}
\item John built a house. \label{crit:23}
\end{examps}
The reader is reminded that
assertions are treated as yes/no questions (section \ref{no_issues}).
If an affirmative answer to the past continuous assertion implies an
affirmative answer to the simple past assertion (as in
\pref{crit:20} -- \pref{crit:21}), the verb
is an activity one; otherwise (e.g.\ \pref{crit:22} --
\pref{crit:23}), it is a culminating activity one.
As will be discussed in section \ref{progressives}, the past
continuous sometimes has a futurate meaning. Under this reading,
\pref{crit:20} means \qit{John was going to run.}, and an affirmative
answer to \pref{crit:20} does not necessarily imply an affirmative
answer to \pref{crit:21}. When applying the imperfective paradox
criterion, the past continuous must not have its futurate meaning.
In various forms, the imperfective paradox criterion has been
used in \cite{Vendler}, \cite{Vlach1993},
\cite{Kent}, and elsewhere.
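The order in which the three criteria are applied (figure
\ref{decision_tree}) can be summarised by the following rough
sketch. It is Python pseudocode and purely illustrative: the three
boolean arguments stand for judgements made by the person configuring
the \textsc{Nlitdb}\xspace for the particular domain, not for anything the
system computes.
\begin{verbatim}
def classify_base_form(nonfuturate_simple_present_possible,
                       modelled_as_instantaneous,
                       past_continuous_implies_simple_past):
    """Apply the three criteria in the order of the decision tree."""
    # Simple present criterion: can the simple present be used in
    # single-clause questions with non-futurate meanings in this domain?
    if nonfuturate_simple_present_possible:
        return "state"
    # Point criterion: are the described situations modelled in the
    # database as always instantaneous?
    if modelled_as_instantaneous:
        return "point"
    # Imperfective paradox criterion: does an affirmative answer to the
    # past continuous assertion imply an affirmative answer to the
    # simple past one?
    if past_continuous_implies_simple_past:
        return "activity"
    return "culminating activity"

# E.g. the airport-domain "to inspect" discussed later in this chapter:
print(classify_base_form(False, False, False))   # culminating activity
\end{verbatim}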
\subsection{Other criteria} \label{other_aspect_criteria}
The three criteria above are not the only ones that could be used. The
behaviour of verbs when appearing in various forms or when combining
with some temporal adverbials varies depending on their aspectual
classes. Alternative criteria can be formulated by observing this
behaviour. For example, some authors classify verbs (or situations
denoted by verbs) by observing how easily they appear in progressive
forms (to be discussed in section \ref{progressives}), how easily they
combine with \qit{for~\dots} and \qit{in~\dots} duration adverbials
(sections \ref{for_adverbials} and \ref{in_adverbials} below), or what
the verbs entail about the start or the end of the described situation
when they combine with \qit{at \dots} temporal adverbials (section
\ref{point_adverbials} below). In some cases, the person classifying
the base verb forms may be confronted with a verb for which the three
criteria of sections \ref{simple_present_criterion} --
\ref{ip_criterion} do not yield a clear verdict. In such cases,
additional evidence for or against classifying a base verb form into a
particular class can be found by referring to the following sections,
where the typical behaviour of each class is examined.
\subsection{Classifying base verb forms in the airport domain}
\label{aspect_examples}
To illustrate the use of the criteria of sections
\ref{simple_present_criterion} -- \ref{ip_criterion}, I
now consider a hypothetical \textsc{Nlitdb}\xspace to a temporal database that
contains information about the air-traffic of an airport. (I borrow
some terminology from \cite{Sripada1994}. The airport domain will be
used in examples throughout this thesis.) The airport database shows
the times when flights arrived at, or departed from, the airport, the
times flights spent circling around the airport while waiting for
permission to land, the runways they landed on or took off from, the
gates where the flights boarded, etc. The database is
occasionally queried using the \textsc{Nlitdb}\xspace to determine the causes of
accidents, and to collect data that are used to
optimise the airport's traffic-handling strategies.
The airport's airspace is divided into sectors. Flights
approaching or leaving the airport cross the boundaries of
sectors, each time \emph{leaving} a sector and \emph{entering}
another one. The airport is very busy, and some of its runways may
also \emph{be closed} for maintenance. Hence, approaching flights are
often instructed to \emph{circle} around the airport until a runway
\emph{becomes} free. When a runway is freed, flights
\emph{start} to \emph{land}. Landing
involves following a specific procedure. In some cases,
the pilot may abort the landing procedure before completing
it. Otherwise, the flight lands on a runway, and it then
\emph{taxies} to a gate that \emph{is free}. The moment at which the
flight \emph{reaches} the gate is considered the time at which the
flight \emph{arrived} (reaching a location and arriving are modelled
as instantaneous). Normally (habitually) each flight arrives at the
same gate and time every day. Due to traffic congestion, however, a
flight may sometimes arrive at a gate or time other than its normal
ones.
Before \emph{taking off}, each flight is \emph{serviced} by a service
company. This involves carrying out a specific set of tasks. Unless
all tasks have been carried out, the service is incomplete. Each
service company normally (habitually) services particular
flights. Sometimes, however, a company may be asked to service a
flight that it does not normally service. After being serviced, a
flight may be \emph{inspected}. Apart from flights, inspectors also
inspect gates and runways. In all cases, there are particular tasks to
be carried out for the inspections to be considered complete.
Shortly before taking off, flights start to \emph{board}. Unless all
the passengers that have checked in enter the aircraft, the boarding
is not complete, and the flight cannot depart. (There are special
arrangements for cases where passengers are too late.) The flight then
\emph{leaves} the gate, and that moment is considered the time at
which the flight \emph{departed} (leaving a location and departing are
modelled as instantaneous). Normally (habitually) each flight departs
from the same gate at the same time every day. Sometimes, however, flights
depart from gates, or at times, other than their normal ones. After
leaving its gate, a flight may be told to \emph{queue} for a
particular runway, until that runway becomes free. When the runway is free,
the flight starts to \emph{take off}, which involves following a
specific procedure. As with landings, the pilot may
abort the taking off procedure before completing it.
The database also records the states of parts of the airport's
emergency system. There are, for example, emergency tanks, used by the
fire-brigade. Some of those may
\emph{contain} water, others may contain foam, and others may
\emph{be empty} for maintenance.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|l|l|l|}
\hline
state verbs & activity verbs & culm.\ activity verbs & point verbs \\
\hline
service (habitually) & circle & land & cross \\
arrive (habitually) & taxi (no destination) & take off & enter \\
depart (habitually) & queue & service (actually) & become \\
contain & & inspect & start/begin \\
be (non-auxiliary) & & board & stop/finish \\
& & taxi (to destination) & reach \\
&&& leave \\
&&& arrive (actually) \\
&&& depart (actually) \\
\hline
\end{tabular}
}
\end{center}
\caption{Verbs of the airport domain}
\label{airport_verbs}
\end{table}
Table \ref{airport_verbs} shows some of the verbs that are used in the airport
domain. \qit{To depart}, \qit{to arrive}, and \qit{to service} are
used with both habitual and non-habitual meanings.
\pref{criteg:1.3} and \pref{criteg:1.4}, for example, can have
habitual meanings. In \pref{criteg:1.3.2} and
\pref{criteg:1.4.1}, the verbs are probably used with their
non-habitual meanings. I distinguish between
habitual and non-habitual homonyms of \qit{to depart}, \qit{to
arrive}, and \qit{to service} (section \ref{aspectual_classes}).
\begin{examps}
\item Which flights depart/arrive at 8:00am? \label{criteg:1.3}
\item Which flight departed/arrived at 8:00am yesterday? \label{criteg:1.3.2}
\item Which company services BA737? \label{criteg:1.4}
\item Which company serviced BA737 yesterday? \label{criteg:1.4.1}
\end{examps}
I also distinguish between two homonyms of \qit{to taxi}, one that
requires a destination-denoting complement (as in \qit{BA737 was
taxiing to gate 2.}), and one that requires no such complement (as in
\qit{BA737 was taxiing.}).
The simple present criterion and sentences like \pref{criteg:1.1},
\pref{criteg:1.2}, \pref{criteg:1.3}, and
\pref{criteg:1.4} imply that the non-auxiliary \qit{to be}, \qit{to
contain}, and the habitual \qit{to depart}, \qit{to arrive}, and
\qit{to service} are state verbs.
\begin{examps}
\item Which gates are free? \label{criteg:1.1}
\item Does any tank contain foam? \label{criteg:1.2}
\end{examps}
All other verbs of table \ref{airport_verbs} are not state verbs.
For example, (excluding habitual and futurate meanings)
\pref{criteg:4} -- \pref{criteg:12} sound unlikely or odd in the
airport domain. \pref{criteg:19} -- \pref{criteg:19.1} would be used instead.
\begin{examps}
\item \odd Which flights circle? \label{criteg:4}
\item \odd Which flight taxies to gate 2? \label{criteg:9}
\item \odd Which flight departs? \label{criteg:12}
\item Which flights are circling? \label{criteg:19}
\item Which flight is taxiing to gate 2? \label{criteg:20}
\item Which flight is departing? \label{criteg:19.1}
\end{examps}
The verbs in the rightmost column of table \ref{airport_verbs} are
used in the airport domain to refer to situations which I assume
are modelled as instantaneous in the database. Consequently, by
the point criterion these are all point verbs. In contrast, I assume
that the situations of the verbs in the two middle columns are not
modelled as instantaneous. Therefore, those are activity or
culminating activity verbs.
In the airport domain, a sentence like \pref{criteg:31} means that
BA737 spent some time circling around the airport. It does not imply
that BA737 completed any circle around the airport. Hence, an
affirmative answer to \pref{criteg:30} implies an affirmative answer
to \pref{criteg:31}. By the imperfective paradox criterion, \qit{to
circle} is an activity verb.
\begin{examps}
\item BA737 was circling. \label{criteg:30}
\item BA737 circled. \label{criteg:31}
\end{examps}
Similar assertions and the imperfective paradox criterion imply that
\qit{to taxi} (no destination) and \qit{to queue} are also activity
verbs. In contrast, the verbs in the third column of table
\ref{airport_verbs} are culminating activity verbs. For example, in
the airport domain an affirmative answer to \pref{criteg:43.2} does
not imply an affirmative answer to \pref{criteg:43.3}: J.Adams may
have aborted the inspection before completing all the inspection
tasks, in which case the inspection is incomplete.
\begin{examps}
\item J.Adams was inspecting runway 5. \label{criteg:43.2}
\item J.Adams inspected runway 5. \label{criteg:43.3}
\end{examps}
\section{Verb tenses} \label{verb_tenses}
I now turn to verb tenses. I use ``tense'' with the meaning it has in
traditional English grammar textbooks (e.g.\ \cite{Thomson}). For
example, \qit{John sings.} and \qit{John is singing.} will be said to
be in the simple present and present continuous tenses
respectively. In linguistics, ``tense'' is not always used in this
way. According to \cite{Comrie2}, for example, \qit{John sings.} and
\qit{John is singing.} are in the same tense, but differ aspectually.
Future questions are not examined in this thesis (section
\ref{no_issues}). Hence, future tenses and futurate meanings of other
tenses (e.g.\ the scheduled-to-happen meaning of the simple present;
section \ref{simple_present_criterion}) will be ignored. To
simplify further the linguistic data, the present perfect continuous
and past perfect continuous (e.g.\ \qit{has/had been inspecting})
were also not considered: these tenses combine problems from both
continuous and perfect tenses. This leaves six tenses to be
discussed: simple present, simple past, present continuous, past
continuous, present perfect, and past perfect.
\subsection{Simple present} \label{simple_present}
The framework of this thesis allows the simple present to be used only
with state verbs, to refer to a situation that holds at the present
(e.g.\ \pref{sp:1}, \pref{sp:2}).
\begin{examps}
\item Which runways are closed? \label{sp:1}
\item Does any tank contain water? \label{sp:2}
\end{examps}
Excluding the scheduled-to-happen meaning
(which is ignored in this thesis), \pref{sp:3} can only be understood
as asking for the current normal (habitual) servicer of
BA737. Similarly, \pref{sp:4} can only be
asking for the current normal departure gate of BA737. \pref{sp:3}
would not be used to refer to a company that is actually servicing
BA737 at the present moment (similar comments apply to \pref{sp:4}). That is,
\pref{sp:3} and \pref{sp:4} can only involve the habitual
homonyms of \qit{to service} and \qit{to depart} (which are state
verbs), not the non-habitual ones (which are culminating activity and
point verbs respectively; see table \ref{airport_verbs}). This is
consistent with the assumption that the simple present can only be used with
state verbs.
\begin{examps}
\item Which company services BA737? \label{sp:3}
\item Which flights depart from gate 2? \label{sp:4}
\end{examps}
In the airport domain, \qit{to circle} is an activity verb (there is
no state habitual homonym). Hence, \pref{sp:3.7} is rejected. This is
as it should be, because \pref{sp:3.7} can only be understood with a
habitual meaning, a meaning which is not available in the airport
domain (there are no circling habits).
\begin{examps}
\item Does BA737 circle? \label{sp:3.7}
\end{examps}
The simple present can also be used with non-state verbs to describe
events as they happen (section \ref{simple_present_criterion}), or
with a historic meaning (e.g.\ \qit{In 1673 a fire destroys the
palace.}), but these uses are extremely unlikely in \textsc{Nlitdb}\xspace questions.
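This restriction can be pictured as a simple check on the aspectual
class of the question's main verb form; the sketch below is Python
pseudocode and purely illustrative (the function name is hypothetical),
and it is why questions like \pref{sp:3.7} are rejected in the airport
domain.
\begin{verbatim}
def acceptable_simple_present(aspectual_class):
    """The framework allows the simple present only with state expressions."""
    return aspectual_class == "state"

print(acceptable_simple_present("state"))     # True:  "Does any tank contain water?"
print(acceptable_simple_present("activity"))  # False: "Does BA737 circle?"
\end{verbatim}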
\subsection{Simple past} \label{simple_past}
Unlike the simple present, the simple past can be used with verbs from
all four classes (e.g.\ \pref{spa:2} -- \pref{spa:7}).
\begin{examps}
\item Which tanks contained water on 1/1/95? \label{spa:2}
\item Did BA737 circle on 1/1/95? \label{spa:5}
\item Which flights (actually) departed from gate 2 on 1/1/95? \label{spa:3}
\item Which flights (habitually) departed from gate 2 in 1994? \label{spa:1}
\item Which company (actually) serviced BA737 yesterday? \label{spa:6}
\item Which company (habitually) serviced BA737 last year? \label{spa:7}
\end{examps}
\pref{spa:3} -- \pref{spa:7} show that both the habitual and the
non-habitual homonyms of verbs like \qit{to depart} or \qit{to
service} are generally possible in the simple past.
\pref{spa:8} is ambiguous. It may refer either to flights that
actually departed (perhaps only once) from gate 2 in 1994, or to
flights that normally (habitually) departed from gate 2 in 1994.
\begin{examps}
\item Which flights departed from gate 2 in 1994? \label{spa:8}
\end{examps}
The simple past of culminating activity verbs normally implies that
the situation of the verb reached its climax. For example, in
\pref{spa:13} the service must have been completed, and in
\pref{spa:12} BA737 must have reached gate 2 for the answer to be
affirmative.
\begin{examps}
\item Did Airserve service BA737? \label{spa:13}
\item Did BA737 taxi to gate 2? \label{spa:12}
\item BA737 was taxiing to gate 2 but never reached it. \label{spa:12.5}
\end{examps}
Some native English speakers consider simple negative answers to
\pref{spa:13} and \pref{spa:12} unsatisfactory, if for example BA737
was taxiing to gate 2 but never reached it. Although they agree that
strictly speaking the answer should be negative, they consider
\pref{spa:12.5} a much more appropriate answer. To
generate answers like \pref{spa:12.5}, a mechanism for
\emph{cooperative responses} is needed, an issue not addressed
in this thesis (section \ref{no_issues}).
The simple past (and other tenses) often has an \emph{anaphoric}
nature. For example, \pref{spa:13} probably does not ask if Airserve
serviced BA737 at \emph{any} time in the past. \pref{spa:13} would
typically be used with a particular time in mind (e.g.\ the present day),
to ask if Airserve serviced BA737 during that time. As will be
discussed in section \ref{temporal_anaphora}, a temporal anaphora resolution
mechanism is needed to determine the time the speaker has in mind.
The framework of this thesis currently provides no such mechanism, and
\pref{spa:13} is taken to refer to any past time. (The same approach
is adopted with all other tenses that refer to past situations.)
\subsection{Present continuous and past continuous} \label{progressives}
\paragraph{Futurate meanings:} The present and past continuous can be used
with futurate meanings. In that case,
\pref{prog:13} is similar to \pref{prog:14}.
\begin{examps}
\item Who is/was inspecting BA737? \label{prog:13}
\item Who will/would inspect BA737? \label{prog:14}
\end{examps}
Futurate meanings of tenses are not examined in this thesis. Hence,
this use of the present and past continuous will be ignored.
\paragraph{Activity and culminating activity verbs:}
The present and past continuous can be used with activity and
culminating activity verbs to refer to a situation that is or was in
progress (e.g.\ \pref{prog:1} and \pref{prog:7} from the airport
domain).
\begin{examps}
\item Are/Were any flights circling? \label{prog:1}
\item Is/Was BA737 taxiing to gate 2? \label{prog:7}
\end{examps}
In the case of culminating activity verbs, there is no requirement for
the climax to be reached at any time. The past continuous version of
\pref{prog:7}, for example, does not require BA737 to have reached
gate 2 (cf.\ \pref{spa:12}).
\paragraph{Point verbs:}
The present and past continuous of point verbs
often refers to a preparatory process that is or was ongoing, and that
normally leads to the instantaneous situation of the verb. For
example, in the airport domain where \qit{to
depart} is a point verb and departing includes
only the moment where the flight leaves its gate, one could utter
\pref{prog:19.110} when the checking-in is ongoing or when
the flight is boarding.
\begin{examps}
\item BA737 is departing. \label{prog:19.110}
\end{examps}
The framework of this thesis provides no mechanism for determining
exactly which preparatory process is asserted to be in
progress. (Should the checking-in be in progress for the response to
\pref{prog:19.110} to be affirmative? Should the boarding be ongoing?)
Hence, this use of the present and past continuous of point verbs is
not allowed. The response to \pref{prog:19.110} will be affirmative
only at the time-point where BA737 is leaving its gate (as will be
discussed in section \ref{tsql2_time}, the database may model
time-points as corresponding to minutes or even whole days). To avoid
misunderstandings, the \textsc{Nlitdb}\xspace should warn the user that
\pref{prog:19.110} is taken to refer to the actual (instantaneous)
departure, not to any preparatory process. This is again a case for
cooperative responses, an issue not examined in this thesis.
\paragraph{State verbs:}
It has often been observed (e.g.\ Vendler's tests in
section \ref{asp_taxes}) that state verbs are not normally
used in progressive forms. For example, \pref{prog:28} and
\pref{prog:30} are easily rejected by most native
speakers. (I assume that \qit{to own} and \qit{to consist} would be
classified as state verbs, on the basis that simple present questions
like \pref{prog:29} and \pref{prog:31} are possible.)
\begin{examps}
\item \bad Who is owning five cars? \label{prog:28}
\item Who owns five cars? \label{prog:29}
\item \bad Which engine is consisting of 34 parts? \label{prog:30}
\item Which engine consists of 34 parts? \label{prog:31}
\end{examps}
The claim that state verbs do not appear in the progressive is
challenged by sentences like \pref{prog:35} (from \cite{Smith1986},
cited in \cite{Passonneau}; \cite{Kent} and \cite{Vlach1993} provide
similar examples). \pref{prog:35} shows that the non-auxiliary \qit{to
be}, which is typically classified as a state verb, can be used in the
progressive.
\begin{examps}
\item My daughter is being very naughty. \label{prog:35}
\end{examps}
Also, some native English speakers find \pref{prog:32} and
\pref{prog:36} acceptable (though they would use
the non-progressive forms instead). (I assume that \qit{to border}
would be classified as a state verb, on the basis that \qit{Which
countries border France?} is possible.)
\begin{examps}
\item \odd Tank 4 was containing water when the bomb exploded. \label{prog:32}
\item \odd Which countries were bordering France in 1937? \label{prog:36}
\end{examps}
Not allowing progressive forms of state verbs also seems problematic
in questions like \pref{prog:40}. \pref{prog:40} has a reading which
is very similar to the habitual reading of \pref{prog:41} (habitually
serviced BA737 in 1994).
\begin{examps}
\item Which company was servicing BA737 in 1994? \label{prog:40}
\item Which company serviced (habitually) BA737 in 1994? \label{prog:41}
\end{examps}
The reader is reminded that in the airport domain I distinguish
between a habitual and a non-habitual homonym of \qit{to service}. The
habitual homonym is a state verb, while the non-habitual one is a
culminating activity verb. If progressive forms of state verbs are not
allowed, then only the non-habitual homonym (actually servicing) is
possible in \pref{prog:40}. This does not account for the apparently
habitual meaning of \pref{prog:40}.
One could argue that the reading of \pref{prog:40} is not really habitual
but \emph{iterative} (servicing many times, as opposed to having a
servicing habit). As pointed out in
\cite{Comrie} (p.~27), the mere repetition of a situation does not
suffice for the situation to be considered a habit. \pref{prog:44},
for example, can be used when John is banging his hand on the table
repeatedly. In this case, it seems odd to claim that
\pref{prog:44} asserts that John has the habit of banging his hand on
the table, i.e.\ \pref{prog:44} does not seem to be equivalent to the
habitual \pref{prog:45}.
\begin{examps}
\item John is banging his hand on the table. \label{prog:44}
\item John (habitually) bangs his hand on the table. \label{prog:45}
\end{examps}
In sentences like \pref{prog:40} -- \pref{prog:41}, however, the
difference between habitual and iterative meaning is hard to
define. For simplicity, I do not distinguish between habitual and
iterative readings, and I allow state verbs to be used in progressive
forms (with the same meanings as the non-progressive forms). This
causes \pref{prog:40} to receive two readings: one involving the
habitual \qit{to service} (servicing habitually in 1994), and one
involving the non-habitual \qit{to service} (actually servicing at
some time in 1994; this reading is more likely without the \qit{in
1994}). \pref{prog:28} and \pref{prog:30} are treated as equivalent to
\pref{prog:29} and \pref{prog:31}.
As will be discussed in section \ref{temporal_adverbials}, I assume
that progressive tenses cause an aspectual shift from activities and
culminating activities to states. In the airport domain, for example,
although the base form of \qit{to inspect} is a culminating activity,
\qit{was inspecting} is a state.
\subsection{Present perfect} \label{present_perfect}
Like the simple past, the present perfect can be
used with verbs of all four classes to refer to past situations (e.g.\
\pref{prep:5} -- \pref{prep:10}). With culminating activity verbs, the
situation must have reached its climax (e.g.\ in \pref{prep:10}
the service must have been completed).
\begin{examps}
\item Has BA737 (ever) been at gate 2? \label{prep:5}
\item Which flights have circled today? \label{prep:8}
\item Has BA737 reached gate 2? \label{prep:7}
\item Which company has (habitually) serviced BA737 this year? \label{prep:9}
\item Has Airserve (actually) serviced BA737? \label{prep:10}
\end{examps}
It has often been claimed (e.g.\ \cite{Moens}, \cite{Vlach1993},
\cite{Blackburn1994}) that the English present perfect asserts that
some consequence of the past situation holds at the present. For
example, \pref{prep:11} seems to imply that there is a consequence of
the fact that engine 5 caught fire that still holds (e.g.\ engine 5 is
still on fire, or it was damaged by the fire and has not been
repaired). In contrast, \pref{prep:12} does not seem to imply (at least not
as strongly) that some consequence still holds.
\begin{examps}
\item Engine 5 has caught fire. \label{prep:11}
\item Engine 5 caught fire. \label{prep:12}
\end{examps}
Although these claims are intuitively appealing, it is
difficult to see how they could be used in an \textsc{Nlitdb}\xspace. Perhaps in
\pref{prep:15} the \textsc{Nlitdb}\xspace should check not only that
the landing was completed, but also that some consequence
of the landing still holds.
\begin{examps}
\item Has BA737 landed? \label{prep:15}
\end{examps}
It is unclear, however, what this consequence should be. Should the
\textsc{Nlitdb}\xspace check that the plane is still at the airport? And should the
answer be negative if the plane has departed since the landing?
Should the \textsc{Nlitdb}\xspace check that the passengers of BA737 are still at the
airport? And should the answer be negative if the passengers have left
the airport? Given this uncertainty, the framework of this thesis does
not require the past situation to have present consequences.
When the present perfect combines with \qit{for~\dots} duration
adverbials (to be discussed in section \ref{for_adverbials}), there is
often an implication that the past situation still holds at the
present (this seems related to claims that the past situation must
have present consequences). For example, there is a reading of
\pref{prep:16.10} where J.Adams is still a manager. (\pref{prep:16.10}
can also mean that J.Adams was simply a manager for two years, without
the two years ending at the present moment.) In contrast,
\pref{prep:16.12} carries no implication that J.Adams is still a
manager (in fact, it seems to imply that he is no longer a
manager).
\begin{examps}
\item J.Adams has been a manager for two years. \label{prep:16.10}
\item J.Adams was a manager for two years. \label{prep:16.12}
\end{examps}
Representing in \textsc{Top}\xspace the still-holding reading of
sentences like \pref{prep:16.10} has proven difficult. Hence, I
ignore the possible implication that the past situation
still holds, and I treat \pref{prep:16.10} as equivalent to
\pref{prep:16.12}.
The present perfect does not combine
felicitously with some temporal adverbials. For example,
\pref{prep:16} and \pref{prep:19} sound at least odd
to most native English speakers (they would use \pref{prep:16.1} and
\pref{prep:19.1} instead). In contrast, \pref{prep:17} and
\pref{prep:20} are acceptable.
\begin{examps}
\item \odd Which flights have landed yesterday? \label{prep:16}
\item Which flights landed yesterday? \label{prep:16.1}
\item Which flights have landed today? \label{prep:17}
\item \odd Which flights has J.Adams inspected last week? \label{prep:19}
\item Which flights did J.Adams inspect last week? \label{prep:19.1}
\item Which flights has J.Adams inspected this week? \label{prep:20}
\end{examps}
\pref{prep:16} -- \pref{prep:20} suggest that the present perfect can
only be used if the time of the adverbial contains not only
the time where the past situation occurred, but also the speech
time, the time when the sentence was uttered. (A similar
explanation is given on p.~167 of \cite{Thomson}.) \pref{prep:17}
is felicitous, because \qit{today} contains the speech time.
In contrast, \pref{prep:16} is unacceptable, because \qit{yesterday}
cannot contain the speech time.
The hypothesis, however, that the time of the adverbial must include
the speech time does not account for the fact that \pref{prep:22} is
acceptable to most native English speakers (especially
if the \qit{ever} is added), even if the question is not uttered on a
Sunday.
\begin{examps}
\item Has J.Adams (ever) inspected BA737 on a Sunday? \label{prep:22}
\end{examps}
As pointed out in \cite{Moens2}, a superstitious person could also
utter \pref{prep:23} on a day other than Friday the 13th.
\begin{examps}
\item They have married on Friday 13th! \label{prep:23}
\end{examps}
One could attempt to formulate more elaborate restrictions, to predict
exactly when temporal adverbials can be used with the present
perfect. In the case of an \textsc{Nlitdb}\xspace, however, it is difficult
to see why this would be worth the effort, as opposed to simply
accepting questions like \pref{prep:16} as equivalent to
\pref{prep:16.1}. I adopt the latter approach.
Given that the framework of this thesis does not associate present
consequences with the present perfect, that the
still-holding reading of sentences like \pref{prep:16.10} is not
supported, and that questions like \pref{prep:16} are allowed, there
is not much left to distinguish the present perfect from the simple
past. Hence, I treat the present perfect as equivalent to the simple past.
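In processing terms, this decision can be pictured as a trivial
normalisation of the tense before interpretation; the sketch below is
Python pseudocode and purely illustrative (the function name is
hypothetical).
\begin{verbatim}
def normalise_tense(tense):
    """Treat the present perfect exactly like the simple past."""
    return "simple past" if tense == "present perfect" else tense

print(normalise_tense("present perfect"))   # simple past
print(normalise_tense("past perfect"))      # past perfect (handled separately)
\end{verbatim}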
\subsection{Past perfect} \label{past_perfect}
The past perfect is often used to refer to a situation that occurred at
some past time before some other past time. Following Reichenbach
\cite{Reichenbach} and many others, let us call the latter time the
\emph{reference time}. \pref{pap:1} and \pref{pap:4} have readings
where \qit{at 5:00pm} specifies the reference time. In that case,
\pref{pap:1} asks for flights that Airserve serviced before 5:00pm,
and \pref{pap:4} asks if BA737 was at gate 2 some time before
5:00pm. (When expressing these meanings, \qit{by 5:00pm} is probably
preferable. I claim, however, that \qit{at 5:00pm} can also be used in this
way. \qit{By~\dots} adverbials are not examined in this thesis.)
\begin{examps}
\item Which flights had Airserve serviced at 5:00pm? \label{pap:1}
\item Had BA737 been at gate 2 at 5:00pm? \label{pap:4}
\end{examps}
With culminating activity verbs, the climax must have been reached
before (or possibly at) the reference time. For example, in
\pref{pap:1} the services must have been completed up to 5:00pm.
Perhaps some consequence of the past situation must still hold at the
reference time (see similar comments about the present perfect in
section \ref{present_perfect}). As with the present perfect, however, I
ignore such consequential links.
When the past perfect combines with temporal adverbials, it is often
unclear if the adverbial is intended to specify the reference time or
directly the time of the past situation. For example, \pref{pap:6}
could mean that BA737 had already reached gate 2 at 5:00pm, or that it
reached it at 5:00pm. In the latter case, \pref{pap:6} is similar to
the simple past \pref{pap:6.1}, except that \pref{pap:6} creates the
impression of a longer distance between the time of the reaching and
the speech time.
\begin{examps}
\item BA737 had reached gate 2 at 5:00pm. \label{pap:6}
\item BA737 reached gate 2 at 5:00pm. \label{pap:6.1}
\end{examps}
When the past perfect combines with \qit{for~\dots} duration
adverbials and a reference time is specified, there is often an
implication that the past situation still held at the reference
time. (A similar implication arises in the case of the present
perfect; see section \ref{present_perfect}.) For example, \pref{pap:8}
seems to imply that J.Adams was still a manager on 1/1/94. As in the
case of the present perfect, I ignore this implication, for reasons
related to the difficulty of representing it in \textsc{Top}\xspace.
\begin{examps}
\item J.Adams had been a manager for two years on 1/1/94. \label{pap:8}
\end{examps}
As will be discussed in section \ref{temporal_adverbials}, I assume
that the past perfect triggers an aspectual shift to state (e.g.\ the
base form of \qit{to inspect} is a culminating activity, but \qit{had
inspected} is a state). This shift seems to be a property of all
perfect tenses. However, for reasons related to the fact that I treat
the present perfect as equivalent to the simple past (section
\ref{present_perfect}), I do not postulate any shift in the case of
the present perfect.
\section{Special temporal verbs} \label{special_verbs}
Through their tenses, all verbs can convey temporal
information. Some verbs, however, like \qit{to begin} or
\qit{to precede}, are of a more
inherently temporal nature. These verbs
differ from ordinary ones (e.g.\ \qit{to build}, \qit{to contain}) in
that they do not directly describe situations, but rather
refer to situations introduced by other
verbs or nouns. (A similar observation is made in \cite{Passonneau}.) In \pref{spv:1}, for example, \qit{to begin} refers to the
situation of \qit{to build}. \qit{To start}, \qit{to end}, \qit{to
finish}, \qit{to follow}, \qit{to continue}, and \qit{to happen} all
belong to this category of special temporal verbs.
\begin{examps}
\item They began to build terminal 9 in 1985. \label{spv:1}
\end{examps}
Of all the special temporal verbs, I have considered only \qit{to
start}, \qit{to begin}, \qit{to stop}, and \qit{to finish}. I
allow \qit{to start}, \qit{to begin}, \qit{to stop}, and \qit{to
finish} to be used with state and activity verbs, even though with
state verbs \qit{to begin} and \qit{to finish} usually sound unnatural
(e.g.\ \pref{spv:11}), and with activity verbs (e.g.\ \pref{spv:13})
it could be argued that the use of \qit{to begin} or \qit{to finish}
signals that the speaker has in mind a culminating activity (not
activity) view of the situation.
\begin{examps}
\item \odd Which tank began to contain/finished containing water on
27/7/95? \label{spv:11}
\item Which flight began/finished circling at 5:00pm? \label{spv:13}
\end{examps}
When combining with culminating activity verbs, \qit{to start} and
\qit{to begin} have the same meanings. \qit{To stop} and \qit{to
finish}, however, differ: \qit{to finish} requires the climax to be
reached, while \qit{to stop} requires the action or change to simply stop
(possibly without being completed). For example, in \pref{spv:20} the
service must have simply stopped, while in \pref{spv:21} it must have
been completed.
\begin{examps}
\item Which company stopped servicing (actually) BA737 at 5:00pm?
\label{spv:20}
\item Which company finished servicing (actually) BA737 at 5:00pm?
\label{spv:21}
\end{examps}
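The different requirements that \qit{to stop} and \qit{to finish}
place on a culminating activity complement can be pictured as follows.
The sketch is Python pseudocode and purely illustrative; the boolean
arguments are hypothetical stand-ins for information recorded in the
database.
\begin{verbatim}
def stop_or_finish_true(temporal_verb, situation_stopped, climax_reached):
    """Truth conditions with a culminating activity complement (sketch)."""
    if temporal_verb == "stop":
        return situation_stopped            # completion not required
    if temporal_verb == "finish":
        return situation_stopped and climax_reached
    raise ValueError("only 'stop' and 'finish' are distinguished here")

# A service that was aborted before all its tasks were carried out:
print(stop_or_finish_true("stop", True, False))    # True
print(stop_or_finish_true("finish", True, False))  # False
\end{verbatim}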
With point verbs (like \qit{to enter} and \qit{to leave} in the airport
domain), the use of \qit{to start}, \qit{to begin}, \qit{to stop}, and
\qit{to finish} (e.g.\ \pref{spv:22}, \pref{spv:23}) typically signals
that the person submitting the question is unaware that the situation
of the point verb is taken to be instantaneous. In these cases, I
ignore the temporal verbs (e.g.\ \pref{spv:22} is treated as
\pref{spv:24}). Ideally, the \textsc{Nlitdb}\xspace would also warn the user that the
temporal verb is ignored, and that the situation is modelled as
instantaneous (another case for cooperative responses; see section
\ref{no_issues}). The framework of this thesis, however, provides no
mechanism for generating such warnings.
\begin{examps}
\item Which flight started to enter sector 2 at 5:00pm? \label{spv:22}
\item Which flight finished leaving gate 2 at 5:00pm? \label{spv:23}
\item Which flight entered sector 2 at 5:00pm? \label{spv:24}
\end{examps}
\section{Temporal nouns} \label{temporal_nouns}
Some nouns have a special temporal nature. Nouns like
\qit{development} or \qit{inspection}, for example, are similar to
verbs, in that they introduce world situations that occur in time.
The role of \qit{development} in \pref{tn:0} is very similar to that
of \qit{to develop} in \pref{tn:0.1}.
\begin{examps}
\item When did the development of \textsc{Masque} start?
\label{tn:0}
\item When did they start to develop \textsc{Masque}? \label{tn:0.1}
\end{examps}
Other nouns indicate temporal order (e.g.\ \qit{predecessor},
\qit{successor}), or refer to start or end-points (e.g.\ \qit{beginning},
\qit{end}). Finally, many nouns (and proper names) refer to time
periods, points, or generally entities of the temporal ontology (e.g.\
\qit{minute}, \qit{July}, \qit{event}).
Of all the temporal nouns (and proper names), this
thesis examines only nouns like \qit{year}, \qit{month}, \qit{week},
\qit{day}, \qit{minute}, \qit{second}, \qit{1993}, \qit{July},
\qit{1/1/85}, \qit{Monday}, \qit{3:05pm}. Temporal nouns of a more
abstract nature (e.g.\ \qit{period}, \qit{point}, \qit{interval},
\qit{event}, \qit{time}, \qit{duration}), nouns referring to start or
end-points, nouns introducing situations, and nouns of temporal
precedence are not considered.
\section{Temporal adjectives} \label{temporal_adjectives}
There are also adjectives of a special temporal nature. Some refer to
a temporal order (e.g.\ \qit{current}, \qit{previous},
\qit{earliest}), others refer to durations (e.g.\ \qit{brief},
\qit{longer}), and others specify frequencies (e.g.\ \qit{annual},
\qit{daily}). Adjectives of this kind are not examined in this thesis,
with the exception of \qit{current}, which is supported to illustrate
some points related to temporal anaphora (these points will be discussed
in section \ref{noun_anaphora}). (``Normal'' adjectives, like \qit{open} and
\qit{free}, are also supported.)
\section{Temporal adverbials} \label{temporal_adverbials}
This section discusses adverbials that convey temporal
information.
\subsection{Punctual adverbials} \label{point_adverbials}
Some adverbials are understood as specifying time-points. Following
\cite{Vlach1993}, I call these \emph{punctual adverbials}. In English,
punctual adverbials are usually prepositional phrases introduced by
\qit{at} (e.g.\ \qit{at 5:00pm}, \qit{at the end of the
inspection}). In this thesis, only punctual adverbials consisting of
\qit{at} and clock-time expressions (e.g.\ \qit{at 5:00pm}) are
considered.
\paragraph{With states:}
When combining with state expressions, punctual adverbials specify a
time where the situation of the state expression holds. There is
usually no implication that the situation of the state starts or stops
at the time of the adverbial. \pref{pa:1}, for example, asks if tank 5
was empty at 5:00pm. There is no requirement that the tank must have
started or stopped being empty at 5:00pm. Similar comments apply to
\pref{pa:2}.
\begin{examps}
\item Was tank 5 empty at 5:00pm? \label{pa:1}
\item Which flight was at gate 2 at 5:00pm? \label{pa:2}
\end{examps}
In other words, with states punctual adverbials normally have an
\emph{interjacent} meaning, not an \emph{inchoative} or
\emph{terminal} one. (``Interjacent'', ``inchoative'',
and ``terminal'' are borrowed from \cite{Kent}. Kent explores the
behaviour of \qit{at}, \qit{for}, and \qit{in} adverbials, and arrives
at conclusions similar to the ones presented here.)
In narrative contexts, punctual adverbials combining with
states sometimes have inchoative meanings. For example, the \qit{at 8:10am} in
\pref{pa:4} most probably specifies the time when J.Adams arrived
(started being) in Glasgow. In \textsc{Nlitdb}\xspace questions, however, this
inchoative meaning seems unlikely. For example, it seems unlikely that
\pref{pa:5} would be used to enquire about persons that
\emph{arrived} in Glasgow at 8:10am. Hence, for the purposes of this
thesis, it seems reasonable to assume that punctual adverbials
combining with states always have interjacent meanings.
\begin{examps}
\item J.Adams left Edinburgh early in the morning, and at 8:10am he
was in Glasgow. \label{pa:4}
\item Who was in Glasgow at 8:10am? \label{pa:5}
\end{examps}
\paragraph{With points:}
With point expressions, punctual adverbials specify the time where
the instantaneous situation of the point expression takes
place (e.g.\ \pref{pa:8}, \pref{pa:9}; \qit{to enter} and \qit{to
reach} are point verbs in the airport domain).
\begin{examps}
\item Which flight entered sector 2 at 23:02? \label{pa:8}
\item Which flight reached gate 5 at 23:02? \label{pa:9}
\end{examps}
\pref{pa:10} is ambiguous. It may either involve the
non-habitual homonym of \qit{to depart} (this homonym is a point verb
in the airport domain), in which case 5:00pm is
the time of the actual departure, or the state habitual homonym (to
depart habitually at some time), in which case 5:00pm is the habitual
departure time. In the latter case, I treat \qit{at 5:00pm} as a
prepositional phrase complement of the habitual \qit{to depart}, not
as a temporal adverbial. This reflects the fact that the \qit{at
5:00pm} does not specify a time when
the habit holds, but forms part of the
description of the habit, i.e.\ it is used in a way very similar to
how \qit{from gate 2} is used in \pref{pa:3}.
\begin{examps}
\item Which flight departed at 5:00pm? \label{pa:10}
\item Which flight departed (habitually) from gate 2 (last year)? \label{pa:3}
\end{examps}
\paragraph{With activities:}
With activities, punctual adverbials usually have an inchoative
meaning, but an interjacent one is also possible in some cases.
\pref{pa:11}, for example, could refer to a flight that either joined
the queue of runway 2 at 5:00pm or was simply in
the queue at 5:00pm. (In the airport domain, \qit{to queue} and
\qit{to taxi} (no destination) are activity verbs.) The
inchoative meaning seems the preferred one in \pref{pa:11}. It
also seems the preferred one in \pref{pa:13}, though an interjacent
meaning is (arguably) also possible. The interjacent meaning is
easier to accept in \pref{pa:14}.
\begin{examps}
\item Which flight queued for runway 2 at 5:00pm? \label{pa:11}
\item BA737 taxied at 5:00pm. \label{pa:13}
\item Which flights circled at 5:00pm? \label{pa:14}
\end{examps}
With past continuous forms of activity verbs, however, punctual
adverbials normally have only interjacent meanings (compare
\pref{pa:11} and \pref{pa:13} to \pref{pa:17} and
\pref{pa:19}). (One would not normally use punctual adverbials with
present continuous forms, since in that case the situation is known to
take place at the present.)
\begin{examps}
\item Which flight was queueing for runway 2 at 5:00pm? \label{pa:17}
\item BA737 was taxiing at 5:00pm. \label{pa:19}
\end{examps}
To account for sentences like \pref{pa:17} and \pref{pa:19} (and other
phenomena to be discussed in following sections), I classify the
progressive tenses (present and past continuous) of activity and
culminating activity verbs as states. For example, in the airport
domain, the base form of \qit{to queue} is an activity. Normally, all
other forms of the verb (e.g.\ the simple past) inherit the aspectual
class of the base form. The progressive tenses (e.g.\ \qit{is
queueing}, \qit{was queueing}) of the verb, however, are states. (The
progressive can be seen as forcing an aspectual shift from activities
or culminating activities to states. No such aspectual shift is needed
in the case of point verbs.) This arrangement, along with the assumption
that punctual adverbials combining with states have only interjacent
meanings, accounts for the fact that \pref{pa:17} and \pref{pa:19}
have only interjacent meanings. In various forms, assumptions that
progressive tenses cause aspectual shifts to states have also been
used in \cite{Dowty1986}, \cite{Moens}, \cite{Vlach1993}, \cite{Kent},
and elsewhere.
\paragraph{With culminating activities:}
When combining with culminating activities, punctual adverbials
usually have inchoative or terminal meanings (when they have terminal
meanings, they specify the time when the climax was
reached). The terminal reading is the preferred one in \pref{pa:23}.
In \pref{pa:25} both readings seem possible. In \pref{pa:24} the
inchoative meaning seems the preferred one. (In the airport domain,
\qit{to land}, \qit{to taxi} (to destination), and \qit{to inspect}
are culminating activity verbs.)
\begin{examps}
\item Which flight landed at 5:00pm? \label{pa:23}
\item Which flight taxied to gate 4 at 5:00pm? \label{pa:25}
\item Who inspected BA737 at 5:00pm? \label{pa:24}
\end{examps}
Perhaps, as with activities, an interjacent meaning
is sometimes also possible (e.g.\ \pref{pa:25} would refer to a flight
that was on its way to gate 4 at 5:00pm). This may be true, but with
culminating activities the inchoative or terminal reading is usually
much more dominant. For simplicity, I ignore the possible interjacent
meaning in the case of culminating activities.
With past continuous forms of culminating activity verbs, punctual
adverbials normally have only interjacent meanings.
Compare, for example, \pref{pa:23} -- \pref{pa:24} to \pref{pa:28} --
\pref{pa:30}. This is in accordance with the assumption that the
progressive tenses of activity and culminating activity verbs are
states.
\begin{examps}
\item Which flight was landing at 5:00pm? \label{pa:28}
\item Which flight was taxiing to gate 4 at 5:00pm? \label{pa:29}
\item Who was inspecting BA737 at 5:00pm? \label{pa:30}
\end{examps}
\paragraph{With past perfect:}
As discussed in section \ref{past_perfect}, in sentences like
\pref{pa:31} the adverbial can be taken to refer either directly to
the taxiing (the taxiing started or ended at 5:00pm) or to the
reference time (the taxiing had already finished at 5:00pm).
\begin{examps}
\item BA737 had taxied to gate 2 at 5:00pm. \label{pa:31}
\item BA737 had [taxied to gate 2 at 5:00pm]. \label{pa:32}
\item BA737 [had taxied to gate 2] at 5:00pm. \label{pa:33}
\end{examps}
The way in which sentences like \pref{pa:31} are treated in this
thesis will become clearer in chapter \ref{TOP_chapter}. A rough
description, however, can be given here. \pref{pa:31} is treated as
syntactically ambiguous between \pref{pa:32} (where the adverbial
applies to the past participle \qit{taxied}) and \pref{pa:33} (where
the adverbial applies to the past perfect \qit{had taxied}). The past
participle (\qit{taxied}) inherits the aspectual class of the base
form, and refers directly to the situation of the verb (the
taxiing). In contrast, the past perfect (\qit{had taxied}) is always
classified as state (the past perfect can be seen as causing an
aspectual shift to state), and refers to a time-period that starts
immediately after the end of the situation of the past participle (the
end of the taxiing), and extends up to the end of time. Let us call
this period the \emph{consequent period}.
In the airport domain, the base form of \qit{to taxi} (to destination)
is a culminating activity. Hence, the past participle \qit{taxied}
(which refers directly to the taxiing) is also a culminating
activity. In \pref{pa:32}, a punctual adverbial combines with the
(culminating activity) past participle. According to the discussion
above, two readings arise: an inchoative (the taxiing started at
5:00pm) and a terminal one (the taxiing finished at 5:00pm). In
contrast, in \pref{pa:33} the punctual adverbial combines with the
past perfect \qit{had taxied}, which is a state expression that refers
to the consequent period. Hence, only an interjacent reading arises:
the time of the adverbial must simply be within the consequent period
(there is no need for the adverbial's time to be the beginning or end
of the consequent period). This requires the taxiing to have been
completed at the time of the adverbial. A similar arrangement is used
when the past perfect combines with period adverbials, duration
\qit{for~\dots} and \qit{in~\dots} adverbials, or temporal subordinate
clauses (to be discussed in the following sections).
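The truth conditions of the interjacent reading of \pref{pa:33} can be
pictured roughly as follows. The sketch is Python pseudocode and
purely illustrative: time-points are represented as numbers, and
\texttt{END\_OF\_TIME} is a hypothetical stand-in for the latest
time-point of the time axis.
\begin{verbatim}
END_OF_TIME = float("inf")   # stand-in for the latest point of the time axis

def consequent_period(situation_end):
    """Period starting immediately after the situation ends, up to the end of time."""
    return (situation_end, END_OF_TIME)

def interjacent_reading_holds(adverbial_time, situation_end):
    """The punctual adverbial's time must simply fall within the consequent period."""
    start, end = consequent_period(situation_end)
    return start < adverbial_time <= end

# If the taxiing was completed at 16:45 (represented here as 16.75),
# "BA737 had taxied to gate 2 at 5:00pm" is true under this reading:
print(interjacent_reading_holds(17.0, 16.75))   # True
\end{verbatim}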
The assumption that the past perfect causes an aspectual shift to
state is similar to claims in \cite{Moens},
\cite{Vlach1993}, and elsewhere, that English perfect forms are (or refer
to) states.
\paragraph{Lexical, consequent, and progressive states:}
There is sometimes a need to distinguish between expressions
that are states because they have inherited the state aspectual class
of a base verb form, and expressions that are states because of an aspectual
shift introduced by a past perfect or a progressive
tense. Following \cite{Moens}, I use the terms
\emph{lexical state}, \emph{consequent state}, and \emph{progressive
state} to distinguish the three genres. In the airport domain, for
example, the base form of \qit{to contain} is a lexical state, and so
are the forms (e.g.\ the simple past \qit{contained}) that inherit its
aspectual class. The base form of \qit{to queue}, in contrast, is an
activity; its past perfect \qit{had queued} is a consequent state,
while its present continuous form \qit{is queueing} is a progressive
state.
Finally, for reasons that will be discussed in section
\ref{hpsg:mult_mods}, I assume that punctual adverbials cause the
aspectual class of the syntactic constituent they modify to become
point. In \pref{pa:33}, for example, the \qit{taxied to gate 2}
inherits the culminating activity aspectual class of the base form.
The past perfect causes the aspectual class of \qit{had taxied to gate
2} to become consequent state. Finally, the \qit{at 5:00pm} causes
the aspectual class of \qit{had taxied to gate 2 at 5:00pm} to become point.
Table \ref{punctual_adverbials_table} summarises the main points of
this section.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{meanings of punctual adverbials} \\
\hline \hline
with state & interjacent \\
\hline
with activity & inchoative or interjacent \\
\hline
with culm.\ activity & inchoative or terminal \\
\hline
with point & specifies time of instantaneous situation \\
\hline \hline
\multicolumn{2}{|l|}{The resulting aspectual class is point.}\\
\hline
\end{tabular}
}
\end{center}
\caption{Punctual adverbials in the framework of this thesis}
\label{punctual_adverbials_table}
\end{table}
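
Table \ref{punctual_adverbials_table} can also be read as a simple lookup
table. The following Python sketch is only an illustration of the table
(it is not part of the framework of this thesis); the class and reading
labels are ad hoc strings.
{\small
\begin{verbatim}
PUNCTUAL_ADVERBIAL_READINGS = {
    "state":                {"interjacent"},
    "activity":             {"inchoative", "interjacent"},
    "culminating activity": {"inchoative", "terminal"},
    "point":                {"time of instantaneous situation"},
}

def combine_with_punctual_adverbial(aspectual_class):
    """Possible readings when a punctual adverbial modifies a constituent
    of the given aspectual class; the resulting class is always 'point'."""
    return PUNCTUAL_ADVERBIAL_READINGS[aspectual_class], "point"

# combine_with_punctual_adverbial("culminating activity")
#   -> ({'inchoative', 'terminal'}, 'point')
\end{verbatim}
}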
\subsection{Period adverbials} \label{period_adverbials}
Unlike punctual adverbials, which are understood as specifying points
in time, adverbials like \qit{in 1991}, \qit{on Monday},
\qit{yesterday} (e.g.\ \pref{padv:1} -- \pref{padv:2}) are usually
understood as specifying longer, non-instantaneous periods of time.
In \pref{padv:1}, for example, the period of \qit{in 1991} covers the
whole of 1991. I call adverbials of this kind \emph{period adverbials}.
\begin{examps}
\item Who was the sales manager in 1991? \label{padv:1}
\item Did BA737 circle on Monday? \label{padv:3}
\item Which flights did J.Adams inspect yesterday? \label{padv:2}
\end{examps}
\qit{Before~\dots} and \qit{after~\dots} adverbials (e.g.\ \pref{padv:3.2})
can also be considered period adverbials, except that in this case one
of the boundaries of the period is left unspecified. (I model time as
bounded; see section \ref{temporal_ontology} below. In the absence of other
constraints, I treat the unspecified boundary as the beginning or end
of time.) In \pref{padv:3.2}, for example, the end of the period is
the beginning of 2/5/95; the beginning of the period is left
unspecified. \qit{Before} and \qit{after} can also introduce
temporal subordinate clauses; this will be discussed in section
\ref{before_after_clauses}.
\begin{examps}
\item Which company serviced BA737 before 2/5/95? \label{padv:3.2}
\end{examps}
This thesis examines only period adverbials introduced by \qit{in},
\qit{on}, \qit{before}, and \qit{after}, as well as \qit{today} and
\qit{yesterday}. (\qit{In~\dots} adverbials can also
specify durations, e.g.\ \qit{in two hours}; this will be discussed in
section \ref{in_adverbials}.) Other period adverbials, like \qit{from 1989 to
1990}, \qit{since 1990}, \qit{last week}, or \qit{two days ago}, are
not considered. Extending the framework of this thesis to support
more period adverbials should not be difficult.
\paragraph{With states:}
When period adverbials combine with state expressions, the situation of
the state expression must hold for at least some time during the
period of the adverbial. In \pref{padv:10}, for example, the person must
have been a manager for at least some time in 1995. Similarly, in
\pref{padv:11}, the person must have been at gate 2 for at least some
time on the previous day.
\begin{examps}
\item Who was a manager in 1995? \label{padv:10}
\item Who was at gate 2 yesterday? \label{padv:11}
\end{examps}
There is often, however, an implication that the situation
holds \emph{throughout} the period of the adverbial.
\pref{padv:13}, for example, could mean that the tank was empty
throughout January, not simply during some part of January. Similarly, in
\pref{padv:12} the user could be referring to tanks that were empty
\emph{throughout} January. In that case, if a tank was empty for only some days in
January and the \textsc{Nlitdb}\xspace included that tank in the answer, the user
would be misled into believing that the tank was empty throughout
January. Similar comments can be made for
\pref{padv:13.9} and \pref{padv:15}.
\begin{examps}
\item Tank 4 was empty in January. \label{padv:13}
\item Which tanks were empty in January? \label{padv:12}
\item Was runway 2 open on 6/7/95? \label{padv:13.9}
\item Which flights departed (habitually) from gate 2 in 1993? \label{padv:15}
\end{examps}
The same implication is possible in sentences with \qit{before~\dots}
or \qit{after~\dots} adverbials. \pref{padv:20.1}, for example, could
mean that the runway was open all the time from some unspecified time
up to immediately before 5:00pm (and possibly longer).
\begin{examps}
\item Runway 2 was open before 5:00pm. \label{padv:20.1}
\end{examps}
One way to deal with such implications is to treat sentences where
period adverbials combine with states as ambiguous. That is, to
distinguish between a reading where the situation holds throughout the
adverbial's period, and a reading where the situation holds during only
some part of the adverbial's period. \cite{Vlach1993} (p.~256) uses
the terms \emph{durative} and \emph{inclusive} to refer to the two
readings. (A \textsc{Nlitdb}\xspace could paraphrase both readings and ask the user
to select one, or it could provide answers to both readings,
indicating which answer corresponds to which reading.) This approach
has the disadvantage of always generating two readings, even in cases
where the durative reading is clearly impossible. When
the state expression combines not only with a period adverbial but
also with a \qit{for~\dots} duration adverbial, the meaning can never
be that the situation necessarily holds throughout the
adverbial's period. For example, \pref{padv:29} can never mean that
the tank must have been empty throughout January (cf.\
\pref{padv:12}).
\begin{examps}
\item Which tank was empty for two days in January? \label{padv:29}
\item When on 6/7/95 was tank 5 empty? \label{padv:27}
\end{examps}
Similarly, in time-asking questions like \pref{padv:27}, the durative
reading is impossible. \pref{padv:27} can never mean that the tank
must have been empty throughout 6/7/95 (cf.\
\pref{padv:13.9}). Formulating an account of exactly when the durative
reading is possible is a task which I have not undertaken.
Although in chapter \ref{TOP_chapter} I discuss how the distinction
between durative and inclusive readings could be captured in \textsc{Top}\xspace,
for simplicity in the rest of this thesis I consider only the
inclusive readings, ignoring the durative ones.
\paragraph{With points:}
When period adverbials combine with point expressions, the period of
the adverbial must contain the time where the instantaneous situation of
the point expression occurs (e.g.\ \pref{padv:62}).
\begin{examps}
\item Did BA737 enter sector 5 on Monday? \label{padv:62}
\end{examps}
\paragraph{With culminating activities:}
When period adverbials combine with culminating activity expressions,
I allow two possible readings: (a) that the situation of the
culminating activity expression both starts and reaches its completion
within the adverbial's period, or (b) that the situation simply
reaches its completion within the adverbial's period. In the second
reading, I treat the culminating activity expression as referring to
only the completion of the situation it would normally describe, and
the aspectual class is changed to point.
The first reading is the preferred one in \pref{padv:42}, which is
most naturally understood as referring to a runner who both started
and finished running the 40 miles on Wednesday (\qit{to run} (a
distance) is typically classified as a culminating activity verb).
\begin{examps}
\item Who ran 40 miles on Wednesday? \label{padv:42}
\end{examps}
In the airport domain, the first reading is the preferred one in
\pref{padv:47.1} (the inspection both started and was completed on
Monday).
\begin{examps}
\item J.Adams inspected BA737 on Monday. \label{padv:47.1}
\end{examps}
The second reading (the situation simply reaches its
completion within the adverbial's period) is needed in questions
like \pref{padv:48} and \pref{padv:49}. In the airport domain,
\qit{to land} and \qit{to take off} are culminating
activity verbs (landings and take-offs involve following particular
procedures; the landing or take-off starts when the pilot starts the
corresponding procedure, and is completed when that procedure is
completed). If only the first reading were available (both start and
completion within the adverbial's period), in
\pref{padv:48} the \textsc{Nlitdb}\xspace would report only flights that both started
and finished landing on Monday. If a flight started the
landing procedure at 23:55 on Sunday and finished it at 00:05
on Monday, that flight would not be reported. This seems over-restrictive. In
\pref{padv:48} the most natural reading is that the flights must have
simply touched down on Monday, i.e.\ the landing must have simply been
completed within Monday. Similar comments can be made for
\pref{padv:49} and \pref{padv:52} (in domains where \qit{to fix} is a
culminating activity verb).
\begin{examps}
\item Which flights landed on Monday? \label{padv:48}
\item Which flights took off after 5:00pm? \label{padv:49}
\item Did J.Adams fix any faults yesterday? \label{padv:52}
\end{examps}
The problem in these cases is that \qit{to land}, \qit{to take off},
and \qit{to fix} need to be treated as point verbs (referring to only
the time-points where the corresponding situations are completed),
even though they have been classified as culminating activity verbs
(section \ref{aspect_examples}). The second reading allows exactly
this. The culminating activity expression is taken to refer to only
the completion point of the situation it would normally describe, its
aspectual class is changed to point, and the completion point is
required to fall within the adverbial's period.
The fact that two readings are allowed when period adverbials combine
with culminating activities means that sentences like
\pref{padv:47.1} -- \pref{padv:52} are treated as ambiguous. In
all ambiguous sentences, I assume that a \textsc{Nlitdb}\xspace would present
all readings to the user, asking them to choose one, or that it would
provide answers to all readings, showing which answer corresponds to
which reading. (The prototype \textsc{Nlitdb}\xspace of this thesis adopts the
second strategy, though the mechanism for explaining which answer
corresponds to which reading is primitive: the readings are shown as
\textsc{Top}\xspace formulae.)
In the case of \qit{before~\dots} adverbials (e.g.\ \pref{padv:49b}),
the two readings are semantically equivalent: requiring the situation
to simply reach its completion before some time is equivalent to
requiring the situation to both start and reach its completion before
that time. To avoid generating two equivalent readings, I allow only
the reading where the situation both starts and reaches its completion
within the adverbial's period.
\begin{examps}
\item Which flights took off before 5:00pm? \label{padv:49b}
\end{examps}
Even with the second reading, the answers of the \textsc{Nlitdb}\xspace may not
always be satisfactory. Let us assume, for example, that J.Adams
started inspecting a flight late on Monday, and finished the
inspection early on Tuesday. Neither of the two readings would include
that flight in the answer to \pref{padv:47}, because both require the
completion point to fall on Monday. While strictly speaking this seems
correct, it would be better if the \textsc{Nlitdb}\xspace could also include in the
answer inspections that \emph{partially} overlap the adverbial's
period, warning the user about the fact that these inspections are not
completely contained in the adverbial's period. This is another case
where cooperative responses (section \ref{no_issues}) are needed.
\begin{examps}
\item Which flights did J.Adams inspect on Monday? \label{padv:47}
\end{examps}
Finally, I note that although in the airport domain \qit{to taxi} (to
destination) is a culminating activity verb, in \pref{padv:61.7} the
verb form is a (progressive) state. Hence, the \textsc{Nlitdb}\xspace's answer would
be affirmative if BA737 was taxiing to gate 2 some time within the
adverbial's period (before 5:00pm), even if BA737 did not reach the
gate during that period. This correctly captures the most natural
reading of
\pref{padv:61.7}.
\begin{examps}
\item Was BA737 taxiing to gate 2 before 5:00pm? \label{padv:61.7}
\end{examps}
\paragraph{With activities:}
When period adverbials combine with activities, I require the
situation of the verb to hold for at least some time within
the adverbial's period (same meaning as with
states). In \pref{padv:66}, for example, the flight must have
circled for at least some time on Monday, and in \pref{padv:67} the
flights must have taxied for at least some time after 5:00pm.
\begin{examps}
\item Did BA737 circle on Monday? \label{padv:66}
\item Which flights taxied after 5:00pm? \label{padv:67}
\end{examps}
Another, stricter reading is sometimes possible (especially with
\qit{before} and \qit{after}): that the situation
does not extend past the boundaries of the adverbial's period.
For example, \pref{padv:67} would refer to flights that \emph{started}
to taxi after 5:00pm (a flight that started to taxi
at 4:55pm and continued to taxi until 5:05pm would not be
reported). This reading is perhaps also possible with states (e.g.\
\pref{padv:20.1}), though with activities it seems easier to
accept. As a simplification, such readings are ignored in this thesis.
\paragraph{Elliptical forms:} \qit{Before} and \qit{after} are sometimes
followed by noun phrases that do not denote entities of the temporal
ontology (e.g.\ \pref{padv:71}).
\begin{examps}
\item Did J.Adams inspect BA737 before/after UK160? \label{padv:71}
\item Did J.Adams inspect BA737 before/after he inspected UK160?
\label{padv:71.1}
\end{examps}
Questions like \pref{padv:71} can be considered elliptical forms of
\pref{padv:71.1}, i.e.\ in these cases \qit{before} and \qit{after}
could be treated as when they introduce subordinate clauses
(section \ref{before_after_clauses} below). Questions like \pref{padv:71}
are currently not supported by the framework of this thesis.
Table \ref{period_adverbials_table} summarises the main points of this
section.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{meanings of period adverbials} \\
\hline \hline
with state or activity & situation holds for at least part of adverbial's period \\
\hline
with culm.\ activity & situation starts and is completed within
adverbial's period,\\
& or situation is simply completed within adverbial's
period$^{*\dagger}$\\
\hline
with point & instantaneous situation occurs within adverbial's period \\
\hline \hline
\multicolumn{2}{|l|}{$^*$Not with \qit{before~\dots} adverbials.} \\
\multicolumn{2}{|l|}{$^{\dagger}$The resulting aspectual class is point. (In
all other cases the aspectual class} \\
\multicolumn{2}{|l|}{\ \ remains the same.)}\\
\hline
\end{tabular}
}
\end{center}
\caption{Period adverbials in the framework of this thesis}
\label{period_adverbials_table}
\end{table}
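
To make the two readings allowed with culminating activities more
concrete, the following sketch checks them for explicitly given periods.
It is only an illustration (not part of the framework of this thesis);
periods are ad hoc pairs of numbers, here hours counted from Sunday 00:00.
{\small
\begin{verbatim}
def within(point, period):
    # period is a (start, end) pair; boundaries included
    return period[0] <= point <= period[1]

def culm_activity_readings(situation, adverbial_period, introduced_by=None):
    """situation = (start, completion) of a culminating activity.
    Reading (a): start and completion both fall in the adverbial's period.
    Reading (b): only the completion falls in it (the aspectual class
    becomes point); not generated for 'before ...' adverbials, where the
    two readings would be equivalent."""
    start, completion = situation
    readings = {"a": within(start, adverbial_period) and
                     within(completion, adverbial_period)}
    if introduced_by != "before":
        readings["b"] = within(completion, adverbial_period)
    return readings

# Landing from 23:55 Sunday to 00:05 Monday, adverbial 'on Monday'
# (hours counted from Sunday 00:00; Monday is (24, 48)):
# culm_activity_readings((23.92, 24.08), (24, 48)) -> {'a': False, 'b': True}
\end{verbatim}
}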
\subsection{Duration \qit{for~\dots} adverbials} \label{for_adverbials}
This section discusses \qit{for~\dots} adverbials that
specify durations (e.g.\ \pref{dura:1}).
\begin{examps}
\item Runway 2 was open for five days. \label{dura:1}
\end{examps}
\paragraph{With states and activities:}
When \qit{for~\dots} adverbials combine with states or activities, one
reading is that there must be a period with the duration of the
\qit{for~\dots} adverbial, such that the situation of the state or
activity holds throughout that period. According to this reading, in
\pref{dura:3} there must be a five-year
period, throughout which the person was a manager, and in
\pref{dura:3} a twenty-minute period throughout which
the flight was circling. If J.Adams was a manager for six
consecutive years (e.g.\ 1981 -- 1986), he would
be included in the answer to \pref{dura:3}, because there is a five-year
period (e.g.\ 1981 -- 1985) throughout which he was a manager.
\begin{examps}
\item Who was a manager for five years? \label{dura:3}
\item Did BA737 circle for twenty minutes? \label{dura:4}
\end{examps}
In some cases, however, \qit{for~\dots} adverbials are used with a stricter
meaning: they specify the duration of a \emph{maximal}
period where a situation held. In that case, if J.Adams
started to be a manager at the beginning of 1981 and stopped being a
manager at the end of 1986 (six consecutive years), he would
\emph{not} be included in the answer to \pref{dura:3}. For simplicity,
this stricter reading is ignored in this thesis.
In other cases, a \qit{for~\dots} adverbial does not necessarily
specify the duration of a \emph{single} period, but a \emph{total
duration}. According to this reading, if J.Adams was a manager during
several non-overlapping periods, and the total duration of these
periods is five years, he would be included in the answer to
\pref{dura:3}, even if he was never a manager for a continuous
five-year period. This reading of \qit{for~\dots} adverbials is also not
supported in this thesis.
There is a problem if \qit{for~\dots} adverbials are allowed to
combine with consequent states (section \ref{point_adverbials}). This
problem will be discussed in section \ref{duration_adverbials}, once
some formal apparatus has been established. For the moment, I note
that the solution involves not allowing \qit{for~\dots} adverbials to
be used with consequent states.
\paragraph{With points:}
\qit{For~\dots} adverbials sometimes specify the duration of a
situation that \emph{follows} the situation of the verb. This is
particularly common when \qit{for~\dots} adverbials combine with point
expressions. For instance, \pref{dura:11} (based on an example from
\cite{Hwang1994}) probably does not mean that J.Adams was actually
leaving his office for fifteen minutes. It means that he stayed (or
intended to stay) out of his office for fifteen minutes. (I assume
here that \qit{to leave} is a point verb, as in the airport domain.)
This use of \qit{for~\dots} adverbials is not supported in this thesis.
\begin{examps}
\item J.Adams left his office for fifteen minutes. \label{dura:11}
\end{examps}
\qit{For~\dots} adverbials also give rise to iterative readings (section
\ref{progressives}). This is again particularly common with point
expressions. \pref{dura:12} (from \cite{Hwang1994})
probably means that Mary won several times (\qit{to win} is typically
classified as a point verb). Such iterative uses of \qit{for~\dots}
adverbials are not supported in this thesis.
\begin{examps}
\item Mary won the competition for four years. \label{dura:12}
\end{examps}
Excluding iterative readings and readings where \qit{for~\dots}
adverbials refer to consequent situations (neither of which
is supported in this thesis), sentences where
\qit{for~\dots} adverbials combine with point expressions either
sound odd or signal that the user is unaware that the situation
of the point expression is modelled as instantaneous (an explanatory
message to the user is needed in the latter case; this thesis,
however, provides no mechanism to generate such messages). Hence,
for the purposes of this thesis it seems reasonable not to allow
\qit{for~\dots} adverbials to combine with point expressions.
\paragraph{With culminating activities:}
When \qit{for~\dots} adverbials combine with culminating activities,
the resulting sentences sometimes sound odd or even unacceptable. For
example, \pref{dura:38} (based on an example from
\cite{Moens}) sounds odd or unacceptable to most native English speakers
(\qit{to build} is typically classified as a culminating
activity verb). In contrast, \pref{dura:37}, where the adverbial combines
with a (progressive) state, is easily acceptable.
\begin{examps}
\item \odd Housecorp built a shopping centre for two years. \label{dura:38}
\item Housecorp was building a shopping centre for two years. \label{dura:37}
\end{examps}
Based on similar examples, Vendler (section \ref{asp_taxes})
concludes that accomplishments (culminating activities) do not combine
with \qit{for~\dots} adverbials. This, however, seems
over-restrictive. \pref{dura:40} and \pref{dura:42}, for example, seem
acceptable.
\begin{examps}
\item BA737 taxied to gate 2 for two minutes. \label{dura:40}
\item Did J.Adams inspect BA737 for ten minutes? \label{dura:42}
\end{examps}
Unlike \pref{dura:43}, in \pref{dura:40}
there is no requirement that the taxiing must have been completed,
i.e.\ that BA737 must have reached the gate. Similar comments can be
made for \pref{dura:42} and \pref{dura:45}. \qit{For~\dots} adverbials seem to
cancel any requirement that the climax must have been
reached. (Similar observations are made in \cite{Dowty1986},
\cite{Moens2}, and \cite{Kent}.)
\begin{examps}
\item BA737 taxied to gate 2. \label{dura:43}
\item Did J.Adams inspect BA737? \label{dura:45}
\end{examps}
In the framework of this thesis, I allow \qit{for~\dots} adverbials to
combine with culminating activities, with the same meaning that
I adopted in the case of states and activities, and with the
proviso that any requirement that the climax
must have been reached should be cancelled. That is, in \pref{dura:42}
there must be a ten-minute period throughout which J.Adams
was inspecting BA737.
Table \ref{for_adverbials_table} summarises the main points of this
section.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{meanings of duration \qit{for~\dots} adverbials} \\
\hline \hline
with lexical or progressive state & situation holds continuously for at least
that long\\
\hline
with consequent state & (not allowed in the framework of this thesis) \\
\hline
with activity & situation holds continuously for at least that long \\
\hline
with culminating activity & situation holds continuously for at
least that long \\
& (no need for climax to be reached)\\
\hline
with point & (not allowed in the framework of this thesis) \\
\hline
\end{tabular}
}
\end{center}
\caption{Duration \qit{for~\dots} adverbials in the framework of this thesis}
\label{for_adverbials_table}
\end{table}
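
The reading adopted here for duration \qit{for~\dots} adverbials can be
paraphrased as a simple test on the maximal periods where the situation
held. The sketch below is only illustrative (not part of the framework of
this thesis); periods and durations are ad hoc numbers (e.g.\ years).
{\small
\begin{verbatim}
def satisfies_for_adverbial(maximal_periods, duration):
    """True if the situation held continuously throughout some period of
    the adverbial's duration, i.e. if some maximal period where the
    situation held is at least that long.  (The stricter 'maximal period'
    and 'total duration' readings discussed above are ignored.)"""
    return any(end - start >= duration for (start, end) in maximal_periods)

# J.Adams a manager from the beginning of 1981 to the end of 1986:
# satisfies_for_adverbial([(1981, 1987)], 5)  ->  True
\end{verbatim}
}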
\subsection{Duration \qit{in~\dots} adverbials} \label{in_adverbials}
This section discusses \qit{in~\dots} adverbials that specify
durations (e.g.\ \pref{inad:1}, \pref{inad:2}). \qit{In} can also
introduce period adverbials (e.g.\ \qit{in 1995}; see section
\ref{period_adverbials}).
\begin{examps}
\item Airserve serviced BA737 in two hours. \label{inad:1}
\item Which flight did J.Adams inspect in one hour? \label{inad:2}
\end{examps}
\paragraph{With culminating activities:}
With culminating activity expressions, \qit{in~\dots} adverbials
usually specify the length of a period that ends at the time-point
where the situation of the culminating activity expression is
completed. In \pref{inad:1}, for example, two hours is probably the
length of a period that ends at the time-point where the service was
completed. \pref{inad:2} is similar. The period whose length is
specified by the \qit{in~\dots} adverbial usually starts at the
time-point where the situation of the culminating activity expression
begins. In \pref{inad:1}, for example, the two hours probably start at
the time-point where the service began. The period of the adverbial,
however, may sometimes not start at the beginning of the situation of
the culminating activity expression, but at some other earlier
time. In \pref{inad:1}, the start of the two hours could be the
time-point where Airserve was asked to service BA737, not the
beginning of the actual service. The framework of this thesis
supports only the case where the period of the adverbial starts at the
beginning of the situation described by the culminating activity expression.
\paragraph{With points:}
With point expressions, the period of the \qit{in~\dots} adverbial
starts before the (instantaneous) situation of the point expression,
and ends at the time-point where the situation of the point expression
occurs. In \pref{inad:10} the ten minutes end at the point where BA737
arrived at gate 2, and start at some earlier time-point (e.g.\ when
BA737 started to taxi to gate 2). \pref{inad:11} is similar.
\begin{examps}
\item BA737 reached gate 2 in ten minutes. \label{inad:10}
\item BA737 entered sector 2 in five minutes. \label{inad:11}
\end{examps}
Determining exactly when the period of the adverbial starts is often
difficult. It is not clear, for example, when the five minutes of
\pref{inad:11} start. As a simplification, I do not allow duration
\qit{in~\dots} adverbials to combine with point expressions.
\paragraph{With states and activities:}
\qit{In~\dots} adverbials are sometimes used with activity
expressions, with the \qit{in~\dots} duration adverbial intended to
specify the duration of the situation described by the activity
expression. Typically, in these cases the speaker has a culminating
activity view in mind. For example, \pref{inad:17} can be used in this
way if the speaker has a particular destination (say gate 2) in
mind. In that case, \pref{inad:17} can be thought of as an elliptical
form of \pref{inad:19}. The framework of this thesis does not support
this use of \pref{inad:17}.
\begin{examps}
\item BA737 taxied in ten minutes. \label{inad:17}
\item BA737 taxied to gate 2 in ten minutes. \label{inad:19}
\end{examps}
With state and activity expressions, \qit{in~\dots} adverbials can also
specify the duration of a period that ends at the beginning of the
situation of the state or activity expression. In \pref{inad:5}, for
example, the two hours probably end at the time-point where tank 5
started to be empty. The beginning of the two hours could be, for
example, a time-point where a pump started to empty the tank, or a time-point
where a decision to empty the tank was taken. Similar
comments apply to \pref{inad:17}.
\begin{examps}
\item Tank 5 was empty in two hours. \label{inad:5}
\end{examps}
As with point expressions, determining exactly when the period of the
adverbial starts is often difficult. As a simplification, I do not
allow duration \qit{in~\dots} adverbials to combine with state or
activity expressions.
Table \ref{in_adverbials_table} summarises the main points of this section.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{meanings of duration \qit{in~\dots} adverbials} \\
\hline \hline
with state, activity, or point & (not allowed in the framework of this
thesis) \\
\hline
with culminating activity & distance from the start to the completion of
the situation \\
\hline
\end{tabular}
}
\end{center}
\caption{Duration \qit{in~\dots} adverbials in the framework of this thesis}
\label{in_adverbials_table}
\end{table}
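
The supported reading of duration \qit{in~\dots} adverbials with
culminating activities amounts to a simple length check. Again, the
sketch below is only illustrative (not part of the framework of this
thesis); times are ad hoc numbers (e.g.\ hours).
{\small
\begin{verbatim}
def satisfies_in_adverbial(situation, duration):
    """situation = (start, completion) of a culminating activity.
    In the only case supported here, the adverbial's period starts at the
    start of the situation and ends at its completion, so its length is
    simply completion - start."""
    start, completion = situation
    return completion - start == duration

# 'Airserve serviced BA737 in two hours', service from 9:00 to 11:00:
# satisfies_in_adverbial((9, 11), 2)  ->  True
\end{verbatim}
}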
\subsection{Other temporal adverbials} \label{other_adverbials}
Other temporal adverbials, that are not supported by the framework of
this thesis, include some adverbials that specify boundaries (e.g.\
\qit{until 1/5/95}, \qit{since 1987}, \qit{by Monday}), frequency
adverbials (\qit{always}, \qit{twice}, \qit{every Monday}), and
adverbials of temporal order (\qit{for the second time},
\qit{earlier}).
\section{Temporal subordinate clauses} \label{subordinate_clauses}
Three kinds of temporal subordinate clauses are examined in this
thesis: clauses introduced by \qit{while}, \qit{before}, and
\qit{after} (e.g.\ clauses introduced by \qit{since}, \qit{until}, or
\qit{when} are not examined). Of the temporal subordinate clauses
that are not examined, \qit{when~\dots} clauses are generally
considered the most difficult to support (see \cite{Ritchie},
\cite{Yip1985}, \cite{Hinrichs1986}, \cite{Moens},
\cite{Moens2}, and \cite{Lascarides1993} for explorations of
\qit{when~\dots} clauses).
\subsection{\qit{While~\dots} clauses} \label{while_clauses}
\paragraph{Subordinate clause:}
As with period adverbials (section \ref{period_adverbials}), each
\qit{while~\dots} clause is understood as specifying a time period.
This is a maximal period throughout which the situation of the
\qit{while~\dots} clause holds. Let us assume, for example, that
J.Adams was a manager only from 1/1/1980 to 31/12/1983, and from
1/1/1987 to 31/12/1990. Then, in \pref{whc:1} the period of the
\qit{while~\dots} clause can be either one of these two periods. The
user may have in mind a particular one of the two periods. In that
case, a temporal anaphora resolution mechanism is needed to determine
that period (temporal anaphora is discussed in section
\ref{temporal_anaphora}). The framework of this thesis, however,
provides no such mechanism (the answer to \pref{whc:1} includes
anybody who was fired during either of the two periods).
\begin{examps}
\item Who was fired while J.Adams was a manager? \label{whc:1}
\end{examps}
Sentences where the aspectual class of the \qit{while~\dots} clause is
point (e.g.\ \pref{whc:4} in the airport domain) typically signal
that the user is unaware that the situation of the \qit{while~\dots}
clause is modelled as instantaneous. In the framework of
this thesis, the answer to \pref{whc:4} includes any flight that was
circling at the time-point where BA737 entered sector 2. Ideally, a
message would also be generated to warn the user that entering a
sector is modelled as instantaneous (no warning is currently
generated). This is another case where cooperative responses (section
\ref{no_issues}) are needed.
\begin{examps}
\item Which flights were circling while BA737 entered sector 2? \label{whc:4}
\end{examps}
Sentences containing \qit{while~\dots} clauses whose aspectual class
is consequent state (section \ref{point_adverbials}) usually sound
unnatural or unacceptable. For example, \pref{whc:5.2} --
\pref{whc:5.8} sound at least unnatural (e.g.\ instead of
\pref{whc:5.2} one would normally use \pref{whc:5.10} or
\pref{whc:5.11}). Hence, I do not allow
\qit{while~\dots} clauses whose aspectual class is consequent
state. This also avoids some complications in the English to \textsc{Top}\xspace mapping.
\begin{examps}
\item \odd Did any flight depart while BA737 had landed?
\label{whc:5.2}
\item \odd Did ABM fire anybody while J.Adams had been
the manager? \label{whc:5.5}
\item \odd Had any flight departed while J.Adams had inspected BA737?
\label{whc:5.8}
\item Did any flight depart while BA737 was landing? \label{whc:5.10}
\item Did any flight depart after BA737 had landed? \label{whc:5.11}
\label{whc:5.12}
\end{examps}
When the aspectual class of the \qit{while~\dots} clause is
culminating activity, there is no requirement that the climax
of the situation of the \qit{while~\dots} clause must
have been reached, even if the tense of that clause
normally requires this. In \pref{whc:8} and
\pref{whc:6}, for example, there does not seem to be any requirement that the
service or the boarding must have been completed (cf.\ \pref{whc:10}
and \pref{whc:11}). \pref{whc:8} and \pref{whc:6} appear to have the
same meanings as \pref{whc:9} and \pref{whc:7} (in progressive tenses,
there is no requirement for the climax to be reached; see section
\ref{progressives}).
Table \ref{while_clauses_table}
summarises the main points about \qit{while~\dots} clauses so far.
\begin{examps}
\item Did Airserve service BA737? \label{whc:10}
\item Which flights departed while Airserve serviced BA737? \label{whc:8}
\item Which flights departed while Airserve was servicing BA737? \label{whc:9}
\item Did BA737 board? \label{whc:11}
\item Which flights departed while BA737 boarded? \label{whc:6}
\item Which flights departed while BA737 was boarding? \label{whc:7}
\end{examps}
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline
aspectual class of & \\
\qit{while~\dots} clause & period specified by \qit{while~\dots} clause \\
\hline \hline
consequent state & (not allowed in the framework of this thesis) \\
\hline
lexical/progressive state & maximal period where situation of
\qit{while~\dots} clause holds \\
or activity & \\
\hline
culminating activity & maximal period where situation of
\qit{while~\dots} clause holds \\
& (no need for climax of \qit{while~\dots} clause
to be reached) \\
\hline
point & instant.\ period where situation of \qit{while~\dots}
clause occurs \\
\hline
\end{tabular}
}
\end{center}
\caption{Periods of \qit{while~\dots} clauses in the framework of this thesis}
\label{while_clauses_table}
\end{table}
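
The role of the \qit{while~\dots} clause can be illustrated with a point
main clause, as in \pref{whc:1}. The fragment below is only an
illustration (not part of the framework of this thesis); the data, names,
and numeric times are hypothetical.
{\small
\begin{verbatim}
# Hypothetical data: maximal periods where J.Adams was a manager, and the
# (instantaneous) times of the firings.
manager_periods = [(1980, 1984), (1987, 1991)]
firings = {"Smith": 1982, "Jones": 1985, "Brown": 1988}

def fired_while_manager(firings, manager_periods):
    """Point main clause: the instantaneous situation must occur within
    one of the maximal periods of the 'while ...' clause.  No temporal
    anaphora resolution is attempted: all maximal periods are considered."""
    return [name for name, time in firings.items()
            if any(start <= time <= end for start, end in manager_periods)]

# fired_while_manager(firings, manager_periods)  ->  ['Smith', 'Brown']
\end{verbatim}
}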
\paragraph{Main clause:}
Once the periods of the \qit{while~\dots} clauses have been
established (following table \ref{while_clauses_table}), the behaviour
of \qit{while~\dots} clauses appears to be the same as that of period
adverbials (i.e.\ it follows table \vref{period_adverbials_table}).
With main clauses whose aspectual class is point, the
instantaneous situation of the main clause must occur within
the period of the \qit{while~\dots} clause (e.g.\ in \pref{whc:30} the
departures must have occurred during a maximal period where runway 5
was closed; \qit{to depart} is a point verb in the airport domain).
\begin{examps}
\item Did any flight depart from gate 2 while runway 5 was closed?
\label{whc:30}
\end{examps}
With activity main clauses, the situation of the main clause must be
ongoing some time during the period of the \qit{while~\dots} clause. In
\pref{whc:33}, for example, the flights must have taxied
some time during a maximal period where BA737 was circling. As with
period adverbials, stricter readings are sometimes
possible with activity main clauses. \pref{whc:33}, for example, could
refer to flights that both started and stopped taxiing during a
maximal period where BA737 was circling. As with period adverbials, I
ignore such stricter readings.
\begin{examps}
\item Which flights taxied while BA737 circled? \label{whc:33}
\end{examps}
As in the case of period adverbials, with culminating activity main
clauses I allow two readings: (a) that the situation of the main
clause both starts and reaches its completion within the period of the
\qit{while~\dots} clause, or (b) that the situation of the main clause
simply reaches its completion within the period of the \qit{while~\dots}
clause. In the second reading, the main clause is taken
to refer to only the completion point of the situation it would
normally describe, and its aspectual class is changed to
point. In the airport domain, the first reading is the preferred one in
\pref{whc:34}. The second reading allows the
answer to \pref{whc:35} to contain flights that simply
touched down during the service, even if their
landing procedures did not start during the service.
\begin{examps}
\item J.Adams inspected BA737 while Airserve was servicing
UK160. \label{whc:34}
\item Which flights landed while Airserve was servicing UK160?
\label{whc:35}
\end{examps}
With state main clauses, I require the situation of the main clause to
hold some time during the period of the \qit{while~\dots} clause
(inclusive reading; see section \ref{period_adverbials}). For
example, the answer to \pref{whc:20} must contain anybody who was a
lecturer some time during a maximal period where J.Adams was a
professor (the non-auxiliary \qit{to be} is typically classified as
a state verb). As with period adverbials, there is often an
implication that the situation of the main clause holds
\emph{throughout} the period of the
\qit{while~\dots} clause (durative reading). The durative reading is
unlikely in \pref{whc:20}, but seems the preferred one in \pref{whc:21}
(progressive state main clause). According to the durative reading,
\pref{whc:21} refers to a flight that was circling \emph{throughout} a
maximal period where runway 2 was closed.
\begin{examps}
\item Who was a lecturer while J.Adams was a professor? \label{whc:20}
\item Which flight was circling while runway 2 was
closed? \label{whc:21}
\end{examps}
The treatment of \qit{while~\dots} clauses of this thesis is
similar to that of \cite{Ritchie}. Ritchie also views
\qit{while~\dots} clauses as establishing periods,
with the exact relations between these periods and the situations of
the main clauses depending on the aspectual classes of the main
clauses. Ritchie uses only two aspectual classes (``continuing'' and
``completed''), which makes presenting a direct comparison between his
treatment of \qit{while~\dots} clauses and the treatment of this
thesis difficult. Both approaches, however, lead to similar results,
with the following two main exceptions. (a) In \pref{whc:20} and
\pref{whc:21} (state main clause), Ritchie's treatment admits only durative
readings. In contrast, the framework of this thesis admits only
inclusive ones. (b) In \pref{whc:35} (culminating activity main
clause), Ritchie's arrangements allow only one reading, where the
landings must have both started and been completed during the
service. The framework of this thesis allows an additional reading,
whereby it is enough if the landings were simply completed during the
service.
\subsection{\qit{Before~\dots} and \qit{after~\dots} clauses}
\label{before_after_clauses}
I treat \qit{before~\dots} and \qit{after~\dots} clauses as
establishing periods, as in the case of the \qit{before~\dots} and
\qit{after~\dots} adverbials of section \ref{period_adverbials}. In
\qit{before~\dots} clauses, the period starts at some
unspecified time-point (in the absence of other constraints, the
beginning of time), and ends at a time-point provided by the
\qit{before~\dots} clause. In \qit{after~\dots} clauses, the period
starts at a time-point provided by the \qit{after~\dots} clause, and
ends at some unspecified time-point (the end of time, in the absence
of other constraints). I use the terms \emph{before-point} and
\emph{after-point} to refer to the time-points provided by
\qit{before~\dots} and \qit{after~\dots} clauses respectively. Once
the periods of the \qit{before~\dots} and \qit{after~\dots} clauses
have been established, the behaviour of the clauses appears to be the
same as that of period adverbials (i.e.\ it follows table
\vref{period_adverbials_table}).
\paragraph{State \qit{before/after~\dots} clause:}
Let us first examine sentences where the aspectual class of the
\qit{before~\dots} or \qit{after~\dots} clause is state. With
\qit{before~\dots} clauses, the before-point is a time-point where
the situation of the \qit{before~\dots} clause starts (table
\ref{before_clauses_table}). In
\pref{bac:1}, for example, the before-point is a time-point where
runway 2 started to be open. The aspectual class of the main clause is
point (\qit{to depart} is a point verb in the airport domain). Hence,
according to table \vref{period_adverbials_table}, the departures must
have occurred within the period of the \qit{before~\dots} clause, i.e.\
before the time-point where runway 2 started to be open. Similar
comments apply to \pref{bac:1.1}, \pref{bac:2} (progressive
\qit{before~\dots} clause), and \pref{bac:3} (consequent state
\qit{before~\dots} clause). In \pref{bac:3}, the
before-point is the beginning of the consequent period of the
inspection (the period that contains all the time after the completion of
the inspection; see section \ref{point_adverbials}), i.e.\ the
departures must have happened before the inspection was completed.
\begin{examps}
\item Which flights departed before runway 2 was open? \label{bac:1}
\item Which flights departed before the emergency system was in
operation? \label{bac:1.1}
\item Which flights departed before BA737 was circling? \label{bac:2}
\item Which flights departed before J.Adams had inspected BA737? \label{bac:3}
\end{examps}
\begin{table}[t]
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline aspectual class of & before-point \\
\qit{before~\dots} clause & (right boundary of period specified by
\qit{before~\dots} clause) \\
\hline \hline
state & time-point where situation of \qit{before~\dots} clause starts \\
\hline activity & time-point where situation of \qit{before~\dots}
clause starts \\
\hline culm.\ activity & time-point where situation of
\qit{before~\dots} clause \\
& starts or is completed \\
\hline point & time-point where situation of \qit{before~\dots}
clause occurs \\
\hline
\end{tabular}
}
\end{center}
\caption{Boundaries of \qit{before~\dots} clauses in the
framework of this thesis}
\label{before_clauses_table}
\end{table}
According to table \vref{period_adverbials_table}, in \pref{bac:12}
where the main clause is a state, the flight must have been at gate 2
some time during the period of the \qit{before~\dots} clause,
i.e.\ for some time before runway 2 started to be open. In
\pref{bac:10} (activity main clause), the flight must
have circled for some time before runway 2 started to be open, and in
\pref{bac:11} (culminating activity main clause) the inspections must
have both started and been completed before runway 2 started to be
open. (As with the \qit{before~\dots} adverbials of section
\ref{period_adverbials}, in \pref{bac:11} it would be better if the
\textsc{Nlitdb}\xspace could also report inspections that started but were not
completed before runway 2 opened, warning the user that these
inspections were not completed before runway 2 opened.)
\begin{examps}
\item Was any flight at gate 2 before runway 2 was open? \label{bac:12}
\item Did any flight circle before runway 2 was open? \label{bac:10}
\item Which flights did J.Adams inspect before runway 2 was open?
\label{bac:11}
\end{examps}
In the case of \qit{after~\dots} clauses, when the aspectual class of
the \qit{after~\dots} clause is state, the after-point is a time-point
where the situation of the \qit{after~\dots} clause either starts or
ends. \pref{bac:1a}, for example, has two readings: that the
flights must have departed after runway 2 \emph{started} to be
open, or that the flights must have departed after
runway 2 \emph{stopped} being open. Similar comments apply to
\pref{bac:1.1a} and \pref{bac:2a}.
\begin{examps}
\item Which flights departed after runway 2 was open? \label{bac:1a}
\item Which flights departed after the emergency system was in
operation? \label{bac:1.1a}
\item Which flights departed after BA737 was circling? \label{bac:2a}
\end{examps}
In sentences like \pref{bac:3a}, where the aspectual class of the
\qit{after~\dots} clause is consequent state, the after-point can only
be the beginning of the consequent period (the first time-point after
the completion of the inspection). It cannot be the end of the
consequent period: the end of the consequent period is the end of
time; if the after-point were the end of the consequent period, the
departures of \pref{bac:3a} would have to occur after the end of time,
which is impossible. This explains the distinction between
lexical/progressive and consequent states in table
\ref{after_clauses_table}.
\begin{examps}
\item Which flights departed after J.Adams had inspected BA737? \label{bac:3a}
\end{examps}
\begin{table}[t]
\begin{center}
{\small
\begin{tabular}{|l|l|}
\hline
aspectual class of & after-point \\
\qit{after~\dots} clause & (left boundary of period specified by
\qit{after~\dots} clause) \\
\hline \hline
lexical/progressive state & time-point where situation of
\qit{after~\dots} clause
starts or ends \\
\hline
consequent state & time-point where consequent period of
\qit{after~\dots} clause starts \\
\hline
activity & time-point where situation of \qit{after~\dots} clause
ends \\
\hline
culm.\ activity & time-point where situation of \qit{after~\dots}
clause is completed \\
\hline
point & time-point where situation of \qit{after~\dots} clause occurs \\
\hline
\end{tabular}
}
\end{center}
\caption{Boundaries of \qit{after~\dots} clauses in the framework of this thesis}
\label{after_clauses_table}
\end{table}
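
Tables \ref{before_clauses_table} and \ref{after_clauses_table} can be
summarised as a lookup from the aspectual class of the subordinate clause
to the candidate before/after-points. The following sketch is only an
illustration (not part of the framework of this thesis); the labels are
ad hoc strings.
{\small
\begin{verbatim}
BEFORE_POINT = {
    "state":                {"start of situation"},
    "activity":             {"start of situation"},
    "culminating activity": {"start of situation", "completion of situation"},
    "point":                {"time of occurrence"},
}

AFTER_POINT = {
    "lexical/progressive state": {"start of situation", "end of situation"},
    "consequent state":          {"start of consequent period"},
    "activity":                  {"end of situation"},
    "culminating activity":      {"completion of situation"},
    "point":                     {"time of occurrence"},
}

def boundary_points(kind, aspectual_class):
    """kind is 'before' or 'after'; returns the candidate time-points that
    can act as the before-point or after-point."""
    table = BEFORE_POINT if kind == "before" else AFTER_POINT
    return table[aspectual_class]

# boundary_points("after", "lexical/progressive state")
#   -> {'start of situation', 'end of situation'}
\end{verbatim}
}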
\paragraph{Point \qit{before/after~\dots} clause:}
If the aspectual class of the \qit{before~\dots} or \qit{after~\dots}
clause is point, the before/after-point is the
time-point where the instantaneous situation of the subordinate
clause occurs. In \pref{bac:20}, for example, the before/after-point is
the point where BA737 reached gate 2.
\begin{examps}
\item Which flights departed before/after BA737 reached gate 2?
\label{bac:20}
\end{examps}
\paragraph{Activity \qit{before/after~\dots} clause:}
With activity \qit{before/after~\dots} clauses, I
consider the before-point to be a time-point where the situation of
the \qit{before~\dots} clause starts, and the after-point to be a
point where the situation of the \qit{after~\dots} clause
ends. In \pref{bac:23} and \pref{bac:24}, for example, the departures
must have occurred before BA737 \emph{started} to taxi or circle. In
\pref{bac:25} and \pref{bac:26}, the departures must have occurred
after BA737 \emph{stopped} taxiing or circling.
\begin{examps}
\item Which flights departed before BA737 taxied? \label{bac:23}
\item Which flights departed before BA737 circled? \label{bac:24}
\item Which flights departed after BA737 taxied? \label{bac:25}
\item Which flights departed after BA737 circled? \label{bac:26}
\end{examps}
Perhaps another reading is sometimes possible with
\qit{after~\dots} clauses: that the after-point is a time-point where
the situation of the \qit{after~\dots} clause \emph{starts} (e.g.\
\pref{bac:26} would refer to departures that occurred
after BA737 \emph{started} to circle). This reading, however, does not seem
very likely, and for simplicity I ignore it.
\paragraph{Culminating activity \qit{before/after~\dots} clause:} With
\qit{after~\dots} clauses whose aspectual class is culminating
activity, I consider the after-point to be a time-point where the
situation of the \qit{after~\dots} clause reaches its completion. In
\pref{bac:27}, the departures must have occurred after
the completion of the inspection, and in \pref{bac:28} they must have
occurred after the time-point where BA737 reached gate 2.
\begin{examps}
\item Which flights departed after J.Adams inspected BA737?
\label{bac:27}
\item Which flights departed after BA737 taxied to gate 2?
\label{bac:28}
\end{examps}
With culminating activity \qit{before~\dots} clauses, I allow the
before-point to be a time-point where the situation of the
\qit{before~\dots} clause either starts or reaches its completion. In
the airport domain, the
first reading seems the preferred one in \pref{bac:30} (the flights
must have departed before the \emph{beginning} of the inspection). The
second reading seems the preferred one in \pref{bac:33} (the flights
must have departed before the \emph{completion} of the landing). Both
readings seem possible in \pref{bac:31}.
\begin{examps}
\item Which flights departed before J.Adams inspected BA737?
\label{bac:30}
\item Which flights departed before BA737 landed? \label{bac:33}
\item Which flights departed before BA737 taxied to gate 2?
\label{bac:31}
\end{examps}
If the first reading is adopted (the situation of the
\qit{before~\dots} clause \emph{starts} at the before-point) and the
\qit{before~\dots} clause is in the simple past, it is unclear if
the situation of the \qit{before~\dots} clause must have necessarily
reached its climax (the simple past of culminating
activity verbs normally requires this;
see section \ref{simple_past}). For example, let us assume that the
first reading is adopted in \pref{bac:30}. Should the before-point be
the beginning of an inspection that was necessarily completed, or can
it also be the beginning of an inspection that was never completed?
The framework of this thesis currently adopts the first approach, but
this is perhaps over-restrictive. It would probably be better if the
\textsc{Nlitdb}\xspace allowed the before-point to be the beginning of both inspections
that were and were not completed, warning the user
about inspections that were not completed. This is another case for
cooperative responses (section \ref{no_issues}).
\paragraph{Other uses:}
\qit{Before} and \qit{after} can be preceded by expressions specifying
durations (e.g.\ \pref{bac:40}). This use of \qit{before} and
\qit{after} is not considered in this thesis.
\begin{examps}
\item BA737 reached gate 2 five minutes after UK160 departed. \label{bac:40}
\end{examps}
\qit{Before~\dots} clauses also have counter-factual uses. For
example, in \pref{bac:43} (from \cite{Crouch}) the situation where the
car runs into the tree never takes place. This use of \qit{before} is
not considered in this thesis.
\begin{examps}
\item Smith stopped the car before it ran into the tree. \label{bac:43}
\end{examps}
The treatment of \qit{before~\dots} and \qit{after~\dots} clauses of
this thesis is similar to that of \cite{Ritchie}. Ritchie also views
\qit{before~\dots} and \qit{after~\dots} clauses as providing before
and after-points. As noted in section \ref{while_clauses}, however,
Ritchie uses only two aspectual classes. According to
Ritchie, in the case of \qit{before~\dots} clauses, the before-point
is a time-point where the situation of the \qit{before~\dots}
clause starts, and the situation of the main clause must simply start
before that point. In \pref{bac:11},
this requires the inspections to have simply
\emph{started} before the time-point where runway 2 started to be
open. In contrast, the framework of this thesis requires the
inspections to have been \emph{completed} before that time-point.
In the case of \qit{after~\dots} clauses, the main difference between
Ritchie's treatment and the treatment of this thesis concerns state
\qit{after~\dots} clauses. In that case, Ritchie allows the
after-point to be only the beginning of the situation of the
\qit{after~\dots} clause. In \pref{bac:1.1a}, this requires the
flights to have departed after the time-point where the system \emph{started}
to be in operation. The framework of this thesis
allows an additional reading, where the flights must have departed after
the time-point where the system \emph{stopped} being in operation.
\subsection{Tense coordination} \label{tense_coordination}
Some combinations of tenses in the main and subordinate clauses are
unacceptable (e.g.\ \pref{coo:0}, \pref{coo:2}).
This thesis makes no attempt to account for the unacceptability of
such combinations. The reader is referred
to \cite{Harper} and \cite{Brent1990} for methods that could be used
to detect and reject sentences like \pref{coo:0} and \pref{coo:2}.
\begin{examps}
\item \bad BA737 left gate 2 before runway 2 is free. \label{coo:0}
\item \bad Which runways are closed while runway 2 was circling? \label{coo:2}
\end{examps}
\section{Noun phrases and temporal reference} \label{noun_anaphora}
A question like \pref{nana:1} can refer either to the present sales
manager (asking the 1991 salary of the present sales manager) or to
the 1991 sales manager (asking the 1991 salary of the 1991 sales
manager). Similarly, \pref{nana:2} may refer either to present
students or last year's students. In \pref{nana:3.1}, \qit{which
closed runway} probably refers to a runway that is \emph{currently}
closed, while in \pref{nana:3.5} \qit{a closed runway} probably refers
to a runway that was closed at the time of the landing.
\begin{examps}
\item What was the salary of the sales manager in 1991? \label{nana:1}
\item Which students failed in physics last year? \label{nana:2}
\item Which closed runway was open yesterday? \label{nana:3.1}
\item Did BA737 ever land on a closed runway in 1991? \label{nana:3.5}
\end{examps}
It seems that noun phrases (e.g.\ \qit{the sales manager}, \qit{which
students}, \qit{a closed runway}) generally refer either to the
present or to the time of the verb tense (if this time is different
from the present). In \pref{nana:1}, the simple past tense refers to
some time in 1991. Therefore, there are two options:
\qit{the sales manager} can refer either to the present sales manager
or to somebody who was the sales manager in 1991. Similar comments
apply to \pref{nana:2}. In contrast, in \pref{nana:3} the
verb tense refers to the present. Hence, there is only one
possibility: \qit{the sales manager} refers to the present sales manager.
\begin{examps}
\item What is the salary of the sales manager? \label{nana:3}
\end{examps}
In \pref{nana:3.1}, the verb tense refers to a time (within the
previous day) where the runway was open. There should be two readings:
it should be possible for \qit{which closed runway} to refer either to
a currently closed runway, or to a runway that was closed at the time
it was open. Since a runway cannot be closed at a time when it
is open, the second reading is ruled out. (This clash, however, cannot
be spotted easily by a \textsc{Nlitdb}\xspace without some inferential capability.)
The hypothesis that noun phrases refer either to the present or to the
time of the verb tense is not always adequate. For example, a person
submitting \pref{nana:5} to the \textsc{Nlitdb}\xspace of a university most probably
refers to \emph{previous} students of the university. In contrast, the
hypothesis predicts that the question can refer only to \emph{current}
students. (Similar examples can be found in \cite{Enc1986}.)
\begin{examps}
\item How many of our students are now professors? \label{nana:5}
\end{examps}
The hypothesis also predicts that \pref{nana:6} can refer only to
current Prime Ministers or to persons that were Prime Ministers at the
time they were born (an extremely unlikely reading). There is, however, a reading where the question refers to all past and present Prime
Ministers. This reading is incorrectly ruled out by the hypothesis.
\begin{examps}
\item Which Prime Ministers were born in Scotland? \label{nana:6}
\end{examps}
Hinrichs \cite{Hinrichs} argues that determining the
times to which noun phrases refer is part of a more general
problem of determining the entities to which noun phrases refer.
According to Hinrichs, a noun phrase like \qit{every
admiral} generally refers to anybody who was, is, or will be an admiral
of any fleet in the world at any time. If, however, \pref{nana:8}
is uttered in a context where the current personnel of the U.S.\
Pacific fleet is being discussed, the temporal scope of \qit{every
admiral} is restricted to current admirals, in the same way that the
scope of \qit{every admiral} is restricted to admirals of the U.S.\
Pacific fleet (e.g.\ \pref{nana:8} does not mean that all Russian
admirals also graduated from Annapolis).
\begin{examps}
\item Every admiral graduated from Annapolis. \label{nana:8}
\end{examps}
The fact that Hinrichs does not limit the times of the noun phrases to
the present and the time of the verb tense is in accordance with the
fact that \qit{our students} in \pref{nana:5} is not limited to
present students, and the fact that \qit{which Prime
Ministers} in \pref{nana:6} may refer to all past and present Prime
Ministers. Hinrichs' approach,
however, requires some mechanism to restrict the scope of noun phrases
as the discourse evolves. Hinrichs offers only a very limited sketch of
how such a mechanism could be constructed. Also, in the absence of
previous discourse, Hinrichs' treatment suggests that
\pref{nana:1} refers to the sales managers of all times, an unlikely
interpretation. The hypothesis that noun phrases refer either to the
present or to the time of the verb tense performs better in this case.
Given these deficiencies of Hinrichs' approach, I adopt the initial
hypothesis that noun phrases refer to the present or the time of the
verb tense. (An alternative approach would be to attempt to merge this
hypothesis with Hinrichs' method. \cite{Dalrymple1988} moves in
this direction.)
A further improvement can be made to the hypothesis that noun phrases
refer to the present or the time of the verb tense. When a noun phrase
is the complement of the predicative \qit{to be}, it seems that the
noun phrase can refer only to the time of the verb
tense. \pref{nana:11}, for example, can only be a request to report
the 1991 sales manager, not the current sales manager. Similarly,
\pref{nana:11.5} cannot mean that J.Adams is the current sales
manager. This also accounts for the fact that in \pref{nana:1}, unlike
\qit{the sales manager} which can refer either to the present or
1991, \qit{the salary of the sales manager} (the complement of
\qit{was}) can refer only to a 1991 salary, not to a present
salary. (I assume that the restriction that the complement of the
predicative \qit{to be} must refer to the time of the verb tense does
not extend to noun phrases that are subconstituents of that
complement, like \qit{the sales manager} in \pref{nana:1}.) The same
restriction applies to bare adjectives used as complements of the
predicative \qit{to be}. In \pref{nana:3.1}, for example, \qit{open} can only
refer to runways that were open on the previous day. It cannot refer
to currently open runways.
\begin{examps}
\item Who was the sales manager in 1991? \label{nana:11}
\item J.Adams was the sales manager in 1991. \label{nana:11.5}
\end{examps}
The hypothesis that noun phrases refer to the present or the time of
the verb tense does not apply when a temporal adjective (e.g.\
\qit{current}) specifies explicitly the time of the noun phrase (e.g.\
\pref{nana:9}). (Although temporal adjectives are generally not considered in
this thesis, I support \qit{current} so that this point can be illustrated.)
\begin{examps}
\item Which current students failed in Physics last year?
\label{nana:9}
\end{examps}
In chapter \ref{English_to_TOP}, an additional mechanism will be
introduced that allows the person configuring the \textsc{Nlitdb}\xspace to force
some noun phrases to be treated as always referring to the time of the
verb tense, or as always referring to the present.
\section{Temporal anaphora} \label{temporal_anaphora}
There are several English expressions (e.g.\ \qit{that time}, \qit{the
following day}, \qit{then}, \qit{later}) that refer implicitly to
contextually salient times, in a way that is similar to how pronouns,
possessive determiners, etc.\ refer to contextually salient world
entities (the terms \emph{temporal} and \emph{nominal anaphora} were
used in section \ref{no_issues} to refer to these two phenomena; the
parallels between temporal and nominal anaphora are discussed in
\cite{Partee1984}). For example, the user of a
\textsc{Nlitdb}\xspace may submit \pref{tan:1}, followed by \pref{tan:2}. In
\pref{tan:2}, \qit{at that time} refers to the time when John became
manager (temporal anaphora). In a similar manner, \qit{he} refers to
John (nominal anaphora).
\begin{examps}
\item When did John become manager? \label{tan:1}
\item Was he married at that time? \label{tan:2}
\end{examps}
Names of months, days, etc.\ often have a similar temporal anaphoric
nature. For example, in a context where several questions about the
1990 status of a company have just been asked, \pref{tan:6} most
probably refers to the January of 1990, not any other January. In the
absence of previous questions, \pref{tan:6} most probably refers to
the January of the current year. (See section 5.5.1 of \cite{Kamp1993}
for related discussion.)
\begin{examps}
\item Who was the sales manager in January? \label{tan:6}
\end{examps}
Verb tenses also seem to have a temporal anaphoric nature (the term
\emph{tense anaphora} is often used in this case). For example, the
user may ask \pref{tan:7} (let us assume that the response is
``\sys{no}''), followed by \pref{tan:8}. In that case, the simple past
\qit{was} of \pref{tan:8} does not refer to an arbitrary past time, it
refers to the past time of the previous question, i.e.\ 1993.
\begin{examps}
\item Was Mary the personnel manager in 1993? \label{tan:7}
\item Who was the personnel manager? \label{tan:8}
\end{examps}
The anaphoric nature of verb tenses is clearer in multi-sentence text
(see \cite{Hinrichs1986}, \cite{Webber1988}, \cite{Kamp1993},
\cite{Kameyama1993} for related work). In \pref{tan:9}, for example,
the simple past \qit{landed} refers to a landing that happened
immediately after the permission of the first sentence was given. It
does not refer to an arbitrary past time at which BA737 landed on runway
2. Similar comments apply to \qit{taxied}.
\begin{examps}
\item BA737 was given permission to land at 5:00pm. It landed on
runway 2, and taxied to gate 4. \label{tan:9}
\end{examps}
In dialogues like the one in \pref{tan:7} -- \pref{tan:8}, a
simplistic treatment of tense anaphora is to
store the time of the adverbial of \pref{tan:7}, and to require the
simple past of \pref{tan:8} to refer to
that time. (A more elaborate version of this approach will be
discussed in section \ref{lt_anaphora}.)
The behaviour of noun phrases like \qit{the sales manager} of section
\ref{noun_anaphora} can be seen as a case of temporal
anaphora. This is the only type of temporal anaphora that is supported
by the framework of this thesis. Expressions like \qit{at that time},
\qit{the following day}, etc.\ are not supported, and tenses referring
to the past are taken to refer to \emph{any} past time. For example,
\pref{tan:8} is taken to refer to anybody who was the personnel
manager at any past time. The reader is also reminded (section
\ref{no_issues}) that nominal anaphora is not considered in this
thesis.
\section{Other phenomena that are not supported} \label{ling_not_supported}
This section discusses some further phenomena that are not supported
by the framework of this thesis.
\paragraph{Cardinality and duration questions:}
Questions about the cardinality of a set or the duration of a
situation (e.g.\ \pref{oiss:1}, \pref{oiss:2}) are not
supported. (\textsc{Top}\xspace is currently not powerful enough to express the
meanings of these questions.)
\begin{examps}
\item How many flights have landed today? \label{oiss:1}
\item For how long was tank 2 empty? \label{oiss:2}
\end{examps}
\paragraph{Cardinalities and plurals:}
Expressions specifying cardinalities of sets (e.g.\ \qit{eight
passengers}, \qit{two airplanes}) are not supported (this does not
include duration expressions like \qit{five hours}, which are
supported). Expressions of this kind give rise to a distinction
between \emph{distributive} and \emph{collective} readings
\cite{Stirling1985} \cite{Crouch2}. \pref{oiss:10}, for example, has a
collective reading where the eight passengers arrive at the same
time, and a distributive one where there are eight separate
arrivals. This distinction was not explored during the work of this thesis.
\textsc{Top}\xspace is also currently not powerful enough to express cardinalities of
sets.
\begin{examps}
\item Eight passengers arrived. \label{oiss:10}
\end{examps}
The framework of this thesis accepts plural noun phrases
introduced by \qit{some} and \qit{which} (e.g.\ \qit{some flights},
\qit{which passengers}), but it treats them semantically as
singular. For example, \pref{oiss:11} and \pref{oiss:12} are treated
as having the same meanings as \pref{oiss:11.1} and \pref{oiss:12.1}
respectively.
\begin{examps}
\item Which flights landed? \label{oiss:11}
\item Which flight landed? \label{oiss:11.1}
\item Some flights entered sector 2. \label{oiss:12}
\item A flight entered sector 2. \label{oiss:12.1}
\end{examps}
\paragraph{Quantifiers:}
Expressions introducing universal quantifiers at the logical level
(e.g.\ \qit{every}, \qit{all}) are not supported. This leaves only
existential quantifiers (and an interrogative version of them, to be
discussed in chapter \ref{TOP_chapter}) at the logical level,
avoiding issues related to quantifier scoping (see also section
\ref{quantif_scoping}). It also simplifies the semantics of \textsc{Top}\xspace and the
mapping from \textsc{Top}\xspace to \textsc{Tsql2}\xspace.
\paragraph{Conjunction, disjunction, and negation:}
Conjunctions of words or phrases are not supported. Among other
things, this avoids phenomena related to sequencing of events. For
example, \pref{oiss:15} is understood as saying that the patient died
\emph{after} (and probably as a result of) being given Qdrug (cf.\
\pref{oiss:16} which sounds odd). In contrast, in \pref{oiss:17} the
patient was given Qdrug \emph{while} he had high fever. (See, for
example, \cite{Hinrichs1986}, \cite{Hinrichs}, \cite{Webber1988},
\cite{Kamp1993}, \cite{terMeulen1994}, and \cite{Hwang1994} for
related work.)
\begin{examps}
\item Which patient was given Qdrug and died? \label{oiss:15}
\item \odd Which patient died and was given Qdrug? \label{oiss:16}
\item Which patient had high fever and was given Qdrug? \label{oiss:17}
\end{examps}
Expressions introducing disjunction or negation (e.g.\ \qit{or},
\qit{either}, \qit{not}, \qit{never}) are also not supported. This
simplifies the semantics of \textsc{Top}\xspace and the \textsc{Top}\xspace to \textsc{Tsql2}\xspace mapping. Not
supporting negation also avoids various temporal phenomena related
to negation (see section 5.2.5 of \cite{Kamp1993}), and claims that
negation causes aspectual shifts (see, for example, \cite{Dowty1986}
and \cite{Moens}).
\paragraph{Relative clauses:}
Relative clauses are also not supported. Relative clauses require
special temporal treatment. \pref{oiss:20}, for example, most probably
does not refer to a runway that was closed at an \emph{arbitrary} past
time; it probably refers to a runway that was closed at the time of
the landing. The relation between the time of the relative clause and
that of the main clause can vary. In \pref{oiss:21} (from
\cite{Dowty1986}), for example, the woman may have seen John during,
before, or even after the stealing.
\begin{examps}
\item Which flight landed on a runway that was closed? \label{oiss:20}
\item The woman that stole the book saw John. \label{oiss:21}
\end{examps}
Relative clauses can also be used with nouns that refer to the
temporal ontology (e.g.\ \qit{period} in \pref{oiss:22}). Additional
temporal phenomena involving relative clauses are discussed in section
5.5.4.2 of \cite{Kamp1993}.
\begin{examps}
\item Who was fired during the period that J.Adams was
personnel manager? \label{oiss:22}
\end{examps}
\paragraph{Passives:}
Finally, I have concentrated on active voice verb forms. This
simplifies the \textsc{Hpsg}\xspace grammar of chapter 4. It should be easy to extend
the framework of this thesis to cover passive forms as well.
\section{Summary}
The framework of this thesis uses an aspectual taxonomy of four
classes (states, points, activities, and culminating activities). This
taxonomy classifies verb forms, verb phrases, clauses, and
sentences. Whenever the \textsc{Nlitdb}\xspace is configured for a new application,
the base form of each verb is assigned to one of the four aspectual
classes. All other verb forms normally inherit the aspectual class of
the base form. Verb phrases, clauses, and sentences normally inherit
the aspectual classes of their main verb forms. Some linguistic mechanisms
(e.g.\ progressive tenses, or some temporal adverbials), however, may
cause the aspectual class of a verb form to differ from that of the
base form, or the aspectual class of a verb phrase, clause, or
sentence to differ from that of its main verb form. The
aspectual taxonomy plays an important role in most time-related
linguistic phenomena.
Six tenses (simple present, simple past, present continuous, past
continuous, present perfect, and past perfect) are supported, with
various simplifications introduced in their meanings. Some
special temporal verbs were identified (e.g.\ \qit{to happen}, \qit{to
start}); from these only \qit{to start}, \qit{to begin}, \qit{to
stop}, and \qit{to finish} are supported.
Some nouns have a special temporal nature. For example,
some introduce situations (e.g.\ \qit{inspection}), others specify
temporal order (e.g.\ \qit{predecessor}), and others refer to
entities of the temporal ontology (e.g.\ \qit{day}, \qit{period},
\qit{event}). From all these, only nouns like \qit{year}, \qit{month},
\qit{day}, etc.\ (and proper names like \qit{Monday}, \qit{January}, and
\qit{1/5/92}) are supported. Nouns referring to more abstract temporal
entities (e.g.\ \qit{period}, \qit{event}) are not supported.
No temporal adjectives (e.g.\ \qit{first}, \qit{earliest}) are
handled, with the only exception of \qit{current} which is supported
to demonstrate the anaphoric behaviour of some noun phrases.
Among temporal adverbials, only punctual adverbials (e.g.\ \qit{at
5:00pm}), \qit{for~\dots} and \qit{in~\dots}
duration adverbials, and period adverbials introduced by \qit{on}, \qit{in},
\qit{before}, or \qit{after}, as well as \qit{today} and
\qit{yesterday} are handled. Frequency, order, or other adverbials
that specify boundaries (e.g.\ \qit{twice}, \qit{for the second time},
\qit{since 1992}) are not supported.
Only subordinate clauses introduced by \qit{while}, \qit{before}, and
\qit{after} are handled (e.g.\ clauses introduced by \qit{when} or
\qit{since} and relative clauses are not supported). The issue of
tense coordination between main and subordinate clauses is ignored.
Among temporal anaphora phenomena, only the temporal anaphoric nature
of noun phrases like \qit{the sales manager} is
supported. Proper names like \qit{May} or \qit{Monday} are
taken to refer to \emph{any} May or Monday. Similarly, past tenses
are treated as referring to \emph{any} past time. Temporal anaphoric
expressions like \qit{that time} or \qit{the following day} are not
allowed. (Nominal anaphoric expressions, e.g.\ \qit{he}, \qit{her
salary}, are also not allowed.)
The framework of this thesis does not support cardinality or duration
queries (\qit{How many~\dots?}, \qit{How long~\dots?}) and cardinality
expressions (e.g.\ \qit{five flights}). Plurals introduced by
\qit{which} and \qit{some} (e.g.\ \qit{which flights}, \qit{some
gates}) are treated semantically as singular. Conjunctions of words or
phrases, and expressions introducing universal quantifiers,
disjunction, or negation are also not supported. Finally, only active
voice verb forms have been considered, though it should be easy to
extend the mechanisms of this thesis to support passive voice as well.
Table \ref{coverage_table} summarises the linguistic coverage of the
framework of this thesis.
\begin{table}
\begin{tabular}{|l|l|}
\hline
verb tenses & \supp simple present (excluding scheduled meaning) \\
& \supp simple past \\
& \supp present continuous (excluding futurate meaning) \\
& \supp past continuous (excluding futurate meaning) \\
& \supp present perfect (treated as simple past) \\
& \supp past perfect \\
& \nosupp other tenses \\
\hline
temporal verbs & \supp \qit{to start}, \qit{to begin}, \qit{to stop},
\qit{to finish} \\
& \nosupp other temporal verbs (e.g.\ \qit{to happen},
\qit{to follow}) \\
\hline
temporal nouns & \supp \qit{year}, \qit{month}, \qit{day}, etc.\ \\
& \nosupp \qit{period}, \qit{event}, \qit{time}, etc.\ \\
& \nosupp nouns introducing situations (e.g.\
\qit{inspection}) \\
& \nosupp nouns of temporal order (e.g.\ \qit{predecessor}) \\
\hline
temporal adjectives & \nosupp (only \qit{current}) \\
\hline
temporal adverbials & \supp punctual adverbials (e.g.\ \qit{at 5:00pm})\\
& \supp period adverbials (only those introduced by
\qit{on}, \qit{in}, \\
& \ \ \ \ \qit{before}, or \qit{after}, and
\qit{today}, \qit{yesterday}) \\
& \supp \qit{for~\dots} adverbials \\
& \supp \qit{in~\dots} duration adverbials (only
with culm.\ act.\ verbs) \\
& \nosupp frequency adverbials (e.g.\ \qit{twice}) \\
& \nosupp order adverbials (e.g.\ \qit{for the
second time}) \\
& \nosupp other boundary adverbials (e.g.\
\qit{since 1987}) \\
\hline
subordinate clauses & \supp \qit{while~\dots} clauses \\
& \supp \qit{before~\dots} clauses \\
& \supp \qit{after~\dots} clauses \\
& \nosupp relative clauses \\
& \nosupp other subordinate clauses (e.g.\ introduced
by \qit{when}) \\
& \nosupp tense coordination between
main-subordinate clauses \\
\hline
anaphora & \supp noun phrases and temporal reference \\
& \nosupp \qit{January}, \qit{August}, etc.\ \\
& \ \ \ \ (taken to refer to any January, August, etc.) \\
& \nosupp tense anaphora \\
& \ \ \ \ (past tenses taken to refer to any past time) \\
& \nosupp \qit{that time}, \qit{the following day}, etc.\ \\
& \nosupp nominal anaphora (e.g.\ \qit{he}, \qit{her salary}) \\
\hline
other phenomena & \nosupp cardinality and duration queries \\
& \ \ \ \ (\qit{How many~\dots?}, \qit{How long~\dots?}) \\
& \nosupp cardinality expressions (e.g.\ \qit{five flights})\\
& \nosupp plurals (treated as singulars) \\
& \nosupp conjunctions of words or phrases \\
& \nosupp expressions introducing universal quantifiers, \\
& \ \ \ \ disjunction, negation \\
& \nosupp passive voice \\
\hline
\end{tabular}
\caption{The linguistic coverage of the framework of this thesis}
\label{coverage_table}
\end{table}
\chapter{The TOP Language} \label{TOP_chapter}
\proverb{Time will tell.}
\section{Introduction} \label{top_intro}
This chapter defines \textsc{Top}\xspace, the intermediate representation language
of this thesis. As noted in section \ref{temp_log_intro}, \textsc{Top}\xspace
employs temporal operators. \pref{tintro:1}, for example, is
represented in \textsc{Top}\xspace as \pref{tintro:2}. Roughly speaking, the \ensuremath{\mathit{Past}}\xspace
operator requires $contain(tank2, water)$ to be true at some past time
$e^v$, and the \ensuremath{\mathit{At}}\xspace operator requires that time to fall within
1/10/95. The answer to \pref{tintro:1} is affirmative iff
\pref{tintro:2} evaluates to true.
\begin{examps}
\item Did tank 2 contain water (some time) on 1/10/95? \label{tintro:1}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{1/10/95}, \ensuremath{\mathit{Past}}\xspace[e^v, contain(tank2, water)]]$
\label{tintro:2}
\end{examps}
An alternative operator-less approach is to introduce time as an
extra argument of each predicate (section \ref{temp_log_intro}). I use
temporal operators because they lead to more compact formulae, and
because they make the semantic contribution of each linguistic
mechanism easier to see (in \pref{tintro:2}, the simple
past tense contributes the \ensuremath{\mathit{Past}}\xspace operator, while the \qit{on~\dots}
adverbial contributes the \ensuremath{\mathit{At}}\xspace operator).
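For illustration only (the notation in the following formula is
invented for this example and is not used elsewhere in this thesis),
under such an operator-less approach \pref{tintro:1} might be rendered
roughly as:
\[ contain(tank2, water, e^v) \land before(e^v, now) \land
   within(e^v, \mathit{1/10/95}) \]
where the extra argument $e^v$ carries the event time explicitly, and
the constraints that the \ensuremath{\mathit{Past}}\xspace and \ensuremath{\mathit{At}}\xspace operators impose in
\pref{tintro:2} have to be stated as separate conjuncts ($before$ and
$within$ being auxiliary temporal predicates).
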
\textsc{Top}\xspace is period-based, in the sense that the truth of a \textsc{Top}\xspace formula
is checked with respect to a time-period (a segment of the time-axis)
rather than an individual time-point. (The term ``period'' is used
here to refer to what other authors call ``intervals''; see section
\ref{temporal_ontology} below.) A \textsc{Top}\xspace formula may be true at a
time-period without being true at the subperiods of that period.
Actually, following the Reichenbachian tradition \cite{Reichenbach},
\textsc{Top}\xspace formulae are evaluated with respect to more than one time:
\emph{speech time} (time at which the question is submitted),
\emph{event time} (time where the situation described by the formula
holds), and \emph{localisation time} (a temporal window within which
the event time must be placed; this is different from Reichenbach's
reference time, and similar to the ``location time'' of
\cite{Kamp1993}). While speech time is always a time-point, the event
and localisation times are generally periods, and this is why I
consider \textsc{Top}\xspace period-based. Period-based languages have been used in
\cite{Dowty1982}, \cite{Allen1984}, \cite{Lascarides},
\cite{Richards}, \cite{Pratt1995}, and elsewhere. Multiple temporal
parameters have been used by several researchers (e.g.\
\cite{Dowty1982}, \cite{Hinrichs}, \cite{Brent1990}, \cite{Crouch2}).
The term ``localisation time'' is borrowed from \cite{Crouch2}, where
$lt$ is a temporal window for $et$ as in \textsc{Top}\xspace.
Although the aspectual classes of linguistic expressions affect how
these expressions are represented in \textsc{Top}\xspace, it is not always possible
to tell the aspectual class of a linguistic expression by examining
the corresponding \textsc{Top}\xspace formula. The approach here is different from
those of \cite{Dowty1977}, \cite{Dowty1986}, \cite{Lascarides}, and
\cite{Kent}, where aspectual class is a property of formulae (or
denotations of formulae).
\textsc{Top}\xspace was greatly influenced by the representation language of
Pirie et al.\ \cite{Pirie1990} \cite{Crouch} \cite{Crouch2}, that was
used in a natural language front-end to a planner. \textsc{Top}\xspace, however,
differs in numerous ways from the language of Pirie et al.\ (several
of these differences will be mentioned in following sections).
\section{Syntax of TOP} \label{top_syntax}
This section defines the syntax of \textsc{Top}\xspace. Some informal comments about
the semantics of the language are also given to make the syntax
definition easier to follow. The semantics of \textsc{Top}\xspace will be defined
formally in following sections.
\paragraph{Terms:} Two disjoint sets of strings, \ensuremath{\mathit{CONS}}\xspace
\index{cons@\ensuremath{\mathit{CONS}}\xspace (set of all \textsc{Top}\xspace constants)}
(constants) and \ensuremath{\mathit{VARS}}\xspace
\index{vars@\ensuremath{\mathit{VARS}}\xspace (set of all \textsc{Top}\xspace variables)}
(variables), are assumed. I use the suffix ``$^v$'' to distinguish
variables from constants. For example, $\mathit{runway^v},
\mathit{gate1^v} \in \ensuremath{\mathit{VARS}}\xspace$, while $\mathit{ba737}, \mathit{1/5/94}
\in \ensuremath{\mathit{CONS}}\xspace$. \ensuremath{\mathit{TERMS}}\xspace
\index{terms@\ensuremath{\mathit{TERMS}}\xspace (set of all \textsc{Top}\xspace terms)}
(\textsc{Top}\xspace terms) is the set $\ensuremath{\mathit{CONS}}\xspace \union \ensuremath{\mathit{VARS}}\xspace$. (\textsc{Top}\xspace has no function
symbols.)
\paragraph{Predicate functors:} A set of strings \ensuremath{\mathit{PFUNS}}\xspace
\index{pfuns@\ensuremath{\mathit{PFUNS}}\xspace (set of all \textsc{Top}\xspace predicate functors)}
is assumed. These strings are used as predicate functors (see atomic
formulae below).
\paragraph{Complete partitioning names:} A set of strings \ensuremath{\mathit{CPARTS}}\xspace
\index{cparts@\ensuremath{\mathit{CPARTS}}\xspace (set of all \textsc{Top}\xspace complete partitioning names)}
is assumed. These strings represent \emph{complete partitionings} of
the time-axis. A complete partitioning of the time-axis is a set of
consecutive non-overlapping periods, such that the union of all the
periods covers the whole time-axis. (A formal definition will be given
in section \ref{top_model}.) For example, the word \qit{day}
corresponds to the complete partitioning that contains the period that
covers exactly the day 13/10/94, the period that covers exactly
14/10/94, etc. No day-period overlaps another one, and together all
the day-periods cover the whole time-axis. Similarly, \qit{month}
corresponds to the partitioning that contains the period for October
1994, the period for November 1994, etc. I use the suffix ``$^c$'' for
elements of \ensuremath{\mathit{CPARTS}}\xspace. For example, $\mathit{day}^c$ could represent the
partitioning of day-periods, and $\mathit{month}^c$ the partitioning
of month-periods.
\paragraph{Gappy partitioning names:} A set of strings \ensuremath{\mathit{GPARTS}}\xspace
\index{gparts@\ensuremath{\mathit{GPARTS}}\xspace (set of all \textsc{Top}\xspace gappy partitioning names)}
is assumed. These strings represent \emph{gappy partitionings} of the
time-axis. A gappy partitioning of the time-axis is a set of
non-overlapping periods, such that the union of all the periods does
\emph{not} cover the whole time-axis. For example, \qit{Monday}
corresponds to the gappy partitioning that contains the period which
covers exactly the Monday 17/10/94, the period that covers exactly the
Monday 24/10/94, etc. No Monday-period overlaps another Monday-period,
and all the Monday-periods together do not cover the whole
time-axis. I use the suffix ``$^g$'' for elements of \ensuremath{\mathit{GPARTS}}\xspace. For
example, $\mathit{monday}^g$ could represent the partitioning of
Monday-periods, and $\text{\textit{5:00pm}}^g$ the partitioning of all
5:00pm-periods (the period that covers exactly the 5:00pm minute of
24/10/94, the period that covers the 5:00pm minute of 25/10/94, etc.).
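
As a rough illustration only (the toy time-axis and the concrete
periods below are invented, and Python is used purely for exposition;
nothing in the formal definitions of this chapter depends on it), the
difference between complete and gappy partitionings can be pictured as
follows:
\begin{verbatim}
# Illustrative sketch: time-points are the integers 0..23 (one "day" of
# 24 "hours"), and a period is a pair (start, end) with start <= end.
PTS = range(24)

def points(period):
    start, end = period
    return set(range(start, end + 1))

def is_partitioning(periods, complete):
    """True iff the periods are pairwise disjoint and, when `complete`
    is True, jointly cover the whole toy time-axis."""
    covered = set()
    for p in periods:
        if points(p) & covered:          # overlaps an earlier period
            return False
        covered |= points(p)
    return covered == set(PTS) if complete else covered < set(PTS)

# A complete partitioning (four consecutive 6-hour periods):
print(is_partitioning([(0, 5), (6, 11), (12, 17), (18, 23)], True))   # True
# A gappy partitioning (one hour out of every six):
print(is_partitioning([(5, 5), (11, 11), (17, 17), (23, 23)], False)) # True
\end{verbatim}
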
\paragraph{Partitioning names:}
\index{parts@\ensuremath{\mathit{PARTS}}\xspace (set of all \textsc{Top}\xspace partitioning names)}
\ensuremath{\mathit{PARTS}}\xspace (partitioning names) is the set $\ensuremath{\mathit{CPARTS}}\xspace \union \ensuremath{\mathit{GPARTS}}\xspace$.
\paragraph{Atomic formulae:}
\index{aforms@\ensuremath{\mathit{AFORMS}}\xspace (set of all \textsc{Top}\xspace atomic formulae)}
\ensuremath{\mathit{AFORMS}}\xspace (atomic formulae) is the smallest possible set, such that:
\begin{itemize}
\item If $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$, and $\tau_1, \tau_2, \dots, \tau_n \in
\ensuremath{\mathit{TERMS}}\xspace$, then $\pi(\tau_1, \tau_2, \dots, \tau_n) \in
\ensuremath{\mathit{AFORMS}}\xspace$. $\pi(\tau_1, \tau_2, \dots, \tau_n)$ is called a
\emph{predicate}. $\tau_1, \tau_2, \dots, \tau_n$ are the
\emph{arguments} of the predicate.
\item
\index{part@$\ensuremath{\mathit{Part}}\xspace[\;]$ (used to select periods from partitionings)}
If $\sigma \in \ensuremath{\mathit{PARTS}}\xspace$, $\beta \in \ensuremath{\mathit{VARS}}\xspace$, and
$\nu_{ord} \in \{\dots, -3, -2, -1, 0\}$, then $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta,
\nu_{ord}] \in \ensuremath{\mathit{AFORMS}}\xspace$ and $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta] \in \ensuremath{\mathit{AFORMS}}\xspace$.
\end{itemize}
Greek letters are used as meta-variables, i.e.\ they stand for
expressions of \textsc{Top}\xspace. Predicates (e.g.\ $be\_at(ba737,
gate^v)$) describe situations in the world. $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta,
\nu_{ord}]$ means that $\beta$ is a period in the partitioning
$\sigma$. The $\nu_{ord}$ is used to select a particular period from
the partitioning. If $\nu_{ord} = 0$, then $\beta$ is the current
period of the partitioning (the one that contains the present
moment). If $\nu_{ord} < 0$, then $\beta$ is the $-\nu_{ord}$-th
period of the partitioning before the current one. When there is no
need to select a particular period from a partitioning, the
$\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$ form is used.
\paragraph{Yes/no formulae:} Yes/no formulae represent questions that
are to be answered with a \qit{yes}
or \qit{no} (e.g.\ \qit{Is BA737 circling?}). \ensuremath{\mathit{YNFORMS}}\xspace
\index{ynforms@\ensuremath{\mathit{YNFORMS}}\xspace (set of all \textsc{Top}\xspace yes/no formulae)}
is the set of all yes/no formulae. It is the smallest possible set,
such that if $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$, $\tau_1, \dots,
\tau_n \in \ensuremath{\mathit{TERMS}}\xspace$, $\phi, \phi_1, \phi_2 \in \ensuremath{\mathit{FORMS}}\xspace$, $\sigma_c \in
\ensuremath{\mathit{CPARTS}}\xspace$, $\nu_{qty} \in \{1,2,3,\dots\}$, $\beta$ is a
\textsc{Top}\xspace variable that does not occur in $\phi$, and $\tau$ is a \textsc{Top}\xspace
variable that does not occur in $\phi$ or a \textsc{Top}\xspace constant, all the
following hold. (The restriction that $\beta$ and $\tau$ must not be
variables that occur in $\phi$ is needed in the translation from \textsc{Top}\xspace
to \textsc{Tsql2}\xspace of chapter \ref{tdb_chapter}.)
\begin{itemize}
\item $\ensuremath{\mathit{AFORMS}}\xspace \subseteq \ensuremath{\mathit{YNFORMS}}\xspace$
\item $\phi_1 \land \phi_2 \in \ensuremath{\mathit{YNFORMS}}\xspace$
\index{^@$\land$ (\textsc{Top}\xspace's conjunction)}
\item
\index{pres@$\ensuremath{\mathit{Pres}}\xspace[\;]$ (used to refer to the present)}
\index{past@$\ensuremath{\mathit{Past}}\xspace[\;]$ (used to refer to the past)}
\index{perf@$\ensuremath{\mathit{Perf}}\xspace[\;]$ (used to express the past perfect)}
$\ensuremath{\mathit{Pres}}\xspace[\phi]$, $\ensuremath{\mathit{Past}}\xspace[\beta, \phi]$, $\ensuremath{\mathit{Perf}}\xspace[\beta, \phi] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\item
\index{at@$\ensuremath{\mathit{At}}\xspace[\;]$ (narrows the localisation time)}
$\ensuremath{\mathit{At}}\xspace[\tau, \phi]$, $\ensuremath{\mathit{At}}\xspace[\phi_1, \phi_2] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\item
\index{before@$\ensuremath{\mathit{Before}}\xspace[\;]$ (used to express \qit{before})}
\index{after@$\ensuremath{\mathit{After}}\xspace[\;]$ (used to express \qit{after})}
$\ensuremath{\mathit{Before}}\xspace[\tau, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\phi_1, \phi_2]$, $\ensuremath{\mathit{After}}\xspace[\tau,
\phi]$, $\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\item
\index{ntense@$\ensuremath{\mathit{Ntense}}\xspace[\;]$ (used when expressing nouns or adjectives)}
$\ensuremath{\mathit{Ntense}}\xspace[\beta, \phi]$, $\ensuremath{\mathit{Ntense}}\xspace[\mathit{now}^*, \phi] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\item
\index{for@$\ensuremath{\mathit{For}}\xspace[\;]$ (used to express durations)}
\index{fills@$\ensuremath{\mathit{Fills}}\xspace[\;]$ (requires $et = lt$)}
$\ensuremath{\mathit{For}}\xspace[\sigma_c, \nu_{qty}, \phi]$, $\ensuremath{\mathit{Fills}}\xspace[\phi] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\item
\index{begin@$\ensuremath{\mathit{Begin}}\xspace[\;]$ (used to refer to start-points of situations)}
\index{end@$\ensuremath{\mathit{End}}\xspace[\;]$ (used to refer to end-points of situations)}
$\ensuremath{\mathit{Begin}}\xspace[\phi]$, $\ensuremath{\mathit{End}}\xspace[\phi] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\item
\index{culm@$\ensuremath{\mathit{Culm}}\xspace[\;]$ (used to express non-progressives of culminating activity verbs)}
$\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)] \in \ensuremath{\mathit{YNFORMS}}\xspace$
\end{itemize}
No negation or disjunction connectives are defined, because English
expressions introducing these connectives are not considered (section
\ref{ling_not_supported}). For the same reason no universal
quantifiers are defined. All variables can be thought of as
existentially quantified. Hence, no explicit existential quantifier is
needed.
An informal explanation of \textsc{Top}\xspace's operators follows (\textsc{Top}\xspace's semantics
will be defined formally in following sections).
$\ensuremath{\mathit{Pres}}\xspace[\phi]$ means that $\phi$ is true at the
present. For example, \qit{Runway 2 is open.} is represented as
$\ensuremath{\mathit{Pres}}\xspace[open(runway2)]$. $\ensuremath{\mathit{Past}}\xspace[\beta, \phi]$ means that
$\phi$ is true at some past time $\beta$. The \ensuremath{\mathit{Perf}}\xspace operator is used
along with the \ensuremath{\mathit{Past}}\xspace operator to express the past perfect. For
example, \qit{Runway 2 was open.} is represented as $\ensuremath{\mathit{Past}}\xspace[e^v,
open(runway2)]$, and \qit{Runway 2 had been open.} as:
\[\ensuremath{\mathit{Past}}\xspace[e1^v,\ensuremath{\mathit{Perf}}\xspace[e2^v, open(runway2)]] \]
$\ensuremath{\mathit{At}}\xspace[\tau, \phi]$ means that $\phi$ holds some time
within a period $\tau$, and $\ensuremath{\mathit{At}}\xspace[\phi_1, \phi_2]$ means that
$\phi_2$ holds at some time where $\phi_1$ holds. For example,
\qit{Runway 2 was open (some time) on 1/1/94.} is represented as
$\ensuremath{\mathit{At}}\xspace[\mathit{1/1/94}, \ensuremath{\mathit{Past}}\xspace[e^v, open(runway2)]]$,
and \qit{Runway 2 was open (some time) while BA737 was circling.} as:
\[\ensuremath{\mathit{At}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, circling(ba737)], \ensuremath{\mathit{Past}}\xspace[e2^v, open(runway2)]]\]
$\ensuremath{\mathit{Before}}\xspace[\tau, \phi]$ means that $\phi$ is true at some
time before a period $\tau$, and $\ensuremath{\mathit{Before}}\xspace[\phi_1, \phi_2]$ means that
$\phi_2$ is true at some time before a time where $\phi_1$ is
true. $\ensuremath{\mathit{After}}\xspace[\tau, \phi]$ and $\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2]$ have similar
meanings. For example, \qit{Tank 2 was empty (some time) after
1/1/92.} is represented as $\ensuremath{\mathit{After}}\xspace[\mathit{1/1/92}, \ensuremath{\mathit{Past}}\xspace[e^v,
empty(tank2)]]$, and \qit{Tank 2 was empty (some time) before the bomb
exploded.} as:
\[
\ensuremath{\mathit{Before}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, explode(bomb)], \ensuremath{\mathit{Past}}\xspace[e2^v, empty(tank2)]]
\]
\ensuremath{\mathit{Ntense}}\xspace is used when expressing noun phrases (see section
\ref{noun_anaphora}). $\ensuremath{\mathit{Ntense}}\xspace[\beta, \phi]$ means that
at a time $\beta$ something has the property specified by $\phi$.
$\ensuremath{\mathit{Ntense}}\xspace[\mathit{now}^*, \phi]$ means that something has the
property specified by $\phi$ at the present. The reading of \qit{The
president was visiting Edinburgh.} that refers to the person who was
the president during the visit is represented as
$\ensuremath{\mathit{Ntense}}\xspace[e1^v, president(p^v)] \land \ensuremath{\mathit{Past}}\xspace[e1^v, visiting(p^v, edinburgh)]$.
In contrast, the reading that refers to the current president is
represented as:
\[\ensuremath{\mathit{Ntense}}\xspace[\mathit{now}^*,president(p^v)] \land
\ensuremath{\mathit{Past}}\xspace[e1^v, visiting(p^v, edinburgh)]
\]
$\ensuremath{\mathit{For}}\xspace[\sigma_c, \nu_{qty}, \phi]$ means that
$\phi$ holds throughout a period that is $\nu_{qty}$ $\sigma_c$-periods long.
\qit{Runway 2 was open for two days.} is represented as:
\[\ensuremath{\mathit{For}}\xspace[day^c,2, \ensuremath{\mathit{Past}}\xspace[e^v, open(runway2)]]
\]
The \ensuremath{\mathit{Fills}}\xspace operator is currently not
used in the framework of this thesis, but it could be used to capture
readings of sentences like \qit{Tank 2 was empty in 1992.} whereby the
situation of the verb holds \emph{throughout} the period of the
adverbial (see section \ref{period_adverbials}). $\ensuremath{\mathit{At}}\xspace[1992, \ensuremath{\mathit{Past}}\xspace[e^v,
\ensuremath{\mathit{Fills}}\xspace[empty(tank2)]]]$ means that the tank was empty
\emph{throughout} 1992, while $\ensuremath{\mathit{At}}\xspace[1992, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$
means that the tank was empty some time in 1992, but not necessarily
throughout 1992.
$\ensuremath{\mathit{Begin}}\xspace[\phi]$ means that $\phi$ starts to hold, and
$\ensuremath{\mathit{End}}\xspace[\phi]$ means that $\phi$ stops holding. For example, \qit{BA737
started to land.} can be represented as $\ensuremath{\mathit{Past}}\xspace[e^v,
\ensuremath{\mathit{Begin}}\xspace[landing(ba737)]]$, and \qit{Tank 2 stopped being empty.} as
$\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{End}}\xspace[empty(tank2)]]$.
Finally, \ensuremath{\mathit{Culm}}\xspace is used to represent sentences where verbs whose
base forms are culminating activities appear in tenses that require
some inherent climax to have been reached. The \ensuremath{\mathit{Culm}}\xspace operator will be
discussed in section \ref{culm_op}.
\paragraph{Wh-formulae:} \emph{Wh-formulae} are used to represent
questions that contain interrogatives (e.g.\ \qit{Which~\dots?},
\qit{Who~\dots?}, \qit{When~\dots?}). \ensuremath{\mathit{WHFORMS}}\xspace is the set of all
wh-formulae. $\ensuremath{\mathit{WHFORMS}}\xspace \defeq \ensuremath{\mathit{WHFORMS}}\xspace_1 \union \ensuremath{\mathit{WHFORMS}}\xspace_2$,
\index{whforms@\ensuremath{\mathit{WHFORMS}}\xspace (set of all \textsc{Top}\xspace wh-formulae)}
\index{whforms1@$\ensuremath{\mathit{WHFORMS}}\xspace_1$ (set of all \textsc{Top}\xspace wh-formulae with no $?_{mxl}$)}
\index{whforms2@$\ensuremath{\mathit{WHFORMS}}\xspace_2$ (set of all \textsc{Top}\xspace wh-formulae with a $?_{mxl}$)}
where:
\begin{itemize}
\item
\index{?@$?$ (\textsc{Top}\xspace's interrogative quantifier)}
$\ensuremath{\mathit{WHFORMS}}\xspace_1$ is the set of all expressions of the form $?\beta_1
\; ?\beta_2 \; \dots \; ?\beta_n \; \phi$, where $\beta_1, \beta_2,
\dots, \beta_n \in \ensuremath{\mathit{VARS}}\xspace$, $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, and each one of
$\beta_1, \beta_2, \dots, \beta_n$ occurs at least once within
$\phi$.
\item
\index{?@$?$ (\textsc{Top}\xspace's interrogative quantifier)}
\index{?mxl@$?_{mxl}$ (\textsc{Top}\xspace's interrogative-maximal quantifier)}
$\ensuremath{\mathit{WHFORMS}}\xspace_2$ is the set of all expressions of the form
$?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \; \dots \; ?\beta_n \; \phi$,
where $\beta_1, \beta_2, \beta_3, \dots, \beta_n \in \ensuremath{\mathit{VARS}}\xspace$, $\phi \in
\ensuremath{\mathit{YNFORMS}}\xspace$, each one of $\beta_2$, $\beta_3$, \dots, $\beta_n$ occurs
at least once within $\phi$, and $\beta_1$ occurs at least once within
$\phi$ as the first argument of a \ensuremath{\mathit{Past}}\xspace, \ensuremath{\mathit{Perf}}\xspace, \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, \ensuremath{\mathit{After}}\xspace,
or \ensuremath{\mathit{Ntense}}\xspace operator, or as the second argument of a \ensuremath{\mathit{Part}}\xspace operator.
\end{itemize}
``$?$'' is the \emph{interrogative quantifier}, and $?_{mxl}$ the
\emph{interrogative-maximal quantifier}. The interrogative quantifier
is similar to an explicit existential quantifier, but it has the
additional effect of reporting the values of its variables that
satisfy its scope. Intuitively, $?\beta_1 \; ?\beta_2 \; ?\beta_n \;
\phi$ means \qit{report all $\beta_1, \beta_2, \dots, \beta_n$ such
that $\phi$}. For example, \qit{Which runways are open?} is
represented as $?r^v \; \ensuremath{\mathit{Ntense}}\xspace[\mathit{now}^*, runway(r^v)] \land
\ensuremath{\mathit{Pres}}\xspace[open(r^v)]$. The constraint that each one of $\beta_1, \dots,
\beta_n$ must occur at least once within $\phi$ rules out meaningless
formulae like $?o^v \; \ensuremath{\mathit{Past}}\xspace[e^v, manager(john)]$, where the $o^v$ does not
have any relation to the rest of the formula. This constraint is
similar to the notion of \emph{safety} in \textsc{Datalog}\xspace \cite{Ullman}, and
it is needed in the translation from \textsc{Top}\xspace to \textsc{Tsql2}\xspace of chapter
\ref{tdb_chapter}.
The interrogative-maximal quantifier is similar, except that it
reports only \emph{maximal periods}. $?_{mxl}$ is intended to be used
only with variables that denote periods, and this is why in the case
of $?_{mxl}$, $\beta_1$ is required to occur within $\phi$ as the
first argument of a \ensuremath{\mathit{Past}}\xspace, \ensuremath{\mathit{Perf}}\xspace, \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, \ensuremath{\mathit{After}}\xspace, or \ensuremath{\mathit{Ntense}}\xspace
operator, or as the second argument of a \ensuremath{\mathit{Part}}\xspace operator (the
semantics of these operators ensure that variables occurring at these
positions denote periods). Intuitively, $?_{mxl}\beta_1 \;
?\beta_2 \; \dots \; ?\beta_n \; \phi$ means \qit{report all the maximal
periods $\beta_1$, and all $\beta_2$, \dots, $\beta_n$, such that
$\phi$}. The interrogative-maximal quantifier is used in \qit{When
\dots?} questions, where we want the answer to contain only the
\emph{maximal} periods during which a situation held, not all the
periods during which the situation held. If, for example, gate 2 was
open from 9:00am to 11:00am and from 3:00pm to 5:00pm, we want the
answer to \qit{When was gate 2 open?} to contain only the two maximal
periods 9:00am to 11:00am and 3:00pm to 5:00pm; we do not want the
answer to contain any subperiods of these two maximal periods (e.g.\
9:30am to 10:30am). To achieve this, the question is represented as
$?_{mxl}e^v \; \ensuremath{\mathit{Past}}\xspace[e^v, open(gate2)]$.
\paragraph{Formulae:}
\index{forms@\ensuremath{\mathit{FORMS}}\xspace (set of all \textsc{Top}\xspace formulae)}
\ensuremath{\mathit{FORMS}}\xspace is the set of all \textsc{Top}\xspace formulae. $\ensuremath{\mathit{FORMS}}\xspace \defeq \ensuremath{\mathit{YNFORMS}}\xspace
\union \ensuremath{\mathit{WHFORMS}}\xspace$.
\section{The temporal ontology} \label{temporal_ontology}
\paragraph{Point structure:} A \emph{point structure} for \textsc{Top}\xspace is an
ordered pair $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$, such that \ensuremath{\mathit{PTS}}\xspace
\index{pts@\ensuremath{\mathit{PTS}}\xspace (set of all time-points)}
is a non-empty set, $\prec$ \index{<@$\prec$ (precedes)} is a binary
relation on \ensuremath{\mathit{PTS}}\xspace (a subset of $\ensuremath{\mathit{PTS}}\xspace \times \ensuremath{\mathit{PTS}}\xspace$), and $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$ has the
following five properties:
\begin{description}
\item[transitivity:] If $t_1, t_2, t_3 \in \ensuremath{\mathit{PTS}}\xspace$, $t_1 \prec t_2$, and
$t_2 \prec t_3$, then $t_1 \prec t_3$.
\item[irreflexivity:] If $t \in \ensuremath{\mathit{PTS}}\xspace$, then $t \prec t$ does not hold.
\item[linearity:] If $t_1, t_2 \in \ensuremath{\mathit{PTS}}\xspace$ and $t_1 \not= t_2$, then
exactly one of the following holds: $t_1 \prec t_2$ or $t_2 \prec t_1$.
\item[left and right boundedness:] There is a $t_{first} \in \ensuremath{\mathit{PTS}}\xspace$,
\index{tfirst@$t_{first}$ (earliest time-point)}
such that for all $t \in \ensuremath{\mathit{PTS}}\xspace$, $t_{first} \preceq t$. Similarly, there is
a $t_{last} \in \ensuremath{\mathit{PTS}}\xspace$,
\index{tlast@$t_{last}$ (latest time-point)}
such that for all $t \in \ensuremath{\mathit{PTS}}\xspace$, $t \preceq t_{last}$.
\item[discreteness:] For every $t_1, t_2 \in \ensuremath{\mathit{PTS}}\xspace$, with $t_1 \not=
t_2$, there is at most a finite number of $t_3 \in \ensuremath{\mathit{PTS}}\xspace$, such
that $t_1 \prec t_3 \prec t_2$.
\end{description}
Intuitively, a point structure $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$ for \textsc{Top}\xspace is a
model of time. \textsc{Top}\xspace models time as being discrete, linear, bounded,
and consisting of time-points (see \cite{VanBenthem} for other time
models). \ensuremath{\mathit{PTS}}\xspace is the set of all time-points, and $t_1
\prec t_2$ means that the time-point $t_1$ precedes the
time-point $t_2$.
\paragraph{prev(t) and next(t):}
\index{prev@$prev()$ (previous time-point)}
\index{next@$next()$ (next time-point)}
If $t_1 \in \ensuremath{\mathit{PTS}}\xspace - \{t_{last}\}$, then $next(t_1)$ denotes a $t_2 \in
\ensuremath{\mathit{PTS}}\xspace$, such that $t_1 \prec t_2$ and for no $t_3 \in \ensuremath{\mathit{PTS}}\xspace$ is it true
that $t_1 \prec t_3 \prec t_2$. Similarly, if $t_1 \in \ensuremath{\mathit{PTS}}\xspace -
\{t_{first}\}$, then $prev(t_1)$ denotes a $t_2 \in \ensuremath{\mathit{PTS}}\xspace$, such that
$t_2 \prec t_1$ and for no $t_3 \in \ensuremath{\mathit{PTS}}\xspace$ is it true that $t_2 \prec
t_3 \prec t_1$. In the rest of this thesis, whenever $next(t)$ is
used, it is assumed that $t \not= t_{last}$. Similarly, whenever
$prev(t)$ is used, it is assumed that $t \not= t_{first}$.
\paragraph{Periods and instantaneous periods:}
A \emph{period} $p$ over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$ is a non-empty subset of
\ensuremath{\mathit{PTS}}\xspace with the following property:
\begin{description}
\item[convexity:] If $t_1, t_2 \in p$, $t_3 \in \ensuremath{\mathit{PTS}}\xspace$, and $t_1 \prec
t_3 \prec t_2$, then $t_3 \in p$.
\end{description}
The term ``interval'' is often used in the literature instead of
``period''. Unfortunately, \textsc{Tsql2}\xspace uses ``interval'' to refer to a
duration (see chapter \ref{tdb_chapter}). To avoid confusing the
reader when \textsc{Tsql2}\xspace will be discussed, I follow the \textsc{Tsql2}\xspace terminology
and use the term ``period'' to refer to convex sets of time-points.
\index{periods1@$\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ (set of all periods over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$)}
$\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ is the set of all periods over
$\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$. If $p \in \ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ and $p$
contains only one time-point, then $p$ is an \emph{instantaneous
period over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$}. $\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$
\index{instants1@$\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ (set of all instantaneous periods over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$)}
is the set of all instantaneous periods over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$. For
simplicity, I often write \ensuremath{\mathit{PERIODS}}\xspace
\index{periods@$\ensuremath{\mathit{PERIODS}}\xspace$ (set of all periods)}
and \ensuremath{\mathit{INSTANTS}}\xspace
\index{instants@$\ensuremath{\mathit{INSTANTS}}\xspace$ (set of all instantaneous periods)}
instead of $\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ and $\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace,
\prec}}$, and I often refer to simply ``periods'' and
``instantaneous periods'' instead of ``periods over $\tup{\ensuremath{\mathit{PTS}}\xspace,
\prec}$'' and ``instantaneous periods over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$''.
\index{periods*@$\ensuremath{\mathit{PERIODS}}\xspace^*$ ($\ensuremath{\mathit{PERIODS}}\xspace \union \emptyset$)}
$\ensuremath{\mathit{PERIODS}}\xspace^*_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ (or simply $\ensuremath{\mathit{PERIODS}}\xspace^*$)
is the set $\ensuremath{\mathit{PERIODS}}\xspace \union \{\emptyset\}$, i.e.\ $\ensuremath{\mathit{PERIODS}}\xspace^*$
is the same as $\ensuremath{\mathit{PERIODS}}\xspace$, except that it also contains the
empty set. (The reader is reminded that periods are non-empty sets.)
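
As an informal illustration (assuming, purely for exposition, that
time-points are integers), convexity is what the following small
Python check captures:
\begin{verbatim}
# Illustrative sketch: a set of integer time-points is a period iff it
# is non-empty and convex (it has no "gaps").
def is_period(s):
    return bool(s) and all(t in s for t in range(min(s), max(s) + 1))

print(is_period({3, 4, 5, 6}))   # True : convex, hence a period
print(is_period({3, 4, 6}))      # False: 5 is missing, so not convex
print(is_period({7}))            # True : an instantaneous period
print(is_period(set()))          # False: periods are non-empty
\end{verbatim}
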
\paragraph{Subperiods:} $p_1$ is a \emph{subperiod} of
$p_2$, iff $p_1, p_2 \in \ensuremath{\mathit{PERIODS}}\xspace$ and $p_1 \subseteq p_2$. In this
case I write $p_1 \subper p_2$.
\index{<sq@$\subper$ (subperiod)}
($p_1 \subseteq p_2$ is weaker than $p_1 \subper p_2$, because it
does not guarantee that $p_1, p_2 \in \ensuremath{\mathit{PERIODS}}\xspace$.)
Similarly, $p_1$ is a \emph{proper
subperiod} of $p_2$, iff $p_1, p_2 \in \ensuremath{\mathit{PERIODS}}\xspace$ and $p_1 \subset
p_2$. In this case I write $p_1 \propsubper
p_2$.
\index{<sq@$\propsubper$ (proper subperiod)}
\paragraph{Maximal periods:}
\index{mxlpers@$mxlpers()$ (maximal periods of a set or temporal element)}
If $S$ is a set of periods, then $\ensuremath{\mathit{mxlpers}}\xspace(S)$ is the set of
\emph{maximal periods} of $S$. $\ensuremath{\mathit{mxlpers}}\xspace(S) \defeq \{p \in S \mid
\text{for no } p' \in S \text{ is it true that } p \propsubper p'\}$.
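
A minimal Python sketch of this definition (illustrative only, with
periods again represented as sets of integer time-points) is:
\begin{verbatim}
# mxlpers(S): keep the periods of S that are not proper subperiods of
# any other period in S.
def mxlpers(S):
    return [p for p in S
            if not any(p < q for q in S)]   # "p < q" is proper subset

S = [{9, 10, 11}, {10, 11}, {15, 16, 17}]
print(mxlpers(S))   # [{9, 10, 11}, {15, 16, 17}]
\end{verbatim}
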
\paragraph{minpt(S) and maxpt(S):}
\index{minpt@$minpt()$ (earliest time-point in a set)}
\index{maxpt@$maxpt()$ (latest time-point in a set)}
If $S \subseteq \ensuremath{\mathit{PTS}}\xspace$, $minpt(S)$ denotes
the time-point $t \in S$, such that for every $t' \in S$,
$t \preceq t'$. Similarly, if $S \subseteq \ensuremath{\mathit{PTS}}\xspace$, $maxpt(S)$
denotes the time-point $t \in S$, such that for every $t' \in S$,
$t' \preceq t$.
\paragraph{Notation:} Following standard conventions, $[t_1,
t_2]$ denotes the set $\{t \in \ensuremath{\mathit{PTS}}\xspace \mid t_1 \preceq t \preceq t_2
\}$. (This is not always a period. If $t_2 \prec t_1$, then $[t_1,
t_2]$ is the empty set, which is not a period.) Similarly, $(t_1,
t_2]$ denotes the set $\{t \in \ensuremath{\mathit{PTS}}\xspace \mid t_1 \prec t \preceq t_2
\}$. $[t_1, t_2)$ and $(t_1,t_2)$ are defined similarly.
\section{TOP model} \label{top_model}
A \textsc{Top}\xspace model $M$ is an ordered 7-tuple:
\[ M = \tup{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}, \ensuremath{\mathit{OBJS}}\xspace,
\ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{gparts}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace}
\]
such that $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$ is a point structure for \textsc{Top}\xspace
(section \ref{temporal_ontology}), $\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}
\subseteq \ensuremath{\mathit{OBJS}}\xspace$, and \ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{gparts}}}\xspace, and \ensuremath{\mathit{f_{cparts}}}\xspace
are as specified below.
\paragraph{$\mathbf{OBJS}$:}
\index{objs@\ensuremath{\mathit{OBJS}}\xspace (\textsc{Top}\xspace's world objects)}
\ensuremath{\mathit{OBJS}}\xspace is a set containing all the objects in the
modelled world that can be denoted by \textsc{Top}\xspace terms. For example, in the
airport domain \ensuremath{\mathit{OBJS}}\xspace contains all the gates and runways of the
airport, the inspectors, the flights, etc. The constraint
$\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}} \subseteq \ensuremath{\mathit{OBJS}}\xspace$ ensures that all
periods are treated as world objects. This simplifies the semantics of
\textsc{Top}\xspace.
\paragraph{$\mathbf{f_{cons}}$:}
\index{fcons@$\ensuremath{\mathit{f_{cons}}}\xspace()$ (maps \textsc{Top}\xspace constants to world objects)}
\ensuremath{\mathit{f_{cons}}}\xspace is a function $\ensuremath{\mathit{CONS}}\xspace \mapsto \ensuremath{\mathit{OBJS}}\xspace$. (I use the notation
$D \mapsto R$ to refer to a function whose domain and range are $D$
and $R$ respectively.) \ensuremath{\mathit{f_{cons}}}\xspace specifies which world object each
constant denotes. In the airport domain, for example, \ensuremath{\mathit{f_{cons}}}\xspace may map
the constants $gate2$ and $ba737$ to some gate of the airport and some
flight respectively.
\paragraph{$\mathbf{f_{pfuns}}$:}
\index{fpfuns@$\ensuremath{\mathit{f_{pfuns}}}\xspace()$ (returns the maximal periods where predicates hold)}
\ensuremath{\mathit{f_{pfuns}}}\xspace is a function that maps each pair $\tup{\pi, n}$, where
$\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in \{1,2,3,\dots\}$, to another function
$(\ensuremath{\mathit{OBJS}}\xspace)^n \mapsto \ensuremath{\mathit{pow}}\xspace(\ensuremath{\mathit{PERIODS}}\xspace)$. ($\ensuremath{\mathit{pow}}\xspace(S)$\/
\index{pow@$\ensuremath{\mathit{pow}}\xspace()$ (powerset)}
denotes the powerset of $S$, i.e.\ the set of all subsets of $S$.
$(\ensuremath{\mathit{OBJS}}\xspace)^n$
\index{objsn@$(\ensuremath{\mathit{OBJS}}\xspace)^n$ ($\ensuremath{\mathit{OBJS}}\xspace \times \dots \times \ensuremath{\mathit{OBJS}}\xspace$)}
is the $n$-ary cartesian product $\ensuremath{\mathit{OBJS}}\xspace \times \ensuremath{\mathit{OBJS}}\xspace
\times \dots \times \ensuremath{\mathit{OBJS}}\xspace$.) That is, for every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and
each $n \in \{1,2,3,\dots\}$, $\ensuremath{\mathit{f_{pfuns}}}\xspace(\pi,n)$ is a function that maps
each $n$-tuple of elements of $\ensuremath{\mathit{OBJS}}\xspace$ to a set of periods (an element
of $\ensuremath{\mathit{pow}}\xspace(\ensuremath{\mathit{PERIODS}}\xspace)$).
Intuitively, if $\tau_1, \tau_2, \dots, \tau_n$ are \textsc{Top}\xspace terms
denoting the world objects $o_1, o_2, \dots, o_n$,
$\ensuremath{\mathit{f_{pfuns}}}\xspace(\pi, n)(o_1, o_2, \dots, o_n)$ is the set of the maximal
periods throughout which the situation described by $\pi(\tau_1,
\tau_2, \dots, \tau_n)$ is true. For example, if the constant
$ba737$ denotes a flight-object $o_1$, $gate2$ denotes a gate-object $o_2$, and
$be\_at(ba737,gate2)$ describes the situation whereby the flight $o_1$
is located at the gate $o_2$, then $\ensuremath{\mathit{f_{pfuns}}}\xspace(be\_at, 2)(o_1, o_2)$ will
be the set that contains all the maximal periods throughout which the
flight $o_1$ is located at the gate $o_2$.
For every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in \{1,2,3,\dots\}$,
$\ensuremath{\mathit{f_{pfuns}}}\xspace(\pi, n)$ must have the following property: for every
$\tup{o_1,o_2,\dots,o_n} \in (\ensuremath{\mathit{OBJS}}\xspace)^n$, it must be
the case that:
\[
\text{if } p_1, p_2 \in \ensuremath{\mathit{f_{pfuns}}}\xspace(\pi, n)(o_1,o_2,\dots,o_n) \text{ and }
p_1 \union p_2 \in \ensuremath{\mathit{PERIODS}}\xspace, \text{ then } p_1 = p_2
\]
This ensures that no two different periods $p_1, p_2$ in $\ensuremath{\mathit{f_{pfuns}}}\xspace(\pi,
n)(o_1,\dots,o_n)$ overlap or are adjacent (because if they overlap or
they are adjacent, then their union is also a period, and then it must
be true that $p_1 = p_2$). Intuitively, if $p_1$ and $p_2$ overlap or are
adjacent, we want $\ensuremath{\mathit{f_{pfuns}}}\xspace(\pi, n)(o_1,o_2,\dots,o_n)$ to contain
their union $p_1 \union p_2$ instead of $p_1$ and $p_2$.
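
For illustration, the following Python sketch (my own; the
representation of periods as (start, end) pairs of integer time-points
is a toy one) shows how an arbitrary collection of periods could be
normalised so that it satisfies this property, by merging periods that
overlap or are adjacent:
\begin{verbatim}
def normalise(periods):
    """Merge overlapping or adjacent periods, leaving only maximal,
    pairwise non-overlapping, non-adjacent periods."""
    merged = []
    for start, end in sorted(periods):
        if merged and start <= merged[-1][1] + 1:   # overlap or adjacency
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# (9,11) and (12,14) are adjacent, so they collapse into one period:
print(normalise([(9, 11), (12, 14), (20, 22)]))   # [(9, 14), (20, 22)]
\end{verbatim}
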
\paragraph{$\mathbf{f_{culms}}$:}
\index{fculms@$\ensuremath{\mathit{f_{culms}}}\xspace()$ (shows if the situation of a predicate reaches its climax)}
\ensuremath{\mathit{f_{culms}}}\xspace is a function that maps each pair $\tup{\pi, n}$, where $\pi
\in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in \{1,2,3,\dots\}$, to another function $(\ensuremath{\mathit{OBJS}}\xspace)^n
\mapsto \{T,F\}$ ($T,F$ are the two truth values). That is, for every
$\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and each $n \in \{1,2,3,\dots\}$, $\ensuremath{\mathit{f_{culms}}}\xspace(\pi,n)$ is
a function that maps each $n$-tuple of elements of
\ensuremath{\mathit{OBJS}}\xspace to $T$ or $F$.
\ensuremath{\mathit{f_{culms}}}\xspace is only consulted in the case of predicates that represent
actions or changes that have inherent climaxes. If $\pi(\tau_1,
\tau_2, \dots, \tau_n)$ represents such an action or change, and
$\tau_1, \tau_2, \dots, \tau_n$ denote the world objects $o_1, o_2,
\dots, o_n$, then $\ensuremath{\mathit{f_{pfuns}}}\xspace(\pi, n)(o_1, o_2, \dots, o_n)$ is
the set of all maximal periods throughout which the action or change
is ongoing. $\ensuremath{\mathit{f_{culms}}}\xspace(\pi, n)(o_1, o_2, \dots, o_n)$ shows whether or
not the change or action reaches its climax at the latest time-point
at which the change or action is ongoing.
For example, if the constant $j\_adams$ denotes a person $o_1$
in the world, $bridge2$ denotes an object
$o_2$, and $building(j\_adams, bridge2)$ describes the
situation whereby $o_1$ is building $o_2$,
$\ensuremath{\mathit{f_{pfuns}}}\xspace(building,2)(o_1,o_2)$ will be the set of all maximal periods
where $o_1$ is building $o_2$. $\ensuremath{\mathit{f_{culms}}}\xspace(building,2)(o_1,o_2)$ will be
$T$ if the building is completed at the end-point of the latest
maximal period in $\ensuremath{\mathit{f_{pfuns}}}\xspace(building,2)(o_1,o_2)$, and $F$
otherwise. The role of \ensuremath{\mathit{f_{culms}}}\xspace will become clearer in section
\ref{culm_op}.
\paragraph{$\mathbf{f_{gparts}}$:}
\index{fgparts@$\ensuremath{\mathit{f_{gparts}}}\xspace()$ (assigns gappy partitionings to elements of \ensuremath{\mathit{GPARTS}}\xspace)}
\ensuremath{\mathit{f_{gparts}}}\xspace is a function that maps each element of \ensuremath{\mathit{GPARTS}}\xspace to a
\emph{gappy partitioning}. A gappy partitioning is a subset $S$ of
\ensuremath{\mathit{PERIODS}}\xspace, such that for every $p_1, p_2 \in S$ with $p_1 \not= p_2$,
$p_1 \intersect p_2 = \emptyset$, and $\bigcup_{p \in S}p \not= \ensuremath{\mathit{PTS}}\xspace$. For example,
$\ensuremath{\mathit{f_{gparts}}}\xspace(monday^g)$ could be the gappy partitioning of all Monday-periods.
\paragraph{$\mathbf{f_{cparts}}$:}
\index{fcparts@$\ensuremath{\mathit{f_{cparts}}}\xspace()$ (assigns complete partitionings to elements of \ensuremath{\mathit{CPARTS}}\xspace)}
\ensuremath{\mathit{f_{cparts}}}\xspace is a function that maps each element of \ensuremath{\mathit{CPARTS}}\xspace to a
\emph{complete partitioning}. A complete partitioning is a subset
$S$ of \ensuremath{\mathit{PERIODS}}\xspace, such that for every $p_1, p_2 \in S$ with $p_1 \not=
p_2$, $p_1 \intersect p_2 = \emptyset$, and $\bigcup_{p \in S}p = \ensuremath{\mathit{PTS}}\xspace$. For example,
$\ensuremath{\mathit{f_{cparts}}}\xspace(day^c)$ could be the complete partitioning of all
day-periods.
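
To make the overall shape of a \textsc{Top}\xspace model more concrete, here is a
toy, purely illustrative sketch in Python (all objects, periods,
partitionings, and truth values below are invented, and several
simplifications are made):
\begin{verbatim}
PTS = range(24)   # toy time-axis

model = {
    # A full model would also require every period over the time-axis
    # to be in OBJS; only the domain objects are listed here.
    "OBJS":    {"gate2_obj", "ba737_obj"},
    "f_cons":  {"gate2": "gate2_obj", "ba737": "ba737_obj"},
    # f_pfuns(be_at, 2)(o1, o2): maximal periods throughout which o1
    # is located at o2.
    "f_pfuns": {("be_at", 2):
                lambda o1, o2: [frozenset(range(9, 12))]
                               if (o1, o2) == ("ba737_obj", "gate2_obj")
                               else []},
    # f_culms(be_at, 2): being at a gate has no inherent climax.
    "f_culms": {("be_at", 2): lambda o1, o2: False},
    "f_cparts": {"day_c": [frozenset(range(0, 12)),
                           frozenset(range(12, 24))]},
    "f_gparts": {"monday_g": [frozenset(range(0, 3))]},
}

o1 = model["f_cons"]["ba737"]
o2 = model["f_cons"]["gate2"]
print(model["f_pfuns"][("be_at", 2)](o1, o2))   # [frozenset({9, 10, 11})]
\end{verbatim}
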
\section{Variable assignment} \label{var_assign}
A variable assignment with respect to (w.r.t.) a \textsc{Top}\xspace model $M$
is a function $g: \ensuremath{\mathit{VARS}}\xspace \mapsto \ensuremath{\mathit{OBJS}}\xspace$
\index{g@$g()$, $g^\beta_o()$ (variable assignment)}
($g$ assigns to each variable an element of \ensuremath{\mathit{OBJS}}\xspace). $G_M$,
or simply $G$,
\index{G@$G$, $G_M$ (set of all variable assignments)}
is the set of all possible variable assignments w.r.t.\ $M$, i.e.\ $G$
is the set of all functions $\ensuremath{\mathit{VARS}}\xspace \mapsto \ensuremath{\mathit{OBJS}}\xspace$.
If $g \in G$, $\beta \in \ensuremath{\mathit{VARS}}\xspace$, and $o \in \ensuremath{\mathit{OBJS}}\xspace$, then $g^\beta_o$
\index{g@$g()$, $g^\beta_o()$ (variable assignment)}
is the variable assignment defined as follows: $g^\beta_o(\beta) = o$,
and for every $\beta' \in \ensuremath{\mathit{VARS}}\xspace$ with $\beta' \not= \beta$,
$g^\beta_o(\beta') = g(\beta')$.
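
As a small illustration (a Python dictionary that records only the
variables of interest stands in for a total function over \ensuremath{\mathit{VARS}}\xspace), $g$
and $g^\beta_o$ can be pictured as follows:
\begin{verbatim}
def override(g, beta, o):
    """The assignment that maps beta to o and agrees with g elsewhere."""
    g2 = dict(g)        # copy: g itself is left unchanged
    g2[beta] = o
    return g2

g  = {"g_v": "gate2_obj", "f_v": "ba737_obj"}
g2 = override(g, "g_v", "gate4_obj")
print(g2["g_v"], g2["f_v"])   # gate4_obj ba737_obj
\end{verbatim}
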
\section{Denotation of a TOP expression} \label{denotation}
\paragraph{Index of evaluation:} An index of evaluation is an ordered 3-tuple
$\tup{st,et,lt}$, such that $st \in \ensuremath{\mathit{PTS}}\xspace$, $et \in \ensuremath{\mathit{PERIODS}}\xspace$, and $lt
\in \ensuremath{\mathit{PERIODS}}\xspace^*$.
$st$
\index{st@$st$ (speech time)}
(\emph{speech time}) is the time-point at which the English question
is submitted to the \textsc{Nlitdb}\xspace. $et$
\index{et@$et$ (event time)}
(\emph{event time}) is a period where the situation described by a
\textsc{Top}\xspace expression takes place. $lt$
\index{lt@$lt$ (localisation time)}
(\emph{localisation time}) can be thought of as a temporal window,
within which $et$ must be located. When computing the denotation of a
\textsc{Top}\xspace formula that corresponds to an English question, $lt$ is
initially set to
\ensuremath{\mathit{PTS}}\xspace. That is, the temporal window covers the whole
time-axis, and $et$ is allowed to be located anywhere. Various
operators, however, may narrow down $lt$, imposing constraints on
where $et$ can be placed.
\paragraph{Denotation w.r.t.\ M, st, et, lt, g:}
The denotation of a \textsc{Top}\xspace expression $\xi$ w.r.t.\ a model $M$,
an index of evaluation $\tup{st,et,lt}$, and a variable assignment $g$,
is written $\denot{M,st,et,lt,g}{\xi}$ or simply
$\denot{st,et,lt,g}{\xi}$. When the denotation of $\xi$ does not
depend on $st$, $et$, and $lt$, I often write $\denot{M,g}{\xi}$
or simply $\denot{g}{\xi}$.
The denotations w.r.t.\ $M,st,et,lt,g$ of \textsc{Top}\xspace expressions are defined
recursively, starting with the denotations of terms and atomic
formulae which are defined below.
\begin{itemize}
\item If $\kappa \in \ensuremath{\mathit{CONS}}\xspace$, then $\denot{g}{\kappa} =
\ensuremath{\mathit{f_{cons}}}\xspace(\kappa)$.
\item If $\beta \in \ensuremath{\mathit{VARS}}\xspace$, then $\denot{g}{\beta} =
g(\beta)$.
\item If $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, then $\denot{st,et,lt,g}{\phi} \in
\{T,F\}$.
\end{itemize}
The general rule above means that, in the case of yes/no formulae, the
definitions that follow only need to specify when the denotation is $T$; in
all remaining cases the denotation is $F$.
\begin{itemize}
\item If $\phi_1, \phi_2 \in \ensuremath{\mathit{YNFORMS}}\xspace$, then
\index{^@$\land$ (\textsc{Top}\xspace's conjunction)}
$\denot{st,et,lt,g}{\phi_1 \land \phi_2} = T$ iff
$\denot{st,et,lt,g}{\phi_1} = T$ and $\denot{st,et,lt,g}{\phi_2} = T$.
\item
\index{part@$\ensuremath{\mathit{Part}}\xspace[\;]$ (used to select periods from partitionings)}
If $\sigma \in \ensuremath{\mathit{PARTS}}\xspace$, $\beta \in \ensuremath{\mathit{VARS}}\xspace$, and
$\nu_{ord} \in \{\dots, -3, -2, -1, 0\}$, then
$\denot{g}{\ensuremath{\mathit{Part}}\xspace[\sigma, \beta, \nu_{ord}]}$ is $T$, iff
all the following hold (below $f = \ensuremath{\mathit{f_{cparts}}}\xspace$ if $\sigma \in
\ensuremath{\mathit{CPARTS}}\xspace$, and $f = \ensuremath{\mathit{f_{gparts}}}\xspace$ if $\sigma \in \ensuremath{\mathit{GPARTS}}\xspace$):
\begin{itemize}
\item $g(\beta) \in f(\sigma)$,
\item if $\nu_{ord} = 0$, then $st \in g(\beta)$,
\item if $\nu_{ord} \leq -1$, then the
following set contains exactly $-\nu_{ord} - 1$ elements:
\[
\{ p \in f(\sigma) \mid
maxpt(g(\beta)) \prec minpt(p) \text{ and } maxpt(p) \prec st \}
\]
\end{itemize}
\end{itemize}
Intuitively, if $\nu_{ord} = 0$, then $\beta$ must denote the period of
the partitioning that contains $st$. If $\nu_{ord} \leq -1$, $\beta$
must denote the $-\nu_{ord}$-th period of the partitioning before the
speech time, counting backwards from $st$ (e.g.\ if
$\nu_{ord} = -4$, $\beta$ must denote the 4th period that is
completely situated before $st$); that is, there must be exactly $-\nu_{ord} -
1$ periods of the partitioning that fall completely between
the end of the period denoted by $\beta$ and $st$ ($-(-4) - 1 = 3$
periods if $\nu_{ord} = -4$).
For example, if $\ensuremath{\mathit{f_{cparts}}}\xspace(day^c)$ is the partitioning of
all day-periods, then $\denot{g}{\ensuremath{\mathit{Part}}\xspace[day^c, \beta, 0]}$ is $T$
iff $g(\beta)$ covers exactly the whole current
day. Similarly, $\denot{g}{\ensuremath{\mathit{Part}}\xspace[day^c, \beta, -1]}$
is $T$ iff $g(\beta)$ covers exactly the whole
previous day. ($\ensuremath{\mathit{Part}}\xspace[day^c, \beta, 0]$ and $\ensuremath{\mathit{Part}}\xspace[day^c, \beta,
-1]$ can be used to represent the meanings of \qit{today} and
\qit{yesterday}; see section \ref{at_before_after_op}.)
The definition of \ensuremath{\mathit{Part}}\xspace could be extended to allow positive values
as its third argument. This would allow expressing \qit{tomorrow},
\qit{next January}, etc.
\begin{itemize}
\index{part@$\ensuremath{\mathit{Part}}\xspace[\;]$ (used to select periods from partitionings)}
\item If $\sigma \in \ensuremath{\mathit{PARTS}}\xspace$ and $\beta \in \ensuremath{\mathit{VARS}}\xspace$,
then $\denot{g}{\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]} = T$ iff $g(\beta)
\in f(\sigma)$ (where $f = \ensuremath{\mathit{f_{cparts}}}\xspace$ if $\sigma \in \ensuremath{\mathit{CPARTS}}\xspace$, and $f
= \ensuremath{\mathit{f_{gparts}}}\xspace$ if $\sigma \in \ensuremath{\mathit{GPARTS}}\xspace$).
\end{itemize}
$\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$ is a simplified version of $\ensuremath{\mathit{Part}}\xspace[\sigma,
\beta, \nu_{ord}]$, used when we want to ensure that $g(\beta)$ is
simply a period in the partitioning of $\sigma$.
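
To illustrate the conditions of $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta, \nu_{ord}]$, the
following Python sketch checks them against a partitioning represented as a
list of periods. Periods are modelled here as closed integer intervals, and the
example partitioning, speech time, and values are illustrative assumptions
rather than part of \textsc{Top}\xspace.
\begin{verbatim}
# Illustrative sketch: periods are closed integer intervals (start, end).

def part_holds(partitioning, beta_value, nu_ord, st):
    """Check the conditions of Part[sigma, beta, nu_ord] for nu_ord <= 0."""
    if beta_value not in partitioning:      # g(beta) must be a period of f(sigma)
        return False
    if nu_ord == 0:                         # st must fall inside g(beta)
        return beta_value[0] <= st <= beta_value[1]
    # nu_ord <= -1: exactly -nu_ord - 1 periods lie completely
    # between the end of g(beta) and st
    between = [p for p in partitioning
               if beta_value[1] < p[0] and p[1] < st]
    return len(between) == -nu_ord - 1

# A complete partitioning of time-points 0..29 into ten "days" of 3 points each:
days = [(i, i + 2) for i in range(0, 30, 3)]
st = 16                                     # speech time, inside the period (15, 17)
print(part_holds(days, (15, 17), 0, st))    # True: the current day
print(part_holds(days, (12, 14), -1, st))   # True: the previous day
print(part_holds(days, (6, 8), -3, st))     # True: the 3rd day before st
\end{verbatim}
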
\begin{itemize}
\item If $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $\tau_1, \tau_2, \dots, \tau_n \in
\ensuremath{\mathit{TERMS}}\xspace$, then $\denot{st,et,lt,g}{\pi(\tau_1, \tau_2, \dots, \tau_n)}$
is $T$ iff $et \subper lt$ and for some $p_{mxl} \in \ensuremath{\mathit{f_{pfuns}}}\xspace(\pi,
n)(\denot{g}{\tau_1}, \denot{g}{\tau_2}, \dots, \denot{g}{\tau_n})$,
$et \subper p_{mxl}$.
\end{itemize}
Intuitively, for the denotation of a predicate to be $T$, $et$ must
fall within $lt$, and $et$ must be a subperiod of a maximal period
where the situation described by the predicate holds. It is trivial to
prove that the definition above causes all \textsc{Top}\xspace predicates to have
the following property:
\paragraph{Homogeneity:} A \textsc{Top}\xspace formula $\phi$ is \emph{homogeneous}, iff
for every $st \in \ensuremath{\mathit{PTS}}\xspace$, $et, et' \in \ensuremath{\mathit{PERIODS}}\xspace$, $lt \in \ensuremath{\mathit{PERIODS}}\xspace^*$, and $g \in
G$, the following implication holds:\footnote{The term ``homogeneity''
is also used in the temporal databases literature, but with a
completely different meaning; see \cite{tdbsglossary}.}
\[
\text{if } et' \subper et \text{ and } \denot{st,et,lt,g}{\phi} = T,
\text{ then } \denot{st,et',lt,g}{\phi} = T
\]
Intuitively, if a predicate is true at some $et$, then it is also true
at any subperiod $et'$ of $et$. Although \textsc{Top}\xspace predicates are
homogeneous, more complex formulae are not always homogeneous.
Various versions of homogeneity have been used in \cite{Allen1984},
\cite{Lascarides}, \cite{Richards}, \cite{Kent}, \cite{Pratt1995}, and
elsewhere.
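
To make the rule for predicates and the homogeneity property more concrete, the
following Python sketch checks the two conditions ($et$ within $lt$, and $et$
within some maximal period); the integer-interval representation of periods and
the example maximal periods are illustrative assumptions only.
\begin{verbatim}
# Illustrative sketch: a predicate is T at et iff et falls within lt and
# et is a subperiod of some maximal period where the situation holds.

def subper(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def pred_holds(et, lt, maximal_periods):
    return subper(et, lt) and any(subper(et, p) for p in maximal_periods)

lt = (0, 100)
mxl = [(10, 20), (40, 60)]       # maximal periods where the situation holds
et = (12, 18)
print(pred_holds(et, lt, mxl))   # True

# Homogeneity: every subperiod et' of a satisfying et also satisfies the predicate.
et_sub = (13, 15)
print(pred_holds(et_sub, lt, mxl))   # True
\end{verbatim}
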
The denotation of a wh-formula w.r.t.\ $st$, $et$, $lt$, and $g$ is
defined below. It is assumed that $\beta_1, \beta_2, \beta_3, \dots,
\beta_n \in \ensuremath{\mathit{VARS}}\xspace$ and $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$.
\begin{itemize}
\item
\index{?@$?$ (\textsc{Top}\xspace's interrogative quantifier)}
$\denot{st,et,lt,g}{?\beta_1 \; ?\beta_2 \; \dots \; ?\beta_n \; \phi}
= \{\tup{g(\beta_1), g(\beta_2), \dots, g(\beta_n)} \mid
\denot{st,et,lt,g}{\phi} = T\}$
\end{itemize}
That is, if $\denot{st,et,lt,g}{\phi} = T$, then
$\denot{st,et,lt,g}{?\beta_1 \; ?\beta_2 \; \dots \; ?\beta_n \;
\phi}$ is a one-element set; it contains one tuple that holds the
world-objects assigned to $\beta_1, \beta_2, \dots, \beta_n$ by
$g$. Otherwise, $\denot{st,et,lt,g}{?\beta_1 \; ?\beta_2 \; \dots \;
?\beta_n \; \phi}$ is the empty set.
\begin{itemize}
\item
\index{?@$?$ (\textsc{Top}\xspace's interrogative quantifier)}
\index{?mxl@$?_{mxl}$ (\textsc{Top}\xspace's interrogative-maximal quantifier)}
$
\denot{st,et,lt,g}{?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \;
\dots \; ?\beta_n \; \phi} =
$ \\
$
\{\tup{g(\beta_1), g(\beta_2), g(\beta_3), \dots, g(\beta_n)} \mid
\denot{st,et,lt,g}{\phi} = T \text{, and }
$ \\
$
\text{ for no } et' \in \ensuremath{\mathit{PERIODS}}\xspace \text{ and } g' \in G
\text{ is it true that }
\denot{st,et',lt,g'}{\phi} = T,
$ \\
$
g(\beta_1) \propsubper g'(\beta_1), \;
g(\beta_2) = g'(\beta_2), \; g(\beta_3) = g'(\beta_3), \;
\dots, \; g(\beta_n) = g'(\beta_n)\}
$
\end{itemize}
The denotation $\denot{st,et,lt,g}{?_{mxl}\beta_1 \; ?\beta_2 \;
?\beta_3 \; \dots \; ?\beta_n \; \phi}$ is either a one-element
set that contains a tuple holding the world-objects $g(\beta_1),
g(\beta_2), \dots, g(\beta_n)$, or the empty set. Intuitively, the
denotation of $?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \; \dots \;
?\beta_n \; \phi$ contains the values assigned to $\beta_1, \beta_2,
\beta_3, \dots, \beta_n$ by $g$, if these values satisfy $\phi$, and
there is no other variable assignment $g'$ that assigns the
same values to $\beta_2, \beta_3, \dots, \beta_n$, a proper superperiod of
$g(\beta_1)$ to $\beta_1$, and that satisfies $\phi$ (for any $et' \in
\ensuremath{\mathit{PERIODS}}\xspace$). That is, it must not be possible to extend any further the
period assigned to $\beta_1$ by $g$, preserving at the same time the
values assigned to $\beta_2, \beta_3, \dots, \beta_n$, and satisfying
$\phi$. Otherwise, the denotation of $?_{mxl}\beta_1 \; ?\beta_2 \;
?\beta_3 \; \dots \; ?\beta_n \; \phi$ is the empty set.
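
The maximality test of $?_{mxl}$ can be pictured with the sketch below, which
keeps a candidate value of $\beta_1$ only if it cannot be extended to a proper
superperiod that still satisfies the formula. The formula is abstracted as a
Python predicate over periods, and all concrete values are illustrative
assumptions.
\begin{verbatim}
# Illustrative sketch of the maximality condition of ?_mxl (one variable beta_1).

def propsubper(p, q):
    """p is a proper subperiod of q."""
    return q[0] <= p[0] and p[1] <= q[1] and p != q

def mxl_answer(candidates, satisfies):
    """Keep the candidate periods that satisfy the formula and cannot be
    extended to a proper superperiod that also satisfies it."""
    return [p for p in candidates
            if satisfies(p) and
               not any(satisfies(q) and propsubper(p, q) for q in candidates)]

# Suppose gate 2 was open exactly throughout the period (5, 20):
satisfies = lambda p: 5 <= p[0] and p[1] <= 20
candidates = [(5, 20), (5, 10), (12, 20), (0, 4)]
print(mxl_answer(candidates, satisfies))    # [(5, 20)]
\end{verbatim}
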
The syntax of \textsc{Top}\xspace (section \ref{top_syntax})
requires $\beta_1$ to appear at least once within $\phi$ as the first
argument of a \ensuremath{\mathit{Past}}\xspace, \ensuremath{\mathit{Perf}}\xspace, \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, \ensuremath{\mathit{After}}\xspace, or \ensuremath{\mathit{Ntense}}\xspace operator,
or as the second argument of a \ensuremath{\mathit{Part}}\xspace operator. The semantics of
these operators require variables occurring at these positions
to denote periods. Hence, variable assignments $g$ that do not
assign a period to $\beta_1$ will never satisfy $\phi$, and no tuples
for these variable assignments will be included in
$\denot{st,et,lt,g}{?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \; \dots \;
?\beta_n \; \phi}$.
The rules for computing the denotations w.r.t.\ $M,st,et,lt,g$ of
other \textsc{Top}\xspace expressions will be given in following sections.
\paragraph{Denotation w.r.t.\ M, st:} I now define the
denotation of a \textsc{Top}\xspace expression with respect to only $M$ and
$st$. The denotation w.r.t.\ $M, st$ is similar to the denotation
w.r.t.\ $M, st, et, lt, g$, except that there is an implicit
existential quantification over all $g \in G$ and all $et \in
\ensuremath{\mathit{PERIODS}}\xspace$, and $lt$ is set to \ensuremath{\mathit{PTS}}\xspace (the whole time-axis). The
denotation of $\phi$ w.r.t.\ $M, st$, written $\denot{M,st}{\phi}$ or simply
$\denot{st}{\phi}$, is defined only for \textsc{Top}\xspace formulae:
\begin{itemize}
\item If $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, then $\denot{st}{\phi} =$
\begin{itemize}
\item $T$, if for some $g \in G$ and $et \in \ensuremath{\mathit{PERIODS}}\xspace$,
$\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}{\phi} = T$,
\item $F$, otherwise
\end{itemize}
\item If $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$, then $\denot{st}{\phi} =
\bigcup_{g \in G, \; et \in \ensuremath{\mathit{PERIODS}}\xspace}\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}{\phi}$.
\end{itemize}
Each question will be mapped to a \textsc{Top}\xspace formula $\phi$
(if the question is ambiguous, multiple formulae will
be generated, one for each reading). $\denot{st}{\phi}$ specifies what
the \textsc{Nlitdb}\xspace's answer should report. When $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$,
$\denot{st}{\phi} = T$ (i.e.\ the answer should be \qit{yes}) if for
some assignment to the variables of $\phi$ and for some event time,
$\phi$ is satisfied; otherwise
$\denot{st}{\phi} = F$ (the answer should be \qit{no}). The
localisation time is set to \ensuremath{\mathit{PTS}}\xspace (the whole time-axis) to
reflect the fact that initially there is no restriction on where
$et$ may be located. As mentioned in section \ref{denotation},
however, when computing the denotations of the subformulae of
$\phi$, temporal operators may narrow down the localisation
time, placing restrictions on $et$.
In the case where $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$ (i.e.\
$\phi = ?\beta_1 \; \dots \; ?\beta_n \; \phi'$ or
$\phi = ?_{mxl}\beta_1 \; \dots \; ?\beta_n \; \phi'$
with $\phi' \in \ensuremath{\mathit{YNFORMS}}\xspace$), $\denot{st}{\phi}$ is the union
of all $\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}{\phi}$, for any $g \in G$ and $et \in
\ensuremath{\mathit{PERIODS}}\xspace$. For each $g \in G$ and $et \in \ensuremath{\mathit{PERIODS}}\xspace$,
$\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}{\phi}$ is either an empty set or a one-element
set containing a tuple that holds values of $\beta_1, \beta_2,
\beta_3, \dots, \beta_n$ that satisfy $\phi'$ ($\beta_1$ must be
maximal if $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace_2$). Hence, $\denot{st}{\phi}$ (the
union of all $\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}{\phi}$) is the set of all tuples
that hold values of $\beta_1, \beta_2, \beta_3, \dots, \beta_n$ that
satisfy $\phi'$. The answer should report these tuples to the user (or
be a message like \qit{No answer found.}, if $\denot{st}{\phi} =
\emptyset$).
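
The implicit existential quantification over $g$ and $et$ can be illustrated as
follows; the sketch searches over finite sets of candidate assignments and
event times, whereas the actual definition quantifies over all of $G$ and
\ensuremath{\mathit{PERIODS}}\xspace, and the toy formula is an assumption made for illustration.
\begin{verbatim}
# Illustrative sketch: answers w.r.t. M, st, obtained by searching over
# candidate variable assignments and event times (finite sets here).

def yes_no_answer(denot, assignments, event_times):
    """T iff the formula is satisfied for some g and some et (lt = whole axis)."""
    return any(bool(denot(et, g)) for g in assignments for et in event_times)

def wh_answer(denot, assignments, event_times):
    """Union of the tuple sets produced for every g and et."""
    result = set()
    for g in assignments:
        for et in event_times:
            result |= denot(et, g)
    return result

def denot(et, g):
    # toy wh-formula ?x ...: "18000" was the salary throughout 1991, nothing else
    return {(g["x"],)} if g["x"] == "18000" and et == (1991, 1991) else set()

assignments = [{"x": "18000"}, {"x": "21000"}]
event_times = [(1991, 1991), (1992, 1992)]
print(wh_answer(denot, assignments, event_times))      # {('18000',)}
print(yes_no_answer(denot, assignments, event_times))  # True
\end{verbatim}
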
\section{The Pres operator} \label{pres_op}
The \ensuremath{\mathit{Pres}}\xspace operator is used to express the simple present and present
continuous tenses. For $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$:
\begin{itemize}
\item
\index{pres@$\ensuremath{\mathit{Pres}}\xspace[\;]$ (used to refer to the present)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Pres}}\xspace[\phi]} = T$, iff $st \in et$ and
$\denot{st,et,lt,g}{\phi} = T$.
\end{itemize}
\pref{presop:1}, for example, is represented as \pref{presop:2}.
\begin{examps}
\item Is BA737 at gate 2? \label{presop:1}
\item $\ensuremath{\mathit{Pres}}\xspace[be\_at(ba737, gate2)]$ \label{presop:2}
\end{examps}
Let us assume that the only maximal periods where BA737
was/is/will be at gate 2 are $p_{mxl_1}$ and $p_{mxl_2}$ (i.e.\
\pref{presop:3} holds; see section \ref{top_model}), and that \pref{presop:1}
is submitted at a time-point $st_1$, such that
\pref{presop:4} holds (figure \ref{pres_op_fig}).
\begin{gather}
\ensuremath{\mathit{f_{pfuns}}}\xspace(be\_at, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(ba737), \ensuremath{\mathit{f_{cons}}}\xspace(gate2)) =
\{p_{mxl_1}, p_{mxl_2}\} \label{presop:3} \\
st_1 \in p_{mxl_2} \label{presop:4}
\end{gather}
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{pres_op}
\caption{\qit{Is BA737 at gate 2?}}
\label{pres_op_fig}
\end{center}
\hrule
\end{figure}
The answer to \pref{presop:1} will be affirmative iff \pref{presop:5}
is $T$.
\begin{equation}
\denot{st_1}{\ensuremath{\mathit{Pres}}\xspace[be\_at(ba737, gate2)]} \label{presop:5}
\end{equation}
According to section \ref{denotation}, \pref{presop:5} is $T$ iff for
some $g \in G$ and $et \in \ensuremath{\mathit{PERIODS}}\xspace$, \pref{presop:6} holds.
\begin{equation}
\denot{st_1, et, \ensuremath{\mathit{PTS}}\xspace, g}{\ensuremath{\mathit{Pres}}\xspace[be\_at(ba737, gate2)]} = T \label{presop:6}
\end{equation}
By the definition of \ensuremath{\mathit{Pres}}\xspace, \pref{presop:6}
holds iff both \pref{presop:7} and \pref{presop:8} hold.
\begin{gather}
st_1 \in et \label{presop:7} \\
\denot{st_1,et,\ensuremath{\mathit{PTS}}\xspace,g}{be\_at(ba737, gate2)} = T \label{presop:8}
\end{gather}
By the definitions of $\denot{st,et,lt,g}{\pi(\tau_1, \dots, \tau_n)}$
and $\denot{g}{\kappa}$ (section \ref{denotation}), \pref{presop:8} holds
iff for some $p_{mxl}$, \pref{presop:9} -- \pref{presop:11} hold.
\begin{gather}
et \subper \ensuremath{\mathit{PTS}}\xspace \label{presop:9} \\
p_{mxl} \in \ensuremath{\mathit{f_{pfuns}}}\xspace(be\_at, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(ba737), \ensuremath{\mathit{f_{cons}}}\xspace(gate2))
\label{presop:10} \\
et \subper p_{mxl} \label{presop:11}
\end{gather}
By \pref{presop:3}, \pref{presop:10} is equivalent to \pref{presop:13}.
\begin{gather}
p_{mxl} \in \{p_{mxl_1}, p_{mxl_2}\} \label{presop:13}
\end{gather}
The answer to \pref{presop:1} will be affirmative iff for some $et
\in \ensuremath{\mathit{PERIODS}}\xspace$ and some $p_{mxl}$, \pref{presop:7},
\pref{presop:9}, \pref{presop:11}, and \pref{presop:13} hold. For
$p_{mxl} = p_{mxl_2}$, and $et$ any subperiod of $p_{mxl_2}$ that
contains $st_1$ (figure \ref{pres_op_fig}), \pref{presop:7},
\pref{presop:9}, \pref{presop:11}, and \pref{presop:13} hold. Hence,
the answer to \pref{presop:1} will be affirmative, as one would
expect. If the question is submitted at an
$st_2$ that falls outside $p_{mxl_1}$ and $p_{mxl_2}$
(figure \ref{pres_op_fig}), then the answer will be negative,
because in that case there is no subperiod $et$ of $p_{mxl_1}$ or
$p_{mxl_2}$ that contains $st_2$.
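
The walk-through above can be reproduced with the small sketch below, which
checks the \ensuremath{\mathit{Pres}}\xspace requirement that $st$ falls within an $et$ lying inside some
maximal period; the numeric periods stand in for $p_{mxl_1}$, $p_{mxl_2}$,
$st_1$, and $st_2$ of the figure and are illustrative only.
\begin{verbatim}
# Illustrative sketch of Pres[be_at(ba737, gate2)] with figure-like data.

def subper(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def pres_pred(st, maximal_periods, pts):
    """T iff some et containing st fits inside lt (= PTS) and inside some
    maximal period; checking et = [st, st] suffices."""
    point = (st, st)
    return subper(point, pts) and any(subper(point, p) for p in maximal_periods)

p_mxl_1, p_mxl_2 = (10, 30), (50, 80)
pts = (0, 100)
print(pres_pred(60, [p_mxl_1, p_mxl_2], pts))   # True: st_1 falls within p_mxl_2
print(pres_pred(40, [p_mxl_1, p_mxl_2], pts))   # False: st_2 outside both periods
\end{verbatim}
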
The present continuous is expressed similarly. For example, the
reading of \pref{presop:14} where Airserve is actually
servicing BA737 at the present moment is expressed as
\pref{presop:15}. Unlike \cite{Dowty1977},
\cite{Lascarides}, \cite{Pirie1990}, and \cite{Crouch2},
in \textsc{Top}\xspace progressive tenses do not introduce any special
progressive operator. This will be discussed in section \ref{culm_op}.
\begin{examps}
\item Airserve is (actually) servicing BA737. \label{presop:14}
\item $\ensuremath{\mathit{Pres}}\xspace[servicing(airserve, ba737)]$ \label{presop:15}
\end{examps}
The habitual \pref{presop:16} is represented using a different
predicate functor from that of \pref{presop:14}, as in
\pref{presop:17}. As will be explained in chapter
\ref{English_to_TOP}, \pref{presop:14} is taken to involve a
non-habitual homonym of \qit{to service}, while \pref{presop:16} is
taken to involve a habitual homonym. The two homonyms introduce
different predicate functors.
\begin{examps}
\item Airserve (habitually) services BA737. \label{presop:16}
\item $\ensuremath{\mathit{Pres}}\xspace[hab\_server\_of(airserve, ba737)]$ \label{presop:17}
\end{examps}
\textsc{Top}\xspace's \ensuremath{\mathit{Pres}}\xspace operator is similar to that of
\cite{Pirie1990}. The main difference is that the \ensuremath{\mathit{Pres}}\xspace
of Pirie et al.\ does not require $st$ to fall within
$et$. Instead, it narrows $lt$ to start at or after $st$. This,
in combination with the requirement $et \subper lt$, requires $et$ to
start at or after $st$. Using this version of \ensuremath{\mathit{Pres}}\xspace in \pref{presop:2}
would cause the answer to be affirmative if \pref{presop:1} is
submitted at $st_2$ (figure \ref{pres_op_fig}), i.e.\ at a point
where BA737 is not at gate 2, because there is an $et$
at which BA737 is at gate 2 (e.g.\ the $et$ of figure
\ref{pres_op_fig}), and this $et$ starts after $st_2$. This version of
\ensuremath{\mathit{Pres}}\xspace was adopted by Pirie et al.\ to cope with sentences
like \qit{J.Adams inspects BA737 tomorrow.}, where the simple present
refers to a future inspection (section \ref{simple_present}). In
this case, $et$ (inspection time) must be allowed to start after $st$.
The \ensuremath{\mathit{Pres}}\xspace of Pirie et al.\ is often over-permissive (e.g.\ it causes
the answer to be affirmative if \pref{presop:1} is submitted at
$st_2$). Pirie et al.\ employ a post-processing mechanism, which is
invoked after the English sentence is translated into logic, and which
attempts to restrict the semantics of \ensuremath{\mathit{Pres}}\xspace in cases where it is
over-permissive. In effect, this mechanism introduces modifications in
only one case: if the \ensuremath{\mathit{Pres}}\xspace is introduced by a state verb (excluding
progressive states) which is not modified by a temporal adverbial,
then $et$ is set to $\{st\}$. For example, in \qit{J.Adams is at site
2.} where the verb is a state, the mechanism causes $et$ to be set to
$\{st\}$, which correctly requires J.Adams to be at site 2 at $st$.
In \qit{J.Adams is at site 2 tomorrow.}, where the state verb is
modified by a temporal adverbial, the post-processing has no effect,
and $et$ (the time where J.Adams is at site 2) is allowed to start at
or after $st$. This is again correct, since in this case $et$ must be
located within the following day, i.e.\ after $st$. In \qit{J.Adams is
inspecting site 2.}, where the verb is a progressive state, the
post-processing has again no effect, and $et$ (inspection time) can
start at or after $st$. The rationale in this case is that $et$ cannot
be set to $\{st\}$, because there is a reading where the present
continuous refers to a future inspection (section
\ref{progressives}). For the purposes of this project, where the
futurate readings of the simple present and the present continuous are
ignored, \textsc{Top}\xspace's \ensuremath{\mathit{Pres}}\xspace is adequate. If, however, these futurate
readings were to be supported, a more permissive \ensuremath{\mathit{Pres}}\xspace operator, like
that of Pirie et al., might have to be adopted.
\section{The Past operator} \label{past_op}
The \ensuremath{\mathit{Past}}\xspace operator is used when expressing the simple past, the past
continuous, the past perfect, and the present perfect (the latter is
treated as equivalent to the simple past; section
\ref{present_perfect}). For $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ and $\beta \in \ensuremath{\mathit{VARS}}\xspace$:
\begin{itemize}
\item
\index{past@$\ensuremath{\mathit{Past}}\xspace[\;]$ (used to refer to the past)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Past}}\xspace[\beta, \phi]} = T$, iff
$g(\beta) = et$ and
$\denot{st,et, lt \intersect [t_{first}, st), g}{\phi} = T$.
\end{itemize}
The \ensuremath{\mathit{Past}}\xspace operator narrows the localisation time, so that the latter
ends before $st$. $et$ will eventually be required to be a subperiod
of the localisation time (this requirement will be introduced by the
rules that compute the denotation of $\phi$). Hence, $et$ will be
required to end before $st$. $\beta$ is used as a pointer to $et$
(the definition of $\ensuremath{\mathit{Past}}\xspace[\beta, \phi]$ makes sure that the value of
$\beta$ is $et$). $\beta$ is useful in formulae that contain
\ensuremath{\mathit{Ntense}}\xspace{s} (to be discussed in section \ref{ntense_op}). It is also
useful in time-asking questions, where $et$ has to be reported. For
example, \qit{When was gate 2 open?} is represented as $?_{mxl}e^v \;
\ensuremath{\mathit{Past}}\xspace[e^v, open(gate2)]$, which reports the maximal $et$s that end
before $st$, such that gate 2 is open throughout $et$.
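
A corresponding sketch of the \ensuremath{\mathit{Past}}\xspace rule is given below: the localisation
time is first narrowed to the part of the time-axis before the speech time, and
the predicate rule is then applied; time-points are integers and all concrete
values are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: Past narrows lt to lt intersected with [t_first, st),
# and the predicate rule then requires et to fall within the narrowed lt and
# within some maximal period where the situation holds.

T_FIRST = 0

def intersect(p, q):
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo <= hi else None    # None stands for the empty set

def subper(p, q):
    return q is not None and q[0] <= p[0] and p[1] <= q[1]

def past_pred(et, lt, st, maximal_periods):
    narrowed = intersect(lt, (T_FIRST, st - 1))   # [t_first, st), integer points
    return subper(et, narrowed) and any(subper(et, p) for p in maximal_periods)

mxl = [(10, 30), (50, 80)]                        # maximal periods of open(gate2)
print(past_pred((12, 20), (0, 100), 40, mxl))     # True: a past et inside (10, 30)
print(past_pred((55, 60), (0, 100), 40, mxl))     # False: et does not precede st
\end{verbatim}
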
\textsc{Top}\xspace's \ensuremath{\mathit{Past}}\xspace operator is essentially the same as that of
\cite{Pirie1990}. (A slightly different \ensuremath{\mathit{Past}}\xspace operator is adopted in
\cite{Crouch2}.)
\section{Progressives, non-progressives, and the Culm operator} \label{culm_op}
Let us now examine in more detail how \textsc{Top}\xspace represents the simple past
and the past continuous. Let us start
from verbs whose base forms are culminating activities,
like \qit{to inspect} in the airport domain. The past continuous
\pref{culmop:1} is represented as \pref{culmop:2}.
\begin{examps}
\item Was J.Adams inspecting BA737? \label{culmop:1}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, inspecting(j\_adams, ba737)]$ \label{culmop:2}
\end{examps}
Let us assume that the inspection of BA737 by J.Adams started at the
beginning of $p_{mxl_1}$ (figure \ref{culm_op_fig}), that it
stopped temporarily at the end of $p_{mxl_1}$, that it was resumed at
the beginning of $p_{mxl_2}$, and that it was completed at the end of
$p_{mxl_2}$. Let us also assume that there is no other
time at which J.Adams was/is/will be inspecting BA737. Then,
\pref{culmop:3} and \pref{culmop:3.2} hold.
\begin{gather}
\ensuremath{\mathit{f_{pfuns}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(j\_adams), \ensuremath{\mathit{f_{cons}}}\xspace(ba737)) =
\{p_{mxl_1},p_{mxl_2}\} \label{culmop:3} \\
\ensuremath{\mathit{f_{culms}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(j\_adams), \ensuremath{\mathit{f_{cons}}}\xspace(ba737)) = T \label{culmop:3.2}
\end{gather}
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{culm_op}
\caption{\qit{Was J.Adams inspecting BA737?} vs.\
\qit{Did J.Adams inspect BA737?}}
\label{culm_op_fig}
\end{center}
\hrule
\end{figure}
The reader can check that \pref{culmop:4} is $T$ iff there is an $et$
that is a subperiod of $p_{mxl_1}$ or $p_{mxl_2}$, and that ends
before $st$.
\begin{equation}
\label{culmop:4}
\denot{st}{\ensuremath{\mathit{Past}}\xspace[e^v, inspecting(j\_adams, ba737)]}
\end{equation}
If \pref{culmop:1} is submitted at
$st_1$ or $st_2$ (figure \ref{culm_op_fig}), then \pref{culmop:4}
is $T$ (the answer to \pref{culmop:1} will be \qit{yes}),
because in both cases there is an $et$ (e.g.\ the $et_1$ of figure
\ref{culm_op_fig}) that ends before $st_1$ and $st_2$, and that is a
subperiod of $p_{mxl_1}$. In contrast, if the question is submitted at
$st_3$, \pref{culmop:4} is $F$ (the answer will be
negative), because in this case there is no subperiod of $p_{mxl_1}$ or
$p_{mxl_2}$ that ends before $st_3$. This is what one would expect: at
$st_1$ and $st_2$ the answer to \pref{culmop:1} should be affirmative,
because J.Adams has already spent some time inspecting BA737. In
contrast, at $st_3$ J.Adams has not yet spent any time inspecting
BA737, and the answer should be negative.
Let us now consider the simple past \pref{culmop:5}. We want the
answer to be affirmative if \pref{culmop:5} is
submitted at $st_1$ (or any other time-point after the end of
$p_{mxl_2}$), but not if it is submitted at $st_2$ (or any other
time-point before the end of $p_{mxl_2}$), because at
$st_2$ J.Adams has not yet completed the inspection (section
\ref{simple_past}).
\begin{examps}
\item Did J.Adams inspect BA737? \label{culmop:5}
\end{examps}
\pref{culmop:5} cannot be represented as \pref{culmop:2}, because this
would cause the answer to \pref{culmop:5} to be affirmative if the
question is submitted at $st_2$. Instead, \pref{culmop:5} is
represented as \pref{culmop:6}. The same predicate
$inspecting(j\_adams, ba737)$ of \pref{culmop:2} is used, but an
additional \ensuremath{\mathit{Culm}}\xspace operator is inserted.
\begin{equation}
\label{culmop:6}
\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]]
\end{equation}
Intuitively, the \ensuremath{\mathit{Culm}}\xspace requires the event time to be the $et_2$
of figure \ref{culm_op_fig}, i.e.\ to cover the whole time from
the point where the inspection starts to the point where the
inspection is completed. (If the inspection is never completed,
\ensuremath{\mathit{Culm}}\xspace causes the denotation of \pref{culmop:6} to be $F$.)
Combined with the \ensuremath{\mathit{Past}}\xspace operator, the \ensuremath{\mathit{Culm}}\xspace causes the answer
to be affirmative if \pref{culmop:5} is submitted at
$st_1$ (because $et_2$ ends before $st_1$), and negative if the
question is submitted at $st_2$ (because $et_2$ does not end before
$st_2$). More formally, for $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $\tau_1, \dots,
\tau_n \in \ensuremath{\mathit{TERMS}}\xspace$:
\begin{itemize}
\item
\index{culm@$\ensuremath{\mathit{Culm}}\xspace[\;]$ (used to express non-progressives of culminating activity verbs)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]} = T$, iff
$et \subper lt$, $\ensuremath{\mathit{f_{culms}}}\xspace(\pi,n)(\denot{g}{\tau_1}, \dots,
\denot{g}{\tau_n}) = T$, $S \not= \emptyset$, and $et = [minpt(S),
maxpt(S)]$, where:
\[
S = \bigcup_{p \in \ensuremath{\mathit{f_{pfuns}}}\xspace(\pi, n)(\denot{g}{\tau_1}, \dots,
\denot{g}{\tau_n})}p
\]
\end{itemize}
The requirement $et = [minpt(S), maxpt(S)]$
forces $et$ to start at the earliest time-point where the change or
action of $\pi(\tau_1, \dots, \tau_n)$ is ongoing, and to
end at the latest time-point where the change or action is
ongoing. The requirement $\ensuremath{\mathit{f_{culms}}}\xspace(\pi,n)(\denot{g}{\tau_1}, \dots,
\denot{g}{\tau_n}) = T$ means that the change or action must
reach its climax at the latest time-point where it
is ongoing.
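
The following sketch pictures these conditions: $S$ is computed from the
maximal periods, the culmination flag is checked, and $et$ must stretch from
$minpt(S)$ to $maxpt(S)$; the maximal periods and the flag are illustrative
assumptions standing in for \ensuremath{\mathit{f_{pfuns}}}\xspace and \ensuremath{\mathit{f_{culms}}}\xspace.
\begin{verbatim}
# Illustrative sketch of Culm[pi(...)]: et must cover the whole stretch from the
# earliest to the latest point where the action is ongoing, and the action must
# reach its climax.

def subper(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def culm_holds(et, lt, maximal_periods, culminates):
    if not maximal_periods or not culminates:
        return False
    s_min = min(p[0] for p in maximal_periods)    # minpt(S)
    s_max = max(p[1] for p in maximal_periods)    # maxpt(S)
    return subper(et, lt) and et == (s_min, s_max)

# J.Adams was inspecting BA737 during (10, 30) and (50, 80), completing it at 80:
mxl = [(10, 30), (50, 80)]
print(culm_holds((10, 80), (0, 90), mxl, True))   # True: et covers start to completion
print(culm_holds((10, 30), (0, 90), mxl, True))   # False: et stops at the interruption
print(culm_holds((10, 80), (0, 60), mxl, True))   # False: et not within lt
\end{verbatim}
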
Let us now check formally that the denotation \pref{culmop:10} of
\pref{culmop:6} is in order.
\begin{equation}
\label{culmop:10}
\denot{st}{\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]]}
\end{equation}
\pref{culmop:10} is $T$ iff for some $g \in G$ and
$et \in \ensuremath{\mathit{PERIODS}}\xspace$, \pref{culmop:11} holds.
\begin{equation}
\label{culmop:11}
\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}{\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]]} = T
\end{equation}
By the definition of \ensuremath{\mathit{Past}}\xspace, \pref{culmop:11} holds iff
\pref{culmop:12} and \pref{culmop:13} hold ($\ensuremath{\mathit{PTS}}\xspace \intersect
[t_{first}, st) = [t_{first}, st)$).
\begin{gather}
g(e^v) = et \label{culmop:12} \\
\denot{st,et, [t_{first}, st), g}{\ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]} = T
\label{culmop:13}
\end{gather}
By the definition of \ensuremath{\mathit{Culm}}\xspace, \pref{culmop:13} holds iff
\pref{culmop:14} -- \pref{culmop:18} hold.
\begin{gather}
et \subper [t_{first}, st) \label{culmop:14} \\
\ensuremath{\mathit{f_{culms}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(j\_adams), \ensuremath{\mathit{f_{cons}}}\xspace(ba737)) = T
\label{culmop:15} \\
S \not= \emptyset \label{culmop:17} \\
et = [minpt(S), maxpt(S)] \label{culmop:16} \\
S = \bigcup_{p \in \ensuremath{\mathit{f_{pfuns}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(j\_adams), \ensuremath{\mathit{f_{cons}}}\xspace(ba737))}p
\label{culmop:18}
\end{gather}
By \pref{culmop:3}, and assuming that $maxpt(p_{mxl_1}) \prec
minpt(p_{mxl_2})$ (as in figure \ref{culm_op_fig}), \pref{culmop:17}
-- \pref{culmop:18} are equivalent to \pref{culmop:19} --
\pref{culmop:20}. \pref{culmop:19} holds (the union of two periods is
never the empty set), and \pref{culmop:15} is the same as
\pref{culmop:3.2}, which was assumed to hold.
\begin{gather}
p_{mxl_1} \union p_{mxl_2} \not= \emptyset \label{culmop:19} \\
et = [minpt(p_{mxl_1}), maxpt(p_{mxl_2})] \label{culmop:20}
\end{gather}
Hence, \pref{culmop:10} is $T$ (i.e.\ the answer to
\pref{culmop:5} is affirmative) iff for some $g \in G$ and $et
\in \ensuremath{\mathit{PERIODS}}\xspace$, \pref{culmop:12}, \pref{culmop:14}, and \pref{culmop:20}
hold. Let $et_2 = [minpt(p_{mxl_1}), maxpt(p_{mxl_2})]$ (as in
figure \ref{culm_op_fig}).
Let us assume that \pref{culmop:5} is submitted at an $st$ that
follows the end of $et_2$ (e.g.\ $st_1$ in figure
\ref{culm_op_fig}). For $et = et_2$,
\pref{culmop:14} and \pref{culmop:20} are satisfied. \pref{culmop:12}
is also satisfied by choosing $g = g_1$, where $g_1$ is as defined below. Hence,
the answer to \pref{culmop:5} will be affirmative, as required.
\[g_1(\beta) =
\begin{cases}
et_2 & \text{if } \beta = e^v \\
o & \text{otherwise ($o$ is an arbitrary element of \ensuremath{\mathit{OBJS}}\xspace)}
\end{cases}
\]
In contrast, if the question is submitted before the end of $et_2$
(e.g.\ $st_2$ or $st_3$ in figure \ref{culm_op_fig}), then the answer
to \pref{culmop:5} will be negative, because there is no $et$ that
satisfies \pref{culmop:14} and \pref{culmop:20}.
In the case of verbs whose base forms are processes, states, or
points, the simple past does not introduce a \ensuremath{\mathit{Culm}}\xspace operator. In this
case, when both the simple past and the past continuous are possible,
they are represented using the same \textsc{Top}\xspace formula. (A similar
approach is adopted in \cite{Parsons1989}.) For example, in the
airport domain where \qit{to circle} is classified as process, both
\pref{culmop:28} and \pref{culmop:29} are represented as \pref{culmop:30}.
\begin{examps}
\item Was BA737 circling? \label{culmop:28}
\item Did BA737 circle? \label{culmop:29}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, circling(ba737)]$ \label{culmop:30}
\end{examps}
The reader can check that the denotation of
\pref{culmop:30} w.r.t.\ $st$ is $T$ (i.e.\ the answer to
\pref{culmop:28} and \pref{culmop:29} is affirmative) iff there
is an $et$ which is a subperiod of a maximal period where BA737 was
circling, and $et$ ends before $st$. That is, the answer is
affirmative iff BA737 was circling at some time before $st$.
There is no requirement that any climax must
have been reached.
The reader will have noticed that in the case of verbs whose base
forms are culminating activities, the (non-progressive) simple past is
represented by adding a \ensuremath{\mathit{Culm}}\xspace operator to the expression that
represents the (progressive) past continuous. For example, assuming
that \qit{to build (something)} is a culminating activity,
\pref{culmop:36} is represented as \pref{culmop:37}, and
\pref{culmop:38} as \pref{culmop:39}.
\begin{examps}
\item Housecorp was building bridge 2. \label{culmop:36}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, building(housecorp, bridge2)]$ \label{culmop:37}
\item Housecorp built bridge 2. \label{culmop:38}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[building(housecorp, bridge2)]]$ \label{culmop:39}
\end{examps}
In contrast, in \cite{Dowty1977}, \cite{Lascarides}, \cite{Pirie1990},
\cite{Crouch2}, and \cite{Kamp1993}, progressive tenses are
represented by adding a progressive operator to the expressions that
represent the non-progressive tenses. For example, ignoring some
details, Pirie et al.\ represent \pref{culmop:36} and \pref{culmop:38}
as \pref{culmop:44} and \pref{culmop:46} respectively.
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Prog}}\xspace[build(housecorp, bridge2)]]$ \label{culmop:44}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, build(housecorp, bridge2)]$ \label{culmop:46}
\end{examps}
In \pref{culmop:46}, the semantics that Pirie et al.\ assign to
$build(housecorp, bridge2)$ require $et$ to cover the whole
building of the bridge by Housecorp, from its beginning to the point
where the building is complete. (The semantics of
\textsc{Top}\xspace's $building(housecorp, bridge2)$ in \pref{culmop:37}
require $et$ to be simply a period throughout which Housecorp is
building bridge 2.) The \ensuremath{\mathit{Past}}\xspace of \pref{culmop:46} requires $et$ (start
to completion of the building) to end before $st$. Hence, the answer to
\pref{culmop:38} is affirmative iff the building was completed before
$st$.
In \pref{culmop:44}, the semantics that Pirie et al.\ assign to
\ensuremath{\mathit{Prog}}\xspace require $et$ to be a subperiod of another period
$et'$ that covers the whole building (from start to completion;
figure \ref{prog_op_fig}). The \ensuremath{\mathit{Past}}\xspace of \pref{culmop:44}
requires $et$ to end before $st$. If, for
example, \pref{culmop:36} is submitted at an $st$ that falls between
the end of $et$ and the end of $et'$ (figure
\ref{prog_op_fig}), the answer will be affirmative. This is
correct, because at that $st$ Housecorp has already been building the
bridge for some time (although the bridge is not yet complete).
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{prog_op}
\caption{A flawed Prog operator}
\label{prog_op_fig}
\end{center}
\hrule
\end{figure}
The \ensuremath{\mathit{Prog}}\xspace of Pirie et al., however, has a flaw (acknowledged in
\cite{Crouch2}): \pref{culmop:44} implies that there is a period
$et'$, such that the building is completed at the end of $et'$; i.e.\
according to \pref{culmop:44} the building was or will be necessarily
completed at some time-point. This does not capture correctly the
semantics of \pref{culmop:36}. \pref{culmop:36} carries no
implication that the building was or will ever be
completed. (\textsc{Top}\xspace's representation of \pref{culmop:36}, i.e.\
\pref{culmop:37}, does not suffer from this problem: it contains
no assumption that the building is ever completed.) To overcome
similar problems with \ensuremath{\mathit{Prog}}\xspace operators, ``branching'' models
of time or ``possible worlds'' have been employed (see, for
example, \cite{Dowty1977}, \cite{McDermott1982}, \cite{Mays1986},
\cite{Kent}; see also \cite{Lascarides} for criticism of
possible-worlds approaches to progressives.) Approaches based on
branching time and possible worlds, however, seem unnecessarily
complicated for the purposes of this thesis.
\section{The At, Before, and After operators} \label{at_before_after_op}
The \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, and \ensuremath{\mathit{After}}\xspace operators are used to express punctual
adverbials, period adverbials, and \qit{while~\dots},
\qit{before~\dots}, and \qit{after~\dots} subordinate clauses (sections \ref{temporal_adverbials} and \ref{subordinate_clauses}).
For $\phi, \phi_1, \phi_2 \in \ensuremath{\mathit{YNFORMS}}\xspace$ and $\tau \in \ensuremath{\mathit{TERMS}}\xspace$:
\begin{itemize}
\item
\index{at@$\ensuremath{\mathit{At}}\xspace[\;]$ (narrows the localisation time)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{At}}\xspace[\tau, \phi]} = T$, iff
$\denot{g}{\tau} \in \ensuremath{\mathit{PERIODS}}\xspace$ and
$\denot{st,et,lt \intersect \denot{g}{\tau},g}{\phi} = T$.
\item
\index{at@$\ensuremath{\mathit{At}}\xspace[\;]$ (narrows the localisation time)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{At}}\xspace[\phi_1, \phi_2]} = T$, iff
for some $et'$ \\
$et' \in mxlpers(\{e \in \ensuremath{\mathit{PERIODS}}\xspace \mid \denot{st,e,\ensuremath{\mathit{PTS}}\xspace,g}{\phi_1} = T\})$
and
$\denot{st,et,lt \intersect et',g}{\phi_2} = T$.
\end{itemize}
If the first argument of \ensuremath{\mathit{At}}\xspace is a term $\tau$, then
$\tau$ must denote a period. The localisation time is
narrowed to the intersection of the original $lt$ with the period
of $\tau$. If the first argument of \ensuremath{\mathit{At}}\xspace is a formula $\phi_1$,
the localisation time of $\phi_2$ is narrowed to the
intersection of the original $lt$ with a maximal event time period
$et'$ at which $\phi_1$ holds. For example, \pref{atop:1} is
represented as \pref{atop:2}.
\begin{examps}
\item Was tank 2 empty (some time) on 25/9/95? \label{atop:1}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{25/9/95}, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{atop:2}
\end{examps}
In \pref{atop:2}, $lt$ initially covers the whole time-axis. The \ensuremath{\mathit{At}}\xspace
operator causes $lt$ to become the 25/9/95 period (I assume that the
constant $\mathit{25/9/95}$ denotes the obvious period), and the \ensuremath{\mathit{Past}}\xspace
operator narrows $lt$ to end before $st$ (if 25/9/95 is entirely in
the past, the \ensuremath{\mathit{Past}}\xspace operator has no effect). The answer to
\pref{atop:1} is affirmative iff it is possible to find an $et$ that is
a subperiod of the narrowed $lt$, such that tank 2 was empty during
$et$.
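
The behaviour just described can be sketched as follows: the adverbial's period
and the simple past each narrow $lt$, and an affirmative answer requires the
narrowed $lt$ to overlap some maximal period where tank 2 is empty; the day
numbering is an illustrative assumption.
\begin{verbatim}
# Illustrative sketch: At narrows lt to its intersection with the adverbial's
# period, Past narrows it further to end before st; the answer is affirmative
# iff the narrowed lt overlaps some maximal period where tank 2 is empty
# (their intersection then provides a suitable et).

T_FIRST = 0

def intersect(p, q):
    if p is None or q is None:
        return None
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo <= hi else None

def at_past_answer(adverbial_period, st, maximal_periods, pts):
    lt = intersect(pts, adverbial_period)        # At[25/9/95, ...]
    lt = intersect(lt, (T_FIRST, st - 1))        # Past[...]
    return any(intersect(lt, p) is not None for p in maximal_periods)

day_25_9_95 = (250, 250)                         # illustrative day numbering
empty_periods = [(240, 252)]                     # tank 2 empty over these days
print(at_past_answer(day_25_9_95, 300, empty_periods, (0, 400)))  # True
print(at_past_answer(day_25_9_95, 200, empty_periods, (0, 400)))  # False: lt empty
\end{verbatim}
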
If \pref{atop:1} is submitted before 25/9/95 (i.e.\ 25/9/95 starts
after $st$), the \textsc{Nlitdb}\xspace's answer will be negative, because the \ensuremath{\mathit{At}}\xspace
and \ensuremath{\mathit{Past}}\xspace operators cause $lt$ to become the empty set, and hence it
is impossible to find a subperiod $et$ of $lt$ where tank 2 is empty. A
simple negative response is unsatisfactory in this case: \pref{atop:1}
is unacceptable if uttered before 25/9/95, and the system should warn
the user about this. The unacceptability of \pref{atop:1} in this case
seems related to the unacceptability of \pref{atop:20}, which would be
represented as \pref{atop:21} (the definition of \ensuremath{\mathit{Part}}\xspace would have to
be extended to allow positive values of its third argument; see
section \ref{denotation}).
\begin{examps}
\item \bad Was tank 2 empty tomorrow? \label{atop:20}
\item $\ensuremath{\mathit{Part}}\xspace[day^c, tom^v, 1] \land \ensuremath{\mathit{At}}\xspace[tom^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$
\label{atop:21}
\end{examps}
In both cases, the combination of the simple past and the adverbial
causes $lt$ to become the empty set. In \pref{atop:20}, $tom^v$
denotes the period that covers exactly the day after $st$. The \ensuremath{\mathit{At}}\xspace and
\ensuremath{\mathit{Past}}\xspace operators set $lt$ to the intersection of that period with
$[t_{first}, st)$. The two periods do not overlap, and hence $lt =
\emptyset$, and it is impossible to find a subperiod $et$ of $lt$.
This causes the answer to be always negative, no matter what happens
in the world (i.e.\ regardless of when tank 2 is empty). Perhaps the
questions sound unacceptable because people, using a concept similar
to \textsc{Top}\xspace's $lt$, realise that the answers can never be affirmative.
This suggests that the \textsc{Nlitdb}\xspace should check if $lt = \emptyset$, and
if this is the case, generate a cooperative response (section
\ref{no_issues}) explaining that the question is problematic (this is
similar to the ``overlap rule'' of \cite{Harper} and the
``non-triviality constraint'' on p.~653 of \cite{Kamp1993}). The
framework of this thesis currently provides no such mechanism.
Moving to further examples, \pref{atop:3} and \pref{atop:22} are
represented as \pref{atop:4} and \pref{atop:23}.
Unlike the \qit{on 25/9/95} of
\pref{atop:1}, which is represented using a constant
($\mathit{25/9/95}$), the \qit{on Monday} of \pref{atop:3} is
represented using a variable ($mon^v$) that ranges over the periods of
the partitioning of Monday-periods. Similarly,
the \qit{at 5:00pm} of \pref{atop:22} is represented using a variable
($fv^v$) that ranges over the 5:00pm minute-periods.
\begin{examps}
\item Was tank 2 empty on Monday? \label{atop:3}
\item Was tank 2 empty on a Monday? \label{atop:24}
\item $\ensuremath{\mathit{Part}}\xspace[monday^g, mon^v] \land \ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$
\label{atop:4}
\item Was tank 2 empty on Monday at 5:00pm? \label{atop:22}
\item $\ensuremath{\mathit{Part}}\xspace[monday^g, mon^v] \land
\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]]$ \label{atop:23}
\end{examps}
\pref{atop:4} requires tank 2 to have been empty at some
past $et$ that falls within some Monday. No attempt is made to
determine exactly which Monday the user has in mind in \pref{atop:3}
(\pref{atop:3} is treated as equivalent to \pref{atop:24}; section
\ref{temporal_anaphora}). Similarly, \pref{atop:23} requires tank 2 to
have been empty at some past $et$ that falls within the intersection
of some 5:00pm-period with some Monday-period.
Assuming that \qit{to inspect} is a culminating activity (as in the
airport application), the reading of \pref{atop:28} that requires the
inspection to have both started and been completed within the previous
day (section \ref{period_adverbials}) is represented as
\pref{atop:29}. The \ensuremath{\mathit{Culm}}\xspace requires $et$ to cover exactly the whole
inspection, from its beginning to its completion. The \ensuremath{\mathit{Past}}\xspace requires
$et$ to end before $st$, and the \ensuremath{\mathit{At}}\xspace requires $et$ to fall within the
day before $st$.
\begin{examps}
\item Did J.Adams inspect BA737 yesterday? \label{atop:28}
\item $\ensuremath{\mathit{Part}}\xspace[day^c, y^v, -1] \land
\ensuremath{\mathit{At}}\xspace[y^v, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]]]$ \label{atop:29}
\end{examps}
In contrast, \pref{atop:26} is represented as \pref{atop:27}. In this
case, $et$ must be simply a subperiod of a maximal period where
J.Adams was inspecting BA737, and also be located
within the previous day.
\begin{examps}
\item Was J.Adams inspecting BA737 yesterday? \label{atop:26}
\item $\ensuremath{\mathit{Part}}\xspace[day^c, y^v, -1] \land
\ensuremath{\mathit{At}}\xspace[y^v, \ensuremath{\mathit{Past}}\xspace[e^v, inspecting(j\_adams, ba737)]]$ \label{atop:27}
\end{examps}
Finally, \pref{atop:30} is represented as \pref{atop:31}, which
intuitively requires BA737 to have been circling at some past period
$e2^v$, that falls within some past maximal period $e1^v$ where
gate 2 was open.
\begin{examps}
\item Did BA737 circle while gate 2 was open? \label{atop:30}
\item $\ensuremath{\mathit{At}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, open(gate2)], \ensuremath{\mathit{Past}}\xspace[e2^v, circling(ba737)]]$
\label{atop:31}
\end{examps}
The \ensuremath{\mathit{Before}}\xspace and \ensuremath{\mathit{After}}\xspace operators are similar. They are used to express
adverbials and subordinate clauses introduced by \qit{before} and
\qit{after}. For $\phi, \phi_1, \phi_2 \in \ensuremath{\mathit{YNFORMS}}\xspace$ and $\tau \in \ensuremath{\mathit{TERMS}}\xspace$:
\begin{itemize}
\item
\index{before@$\ensuremath{\mathit{Before}}\xspace[\;]$ (used to express \qit{before})}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Before}}\xspace[\tau, \phi]} = T$, iff
$\denot{g}{\tau} \in \ensuremath{\mathit{PERIODS}}\xspace$ and \\
$\denot{st,et,
lt \intersect [t_{first}, minpt(\denot{g}{\tau})),
g}{\phi} = T$.
\item
\index{before@$\ensuremath{\mathit{Before}}\xspace[\;]$ (used to express \qit{before})}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Before}}\xspace[\phi_1, \phi_2]} = T$, iff
for some $et'$ \\
$et' \in mxlpers(\{e \in \ensuremath{\mathit{PERIODS}}\xspace \mid \denot{st,e,\ensuremath{\mathit{PTS}}\xspace,g}{\phi_1} = T\})$,
and \\
$\denot{st,et,
lt \intersect [t_{first}, minpt(et')),
g}{\phi_2} = T$.
\item
\index{after@$\ensuremath{\mathit{After}}\xspace[\;]$ (used to express \qit{after})}
$\denot{st,et,lt,g}{\ensuremath{\mathit{After}}\xspace[\tau, \phi]} = T$, iff
$\denot{g}{\tau} \in \ensuremath{\mathit{PERIODS}}\xspace$ and \\
$\denot{st,et,
lt \intersect (maxpt(\denot{g}{\tau}), t_{last}],
g}{\phi} = T$.
\item
\index{after@$\ensuremath{\mathit{After}}\xspace[\;]$ (used to express \qit{after})}
$\denot{st,et,lt,g}{\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2]} = T$, iff
for some $et'$ \\
$et' \in mxlpers(\{e \in \ensuremath{\mathit{PERIODS}}\xspace \mid \denot{st,e,\ensuremath{\mathit{PTS}}\xspace,g}{\phi_1} = T\})$
and \\
$\denot{st,et,
lt \intersect (maxpt(et'), t_{last}],
g}{\phi_2} = T$.
\end{itemize}
If the first argument of \ensuremath{\mathit{Before}}\xspace is a term
$\tau$, $\tau$ must denote a period. The localisation time
is required to end before the beginning of $\tau$'s period. If the
first argument of \ensuremath{\mathit{Before}}\xspace is a formula $\phi_1$, the localisation time
of $\phi_2$ is required to end before the beginning of a maximal event
time period $et'$ where $\phi_1$ holds. The \ensuremath{\mathit{After}}\xspace operator is similar.
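
For comparison with \ensuremath{\mathit{At}}\xspace, the narrowing performed by \ensuremath{\mathit{Before}}\xspace and \ensuremath{\mathit{After}}\xspace when
their first argument is a term is sketched below; $t_{first}$, $t_{last}$, and
the example period are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: with a term as first argument, Before narrows lt to the
# part of the time-axis before the term's period, and After to the part after it.

T_FIRST, T_LAST = 0, 400

def intersect(p, q):
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo <= hi else None

def narrow_before(lt, period):
    return intersect(lt, (T_FIRST, period[0] - 1))   # lt with [t_first, minpt(period))

def narrow_after(lt, period):
    return intersect(lt, (period[1] + 1, T_LAST))    # lt with (maxpt(period), t_last]

day_25_9_95 = (250, 250)
print(narrow_before((0, 400), day_25_9_95))          # (0, 249)
print(narrow_after((0, 400), day_25_9_95))           # (251, 400)
\end{verbatim}
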
For example, \pref{atop:35} is expressed as \pref{atop:36}, and the
reading of \pref{atop:37} that requires BA737 to have departed after
the \emph{end} of a maximal period where the emergency system was in
operation is expressed as \pref{atop:38}. (I assume here that \qit{to
depart} is a point, as in the airport application.)
\begin{examps}
\item Was tank 2 empty before 25/9/95? \label{atop:35}
\item $\ensuremath{\mathit{Before}}\xspace[\mathit{25/9/95}, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{atop:36}
\item BA737 departed after the emergency system was in
operation. \label{atop:37}
\item $\ensuremath{\mathit{After}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, in\_operation(emerg\_sys)],
\ensuremath{\mathit{Past}}\xspace[e2^v, depart(ba737)]]$ \label{atop:38}
\end{examps}
\pref{atop:37} also has a reading where BA737 must have departed after
the emergency system \emph{started} to be in operation (section
\ref{before_after_clauses}). To express this reading, we need the
\ensuremath{\mathit{Begin}}\xspace operator of section \ref{begin_end_op} below.
\textsc{Top}\xspace's \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, and \ensuremath{\mathit{After}}\xspace operators are similar to those of
\cite{Pirie1990}. The operators of Pirie et
al., however, do not narrow $lt$ as in
\textsc{Top}\xspace. Instead, they place directly restrictions on $et$. For example,
ignoring some details, the $\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2]$
of Pirie et al.\ requires $\phi_2$ to hold at an event time $et_2$
that follows an $et_1$ where $\phi_1$ holds (both
$et_1$ and $et_2$ must fall within $lt$). Instead, \textsc{Top}\xspace's
$\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2]$ requires $et_1$ to be a maximal period where
$\phi_1$ holds ($et_1$ does not need to fall within the original $lt$), and
evaluates $\phi_2$ with respect to a narrowed $lt$, which is the
intersection of the original $lt$ with the part of the time-axis that
follows the end of $et_1$. In most cases, both
approaches lead to similar results. \textsc{Top}\xspace's approach, however, is
advantageous in sentences like \pref{atop:60}, where one may want to
express the reading whereby the tank was empty \emph{throughout}
26/9/95 (section \ref{period_adverbials}).
\begin{examps}
\item Tank 2 was empty on 26/9/95. \label{atop:60}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{26/9/95}, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{atop:61}
\end{examps}
In these cases one
wants $et$ (time where the tank was empty)
to cover all the available time, where by ``available time'' I mean
the part of the time-axis where the tense and the
adverbial allow $et$ to be placed. This notion of ``available
time'' is captured by \textsc{Top}\xspace's $lt$: the simple past and
the \qit{on 26/9/95} of \pref{atop:60} introduce \ensuremath{\mathit{At}}\xspace and
\ensuremath{\mathit{Past}}\xspace operators that, assuming that \pref{atop:60} is submitted after
26/9/95, cause $lt$ to become the period that covers exactly the
day 26/9/95. The intended reading can be expressed easily in \textsc{Top}\xspace by
including an additional operator that forces $et$ to cover the whole
$lt$ (this operator will be discussed in section \ref{fills_op}). This
method cannot be used in the language of Pirie et al. Their \ensuremath{\mathit{Past}}\xspace
operator narrows the $lt$ to the part of the time-axis up to $st$, but
their \ensuremath{\mathit{At}}\xspace does not narrow $lt$ any further; instead, it imposes a
direct restriction on $et$ (the semantics of Pirie et al.'s \ensuremath{\mathit{At}}\xspace is not
very clear, but it seems that this restriction requires $et$ to be a
subperiod of 26/9/95). Hence, $lt$ is left to be the time-axis up to
$st$, and one cannot require $et$ to cover the whole $lt$, because
this would require the tank to be empty all the time from $t_{first}$
to $st$.
The \ensuremath{\mathit{At}}\xspace operator of Pirie et al.\ also does not allow its first
argument to be a formula, and it is unclear how they represent
\qit{while~\dots} clauses. Finally, Pirie et al.'s \ensuremath{\mathit{Before}}\xspace allows
counter-factual uses of \qit{before} to be expressed (section
\ref{before_after_clauses}). Counter-factuals are not considered in this
thesis, and hence Pirie et al.'s \ensuremath{\mathit{Before}}\xspace will not be discussed any
further.
\section{The Fills operator} \label{fills_op}
As discussed in section \ref{period_adverbials}, when states combine
with period adverbials, there is often a reading where the situation
of the verb holds \emph{throughout} the adverbial's period. For
example, there is a reading of \pref{fop:1} where tank 2
was empty throughout 26/9/95, not at simply some part of that day.
\begin{examps}
\item Tank 2 was empty on 26/9/95. \label{fop:1}
\end{examps}
Similar behaviour was observed in cases where states
combine with \qit{while~\dots} subordinate clauses (section
\ref{while_clauses}). For example, there is a reading of
\pref{fop:3} whereby BA737 was at gate 2 throughout the
entire inspection of UK160 by J.Adams, not at simply some time during
the inspection.
\begin{examps}
\item BA737 was at gate 2 while J.Adams was inspecting UK160. \label{fop:3}
\end{examps}
It is also interesting that \pref{fop:5} cannot be understood as
saying that tank 2 was empty throughout the period of \qit{last
summer}. There is, however, a reading of \pref{fop:5} where tank 2 was
empty throughout the August of the previous summer.
\begin{examps}
\item Tank 2 was empty in August last summer. \label{fop:5}
\end{examps}
It seems that states give rise to readings where the situation of the
verb covers the whole available localisation time. \pref{fop:2} --
\pref{fop:10} would express the readings of \pref{fop:1} --
\pref{fop:5} that are under discussion, if there
were some way to force the event times of the predicates
$empty(tank2)$, $be\_at(ba737, gate2)$, and $empty(tank2)$ to cover their
whole localisation times.
\begin{examps}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{26/9/95}, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{fop:2}
\item $\ensuremath{\mathit{At}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, inspecting(j\_adams, uk160)],
\ensuremath{\mathit{Past}}\xspace[e2^v, be\_at(ba737, gate2)]]$ \label{fop:4}
\item $\ensuremath{\mathit{Part}}\xspace[august^g, aug^v] \land \ensuremath{\mathit{Part}}\xspace[summer^g, sum^v, -1] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[aug^v, \ensuremath{\mathit{At}}\xspace[sum^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]]$ \label{fop:10}
\end{examps}
The \ensuremath{\mathit{Fills}}\xspace operator achieves exactly this: it sets $et$ to the whole
of $lt$. For $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$:
\begin{itemize}
\item
\index{fills@$\ensuremath{\mathit{Fills}}\xspace[\;]$ (requires $et = lt$)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Fills}}\xspace[\phi]} = T$, iff $et = lt$ and
$\denot{st,et,lt,g}{\phi} = T$.
\end{itemize}
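
The difference that \ensuremath{\mathit{Fills}}\xspace makes can be sketched as follows: without it, $et$
merely has to be a subperiod of $lt$; with it, $et$ must coincide with $lt$.
The hour-level periods below are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: without Fills, et only needs to be a subperiod of lt;
# with Fills, et must be the whole of lt.

def subper(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def pred(et, lt, maximal_periods):
    return subper(et, lt) and any(subper(et, p) for p in maximal_periods)

def fills_pred(et, lt, maximal_periods):
    return et == lt and pred(et, lt, maximal_periods)

lt_26_9_95 = (0, 23)                     # the day 26/9/95, in hours
empty_morning = [(0, 11)]                # tank 2 empty only during the morning
empty_all_day = [(0, 30)]                # tank 2 empty throughout the day (and beyond)
print(pred((3, 5), lt_26_9_95, empty_morning))              # True: some et in the day
print(fills_pred(lt_26_9_95, lt_26_9_95, empty_morning))    # False: not empty all day
print(fills_pred(lt_26_9_95, lt_26_9_95, empty_all_day))    # True
\end{verbatim}
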
The readings of \pref{fop:1} -- \pref{fop:5} that are under discussion
can be expressed as \pref{fop:2b} -- \pref{fop:11} respectively.
\begin{examps}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{26/9/95}, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Fills}}\xspace[empty(tank2)]]]$ \label{fop:2b}
\item $\ensuremath{\mathit{At}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, inspecting(j\_adams, uk160)],$ \\
$ \ensuremath{\mathit{Past}}\xspace[e2^v, \ensuremath{\mathit{Fills}}\xspace[be\_at(ba737, gate2)]]]$ \label{fop:4b}
\item $\ensuremath{\mathit{Part}}\xspace[august^g, aug^v] \land \ensuremath{\mathit{Part}}\xspace[summer^g, sum^v, -1] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[aug^v, \ensuremath{\mathit{At}}\xspace[sum^v, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Fills}}\xspace[empty(tank2)]]]]$ \label{fop:11}
\end{examps}
This suggests that when state expressions
combine with period-specifying subordinate clauses or adverbials, the
\textsc{Nlitdb}\xspace could generate two formulae, one with and one without a
\ensuremath{\mathit{Fills}}\xspace, to capture the readings where $et$ covers
the whole or just part of $lt$. As mentioned
in section \ref{period_adverbials}, this approach (which was tested in
one version of the prototype \textsc{Nlitdb}\xspace) has the disadvantage that it
generates a formula for the reading where $et$ covers the whole $lt$
even in cases where this reading is impossible. In time-asking
questions like \pref{fop:35}, for example, the reading where $et$
covers the whole $lt$ (the whole 1994) is impossible, and hence the
corresponding formula should not be generated.
\begin{examps}
\item When was tank 5 empty in 1994? \label{fop:35}
\end{examps}
Devising an algorithm to decide when the formulae that contain
\ensuremath{\mathit{Fills}}\xspace should or should not be generated is a task which I have not
addressed. For simplicity, the prototype \textsc{Nlitdb}\xspace
and the rest of this thesis ignore the readings that require $et$
to cover the whole $lt$, and hence the \ensuremath{\mathit{Fills}}\xspace
operator is not used. The \ensuremath{\mathit{Fills}}\xspace operator, however, may prove useful
to other researchers who may attempt to explore further the topic of this
section.
\section{The Begin and End operators} \label{begin_end_op}
The \ensuremath{\mathit{Begin}}\xspace and \ensuremath{\mathit{End}}\xspace operators are used to refer to the time-points
where a situation starts or ends. For $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$:
\begin{itemize}
\item
\index{begin@$\ensuremath{\mathit{Begin}}\xspace[\;]$ (used to refer to start-points of situations)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Begin}}\xspace[\phi]} = T$, iff $et \subper lt$ and, for some $et'$, \\
$et' \in mxlpers(\{e \in \ensuremath{\mathit{PERIODS}}\xspace \mid \denot{st,e,\ensuremath{\mathit{PTS}}\xspace,g}{\phi} = T\})$
and $et = \{minpt(et')\}$.
\item
\index{end@$\ensuremath{\mathit{End}}\xspace[\;]$ (used to refer to end-points of situations)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{End}}\xspace[\phi]} = T$, iff $et \subper lt$ and, for some $et'$, \\
$et' \in mxlpers(\{e \in \ensuremath{\mathit{PERIODS}}\xspace \mid \denot{st,e,\ensuremath{\mathit{PTS}}\xspace,g}{\phi} = T\})$
and $et = \{maxpt(et')\}$.
\end{itemize}
$\ensuremath{\mathit{Begin}}\xspace[\phi]$ is true only at instantaneous event times $et$ that
are beginnings of maximal event times $et'$ where $\phi$
holds. The \ensuremath{\mathit{End}}\xspace operator is similar.
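The following minimal sketch (in Python; it is not part of the prototype
\textsc{Nlitdb}\xspace) illustrates this behaviour, assuming that periods are
represented as pairs of integer time-points and that the maximal periods where
$\phi$ holds are supplied directly:
\begin{verbatim}
# Illustrative sketch only: periods are (min_point, max_point) pairs of
# integers, mxlpers_phi lists the maximal periods where phi holds, and
# lt is the localisation time.

def begin_points(mxlpers_phi, lt):
    """Event times licensed by Begin[phi]: the start-points of the
    maximal periods where phi holds, provided they fall within lt."""
    return [(mn, mn) for (mn, mx) in mxlpers_phi if lt[0] <= mn <= lt[1]]

def end_points(mxlpers_phi, lt):
    """Event times licensed by End[phi]: the end-points of the maximal
    periods where phi holds, provided they fall within lt."""
    return [(mx, mx) for (mn, mx) in mxlpers_phi if lt[0] <= mx <= lt[1]]

# Two maximal periods where, say, "J.Adams is inspecting UK160" holds.
mxlpers_phi = [(3, 7), (12, 20)]
print(begin_points(mxlpers_phi, (0, 100)))   # [(3, 3), (12, 12)]
print(end_points(mxlpers_phi, (0, 15)))      # [(7, 7)]
\end{verbatim}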
The \ensuremath{\mathit{Begin}}\xspace and \ensuremath{\mathit{End}}\xspace operators can be used to express \qit{to
start}, \qit{to stop}, \qit{to begin}, and \qit{to finish} (section
\ref{special_verbs}). For example, \pref{beop:3} is expressed as
\pref{beop:4}. Intuitively, in \pref{beop:4} the
$\ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, uk160)]$
refers to an event-time period that covers exactly a complete
inspection of UK160 by J.Adams (from start to
completion). $\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, uk160)]]$ refers to the
end of that period, i.e.\ the completion point of J.Adams'
inspection. $\ensuremath{\mathit{Begin}}\xspace[inspecting(t\_smith, ba737)]$ refers to the
beginning of an inspection of BA737 by T.Smith. The beginning of
T.Smith's inspection must precede the completion point of J.Adams'
inspection, and both points must be in the past.
\begin{examps}
\item Did T.Smith start to inspect BA737 before J.Adams finished
inspecting UK160? \label{beop:3}
\item $\begin{aligned}[t]
\ensuremath{\mathit{Before}}\xspace[&\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, uk160)]]], \\
&\ensuremath{\mathit{Past}}\xspace[e2^v, \ensuremath{\mathit{Begin}}\xspace[inspecting(t\_smith, ba737)]]]
\end{aligned}$
\label{beop:4}
\end{examps}
The reading of \pref{atop:37} (section \ref{at_before_after_op}) that
requires BA737 to have departed after the emergency system
\emph{started} to be in operation can be expressed as
\pref{beop:6}. (The reading of \pref{atop:37} where BA737 must have
departed after the system \emph{stopped} being in operation is
expressed as \pref{atop:38}.)
\begin{examps}
\item $\ensuremath{\mathit{After}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Begin}}\xspace[in\_operation(emerg\_sys)]],
\ensuremath{\mathit{Past}}\xspace[e2^v, depart(ba737)]]$ \label{beop:6}
\end{examps}
\section{The Ntense operator} \label{ntense_op}
The framework of this thesis (section \ref{noun_anaphora}) allows noun
phrases like \qit{the sales manager} in \pref{ntop:1} to refer either
to the present (current sales manager) or the time of the verb tense
(1991 sales manager). The \ensuremath{\mathit{Ntense}}\xspace operator is used to represent these
two possible readings.
\begin{examps}
\item What was the salary of the sales manager in 1991?
\label{ntop:1}
\item $?slr^v \; \ensuremath{\mathit{Ntense}}\xspace[now^*, manager\_of(mgr^v, sales)] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[1991, \ensuremath{\mathit{Past}}\xspace[e^v, salary\_of(mgr^v, slr^v)]]$ \label{ntop:1.1}
\item $?slr^v \; \ensuremath{\mathit{Ntense}}\xspace[e^v, manager\_of(mgr^v, sales)] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[1991, \ensuremath{\mathit{Past}}\xspace[e^v, salary\_of(mgr^v, slr^v)]]$ \label{ntop:1.2}
\end{examps}
The reading of \pref{ntop:1} where \qit{the sales manager} refers to
the present is represented as \pref{ntop:1.1}, while the reading where
it refers to the time of
the verb tense is represented as \pref{ntop:1.2}. Intuitively,
\pref{ntop:1.1} reports any $slr^v$, such that $slr^v$ was the
salary of $mgr^v$ at some past time $e^v$ that falls within 1991, and
$mgr^v$ is the manager of the sales department \emph{at the present}.
In contrast, \pref{ntop:1.2} reports any $slr^v$, such
that $slr^v$ was the salary of $mgr^v$ at some past time $e^v$ that falls
within 1991, and $mgr^v$ was the manager of the sales department \emph{at}
$e^v$. Notice that in \pref{ntop:1.2} the first
argument of the \ensuremath{\mathit{Ntense}}\xspace is the same as the first argument of the
\ensuremath{\mathit{Past}}\xspace, which is a pointer to the past event time where
$salary\_of(mgr^v, slr^v)$ is true (see the semantics of \ensuremath{\mathit{Past}}\xspace in
section \ref{past_op}).
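The difference between the two readings can be illustrated with the following
minimal sketch (in Python; the data, names, and figures are invented and are
not drawn from any of the application domains):
\begin{verbatim}
# Illustrative sketch only: integer time-points 1..10; st = 10 is "now",
# and 1991 is taken to be the points 3..5.  sales_manager[t] is the sales
# manager at t; salary[(person, t)] is that person's salary at t.

sales_manager = {1: "adams", 2: "adams", 3: "adams", 4: "smith",
                 5: "smith", 6: "smith", 7: "jones", 8: "jones",
                 9: "jones", 10: "jones"}
salary = {("adams", 3): 17000, ("smith", 4): 18000, ("smith", 5): 18000,
          ("jones", 3): 15000, ("jones", 4): 15000, ("jones", 5): 15500}

ST, Y1991 = 10, range(3, 6)

# (ntop:1.1): Ntense[now*, ...] -- the salary, during 1991, of the person
# who is the sales manager at st.
current_mgr = sales_manager[ST]
reading_1 = {salary[(current_mgr, e)] for e in Y1991
             if (current_mgr, e) in salary}

# (ntop:1.2): Ntense[e^v, ...] -- the salary at e of whoever was the sales
# manager at that same past event time e within 1991.
reading_2 = {salary[(sales_manager[e], e)] for e in Y1991
             if (sales_manager[e], e) in salary}

print(reading_1)   # {15000, 15500}: 1991 salaries of the current manager
print(reading_2)   # {17000, 18000}: salaries of the 1991 managers
\end{verbatim}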
For $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ and $\beta \in \ensuremath{\mathit{VARS}}\xspace$ :
\begin{itemize}
\item
\index{ntense@$\ensuremath{\mathit{Ntense}}\xspace[\;]$ (used when expressing nouns or adjectives)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Ntense}}\xspace[\beta, \phi]} = T$, iff for some
$et' \in \ensuremath{\mathit{PERIODS}}\xspace$, it is true that $g(\beta)= et'$ and
$\denot{st,et',\ensuremath{\mathit{PTS}}\xspace,g}{\phi} = T$.
\item
\index{ntense@$\ensuremath{\mathit{Ntense}}\xspace[\;]$ (used when expressing nouns or adjectives)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Ntense}}\xspace[now^*, \phi]} = T$, iff
$\denot{st,\{st\},\ensuremath{\mathit{PTS}}\xspace,g}{\phi} = T$.
\end{itemize}
\ensuremath{\mathit{Ntense}}\xspace evaluates $\phi$ with
respect to a new event time $et'$, which may be different from the
original event time $et$ that is used to evaluate the part of the
formula outside the \ensuremath{\mathit{Ntense}}\xspace. Within the \ensuremath{\mathit{Ntense}}\xspace, the localisation time
is reset to \ensuremath{\mathit{PTS}}\xspace (whole time-axis) freeing $et'$ from restrictions
imposed on the original $et$. If the first argument of \ensuremath{\mathit{Ntense}}\xspace is
$now^*$, the new event time is the instantaneous period that contains
only $st$, i.e.\ the object to which the noun phrase refers must have
at $st$ the property described by $\phi$. If the first argument of
\ensuremath{\mathit{Ntense}}\xspace is a variable $\beta$, the new event time $et'$ can generally
be any period, and $\beta$ denotes $et'$. In
\pref{ntop:1.2}, however, $\beta$ is the same as the first
argument of the \ensuremath{\mathit{Past}}\xspace, which denotes the original
$et$ that the \ensuremath{\mathit{Past}}\xspace requires to be placed before $st$. This
means that $manager\_of(mgr^v, sales)$
must hold at the same event time where $salary\_of(mgr^v, slr^v)$
holds, i.e.\ the person $mgr^v$ must be the sales manager at the same
time where the salary of $mgr^v$ is $slr^v$. If the first argument of
the \ensuremath{\mathit{Ntense}}\xspace in \pref{ntop:1.2} and the first argument of the \ensuremath{\mathit{Past}}\xspace
were different variables, the answer would contain any 1991 salary of
anybody who was, is, or will be the sales manager at any time.
This would be useful in \pref{ntop:13}, where one may want to allow
\qit{Prime Minister} to refer to the Prime Ministers of all times, a
reading that can be expressed as \pref{ntop:16}.
\begin{examps}
\item Which Prime Ministers were born in Scotland? \label{ntop:13}
\item $?pm^v \; \ensuremath{\mathit{Ntense}}\xspace[e1^v, pminister(pm^v)] \land
\ensuremath{\mathit{Past}}\xspace[e2^v, birth\_in(pm^v,scotland)]$ \label{ntop:16}
\end{examps}
The framework of this thesis, however, does not currently generate
\pref{ntop:16}. \pref{ntop:13} would receive only two formulae, one
for current Prime Ministers, and one for persons that were Prime
Ministers at the time they were born (the latter reading is, of
course, unlikely).
Questions like \pref{ntop:5} and \pref{ntop:7}, where temporal
adjectives specify explicitly the times to which the noun phrases
refer, can be represented as \pref{ntop:6} and \pref{ntop:8}.
(The framework of this thesis, however, does not support
temporal adjectives other than \qit{current}; see section
\ref{temporal_adjectives}.)
\begin{examps}
\item What was the salary of the current sales manager in 1991?
\label{ntop:5}
\item $?slr^v \; \ensuremath{\mathit{Ntense}}\xspace[now^*, manager\_of(mgr^v, sales)] \; \land$\\
$\ensuremath{\mathit{At}}\xspace[1991, \ensuremath{\mathit{Past}}\xspace[e^v, salary\_of(mgr^v, slr^v)]]$ \label{ntop:6}
\item What was the salary of the 1988 sales manager in
1991? \label{ntop:7}
\item $?slr^v \; \ensuremath{\mathit{Ntense}}\xspace[e1^v, \ensuremath{\mathit{At}}\xspace[1988, manager\_of(mgr^v, sales)]]
\; \land$\\
$\ensuremath{\mathit{At}}\xspace[1991, \ensuremath{\mathit{Past}}\xspace[e^v, salary\_of(mgr^v, slr^v)]]$ \label{ntop:8}
\end{examps}
The \ensuremath{\mathit{Ntense}}\xspace operator of \textsc{Top}\xspace is the same as the \ensuremath{\mathit{Ntense}}\xspace operator of
\cite{Crouch} and \cite{Crouch2}.
\section{The For operator} \label{for_op}
The \ensuremath{\mathit{For}}\xspace operator is used to express \qit{for~\dots} and
duration \qit{in~\dots} adverbials (sections \ref{for_adverbials} and
\ref{in_adverbials}). For $\sigma_c \in \ensuremath{\mathit{CPARTS}}\xspace$, $\nu_{qty} \in
\{1,2,3,\dots\}$, and $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$:
\begin{itemize}
\item
\index{for@$\ensuremath{\mathit{For}}\xspace[\;]$ (used to express durations)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{For}}\xspace[\sigma_c, \nu_{qty}, \phi]} = T$, iff
$\denot{st,et,lt,g}{\phi} = T$, and for some $p_1,p_2,\dots,p_{\nu_{qty}}
\in \ensuremath{\mathit{f_{cparts}}}\xspace(\sigma_c)$, it is true that $minpt(p_1) = minpt(et)$,
$next(maxpt(p_1)) = minpt(p_2)$, $next(maxpt(p_2)) = minpt(p_3)$,
\dots, $next(maxpt(p_{\nu_{qty} - 1})) = minpt(p_{\nu_{qty}})$, and
$maxpt(p_{\nu_{qty}}) = maxpt(et)$.
\end{itemize}
$\ensuremath{\mathit{For}}\xspace[\sigma_c, \nu_{qty}, \phi]$ requires $\phi$ to be true at an event
time period that is $\nu_{qty}$ $\sigma_c$-periods long. For example,
assuming that $month^c$ denotes the partitioning of month-periods (the
period that covers exactly the August of 1995, the period for
September of 1995, etc.), \pref{forop:3} can be expressed as
\pref{forop:4}.
\begin{examps}
\item Was tank 2 empty for three months? \label{forop:3}
\item $\ensuremath{\mathit{For}}\xspace[month^c, 3, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{forop:4}
\end{examps}
\pref{forop:4} requires an event time $et$ to exist, such that $et$
covers exactly three consecutive months, and tank 2 was empty
throughout $et$.
As noted in section \ref{for_adverbials}, \qit{for~\dots}
adverbials are sometimes used to specify the duration of a
\emph{maximal} period where a situation holds, or to refer to the
\emph{total duration} of possibly non-overlapping periods where some
situation holds. The current version of \textsc{Top}\xspace cannot express such
readings.
Expressions like \qit{one week}, \qit{three months}, \qit{two years},
\qit{two hours}, etc., are often used to specify a duration of seven
days, $3 \times 30$ days, $2 \times 365$ days, $2 \times 60$ minutes,
etc. \pref{forop:4} expresses \pref{forop:3} if \qit{three months}
refers to \emph{calendar} months (e.g.\ from the beginning of a June
to the end of the following August). If \qit{three months} means $3
\times 30$ days, \pref{forop:10} has to be used instead. (I assume that
$day^c$ denotes the partitioning of day-periods: the period that
covers exactly 26/9/95, the period for 27/9/95, etc.)
\begin{examps}
\item $\ensuremath{\mathit{For}}\xspace[day^c, 90, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{forop:10}
\end{examps}
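When \qit{three months} is taken to refer to calendar months, the requirement
that \ensuremath{\mathit{For}}\xspace places on the event time can be checked
mechanically. The following minimal sketch (in Python; it is not part of the
prototype, and it uses \texttt{datetime} dates rather than \textsc{Top}\xspace's
time-points) tests whether a candidate event time covers exactly $\nu_{qty}$
consecutive complete calendar months:
\begin{verbatim}
# Illustrative sketch only: an event time is a pair of datetime.date
# objects, and month^c is the partitioning into calendar months.
from datetime import date, timedelta

def month_period(year, month):
    """The period covering exactly the given calendar month."""
    first = date(year, month, 1)
    next_first = date(year + (month == 12), month % 12 + 1, 1)
    return (first, next_first - timedelta(days=1))

def spans_n_months(et, n):
    """True iff et covers exactly n consecutive complete calendar months,
    as For[month^c, n, phi] requires of the event time."""
    start, end = et
    months = [month_period(start.year + (start.month - 1 + i) // 12,
                           (start.month - 1 + i) % 12 + 1)
              for i in range(n)]
    return start == months[0][0] and end == months[-1][1]

print(spans_n_months((date(1995, 6, 1), date(1995, 8, 31)), 3))   # True
print(spans_n_months((date(1995, 6, 1), date(1995, 8, 30)), 3))   # False
\end{verbatim}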
Assuming that \qit{to inspect} is a culminating
activity (as in the airport application), \pref{forop:13} represents the
reading of \pref{forop:12} where 42 minutes is the
duration from the beginning of the inspection to the
inspection's completion (section \ref{in_adverbials}). \pref{forop:13}
requires $et$ to cover the whole inspection (from beginning to
completion), $et$ to be in the past, and the duration of $et$ to be 42
minutes.
\begin{examps}
\item J.Adams inspected BA737 in 42 minutes. \label{forop:12}
\item $\ensuremath{\mathit{For}}\xspace[minute^c, 42, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]]]$
\label{forop:13}
\end{examps}
Unlike \pref{forop:12}, \pref{forop:14} does not require the
inspection to have been completed (section
\ref{for_adverbials}). \pref{forop:14} is represented as
\pref{forop:15}, which contains no \ensuremath{\mathit{Culm}}\xspace.
In this case, $et$ must simply be a
period throughout which J.Adams was inspecting BA737, it
must be located in the past, and it must be 42 minutes long.
\begin{examps}
\item J.Adams inspected BA737 for 42 minutes. \label{forop:14}
\item $\ensuremath{\mathit{For}}\xspace[minute^c, 42, \ensuremath{\mathit{Past}}\xspace[e^v, inspecting(j\_adams, ba737)]]$
\label{forop:15}
\end{examps}
\section{The Perf operator} \label{perf_op}
The \ensuremath{\mathit{Perf}}\xspace operator is used when expressing the past perfect. For
example, \pref{perfop:3} is expressed as \pref{perfop:4}. \ensuremath{\mathit{Perf}}\xspace could
also be used to express the present perfect (e.g.\ \pref{perfop:1}
could be represented as \pref{perfop:2}). This thesis, however, treats
the present perfect in the same way as the simple past (section
\ref{present_perfect}), and \pref{perfop:1} is mapped to
\pref{perfop:6}, the same formula that expresses \pref{perfop:5}.
\begin{examps}
\item BA737 had departed. \label{perfop:3}
\item $\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, depart(ba737)]]$ \label{perfop:4}
\item BA737 has departed. \label{perfop:1}
\item $\ensuremath{\mathit{Pres}}\xspace[\ensuremath{\mathit{Perf}}\xspace[e^v, depart(ba737)]]$ \label{perfop:2}
\item BA737 departed. \label{perfop:5}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, depart(ba737)]$ \label{perfop:6}
\end{examps}
For $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ and $\beta \in \ensuremath{\mathit{VARS}}\xspace$:
\begin{itemize}
\item
\index{perf@$\ensuremath{\mathit{Perf}}\xspace[\;]$ (used to express the past perfect)}
$\denot{st,et,lt,g}{\ensuremath{\mathit{Perf}}\xspace[\beta, \phi]} = T$, iff $et \subper
lt$, and for some $et' \in \ensuremath{\mathit{PERIODS}}\xspace$, it is true that $g(\beta) =
et'$, $maxpt(et') \prec minpt(et)$, and $\denot{st,et',\ensuremath{\mathit{PTS}}\xspace,g}{\phi}
= T$.
\end{itemize}
$\ensuremath{\mathit{Perf}}\xspace[\beta, \phi]$ holds at an event time $et$ only if $et$ is
preceded by a new event time $et'$ where $\phi$ holds (figure
\ref{perf_op_fig}). The original $et$ must be a subperiod of $lt$. In
contrast $et'$ does not need to be a subperiod of $lt$ (the
localisation time in $\denot{st,et',\ensuremath{\mathit{PTS}}\xspace,g}{\phi}$ is reset to \ensuremath{\mathit{PTS}}\xspace,
the whole time-axis). The $\beta$ of $\ensuremath{\mathit{Perf}}\xspace[\beta, \phi]$ is a pointer
to $et'$, similar to the $\beta$ of $\ensuremath{\mathit{Past}}\xspace[\beta,\phi]$.
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{perf_op}
\caption{The Perf operator}
\label{perf_op_fig}
\end{center}
\hrule
\end{figure}
Ignoring constraints imposed by
$lt$, the event time $et$ where $\ensuremath{\mathit{Perf}}\xspace[\beta, \phi]$ is true can be
placed anywhere within the period that starts immediately after the
end of $et'$ ($et'$ is where $\phi$ is true) and that extends up to
$t_{last}$. The informal term ``consequent period'' was used in section
\ref{point_adverbials} to refer to this period.
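Ignoring the variable $\beta$, the truth conditions of
\ensuremath{\mathit{Perf}}\xspace are easy to check once the periods where $\phi$ holds
are known. The following minimal sketch (in Python; it is not part of the
prototype, and periods are represented as pairs of integer time-points)
illustrates this:
\begin{verbatim}
# Illustrative sketch only: periods are (min_point, max_point) pairs of
# integers; phi_periods lists the periods where phi holds.

def subperiod(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def perf_holds(et, lt, phi_periods):
    """Perf[b, phi] holds at et (with localisation time lt) iff et falls
    within lt and some period et' where phi holds ends strictly before
    et starts; lt is not passed down to phi."""
    return subperiod(et, lt) and any(etp[1] < et[0] for etp in phi_periods)

# phi = Culm[inspecting(ja, g2)]: assume the single period covering the
# whole inspection (start to completion) is (5, 9).
inspection = [(5, 9)]
print(perf_holds((12, 14), (1, 20), inspection))  # True: in the consequent period
print(perf_holds((3, 4), (1, 20), inspection))    # False: inspection not yet complete
\end{verbatim}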
Using the \ensuremath{\mathit{Perf}}\xspace operator, the reading of \pref{perfop:7} where
the inspection happens at some time before (or possibly on) 27/9/95 is
expressed as \pref{perfop:8} (in this case, \qit{on 27/9/95} provides
a ``reference time''; see section \ref{past_perfect}). In contrast,
the reading of \pref{perfop:7} where the inspection happens on
27/9/95 is expressed as \pref{perfop:9}.
\begin{examps}
\item J.Adams had inspected gate 2 on 27/9/95. \label{perfop:7}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{27/9/95},
\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]]$
\label{perfop:8}
\item $\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[\mathit{27/9/95},
\ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]]$
\label{perfop:9}
\end{examps}
Let us explore formally the denotations of
\pref{perfop:8} and \pref{perfop:9}. The denotation of \pref{perfop:8}
w.r.t.\ $st$ is $T$ iff for some $et \in \ensuremath{\mathit{PERIODS}}\xspace$ and $g \in G$,
\pref{perfop:10} holds.
\begin{examps}
\item $\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}
{\ensuremath{\mathit{At}}\xspace[\mathit{27/9/95},
\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]]} = T$
\label{perfop:10}
\end{examps}
Assuming that $\mathit{27/9/95}$ denotes the obvious period, by
the definition of \ensuremath{\mathit{At}}\xspace, \pref{perfop:10} holds iff \pref{perfop:11} is
true ($\ensuremath{\mathit{PTS}}\xspace \intersect \ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95}) = \ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95})$).
\begin{examps}
\item $\denot{st,et,\ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95}),g}
{\ensuremath{\mathit{Past}}\xspace[e1^v,
\ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]} = T$
\label{perfop:11}
\end{examps}
By the definition of \ensuremath{\mathit{Past}}\xspace, ignoring $e1^v$ which
does not play any interesting role here, and assuming that $st$
follows 27/9/95, \pref{perfop:11} is true iff
\pref{perfop:12} holds.
\begin{examps}
\item $\denot{st, et, \ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95}), g}
{\ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]} = T$
\label{perfop:12}
\end{examps}
By the definition of \ensuremath{\mathit{Perf}}\xspace (ignoring $e2^v$),
\pref{perfop:12} holds iff for some $et' \in \ensuremath{\mathit{PERIODS}}\xspace$,
\pref{perfop:14}, \pref{perfop:15}, and \pref{perfop:16} hold.
\begin{gather}
et \subper \ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95}) \label{perfop:14} \\
maxpt(et') \prec minpt(et) \label{perfop:15} \\
\denot{st,et',\ensuremath{\mathit{PTS}}\xspace,g}{\ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]} = T \label{perfop:16}
\end{gather}
By the definition of \ensuremath{\mathit{Culm}}\xspace, \pref{perfop:16} holds iff
\pref{perfop:17} -- \pref{perfop:21} hold.
\begin{gather}
et' \subper \ensuremath{\mathit{PTS}}\xspace \label{perfop:17} \\
\ensuremath{\mathit{f_{culms}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(ja), \ensuremath{\mathit{f_{cons}}}\xspace(g2)) = T
\label{perfop:18} \\
S = \bigcup_{p \in \ensuremath{\mathit{f_{pfuns}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(ja), \ensuremath{\mathit{f_{cons}}}\xspace(g2))}p
\label{perfop:19} \\
S \not= \emptyset \label{perfop:20} \\
et' = [minpt(S), maxpt(S)] \label{perfop:21}
\end{gather}
Let us assume that there is only one maximal period where J.Adams is
inspecting gate 2, and that the inspection is completed at the end of
that period. Then, the $S$ of \pref{perfop:19} is the
maximal period, and \pref{perfop:18} and \pref{perfop:20}
hold. \pref{perfop:21} requires $et'$ to be the same period as $S$, in
which case \pref{perfop:17} is trivially satisfied. The denotation of
\pref{perfop:8} w.r.t.\ $st$ is $T$ (i.e.\ the answer to
\pref{perfop:7} is affirmative) iff for some $et$, $et' = S$, and
\pref{perfop:14} and \pref{perfop:15} hold, i.e.\ iff there is an
$et$ within 27/9/95, such that $et$ follows $S$
($S = et'$ is the period that covers the whole inspection). The
situation is depicted in figure \ref{perf_op2_fig}. In other words,
27/9/95 must contain an $et$ where the inspection has already
been completed.
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{perf_op2}
\caption{First reading of \qit{J.Adams had inspected gate 2 on
27/9/95}}
\label{perf_op2_fig}
\end{center}
\hrule
\end{figure}
Let us now consider \pref{perfop:9}. Its denotation w.r.t.\ $st$ will
be true iff for some $et \in \ensuremath{\mathit{PERIODS}}\xspace$ and $g \in G$, \pref{perfop:22}
holds.
\begin{examps}
\item $\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}
{\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[\mathit{27/9/95},
\ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]]}
= T$
\label{perfop:22}
\end{examps}
By the definition of \ensuremath{\mathit{Past}}\xspace, \pref{perfop:22} holds iff
\pref{perfop:23} is true. (For simplicity, I ignore again $e1^v$ and
$e2^v$.)
\begin{examps}
\item $\denot{st,et,[t_{first}, st),g}
{\ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[\mathit{27/9/95},
\ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]} = T$
\label{perfop:23}
\end{examps}
By the definition of \ensuremath{\mathit{Perf}}\xspace, \pref{perfop:23} is true iff
for some $et' \in \ensuremath{\mathit{PERIODS}}\xspace$, \pref{perfop:24}, \pref{perfop:25}, and
\pref{perfop:26} hold.
\begin{gather}
et \subper [t_{first}, st) \label{perfop:24} \\
maxpt(et') \prec minpt(et) \label{perfop:25} \\
\denot{st,et',\ensuremath{\mathit{PTS}}\xspace,g}{\ensuremath{\mathit{At}}\xspace[\mathit{27/9/95},
\ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]} = T
\label{perfop:26}
\end{gather}
By the definition of the \ensuremath{\mathit{At}}\xspace operator, \pref{perfop:26}
holds iff \pref{perfop:27} holds. (I assume again that
$\mathit{27/9/95}$ denotes the obvious period.)
\begin{equation}
\denot{st,et',\ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95}),g}{\ensuremath{\mathit{Culm}}\xspace[inspecting(ja,
g2)]}
= T \label{perfop:27}
\end{equation}
By the definition of \ensuremath{\mathit{Culm}}\xspace, \pref{perfop:27} holds iff
\pref{perfop:28} -- \pref{perfop:32} are true.
\begin{gather}
et' \subper \ensuremath{\mathit{f_{cons}}}\xspace(\mathit{27/9/95}) \label{perfop:28} \\
\ensuremath{\mathit{f_{culms}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(ja), \ensuremath{\mathit{f_{cons}}}\xspace(g2)) = T
\label{perfop:29} \\
S = \bigcup_{p \in \ensuremath{\mathit{f_{pfuns}}}\xspace(inspecting, 2)(\ensuremath{\mathit{f_{cons}}}\xspace(ja), \ensuremath{\mathit{f_{cons}}}\xspace(g2))}p
\label{perfop:30} \\
S \not= \emptyset \label{perfop:31} \\
et' = [minpt(S), maxpt(S)] \label{perfop:32}
\end{gather}
Assuming again that there is only one maximal period where J.Adams is
inspecting gate 2, and that the inspection is completed at the end of
that period, the $S$ of \pref{perfop:30} is the
maximal period, and \pref{perfop:29} and \pref{perfop:31} hold.
\pref{perfop:32} requires $et'$ to be the same as $S$. The
denotation of \pref{perfop:9} w.r.t.\ $st$ is $T$ (i.e.\ the answer to
\pref{perfop:7} is affirmative) iff for some $et$, $et' = S$, and
\pref{perfop:24}, \pref{perfop:25}, and \pref{perfop:28} hold. That is,
there must be some past $et$ that follows $S$ ($S = et'$ is the period
that covers the whole inspection), with $S$ falling within 27/9/95
(figure \ref{perf_op3_fig}). The inspection must have been completed
within 27/9/95.
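The contrast between the two readings can also be checked mechanically. The
following minimal sketch (in Python; it is not part of the prototype, and it
uses invented integer time-points, with the single maximal inspection period
$S$ supplied directly) encodes the two conditions derived above:
\begin{verbatim}
# Illustrative sketch only: S is the period covering the whole inspection
# (start to completion), day is the period covering 27/9/95, and st is
# the speech time, assumed to follow 27/9/95.

def subperiod(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def reading_perfop8(day, S):
    """(perfop:8): some et within 27/9/95 follows the completion of S."""
    return any(S[1] < t for t in range(day[0], day[1] + 1))

def reading_perfop9(day, st, S):
    """(perfop:9): S falls within 27/9/95 and some past et follows it."""
    return subperiod(S, day) and S[1] < st

DAY, ST = (100, 123), 200
print(reading_perfop8(DAY, (40, 60)))        # True: completed before the day
print(reading_perfop9(DAY, ST, (40, 60)))    # False: not completed within the day
print(reading_perfop9(DAY, ST, (105, 110)))  # True: completed within 27/9/95
\end{verbatim}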
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{perf_op3}
\caption{Second reading of \qit{J.Adams had inspected gate 2 on
27/9/95}}
\label{perf_op3_fig}
\end{center}
\hrule
\end{figure}
In \pref{perfop:33}, where there are no temporal adverbials, the
corresponding formula \pref{perfop:34} requires some past
$et$ (pointed to by $e1^v$) to exist, such that $et$ follows an
$et'$ (pointed to by $e2^v$) that covers exactly the whole
(from start to completion) inspection of gate 2 by
J.Adams. The net effect is that the inspection must have been
completed in the past.
\begin{examps}
\item J.Adams had inspected gate 2. \label{perfop:33}
\item $\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(ja, g2)]]]$
\label{perfop:34}
\end{examps}
As noted in section \ref{past_perfect}, there is a reading of
\pref{perfop:35} (probably the preferred one)
whereby the two-year period ends on 1/1/94, i.e.\ J.Adams was still a
manager on 1/1/94. Similarly, there is a reading of \pref{perfop:37},
whereby the two-year period ends at $st$, i.e.\ J.Adams is still a
manager (section \ref{present_perfect}). These readings cannot be
captured in \textsc{Top}\xspace.
\begin{examps}
\item On 1/1/94, J.Adams had been a manager for two years. \label{perfop:35}
\item J.Adams has been a manager for two years. \label{perfop:37}
\end{examps}
For example, \pref{perfop:36} requires some past $et$
(pointed to by $e1^v$) to exist, such that $et$ falls within 1/1/94,
$et$ follows a period $et'$ (pointed to by $e2^v$), $et'$
is a period where J.Adams is a manager, and
the duration of $et'$ is two years. If, for example, J.Adams was a
manager only from 1/1/88 to 31/12/89, \pref{perfop:36}
causes the answer to \pref{perfop:35} to be affirmative.
\pref{perfop:36} does not require the two-year period to end on 1/1/94.
\begin{examps}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{1/1/94},
\ensuremath{\mathit{Past}}\xspace[e1^v,
\ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{For}}\xspace[year^c, 2, be(ja, manager)]]]]$
\label{perfop:36}
\end{examps}
Various versions of \ensuremath{\mathit{Perf}}\xspace operators have been used in
\cite{Dowty1982}, \cite{Richards}, \cite{Pirie1990}, \cite{Crouch2},
and elsewhere.
\section{Occurrence identifiers} \label{occurrence_ids}
Predicates introduced by verbs whose base forms are culminating
activities often have an extra argument that acts as an
\emph{occurrence identifier}. Let us consider a scenario involving an
engineer, John, who worked on engine 2 repairing
faults of the engine at several past times (figure
\ref{episodes_fig}). John started repairing a
fault of engine 2 on 1/6/92 at 9:00am. He continued to work on this
fault up to 1:00pm on the same day, at which point he temporarily
abandoned the repair without completing it. He resumed the repair at
3:00pm on 25/6/92, and completed it at 5:00pm on the same day.
\begin{figure}[tb]
\hrule
\medskip
\begin{center}
\includegraphics[scale=.58]{episodes}
\caption{Occurrence identifiers}
\label{episodes_fig}
\end{center}
\hrule
\end{figure}
In 1993, John was asked to repair another fault of engine 2. He
started the repair on 1/7/93 at 9:00am, and continued to work on that
fault up to 1:00pm on the same day without completing the repair. He
then abandoned the repair for ever (John was not qualified to fix that
fault, and the repair was assigned to another engineer). Finally, in
1994 John was asked to repair a third fault of engine 2. He started
to repair the third fault on 1/6/94 at 9:00am, and continued to
work on that fault up to 1:00pm on the same day, without completing
the repair. He resumed the repair at 3:00pm, and completed it at
5:00pm on the same day.
There is a problem if \pref{epid:1} is represented as
\pref{epid:2}. Let us assume that the question is submitted after
1/6/94. One would expect the answer to be affirmative, since a
complete past repair of engine 2 by John is situated within 1/6/94. In
contrast, \pref{epid:2} causes the answer to be negative. The
semantics of \ensuremath{\mathit{Culm}}\xspace (section \ref{culm_op}) requires
$et$ to start at the beginning of the earliest maximal
period where $repairing(john, eng2)$ holds (i.e.\ at the beginning of
$p_1$ in figure \ref{episodes_fig}) and to end at the end of the
latest maximal period where $repairing(john, eng2)$ holds (i.e.\ at
the end of $p_5$ in figure \ref{episodes_fig}). That is, $et$ must
be $p_8$ of figure \ref{episodes_fig}. The \ensuremath{\mathit{At}}\xspace requires
$et$ ($p_8$) to be also a subperiod of 1/6/94. Since this is not the
case, the answer is negative.
\begin{examps}
\item Did John repair engine 2 on 1/6/94? \label{epid:1}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{1/6/94}, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(john,eng2)]]]$
\label{epid:2}
\end{examps}
The problem is that although John was repairing engine 2 during all
five periods ($p_1$, $p_2$, $p_3$, $p_4$, and $p_5$), the five periods
intuitively belong to different occurrences of the situation where
John is repairing engine 2. The first two periods have to do with the
repair of the first fault (occurrence 1), the third period has to do with
the repair of the second fault (occurrence 2), and the last two periods
relate to the repair of the third fault (occurrence 3). The
$\ensuremath{\mathit{Culm}}\xspace[repairing(john,eng2)]$ of \pref{epid:2}, however, does not
distinguish between the three occurrences, and forces $et$ to start at
the beginning of $p_1$ and to end at the end of $p_5$. Instead, we
would like $\ensuremath{\mathit{Culm}}\xspace[repairing(john,eng2)]$ to distinguish between the
three occurrences: to require $et$ to start at the beginning of $p_1$
(beginning of the first repair) and to end at the end of $p_2$
(completion of the first repair), or to require $et$ to start at
the beginning of $p_4$ (beginning of the third repair) and to end at
the end of $p_5$ (completion of the third
repair). ($\ensuremath{\mathit{Culm}}\xspace[repairing(john,eng2)]$ should not allow $et$ to be
$p_3$, because the second repair does not reach its completion at the
end of $p_3$.)
To achieve this, an occurrence-identifying argument is added to
$repairing(john,eng2)$. If $occ1$, $occ2$, and $occ3$ denote
the three repairing-occurrences,
$repairing(occ1,john,eng2)$ will be true only at $et$s that are
subperiods of $p_1$ or $p_2$, $repairing(occ2, john, eng2)$ only at
$et$s that are subperiods of $p_3$, and $repairing(occ3, john,
eng2)$ only at $et$s that are subperiods of $p_4$ or $p_5$. In
practice, the occurrence-identifying argument is always a
variable. For example, \pref{epid:1} is now represented as
\pref{epid:3} instead of \pref{epid:2}.
\begin{examps}
\item $\ensuremath{\mathit{At}}\xspace[\mathit{1/6/94}, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occ^v,john,eng2)]]]$
\label{epid:3}
\end{examps}
Intuitively, according to \pref{epid:3} the answer should be
affirmative if there is an $et$ and a particular occurrence $occ^v$ of
the situation where John is repairing engine 2, such that $et$ starts
at the beginning of the first period where $occ^v$ is ongoing, $et$
ends at the end of the last period where $occ^v$ is ongoing, $occ^v$
reaches its completion at the end of $et$, and $et$ falls within the
past and 1/6/94. Now if \pref{epid:1} is submitted after 1/6/94, the
answer is affirmative.
To see that \pref{epid:3} generates the correct result, let us
examine the denotation of \pref{epid:3}. The denotation of
\pref{epid:3} w.r.t.\ $st$ is affirmative if for some $et \in
\ensuremath{\mathit{PERIODS}}\xspace$ and $g \in G$, \pref{epid:4} holds.
\begin{examps}
\item $\denot{st,et,\ensuremath{\mathit{PTS}}\xspace,g}
{\ensuremath{\mathit{At}}\xspace[\mathit{1/6/94}, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occ^v,john,eng2)]]]}
= T$ \label{epid:4}
\end{examps}
Assuming that the question is submitted after 1/6/94, and that
$\mathit{1/6/94}$ denotes the obvious period, by the definitions of
\ensuremath{\mathit{At}}\xspace and \ensuremath{\mathit{Past}}\xspace, \pref{epid:4} holds iff \pref{epid:5} and \pref{epid:6} hold.
\begin{gather}
g(e^v) = et \label{epid:5} \\
\denot{st,et,\ensuremath{\mathit{f_{cons}}}\xspace(\mathit{1/6/94}),g}{\ensuremath{\mathit{Culm}}\xspace[repairing(occ^v,john,eng2)]}
= T \label{epid:6}
\end{gather}
By the definition of \ensuremath{\mathit{Culm}}\xspace, \pref{epid:6} holds iff
\pref{epid:7} -- \pref{epid:10} hold, where $S$ is as in \pref{epid:11}.
\begin{gather}
et \subper \ensuremath{\mathit{f_{cons}}}\xspace(\mathit{1/6/94}) \label{epid:7} \\
\ensuremath{\mathit{f_{culms}}}\xspace(repairing,2)(g(occ^v), \ensuremath{\mathit{f_{cons}}}\xspace(john), \ensuremath{\mathit{f_{cons}}}\xspace(eng2)) = T
\label{epid:8} \\
S \not= \emptyset \label{epid:9} \\
et = [minpt(S), maxpt(S)] \label{epid:10} \\
S =
\bigcup_{p \in \ensuremath{\mathit{f_{pfuns}}}\xspace(repairing, 2)(g(occ^v), \ensuremath{\mathit{f_{cons}}}\xspace(john), \ensuremath{\mathit{f_{cons}}}\xspace(eng2))}p
\label{epid:11}
\end{gather}
The denotation of \pref{epid:3} w.r.t.\ $st$ is $T$ (i.e.\ the
answer to \pref{epid:1} is affirmative), iff for some $et \in
\ensuremath{\mathit{PERIODS}}\xspace$ and $g \in G$, \pref{epid:5} and \pref{epid:7} --
\pref{epid:10} hold.
For $et$ as in \pref{epid:21} and $g$ the variable assignment of
\pref{epid:20}, \pref{epid:5} and \pref{epid:7}
hold. \pref{epid:11} becomes \pref{epid:13}, and \pref{epid:9} holds.
\pref{epid:10} becomes \pref{epid:21}, which holds ($et$ was chosen to
satisfy it). \pref{epid:8} also holds, because
the third repair is completed at the end of $p_5$.
\begin{gather}
et = [minpt(p_4), maxpt(p_5)] = p_7 \label{epid:21} \\
g(\beta) =
\begin{cases}
et & \text{if } \beta = e^v \\
\ensuremath{\mathit{f_{cons}}}\xspace(occ3) & \text{if } \beta = occ^v \\
o & \text{otherwise ($o$ is an arbitrary element of \ensuremath{\mathit{OBJS}}\xspace)}
\end{cases}
\label{epid:20} \\
S = p_4 \union p_5 \label{epid:13}
\end{gather}
Hence, there is some $et \in \ensuremath{\mathit{PERIODS}}\xspace$ and $g \in G$ for which
\pref{epid:5} and \pref{epid:7} -- \pref{epid:10} hold, i.e.\ the
answer to \pref{epid:1} will be affirmative as wanted.
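The following minimal sketch (in Python; it is not part of the prototype, and
the time-points and periods are invented, with 1/6/94 taken to be the period
from 400 to 423) summarises the effect of the occurrence-identifying argument:
\begin{verbatim}
# Illustrative sketch only: the maximal periods of each repairing-occurrence,
# and whether the occurrence reaches its completion, are supplied directly;
# time-points are integers, and 1/6/94 is taken to be the period (400, 423).

occurrences = {
    "occ1": {"periods": [(10, 14), (50, 52)],     "completed": True},
    "occ2": {"periods": [(200, 204)],             "completed": False},
    "occ3": {"periods": [(400, 404), (406, 408)], "completed": True},
}

def subperiod(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def culm_event_times(occs):
    """Event times licensed by Culm[repairing(occ^v, john, eng2)]: for each
    completed occurrence, the period from the start of its first maximal
    period to the end of its last one."""
    return [(min(s for s, e in o["periods"]), max(e for s, e in o["periods"]))
            for o in occs.values() if o["completed"]]

DAY_1_6_94 = (400, 423)
# (epid:3): is some licensed et a subperiod of 1/6/94?
print(any(subperiod(et, DAY_1_6_94) for et in culm_event_times(occurrences)))
# True.  Without occurrence identifiers et would have to stretch from 10 to
# 408, and the answer would wrongly be negative:
print(subperiod((10, 408), DAY_1_6_94))   # False
\end{verbatim}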
Occurrence identifiers are a step towards formalisms that treat
occurrences of situations (or ``events'' or ``episodes'') as objects
in the modelled world (e.g.\ \cite{Parsons1990}, \cite{Kamp1993},
\cite{Blackburn1994}, \cite{Hwang1994}). In \textsc{Top}\xspace all terms (constants
and variables) denote elements of \ensuremath{\mathit{OBJS}}\xspace, i.e.\ objects of the modelled
world. Thus, allowing occurrence-identifying terms (like $occ^v$ in
\pref{epid:3}) implies that occurrences of situations are also world
objects. Unlike other formalisms (e.g.\ those mentioned above),
however, \textsc{Top}\xspace does not treat these occurrence-identifying terms in
any special way, and there is nothing in the definition of \textsc{Top}\xspace to
distinguish objects denoted by occurrence-identifiers from objects
denoted by other terms.
\section{Tense anaphora and localisation time} \label{lt_anaphora}
Although tense anaphora (section
\ref{temporal_anaphora}) was not considered during the work of this thesis,
it seems that \textsc{Top}\xspace's localisation time could prove useful if
this phenomenon were to be supported. As noted in section
\ref{temporal_anaphora}, some cases of tense anaphora can be
handled by storing the temporal window established by adverbials and
tenses of previous questions, and by requiring the situations of
follow-up questions to fall within that window. \textsc{Top}\xspace's $lt$ can
capture this notion of previous window. Assuming that \pref{ltan:1} is
submitted after 1993, the \ensuremath{\mathit{At}}\xspace and \ensuremath{\mathit{Past}}\xspace operators of the corresponding
formula \pref{ltan:2} narrow $lt$ to the period that covers exactly
1993. This period could be stored, and used as the initial value of
$lt$ in \pref{ltan:4}, that expresses the follow-up question
\pref{ltan:3}. In effect, \pref{ltan:3} would be taken to mean \pref{ltan:5}.
\begin{examps}
\item Was Mary the personnel manager in 1993? \label{ltan:1}
\item $\ensuremath{\mathit{At}}\xspace[1993, \ensuremath{\mathit{Past}}\xspace[e^v, manager\_of(mary, personnel)]]$ \label{ltan:2}
\item Who was the personnel manager? \label{ltan:3}
\item $?wh^v \; \ensuremath{\mathit{Past}}\xspace[e^v, manager\_of(wh^v, personnel)]$ \label{ltan:4}
\item Who was the personnel manager in 1993? \label{ltan:5}
\end{examps}
Substantial improvements are needed to make these ideas workable. For
example, if \pref{ltan:1} and \pref{ltan:3} are followed by
\pref{ltan:6} (expressed as \pref{ltan:7}), and the dialogue takes
place after 1993, the \textsc{Nlitdb}\xspace must be intelligent enough to reset $lt$
to the whole time axis. Otherwise, no person will ever be reported,
because the \ensuremath{\mathit{Pres}}\xspace of \pref{ltan:7} requires $et$ to contain $st$, and
an $et$ that contains $st$ can never fall within the past year 1993
(the $lt$ of the previous question).
\begin{examps}
\item Who is (now) the personnel manager? \label{ltan:6}
\item $?wh^v \; \ensuremath{\mathit{Pres}}\xspace[manager\_of(wh^v, personnel)]$ \label{ltan:7}
\end{examps}
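The following minimal sketch (in Python; it is not part of the prototype, and
the integer time-points, periods, and data are invented) illustrates the
intended behaviour:
\begin{verbatim}
# Illustrative sketch only: integer time-points, with st = 700, 1993 taken
# to be the period (300, 364), and the whole time-axis (0, 1000).

WHOLE_AXIS, ST, YEAR_1993 = (0, 1000), 700, (300, 364)

def answer(lt, et_candidates):
    """Report the candidate event times that fall within lt."""
    return [et for et in et_candidates if lt[0] <= et[0] and et[1] <= lt[1]]

# (ltan:1) narrows lt to 1993; the window is stored for follow-up questions.
previous_lt = YEAR_1993

# (ltan:3) inherits the stored window, so it is read as (ltan:5).
print(answer(previous_lt, [(310, 340), (600, 650)]))   # [(310, 340)]

# (ltan:6): Pres requires et to contain st, so lt must be reset to the
# whole time-axis, otherwise no answer is ever found.
print(answer(previous_lt, [(690, 710)]))                # []  (wrong)
print(answer(WHOLE_AXIS, [(690, 710)]))                 # [(690, 710)]
\end{verbatim}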
\section{Expressing habituals} \label{hab_problems}
As noted in section \ref{pres_op}, habitual readings of
sentences are taken to involve habitual homonyms of verbs.
Habitual homonyms introduce different predicates than the
corresponding non-habitual ones. For example, \pref{habp:1} and
\pref{habp:3} would be expressed as \pref{habp:2} and \pref{habp:4}
respectively. Different predicates would be used in the two cases.
\begin{examps}
\item Last month BA737 (habitually) departed from gate 2. \label{habp:1}
\item $\ensuremath{\mathit{Part}}\xspace[month^c, mon^v, -1] \; \land$\\
$\ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, hab\_depart\_from(ba737, gate2)]]$
\label{habp:2}
\item Yesterday BA737 (actually) departed from gate 2. \label{habp:3}
\item $\ensuremath{\mathit{Part}}\xspace[day^c, y^v, -1] \; \land$\\
$\ensuremath{\mathit{At}}\xspace[y^v, \ensuremath{\mathit{Past}}\xspace[e^v, actl\_depart\_from(ba737, gate2)]]$
\label{habp:4}
\end{examps}
$hab\_depart\_from(ba737, gate2)$ is intended to hold at $et$s that
fall within periods where BA737 has the habit of departing from gate
2. If BA737 departed habitually from gate 2 throughout 1994,
$hab\_depart\_from(ba737, gate2)$ would be true at any $et$ that is a
subperiod of 1994. In contrast, $actl\_depart\_from(ba737, gate2)$ is
intended to hold only at $et$s where BA737 actually departs from gate
2. If departures are modelled as instantaneous (as in the airport
application), $actl\_depart\_from(ba737, gate2)$ is true only at
instantaneous $et$s where BA737 leaves gate 2.
One would expect that if BA737 had the habit of departing from gate 2
during some period, it would also have actually departed from gate 2
at least some times during that period:
if $hab\_depart\_from(ba737, gate2)$ is true at an $et$,
$actl\_depart\_from(ba737, gate2)$ would also be true at
some subperiods $et'$ of $et$. There is nothing in the
definition of \textsc{Top}\xspace, however, to guarantee that this implication
holds. The event times where $hab\_depart\_from(ba737, gate2)$ and
$actl\_depart\_from(ba737, gate2)$ hold are ultimately determined by
\ensuremath{\mathit{f_{pfuns}}}\xspace (that specifies the maximal
periods where the two predicates hold; see section
\ref{top_model}). There is no restriction in the definition of \textsc{Top}\xspace
to prohibit whoever defines \ensuremath{\mathit{f_{pfuns}}}\xspace from specifying that
$hab\_depart\_from(ba737, gate2)$ is true at some $et$ that does not
contain any $et'$ where $actl\_depart\_from(ba737, gate2)$ is true.
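There is, of course, nothing to stop whoever configures the \textsc{Nlitdb}\xspace
from checking that the maximal periods supplied for the two predicates respect
this expectation. The following minimal sketch (in Python; it is not part of
the prototype, and the periods are invented) shows the kind of check that
could be run:
\begin{verbatim}
# Illustrative sketch only: the maximal periods where the habitual and the
# actual predicate hold are supplied directly, as (min, max) pairs.

def subperiod(p, q):
    return q[0] <= p[0] and p[1] <= q[1]

def habit_backed_by_departures(hab_periods, actl_times):
    """True iff every maximal period where hab_depart_from(ba737, gate2)
    holds contains at least one actual departure of BA737 from gate 2."""
    return all(any(subperiod(t, hp) for t in actl_times)
               for hp in hab_periods)

hab_1994 = [(100, 199)]               # habit of departing from gate 2 in 1994
departures = [(120, 120), (150, 150)] # instantaneous actual departures
print(habit_backed_by_departures(hab_1994, departures))    # True
print(habit_backed_by_departures(hab_1994, [(250, 250)]))  # False
\end{verbatim}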
Another issue is how to represent \pref{habp:5}.
\pref{habp:5} cannot be represented as \pref{habp:6}. \pref{habp:6}
says that at 5:00pm on some day in the previous month BA737 had the
habit of departing. I have found no elegant solution to this
problem. \pref{habp:5} is mapped to \pref{habp:7}, where the constant
$\text{\textit{5:00pm}}$ is intended to denote a generic
representative of 5:00pm-periods. This generic representative is taken
to be an entity in the world.
\begin{examps}
\item Last month BA737 (habitually) departed at 5:00pm. \label{habp:5}
\item $\ensuremath{\mathit{Part}}\xspace[month^c, mon^v, -1] \land
\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \; \land$\\
$\ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, hab\_depart(ba737)]]]$ \label{habp:6}
\item $\ensuremath{\mathit{Part}}\xspace[month^c, mon^v, -1] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, hab\_depart\_time(ba737,
\text{\textit{5:00pm}})]]$ \label{habp:7}
\end{examps}
Unlike \pref{habp:5}, where \qit{at 5:00pm} introduces a
constant ($\text{\textit{5:00pm}}$) as a predicate-argument in \pref{habp:7},
the \qit{at 5:00pm} of \pref{habp:8} introduces
an \ensuremath{\mathit{At}}\xspace operator in \pref{habp:9}.
\begin{examps}
\item Yesterday BA737 (actually) departed at 5:00pm. \label{habp:8}
\item $\ensuremath{\mathit{Part}}\xspace[day^c, y^v, -1] \land
\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[y^v, \ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, actl\_depart(ba737)]]]$ \label{habp:9}
\end{examps}
The fact that \qit{at 5:00pm} is treated in such different ways in
the two cases is admittedly counter-intuitive, and it also complicates
the translation from English to \textsc{Top}\xspace (to be discussed in chapter
\ref{English_to_TOP}).
\section{Summary}
\textsc{Top}\xspace is a formal language, used to represent the meanings of the
English questions that are submitted to the \textsc{Nlitdb}\xspace. The denotation
with respect to $st$ of a \textsc{Top}\xspace formula specifies what the answer to
the corresponding English question should report ($st$ is the
time-point where the question is submitted to the \textsc{Nlitdb}\xspace). The
denotations with respect to $st$ of \textsc{Top}\xspace formulae are defined in
terms of the denotations of \textsc{Top}\xspace formulae with respect to $st$, $et$,
and $lt$. $et$ (event time) is a time period where the situation
described by the formula holds, and $lt$ (localisation time) is a
temporal window within which $et$ must be placed.
Temporal linguistic mechanisms are expressed in \textsc{Top}\xspace using temporal
operators that manipulate $st$, $et$, and $lt$. There are thirteen
operators in total. \ensuremath{\mathit{Part}}\xspace picks a period from a
partitioning. \ensuremath{\mathit{Pres}}\xspace and \ensuremath{\mathit{Past}}\xspace are used when
expressing present and past tenses. \ensuremath{\mathit{Perf}}\xspace is
used in combination with \ensuremath{\mathit{Past}}\xspace to express the past
perfect. \ensuremath{\mathit{Culm}}\xspace is used to represent non-progressive forms
of verbs whose base forms are culminating activities. \ensuremath{\mathit{At}}\xspace,
\ensuremath{\mathit{Before}}\xspace, and \ensuremath{\mathit{After}}\xspace are employed when expressing punctual
and period adverbials, and when expressing \qit{while~\dots},
\qit{before~\dots}, and \qit{after~\dots} subordinate
clauses. Duration \qit{in~\dots} and \qit{for~\dots} adverbials are
expressed using \ensuremath{\mathit{For}}\xspace. \ensuremath{\mathit{Fills}}\xspace can be used to
represent readings of sentences where the situation of the verb
covers the whole localisation time; \ensuremath{\mathit{Fills}}\xspace, however,
is not used in the rest of this thesis, nor in the prototype
\textsc{Nlitdb}\xspace. \ensuremath{\mathit{Begin}}\xspace and \ensuremath{\mathit{End}}\xspace are used to
refer to time-points where situations start or stop. Finally, \ensuremath{\mathit{Ntense}}\xspace
allows noun phrases to refer either to $st$ or to the time of the
verb's tense.
\chapter{From English to TOP} \label{English_to_TOP}
\proverb{One step at a time.}
\section{Introduction}
This chapter shows how \textsc{Hpsg}\xspace \cite{Pollard1} \cite{Pollard2} was
modified to map English questions directed to a \textsc{Nlitdb}\xspace to appropriate
\textsc{Top}\xspace formulae.\footnote{The \textsc{Hpsg}\xspace version of this thesis is based on
the revised \textsc{Hpsg}\xspace version of chapter 9 of \cite{Pollard2}.} Although
several modifications to \textsc{Hpsg}\xspace were introduced, the \textsc{Hpsg}\xspace version of
this thesis remains very close to \cite{Pollard2}. The main
differences from \cite{Pollard2} are that: (a) \textsc{Hpsg}\xspace mechanisms for
phenomena not examined in this thesis (e.g.\ pronouns, relative
clauses) were removed, and (b) the situation-theoretic semantic
constructs of \textsc{Hpsg}\xspace were replaced by feature structures that represent
\textsc{Top}\xspace expressions.
Readers with a rudimentary grasp of modern unification-based grammars
\cite{Shieber} should be able to follow most of the discussion in this
chapter. Some of the details, however, may be unclear to readers not
familiar with \textsc{Hpsg}\xspace. The \textsc{Hpsg}\xspace version of this thesis was implemented
as a grammar for the \textsc{Ale} system (see chapter
\ref{implementation}).
\section{HPSG basics} \label{HPSG_basics}
In \textsc{Hpsg}\xspace, each word and syntactic constituent is mapped to a
\emph{sign}, a feature structure of a particular form, that provides
information about the word or syntactic constituent. An \textsc{Hpsg}\xspace grammar
consists of signs for words (I call these \emph{lexical signs}),
\emph{lexical rules}, \emph{schemata}, \emph{principles}, and a
\emph{sort hierarchy}, all discussed below.
\subsection{Lexical signs and sort hierarchy}
Lexical signs provide information about individual words. (Words with
multiple uses may receive more than one lexical sign.)
\pref{lentr:1} shows a lexical sign for the base form of \qit{to land}
in the airport domain.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval land\>} \\
synsem & [loc & [cat & [head & \osort{verb}{
[vform & bse \\
aux & $-$ ]} \\
aspect & culmact \\
spr & \<\> \\
subj & \< \feat np[-prd]$_{@1}$ \> \\
comps & \<
\feat pp[-prd, pform {\fval on}]$_{@2}$
\> ]\\
cont & \sort{landing\_on}{
[arg1 & occr\_var \\
arg2 & @1 \\
arg3 & @2]} ]]]
\end{avm}
\label{lentr:1}
\end{examps}
The $<$ and $>$ delimiters denote lists. The {\feat phon} feature
shows the list of words to which the sign corresponds
(\pref{lentr:1} corresponds to the single word \qit{land}). Apart from
{\feat phon}, every sign has a {\feat synsem} feature (as well as
other features not shown in \pref{lentr:1}; I often omit features that
are not relevant to the discussion). The value of {\feat synsem} in
\pref{lentr:1} is a feature structure that has a feature {\feat loc}.
The value of {\feat loc} is in turn a feature
structure that has the features {\feat cat} (intuitively, syntactic
category) and {\feat cont} (intuitively, semantic content).
Each \textsc{Hpsg}\xspace feature structure belongs to a particular sort. The sort
hierarchy of \textsc{Hpsg}\xspace shows the available sorts, as well as which sort is
a subsort of which other sort. It also specifies which features the
members of each sort must have, and the sorts to which the values of
these features must belong. (Some modifications were made to the sort
hierarchy of \cite{Pollard2}. These will be discussed in sections
\ref{TOP_FS} and \ref{more_ind}.) In \pref{lentr:1}, for example, the
value of {\feat head} is a feature structure of sort {\srt verb}. The
value of {\feat head} signals that the word is the base form ({\feat
vform} {\fval bse}) of a non-auxiliary ({\feat aux}~{\fval $-$})
verb. The sort hierarchy of \cite{Pollard2} specifies that the value
of {\feat head} must be of sort {\srt head}, and that {\srt verb}\/ is
a subsort of {\srt head}. This allows feature structures of sort {\srt
verb}\/ to be used as values of {\feat head}. The value of {\feat
vform} in \pref{lentr:1} is an \emph{atomic feature structure} (a
feature structure of no features) of sort {\srt bse}. For simplicity,
when showing feature structures I often omit uninteresting sort names.
{\feat aspect} \index{aspect@{\feat aspect} (\textsc{Hpsg}\xspace feature)} is
the only new \textsc{Hpsg}\xspace feature of this thesis. It is a feature of feature
structures of sort {\srt cat}\/ (feature structures that can be used
as values of {\feat cat}), and its values are feature structures of
sort {\srt aspect}. {\srt aspect}\/ contains only atomic feature
structures, and has the subsorts: {\srt state},
\index{state@{\srt state}\/ (\textsc{Hpsg}\xspace sort, state aspectual class)}
{\srt activity},
\index{activity@{\srt activity}\/ (\textsc{Hpsg}\xspace sort, activity aspectual class)}
{\srt culmact}\/
\index{culmact@{\srt culmact}\/ (\textsc{Hpsg}\xspace sort, culminating activity)}
(culminating activity), and {\srt point}.
\index{point@{\srt point}\/ (\textsc{Hpsg}\xspace sort, point aspectual class)}
{\srt state}\/ is in turn partitioned into: {\srt lex\_state}\/
\index{lexstate@{\srt lex\_state}\/ (\textsc{Hpsg}\xspace sort, lexical state)}
(lexical state), {\srt progressive}\/
\index{progressive@{\srt progressive}\/ (\textsc{Hpsg}\xspace sort, progressive state)}
(progressive state), and {\srt cnsq\_state}\/
\index{cnsqstate@{\srt cnsq\_state}\/ (\textsc{Hpsg}\xspace sort, consequent state)}
(consequent state). This agrees with the aspectual
taxonomy of chapter \ref{linguistic_data}. Following table
\vref{airport_verbs}, \pref{lentr:1} classifies the base form of
\qit{to land} as culminating activity.
The {\feat spr}, {\feat subj}, and {\feat comps} features of
\pref{lentr:1} provide information about the specifier, subject, and
complements with which the verb has to combine. Specifiers are
determiners (e.g.\ \qit{a}, \qit{the}), and words like \qit{much} (as
in \qit{much more}) and \qit{too} (as in \qit{too late}). Verbs do not
admit specifiers, and hence the value of {\feat spr} in \pref{lentr:1}
is the empty list.
The {\feat subj} value of \pref{lentr:1} means that the verb requires
a noun-phrase as its subject. The {\feat np[-prd]$_{\avmbox{1}}$} in
\pref{lentr:1} has the same meaning as in \cite{Pollard2}. Roughly
speaking, it is an abbreviation for a sign that corresponds to a noun
phrase. The {\feat -prd} means that the noun phrase must be
non-predicative (see section \ref{hpsg:nouns} below). The \avmbox{1}
is intuitively a pointer to the world entity described by the noun
phrase. Similarly, the {\feat comps} value of \pref{lentr:1} means
that the verb requires as its complement a non-predicative
prepositional phrase (section \ref{hpsg:pps} below), introduced by
\qit{on}. The \avmbox{2} is intuitively a pointer to the world entity
of the prepositional phrase (e.g.\ if the prepositional phrase is
\qit{on a runway}, the \avmbox{2} is a pointer to the runway).
The value of {\feat cont} in \pref{lentr:1} represents the \textsc{Top}\xspace
predicate $landing\_on(\beta, \tau_1, \tau_2)$, where $\tau_1$ and
$\tau_2$ are \textsc{Top}\xspace terms corresponding to \avmbox{1} and \avmbox{2},
and $\beta$ is a \textsc{Top}\xspace variable acting as an occurrence identifier
(section \ref{occurrence_ids}).\footnote{I follow the approach of
section 8.5.1 of \cite{Pollard2}, whereby the {\feat relation}
feature is dropped, and its role is taken up by the sort of the
feature structure.} The exact relation between \textsc{Hpsg}\xspace feature
structures and \textsc{Top}\xspace expressions will be discussed in the following sections.
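As an informal illustration only (the prototype is an \textsc{Ale} grammar,
and the rendering below is merely a rough Python transcription of
\pref{lentr:1}), the sign can be pictured as a nested structure in which the
indices \avmbox{1} and \avmbox{2} correspond to shared sub-structures:
\begin{verbatim}
# Illustrative sketch only: a rough Python transcription of (lentr:1);
# index_1 and index_2 play the role of the boxed indices.

index_1, index_2 = {"sort": "ind"}, {"sort": "ind"}

land_sign = {
    "phon": ["land"],
    "synsem": {"loc": {
        "cat": {
            "head":   {"sort": "verb", "vform": "bse", "aux": "-"},
            "aspect": "culmact",
            "spr":    [],
            "subj":   [{"np": {"prd": "-", "index": index_1}}],
            "comps":  [{"pp": {"prd": "-", "pform": "on", "index": index_2}}],
        },
        # CONT represents the TOP predicate landing_on(beta, tau1, tau2).
        "cont": {"sort": "landing_on",
                 "arg1": {"sort": "occr_var"},
                 "arg2": index_1,
                 "arg3": index_2},
    }},
}

# Structure sharing: the subject's index is the very same object as the
# predicate's second argument.
print(land_sign["synsem"]["loc"]["cont"]["arg2"]
      is land_sign["synsem"]["loc"]["cat"]["subj"][0]["np"]["index"])   # True
\end{verbatim}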
\subsection{Lexical rules}
Lexical rules generate new lexical signs from existing ones. In section
\ref{hpsg:verb_forms}, for example, I introduce lexical rules that
generate automatically lexical signs for (single-word) non-base
verb forms (e.g.\ a sign for the simple past \qit{landed}) from signs
for base forms (e.g.\ \pref{lentr:1}). This reduces the number of
lexical signs that need to be listed in the grammar.
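As a rough illustration only (the actual rules, and what they do to the
semantics, are presented in section \ref{hpsg:verb_forms}), a lexical rule can
be thought of as a function from signs to signs. The sketch below, in Python
and over a heavily abbreviated sign, merely installs the inflected form and a
finite {\feat vform} value:
\begin{verbatim}
# Illustrative sketch only: a lexical rule viewed as a function from signs
# to signs, over a heavily abbreviated sign. The real rules also adjust
# the semantics, which is not shown here.
import copy

base_land = {"phon": ["land"], "head": {"vform": "bse", "aux": "-"}}

def simple_past_lexical_rule(sign, past_form):
    new_sign = copy.deepcopy(sign)
    new_sign["phon"] = [past_form]
    new_sign["head"]["vform"] = "fin"
    return new_sign

print(simple_past_lexical_rule(base_land, "landed"))
# {'phon': ['landed'], 'head': {'vform': 'fin', 'aux': '-'}}
\end{verbatim}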
\subsection{Schemata and principles} \label{schemata_principles}
\textsc{Hpsg}\xspace schemata specify basic patterns that are used when words or
syntactic constituents combine to form larger constituents. For
example, the \emph{head-complement schema} is the pattern that is used
when a verb combines with its complements (e.g.\ when \qit{landed}
combines with its complement \qit{on runway 2}; in this case, the verb
is the ``head-daughter'' of the constituent \qit{landed on runway 2}).
The \emph{head-subject schema} is the one used when a verb phrase (a
verb that has combined with its complements but not its subject)
combines with its subject (e.g.\ when \qit{landed on runway 2}
combines with \qit{BA737}; in this case, the verb phrase is the
head-daughter of \qit{BA737 landed on runway 2}). No modifications to
the schemata of \cite{Pollard2} are introduced in this thesis, and
hence schemata will not be discussed further.
\textsc{Hpsg}\xspace principles control the propagation of feature values from the
signs of words or syntactic constituents to the signs of their
super-constituents. The \emph{head feature principle}, for example,
specifies that the sign of the super-constituent inherits the {\feat
head} value of the head-daughter's sign. This causes the sign of
\qit{landed on runway 2} to inherit the {\feat head} value of the sign
of \qit{landed}, and the same value to be inherited by the sign of
\qit{BA737 landed on runway 2}. This thesis uses simplified versions
of Pollard and Sag's semantics principle and constituent ordering
principle (to be discussed in sections \ref{non_pred_nps} and
\ref{fronted}), and introduces one new principle (the aspect principle,
to be discussed in section \ref{hpsg:punc_adv}). All other principles
are as in \cite{Pollard2}.
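As an informal caricature of the head feature principle (in Python; this is
neither the declarative constraint of \cite{Pollard2} nor the way the
principle is encoded in the \textsc{Ale} grammar), the mother sign simply
shares the {\feat head} value of its head-daughter:
\begin{verbatim}
# Illustrative sketch only: the head feature principle as value sharing --
# the mother sign inherits (shares) the HEAD value of its head-daughter.

def apply_head_feature_principle(head_daughter, mother):
    mother["head"] = head_daughter["head"]   # structure sharing
    return mother

landed = {"phon": ["landed"], "head": {"sort": "verb", "vform": "fin"}}
vp = apply_head_feature_principle(
    landed, {"phon": ["landed", "on", "runway", "2"]})
s = apply_head_feature_principle(
    vp, {"phon": ["BA737", "landed", "on", "runway", "2"]})
print(s["head"] is landed["head"])   # True: the same HEAD value all the way up
\end{verbatim}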
\section{Representing TOP yes/no formulae in HPSG} \label{TOP_FS}
According to \cite{Pollard2}, the {\feat cont} value of \pref{lentr:1}
should actually be \pref{lentr:2}.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
\sort{psoa}{
[quants & \<\> \\
nucleus & \sort{landing\_on}{
[arg1 & occr\_var \\
arg2 & @1 \\
arg3 & @2]} ]}
\end{avm}
\label{lentr:2}
\end{examps}
In \cite{Pollard2}, feature structures of sort {\srt psoa} have two
features: {\feat quants} and {\feat nucleus}.\footnote{``{\srt Psoa}''
stands for ``parameterised state of affairs'', a term from situation
theory \cite{Cooper1990}. The semantic analysis here is not
situation-theoretic, but the term ``psoa'' is still used for
compatibility with \cite{Pollard2}.} {\feat quants}, which is part
of \textsc{Hpsg}\xspace's quantifier storage mechanism, is not used in this thesis.
This leaves only one feature ({\feat nucleus}) in {\srt psoa}s. For
simplicity, {\feat nucleus} was also dropped, and the {\srt psoa}\/
sort was taken to contain the feature structures that would be values
of {\feat nucleus} in \cite{Pollard2}.
More precisely, in this thesis {\srt psoa}\/ has two subsorts: {\srt
predicate}\/
\index{predicate@{\srt predicate}\/ (\textsc{Hpsg}\xspace sort, represents \textsc{Top}\xspace predicates)}
and {\srt operator}\/
\index{operator@{\srt operator}\/ (\textsc{Hpsg}\xspace sort, represents \textsc{Top}\xspace operators)}
(figure \ref{psoa_fig}). {\srt predicate}\/ contains feature
structures that represent \textsc{Top}\xspace predicates, while {\srt operator}\/
contains feature structures that represent all other \textsc{Top}\xspace yes/no
formulae. (Hence, {\srt psoa}\/ corresponds to all yes/no formulae.)
{\srt predicate}\/ has domain-specific subsorts, corresponding to
predicate functors used in the domain for which the \textsc{Nlitdb}\xspace is
configured. In the airport domain, for example, {\srt landing\_on}\/
is a subsort of {\srt predicate}. The feature structures in the
subsorts of {\srt predicate}\/ have features named {\feat arg1},
{\feat arg2}, {\feat arg3},
\index{arg123@{\feat arg1}, {\feat 2}, {\feat 3} (new
\textsc{Hpsg}\xspace features, correspond to \textsc{Top}\xspace predicate arguments)}
etc. These represent the first, second,
third, etc.\ arguments of the predicates. The values of {\feat arg1},
{\feat arg2}, etc.\ are of sort {\srt ind}\/ ({\srt occr\_var}\/
\index{occrvar@{\srt occr\_var}\/ (\textsc{Hpsg}\xspace sort, represents occurrence identifiers)}
is a subsort of {\srt ind}). {\srt ind}\/ will be discussed further below.
\begin{figure}
\avmoptions{}
\setlength{\GapWidth}{5mm}
\hrule
\begin{center}
\begin{bundle}{{\srt psoa}}
\chunk{
\begin{bundle}{{\srt predicate}}
\chunk{\begin{avm}
\sort{circling}{
\[arg1 & ind\]}
\end{avm}}
\chunk{\begin{avm}
\osort{landing\_on}{
\[arg1 & occr\_var \\
arg2 & ind \\
arg3 & ind\]}
\end{avm}}
\chunk{\dots}
\end{bundle}}
\chunk{
\begin{bundle}{{\srt operator}}
\chunk{\begin{avm}
\osort{pres}{
\[main\_psoa & psoa\]}
\end{avm}}
\chunk{\dots}
\end{bundle}}
\end{bundle}
\caption{Subsorts of {\srt psoa}}
\label{psoa_fig}
\index{pres2@{\srt pres}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Pres}}\xspace)}
\index{past2@{\srt past}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Past}}\xspace)}
\index{perf2@{\srt perf}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Perf}}\xspace)}
\index{atop2@{\srt at\_op}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{At}}\xspace)}
\index{beforeop2@{\srt before\_op} (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Before}}\xspace)}
\index{afterop2@{\srt after\_op} (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{After}}\xspace)}
\index{part2@{\srt part}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Part}}\xspace)}
\index{culm2@{\srt culm}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Culm}}\xspace)}
\index{end2@{\srt end}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{End}}\xspace)}
\index{begin2@{\srt begin}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Begin}}\xspace)}
\index{and2@{\srt and}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's conjunction)}
\index{ntense2@{\srt ntense}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{Ntense}}\xspace)}
\index{forop2@{\srt for\_op}\/ (\textsc{Hpsg}\xspace sort, corresponds to \textsc{Top}\xspace's \ensuremath{\mathit{For}}\xspace)}
\index{mainpsoa@{\feat main\_psoa} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\index{ethandle@{\feat et\_handle} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\index{timespec@{\feat time\_spec} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\index{partng@{\feat partng} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\index{partvar@{\feat part\_var} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\index{conjunct12@{\feat conjunct1}, {\feat 2} (\textsc{Hpsg}\xspace features, used in the representation of \textsc{Top}\xspace formulae)}
\index{durunit@{\feat dur\_unit} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\index{duration@{\feat duration} (\textsc{Hpsg}\xspace feature, used in the representation of \textsc{Top}\xspace formulae)}
\end{center}
\hrule
\end{figure}
The {\srt operator}\/ sort has thirteen subsorts, shown in figure
\ref{operator_sorts}. These correspond to the twelve \textsc{Top}\xspace operators
(\ensuremath{\mathit{Fills}}\xspace is ignored), plus one sort for conjunction.\footnote{The sorts
that correspond to the \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, \ensuremath{\mathit{After}}\xspace, and \ensuremath{\mathit{For}}\xspace operators are
called {\srt at\_op}, {\srt before\_op}, {\srt after\_op}, and {\srt
for\_op}\/ to avoid name clashes with existing \textsc{Hpsg}\xspace sorts.} The
order of the features in figure \ref{operator_sorts} corresponds to
the order of the arguments of the \textsc{Top}\xspace operators. For example, the
{\feat et\_handle} and {\feat main\_psoa} features of the {\srt
past}\/ sort correspond to the first and second arguments
respectively of \textsc{Top}\xspace's $\ensuremath{\mathit{Past}}\xspace[\beta, \phi]$. For simplicity, in the
rest of this thesis I drop the $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta, \nu_{ord}]$
version of \ensuremath{\mathit{Part}}\xspace (section \ref{denotation}), and I represent words
like \qit{yesterday} using \textsc{Top}\xspace constants (e.g.\ $yesterday$) rather
than expressions like $\ensuremath{\mathit{Part}}\xspace[day^c, \beta, -1]$. This is why there
is no sort for $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta, \nu_{ord}]$ in figure
\ref{operator_sorts}.
\begin{figure}
\avmoptions{active}
\hrule
\medskip
\hspace*{9mm}
\begin{tabular}{lll}
\begin{avm}
\osort{pres}{
[main\_psoa & psoa]}
\end{avm}
&&
\hspace*{7mm}
\begin{avm}
\osort{culm}{
[main\_psoa & predicate]}
\end{avm}
\\
\begin{avm}
\osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & psoa]}
\end{avm}
&&
\hspace*{7mm}
\begin{avm}
\osort{and}{
[conjunct1 & psoa \\
conjunct2 & psoa]}
\end{avm}
\\
\begin{avm}
\osort{perf}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & psoa]}
\end{avm}
&&
\hspace*{7mm}
\begin{avm}
\osort{begin}{
[main\_psoa & psoa]}
\end{avm}
\\
\begin{avm}
\osort{at\_op}{
[time\_spec & temp\_ent $\lor$ psoa \\
main\_psoa & psoa]}
\end{avm}
&&
\hspace*{7mm}
\begin{avm}
\osort{end}{
[main\_psoa & psoa]}
\end{avm}
\\
\begin{avm}
\osort{before\_op}{
[time\_spec & temp\_ent $\lor$ psoa \\
main\_psoa & psoa]}
\end{avm}
&&
\begin{avm}
\osort{ntense}{
[et\_handle & now $\lor$ \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & psoa]}
\end{avm}
\\
\begin{avm}
\osort{after\_op}{
[time\_spec & temp\_ent $\lor$ psoa \\
main\_psoa & psoa]}
\end{avm}
&&
\begin{avm}
\osort{for\_op}{
[dur\_unit & compl\_partng \\
duration & \sort{sem\_num}{
[tvar $-$]} \\
main\_psoa & psoa]}
\end{avm}
\\
\multicolumn{3}{l}{\avmoptions{}\begin{avm}
\osort{part}{
\[partng & compl\_partng $\lor$ \sort{gappy\_partng}{
\[tvar $-$ \]} \\
part\_var & \sort{temp\_ent}{
\[tvar $+$ \]} \]}
\end{avm}
}
\end{tabular}
\caption{Subsorts of {\srt operator}}
\label{operator_sorts}
\medskip
\hrule
\end{figure}
In \cite{Pollard2}, feature structures of sort {\srt ind}\/ (called
indices) have the features {\srt person}, {\srt number}, and {\srt
gender}, which are used to enforce person, number, and gender
agreement. For simplicity, these features are ignored here, and no
agreement checks are made. Pollard and Sag's subsorts of {\srt ind}\/
({\srt ref}, {\srt there}, {\srt it}\/), which are used in \textsc{Hpsg}\xspace's
binding theory, are also ignored here. In this thesis, indices
represent \textsc{Top}\xspace terms (they also represent gappy partitioning names,
but let us ignore this temporarily). The situation is, roughly
speaking, as in figure \ref{simple_ind_hierarchy}. For each \textsc{Top}\xspace
constant (e.g.\ $ba737$, $gate2$), there is a subsort of {\srt ind}\/
that represents that constant. There is also a subsort {\srt var}\/ of
{\srt ind}, whose indices represent \textsc{Top}\xspace variables. A {\feat tvar}
\index{tvar@{\feat tvar} (\textsc{Hpsg}\xspace feature, shows if an index represents a \textsc{Top}\xspace variable)}
feature is used to distinguish indices that represent constants from
indices that represent variables. All indices of constant-representing
sorts (e.g.\ {\srt ba737}, {\srt uk160}\/) have their {\feat tvar} set
to $-$. Indices of {\srt var}\/ have their {\feat tvar} set to $+$.
\begin{figure}
\avmoptions{}
\hrule
\begin{center}
\begin{bundle}{{\srt ind}}
\chunk{\begin{avm}
\sort{ba737}{
\[tvar & $-$\]}
\end{avm}}
\chunk{\begin{avm}
\sort{uk160}{
\[tvar & $-$\]}
\end{avm}}
\chunk{\begin{avm}
\sort{gate2}{
\[tvar & $-$\]}
\end{avm}}
\chunk{\dots}
\chunk{\begin{avm}
\sort{var}{
\[tvar & $+$\]}
\end{avm}}
\end{bundle}
\caption{{\srt ind} and its subsorts -- simplified version}
\label{simple_ind_hierarchy}
\end{center}
\hrule
\end{figure}
The fact that there is only one subsort ({\srt var}\/) for \textsc{Top}\xspace
variables in figure \ref{simple_ind_hierarchy} does not mean that only
one \textsc{Top}\xspace variable can be represented. {\srt var} is a \emph{sort} of
feature structures, containing infinitely many feature-structure
members. Although the members of {\srt var} cannot be distinguished by
their feature values (they all have {\feat tvar} set to $+$), they are
still different; i.e.\ they are ``structurally identical'' but not
``token-identical'' (see chapter 1 of \cite{Pollard2}). Each one of
the feature-structure members of {\srt var}\/ represents a different
\textsc{Top}\xspace variable. The subsorts that correspond to \textsc{Top}\xspace constants also
contain infinitely many different feature-structure members. In this
case, however, all members of the same subsort are taken to represent
the same constant. For example, any feature structure of sort {\srt
gate2} represents the \textsc{Top}\xspace constant $gate2$.
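Informally, the distinction between constant-representing sorts and the
{\srt var}\/ sort can be pictured as a small class hierarchy. The Python
sketch below is my own illustration (none of the class or attribute names
belong to the prototype \textsc{Nlitdb}\xspace): sorts differ only in their {\feat tvar}
value, distinct instances of {\srt var}\/ stand for distinct \textsc{Top}\xspace
variables, and distinct instances of a constant-representing sort all stand
for the same constant.
\begin{verbatim}
# Illustrative sketch only; class and attribute names are mine, not the
# prototype's.  Sorts become classes, tvar becomes a class attribute.
class Ind:
    tvar = None          # "-" for constants, "+" for TOP variables

class Gate2(Ind):
    tvar = "-"           # every instance stands for the TOP constant gate2

class Ba737(Ind):
    tvar = "-"           # every instance stands for the TOP constant ba737

class Var(Ind):
    tvar = "+"           # each instance stands for a *different* TOP variable

v1, v2 = Var(), Var()    # structurally identical ...
assert v1 is not v2      # ... but not token-identical: two TOP variables

g1, g2 = Gate2(), Gate2()
assert type(g1) is type(g2)   # same sort, hence the same TOP constant
\end{verbatim}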
\section{More on the subsorts of ind} \label{more_ind}
The subsorts of {\srt ind}\/ are actually more complicated than in
figure \ref{simple_ind_hierarchy}. Natural language front-ends (e.g.\
\textsc{Masque} \cite{Auxerre2}, \textsc{Team} \cite{Grosz1987},
\textsc{Cle} \cite{Alshawi}, \textsc{SystemX} \cite{Cercone1993})
often employ a domain-dependent hierarchy of types of world entities.
This hierarchy is typically used in disambiguation, and to detect
semantically anomalous sentences like \qit{Gate 2 departed from runway
1}. Here, a hierarchy of this kind is mounted under the {\srt ind}\/
sort. (Examples illustrating the use of this hierarchy are given in
the following sections.)
In the airport domain, there are temporal world entities (the Monday
16/10/95, the year 1995, etc.), and non-temporal world entities
(flight BA737, gate 2, etc.). Indices representing temporal entities
are classified into a {\srt temp\_ent}\/
\index{tempent@{\srt temp\_ent}\/ (\textsc{Hpsg}\xspace sort, represents temporal
entities)}
subsort of {\srt ind}, while indices representing non-temporal
entities are classified into {\srt non\_temp\_ent}\/
\index{nontempent@{\srt non\_temp\_ent} (\textsc{Hpsg}\xspace sort, represents
non-temporal entities)}
(see figure \ref{ind_hierarchy}; ignore {\srt partng}\/ and its
subsorts for the moment). {\srt non\_temp\_ent}\/ has in turn
subsorts like {\srt mass\_ent}\/ (indices representing mass entities, e.g.\
foam or water), {\srt flight\_ent}\/ (indices representing flights,
e.g.\ BA737), etc. {\srt flight\_ent}\/ has one subsort for each
\textsc{Top}\xspace constant that denotes a flight (e.g.\ {\srt ba737}, {\srt
uk160}\/), plus one sort ({\srt flight\_ent\_var}\/) whose indices
represent \textsc{Top}\xspace variables that denote flights. The other
children-sorts of {\srt non\_temp\_ent}\/ have similar subsorts.
\begin{figure}
\begin{center}
\includegraphics[scale=.6]{handle_sorts}
\caption{{\srt partng}, {\srt ind}, and their subsorts}
\label{ind_hierarchy}
\end{center}
\end{figure}
{\srt temp\_ent}\/ has subsorts like {\srt minute\_ent}\/ (indices
representing particular minutes, e.g.\ the 5:00pm minute of 1/1/91),
{\srt day\_ent}\/ (indices representing particular days), etc. {\srt
minute\_ent}\/ has one subsort for each \textsc{Top}\xspace constant that denotes
a particular minute (e.g.\ {\srt 5:00pm1/1/91}\/), plus one sort
({\srt minute\_ent\_var}\/) whose indices represent \textsc{Top}\xspace variables
that denote particular minutes. The other children-sorts of {\srt
temp\_ent}\/ have similar subsorts.
The indices of {\srt other\_temp\_ent\_var}\/
\index{othertempentvar@{\srt other\_temp\_ent\_var}\/ (\textsc{Hpsg}\xspace sort,
represents some time-denoting \textsc{Top}\xspace variables)}
(figure \ref{ind_hierarchy}) represent \textsc{Top}\xspace variables that denote
temporal entities that do not correspond to sister-sorts of {\srt
other\_temp\_ent\_var}\/ ({\srt minute\_ent}, {\srt day\_ent},
etc.). This is needed because not all \textsc{Top}\xspace variables denote
particular minutes, days, months, etc. In \pref{lentr:3}, for example,
$e^v$ denotes a past period that covers exactly a taxiing of UK160 to
gate 1 (from start to completion). The taxiing may have started at
5:00pm on 1/1/95, and it may have been completed at 5:05pm on the same
day. In that case, $e^v$ denotes a period that is neither a
minute-period, nor a day-period, nor a month-period, etc.
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[taxiing\_to(occr^v, uk160, gate1)]]$ \label{lentr:3}
\end{examps}
{\srt occr\_var}\/ contains indices that represent \textsc{Top}\xspace
variables used as occurrence identifiers (section
\ref{occurrence_ids}).
Indices of sorts that represent \textsc{Top}\xspace constants (e.g.\ {\srt foam},
{\srt 5:00pm1/1/91}, {\srt Jun93}\/ in figure \ref{ind_hierarchy}) have
their {\feat tvar}\/
\index{tvar@{\feat tvar} (\textsc{Hpsg}\xspace feature, shows if an index represents a \textsc{Top}\xspace variable)}
set to $-$. Indices of sorts that represent \textsc{Top}\xspace
variables (e.g.\ {\srt flight\_ent\_var}, {\srt minute\_ent\_var},
{\srt other\_temp\_ent\_var}, {\srt occr\_var}\/) have their {\feat
tvar}\/ set to $+$. There is also a special sort {\srt now}\/ (not
shown in figure \ref{ind_hierarchy}) that is used to represent the
\textsc{Top}\xspace expression $now^*$
\index{now2@{\srt now}\/ (\textsc{Hpsg}\xspace sort, represents \textsc{Top}\xspace's $now^*$)}
(section \ref{ntense_op}).
The sorts of figure \ref{operator_sorts} mirror the definitions of
\textsc{Top}\xspace's operators. For example, the {\srt ntense}\/ sort reflects the
fact that the first argument of an \ensuremath{\mathit{Ntense}}\xspace operator must be $now^*$
or a variable ({\feat tvar}~$+$) denoting a period ({\srt temp\_ent}),
while the second argument must be a yes/no formula ({\srt psoa}). (The
{\srt sem\_num}\/
\index{semnum@{\srt sem\_num}\/ (\textsc{Hpsg}\xspace sort, represents numbers)}
sort in {\srt for\_op} is a child-sort of {\srt
non\_temp\_ent}, with subsorts that represent the numbers 1, 2, 3,
etc. The {\srt compl\_partng}\/ and {\srt gappy\_partng}\/ sorts in
{\srt for\_op}\/ and {\srt part}\/ are discussed below.)
The hierarchy under {\srt ind}\/ is domain-dependent. For example, in
an application where the database contains information about a
company, the subsorts of {\srt non\_temp\_ent}\/ would correspond to
departments, managers, etc.\ I assume, however, that in all
application domains, {\srt ind} would have the children-sorts {\srt
temp\_ent}, {\srt non\_temp\_ent}, {\srt occr\_var}, {\srt
gappy\_partng} (to be discussed below), and possibly more. I also
assume that the subsorts of {\srt partng} (see below) and {\srt
temp\_ent}\/ would have the general form of figure
\ref{ind_hierarchy}, though they would have to be adjusted to reflect
the partitionings and temporal entities used in the particular
application.
I now turn to the {\srt partng}\/
\index{partng2@{\srt partng}\/ (\textsc{Hpsg}\xspace sort, represents \textsc{Top}\xspace partitioning names)}
sort of figure \ref{ind_hierarchy},
which has the subsorts {\srt compl\_partng}\/
\index{complpartng@{\srt compl\_partng}\/ (\textsc{Hpsg}\xspace sort, represents
\textsc{Top}\xspace complete partitioning names)}
and {\srt gappy\_partng}\/
\index{gappypartng@{\srt gappy\_partng}\/ (\textsc{Hpsg}\xspace sort, represents
\textsc{Top}\xspace gappy part.\ names and some terms)}
(these three sorts do not exist in
\cite{Pollard2}). For each \textsc{Top}\xspace complete or gappy partitioning name
(e.g.\ $minute^c$, $day^c$, $\text{\textit{5:00pm}}^g$, $monday^g$)
there is a leaf-subsort of {\srt compl\_partng}\/ or {\srt
gappy\_partng}\/ respectively that represents that name. (The
leaf-subsorts of {\srt gappy\_partng} are also used to represent some
\textsc{Top}\xspace terms; this is discussed below.) In figure \ref{ind_hierarchy},
the sorts {\srt 5:00pm}, {\srt 9:00am}, etc.\ are grouped under {\srt
minute\_gappy}\/ to reflect the fact that the corresponding
partitionings contain minute-periods. (I assume here that these
partitioning names denote the obvious partitionings.) Similarly, {\srt
monday}, {\srt tuesday}, etc.\ are grouped under {\srt day\_gappy}\/
to reflect the fact that the corresponding partitionings contain
day-periods. Section \ref{habituals} provides examples where sorts
like {\srt minute\_gappy} and {\srt day\_gappy} prove useful.
Apart from gappy partitioning names, the subsorts of {\srt
gappy\_partng}\/ are also used to represent \textsc{Top}\xspace terms that denote
generic representatives of partitionings (section \ref{hab_problems}).
(To allow the subsorts of {\srt gappy\_partng}\/ to represent \textsc{Top}\xspace
terms, {\srt gappy\_partng}\/ is not only a subsort of {\srt partng},
but also of {\srt ind}; see figure \ref{ind_hierarchy}.) For example,
\pref{lentr:6} (that expresses the habitual reading of \pref{lentr:5})
is represented as \pref{lentr:7}. In this case, the subsort {\srt
5:00pm}\/ of {\srt gappy\_partng}\/ represents the \textsc{Top}\xspace constant
\textit{5:00pm}.
\avmoptions{active}
\begin{examps}
\item BA737 departs (habitually) at 5:00pm \label{lentr:5}
\item $\ensuremath{\mathit{Pres}}\xspace[hab\_departs\_at(ba737, \text{\textit{5:00pm}})]$ \label{lentr:6}
\item \begin{avm}
\osort{pres}{
[main\_psoa & \sort{hab\_departs\_at}{
[arg1 & \sort{ba737}{
[tvar $-$]} \\
arg2 & \sort{5:00pm}{
[tvar $-$]}]}]}
\end{avm}
\label{lentr:7}
\end{examps}
In contrast, \pref{lentr:9} (that expresses the
non-habitual reading of \pref{lentr:8}) is represented as
\pref{lentr:10}. In this case, the subsort {\srt
5:00pm}\/ of {\srt gappy\_partng}\/ represents the \textsc{Top}\xspace gappy
partitioning name $\text{\textit{5:00pm}}^g$. (It cannot represent a
\textsc{Top}\xspace term, because \textsc{Top}\xspace terms cannot be used as first arguments of
\ensuremath{\mathit{Part}}\xspace operators.) The \avmbox{1}s in
\pref{lentr:10} mean that the values of {\feat part\_var} and {\feat
time\_spec} must be token-identical, i.e.\ they must represent the
same \textsc{Top}\xspace variable.
\avmoptions{active}
\begin{examps}
\item BA737 departed (actually) at 5:00pm. \label{lentr:8}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, depart(ba737)]]$ \label{lentr:9}
\item \begin{avm}
\osort{and}{
[conjunct1 & \osort{part}{
[partng & \sort{5:00pm}{
[tvar $-$]} \\
part\_var & \sort{minute\_ent\_var}{
[tvar $+$]}@1 ]} \\
conjunct2 & \osort{at\_op}{
[time\_spec & @1 \\
main\_psoa & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \sort{depart}{
[arg1 & \sort{ba737}{
[tvar $-$]} ]}]}]}]}
\end{avm}
\label{lentr:10}
\end{examps}
The {\srt minute\_gappy\_var}\/ and {\srt day\_gappy\_var}\/ sorts of figure
\ref{ind_hierarchy} are used only to represent \textsc{Top}\xspace variables that
denote generic representatives of unknown {\srt minute\_gappy}\/ or
{\srt day\_gappy}\/ partitionings. The $t^v$ variable of
\pref{lentr:16}, for example, denotes the generic representative of an
unknown {\srt minute\_gappy}\/ partitioning. (If BA737 departs
habitually at 5:00pm, $t^v$ denotes the generic representative of the
$\text{\textit{5:00pm}}^g$ partitioning, the same generic
representative that the \textit{5:00pm} constant of \pref{lentr:6}
denotes.) The $\ensuremath{\mathit{Pres}}\xspace[hab\_departs\_at(ba737, t^v)]$ part of
\pref{lentr:16} is represented as \pref{lentr:17}. (The
feature-structure representation of quantifiers will be discussed in
section \ref{TOP_FS_WH}.)
\begin{examps}
\item When does BA737 (habitually) depart? \label{lentr:15}
\item $?t^v \; \ensuremath{\mathit{Pres}}\xspace[hab\_departs\_at(ba737, t^v)]$ \label{lentr:16}
\item \begin{avm}
\osort{pres}{
[main\_psoa & \sort{hab\_departs\_at}{
[arg1 & \sort{ba737}{
[tvar $-$]} \\
arg2 & \sort{minute\_gappy\_var}{
[tvar $+$]} ]} ]}
\end{avm}
\label{lentr:17}
\end{examps}
The indices of sorts like {\srt minute\_gappy\_var}\/ and {\srt
day\_gappy\_var}\/ have their {\feat tvar}
\index{tvar@{\feat tvar} (\textsc{Hpsg}\xspace feature, shows if an index represents a \textsc{Top}\xspace variable)}
set to $+$. The indices of all other leaf-subsorts of {\srt
gappy\_partng}\/ (e.g.\ {\srt 5:00pm}, {\srt monday}\/) have their
{\feat tvar} set to $-$.
\section{Representing TOP quantifiers in HPSG}
\label{TOP_FS_WH}
\textsc{Top}\xspace yes/no formulae are represented in the \textsc{Hpsg}\xspace version of this
thesis as feature-structures of sort {\srt psoa} (figure
\ref{psoa_fig}). To represent \textsc{Top}\xspace wh-formulae (formulae with
interrogative or interrogative-maximal quantifiers) additional
feature-structure sorts are needed. I discuss these below.
Feature structures of sort {\srt quant}\/ represent unresolved
quantifiers (quantifiers whose scope is not known yet). They have
two features: {\feat det} and {\feat restind} (restricted index), as
shown in \pref{nps:5}. The {\feat det} feature shows the type of the
quantifier. In this thesis, {\feat det} can have the values {\srt
exists}\/ (existential quantifier), {\srt interrog}\/
\index{interrog5@{\srt interrog}\/ (\textsc{Hpsg}\xspace sort, represents \textsc{Top}\xspace
interrogative quantifiers)}
(interrogative quantifier), and {\srt interrog\_mxl}\/
\index{interrogmxl5@{\srt interrog\_mxl}\/ (\textsc{Hpsg}\xspace sort, represents \textsc{Top}\xspace
interrogative-maximal quantifiers)}
(interrogative-maximal quantifier). (Apart from the values of {\feat
det}, {\srt quant}\/ is as in \cite{Pollard2}.)
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
\sort{quant}{
[det & exists $\lor$ interrog $\lor$ interrog\_mxl \\
restind & \osort{nom\_obj}{
[index & \sort{ind}{
[tvar & $+$]} \\
restr & set\(psoa\)]}]}
\end{avm}
\label{nps:5}
\end{examps}
The values of {\feat restind} are feature structures of sort {\srt
nom\_obj}\/ (nominal object).\footnote{In \cite{Pollard2}, {\srt
nom\_obj}\/ has the subsorts {\srt npro}\/ (non-pronoun) and {\srt
pron}\/ (pronoun). These subsorts are not used in this thesis.}
These have the features {\feat index} (whose values are of sort {\srt
ind}\/) and {\feat restr} (whose values are sets of {\srt psoa}s).
When a {\srt nom\_obj}\/ feature structure is the value of {\feat
restind}, the {\feat index} corresponds to the \textsc{Top}\xspace variable being
quantified, and the {\feat restr} corresponds to the restriction of
the quantifier. (If the {\feat restr} set contains more than one {\srt
psoa}, the {\srt psoa}-elements of the set are treated as forming a
conjunction.) For example, \pref{nps:6} represents \pref{nps:7}.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
\sort{quant}{
[det & interrog \\
restind & \osort{nom\_obj}{
[index & \sort{ind}{
[tvar & $+$]}@1 \\
restr & \{\sort{flight}{
[arg1 & @1]} \}]}]}
\end{avm}
\label{nps:6}
\item $?f^v \; flight(f^v)$ \label{nps:7}
\end{examps}
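As an informal illustration only, \pref{nps:6} could be rendered in a record
notation as below. The Python dictionary encoding and all key names are mine
and are not part of the prototype; the point is merely that the {\feat index}
of {\feat restind} is the quantified variable, that this index is shared with
the {\feat arg1} of the restriction, and that a {\feat restr} set with
several {\srt psoa}s would be read as a conjunction.
\begin{verbatim}
# Hypothetical record-style rendering of (nps:6); encoding and names are mine.
f_v = {"sort": "var", "tvar": "+"}      # the index: the TOP variable f^v

quant_nps6 = {
    "sort": "quant",
    "det": "interrog",                  # an interrogative quantifier: ?f^v
    "restind": {
        "sort": "nom_obj",
        "index": f_v,                   # the variable being quantified
        "restr": [                      # a set of psoas; >1 element = conjunction
            {"sort": "flight", "arg1": f_v},   # flight(f^v); arg1 shares the index
        ],
    },
}
\end{verbatim}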
Although \textsc{Top}\xspace does not use explicit existential quantifiers
(universal quantification is not supported, and \textsc{Top}\xspace variables can be
thought of as existentially quantified), the \textsc{Hpsg}\xspace version of this
thesis employs explicit existential quantifiers ({\srt quant}s whose
{\feat det} is {\srt exists}\/) for compatibility with
\cite{Pollard2}. These explicit existential quantifiers are removed
when extracting \textsc{Top}\xspace formulae from signs (this is discussed in
section \ref{extraction_hpsg} below).
\section{Extracting TOP formulae from HPSG signs}
\label{extraction_hpsg}
The parser maps each question to a sign. (Multiple signs are generated
when the parser understands a question to be ambiguous.) For example,
\qit{Which inspector was at gate 2?} is mapped to \pref{nps:14b}
(exactly how \pref{nps:14b} is generated will become clearer in the
following sections; see also the comments about \ensuremath{\mathit{Ntense}}\xspace{s} in section
\ref{non_pred_nps} below).
\begin{examps}
\setbox\avmboxa=\hbox{\begin{avm}
\sort{inspector}{
[arg1 & @1]}
\end{avm}}
\avmoptions{active,center}
\item
\begin{avm}
[\avmspan{phon \; \<\fval which, inspector, was, at, gate2\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\> \\
comps & \<\>] \\
cont & \osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & \osort{located\_at}{
[arg1 & @1 \\
arg2 & gate2]}]}] \\
\avmspan{qstore \; \{\sort{quant}{
[det & interrog \\
restind & \osort{nom\_obj}{
[index & \sort{person\_ent}{
[tvar & $+$]}@1 \\
restr & \{\box\avmboxa\}
]}
]}\}
}
]
\end{avm}
\label{nps:14b}
\end{examps}
Apart from the features that were discussed in section
\ref{HPSG_basics}, signs also have the feature {\feat qstore}, whose
values are sets of {\srt quant}s (section \ref{TOP_FS_WH}). The {\feat
cont} value of signs that correspond to questions is of sort {\srt
psoa}\/, i.e.\ it represents a \textsc{Top}\xspace yes/no formula. In the \textsc{Hpsg}\xspace
version of this thesis, the {\feat qstore} value represents
quantifiers that must be ``inserted'' in front of the formula of
{\feat cont}. In the prototype \textsc{Nlitdb}\xspace (to be discussed in chapter
\ref{implementation}), there is an ``extractor'' of \textsc{Top}\xspace formulae
that examines the {\feat cont} and {\feat qstore} features of the
question's sign, and generates the corresponding \textsc{Top}\xspace formula. This
is a trivial process, which I discuss only at an abstract level: the
extractor first examines recursively the features and feature values
of {\feat cont}, rewriting them in term notation (in \pref{nps:14b},
this generates \pref{nps:15}); then, for each element of {\feat
qstore}, the extractor adds a suitable quantifier in front of the
formula of {\feat cont} (in \pref{nps:14b}, this transforms
\pref{nps:15} into \pref{nps:17}).
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$ \label{nps:15}
\item $?p^v \; inspector(p^v) \land
\ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$ \label{nps:17}
\end{examps}
In the case of elements of {\feat qstore} that correspond to
existential quantifiers, no explicit existential quantifier is added
to the formula of {\feat cont} (only the expression that corresponds
to the {\feat restr} of the {\srt quant}-element is added). For example,
if the {\feat det} of \pref{nps:14b} were {\srt exists}, \pref{nps:17}
would be \pref{nps:17b}.
\begin{examps}
\item $inspector(p^v) \land
\ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$ \label{nps:17b}
\end{examps}
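To make the two steps concrete, the following Python sketch mimics the
extractor over the dictionary encoding used in earlier illustrations. It is
my own abstraction rather than the prototype's code (which operates on the
parser's feature structures), it handles only the \ensuremath{\mathit{Pres}}\xspace and \ensuremath{\mathit{Past}}\xspace operators
and ordinary predicates, and it writes \qit{and} for \textsc{Top}\xspace's conjunction.
\begin{verbatim}
# Hypothetical sketch of the extractor; encoding and names are mine.
_names = {}                              # token-identical indices get the same name

def term(fs):
    """Rewrite a cont value in (plain-text) TOP term notation."""
    if isinstance(fs, str):              # an atomic constant, e.g. "gate2"
        return fs
    if fs.get("tvar") == "+":            # an index representing a TOP variable
        return _names.setdefault(id(fs), f"x{len(_names) + 1}^v")
    if fs["sort"] == "past":             # operator sorts mirror TOP operators
        return f"Past[{term(fs['et_handle'])}, {term(fs['main_psoa'])}]"
    if fs["sort"] == "pres":
        return f"Pres[{term(fs['main_psoa'])}]"
    args = [term(fs[k]) for k in sorted(fs) if k.startswith("arg")]
    return f"{fs['sort']}({', '.join(args)})"    # e.g. located_at(x1^v, gate2)

def extract(sign):
    """Step 1: rewrite cont; step 2: prefix one quantifier per qstore element."""
    formula = term(sign["cont"])
    for q in sign["qstore"]:
        restr = " and ".join(term(p) for p in q["restind"]["restr"])
        formula = f"{restr} and {formula}"       # conjoin the restriction
        if q["det"] != "exists":                 # existentials stay implicit in TOP
            formula = f"?{term(q['restind']['index'])} {formula}"
    return formula
\end{verbatim}
Applied to a dictionary rendering of \pref{nps:14b}, the sketch yields a
formula of the shape of \pref{nps:17}; with {\feat det} set to {\srt
exists}\/ it yields the shape of \pref{nps:17b}.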
The extracted formula then undergoes an additional post-processing
phase (to be discussed in section \ref{post_processing}). This is a
collection of transformations that need to be applied to some of the
extracted formulae. (In \pref{nps:17} and \pref{nps:17b}, the
post-processing has no effect.)
\section{Verb forms} \label{hpsg:verb_forms}
I now present the treatment of the various linguistic constructs,
starting from verb forms (simple present, past continuous, etc.).
(Pollard and Sag do not discuss temporal linguistic mechanisms.)
\subsection{Single-word verb forms} \label{single_word_forms}
Let us first examine the lexical rules that generate signs for
(single-word) non-base verb forms from signs for base forms. The
signs for simple present forms are generated by \pref{vforms:1}.
\begin{examps}
\item \lexrule{Simple Present Lexical Rule:}
\begin{center}
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<$\lambda$\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & bse \\
aux & $-$]} \\
aspect & lex\_state] \\
cont & @1]]
\end{avm}
\\
$\Downarrow$
\\
\begin{avm}
[\avmspan{phon \; \<\fval morph\($\lambda$, simple\_present\)\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $-$]} \\
aspect & lex\_state] \\
cont & \sort{pres}{
[main\_psoa & @1]} ]]
\end{avm}
\end{center}
\label{vforms:1}
\end{examps}
\pref{vforms:1} means that for each lexical sign that matches the
first feature structure (the ``left hand side'', LHS) of the rule, a
new lexical sign should be generated as shown in the second feature
structure (the ``right hand side'', RHS) of the rule. (Following
standard \textsc{Hpsg}\xspace notation, I write {\feat synsem$\mid$loc} to refer to
the {\feat loc} feature of the value of {\feat synsem}.) The {\feat
head}s of the LHS and RHS mean that the original sign must
correspond to the base form of a non-auxiliary verb (auxiliary verbs
are treated separately), and that the resulting sign corresponds to a
finite verb form (a form that does not need to combine with an
auxiliary verb). The {\feat cont} of the new sign is the same as the
{\feat cont} of the original one, except that it contains an
additional \ensuremath{\mathit{Pres}}\xspace operator. Features of the original sign not shown in
the LHS (e.g.\ {\feat subj}, {\feat comps}) have the same
values in the generated sign. \pref{vforms:1} requires the original
sign to correspond to a (lexical) state base form. No simple present
signs are generated for verbs whose base forms are not states. This is
in accordance with the assumption of section \ref{simple_present} that
the simple present can be used only with state verbs.
$morph(\lambda, simple\_present)$ denotes a morphological
transformation that generates the simple present form (e.g.\
\qit{contains}) from the base form (e.g.\ \qit{contain}). The
prototype \textsc{Nlitdb}\xspace actually employs two different simple present
lexical rules. These generate signs for singular and plural simple
present forms respectively. As mentioned in sections
\ref{ling_not_supported} and \ref{TOP_FS}, plurals are treated
semantically as singulars, and no number-agreement checks are made.
Hence, the two lexical rules differ only in the {\feat phon} values of
the generated signs.
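Procedurally, a lexical rule like \pref{vforms:1} can be thought of as a
function from signs to signs. The sketch below is my own illustration over
the same dictionary encoding as before (the key names and the toy
\texttt{morph} function are mine, not the prototype's); it checks the LHS
conditions, copies the features the rule does not mention, and wraps the
original {\feat cont} in a \ensuremath{\mathit{Pres}}\xspace operator.
\begin{verbatim}
# Hypothetical sketch of the Simple Present Lexical Rule; encoding is mine.
import copy

def morph(word, form):
    """Toy stand-in for the morphological component, e.g. contain -> contains."""
    return word + "s" if form == "simple_present" else word

def simple_present_rule(sign):
    cat = sign["synsem"]["loc"]["cat"]
    head = cat["head"]
    if not (head["sort"] == "verb" and head["vform"] == "bse"
            and head["aux"] == "-" and cat["aspect"] == "lex_state"):
        return None                      # the rule fires only for state base forms
    new = copy.deepcopy(sign)            # unmentioned features are carried over
    new["phon"] = [morph(w, "simple_present") for w in sign["phon"]]
    new["synsem"]["loc"]["cat"]["head"]["vform"] = "fin"
    new["synsem"]["loc"]["cont"] = {     # wrap the original cont in a Pres operator
        "sort": "pres",
        "main_psoa": new["synsem"]["loc"]["cont"],
    }
    return new
\end{verbatim}
Applied to a dictionary rendering of the base form sign of \qit{to contain}
shown in \pref{vforms:2} below, it would produce the analogue of
\pref{vforms:3}.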
\pref{vforms:2} shows the base form sign of \qit{to contain} in the
airport domain. From \pref{vforms:2}, \pref{vforms:1} generates
\pref{vforms:3}. The {\srt tank\_ent}\/ and {\srt mass\_ent}\/ in
\pref{vforms:2} and \pref{vforms:3} mean that the
indices introduced by the subject and the object must be of sort {\srt
tank\_ent} and {\srt mass\_ent}\/ respectively ({\srt tank\_ent}\/ is
a sister of {\srt flight\_ent}\/ in figure \ref{ind_hierarchy}). Hence,
the semantically anomalous \qit{Gate 2 contains water.}
(where the subject introduces an index of sort {\srt gate2}, which is
not a subsort of {\srt tank\_ent}\/) would be rejected. All lexical
signs of verb forms have their {\feat qstore} set to $\{\}$. For
simplicity, I do not show the {\feat qstore} feature here.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval contain\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & bse \\
aux & $-$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{tank\_ent@1}$\> \\
comps & \<\feat np[-prd]$_{mass\_ent@2}$\> ]\\
cont & \sort{contains}{
[arg1 & @1 \\
arg2 & @2]}]]
\end{avm}
\label{vforms:2}
\item
\begin{avm}
[\avmspan{phon \; \<\fval contains\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $-$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{tank\_ent@1}$\> \\
comps & \<\feat np[-prd]$_{mass\_ent@2}$\> ]\\
cont & \osort{pres}{
[main\_psoa & \osort{contains}{
[arg1 & @1 \\
arg2 & @2]}]}]]
\end{avm}
\label{vforms:3}
\end{examps}
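The rejection of \qit{Gate 2 contains water.} mentioned above comes down to a
failed unification of indices whose sorts are incompatible in the hierarchy
of figure \ref{ind_hierarchy}. The Python sketch below is my own illustration
with a toy fragment of that hierarchy (all names are mine); unification of
two indices succeeds only if one sort subsumes the other.
\begin{verbatim}
# Hypothetical sketch of the sortal check; the hierarchy fragment is a toy one.
PARENT = {
    "gate2": "gate_ent",   "water": "mass_ent",   "ba737": "flight_ent",
    "gate_ent": "non_temp_ent", "mass_ent": "non_temp_ent",
    "tank_ent": "non_temp_ent", "flight_ent": "non_temp_ent",
    "non_temp_ent": "ind",
}

def subsort_of(sort, ancestor):
    """True iff sort equals ancestor or lies below it in the toy hierarchy."""
    while sort is not None:
        if sort == ancestor:
            return True
        sort = PARENT.get(sort)
    return False

def indices_unify(sort1, sort2):
    """Two indices unify only if one sort subsumes the other."""
    return subsort_of(sort1, sort2) or subsort_of(sort2, sort1)

assert indices_unify("water", "mass_ent")       # "... contains water" is accepted
assert not indices_unify("gate2", "tank_ent")   # "Gate 2 contains ..." is rejected
\end{verbatim}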
The simple past signs of culminating activity verbs are generated by
\pref{vforms:4}, shown below. The simple past signs of non-culminating activity
verbs are generated by a lexical rule that is similar to
\pref{vforms:4}, except that it does not introduce a \ensuremath{\mathit{Culm}}\xspace operator in
the resulting sign.
The signs of past participles (e.g.\ \qit{inspected} in \qit{Who had
inspected BA737?}) are generated by two lexical rules
which are similar to the simple past ones. There is a rule for
culminating activity verbs (which introduces a \ensuremath{\mathit{Culm}}\xspace in the past
participle sign), and a rule for non-culminating activity verbs (that
introduces no \ensuremath{\mathit{Culm}}\xspace). Neither rule introduces a \ensuremath{\mathit{Past}}\xspace operator. The
generated signs have their {\feat vform} set to {\fval psp}\/ (past
participle), and the same {\feat aspect} as the base signs, i.e.\
their {\feat aspect} is not changed to {\fval cnsq\_state} (consequent
state). The shift to consequent state takes place when the auxiliary
\qit{had} combines with the past participle (this will be discussed in
section \ref{multi_forms}).
\newpage
\begin{examps}
\item
\lexrule{Simple Past Lexical Rule (Culminating Activity Base Form):}
\avmoptions{active}
\begin{center}
\begin{avm}
[\avmspan{phon \; \<$\lambda$\>} \\
synsem|loc & [cat & [head & \sort{verb}{
[vform & bse \\
aux & $-$]} \\
aspect & culmact] \\
cont & @1]]
\end{avm}
\\
$\Downarrow$
\\
\begin{avm}
[\avmspan{phon \; \<\fval morph\($\lambda$, simple\_past\)\>} \\
synsem|loc & [cat & [head & \sort{verb}{
[vform & fin \\
aux & $-$]} \\
aspect & culmact] \\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \sort{culm}{
[main\_psoa & @1]} ]} ]]
\end{avm}
\end{center}
\label{vforms:4}
\end{examps}
The signs for present participles (e.g.\ \qit{servicing} in
\qit{Which company is servicing BA737?}) are generated by
\pref{vforms:10}. The present participle signs are the same
as the base ones, except that their {\feat vform} is {\fval
prp}\/ (present participle), and their {\feat aspect} is {\srt
progressive}\/ (progressive state).
\begin{examps}
\item
\lexrule{Present Participle Lexical Rule:}
\avmoptions{active}
\begin{center}
\begin{avm}
[\avmspan{phon \; \<$\lambda$\>} \\
synsem|loc & [cat & [head & \sort{verb}{
[vform & bse \\
aux & $-$]} \\
aspect & aspect] \\
cont & @1]]
\end{avm}
\\
$\Downarrow$
\\
\begin{avm}
[\avmspan{phon \; \<\fval morph\($\lambda$, present\_participle\)\>} \\
synsem|loc & [cat & [head & \sort{verb}{
[vform & prp \\
aux & $-$]} \\
aspect & progressive] \\
cont & @1]]
\end{avm}
\end{center}
\label{vforms:10}
\end{examps}
Gerund signs are generated by a lexical rule that is similar to
\pref{vforms:10}, except that the generated signs retain the {\feat
aspect} of the original ones, and have their {\feat vform} set to
{\fval ger}. In English, there is no morphological distinction between
gerunds and present participles. \textsc{Hpsg}\xspace and most traditional grammars
(e.g.\ \cite{Thomson}), however, distinguish between the two. In
\pref{vforms:10x1}, the \qit{inspecting} is the gerund of \qit{to
inspect}, while in \pref{vforms:10x2}, the \qit{inspecting} is
the present participle.
\begin{examps}
\item J.Adams finished inspecting BA737. \label{vforms:10x1}
\item J.Adams was inspecting BA737. \label{vforms:10x2}
\end{examps}
The fact that gerund signs retain the {\feat aspect} of the base signs
is used in the treatment of \qit{to finish} (section
\ref{special_verbs}). The simple past \qit{finished} receives multiple
signs. (These are generated from corresponding base form signs by the
simple past lexical rules.) \pref{vforms:14} is used when
\qit{finished} combines with a culminating activity verb phrase, and
\pref{vforms:23} when it combines with a state or activity verb
phrase.
\avmoptions{active}
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval finished\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $-$ ]} \\
aspect & point \\
spr & \<\> \\
subj & \<@1\> \\
comps & \<\feat
vp[subj \<@1\>, vform {\fval ger},
aspect {\fval culmact}]:@2
\> ]\\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \osort{end}{
[main\_psoa & \sort{culm}{
[main\_psoa & @2]}]}]}]]
\end{avm}
\label{vforms:14}
\item
\begin{avm}
[\avmspan{phon \; \<\fval finished\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $-$ ]} \\
aspect & point \\
spr & \<\> \\
subj & \<@1\> \\
comps & \<\feat
vp[subj \<@1\>, vform {\fval ger}, \\
aspect {\fval state $\lor$ activity}]:@2
\> ]\\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \sort{end}{
[main\_psoa & @2]}]}]]
\end{avm}
\label{vforms:23}
\end{examps}
In \pref{vforms:14}, the {\feat vp[subj $<$\avmbox{1}$>$, vform {\fval
ger}, aspect {\fval culmact}]:\avmbox{2}} means that
\qit{finished} requires as its complement a gerund verb phrase (a
gerund that has combined with its complements but not its subject)
whose aspect is culminating activity. The \avmbox{1} of {\feat comps}
points to a description of the required subject of the gerund verb
phrase, and the \avmbox{2} is a pointer to the {\feat cont} value of
the sign of the gerund verb phrase. The two \avmbox{1}s in
\pref{vforms:14} have the effect that \qit{finished} requires as its
subject whatever the gerund verb phrase requires as its subject. The
two \avmbox{2}s cause the sign of \qit{finished} to inherit the {\feat
cont} value of the sign of the gerund verb phrase, but with
additional \ensuremath{\mathit{Past}}\xspace, \ensuremath{\mathit{End}}\xspace, and \ensuremath{\mathit{Culm}}\xspace operators. \pref{vforms:23} is
similar, but it introduces no \ensuremath{\mathit{Culm}}\xspace.
In \pref{vforms:10x1}, the sign of the gerund \qit{inspecting} retains
the {\feat aspect} of the base sign, which in the airport domain is
{\srt culmact}. The sign of the gerund verb phrase \qit{inspecting
BA737} inherits the {\srt culmact}\/ {\feat aspect} of the gerund
sign (following the aspect principle, to be discussed in section
\ref{hpsg:punc_adv}). Hence, \pref{vforms:14} is used. This causes
\pref{vforms:10x1} to receive a sign whose {\feat cont} represents
\pref{vforms:24x1}, which requires the inspection to have been completed.
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, jadams, ba737)]]]$
\label{vforms:24x1}
\end{examps}
In \pref{vforms:24}, the sign of \qit{circling} inherits the {\srt
activity}\/ {\feat aspect} of the base sign, causing
\pref{vforms:23} to be used. This leads to \pref{vforms:25}, which
does not require any completion to have been reached.
\begin{examps}
\item BA737 finished circling. \label{vforms:24}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{End}}\xspace[circling(ba737)]]$ \label{vforms:25}
\end{examps}
There is also a sign of the simple past \qit{finished} for the case
where the gerund verb phrase is a point. In that case, the {\feat
cont} of the sign of \qit{finished} is identical to the {\feat
cont} of the sign of the gerund verb phrase, i.e.\ the
\qit{finished} has no semantic contribution. This is in accordance
with the arrangements of section \ref{special_verbs}. The signs of
\qit{started} and \qit{began} are similar, except that
they introduce \ensuremath{\mathit{Begin}}\xspace operators instead of \ensuremath{\mathit{End}}\xspace ones. The signs of
\qit{stopped} also introduce \ensuremath{\mathit{End}}\xspace operators but, unlike those of
\qit{finished}, they do not introduce \ensuremath{\mathit{Culm}}\xspace
operators when \qit{stopped} combines with culminating activities,
reflecting the fact that there is no need for a completion to have
been reached.
\subsection{Auxiliary verbs and multi-word verb forms} \label{multi_forms}
I now move on to auxiliary verbs and multi-word verb forms (e.g.\
\qit{had departed}, \qit{is inspecting}). \pref{vforms:30} shows the
sign of the simple past auxiliary \qit{had}. According to
\pref{vforms:30}, \qit{had} requires as its complement a past
participle verb phrase. The \avmbox{1}s mean that \qit{had}
requires as its subject whatever the past participle verb phrase
requires as its subject. The \avmbox{2}s mean that the {\feat
main\_psoa} value of the {\srt perf}\/ is the {\feat cont} value of
the sign of the past participle verb phrase.
\begin{examps}
\avmoptions{active}
\item
\begin{avm}
[\avmspan{phon \; \<\fval had\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & cnsq\_state \\
spr & \<\> \\
subj & \<@1\> \\
comps & \<\feat
vp[subj \<@1\>, vform {\fval psp}]:@2
\> ]\\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \osort{perf}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & @2]}]}]]
\end{avm}
\label{vforms:30}
\end{examps}
In the airport domain, the past participle \qit{departed} receives
multiple signs (for various habitual and non-habitual uses; these
signs are generated from the corresponding base form signs by the
lexical rules of section \ref{single_word_forms}). The sign of
\pref{vforms:32} is used in \pref{vforms:31}.
\begin{examps}
\avmoptions{active}
\item BA737 had departed. \label{vforms:31}
\item
\begin{avm}
[\avmspan{phon \; \<\fval departed\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & psp \\
aux & $-$ ]} \\
aspect & point \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{flight\_ent@3}$\> \\
comps & \<\> ]\\
cont & \sort{actl\_depart}{
[arg1 & @3]}]]
\end{avm}
\label{vforms:32}
\end{examps}
According to \pref{vforms:32}, \qit{departed} requires no complements,
i.e.\ it counts as a verb phrase, and can be used as the complement of
\qit{had}. When \qit{had} combines with \qit{departed}, the {\feat
subj} of \pref{vforms:30} becomes the same as the {\feat subj} of
\pref{vforms:32} (because of the \avmbox{1}s in \pref{vforms:30}), and
the {\feat main\_psoa} of the {\srt perf}\/ in \pref{vforms:30}
becomes the same as the {\feat cont} of \pref{vforms:32} (because of
the \avmbox{2}s in \pref{vforms:30}). The resulting constituent
\qit{had departed} receives \pref{vforms:33}.
\begin{examps}
\avmoptions{active}
\item
\begin{avm}
[\avmspan{phon \; \<\fval had, departed\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & cnsq\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{flight\_ent@3}$\> \\
comps & \<\> ]\\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \osort{perf}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \sort{actl\_depart}{
[arg1 & @3]}]}]}]]
\end{avm}
\label{vforms:33}
\end{examps}
The \textsc{Hpsg}\xspace principles (including the semantics and aspect principles
that will be discussed in sections \ref{non_pred_nps} and
\ref{hpsg:punc_adv}) cause \pref{vforms:33} to inherit the {\feat
head}, {\feat aspect}, {\feat spr}, {\feat subj}, and {\feat cont}
values of \pref{vforms:30}. Notice that this causes the aspect of
\qit{had departed} to become consequent state (\qit{departed} was a
point). As will be discussed in section \ref{hpsg:nouns}, the proper
name \qit{BA737} contributes an index that represents the flight
BA737. When \qit{had departed} combines with its subject \qit{BA737},
the index of \qit{BA737} becomes the {\feat arg1} value of
\pref{vforms:33} (because of the \avmbox{3}s of \pref{vforms:33}).
This causes \pref{vforms:31} to receive a sign whose {\feat cont}
represents \pref{vforms:35}.
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, actl\_depart(ba737)]]$ \label{vforms:35}
\end{examps}
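The role of the structure sharing tags in this derivation can again be
illustrated informally. In the Python sketch below (my own encoding, not the
prototype's), a shared slot plays the part of the \avmbox{3}s of
\pref{vforms:33}: filling it when the subject combines is immediately visible
inside the {\feat cont} inherited from \pref{vforms:30}, and an extractor
like the one sketched in section \ref{extraction_hpsg} would then yield
\pref{vforms:35}.
\begin{verbatim}
# Hypothetical sketch of structure sharing in "BA737 had departed".
arg1_slot = {}                                   # the slot tagged [3] in (vforms:33)

departed_cont = {"sort": "actl_depart", "arg1": arg1_slot}

had_departed_cont = {                            # cont inherited from "had"
    "sort": "past",
    "et_handle": {"sort": "temp_ent", "tvar": "+"},
    "main_psoa": {
        "sort": "perf",
        "et_handle": {"sort": "temp_ent", "tvar": "+"},
        "main_psoa": departed_cont,              # the [2] tag of (vforms:30)
    },
}

# Combining with the subject "BA737" instantiates the shared slot; the change
# is visible deep inside had_departed_cont because the slot is token-identical.
arg1_slot["sort"] = "ba737"
arg1_slot["tvar"] = "-"
assert had_departed_cont["main_psoa"]["main_psoa"]["arg1"]["sort"] == "ba737"
\end{verbatim}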
As mentioned in sections \ref{present_perfect} and \ref{perf_op},
present perfect forms are treated semantically as simple past forms.
This is why, unlike the sign of \qit{had}, the sign of \qit{has}
(shown in \pref{vforms:36}) does not introduce a \ensuremath{\mathit{Perf}}\xspace operator, and
preserves the aspect of the past participle. This causes \qit{BA737
has departed.} to receive the same \textsc{Top}\xspace formula as \qit{BA737
departed.}.
\begin{examps}
\avmoptions{active}
\item
\begin{avm}
[\avmspan{phon \; \<\fval has\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & @1 \\
spr & \<\> \\
subj & \<@2\> \\
comps & \<\feat
vp[subj \<@2\>, vform {\fval psp},
aspect @1]:@3
\> ]\\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & @3]}]]
\end{avm}
\label{vforms:36}
\end{examps}
\qit{Does} receives the sign of \pref{vforms:40.1}, which indicates
that it requires as its complement a base verb phrase. The verb phrase
must be a (lexical) state. (This is in accordance
with the assumption of section \ref{simple_present} that the simple
present can be used only with state verbs.) \pref{vforms:40.1} and the
(habitual) base sign of \pref{vforms:41.1} cause \pref{vforms:41} to
receive \pref{vforms:42}.
\avmoptions{active}
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval does\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state @1 \\
spr & \<\> \\
subj & \<@2\> \\
comps & \<\feat
vp[subj \<@2\>, vform {\fval bse}, aspect @1]:@3
\> ]\\
cont & \sort{pres}{
[main\_psoa & @3]}]]
\end{avm}
\label{vforms:40.1}
\item
\begin{avm}
[\avmspan{phon \; \<\fval service\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & bse \\
aux & $-$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{company\_ent@4}$\> \\
comps & \<\feat np[-prd]$_{flight\_ent@5}$\> ]\\
cont & \sort{hab\_servicer\_of}{
[arg1 & @4 \\
arg2 & @5]} ]]
\end{avm}
\label{vforms:41.1}
\item Does Airserve service BA737? \label{vforms:41}
\item
\begin{avm}
[\avmspan{phon \; \<\fval Does, Airserve, service, BA737\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{pres}{
[main\_psoa & \sort{hab\_servicer\_of}{
[arg1 & airserve \\
arg2 & ba737]}]}]]
\end{avm}
\label{vforms:42}
\end{examps}
In the airport domain, the base form of \qit{to service} also receives
a sign that corresponds to the non-habitual homonym. This is similar
to \pref{vforms:41.1}, but it introduces the predicate functor
$actl\_servicing$, and its {\feat aspect} is {\srt culmact}. This sign
cannot be used in \pref{vforms:41}, because \pref{vforms:40.1}
requires the verb-phrase complement to be a state, not a culminating
activity. This correctly predicts that \pref{vforms:41} cannot be
asking if Airserve is actually servicing BA737 at the present moment.
\qit{Did} receives two signs: one for culminating-activity
verb-phrase complements (shown in \pref{vforms:43}), and one for
state, activity, or point verb-phrase complements (this is similar to
\pref{vforms:43}, but introduces no \ensuremath{\mathit{Culm}}\xspace). In both cases, a \ensuremath{\mathit{Past}}\xspace
operator is added. In the case of culminating-activity complements, a
\ensuremath{\mathit{Culm}}\xspace operator is added as well.
\avmoptions{active}
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval did\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & culmact @1 \\
spr & \<\> \\
subj & \<@2\> \\
comps & \<\feat
vp[subj \<@2\>, vform {\fval bse}, aspect @1]:@3
\> ]\\
cont & \osort{past}{
[et\_handle & \sort{temp\_ent}{
[tvar $+$]} \\
main\_psoa & \sort{culm}{
[main\_psoa & @3]}]}]]
\end{avm}
\label{vforms:43}
\end{examps}
The non-habitual sign of \qit{service} and \pref{vforms:43} cause
\pref{vforms:45} to be mapped to \pref{vforms:46}, which requires
Airserve to have actually serviced BA737 in the past. The habitual
sign of \pref{vforms:41.1} and the \qit{did} sign for non-culminating
activity complements cause \pref{vforms:45} to be mapped to
\pref{vforms:46x1}, which requires Airserve to have been a past habitual
servicer of BA737.
\begin{examps}
\item Did Airserve service BA737? \label{vforms:45}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[actl\_servicing(occr^v, airserve, ba737)]]$
\label{vforms:46}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, hab\_servicer\_of(airserve, ba737)]$ \label{vforms:46x1}
\end{examps}
The sign for the auxiliary \qit{is} is shown in \pref{vforms:50}. The
present participle \qit{servicing} receives two signs, a non-habitual
one (shown in \pref{vforms:51}) and a habitual one. The latter is
similar to \pref{vforms:51}, but it introduces the functor
$hab\_servicer\_of$, and its {\feat aspect} is {\srt lex\_state}. (The
two present participle signs are generated from the base ones by the
present participle lexical rule of section \ref{single_word_forms}.)
\pref{vforms:50} and \pref{vforms:51} cause \pref{vforms:52} to be
mapped to \pref{vforms:53}, which requires Airserve to be actually
servicing BA737 at the present. \pref{vforms:50} and the habitual
present participle sign cause \pref{vforms:52} to be mapped to
\pref{vforms:53x1}, which requires Airserve to be the current habitual
servicer of BA737.
\avmoptions{active}
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval is\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & progressive \\
spr & \<\> \\
subj & \<@1\> \\
comps & \<\feat
vp[subj \<@1\>, vform {\fval prp}]:@2
\> ]\\
cont & \sort{pres}{
[main\_psoa & @2]}]]
\end{avm}
\label{vforms:50}
\item
\begin{avm}
[\avmspan{phon \; \<\fval servicing\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & prp \\
aux & $-$ ]} \\
aspect & culmact \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{company\_ent@1}$\> \\
comps & \<\feat np[-prd]$_{flight\_ent@2}$\> ]\\
cont & \sort{actl\_servicing}{
[arg1 & occr\_var \\
arg2 & @1 \\
arg3 & @2]} ]]
\end{avm}
\label{vforms:51}
\item Airserve is servicing BA737. \label{vforms:52}
\item $\ensuremath{\mathit{Pres}}\xspace[actl\_servicing(occr^v, airserve, ba737)]$
\label{vforms:53}
\item $\ensuremath{\mathit{Pres}}\xspace[hab\_servicer\_of(airserve, ba737)]$ \label{vforms:53x1}
\end{examps}
The sign for the auxiliary \qit{was} is similar to \pref{vforms:50},
except that it introduces a \ensuremath{\mathit{Past}}\xspace operator instead of a \ensuremath{\mathit{Pres}}\xspace one.
\section{Predicative and non-predicative prepositions} \label{hpsg:pps}
\avmoptions{active}
Following Pollard and Sag (\cite{Pollard1}, p.65), prepositions
receive separate signs for their predicative and non-predicative uses.
In sentences like \pref{pps:3} and \pref{pps:4}, where the
prepositions introduce complements of \qit{to be}, the prepositions
are said to be predicative. In \pref{pps:1} and \pref{pps:2}, where
they introduce complements of other verbs, the prepositions are
non-predicative.
\begin{examps}
\item BA737 is at gate 2. \label{pps:3}
\item BA737 was on runway 3. \label{pps:4}
\item BA737 (habitually) arrives at gate 2. \label{pps:1}
\item BA737 landed on runway 3. \label{pps:2}
\end{examps}
Predicative prepositions introduce their own \textsc{Top}\xspace predicates, while
non-predicative prepositions have no semantic contribution.
\subsection{Predicative prepositions}
\pref{pps:5} shows the predicative sign of \qit{at}. (The predicative
signs of other prepositions are similar.) The {\feat prd}~$+$ shows
that the sign is predicative. ({\feat prd} is also used to distinguish
predicative adjectives and nouns; this will be discussed in sections
\ref{hpsg:nouns} and \ref{hpsg:adjectives}.) {\feat pform} reflects
the preposition to which the sign corresponds. Signs for prepositional
phrases inherit the {\feat pform} of the preposition's sign. This is
useful in verbs that require prepositional phrases introduced by
particular prepositions.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval at\>} \\
synsem|loc & [cat & [head & \osort{prep}{
[pform & at \\
prd & $+$]} \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\feat np[-prd]$_{@2}$\> ]\\
cont & \sort{located\_at}{
[arg1 & @1 \\
arg2 & non\_temp\_ent@2 ]}]]
\end{avm}
\label{pps:5}
\end{examps}
According to \pref{pps:5}, \qit{at} requires a (non-predicative)
noun-phrase (\qit{BA737} in \pref{pps:3}) as its subject, and another
one (\qit{gate 2} in \pref{pps:3}) as its complement. As will be
discussed in section \ref{hpsg:nouns}, \qit{BA737} and \qit{gate 2}
contribute indices that represent the corresponding world entities.
The \avmbox{2} of \pref{pps:5} denotes the index of \qit{gate 2}.
\pref{pps:5} causes \qit{at gate 2} to receive \pref{pps:6}.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval at, gate2\>} \\
synsem|loc & [cat & [head & \osort{prep}{
[pform & at \\
prd & $+$]} \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\> ]\\
cont & \sort{located\_at}{
[arg1 & @1 \\
arg2 & gate2 ]}]]
\end{avm}
\label{pps:6}
\end{examps}
Apart from \pref{vforms:50} (which is used when \qit{is}
combines with a present-participle complement), \qit{is} also receives
\pref{pps:7} (which is used when \qit{is} combines with
predicative prepositional phrases).
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval is\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<@3\> \\
comps & \<\feat pp[subj \<@3\>, prd $+$]:@4 \> ]\\
cont & \sort{pres}{
[main\_psoa & @4]}]]
\end{avm}
\label{pps:7}
\end{examps}
According to \pref{pps:7}, \qit{is} requires as its complement a predicative
prepositional phrase (a predicative preposition that has combined with
its complements but not its subject), like the \qit{at gate 2} of
\pref{pps:6}. \pref{pps:6} and \pref{pps:7} cause \pref{pps:3} to
receive \pref{pps:10}.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval BA737, is, at, gate2\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\> \\
comps & \<\>] \\
cont & \osort{pres}{
[main\_psoa & \osort{located\_at}{
[arg1 & ba737 \\
arg2 & gate2]}]}]]
\end{avm}
\label{pps:10}
\end{examps}
Like \qit{is}, \qit{was} receives two signs: one for
present-participle complements (as in \qit{BA737 was circling.}), and
one for predicative prepositional-phrase complements (as in
\pref{pps:4}). These are similar to the signs of \qit{is}, but they
introduce \ensuremath{\mathit{Past}}\xspace operators rather than \ensuremath{\mathit{Pres}}\xspace ones.
\subsection{Non-predicative prepositions}
The non-predicative sign of \qit{at} is shown in \pref{pps:12}. (The
non-predicative signs of other prepositions are similar.) The
\avmbox{1} is a pointer to the {\feat cont} value of the sign that
corresponds to the noun-phrase complement of \qit{at}. Notice that in
this case the \qit{at} has no semantic contribution (the \qit{at} sign
simply copies the {\feat cont} of the noun-phrase complement).
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval at\>} \\
synsem|loc & [cat & [head & \osort{prep}{
[pform & at \\
prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\feat np[-prd]:@1\> ]\\
cont & @1]]
\end{avm}
\label{pps:12}
\end{examps}
\pref{pps:12} and the habitual sign of \qit{arrives} of \pref{pps:13}
cause \pref{pps:1} to receive \pref{pps:16}.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval arrives\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $-$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{flight\_ent@1}$\> \\
comps & \<\feat
pp[-prd, pform {\fval at}]$_{gate\_ent@2}$
\> ]\\
cont & \osort{pres}{
[main\_psoa & \osort{hab\_arrive\_at}{
[arg1 & @1 \\
arg2 & @2]}]}]]
\end{avm}
\label{pps:13}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval BA737, arrives, at, gate2\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $-$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{pres}{
[main\_psoa & \osort{hab\_arrive\_at}{
[arg1 & ba737 \\
arg2 & gate2]}]}]]
\end{avm}
\label{pps:16}
\end{examps}
The (predicative and non-predicative) prepositional signs
of this section are not used when prepositions introduce temporal
adverbials (e.g.\ \qit{BA737 departed at 5:00pm.}). There are
additional prepositional signs for these cases (see section
\ref{hpsg:pupe_adv} below).
\section{Nouns} \label{hpsg:nouns}
\avmoptions{active}
Like prepositions, nouns receive different signs for their predicative
and non-predicative uses. Nouns used in noun-phrase complements of
\qit{to be} (more precisely, the lexical heads of such
noun-phrase complements), like the \qit{president} of \pref{nps:3},
are \emph{predicative}. The corresponding noun phrases (e.g.\
\qit{the president} of \pref{nps:3}) are also said to be
predicative. In all other cases (e.g.\ \qit{the president} of
\pref{nps:1}), the nouns and noun phrases are \emph{non-predicative}.
\begin{examps}
\item J.Adams is the president. \label{nps:3}
\item The president was at gate 2. \label{nps:1}
\end{examps}
\subsection{Non-predicative nouns}
\label{non_pred_nps}
Let us first examine non-predicative nouns. \pref{nps:2} shows the
sign of \qit{president} that would be used in \pref{nps:1}. The {\feat
prd} value shows that the sign corresponds to a non-predicative use
of the noun. The {\feat spr} value means that the noun requires as its
specifier a determiner (e.g.\ \qit{a}, \qit{the}).
\begin{examps}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\sort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & \osort{president}{
[arg1 & @1]}]}
\end{avm}}
\avmoptions{active,center}
\begin{avm}
[\avmspan{phon \; \<\fval president\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<[loc|cat|head & {\fval det}] \> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & person\_ent@1 \\
restr & \{\box\avmboxa\}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:2}
\end{examps}
The {\feat cont} values of signs that correspond to non-predicative
nouns are of sort {\srt nom\_obj}\/ (section \ref{TOP_FS_WH}). The
{\feat index} value stands for the world entity described by the noun,
and the {\feat restr} value represents \textsc{Top}\xspace expressions that are
introduced by the noun.
\pref{nps:4} shows the sign of \qit{the} that is used in \pref{nps:1}.
(In this thesis, \qit{the} is treated semantically as \qit{a}. This is
of course an over-simplification.)
\begin{examps}
\setbox\avmboxa=\hbox{\begin{avm}
\osort{det}{
[spec|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<\_\> \\
subj & \<\> \\
comps & \<\> ] \\
cont & @2]]}
\end{avm}}
\avmoptions{active,center}
\item
\begin{avm}
[\avmspan{phon \; \<\fval the\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{quant}{
[det & exists \\
restind & [index & [tvar $+$]]@2
]}@3
] \\
\avmspan{qstore \; \{@3\}} ]
\end{avm}
\label{nps:4}
\end{examps}
The {\feat spec} feature of \pref{nps:4} means that \qit{the} must be
used as the specifier of a non-predicative $\bar{N}$, i.e.\ as the
specifier of a non-predicative noun that has combined with its
complements and that requires a specifier. The \avmbox{3}s of
\pref{nps:4} cause an existential quantifier to be inserted into the
quantifier store, and the \avmbox{2}s cause the {\feat restind} of
that quantifier to be unified with the {\feat cont} of the $\bar{N}$'s
sign.
According to \pref{nps:2}, \qit{president} is non-predicative, does not
need to combine with any complements, and requires a specifier.
Hence, it satisfies the {\feat spec} restrictions of \pref{nps:4}, and
\qit{the} can be used as the specifier of \qit{president}. When
\qit{the} combines with \qit{president}, the {\feat restind} of
\pref{nps:4} is unified with the {\feat cont} of \pref{nps:2} (because of
the \avmbox{2}s in \pref{nps:4}), and the {\feat qstore} of
\pref{nps:4} becomes \pref{nps:7.1} (because of the \avmbox{3}s in
\pref{nps:4}). The resulting noun phrase receives \pref{nps:8}.
\begin{examps}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\sort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & \osort{president}{
[arg1 & @1]}]}
\end{avm}}
\avmoptions{active,center}
\begin{avm}
\{\sort{quant}{
[det & exists \\
restind & [index & \sort{person\_ent}{
[tvar & $+$]}@1 \\
restr & \{\box\avmboxa\}
]@2
]}@3\}
\end{avm}
\label{nps:7.1}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\sort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & \osort{president}{
[arg1 & @1]}]}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval the, president\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & \sort{person\_ent}{
[tvar & $+$]}@1 \\
restr & \{\box\avmboxa\}]}@2] \\
\avmspan{qstore \; \{\sort{quant}{
[det & exists \\
restind & @2]}\}}]
\end{avm}
\label{nps:8}
\end{examps}
According to the head feature principle (section
\ref{schemata_principles}), \pref{nps:8} inherits the {\feat head} of
\pref{nps:2} (which is the sign of the ``head daughter'' in this
case). The propagation of {\feat cont} and {\feat qstore}
is controlled by the semantics principle, which in this thesis has the
simplified form of \pref{nps:9}. (\pref{nps:9} uses the terminology of
\cite{Pollard2}. I explain below what \pref{nps:9} means for readers
not familiar with \cite{Pollard2}.)
\begin{examps}
\item
\principle{Semantics Principle (simplified version of this thesis):}\\
In a headed phrase, (a) the {\feat qstore} value is the union of the
{\feat qstore} values of the daughters, and (b) the {\feat
synsem$\mid$loc$\mid$cont} value is token-identical with that of
the semantic head. (In a headed phrase, the \emph{semantic head} is
the {\feat adjunct-daughter} if any, and the {\feat head-daughter}
otherwise.)
\label{nps:9}
\end{examps}
Part (a) means that the {\feat qstore} of each (non-lexical) syntactic
constituent is the union of the {\feat qstore}s of its
subconstituents. Part (b) means that each syntactic constituent
inherits the {\feat cont} of its head-daughter (the noun in
noun-phrases, the verb in verb phrases, the preposition in
prepositional phrases), except for cases where the head-daughter
combines with an adjunct-daughter (a modifier). In the latter case,
the mother syntactic constituent inherits the {\feat cont} of the
adjunct-daughter. (This will be discussed further in section
\ref{hpsg:pupe_adv}.) Readers familiar with \cite{Pollard2} will have
noticed that \pref{nps:9} does not allow quantifiers to be unstored
from {\feat qstore}. Apart from this, \pref{nps:9} is the same as in
\cite{Pollard2}.
\pref{nps:9} causes the {\feat qstore} of \pref{nps:8} to become the
union of the {\feat qstore}s of \pref{nps:2} (the empty set) and
\pref{nps:4} (which has become \pref{nps:7.1}). Since \qit{the
president} involves no adjuncts, the ``semantic head'' is the
``head-daughter'' (i.e.\ \qit{president}), and \pref{nps:8} inherits
the {\feat cont} of \pref{nps:2} (which is now the {\feat restind} of
\pref{nps:7.1}).
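To make the effect of \pref{nps:9} more concrete, the following sketch shows
how the {\feat qstore} and {\feat cont} of a headed phrase like \qit{the
president} would be computed. (This is only an illustrative Python-style
sketch, not the grammar formalism used in the prototype \textsc{Nlitdb}\xspace; the
dictionary encoding of signs is an assumption made purely for presentation.)
\begin{verbatim}
# Illustrative sketch only, not the prototype's implementation.
def headed_phrase(daughters, semantic_head):
    # (a) the mother's qstore is the union of the daughters' qstores
    qstore = set().union(*(d["qstore"] for d in daughters))
    # (b) the mother's cont is token-identical to that of the semantic head
    return {"qstore": qstore, "cont": semantic_head["cont"]}

# "the president": the noun is both the head-daughter and the semantic head
president = {"cont": "nom_obj:president", "qstore": set()}
the = {"cont": "quant:exists", "qstore": {"quant:exists"}}
np = headed_phrase([president, the], semantic_head=president)
# np["qstore"] == {"quant:exists"}; np["cont"] == "nom_obj:president"
\end{verbatim}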
The \qit{gate 2} of \pref{nps:1} is treated as a one-word proper name.
(In the prototype \textsc{Nlitdb}\xspace, the user has to type \qit{terminal 2} as a
single word; the same is true for \qit{J.Adams} of \pref{nps:3}. This
will be discussed in section \ref{preprocessor}.) Proper names are
mapped to signs whose {\feat cont} is a {\srt nom\_obj}\/ with
an empty-set {\feat restr}.\footnote{In \cite{Pollard2}, the signs of
proper names involve {\srt naming}\/ relations, and {\feat context}
and {\feat background} features. These are not used in this thesis.}
\qit{Gate 2}, for example, receives \pref{nps:10}.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval gate2\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & gate2 \\
restr & \{\}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:10}
\end{examps}
The predicative sign of \qit{at} of \pref{pps:5}, the predicative sign
of \qit{was} (which is similar to \pref{pps:7}, except that it
introduces a \ensuremath{\mathit{Past}}\xspace), and \pref{nps:10} cause the \qit{was at gate 2}
of \pref{nps:1} to receive \pref{nps:13}.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval was, at, gate2\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\> ]\\
cont & \osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & \osort{located\_at}{
[arg1 & @1 \\
arg2 & gate2]}]}] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{nps:13}
\end{examps}
When \qit{was at gate 2} combines with \qit{the president},
\pref{nps:1} receives \pref{nps:14}. According to the semantics
principle, the {\feat qstore} of \pref{nps:14} is the union of the
{\feat qstore}s of \pref{nps:13} and \pref{nps:8}, and the {\feat
cont} of \pref{nps:14} is the same as the {\feat cont} of
\pref{nps:13}.
\begin{examps}
\setbox\avmboxa=\hbox{\begin{avm}
\sort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & \osort{president}{
[arg1 & @1]}
]}
\end{avm}}
\avmoptions{active,center}
\item
\begin{avm}
[\avmspan{phon \; \<\fval the, president, was, at, gate2\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\> \\
comps & \<\>] \\
cont & \osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & \osort{located\_at}{
[arg1 & @1 \\
arg2 & gate2]}]}] \\
\avmspan{qstore \; \{[det & exists \\
restind & \osort{nom\_obj}{
[index & \sort{person\_ent}{
[tvar & $+$]}@1 \\
restr & \{\box\avmboxa\}
]}
]\}
}
]
\end{avm}
\label{nps:14}
\end{examps}
\pref{nps:18} is then extracted from \pref{nps:14}, as discussed in
section \ref{extraction_hpsg}. Whenever an \ensuremath{\mathit{Ntense}}\xspace operator is
encountered during the extraction of the \textsc{Top}\xspace formulae, if there is
no definite information showing that the first argument of the \ensuremath{\mathit{Ntense}}\xspace
should be $now^*$, the first argument is taken to be a variable.
\pref{nps:14}, for example, shows that the first argument of the
\ensuremath{\mathit{Ntense}}\xspace could be either a \textsc{Top}\xspace variable or $now^*$. Hence, in
\pref{nps:18} the first argument of the \ensuremath{\mathit{Ntense}}\xspace has become a variable
($t^v$). During the post-processing phase (section
\ref{post_processing} below), the \ensuremath{\mathit{Ntense}}\xspace of \pref{nps:18} would give
rise to two separate formulae: one where the first argument of the
\ensuremath{\mathit{Ntense}}\xspace has been replaced by $now^*$ (current president), and one
where the first argument of the \ensuremath{\mathit{Ntense}}\xspace has been replaced by the $e^v$
of the \ensuremath{\mathit{Past}}\xspace operator (president when at gate 2). In contrast, if the
sign shows that the first argument of the \ensuremath{\mathit{Ntense}}\xspace is definitely
$now^*$, the first argument of the \ensuremath{\mathit{Ntense}}\xspace in the extracted formula is
$now^*$, and the post-processing has no effect on this argument.
\begin{examps}
\item $\ensuremath{\mathit{Ntense}}\xspace[t^v, president(p^v)] \land \ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$
\label{nps:18}
\end{examps}
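For concreteness, the two formulae that the post-processing would generate
from \pref{nps:18} are the following (they result directly from the two
substitutions just described).
\begin{examps}
\item $\ensuremath{\mathit{Ntense}}\xspace[now^*, president(p^v)] \land \ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$
\item $\ensuremath{\mathit{Ntense}}\xspace[e^v, president(p^v)] \land \ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$
\end{examps}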
It is possible to force a (non-predicative) noun to be interpreted as
referring always to the speech time, or always to the time of the verb
tense. (This also applies to the non-predicative adjectives of section
\ref{hpsg:adjectives} below.) To force a noun to refer always to the
speech time, one sets the {\feat et\_handle} of the {\srt ntense}\/ in
the noun's sign to simply {\srt now}\/ (instead of allowing it to be
either {\srt now}\/ or a variable-representing index as in
\pref{nps:2}). This way, the {\feat et\_handle} of the {\srt ntense}\/
in \pref{nps:14} would be {\srt now}. \pref{nps:18} would contain
$now^*$ instead of $t^v$ (because in this case the sign shows that the
first argument of the \ensuremath{\mathit{Ntense}}\xspace should definitely be $now^*$), and the
post-processing mechanism would have no effect.
To force a noun to refer always to the time of the verb tense, one
simply omits the \ensuremath{\mathit{Ntense}}\xspace from the noun's sign. This would cause the
formula extracted from the sign of \pref{nps:1} to be
\pref{nps:25}.
\begin{examps}
\item $president(p^v) \land \ensuremath{\mathit{Past}}\xspace[e^v, located\_at(p^v, gate2)]$
\label{nps:25}
\end{examps}
The semantics of \textsc{Top}\xspace's conjunction (section \ref{denotation}) and of
the \ensuremath{\mathit{Past}}\xspace operator (section \ref{past_op}) require $president(p^v)$
and $located\_at(p^v, gate2)$ to be true at the same (past) event
time. Hence, \pref{nps:25} expresses the reading where the person at
gate 2 was the president of that time.
There are, however, two complications when (non-predicative) noun signs
do not introduce \ensuremath{\mathit{Ntense}}\xspace{s}. (These also apply to adjective signs, to
be discussed in section \ref{hpsg:adjectives}.) First, a past perfect
sentence like \pref{nps:26} receives only \pref{nps:27}, which
requires $president(p^v)$ to be true at the event time pointed to by
$e1^v$ (the ``reference time'', which is required to fall within
1/1/95). That is, \qit{the president} is taken to refer to somebody
who was the president on 1/1/95, and who may not have been the president
during the visit.
\begin{examps}
\item The president had visited Rome on 1/1/95. \label{nps:26}
\item $president(p^v) \land
\ensuremath{\mathit{At}}\xspace[\text{\textit{1/1/95}}, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, visiting(p^v, rome)]]]$
\label{nps:27}
\end{examps}
In contrast, if the sign of \qit{president} introduces an
\ensuremath{\mathit{Ntense}}\xspace, the formula extracted from the sign of \pref{nps:26} is
\pref{nps:28}. The post-processing generates three different formulae
from \pref{nps:28}. These correspond to readings where \qit{president}
refers to the time of the visit ($t^v$ replaced by $e2^v$), the
reference time ($t^v$ replaced by $e1^v$, equivalent to
\pref{nps:27}), or the speech time ($t^v$ replaced by $now^*$).
\begin{examps}
\item $\ensuremath{\mathit{Ntense}}\xspace[t^v, president(p^v)] \land$ \\
$\ensuremath{\mathit{At}}\xspace[\text{\textit{1/1/95}}, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, visiting(p^v, rome)]]]$
\label{nps:28}
\end{examps}
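Spelling this out, the three formulae generated from \pref{nps:28} would be
the following.
\begin{examps}
\item $\ensuremath{\mathit{Ntense}}\xspace[e2^v, president(p^v)] \land$ \\
 $\ensuremath{\mathit{At}}\xspace[\text{\textit{1/1/95}}, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, visiting(p^v, rome)]]]$
\item $\ensuremath{\mathit{Ntense}}\xspace[e1^v, president(p^v)] \land$ \\
 $\ensuremath{\mathit{At}}\xspace[\text{\textit{1/1/95}}, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, visiting(p^v, rome)]]]$
\item $\ensuremath{\mathit{Ntense}}\xspace[now^*, president(p^v)] \land$ \\
 $\ensuremath{\mathit{At}}\xspace[\text{\textit{1/1/95}}, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, visiting(p^v, rome)]]]$
\end{examps}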
The second complication is that (non-predicative) nouns that do not
introduce \ensuremath{\mathit{Ntense}}\xspace{s} are taken to refer to the time of the \emph{main
clause's} tense, even if the nouns appear in subordinate clauses
(subordinate clauses will be discussed in section
\ref{hpsg:subordinates}). For example, if \qit{president} does not
introduce an \ensuremath{\mathit{Ntense}}\xspace, \pref{nps:31.1} is mapped to \pref{nps:31.2}.
The semantics of \pref{nps:31.2} requires the visitor to have been
president during the building of terminal 2 (the visitor is not
required to have been president during the visit to terminal 3).
\begin{examps}
\item Housecorp built terminal 2 before the president visited terminal 3.
\label{nps:31.1}
\item $\begin{aligned}[t]
president(p^v) \land \ensuremath{\mathit{Before}}\xspace[&\ensuremath{\mathit{Past}}\xspace[e1^v, visiting(p^v, term3)],\\
& \ensuremath{\mathit{Past}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[building(housecorp, term2)]]]
\end{aligned}$
\label{nps:31.2}
\end{examps}
In contrast, if
\qit{president} introduces an \ensuremath{\mathit{Ntense}}\xspace, the post-processing (section
\ref{post_processing} below) generates
three readings, where \qit{president}
refers to the speech time, the time of the building, or the time of
the visit.
\medskip
The non-predicative signs of nouns like \qit{day} or \qit{summer},
that refer to members of partitionings (section \ref{top_model}) are
similar to the non-predicative signs of ``ordinary'' nouns like
\qit{president}, except that they introduce \ensuremath{\mathit{Part}}\xspace operators, and
they do not introduce \ensuremath{\mathit{Ntense}}\xspace{s}. \pref{nps:32}, for example, shows
the non-predicative sign of \qit{day}. (The {\srt day}\/ and {\srt
day\_ent\_var}\/ sorts are as in figure \vref{ind_hierarchy}.)
\begin{examps}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\sort{part}{
[partng & day \\
part\_var & @1]}
\end{avm}}
\avmoptions{active,center}
\begin{avm}
[\avmspan{phon \; \<\fval day\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<[loc|cat|head & {\fval det}] \> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & day\_ent\_var@1 \\
restr & \{\box\avmboxa\}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:32}
\end{examps}
Names of months and days (e.g.\ \qit{Monday}, \qit{January}) that can
be used both with and without determiners (e.g.\ \qit{on a Monday},
\qit{on Monday}) receive two non-predicative signs each: one that
requires a determiner, and one that does not. Finally, proper names
that refer to particular time-periods (e.g.\ the year-name \qit{1991},
the date \qit{25/10/95}) receive non-predicative signs that are
similar to those of ``normal'' proper names (e.g.\ \qit{gate 2}),
except that their {\feat index} values are subsorts of {\srt
temp\_ent}\/ rather than {\srt non\_temp\_ent}. I demonstrate in the
following sections how the signs of temporal nouns and proper names
(e.g.\ \qit{day}, \qit{25/10/95}) are used to form the signs of
temporal adverbials (e.g.\ \qit{for two days}, \qit{before 25/10/95}).
\subsection{Predicative nouns} \label{pred_nps}
I now turn to predicative nouns, like the \qit{president} of
\pref{nps:3}. \pref{nps:41} shows the predicative sign of
\qit{president}. Unlike non-predicative noun-signs, whose {\feat cont}
values are of sort {\srt nom\_obj}, the {\feat cont} values of
predicative noun-signs are of sort {\srt psoa}. The {\srt president}\/
in \pref{nps:41} is a subsort of {\srt psoa}.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval president\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $+$]} \\
spr & \<[loc|cat|head & {\fval det}]\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\> ]\\
cont & \sort{president}{
[arg1 & person\_ent@1]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:41}
\end{examps}
Unlike non-predicative nouns that do not require subjects (e.g.\
\pref{nps:2}), predicative nouns do require subjects. In
\pref{nps:41}, \qit{president} requires a non-predicative noun phrase
as its subject. The \avmbox{1} denotes the index of that noun phrase.
In the \textsc{Hpsg}\xspace version of this thesis, the predicative signs of nouns
are generated automatically from the non-predicative ones by
\pref{nps:42}.\footnote{Apart from the $remove\_ntense$, \pref{nps:42}
is essentially the same as Borsley's ``predicative NP lexical
rule'', discussed in the footnote of p.360 of \cite{Pollard2}.}
\begin{examps}
\item \lexrule{Predicative Nouns Lexical Rule:}
\avmoptions{active}
\begin{center}
\begin{avm}
[synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
subj & \<\>]\\
cont & \osort{nom\_obj}{
[index & @1 \\
restr & \{@2\}]}]]
\end{avm}
\\
$\Downarrow$
\\
\begin{avm}
[synsem|loc & [cat & [head & \osort{noun}{
[prd & $+$]} \\
subj & \<\feat np[-prd]$_{@1}$\>]\\
cont & remove\_ntense\(@2\)]]
\end{avm}
\end{center}
\label{nps:42}
\end{examps}
The $remove\_ntense($\avmbox{2}$)$ in \pref{nps:42} means that if
\avmbox{2} (the single element of the {\feat restr} set of the
non-predicative sign) is of sort {\srt ntense}, then the {\feat cont}
of the predicative sign should be the {\feat main\_psoa} of \avmbox{2}
(see also \pref{nps:2}). Otherwise, the {\feat cont} of the
predicative sign should be \avmbox{2}. In other words, if the
non-predicative sign introduces an \ensuremath{\mathit{Ntense}}\xspace, the \ensuremath{\mathit{Ntense}}\xspace is removed in
the predicative sign. This is related to the observation in section
\ref{noun_anaphora}, that noun phrases that are complements of \qit{to
be} always refer to the time of the verb tense. For example,
\pref{nps:43} means that J.Adams was the president in 1992, not at the
speech time. \pref{nps:43} is represented correctly by \pref{nps:44}
which contains no \ensuremath{\mathit{Ntense}}\xspace{s}.
\begin{examps}
\item J.Adams was the president in 1992. \label{nps:43}
\item $\ensuremath{\mathit{At}}\xspace[1992, \ensuremath{\mathit{Past}}\xspace[e^v, president(j\_adams)]]$ \label{nps:44}
\end{examps}
\textsc{Top}\xspace predicates introduced by predicative nouns (e.g.\
$president(j\_adams)$ in \pref{nps:44}) end up within the operators of
the tenses of \qit{to be} (e.g.\ the \ensuremath{\mathit{Past}}\xspace of \pref{nps:44}). This
requires the predicates to hold at the times of the tenses.
As with previous lexical rules, features not shown in \pref{nps:42}
(e.g.\ {\feat spr}, {\feat comps}) have the same values in both the
original and the generated signs. For example, \pref{nps:42}
generates \pref{nps:41} from \pref{nps:2}.
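As an informal illustration of the $remove\_ntense$ step (again a
Python-style sketch, not the actual machinery of the prototype
\textsc{Nlitdb}\xspace; the dictionary encoding of feature structures is assumed
only for presentation), the function below returns the {\feat main\_psoa} of
an {\srt ntense}\/ restriction and leaves any other restriction unchanged.
\begin{verbatim}
# Illustrative sketch only, not the prototype's implementation.
def remove_ntense(restr_element):
    # If the single restr element is an ntense, keep only its main_psoa;
    # otherwise the predicative cont is the restr element itself.
    if restr_element.get("sort") == "ntense":
        return restr_element["main_psoa"]
    return restr_element

# the restr element of the non-predicative "president"
ntense_restr = {"sort": "ntense",
                "et_handle": "temp_ent or now",
                "main_psoa": {"sort": "president", "arg1": "x1"}}
print(remove_ntense(ntense_restr))  # {'sort': 'president', 'arg1': 'x1'}
\end{verbatim}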
In this thesis, determiners also receive different signs for their
uses in predicative and non-predicative noun phrases. (Pollard and Sag
do not provide much information on determiners of predicative noun
phrases. The footnote of p.360 of \cite{Pollard2}, however, seems to
acknowledge that determiners of predicative noun phrases have to be
treated differently from determiners of non-predicative noun phrases.)
For example, apart from \pref{nps:4}, \qit{the} is also given
\pref{nps:45}. The {\feat spec} of \pref{nps:45} shows that
\pref{nps:45} can only be used with predicative nouns (cf.\
\pref{nps:4}). Unlike determiners of non-predicative noun phrases,
determiners of predicative noun-phrases have no semantic contribution
(the {\feat synsem$\mid$loc$\mid$cont} of \pref{nps:45} is simply a
copy of the {\feat cont} of the noun, and no quantifier is introduced
in {\feat qstore}; cf.\ \pref{nps:4}).
\begin{examps}
\setbox\avmboxa=\hbox{\begin{avm}
\osort{det}{
[spec|loc & [cat & [head & \osort{noun}{
[prd & $+$]} \\
spr & \<\_\> \\
subj & \<\_\> \\
comps & \<\> ] \\
cont & @2]]}
\end{avm}}
\avmoptions{active,center}
\item
\begin{avm}
[\avmspan{phon \; \<\fval the\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & @2
] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:45}
\end{examps}
In \pref{nps:3}, when \qit{the} combines with \qit{president}, the
resulting noun phrase receives \pref{nps:46}. (\textsc{Hpsg}\xspace's principles,
including the semantics principle of \pref{nps:9}, cause \pref{nps:46}
to inherit the {\feat head}, {\feat subj}, and {\feat cont} of
\pref{nps:41}.)
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval the, president\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $+$]} \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\> ]\\
cont & \sort{president}{
[arg1 & person\_ent@1]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:46}
\end{examps}
Apart from \pref{vforms:50} and \pref{pps:7}, \qit{is} also receives
\pref{nps:47}, which allows the complement of \qit{is} to be a
predicative noun phrase. (There is also a sign of \qit{is} for
adjectival complements, as in \qit{Runway 2 is closed.}; this will be
discussed in section \ref{hpsg:adjectives}. \qit{Was} receives similar
signs.) The \avmbox{4}s in \pref{nps:47} denote the {\feat cont} of
the predicative noun-phrase.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval is\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<@3\> \\
comps & \<\feat np[subj \<@3\>, prd $+$]:@4 \> ]\\
cont & \osort{pres}{
[main\_psoa & @4]}] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{nps:47}
\end{examps}
\pref{nps:47} and \pref{nps:46} cause the \qit{is the president} of
\pref{nps:3} to receive \pref{nps:48}. Finally, when \qit{is the
president} combines with \qit{J.Adams}, \pref{nps:3} receives a sign
with an empty {\feat qstore}, whose {\feat cont} represents
\pref{nps:50}.
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval is, the, president\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\> ]\\
cont & \osort{pres}{
[main\_psoa & \osort{president}{
[arg1 & person\_ent@1]}]}] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{nps:48}
\item $\ensuremath{\mathit{Pres}}\xspace[president(j\_adams)]$ \label{nps:50}
\end{examps}
There are currently two complications with predicative noun phrases. The
first is that in the non-predicative signs of proper names like
\qit{gate2}, the value of {\feat restr} is the empty set (see
\pref{nps:10}). Hence, \pref{nps:42} does not generate the
corresponding predicative signs, because the non-predicative signs do
not match the {\feat restr} description of the LHS of \pref{nps:42}
(which requires the {\feat restr} value to be a one-element set). This
causes \pref{nps:51} to be rejected, because there is no predicative
sign for \qit{J.Adams}.
\begin{examps}
\item The inspector is J.Adams. \label{nps:51}
\end{examps}
One way to solve this problem is to employ the additional rule of
\pref{nps:52}.
\begin{examps}
\item \lexrule{Additional Predicative Nouns Lexical Rule:}
\avmoptions{active}
\begin{center}
\begin{avm}
[synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
subj & \<\>]\\
cont & \osort{nom\_obj}{
[index & @1 \\
restr & \{\}]}]]
\end{avm}
\\
$\Downarrow$
\\
\begin{avm}
[synsem|loc & [cat & [head & \osort{noun}{
[prd & $+$]} \\
subj & \<\feat np[-prd]$_{@2}$\>]\\
cont & \sort{identity}{
[arg1 & @1 \\
arg2 & @2]}]]
\end{avm}
\end{center}
\label{nps:52}
\end{examps}
This would generate \pref{nps:53} from the non-predicative sign of
\qit{J.Adams} (which is similar to \pref{nps:10}). \pref{nps:47} and
\pref{nps:53} would cause \pref{nps:51} to be mapped to \pref{nps:54}.
(I assume here that the non-predicative \qit{inspector} does not
introduce an \ensuremath{\mathit{Ntense}}\xspace.)
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval J.Adams\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $+$]} \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@2}$\> \\
comps & \<\> ]\\
cont & \sort{identity}{
[arg1 & j\_adams \\
arg2 & @2]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{nps:53}
\item $inspector(insp^v) \land
\ensuremath{\mathit{Pres}}\xspace[identity(j\_adams, insp^v)]$ \label{nps:54}
\end{examps}
$identity(\tau_1,\tau_2)$ is intended to be true at event times
where its two arguments denote the same entity. This calls for a
special domain-independent semantics for $identity(\tau_1,\tau_2)$. I
have not explored this issue any further, however, and \pref{nps:52}
is not used in the prototype \textsc{Nlitdb}\xspace.
A second complication is that the non-predicative sign of \qit{Monday}
(which is similar to \pref{nps:32}) and the treatment of predicative
noun phrases above lead to an attempt to map \pref{nps:55} to
\pref{nps:56}.
\begin{examps}
\item 23/10/95 was a Monday. \label{nps:55}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Part}}\xspace[monday^g, \text{\textit{23/10/95}}]]$ \label{nps:56}
\end{examps}
\pref{nps:56} is problematic for two reasons. (a) The past tense of
\pref{nps:55} is in effect ignored, because the denotation of
$\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$ does not depend on $lt$, which is what the
\ensuremath{\mathit{Past}}\xspace operator affects (see sections \ref{denotation} and
\ref{past_op}). Hence, the implication of \pref{nps:55} that 23/10/95
is a past day is not captured. This problem could be solved by adding
the constraint $g(\beta) \subper lt$ in the semantics of
$\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$ (section \ref{denotation}). (b)
\pref{nps:56} violates the syntax of \textsc{Top}\xspace (section \ref{top_syntax}),
which does not allow the second argument of a \ensuremath{\mathit{Part}}\xspace operator to be a
constant. This problem could be solved by modifying \textsc{Top}\xspace to allow the
second argument of \ensuremath{\mathit{Part}}\xspace to be a constant.
\section{Adjectives} \label{hpsg:adjectives}
Following Pollard and Sag (\cite{Pollard1}, pp.\ 64 -- 65), adjectives
also receive different signs for their predicative and non-predicative
uses. When used as complements of \qit{to be} (e.g.\ \qit{closed} in
\pref{adj:1}) adjectives are said to be predicative. In all other
cases (e.g.\ \qit{closed} in \pref{adj:2}), adjectives are
non-predicative. (\pref{adj:1} is actually ambiguous. The \qit{closed}
may be a predicative adjective, or the passive form of \qit{to close}.
As noted in section \ref{ling_not_supported}, however, passives are
ignored in this thesis. Hence, I ignore the passive reading of
\pref{adj:1}.)
\begin{examps}
\item Runway 2 was closed. \label{adj:1}
\item BA737 landed on a closed runway. \label{adj:2}
\end{examps}
In the airport domain, the predicative sign of the adjective
\qit{closed} is \pref{adj:3}.
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval closed\>} \\
synsem|loc & [cat & [head & \osort{adj}{
[prd & $+$]} \\
spr & \<\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\> ]\\
cont & \sort{closed}{
[arg1 & \(gate\_ent $\lor$ runway\_ent\)@1]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{adj:3}
\end{examps}
As noted in section \ref{hpsg:nouns}, \qit{is} and \qit{was}
receive four signs each: one for progressive forms (see
\pref{vforms:50}), one for prepositional phrase complements (see
\pref{pps:7}), one for noun-phrase complements (see \pref{nps:47}),
and one for adjectival complements (\pref{adj:4} below). \pref{adj:4}
and \pref{adj:3} cause \pref{adj:1} to be mapped to \pref{adj:6}.
\begin{examps}
\avmoptions{active,center}
\setbox\avmboxa=\hbox{\begin{avm}
[loc & [cat & [head & \osort{adj}{
[prd $+$]} \\
subj & \<@3\> \\
comps & \<\>] \\
cont & @2]]
\end{avm}}
\item
\begin{avm}
[\avmspan{phon \; \<\fval was\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<@3\> \\
comps & \<\box\avmboxa\> ]\\
cont & \osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & @2]}] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{adj:4}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, closed(runway2)]$ \label{adj:6}
\end{examps}
\pref{adj:7} shows the non-predicative sign of \qit{closed}. The
\qit{closed} in \pref{adj:2} is a modifier (adjunct) of \qit{runway}.
The {\feat mod} in \pref{adj:7} refers to the {\feat synsem} of the
sign of the noun that the adjective modifies. The {\feat
synsem$\mid$loc$\mid$cont} of \pref{adj:7} is the same as the one of
the noun-sign, except that an \ensuremath{\mathit{Ntense}}\xspace is added to the {\feat restr} of
the noun-sign (i.e.\ to the set denoted by \avmbox{2}). The additional
\ensuremath{\mathit{Ntense}}\xspace requires the entity described by the noun (the entity
represented by \avmbox{1}) to be closed at some unspecified time. The
{\feat index} of the noun's sign is also required to represent a gate
or runway.
\begin{examps}
\item
\avmoptions{active,center}
\setbox\avmboxa=\hbox{\begin{avm}
\sort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & \osort{closed}{
[arg1 & @1]}]}
\end{avm}}
\setbox\avmboxb=\hbox{\begin{avm}
[cat & [head & noun \\
spr & \<\_\> \\
comps & \<\>] \\
cont & \osort{nom\_obj}{
[index & @1 \\
restr & @2]}]
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval closed\>} \\
synsem|loc & [cat & [head & \osort{adj}{
[\avmspan{\feat prd \; $-$} \\
mod|loc & \box\avmboxb]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & \(gate\_ent $\lor$ runway\_ent\)@1 \\
restr & @2 $\union$ \{\box\avmboxa\}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{adj:7}
\end{examps}
In the airport domain, the non-predicative sign of \qit{runway} is
\pref{adj:8}. (I assume that \qit{runway} does not introduce an
\ensuremath{\mathit{Ntense}}\xspace.)
\begin{examps}
\item
\avmoptions{active,center}
\setbox\avmboxa=\hbox{\begin{avm}
\sort{runway}{
[arg1 & @1]}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval runway\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<[loc|cat|head & {\fval det}] \> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & runway\_ent@1 \\
restr & \{\box\avmboxa\}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{adj:8}
\end{examps}
In \pref{adj:2}, \qit{closed} combines with \qit{runway} according to
\textsc{Hpsg}\xspace's head-adjunct schema (see \cite{Pollard2}).
\qit{Closed runway} receives the sign of \pref{adj:9},
where \avmbox{3} is the set of \pref{adj:9.2}. (Sets of {\srt psoa}s
are treated as conjunctions.)
\begin{examps}
\avmoptions{active}
\item
\begin{avm}
[\avmspan{phon \; \<\fval closed, runway\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<[loc|cat|head & {\fval det}] \> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & runway\_ent@1 \\
restr & @3]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{adj:9}
\item
\begin{avm}
\{
\sort{runway}{
[arg1 & @1]},
\osort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & \osort{closed}{
[arg1 @1]}]}
\}@3
\end{avm}
\label{adj:9.2}
\end{examps}
The principles of \textsc{Hpsg}\xspace cause \pref{adj:9} to inherit the {\feat head}
and {\feat spr} of \pref{adj:8}. \pref{adj:9} inherits the {\feat
cont} of \pref{adj:7} according to the semantics principle of
\pref{nps:9} (in this case, the ``semantic head'' is the adjunct
\qit{closed}). \pref{adj:9}, the sign of \qit{landed} (which is the
same as \pref{lentr:1}, except that it also introduces \ensuremath{\mathit{Past}}\xspace and \ensuremath{\mathit{Culm}}\xspace
operators), and the non-predicative sign of \qit{on} (which is similar
to \pref{pps:12}), cause \pref{adj:2} to be mapped to
\pref{adj:10}. During the post-processing (section
\ref{post_processing} below), \pref{adj:10} gives rise to two
different formulae, one where $t^v$ is replaced by $now^*$ (currently
closed runway), and one where $t^v$ is replaced by $e^v$ (closed
during the landing).
\begin{examps}
\item $runway(r^v) \land \ensuremath{\mathit{Ntense}}\xspace[t^v, closed(r^v)] \; \land$\\
$\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[landing\_on(occr^v, ba737, r^v)]]$ \label{adj:10}
\end{examps}
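The two formulae generated from \pref{adj:10} by the post-processing would
thus be the following.
\begin{examps}
\item $runway(r^v) \land \ensuremath{\mathit{Ntense}}\xspace[now^*, closed(r^v)] \; \land$\\
$\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[landing\_on(occr^v, ba737, r^v)]]$
\item $runway(r^v) \land \ensuremath{\mathit{Ntense}}\xspace[e^v, closed(r^v)] \; \land$\\
$\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[landing\_on(occr^v, ba737, r^v)]]$
\end{examps}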
An additional sign is needed for each non-predicative adjective
to allow sentences like \pref{adj:13}, where a non-predicative
adjective (\qit{closed}) combines with a predicative noun (\qit{runway}).
\begin{examps}
\item Runway 2 is a closed runway. \label{adj:13}
\end{examps}
\pref{adj:7} cannot be used in \pref{adj:13}, because here
\qit{runway} is predicative, and hence the {\feat cont} of its sign is
a {\srt psoa}\/ (the predicative sign of \qit{runway} is similar to
\pref{nps:41}). In contrast, \pref{adj:7} assumes that the {\feat
cont} of the noun is a {\srt nom\_obj}. One has to use the
additional sign of \pref{adj:14}.\footnote{It is unclear how
\pref{adj:14} could be written in the \textsc{Hpsg}\xspace version of
\cite{Pollard2}. In \cite{Pollard2}, the {\srt and}\/ sort does not
exist, and conjunctions of {\srt psoa}s can only be expressed using
sets of {\srt psoa}s, as in \pref{adj:9.2}. In \pref{adj:14},
however, the value of {\feat synsem$\mid$loc$\mid$cont} cannot be a
set of {\srt psoa}s, because {\feat cont} accepts only values whose
sort is {\srt psoa}, {\srt nom\_obj}, or {\srt quant}.} Using
\pref{adj:14}, \pref{adj:13} is mapped to \pref{adj:15},
which requires runway 2 to be closed at the speech time.
\begin{examps}
\item
\avmoptions{active,center}
\setbox\avmboxa=\hbox{\begin{avm}
\sort{ntense}{
[et\_handle & \osort{temp\_ent}{
[tvar $+$]} $\lor$ now \\
main\_psoa & ]}
\end{avm}}
\setbox\avmboxb=\hbox{\begin{avm}
[cat & [head & \osort{noun}{
[prd \; $+$]} \\
spr & \<\_\> \\
subj & \<\feat np[-prd]$_{@1}$\> \\
comps & \<\>] \\
cont & @2]
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval closed\>} \\
synsem|loc & [cat & [head & \osort{adj}{
[\avmspan{\feat prd \; $-$} \\
mod|loc & \box\avmboxb]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{and}{
[conjunct1 & @2 \\
conjunct2 & \sort{closed}{
[arg1 & @1]}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{adj:14}
\item $\ensuremath{\mathit{Pres}}\xspace[runway(runway2) \land closed(runway2)]$ \label{adj:15}
\end{examps}
As discussed in section \ref{temporal_adjectives}, temporal adjectives
(e.g.\ \qit{former}, \qit{annual}) are not considered in this thesis.
The prototype \textsc{Nlitdb}\xspace allows only non-predicative uses of the temporal
adjective \qit{current} (as in \pref{adj:20}), by mapping
\qit{current} to a sign that sets the first argument of the noun's
\ensuremath{\mathit{Ntense}}\xspace to $now^*$. (This does not allow \qit{current} to be used with
nouns that do not introduce \ensuremath{\mathit{Ntense}}\xspace{s}; see section
\ref{non_pred_nps}.)
\begin{examps}
\item The current president was at terminal 2. \label{adj:20}
\end{examps}
\section{Temporal adverbials} \label{hpsg:pupe_adv}
I now discuss temporal adverbials, starting from punctual adverbials
(section \ref{point_adverbials}).
\subsection{Punctual adverbials} \label{hpsg:punc_adv}
Apart from \pref{pps:5} and \pref{pps:12} (which are used in sentences
like \qit{BA737 is at gate 2.} and \qit{BA737 (habitually) arrives at
gate 2.}), \qit{at} also receives signs that are used when it
introduces punctual adverbials, as in \pref{pupe:1}. \pref{pupe:2}
shows one of these signs.
\begin{examps}
\item Tank 2 was empty at 5:00pm. \label{pupe:1}
\avmoptions{active, center}
\setbox\avmboxa=\hbox{\begin{avm}
\osort{prep}{
[\avmspan{prd \; $-$} \\
\avmspan{mod \; \feat s[vform {\fval fin}]:@2 $\lor$
\feat vp[vform {\fval psp}]:@2} \\
mod|loc|cat|aspect & {\fval state $\lor$ activity} \\
& {\fval $\lor$ point}]}
\end{avm}}
\item
\begin{avm}
[\avmspan{phon \; \<\fval at\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\feat np[-prd]$_{minute\_ent@1}$\> \\
aspect & point]\\
cont & \osort{at\_op}{
[time\_spec & @1 \\
main\_psoa & @2]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{pupe:2}
\end{examps}
The {\feat mod} feature refers to the {\feat synsem} of the sign of
the constituent modified by \qit{at}. {\feat s[vform {\fval
fin}]:\avmbox{2}} is an abbreviation for a finite sentence (a
finite verb form that has combined with its subject and complements).
The \avmbox{2} refers to the {\feat cont} of the sign of the finite
sentence. Similarly, {\feat vp[vform {\fval psp}]:\avmbox{2}} stands
for a past participle verb phrase (a past participle that has combined
with its complements but not its subject). The {\feat mod} of
\pref{pupe:2} means that \pref{pupe:2} can be used when \qit{at}
modifies finite sentences or past participle verb phrases, whose
aspect is state, activity, or point. Generally, in this thesis
temporal adverbials (punctual adverbials, period adverbials, duration
adverbials) and temporal subordinate clauses (to be discussed in
section \ref{hpsg:subordinates}) are allowed to modify only finite
sentences and past participle verb phrases.
\pref{pupe:2} and the sign of \qit{5:00pm} (shown in \pref{pupe:3})
cause \qit{at 5:00pm} to receive \pref{pupe:4} (\qit{5:00pm} acts as
the noun-phrase complement of \qit{at}).
\begin{examps}
\avmoptions{active,center}
\setbox\avmboxa=\hbox{\begin{avm}
\sort{part}{
[partng & 5:00pm \\
part\_var & @1]}
\end{avm}}
\item
\begin{avm}
[\avmspan{phon \; \<\fval 5:00pm\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & minute\_ent\_var@1 \\
restr & \{\box\avmboxa\}]}@3 ] \\
\avmspan{qstore \; \{[det & exists \\
restind & @3]\}}]
\end{avm}
\label{pupe:3}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\osort{prep}{
[\avmspan{prd \; $-$} \\
\avmspan{mod \; \feat s[vform {\fval fin}]:@2 $\lor$
\feat vp[vform {\fval psp}]:@2} \\
mod|loc|cat|aspect & {\fval state $\lor$ activity}\\
& {\fval $\lor$ point}]}
\end{avm}}
\setbox\avmboxb=\hbox{\begin{avm}
\{
\sort{part}{
[partng & 5:00pm \\
part\_var & @1]}
\}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval at, 5:00pm\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> \\
aspect & point]\\
cont & \osort{at\_op}{
[time\_spec & @1 \\
main\_psoa & @2]}] \\
\avmspan{qstore \; \{[det & exists \\
restind & [index & minute\_ent\_var@1 \\
restr & \box\avmboxb]]\} } ]
\end{avm}
\label{pupe:4}
\end{examps}
According to \textsc{Hpsg}\xspace's head feature principle (section
\ref{schemata_principles}), \pref{pupe:4} inherits the {\feat head} of
\pref{pupe:2} (\qit{at} is the ``head-daughter'' of \qit{at 5:00pm},
and \qit{5:00pm} is the ``complement-daughter''). Following the
semantics principle of \pref{nps:9}, the {\feat qstore} of
\pref{pupe:4} is the union of the {\feat qstore}s of \pref{pupe:2} and
\pref{pupe:3}, and the {\feat cont} of \pref{pupe:4} is the same as
the {\feat cont} of \pref{pupe:2} (in this case, the ``semantic head''
is the head-daughter, i.e.\ \qit{at}).
The propagation of {\feat aspect} is controlled by \pref{pupe:5}, a
new principle of this thesis. (As with \pref{nps:9}, in \pref{pupe:5} I use the
terminology of \cite{Pollard2}.)
\begin{examps}
\item \principle{Aspect Principle:}
\index{aspect@{\feat aspect} (\textsc{Hpsg}\xspace feature)} \\
In a headed-phrase, the {\feat synsem$\mid$loc$\mid$cat$\mid$aspect} value is
token-identical with that of the semantic head. (In a headed phrase,
the \emph{semantic head} is the {\feat adjunct-daughter} if any, and
the {\feat head-daughter} otherwise.)
\label{pupe:5}
\end{examps}
\pref{pupe:5} means that each syntactic constituent inherits the
{\feat aspect} of its head-daughter (the noun in noun phrases, the
verb in verb phrases, the preposition in prepositional phrases),
except for cases where the head-daughter combines with an
adjunct-daughter (a modifier). In the latter case, the mother
syntactic constituent inherits the {\feat aspect} of the
adjunct-daughter. \pref{pupe:5} causes \pref{pupe:4} to inherit the
{\feat aspect} value of the semantic head \qit{at}.
The \qit{tank 2 was empty} of \pref{pupe:1} receives \pref{pupe:6}.
\begin{examps}
\avmoptions{active}
\item
\begin{avm}
[\avmspan{phon \; \<\fval tank2, was, empty\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \<\> \\
comps & \<\>] \\
cont & \osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & \osort{empty}{
[arg1 & tank2]}]}] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{pupe:6}
\setbox\avmboxa=\hbox{\begin{avm}
\osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & \osort{empty}{
[arg1 & tank2]}]}
\end{avm}}
\setbox\avmboxb=\hbox{\begin{avm}
\{
\sort{part}{
[partng & 5:00pm \\
part\_var & @1]}
\}
\end{avm}}
\end{examps}
When \qit{tank 2 was empty} combines with \qit{at 5:00pm},
\pref{pupe:1} receives \pref{pupe:7}. In this case, \qit{tank 2 was
empty} is the head-daughter, and \qit{at 5:00pm} is an
adjunct-daughter (a modifier). Hence, according to \pref{pupe:5},
\pref{pupe:7} inherits the {\feat aspect} of \pref{pupe:4} (i.e.\
{\srt point}\/; in contrast, the {\feat aspect} of \pref{pupe:6} was
{\srt lex\_state}.) This is in accordance with the arrangements of
section \ref{point_adverbials}, whereby punctual adverbials trigger an
aspectual shift to point.
\begin{examps}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\osort{past}{
[et\_handle & \osort{temp\_ent}{
[tvar & $+$]} \\
main\_psoa & \osort{empty}{
[arg1 & tank2]}]}
\end{avm}}
\setbox\avmboxb=\hbox{\begin{avm}
\{
\sort{part}{
[partng & 5:00pm \\
part\_var & @1]}
\}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval tank2, was, empty, at, 5:00pm\>} \\
synsem|loc & [cat & [head & \osort{verb}{
[vform & fin \\
aux & $+$]} \\
aspect & point \\
spr & \<\> \\
subj & \<\> \\
comps & \<\>] \\
cont & \osort{at\_op}{
[time\_spec & @1 \\
main\_psoa & \box\avmboxa]}] \\
\avmspan{qstore \; \{[det & exists \\
restind & [index & minute\_ent\_var@1 \\
restr & \box\avmboxb]]\}}]
\end{avm}
\label{pupe:7}
\end{examps}
According to the semantics principle, \pref{pupe:7} also inherits the
{\feat cont} of \pref{pupe:4} (the sign of the modifier), and the
{\feat qstore} of \pref{pupe:7} is the union of the {\feat qstore}s of
\pref{pupe:4} and \pref{pupe:6}. Finally, according to the head
feature principle (section \ref{schemata_principles}), \pref{pupe:7}
inherits the {\feat head} of \pref{pupe:6} (the sign of the
head-daughter). The {\feat qstore} and {\feat cont} of \pref{pupe:7}
represent \pref{pupe:7.1}.
\begin{examps}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{pupe:7.1}
\end{examps}
The reader may wonder why temporal adverbials (e.g.\ \qit{at 5:00pm}
in \pref{pupe:1}) are taken to modify whole finite sentences
(\qit{tank 2 was empty}), rather than finite verb phrases (\qit{was
empty}). The latter approach leads to problems in questions like
\qit{Was tank 2 empty at 5:00pm?}, where \qit{was} combines in one
step with both its subject \qit{tank 2} and its complement
\qit{empty}, following the head-subject-complement schema of
\cite{Pollard2}. In this case, there is no verb phrase constituent
(verb that has combined with its complements but not its subject) to
be modified by \qit{at 5:00pm}.
Apart from finite sentences, temporal adverbials are also allowed to
modify past participle verb phrases (see the {\feat mod} of
\pref{pupe:2}). This is needed in past perfect sentences like
\pref{pupe:9}.
\begin{examps}
\item BA737 had entered sector 2 at 5:00pm. \label{pupe:9}
\end{examps}
As discussed in section \ref{past_perfect}, \pref{pupe:9} has two
readings: one where the entrance occurs at 5:00pm, and one where
5:00pm is a ``reference time'', a time where the entrance has already
occurred. The two readings are expressed by \pref{pupe:10} and
\pref{pupe:11} respectively (see also section \ref{perf_op}).
\pref{pupe:9} is taken to be syntactically ambiguous with two possible
parses, sketched in \pref{pupe:12} and \pref{pupe:13}. These give rise
to \pref{pupe:10} and \pref{pupe:11} respectively.
\begin{examps}
\item BA737 had [[entered sector 2] at 5:00pm]. \label{pupe:12}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[fv^v, enter(ba737, sector2)]]]$
\label{pupe:10}
\item {[}BA737 had [entered sector 2]] at 5:00pm. \label{pupe:13}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, enter(ba737, sector2)]]]$
\label{pupe:11}
\end{examps}
One complication of this approach is that it generates two equivalent
formulae for the present perfect \pref{pupe:17}, shown in
\pref{pupe:18} and \pref{pupe:19}. (\qit{Has} does not
introduce a \ensuremath{\mathit{Perf}}\xspace; see \pref{vforms:36}.) These correspond to the
parses of \pref{pupe:17a} and \pref{pupe:17b} respectively.
\begin{examps}
\item BA737 has entered sector 2 at 5:00pm. \label{pupe:17}
\item {[}BA737 has [entered sector 2]] at 5:00pm. \label{pupe:17a}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, enter(ba737, sector2)]]$ \label{pupe:18}
\item BA737 has [[entered sector 2] at 5:00pm]. \label{pupe:17b}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{At}}\xspace[fv^v, enter(ba737, sector2)]]$ \label{pupe:19}
\end{examps}
In the prototype \textsc{Nlitdb}\xspace, the sign of \qit{has} is slightly more
complex than \pref{vforms:36}. It requires the {\feat cont}
of the verb-phrase complement of \qit{has} to be of sort {\srt
predicate}. This blocks \pref{pupe:17b} and \pref{pupe:19},
because in \pref{pupe:17b} the \qit{at 5:00pm} causes the {\feat cont}
of \qit{entered sector 2 at 5:00pm} to become of sort {\srt
at\_op}\/ (it inserts an \ensuremath{\mathit{At}}\xspace operator), which is not a subsort
of {\srt predicate}.
\pref{pupe:2} corresponds to the interjacent meaning of punctual
adverbials, which according to table \vref{punctual_adverbials_table}
is possible only with states and activities. \pref{pupe:2} also covers
cases where punctual adverbials combine with points. There are also
other \qit{at} signs, that are similar to \pref{pupe:2} but that
introduce additional \ensuremath{\mathit{Begin}}\xspace or \ensuremath{\mathit{End}}\xspace operators. These correspond to
the inchoative (with activities and culminating activities) and
terminal (with culminating activities) meanings of punctual
adverbials.
\subsection{Period adverbials} \label{hpsg:per_advs}
I now turn to period adverbials (section \ref{period_adverbials}).
\pref{pupe:29} shows one of the signs of \qit{on} that are used when
\qit{on} introduces period adverbials.
\begin{examps}
\avmoptions{active, center}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\osort{prep}{
[prd & $-$ \\
mod & \feat s[vform {\fval fin}]:@2 $\lor$
\feat vp[vform {\fval psp}]:@2 \\
\avmspan{mod|loc|cat|aspect \; {\fval culmact}}]}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval on\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\feat np[-prd]$_{day\_ent@1}$\> \\
aspect & point]\\
cont & \osort{at\_op}{
[time\_spec & @1 \\
main\_psoa & \osort{end}{
[main\_psoa & @2]}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{pupe:29}
\end{examps}
\pref{pupe:29}, which can be used only when the \qit{on~\dots}
adverbial modifies a culminating activity, corresponds to the reading
where the situation of the culminating activity simply reaches its
completion within the adverbial's period (table
\vref{period_adverbials_table}). (\pref{pupe:29} causes the aspectual
class of the culminating activity to become point. This agrees with
table \ref{period_adverbials_table}.) For example, \pref{pupe:29}
causes \pref{pupe:31} to be mapped to \pref{pupe:32}. (I assume here
that \qit{to repair} is classified as culminating activity verb.)
Intuitively, \pref{pupe:32} requires a past period $e^v$ to exist,
such that $e^v$ covers a whole repair of engine 2 by J.Adams (from
start to completion), and the end-point of $e^v$ falls within some
Monday. That is, the repair must have been completed on Monday, but it
may have started before Monday.
\begin{examps}
\item J.Adams repaired engine 2 on Monday. \label{pupe:31}
\item $\ensuremath{\mathit{Part}}\xspace[monday^g, m^v] \; \land$\\
$\ensuremath{\mathit{At}}\xspace[m^v, \ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, eng2)]]]]$
\label{pupe:32}
\end{examps}
There is also an \qit{on} sign that is similar to \pref{pupe:29}, but
that does not introduce an \ensuremath{\mathit{End}}\xspace operator, preserves the {\feat
aspect} of the modified expression, and can be used when
\qit{on~\dots} adverbials modify expressions from all four aspectual
classes. This sign causes \pref{pupe:31} to be mapped to
\pref{pupe:33} (the prototype \textsc{Nlitdb}\xspace would generate both
\pref{pupe:32} and \pref{pupe:33}). \pref{pupe:33} corresponds to the
reading where the repair must have both started and been completed
within a (the same) Monday. The \qit{on} sign that does not introduce
an \ensuremath{\mathit{End}}\xspace also gives rise to appropriate formulae when
\qit{on~\dots} adverbials modify state, activity, or point
expressions.
\begin{examps}
\item $\ensuremath{\mathit{Part}}\xspace[monday^g, m^v] \; \land$\\
$\ensuremath{\mathit{At}}\xspace[m^v, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, eng2)]]]$
\label{pupe:33}
\end{examps}
Both \pref{pupe:29} and the \qit{on} sign that does not introduce an
\ensuremath{\mathit{End}}\xspace require the noun-phrase complement of \qit{on} to introduce an
index of sort {\srt day\_ent}. The signs of \qit{1/1/91} and
\qit{Monday} introduce indices of sorts {\srt 1/1/91}\/ and {\srt
day\_ent\_var}\/ respectively, which are subsorts of {\srt
day\_ent}\/ (see figure \vref{ind_hierarchy}). Hence, \qit{1/1/91}
and \qit{Monday} are legitimate complements of \qit{on} in period
adverbials. In contrast, \qit{5:00pm} introduces an index of sort
{\srt minute\_ent\_var}\/ (see \pref{pupe:3}), which is not a subsort
of {\srt day\_ent}. Hence, \pref{pupe:40} is correctly rejected.
\begin{examps}
\item \bad Tank 2 was empty on 5:00pm. \label{pupe:40}
\end{examps}
The signs of other prepositions that introduce period adverbials
(e.g.\ \qit{\underline{in} 1991}, \qit{\underline{before} 29/10/95},
\qit{\underline{after} 5:00pm}) and the signs of \qit{yesterday} and
\qit{today} are similar to the signs of \qit{on}, except that
\qit{before} and \qit{after} introduce \ensuremath{\mathit{Before}}\xspace and \ensuremath{\mathit{After}}\xspace operators
instead of \ensuremath{\mathit{At}}\xspace{s}. Also, \qit{before} is given only one sign, which does
not introduce an \ensuremath{\mathit{End}}\xspace (there is no \qit{before} sign for
culminating activities analogous to \pref{pupe:29}, i.e.\ one that introduces an
\ensuremath{\mathit{End}}\xspace). This is related to comments in section \ref{period_adverbials}
that in the case of \qit{before~\dots} adverbials, requiring the
situation of a culminating activity to simply reach its completion
before some time (reading with \ensuremath{\mathit{End}}\xspace) is equivalent to requiring the
situation to both start and reach its completion before that time
(reading without \ensuremath{\mathit{End}}\xspace).
\subsection{Duration adverbials} \label{duration_adverbials}
The treatment of \qit{for~\dots} duration adverbials is rather ad hoc
from a syntax point of view. In an adverbial like \qit{for two days},
both \qit{two} and \qit{days} are taken to be complements of
\qit{for}, instead of treating \qit{two} as the determiner of
\qit{days}, and \qit{two days} as a noun-phrase complement of
\qit{for}.
Number-words like \qit{one}, \qit{two}, \qit{three}, etc.\ are mapped
to signs of the form of \pref{dadv:4}. Their {\feat restr}s are empty,
and their indices represent the corresponding numbers. (The {\srt 2}\/
of \pref{dadv:4} is a subsort of {\srt sem\_num}\/; see section
\ref{more_ind}.)
\begin{examps}
\item
\avmoptions{active}
\begin{avm}
[\avmspan{phon \; \<\fval two\>} \\
synsem|loc & [cat & [head & \sort{det}{
[spec & none]}\\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & 2 \\
restr & \{\}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{dadv:4}
\end{examps}
Although words like \qit{one}, \qit{two}, \qit{three}, etc.\ are
classified as determiners (the {\feat head} of \pref{dadv:4} is
of sort {\srt det}), the {\srt none}\/ value of their {\feat spec}
does not allow them to be used as determiners of
any noun. (Determiners combining with nouns are the specifiers
of the nouns. The {\srt none}\/ means that the word of the sign cannot
be the specifier of any constituent, and hence cannot be used as
the determiner of any noun.)
\pref{dadv:1} shows the sign of \qit{for} that is used in duration
adverbials (for typesetting reasons, I show the feature structures
that correspond to \avmbox{5} and \avmbox{6} separately, in
\pref{dadv:2} and \pref{dadv:3} respectively).
\begin{examps}
\avmoptions{active, center}
\item
\setbox\avmboxa=\hbox{\begin{avm}
\osort{prep}{
[\avmspan{prd \; $-$} \\
\avmspan{mod \; \feat s[vform {\fval fin}]:@4 $\lor$
\feat vp[vform {\fval psp}]:@4} \\
mod|loc|cat|aspect & \({\fval lex\_state} \\
& $\lor$ {\fval progressive} \\
& $\lor$ {\fval activity}\)@1]}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval for\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<@5, @6\> \\
aspect & @1]\\
cont & \osort{for\_op}{
[dur\_unit & @2 \\
duration & @3 \\
main\_psoa & @4]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{dadv:1}
\item
\begin{avm}
[loc & [cat|head & det \\
cont|index & sem\_num@3]]@5
\end{avm}
\label{dadv:2}
\item
\begin{avm}
[loc & [cat & [head & noun \\
spr & \<\_\> \\
subj & \<\> \\
comps & \<\>] \\
cont|restr & \{\sort{part}{
[partng & compl\_partng@2]}\}]]@6
\end{avm}
\label{dadv:3}
\end{examps}
The {\feat comps} of \pref{dadv:1} means that \qit{for} requires two
complements: a determiner that introduces a number-denoting ({\srt
sem\_num}\/) index (like the \qit{two} of \pref{dadv:4}), and a noun
that introduces a \ensuremath{\mathit{Part}}\xspace operator whose first argument is a complete
partitioning name (like the \qit{day} of \pref{nps:32}). In
\pref{dadv:6}, \qit{for two days} receives \pref{dadv:7}. (As already
mentioned, no number-agreement checks are made, and plural nouns are
treated semantically as singular ones. Apart from {\feat phon},
the sign of \qit{days} is the same as \pref{nps:32}.)
\begin{examps}
\item Tank 2 was empty for two days. \label{dadv:6}
\item
\avmoptions{active, center}
\setbox\avmboxa=\hbox{\begin{avm}
\osort{prep}{
[\avmspan{prd \; $-$} \\
\avmspan{mod \; \feat s[vform {\fval fin}]:@4 $\lor$
\feat vp[vform {\fval psp}]:@4} \\
mod|loc|cat|aspect & \({\fval lex\_state}\\
& $\lor$ {\fval progressive} \\
& $\lor$ {\fval activity}\)@1]}
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval for, two, days\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> \\
aspect & @1]\\
cont & \osort{for\_op}{
[dur\_unit & day \\
duration & 2 \\
main\_psoa & @4]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{dadv:7}
\end{examps}
When \qit{tank 2 was empty} combines with its temporal-adverbial
modifier \qit{for two days}, the \avmbox{4} of \pref{dadv:7} becomes a
feature structure that represents the \textsc{Top}\xspace formula for \qit{tank 2
was empty}, i.e.\ \pref{dadv:8}. According to the semantics
principle of \pref{nps:9}, the sign of \pref{dadv:6} inherits the
{\feat cont} of \pref{dadv:7} (where \avmbox{4} now represents
\pref{dadv:8}). Hence, \pref{dadv:6} is mapped to \pref{dadv:9}.
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]$ \label{dadv:8}
\item $\ensuremath{\mathit{For}}\xspace[day^c, 2, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{dadv:9}
\end{examps}
Following table \vref{for_adverbials_table}, \pref{dadv:1} does not
allow \qit{for~\dots} adverbials to modify point expressions (the
{\feat mod$\mid$loc$\mid$cat$\mid$aspect} of \pref{dadv:1} cannot be
{\srt point}\/). It also does not allow \qit{for~\dots} adverbials to
modify consequent states. If \qit{for~\dots} adverbials were allowed
to modify consequent states, \pref{dadv:10} would receive
\pref{dadv:11} and \pref{dadv:12}.
\begin{examps}
\item BA737 had circled for two hours. \label{dadv:10}
\item $\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{For}}\xspace[hour^c, 2, circling(ba737)]]]$
\label{dadv:11}
\item $\ensuremath{\mathit{For}}\xspace[hour^c, 2, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, circling(ba737)]]]$
\label{dadv:12}
\end{examps}
\pref{dadv:11} corresponds to the parse of \pref{dadv:10} where
\qit{for two hours} modifies the past participle \qit{circled} before
\qit{circled} combines with \qit{had}. In that case, the
\qit{for~\dots} adverbial modifies an activity, because past
participles retain the aspectual class of the base form (\qit{to
circle} is an activity verb in the airport domain). \pref{dadv:12}
corresponds to the parse where \qit{for two hours} modifies the whole
sentence \qit{BA737 had circled}. In that case, the \qit{for~\dots}
adverbial modifies a consequent state, because the \qit{had} has
caused the aspectual class of \qit{BA737 had circled} to become
consequent state. By not allowing \qit{for~\dots} adverbials to modify
consequent states, \pref{dadv:12} is blocked. This is needed, because
in \pref{dadv:12} two hours is the duration of a period (pointed to by
$e1^v$) that follows a period (pointed to by $e2^v$) where BA737 was
circling. This reading is never possible when \qit{for~\dots}
adverbials are used in past perfect sentences. The \qit{for~\dots}
adverbial of \pref{dadv:10} can only specify the duration of the
circling (a reading captured by \pref{dadv:11}). (A similar
observation is made on p.~587 of \cite{Kamp1993}.)
The present treatment of \qit{for~\dots} duration adverbials causes
\pref{dadv:13} to receive \pref{dadv:14}. \pref{dadv:14} does not
correctly capture the meaning of \pref{dadv:13}, because it requires
the taxiing to have been completed, i.e.\ BA737 to have reached gate
2. In contrast, as discussed in section \ref{for_adverbials}, the
\qit{for~\dots} adverbial of \pref{dadv:13} cancels the normal
implication of \qit{BA737 taxied to gate 2} that the taxiing was
completed. The post-processing (section \ref{post_processing} below)
removes the \ensuremath{\mathit{Culm}}\xspace of \pref{dadv:14}, generating a formula that does
not require the taxiing to have been completed.
\begin{examps}
\item BA737 taxied to gate 2 for five minutes. \label{dadv:13}
\item $\ensuremath{\mathit{For}}\xspace[minute^c, 5, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[taxiing\_to(ba737, gate2)]]]$
\label{dadv:14}
\end{examps}
Duration adverbials introduced by \qit{in} (e.g.\ \pref{dadv:15}) are
treated by mapping \qit{in} to a sign that is the same as
\pref{dadv:1}, except that it allows the adverbial to modify
only culminating activities. (The framework of this
thesis does not allow \qit{in~\dots} duration adverbials to
modify states, activities, or points; see section \ref{in_adverbials}.)
\begin{examps}
\item BA737 taxied to gate 2 in five minutes. \label{dadv:15}
\end{examps}
This causes \pref{dadv:15} to be mapped to \pref{dadv:14}, which
correctly requires the taxiing to have been completed, and the
duration of the taxiing (from start to completion) to be
five minutes. (In this case, the post-processing does not remove the \ensuremath{\mathit{Culm}}\xspace.)
\section{Temporal complements of habituals} \label{habituals}
Let us now examine more closely the status of temporal prepositional phrases,
like \qit{at 5:00pm} and \qit{on Monday} in \pref{hab:1} -- \pref{hab:4}.
\begin{examps}
\item BA737 departed at 5:00pm. \label{hab:1}
\item BA737 departs at 5:00pm. \label{hab:2}
\item J.Adams inspected gate 2 on Monday. \label{hab:3}
\item J.Adams inspects gate 2 on Monday. \label{hab:4}
\end{examps}
\pref{hab:1} has both a habitual and a non-habitual reading. Under the
non-habitual reading, it refers to an actual departure that took place
at 5:00pm. Under the habitual reading, it means that BA737 had the
habit of departing at 5:00pm (this reading is easier to accept if an
adverbial like \qit{in 1992} is added). In \pref{hab:2}, only the
habitual reading is possible, i.e.\ BA737 currently has the habit of
departing at 5:00pm. (A scheduled-to-happen reading is also possible,
but as discussed in section \ref{simple_present} this is ignored in
this thesis.) Similar comments apply to \pref{hab:3} and \pref{hab:4}.
To account for the habitual and non-habitual readings of \qit{to
depart} in \pref{hab:1} and \pref{hab:2}, the base form of \qit{to
depart} is given the signs of \pref{hab:7} and \pref{hab:8}. These
correspond to what chapter \ref{linguistic_data} called informally the
habitual and non-habitual homonyms of \qit{to depart}. \pref{hab:7}
classifies the habitual homonym as (lexical) state, while \pref{hab:8}
classifies the non-habitual homonym as point (this agrees with table
\vref{airport_verbs}). According to \pref{hab:7}, the habitual homonym
requires an \qit{at~\dots} prepositional phrase that specifies the
habitual departure time (this is discussed further below). In
contrast, the non-habitual homonym of \pref{hab:8} requires no
complement.
\avmoptions{active}
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval depart\>} \\
synsem & [loc & [cat & [head & \osort{verb}{
[vform & bse \\
aux & $-$ ]} \\
aspect & lex\_state \\
spr & \<\> \\
subj & \< \feat np[-prd]$_{flight\_ent@1}$ \> \\
comps & \<
\feat pp[-prd, pform {\fval at}]$_{minute\_gappy@2}$
\> ]\\
cont & \sort{hab\_departs\_at}{
[arg1 & @1 \\
arg2 & @2]} ]] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{hab:7}
\item
\begin{avm}
[\avmspan{phon \; \<\fval depart\>} \\
synsem & [loc & [cat & [head & \osort{verb}{
[vform & bse \\
aux & $-$ ]} \\
aspect & point \\
spr & \<\> \\
subj & \< \feat np[-prd]$_{flight\_ent@1}$ \> \\
comps & \<\> ]\\
cont & \sort{actl\_depart}{
[arg1 & @1]} ]] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{hab:8}
\end{examps}
In the airport domain, there are actually two habitual signs for
\qit{to depart}, one where \qit{to depart} requires an \qit{at~\dots}
prepositional-phrase complement (as in \pref{hab:7}), and one where
\qit{to depart} requires a \qit{from~\dots} prepositional-phrase
complement (this is needed in \pref{hab:8.1}). There are also two
non-habitual signs of \qit{to depart}, one where \qit{to depart}
requires no complement (as in \pref{hab:8}), and one where \qit{to
depart} requires a \qit{from~\dots} prepositional-phrase complement
(needed in \pref{hab:8.2}). For simplicity, here I ignore these extra
signs.
\begin{examps}
\item BA737 (habitually) departs from gate 2. \label{hab:8.1}
\item BA737 (actually) departed from gate 2. \label{hab:8.2}
\end{examps}
\pref{hab:7}, \pref{hab:8}, and the simple-past lexical rules of
section \ref{single_word_forms} give rise to two signs (a habitual and
a non-habitual one) for the simple past \qit{departed}. These are the
same as \pref{hab:7} and \pref{hab:8}, except that they contain
additional \ensuremath{\mathit{Past}}\xspace operators. In contrast, the simple-present lexical
rule of section \ref{single_word_forms} generates only one sign for the
simple present \qit{departs}. This is the same as \pref{hab:7}, except
that it contains an additional \ensuremath{\mathit{Pres}}\xspace operator. No simple-present sign is
generated from \pref{hab:8}, because the simple-present lexical rule
requires the aspect of the base sign to be state.
The non-habitual simple-past sign of \qit{departed}, the \qit{at} sign
of \pref{pupe:2}, and the \qit{5:00pm} sign of \pref{pupe:3}, cause
\pref{hab:1} to be mapped to \pref{hab:13}, which expresses the
non-habitual reading of \pref{hab:1}. In this case, \qit{at
5:00pm} is treated as a temporal-adverbial modifier of
\qit{BA737 departed}, as discussed in section \ref{hpsg:punc_adv}.
\begin{examps}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}, fv^v] \land
\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, actl\_depart(ba737)]]$ \label{hab:13}
\end{examps}
In the habitual reading of \pref{hab:1}, where the habitual sign of
\qit{departed} (derived from \pref{hab:7}) is used, \qit{at 5:00pm} is
treated as a prepositional-phrase complement of \qit{departed}. In
this case, the sign of \qit{at} that introduces non-predicative
prepositional-phrase complements (i.e.\ \pref{pps:12}) is used. The
intention is to map \pref{hab:1} to \pref{hab:14}, where
\textit{5:00pm} is a constant acting as a ``generic representative''
of 5:00pm minutes (section \ref{hab_problems}).
\begin{examps}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, hab\_departs\_at(ba737, \text{\textit{5:00pm}})]$
\label{hab:14}
\end{examps}
The problem is that in this case the \qit{5:00pm} sign of
\pref{pupe:3} cannot be used, because it inserts a \ensuremath{\mathit{Part}}\xspace operator in
{\feat qstore}. The semantics principle would cause this \ensuremath{\mathit{Part}}\xspace
operator to be inherited by the sign of the overall \pref{hab:1}, and
thus the \ensuremath{\mathit{Part}}\xspace operator would appear in the resulting formula. In
contrast, \pref{hab:14} (the intended formula for \pref{hab:1})
contains no \ensuremath{\mathit{Part}}\xspace operators. To solve this problem, one has to allow
an extra sign for \qit{5:00pm}, shown in \pref{hab:15}, which does not
introduce a \ensuremath{\mathit{Part}}\xspace. Similarly, an extra \qit{Monday} sign is needed
in \pref{hab:3}. (The fact that these extra signs have to be
introduced is admittedly inelegant. This is caused by the fact that
\qit{at 5:00pm} is treated differently in \pref{hab:13} and
\pref{hab:14}; see also the discussion in section \ref{hab_problems}.)
The \qit{5:00pm} sign of \pref{hab:15}, the \qit{at} sign that is used
when \qit{at} introduces non-predicative prepositional-phrase
complements (i.e.\ \pref{pps:12}), and the habitual \qit{departed}
sign (derived from \pref{hab:7}) cause \pref{hab:1} to be mapped to
\pref{hab:14}.
\avmoptions{active}
\begin{examps}
\item
\begin{avm}
[\avmspan{phon \; \<\fval 5:00pm\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & 5:00pm \\
restr & \{\}]} ] \\
\avmspan{qstore \; \{\}}]
\end{avm}
\label{hab:15}
\end{examps}
The habitual \qit{departed} sign (which derives from \pref{hab:7}) requires
the index of the prepositional-phrase complement to be of sort {\srt
minute\_gappy}. As intended, this does not allow the \qit{5:00pm} sign
of \pref{pupe:3} (the one that introduces a \ensuremath{\mathit{Part}}\xspace) to be used in the
prepositional-phrase complement of the habitual \qit{departed},
because if \pref{pupe:3} is used, the index of the prepositional
phrase will be of sort {\srt minute\_ent\_var}, which is not a subsort
of {\srt minute\_gappy}\/ (see figure \vref{ind_hierarchy}). In
contrast, \pref{hab:15} introduces an index of sort {\srt 5:00pm},
which is a subsort of {\srt minute\_gappy}, and hence that sign can be
used in the complement of the habitual \qit{departed}.
The treatment of the simple present \pref{hab:2} is similar. In this
case, the habitual simple present sign (that is derived from
\pref{hab:7}) is used, and \pref{hab:2} is mapped to \pref{hab:16}.
No \textsc{Top}\xspace formula is generated for the (impossible)
non-habitual reading of \pref{hab:2}, because there is no non-habitual
sign for the simple present \qit{departs} (see comments above about
the simple present lexical rule).
\begin{examps}
\item $\ensuremath{\mathit{Pres}}\xspace[hab\_departs\_at(ba737, \text{\textit{5:00pm}})]$
\label{hab:16}
\end{examps}
\section{Fronted temporal modifiers} \label{fronted}
As discussed in section \ref{hpsg:pupe_adv}, in this thesis
temporal-adverbial modifiers (e.g.\ \qit{at 5:00pm} in \pref{front:1}
-- \pref{front:2}, \qit{on Monday} in \pref{front:3} --
\pref{front:4}) can modify either whole finite sentences or past
participle verb phrases.
\begin{examps}
\item BA737 entered sector 2 at 5:00pm. \label{front:1}
\item At 5:00pm BA737 entered sector 2. \label{front:2}
\item Tank 2 was empty on Monday. \label{front:3}
\item On Monday tank 2 was empty. \label{front:4}
\end{examps}
In \textsc{Hpsg}\xspace, the order in which a modifier and the constituent to which
the modifier attaches can appear in a sentence is controlled by the
``constituent-ordering principle'' (\textsc{Cop}). This is a general
(and not fully developed) principle that controls the order in which
the various constituents can appear in a sentence (see chapter 7 of
\cite{Pollard1}). This thesis uses an over-simplified version of
\textsc{Cop} that places no restriction on the order between temporal
modifiers and modified constituents when the modified constituents are
sentences. This allows \qit{at 5:00pm} to either follow \qit{BA737
entered sector 2} (as in \pref{front:1}), or to precede it (as in
\pref{front:2}). Similarly, \qit{on Monday} may either follow
\qit{tank 2 was empty} (as in \pref{front:3}), or precede it (as in
\pref{front:4}).\footnote{An alternative approach is to allow temporal
modifiers to participate in unbounded dependency constructions; see
pp.~176 -- 181 of \cite{Pollard2}.} When temporal modifiers attach
to past-participle verb phrases, however, I require the modifiers to
follow the verb phrases, as in \pref{front:6}. This rules out
unacceptable sentences like \pref{front:5}, where \qit{at 5:00pm}
precedes the \qit{entered sector 2}.\footnote{Constituent-ordering
restrictions are enforced in the \textsc{Ale}\xspace grammar of the prototype
\textsc{Nlitdb}\xspace in a rather ad hoc manner, which involves partitioning the
{\srt synsem}\/ sort into {\srt pre\_mod\_synsem}\/ and {\srt
post\_mod\_synsem}, and using feature structures from the two
subsorts as values of {\feat mod} to signal that the
modifier can only precede or follow the modified constituent. This
idea was borrowed from a grammar written by Suresh Manandhar.}
\begin{examps}
\item BA737 had [[entered sector 2] at 5:00pm]. \label{front:6}
\item \bad BA737 had [at 5:00pm [entered sector 2]]. \label{front:5}
\end{examps}
This approach causes \pref{front:7} to receive only \pref{front:8},
because in \pref{front:7} \qit{at 5:00pm} can modify only the whole
\qit{BA737 had entered sector 2} (it cannot modify just \qit{entered
sector 2} because of the intervening \qit{BA737 had}). In
\pref{front:8}, 5:00pm is a reference time, a time where the entrance
had already occurred. In contrast, \pref{front:8.5} receives both
\pref{front:8} and \pref{front:11}, because in that case \qit{at
5:00pm} can modify either the whole \qit{BA737 had entered sector 2}
or only \qit{entered sector 2}. In \pref{front:11}, 5:00pm is the time
where the entrance occurred.
\begin{examps}
\item At 5:00pm [BA737 had entered sector 2]. \label{front:7}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}, fv^v] \land
\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, enter(ba737, sector2)]]]$
\label{front:8}
\item BA737 had entered sector 2 at 5:00pm. \label{front:8.5}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}, fv^v] \land
\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[fv^v, enter(ba737, sector2)]]]$
\label{front:11}
\end{examps}
The fact that \pref{front:7} does not receive \pref{front:11} does not
seem to be a disadvantage, because in \pref{front:7} the reading where
\qit{at 5:00pm} specifies the time of the entrance seems unlikely (or
at least much less likely than in \pref{front:8.5}).
\section{Temporal subordinate clauses}
\label{hpsg:subordinates}
I now discuss temporal subordinate clauses (section
\ref{subordinate_clauses}), focusing on \qit{while~\dots} clauses. The
treatment of \qit{before~\dots} and \qit{after~\dots} clauses is very
similar.
As with period adverbials, \qit{while~\dots} clauses are treated as
temporal modifiers of finite sentences or past participle verb
phrases. As with prepositions introducing period adverbials,
\qit{while} is given two signs. The first one, shown in \pref{subs:1},
introduces an \ensuremath{\mathit{End}}\xspace operator, causes an aspectual shift to point, and
can be used only with culminating activity main clauses (\pref{subs:1}
is similar to \pref{pupe:29}). The second one is the same as
\pref{subs:1}, except that it does not introduce an \ensuremath{\mathit{End}}\xspace, it
preserves the aspectual class of the main clause, and it can be used
with main clauses of any aspectual class. In both cases, \qit{while}
requires as its complement a finite sentence whose aspect must not be
consequent state (this agrees with table \vref{while_clauses_table},
which does not allow the aspectual class of the \qit{while}-clause to
be consequent state).
\begin{examps}
\avmoptions{active, center}
\item
\setbox\avmboxa=\hbox{\begin{avm}
[mod & \feat s[vform {\fval fin}]:@2 $\lor$
\feat vp[vform {\fval psp}]:@2 \\
\avmspan{mod|loc|cat|aspect \; {\fval culmact}}]
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval while\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\feat s[vform & {\fval fin}\\
aspect & $\neg$cnsq\_state]:@1\> \\
aspect & point]\\
cont & \osort{at\_op}{
[time\_spec & @1 \\
main\_psoa & \osort{end}{
[main\_psoa & @2]}]}] \\
\avmspan{qstore \; \{\}} ]
\end{avm}
\label{subs:1}
\end{examps}
The \avmbox{1} in \pref{subs:1} denotes the {\feat cont} of the sign
of the complement of \qit{while} (the subordinate clause). The two
\qit{while} signs cause \pref{subs:3} to receive \pref{subs:4} and
\pref{subs:5}. (\qit{To land} is a culminating activity verb in the
airport domain.) \pref{subs:4} requires the landing to have simply been
completed during the inspection, while \pref{subs:5} requires
the landing to have both started and been completed during the
inspection.
\begin{examps}
\item UK160 landed while J.Adams was inspecting BA737. \label{subs:3}
\item $\begin{aligned}[t]
\ensuremath{\mathit{At}}\xspace[&\ensuremath{\mathit{Past}}\xspace[e1^v, inspecting(j\_adams, ba737)], \\
&\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[landing(occr^v, uk160)]]]]
\end{aligned}$
\label{subs:4}
\item $\begin{aligned}[t]
\ensuremath{\mathit{At}}\xspace[&\ensuremath{\mathit{Past}}\xspace[e1^v, inspecting(j\_adams, ba737)], \\
&\ensuremath{\mathit{Past}}\xspace[e2^v, \ensuremath{\mathit{Culm}}\xspace[landing(occr^v, uk160)]]]
\end{aligned}$
\label{subs:5}
\end{examps}
Since \qit{while~\dots} clauses are treated as temporal modifiers, the
ordering arrangements of section \ref{fronted} apply to
\qit{while~\dots} clauses as well. Hence, \qit{while~\dots} clauses
can either precede or follow finite sentences (e.g.\ \pref{subs:8.1},
\pref{subs:9}).
\begin{examps}
\item UK160 arrived while J.Adams was inspecting BA737. \label{subs:8.1}
\item While J.Adams was inspecting BA737, UK160 arrived. \label{subs:9}
\end{examps}
One problem with the present treatment of \qit{while~\dots} clauses is
that it maps \pref{subs:10} to \pref{subs:11}, which requires the
inspection to have been completed. This does not agree with table
\vref{while_clauses_table}, according to which any requirement that
the situation of a culminating activity sentence must have reached its
completion is cancelled when the sentence is used as a \qit{while~\dots}
clause. To overcome this problem, the post-processing (section
\ref{post_processing} below) removes any \ensuremath{\mathit{Culm}}\xspace operators that are within
first arguments of \ensuremath{\mathit{At}}\xspace operators. This removes the \ensuremath{\mathit{Culm}}\xspace of
\pref{subs:11}, generating a formula that no longer requires the
inspection to have been completed.
\begin{examps}
\item UK160 departed while J.Adams inspected BA737. \label{subs:10}
\item $\begin{aligned}[t]
\ensuremath{\mathit{At}}\xspace[&\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(j\_adams, ba737)]], \\
          &\ensuremath{\mathit{Past}}\xspace[e2^v, actl\_depart(uk160)]]
\end{aligned}$
\label{subs:11}
\end{examps}
\section{Interrogatives} \label{unb_dep}
So far, this chapter has considered mainly assertions (e.g.\
\pref{unb:1}). (The reader is reminded that assertions are treated as
yes/no questions; e.g.\ \pref{unb:1} is treated as \pref{unb:3}.) I
now explain how the \textsc{Hpsg}\xspace version of this thesis copes with questions
(e.g.\ \pref{unb:3} -- \pref{unb:8}).
\begin{examps}
\item Tank 2 was empty. \label{unb:1}
\item Was tank 2 empty? \label{unb:3}
\item Did J.Adams inspect BA737? \label{unb:4}
\item Which tank was empty? \label{unb:5}
\item Who inspected BA737? \label{unb:6}
\item What did J.Adams inspect? \label{unb:7}
\item When did J.Adams inspect BA737? \label{unb:8}
\end{examps}
Yes/no questions (e.g.\ \pref{unb:3}, \pref{unb:4}) constitute no
particular problem. \textsc{Hpsg}\xspace's schemata allow auxiliary verbs to be used
in sentence-initial positions, and cause \pref{unb:3} to receive the
same formula (shown in \pref{unb:9.1}) as \pref{unb:9}. In both cases,
the same lexical signs are used. Similar comments apply to
\pref{unb:4} and \pref{unb:10}, which are mapped to \pref{unb:11}.
\begin{examps}
\item Tank 2 was empty. \label{unb:9}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]$ \label{unb:9.1}
\item J.Adams did inspect BA737. \label{unb:10}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, j\_adams, ba737)]]$
\label{unb:11}
\end{examps}
The interrogative \qit{which} is treated syntactically as a determiner
of (non-predicative) noun phrases. The sign of \qit{which} is the same
as the sign of \qit{the} of \pref{nps:4}, except that it introduces an
interrogative quantifier rather than an existential one. For example,
\pref{unb:5} is analysed syntactically in the same way as
\pref{unb:12} (punctuation is ignored). However, the formula of
\pref{unb:5} (shown in \pref{unb:14}) contains an additional
interrogative quantifier (cf.\ the formula of \pref{unb:12}, shown in
\pref{unb:13}). (I assume here that \qit{tank} does not introduce an
\ensuremath{\mathit{Ntense}}\xspace. The \qit{a} of \pref{unb:12} introduces an existential
quantifier which is removed during the extraction of \pref{unb:13}
from the sign of \pref{unb:12}, as discussed in section
\ref{extraction_hpsg}.)
\begin{examps}
\item A tank was empty. \label{unb:12}
\item $tank(tk^v) \land \ensuremath{\mathit{Past}}\xspace[e^v, empty(tk^v)]$ \label{unb:13}
\item $?tk^v \; tank(tk^v) \land \ensuremath{\mathit{Past}}\xspace[e^v, empty(tk^v)]$ \label{unb:14}
\end{examps}
The interrogative \qit{who} is treated syntactically as a
non-predicative noun phrase. Its sign, shown in \pref{unb:15},
introduces an interrogative quantifier.
\begin{examps}
\avmoptions{active}
\item
\begin{avm}
[\avmspan{phon \; \<\fval who\>} \\
synsem|loc & [cat & [head & \osort{noun}{
[prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & \sort{person\_ent}{
[tvar & $+$]} \\
restr & \{\}]}@1] \\
\avmspan{qstore \; \{\sort{quant}{
[det & interrog \\
restind & @1]}\}}]
\end{avm}
\label{unb:15}
\end{examps}
\pref{unb:6} is analysed syntactically in the same way as
\pref{unb:16}. The sign of \qit{who}, however, gives rise to an
interrogative quantifier in the formula of \pref{unb:6} (shown in
\pref{unb:18}), which is not present in the formula of \pref{unb:16}
(shown in \pref{unb:17}). The interrogative \qit{what} is treated
similarly.
\begin{examps}
\item J.Adams inspected BA737. \label{unb:16}
\item $\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, j\_adams, ba737)]]$ \label{unb:17}
\item $?wh^v \; \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, wh^v, ba737)]]$
\label{unb:18}
\end{examps}
The \textsc{Hpsg}\xspace version of this thesis admits questions like \pref{unb:19},
which are unacceptable in most contexts. \pref{unb:19} is licensed by the
same syntactic analysis that allows \pref{unb:20}, and receives the same
formula as \pref{unb:21}.
\begin{examps}
\item \odd Did J.Adams inspect which flight? \label{unb:19}
\item Did J.Adams inspect a flight? \label{unb:20}
\item Which flight did J.Adams inspect? \label{unb:21}
\end{examps}
Questions like \pref{unb:21}, where the interrogative refers to the
object of the verb, are treated using \textsc{Hpsg}\xspace's unbounded-dependencies
mechanisms (more precisely, using the {\feat slash} feature; see
chapter 4 of \cite{Pollard2}).\footnote{Pollard and Sag also reserve a
{\feat que} feature, which is supposed to be used in the treatment
of interrogatives. They provide virtually no information on the role
of {\feat que}, however, pointing to \cite{Ginzburg1992} where
{\feat que} is used in a general theory of interrogatives.
Ginzburg's theory is intended to address issues well beyond the
scope of this thesis (e.g.\ the relation between a question and the
facts that can be said to \emph{resolve} that question; see also
\cite{Ginzburg1995}, \cite{Ginzburg1995b}). {\feat que} is not used
in this thesis.} Roughly speaking, \pref{unb:21} is analysed as
being a form of \pref{unb:19}, where the object \qit{which flight} has
moved to the beginning of the question. \textsc{Hpsg}\xspace's unbounded-dependencies
mechanisms will not be discussed here (see \cite{Pollard2}; the
prototype \textsc{Nlitdb}\xspace uses the traceless analysis of unbounded
dependencies, presented in chapter 9 of \cite{Pollard2}).
The present treatment of interrogatives allows questions with multiple
interrogatives, like \pref{unb:24}, which receives \pref{unb:24.1}.
(\pref{unb:24} is parsed in the same way as \pref{unb:25}.)
Unfortunately, it also allows ungrammatical questions like
\pref{unb:26}, which is treated as a version of \pref{unb:24} where
the \qit{what} complement has moved to the beginning of the sentence.
(\pref{unb:26} receives \pref{unb:24.1}.)
\begin{examps}
\item Who inspected what. \label{unb:24}
\item $?w1^v \; ?w2^v \; \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w1^v, w2^v)]]$
\label{unb:24.1}
\item J.Adams inspected BA737. \label{unb:25}
\item \bad What who inspected. \label{unb:26}
\end{examps}
The interrogative \qit{when} of \pref{unb:27} is treated as a
temporal-adverbial modifier of finite sentences. \pref{unb:28} shows
the sign of \qit{when} that is used in \pref{unb:27}. \pref{unb:28}
causes \pref{unb:27} to receive \pref{unb:27.1}.
\avmoptions{active, center}
\begin{examps}
\item When was tank 2 empty? \label{unb:27}
\item $?_{mxl}w^v \; \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]$ \label{unb:27.1}
\item
\setbox\avmboxa=\hbox{\begin{avm}
[mod & \feat s[vform {\fval fin}]:@1 \\
\avmspan{mod|loc|cat|aspect \; @2}]
\end{avm}}
\setbox\avmboxb=\hbox{\begin{avm}
[det & interrog\_mxl \\
restind & [index & \sort{temp\_ent}{
[tvar & $+$]} \\
restr & \{\}]]
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval when\>} \\
synsem|loc & [cat & [head & \box\avmboxa \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> \\
aspect & @2]\\
cont & @1] \\
\avmspan{qstore \; \{\box\avmboxb\}} ]
\end{avm}
\label{unb:28}
\end{examps}
\pref{unb:28} introduces interrogative-maximal
quantifiers whose variables ($w^v$ in \pref{unb:27.1}) do not appear
elsewhere in the formula. The post-processing
(to be discussed in section \ref{post_processing}) replaces the
variables of interrogative-maximal quantifiers by
variables that appear as first arguments of \ensuremath{\mathit{Past}}\xspace or \ensuremath{\mathit{Perf}}\xspace
operators. In \pref{unb:27.1}, this would replace $w^v$
by $e^v$, generating a formula that asks for the maximal past
periods where tank 2 was empty.
There is also a second sign for the interrogative \qit{when} (shown in
\pref{unb:32}), that is used in habitual questions like \pref{unb:29}.
In \pref{unb:29}, \qit{when} is taken to play the same role as
\qit{at 5:00pm} in \pref{unb:30}, i.e.\ it is treated as the
prepositional-phrase complement of the habitual \qit{depart} (see
section \ref{habituals}), which has moved to the beginning of the
sentence via the unbounded-dependencies mechanisms.
\avmoptions{active, center}
\begin{examps}
\item When does BA737 depart (habitually)? \label{unb:29}
\item Does BA737 depart (habitually) at 5:00pm? \label{unb:30}
\item
\setbox\avmboxa=\hbox{\begin{avm}
[det & interrog \\
restind & @1]
\end{avm}}
\begin{avm}
[\avmspan{phon \; \<\fval when\>} \\
synsem|loc & [cat & [head & \osort{prep}{
[prd & $-$]} \\
spr & \<\> \\
subj & \<\> \\
comps & \<\> ]\\
cont & \osort{nom\_obj}{
[index & \sort{gappy\_partng}{
[tvar & $+$]} \\
restr & \{\}]}@1] \\
\avmspan{qstore \; \{\box\avmboxa\}}]
\end{avm}
\label{unb:32}
\end{examps}
In the simple past \pref{unb:33}, both the (state) habitual homonym of
\qit{to depart} (that of \pref{hab:7}, which requires a prepositional
phrase complement) and the (point) non-habitual homonym (that of
\pref{hab:8}, which requires no complement) can be used. Hence,
\qit{when} can be either a prepositional-phrase complement of the
habitual \qit{depart} (using \pref{unb:32}), or a temporal modifier of
the non-habitual sentence \qit{did BA737 depart} (using
\pref{unb:28}). This gives rise to \pref{unb:34} and \pref{unb:35},
which correspond to the habitual and non-habitual readings of
\pref{unb:33} (the $w^v$ of \pref{unb:35} would be replaced by $e^v$
during the post-processing).
\begin{examps}
\item When did BA737 depart? \label{unb:33}
\item $?w^v \; \ensuremath{\mathit{Past}}\xspace[e^v, hab\_departs\_at(ba737, w^v)]$ \label{unb:34}
\item $?_{mxl}w^v \; \ensuremath{\mathit{Past}}\xspace[e^v, actl\_depart(ba737)]$ \label{unb:35}
\end{examps}
\section{Multiple temporal modifiers} \label{hpsg:mult_mods}
The framework of this thesis currently runs into several problems
in sentences with multiple temporal modifiers. This section discusses these
problems.
\paragraph{Both preceding and trailing temporal modifiers:}
Temporal modifiers are allowed to either
precede or follow finite sentences (section
\ref{fronted}). When a finite sentence is modified by both a
preceding and a trailing temporal modifier (as in \pref{mults:1}),
two parses are generated: one where the trailing modifier
attaches first to the sentence (as in \pref{mults:2}), and one where
the preceding modifier attaches first (as in \pref{mults:4}). In most
cases, this generates two semantically equivalent formulae
(\pref{mults:3} and \pref{mults:5} in the case of \pref{mults:1}). A
mechanism is needed to eliminate one of the two formulae.
\begin{examps}
\item Yesterday BA737 was at gate 2 for two hours. \label{mults:1}
\item Yesterday [[BA737 was at gate 2] for two hours.] \label{mults:2}
\item $\ensuremath{\mathit{At}}\xspace[yesterday, \ensuremath{\mathit{For}}\xspace[hour^c, 2, \ensuremath{\mathit{Past}}\xspace[e^v, located\_at(ba737, gate2)]]]$
\label{mults:3}
\item {[}Yesterday [BA737 was at gate 2]] for two hours. \label{mults:4}
\item $\ensuremath{\mathit{For}}\xspace[hour^c, 2, \ensuremath{\mathit{At}}\xspace[yesterday, \ensuremath{\mathit{Past}}\xspace[e^v, located\_at(ba737, gate2)]]]$
\label{mults:5}
\end{examps}
\paragraph{Multiple temporal modifiers and anaphora:}
Another problem is that a question like \pref{mults:10} is mapped to
\pref{mults:11}. (I assume here that \qit{flight} does not introduce
an \ensuremath{\mathit{Ntense}}\xspace.) The problem with \pref{mults:11} is that it does not
require $fv^v$ to be the particular 5:00pm-minute of 2/11/95.
\pref{mults:11} requires the flight to have arrived on 2/11/95 and
after an arbitrary 5:00pm-minute (e.g.\ the 5:00pm-minute of 1/11/95).
In effect, this causes the \qit{after 5:00pm} to be ignored.
\begin{examps}
\item Which flight arrived after 5:00pm on 2/11/95? \label{mults:10}
\item $?fl^v \; flight(fl^v) \land \ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v]
\land$\\
$\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}}, \ensuremath{\mathit{After}}\xspace[fv^v, \ensuremath{\mathit{Past}}\xspace[e^v, arrive(fl^v)]]]$
\label{mults:11}
\end{examps}
This problem seems related to the need for temporal anaphora
resolution mechanisms (section \ref{temporal_anaphora}). In
\pref{mults:16}, for example, the user most probably has a particular
(contextually-salient) 5:00pm-minute in mind, and an anaphora
resolution mechanism is needed to determine that minute. A similar
mechanism could be responsible for reasoning that in \pref{mults:10}
the most obvious contextually salient 5:00pm-minute is that of
2/11/95.
\begin{examps}
\item Which tanks were empty before/at/after 5:00pm? \label{mults:16}
\end{examps}
\paragraph{Culminating activity with both punctual and period adverbial:}
A further problem appears when a culminating activity is modified by
both a punctual and a period adverbial.\footnote{The problems of this
section that involve period adverbials also arise when temporal
subordinate clauses are used instead of period adverbials.} The
problem is that, unlike what one would expect, \pref{mults:18} and
\pref{mults:19} do not receive equivalent \textsc{Top}\xspace formulae. (I assume
here that \qit{to repair} is classified as a culminating activity verb.)
\begin{examps}
\item J.Adams repaired fault 2 at 5:00pm on 2/11/95. \label{mults:18}
\item J.Adams repaired fault 2 on 2/11/95 at 5:00pm. \label{mults:19}
\end{examps}
In \pref{mults:18}, the punctual adverbial \qit{at 5:00pm} modifies
the culminating activity sentence \qit{J.Adams repaired fault 2}. The
punctual adverbial causes \qit{J.Adams repaired fault 2 at 5:00pm} to
become a point (see table \vref{punctual_adverbials_table}). Two
formulae are generated: one that requires the repair to have started
at 5:00pm, and one that requires the repair to have been completed at
5:00pm. \qit{On 2/11/95} then modifies the point expression
\qit{J.Adams repaired fault 2 at 5:00pm}. This leads to
\pref{mults:22} and \pref{mults:23}. In \pref{mults:22} the repair
\emph{starts} at the 5:00pm-minute of 2/11/95, while in
\pref{mults:23} the repair is \emph{completed} at the 5:00pm-minute of
2/11/95. (The first reading is easier to accept in \qit{J.Adams
inspected BA737 at 5:00pm on 2/11/95}.)
\begin{examps}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},$\\
$\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Begin}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:22}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},$\\
$\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:23}
\end{examps}
(A digression: this example also demonstrates why punctual adverbials
are taken to trigger an aspectual shift to point; see section
\ref{point_adverbials}. Without this shift, the aspectual class of
\qit{J.Adams repaired fault 2 at 5:00pm} would be culminating
activity, and the \qit{on} signs of section \ref{hpsg:per_advs} would
lead to the additional formulae of \pref{mults:22f} and
\pref{mults:23f}. These are equivalent to \pref{mults:22} and
\pref{mults:23} respectively.)
\begin{examps}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}}, $\\
$\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{Begin}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v,
\ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]]$
\label{mults:22f}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land
\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},$\\
$\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{At}}\xspace[fv^v, \ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v,
\ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]]$
\label{mults:23f}
\end{examps}
In \pref{mults:19}, \qit{J.Adams repaired fault 2} is
first modified by the period adverbial \qit{on 2/11/95}. Two formulae
(shown in \pref{mults:24} and \pref{mults:25}) are
generated. \pref{mults:24} requires the repair to simply reach its
completion on 2/11/95, while \pref{mults:25} requires the repair to
both start and reach its completion on 2/11/95. In the first case (where
\pref{mults:24} is generated), the aspectual class of \qit{J.Adams repaired
fault 2 on 2/11/95} becomes point, while in the other case the
aspectual class remains culminating activity (see also table
\vref{period_adverbials_table}).
\begin{examps}
\item $\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},
\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]$
\label{mults:24}
\item $\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},
\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]$
\label{mults:25}
\end{examps}
In the case of \pref{mults:24}, where the aspectual class of \qit{J.Adams
repaired fault 2 on 2/11/95} is point,
the signs of section \ref{hpsg:punc_adv} lead to \pref{mults:26},
while in the case of \pref{mults:25}, they lead to \pref{mults:27} and \pref{mults:28}.
\begin{examps}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land \ensuremath{\mathit{At}}\xspace[fv^v,$\\
$\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},
\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:26}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land \ensuremath{\mathit{At}}\xspace[fv^v,$\\
$\ensuremath{\mathit{Begin}}\xspace[
\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},
\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:27}
\item $\ensuremath{\mathit{Part}}\xspace[\text{\textit{5:00pm}}^g, fv^v] \land \ensuremath{\mathit{At}}\xspace[fv^v,$\\
$\ensuremath{\mathit{End}}\xspace[
\ensuremath{\mathit{At}}\xspace[\text{\textit{2/11/95}},
\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:28}
\end{examps}
Hence, \pref{mults:18} receives two formulae (\pref{mults:22} and
\pref{mults:23}), while \pref{mults:19} receives three
(\pref{mults:26} -- \pref{mults:28}). \pref{mults:26} is equivalent to
\pref{mults:23}. They both require the repair to reach its completion
within the 5:00pm-minute of 2/11/95. Unlike what one might expect,
however, \pref{mults:27} is not equivalent to \pref{mults:22}.
\pref{mults:27} requires a past period that covers exactly the whole
repair (from start to completion) to fall within 2/11/95, and the
beginning of that period to fall within some 5:00pm-minute. This means
that the repair must start at the 5:00pm-minute of 2/11/95 (as in
\pref{mults:22}), but it also means that the repair must reach its
completion within 2/11/95 (this is not a requirement in
\pref{mults:22}). Also, unlike what one might expect, \pref{mults:28}
is not equivalent to \pref{mults:23} and \pref{mults:26}.
\pref{mults:28} requires the repair to reach its completion within the
5:00pm-minute of 2/11/95 (as in \pref{mults:23} and \pref{mults:26}),
but it also requires the repair to start within 2/11/95 (which is not
a requirement in \pref{mults:23} and \pref{mults:26}).
The differences in the number and semantics of the generated formulae
in \pref{mults:18} and \pref{mults:19} lead to differences in the
behaviour of the \textsc{Nlitdb}\xspace that are difficult to explain to the user. A
tentative solution is to adopt some mechanism that would reorder the
temporal modifiers, so that the punctual adverbial attaches before the
period one. This would reverse the order of \qit{on 2/11/95} and
\qit{at 5:00pm} in \pref{mults:19}, and would cause \pref{mults:19} to
be treated in the same way as \pref{mults:18} (i.e.\ to be mapped to
\pref{mults:22} and \pref{mults:23}; these seem to capture the most
natural readings of \pref{mults:18} and \pref{mults:19}).
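The following Python fragment is a minimal sketch of such a reordering, under
the assumption that each temporal modifier has already been tagged as punctual
or period; the representation (dictionaries with a \qit{kind} field) is purely
illustrative and is not part of the \textsc{Ale}\xspace grammar of the prototype
\textsc{Nlitdb}\xspace.
\begin{verbatim}
# Hypothetical representation: each modifier is tagged as 'punctual' or
# 'period'.  Punctual adverbials are moved before period adverbials, so
# that they attach to the modified sentence first.
def reorder_modifiers(modifiers):
    punctual = [m for m in modifiers if m["kind"] == "punctual"]
    period   = [m for m in modifiers if m["kind"] == "period"]
    return punctual + period

# "J.Adams repaired fault 2 on 2/11/95 at 5:00pm."
mods = [{"kind": "period",   "text": "on 2/11/95"},
        {"kind": "punctual", "text": "at 5:00pm"}]
print([m["text"] for m in reorder_modifiers(mods)])
# ['at 5:00pm', 'on 2/11/95']
\end{verbatim}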
\paragraph{Culminating activity and multiple period adverbials:}
A further problem is that a sentence like \pref{mults:30}, where a
culminating activity is modified by two period adverbials, receives
three formulae, shown in \pref{mults:32} -- \pref{mults:31}. It
turns out that \pref{mults:33} is equivalent to \pref{mults:31}, and
hence one of the two should be eliminated.
\begin{examps}
\item J.Adams repaired fault 2 in June in 1992. \label{mults:30}
\item $\ensuremath{\mathit{Part}}\xspace[june^g, j^v] \land \ensuremath{\mathit{At}}\xspace[1992,$\\
$\ensuremath{\mathit{At}}\xspace[j^v, \ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:32}
\item $\ensuremath{\mathit{Part}}\xspace[june^g, j^v] \land \ensuremath{\mathit{At}}\xspace[1992,$\\
$\ensuremath{\mathit{End}}\xspace[\ensuremath{\mathit{At}}\xspace[j^v, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]]$
\label{mults:33}
\item $\ensuremath{\mathit{Part}}\xspace[june^g, j^v] \land \ensuremath{\mathit{At}}\xspace[1992,$\\
$\ensuremath{\mathit{At}}\xspace[j^v, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[repairing(occr^v, j\_adams, fault2)]]]]$
\label{mults:31}
\end{examps}
A period adverbial combining with a culminating activity can either
insert an \ensuremath{\mathit{End}}\xspace operator and cause an aspectual shift to point, or
insert no \ensuremath{\mathit{End}}\xspace and leave the aspectual class unchanged (see section
\ref{hpsg:per_advs}). In the case where \pref{mults:32} is generated,
\qit{in June} inserts an \ensuremath{\mathit{End}}\xspace and changes the aspectual class to
point. This does not allow \qit{in 1992} (which attaches after \qit{in
June}) to insert an \ensuremath{\mathit{End}}\xspace, because period adverbials combining with
points are not allowed to insert \ensuremath{\mathit{End}}\xspace{s} (the \qit{on} sign of
\pref{pupe:29} cannot be used with points). In the cases where
\pref{mults:33} or \pref{mults:31} are generated, \qit{in June} does
not insert an \ensuremath{\mathit{End}}\xspace, and the aspectual class remains culminating
activity. \qit{In 1992} can then insert an \ensuremath{\mathit{End}}\xspace (as in
\pref{mults:33}) or not (as in \pref{mults:31}). \pref{mults:31}
requires the whole repair to be located within a June and 1992 (i.e.\
within the June of 1992). \pref{mults:32} is weaker: it requires only
the completion point of the repair to be located within the June of
1992. Finally, \pref{mults:33} requires the whole of the repair to be
located within a June, and the completion point of the repair to fall
within 1992. This is equivalent to requiring the whole of the repair
to fall within the June of 1992, i.e.\ \pref{mults:33} is equivalent
to \pref{mults:31}, and one of the two should be eliminated.
\section{Post-processing} \label{post_processing}
The parsing maps each English question to an \textsc{Hpsg}\xspace sign (or multiple
signs, if the parser understands the question to be ambiguous). From
that sign, a \textsc{Top}\xspace formula is extracted as discussed in section
\ref{extraction_hpsg}. The extracted formula then undergoes an
additional post-processing phase. This is a collection of minor
transformations, discussed below, that cannot be carried out easily
during the parsing.
\paragraph{Removing Culms:} \pref{post:2} shows the \textsc{Top}\xspace formula that
is extracted from the sign of \pref{post:1}. As discussed in section
\ref{duration_adverbials}, \pref{post:2} does not correctly represent
\pref{post:1}, because \pref{post:2} requires the taxiing to have been
completed. In contrast, as discussed in section \ref{for_adverbials},
the \qit{for~\dots} adverbial of \pref{post:1} cancels the normal
implication of \qit{BA737 taxied to gate 2} that the taxiing must have
been completed. To express \pref{post:1} correctly, the \ensuremath{\mathit{Culm}}\xspace of
\pref{post:2} has to be removed.
\begin{examps}
\item BA737 taxied to gate 2 for five minutes. \label{post:1}
\item $\ensuremath{\mathit{For}}\xspace[minute^c, 5, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[taxiing\_to(ba737, gate2)]]]$
\label{post:2}
\end{examps}
A first solution would be to remove during the post-processing any
\ensuremath{\mathit{Culm}}\xspace operator that is within the scope of a \ensuremath{\mathit{For}}\xspace operator. The
problem with this approach is that duration \qit{in~\dots} adverbials
also introduce \ensuremath{\mathit{For}}\xspace operators (see section \ref{duration_adverbials}), but
unlike \qit{for~\dots} adverbials they do not cancel the implication
that the completion must have been reached. For example, the formula
extracted from the sign of \pref{post:5} is \pref{post:2}. In this
case, \pref{post:2} is a correct rendering of \pref{post:5} (because
\pref{post:5} \emph{does} imply that BA737 reached gate 2), and hence
the \ensuremath{\mathit{Culm}}\xspace operator should not be removed. To overcome this problem, the
prototype \textsc{Nlitdb}\xspace attaches to each \ensuremath{\mathit{For}}\xspace operator a flag showing
whether it was introduced by a \qit{for~\dots} or an \qit{in~\dots}
adverbial. Only \ensuremath{\mathit{For}}\xspace operators introduced by \qit{for~\dots}
adverbials cause \ensuremath{\mathit{Culm}}\xspace operators within their scope to be removed.
\begin{examps}
\item BA737 taxied to gate 2 in five minutes. \label{post:5}
\end{examps}
The post-processing also removes any \ensuremath{\mathit{Culm}}\xspace operator from within the
first argument of an \ensuremath{\mathit{At}}\xspace operator. As explained in section
\ref{hpsg:subordinates}, this is needed to express correctly
\qit{while~\dots} clauses.
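The two \ensuremath{\mathit{Culm}}\xspace-removing rules can be pictured with the following
Python sketch. It assumes a hypothetical representation of \textsc{Top}\xspace
formulae as nested tuples, with each \ensuremath{\mathit{For}}\xspace operator carrying an extra
flag recording whether it was introduced by a \qit{for~\dots} or an
\qit{in~\dots} adverbial; the sketch does not reflect the prototype's actual
implementation.
\begin{verbatim}
# Hypothetical representation: a TOP formula is a nested tuple
# ('Op', arg1, ..., argN).  For operators carry a last argument,
# 'for' or 'in', recording the adverbial that introduced them.
def strip_culms(f, strip=False):
    if not isinstance(f, tuple):
        return f
    op, args = f[0], f[1:]
    if op == "Culm" and strip:
        return strip_culms(args[0], strip)   # drop the Culm, keep its body
    if op == "For":
        unit, count, body, source = args
        return ("For", unit, count,
                strip_culms(body, strip or source == "for"), source)
    if op == "At":
        first, second = args
        # Culms within the first argument of an At are always removed.
        return ("At", strip_culms(first, True), strip_culms(second, strip))
    return (op,) + tuple(strip_culms(a, strip) for a in args)

# For[minute, 5, Past[e, Culm[taxiing_to(ba737, gate2)]]] from a "for ..."
# adverbial loses its Culm; the same formula from an "in ..." adverbial
# would be left intact.
f = ("For", "minute", 5,
     ("Past", "e", ("Culm", ("taxiing_to", "ba737", "gate2"))), "for")
print(strip_culms(f))
\end{verbatim}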
\paragraph{$\mathbf{?_{mxl}}$ quantifiers:} As noted in section \ref{unb_dep},
before the post-processing the variables of interrogative-maximal
quantifiers introduced by \qit{when} do not occur elsewhere in their
formulae. For example, \pref{post:9} and \pref{post:7} are
extracted from the signs of \pref{post:8} and \pref{post:6} respectively. In both
formulae, $w^v$ occurs only immediately after the $?_{mxl}$.
\begin{examps}
\item When was J.Adams a manager? \label{post:8}
\item $?_{mxl}w^v \; \ensuremath{\mathit{Past}}\xspace[e^v, manager(j\_adams)]$ \label{post:9}
\item When while BA737 was circling was runway 2 open? \label{post:6}
\item $?_{mxl}w^v \; \ensuremath{\mathit{At}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, circling(ba737)],
\ensuremath{\mathit{Past}}\xspace[e2^v, open(runway2)]]$
\label{post:7}
\end{examps}
During the post-processing, the variables of interrogative-maximal
quantifiers are replaced by variables that appear as first arguments
of \ensuremath{\mathit{Past}}\xspace or \ensuremath{\mathit{Perf}}\xspace operators, excluding \ensuremath{\mathit{Past}}\xspace and \ensuremath{\mathit{Perf}}\xspace operators that
are within first arguments of \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, or \ensuremath{\mathit{After}}\xspace operators. In
\pref{post:9}, this causes $w^v$ to be replaced by $e^v$. The
resulting formula asks for the maximal past periods where J.Adams was
a manager. Similarly, the $w^v$ of \pref{post:7} is replaced by
$e2^v$. The resulting formula asks for the maximal past periods
$e2^v$, such that runway 2 was open at $e2^v$, and $e2^v$ is a
subperiod of a period $e1^v$ where BA737 was circling. In
\pref{post:7}, $w^v$ cannot be replaced by $e1^v$, because
$\ensuremath{\mathit{Past}}\xspace[e1^v, circling(ba737)]$ is within the first argument of an \ensuremath{\mathit{At}}\xspace.
\ensuremath{\mathit{Past}}\xspace and \ensuremath{\mathit{Perf}}\xspace operators located within first arguments of \ensuremath{\mathit{At}}\xspace,
\ensuremath{\mathit{Before}}\xspace, or \ensuremath{\mathit{After}}\xspace operators are excluded, to avoid interpreting
\qit{when} as referring to the time where the situation of a
subordinate clause held (formulae that express subordinate clauses
end up within first arguments of \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, or \ensuremath{\mathit{After}}\xspace operators).
The interrogative \qit{when} always refers to the situation of the
main clause. For example, \pref{post:6} cannot be asking for maximal
periods where BA737 was circling that subsume periods where runway 2
was open (this would be the meaning of \pref{post:7} if $w^v$ were
replaced by $e1^v$).
When the main clause is in the past perfect, this arrangement allows
the variable of $?_{mxl}$ to be replaced by either the first argument
of the main-clause's \ensuremath{\mathit{Past}}\xspace operator, or the first argument of the
main-clause's \ensuremath{\mathit{Perf}}\xspace operator. \pref{post:11}, for example, shows the
formula extracted from the sign of \pref{post:10}. The
post-processing generates two formulae: one where $w^v$ is replaced by
$e1^v$, and one where $w^v$ is replaced by $e2^v$. The first one asks
for what section \ref{point_adverbials} called the ``consequent
period'' of the inspection (the period from the end of the inspection
to the end of time). The second one asks for the time of the actual
inspection.
\begin{examps}
\item When had J.Adams inspected BA737? \label{post:10}
\item $?_{mxl}w^v \; \ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v,
\ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, j\_adams, ba737)]]]$
\label{post:11}
\end{examps}
\paragraph{Ntense operators:} As noted in section \ref{non_pred_nps},
when extracting \textsc{Top}\xspace formulae from signs, if an \ensuremath{\mathit{Ntense}}\xspace operator is
encountered and the sign contains no definite indication that the
first argument of the \ensuremath{\mathit{Ntense}}\xspace should be $now^*$, in the extracted
formula the first argument of the \ensuremath{\mathit{Ntense}}\xspace becomes a variable. That
variable does not occur elsewhere in the extracted formula. Assuming,
for example, that the (non-predicative) \qit{queen} introduces an
\ensuremath{\mathit{Ntense}}\xspace, the formula extracted from the sign of \pref{post:14} is
\pref{post:15}. The $t^v$ of the \ensuremath{\mathit{Ntense}}\xspace does not occur elsewhere in
\pref{post:15}.
\begin{examps}
\item The queen was in Rome. \label{post:14}
\item $\ensuremath{\mathit{Ntense}}\xspace[t^v, queen(q^v)] \land
\ensuremath{\mathit{Past}}\xspace[e1^v, located\_at(q^v, rome)]$ \label{post:15}
\end{examps}
During the post-processing, variables appearing as first arguments of
\ensuremath{\mathit{Ntense}}\xspace{s} give rise to multiple formulae, where the first arguments
of the \ensuremath{\mathit{Ntense}}\xspace{s} are replaced by $now^*$ or by first arguments of
\ensuremath{\mathit{Past}}\xspace or \ensuremath{\mathit{Perf}}\xspace operators. In \pref{post:15}, for example, the
post-processing generates two formulae: one where $t^v$ is replaced by
$now^*$ (queen at the speech time), and one where $t^v$ is replaced by $e1^v$
(queen when in Rome).
In \pref{post:17} (the formula extracted from the sign of
\pref{post:16}), there is no \ensuremath{\mathit{Past}}\xspace or \ensuremath{\mathit{Perf}}\xspace operator, and hence $t^v$
can only become $now^*$. This captures the fact that the \qit{queen}
in \pref{post:16} most probably refers to the queen of the speech
time.
\begin{examps}
\item The queen is in Rome. \label{post:16}
\item $\ensuremath{\mathit{Ntense}}\xspace[t^v, queen(q^v)] \land
       \ensuremath{\mathit{Pres}}\xspace[located\_at(q^v, rome)]$ \label{post:17}
\end{examps}
In \pref{post:19} (the formula extracted from the sign of
\pref{post:18}), the post-processing leads to three formulae, where
$t^v$ is replaced by $now^*$ (queen at speech time), $e2^v$ (queen
during the visit), or $e1^v$ (queen at a ``reference time'' after
the visit).
\begin{examps}
\item The queen had visited Rome. \label{post:18}
\item $\ensuremath{\mathit{Ntense}}\xspace[t^v, queen(q^v)] \land
\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, visiting(q^v, rome)]]$ \label{post:19}
\end{examps}
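The \ensuremath{\mathit{Ntense}}\xspace step can be sketched in the same way: every variable that
stands as the first argument of an \ensuremath{\mathit{Ntense}}\xspace is replaced by $now^*$ or by
a first argument of a \ensuremath{\mathit{Past}}\xspace or \ensuremath{\mathit{Perf}}\xspace operator, yielding one
formula per choice. Again, the nested-tuple representation is hypothetical.
\begin{verbatim}
# First arguments of all Past/Perf operators in a formula.
def tense_vars(f):
    if not isinstance(f, tuple):
        return []
    vs = [f[1]] if f[0] in ("Past", "Perf") else []
    return vs + [v for a in f[1:] for v in tense_vars(a)]

def replace(f, old, new):
    if f == old:
        return new
    return tuple(replace(a, old, new) for a in f) if isinstance(f, tuple) else f

# Ntense[t, queen(q)] AND Past[e1, Perf[e2, visiting(q, rome)]]
f = ("and", ("Ntense", "t", ("queen", "q")),
            ("Past", "e1", ("Perf", "e2", ("visiting", "q", "rome"))))
alternatives = [replace(f, "t", v) for v in ["now*"] + tense_vars(f)]
# three formulae: t -> now*, t -> e1, t -> e2
\end{verbatim}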
\section{Summary}
This chapter has shown how \textsc{Hpsg}\xspace can be used to translate English
questions directed to a \textsc{Nlitdb}\xspace to appropriate \textsc{Top}\xspace formulae. During
the parsing, each question receives one or more \textsc{Hpsg}\xspace signs, from
which \textsc{Top}\xspace formulae are extracted. The extracted formulae then
undergo an additional post-processing phase, which leads to formulae
that capture the semantics of the original English questions.
Several modifications were made to \textsc{Hpsg}\xspace. The main modifications were:
(i) \textsc{Hpsg}\xspace features and sorts that are intended to account for
phenomena not examined in this thesis (e.g.\ pronouns, relative
clauses, number agreement) were dropped. (ii) The quantifier storage
mechanism of \textsc{Hpsg}\xspace was replaced by a more primitive one, that does not
allow quantifiers to be unstored during the parsing; the semantics
principle was modified accordingly. (iii) An {\feat aspect} feature
was added, along with a principle that controls its propagation. (iv)
The possible values of {\feat cont} and {\feat qstore} were modified,
to represent \textsc{Top}\xspace expressions rather than situation-theory
constructs. (v) A hierarchy of world-entity types was mounted under
the {\srt ind}\/ sort; this is used to disambiguate sentences, and to
block semantically ill-formed ones. (vi) New lexical signs and lexical
rules were introduced to cope with temporal linguistic mechanisms
(verb tenses, temporal adverbials, temporal subordinate clauses, etc.).
Apart from these modifications, the \textsc{Hpsg}\xspace version of this thesis
follows closely \cite{Pollard2}.
\chapter{From TOP to TSQL2} \label{tdb_chapter}
\proverb{Time is money.}
\section{Introduction}
This chapter describes the translation from \textsc{Top}\xspace to \textsc{Tsql2}\xspace. The
discussion starts with an introduction to \textsc{Tsql2}\xspace and the version of the
relational model on which \textsc{Tsql2}\xspace is based. This thesis adopts some
modifications to \textsc{Tsql2}\xspace. These are described next, along with some
minor alterations in the \textsc{Top}\xspace definition of chapter
\ref{TOP_chapter}. The translation from \textsc{Top}\xspace to \textsc{Tsql2}\xspace requires
\textsc{Top}\xspace's model to be linked to the database; this is explained
next. The translation is carried out by a set of rules. I explore
formally the properties that these rules must possess for the
translation to be correct, and I describe the intuitions behind the
design of the rules. An illustration of how some of the rules work is
also given. The full set of the translation rules, along with a proof
that they possess the necessary properties, is given in appendix
\ref{trans_proofs}. The chapter ends with a discussion of related work
and reflections on how the generated \textsc{Tsql2}\xspace code could be optimised.
\section{An introduction to TSQL2} \label{TSQL2_intro}
This section introduces \textsc{Tsql2}\xspace and the version of the relational model
on which \textsc{Tsql2}\xspace is based. Some definitions that are not part of the
\textsc{Tsql2}\xspace documentation are also given; these will be used in following
sections. I note that although \cite{TSQL2book} defines \textsc{Tsql2}\xspace's
syntax rigorously, the semantics of the language is defined very
informally, with parts of the semantics left to the intuition of the
reader. There are also some inconsistencies in the \textsc{Tsql2}\xspace definition
(several of these were pointed out in \cite{Androutsopoulos1995b}).
\subsection{The traditional relational model} \label{relational}
As explained in section \ref{tdbs_general}, the traditional relational
model stores information in relations, which can be thought of as
tables. For example, $salaries$ below is a relation showing the
current salaries of a company's employees. $salaries$ has two
\emph{attributes} (intuitively, columns), $employee$ and $salary$. The
\emph{tuples of the relation} are intuitively the rows of the table
($salaries$ has three tuples).
\adbtable{2}{|l|l|}{$salaries$}
{$employee$ & $salary$ }
{$J.Adams$ & $17000$ \\
$T.Smith$ & $19000$ \\
$G.Papas$ & $14500$
}
I adopt a set-theoretic definition of relations (see section 2.3 of
\cite{Ullman} for alternative approaches). A set of attributes
${\cal D}_A$
\index{da@${\cal D}_A$ (set of all attributes)}
is assumed (e.g.\ $employee$ and $salary$ are elements of ${\cal
D}_A$). A \emph{relation schema} is an ordered tuple of one or more
attributes (e.g.\ $\tup{employee, salary}$). A set of \emph{domains}
${\cal D}_D = \{D_1, D_2, \dots, D_{n_D}\}$
\index{dd@${\cal D}_D$ (set of all domains)}
is also assumed. Each element $D_i$ of ${\cal D}_D$ is itself a
set. For example, $D_1$ may contain all strings, $D_2$ all positive
integers, etc. Each attribute (element of ${\cal D}_A$) is assigned a
domain (element of ${\cal D}_D$). $D(A)$
\index{d()@$D(A)$ (domain of the attribute $A$)}
denotes the domain of attribute $A$. $D$
\index{d@$D$ (universal domain)}
on its own refers to the \emph{universal domain}, the union of all $D_i
\in {\cal D}_D$.
A \emph{relation} over a relation schema $R = \tup{A_1, A_2, \dots,
A_n}$ is a subset of $D(A_1) \times D(A_2) \times \dots \times
D(A_n)$, where $\times$ denotes the cartesian product, and $D(A_1)$,
$D(A_2)$,~\dots, $D(A_n)$ are the domains of the attributes $A_1$,
$A_2$,~\dots, $A_n$ respectively. That is, a relation over $R$ is a
set of tuples of the form $\tup{v_1, v_2, \dots, v_n}$, where $v_1 \in
D(A_1)$, $v_2 \in D(A_2)$, \dots, $v_n \in D(A_n)$. In each tuple
$\tup{v_1, v_2, \dots, v_n}$, $v_1$ is the \emph{attribute value} of
$A_1$, $v_2$ is the attribute value of $A_2$, etc. The universal
domain $D$ is the set of all possible attribute values. Assuming, for
example, that $employee, salary \in {\cal D}_A$, that $D_1$ and $D_2$
are as in the previous paragraph, and that $employee$ and $salary$ are
assigned $D_1$ and $D_2$, $r$ below is a relation over $\tup{employee,
salary}$. ($r$ is a mathematical representation of $salaries$ above.)
On its own, ``relation'' will be used to refer to a relation over any
relation schema.
\[
r = \{\tup{J.Adams, 17000}, \tup{T.Smith, 19000}, \tup{G.Papas, 14500}\}
\]
The \emph{arity} of a relation over $R$ is the number of attributes in
$R$ (e.g.\ the arity of $r$ is 2). The \emph{cardinality} of a relation is
the number of tuples it contains (the cardinality of $r$ is
3). A relational \emph{database} is a set of relations (more elaborate
definitions are possible, but this is sufficient for our purposes).
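To make the set-theoretic picture concrete, the following Python
fragment is a minimal illustrative sketch (it is not part of \textsc{Tsql2}\xspace or
of the translation); relations are modelled as sets of tuples, and all
names and values are hypothetical.
\begin{verbatim}
# Illustrative sketch: a relation as a set of tuples over a schema.
salaries_schema = ("employee", "salary")        # relation schema
salaries = {                                    # relation over the schema
    ("J.Adams", 17000),
    ("T.Smith", 19000),
    ("G.Papas", 14500),
}
arity = len(salaries_schema)        # 2
cardinality = len(salaries)         # 3
database = {"salaries": salaries}   # a database: a set of (named) relations
\end{verbatim}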
I assume that every element of $D$ (universal domain) denotes an
object in the modelled world. (``Object in the world'' is used here
with a very loose meaning, that covers qualifications of employees,
salaries, etc.) \ensuremath{\mathit{OBJS^{db}}}\xspace
\index{objsdb@\ensuremath{\mathit{OBJS^{db}}}\xspace (\textsc{Bcdm}\xspace's world objects)}
is the set of all the world objects that are each denoted by a single
element of $D$. (Some world objects may be represented in the database
as collections of elements of $D$, e.g.\ as whole tuples. \ensuremath{\mathit{OBJS^{db}}}\xspace
contains only world objects that are denoted by \emph{single} elements
of $D$.) I also assume that a function $f_D : D \mapsto \ensuremath{\mathit{OBJS^{db}}}\xspace$
\index{fd@$f_D()$ (maps attribute values to world objects)}
is available, that maps each element $v$ of $D$ to the world object
denoted by $v$. $f_D$ reflects the semantics assigned to the attribute
values by the people who use the database. In practice, an element of
$D$ may denote different world objects when used as the value of
different attributes. For example, $15700$ may denote a salary when
used as the value of $salary$, and a part of an engine when used as
the value of an attribute $part\_no$. Hence, the value of $f_D$ should
also depend on the attribute where the element of $D$ is used, i.e.\
it should be a function $f_D : D \times {\cal D}_A \mapsto
\ensuremath{\mathit{OBJS^{db}}}\xspace$. For simplicity, I overlook this detail.
I also assume that $f_D$ is 1-1 (injective), i.e.\ that every element
of $D$ denotes a different world object. In practice, $f_D$ may not be
1-1: the database may use two different attribute values (e.g.\ $dpt3$
and $sales\_dpt$) to refer to the same world object. The \textsc{Top}\xspace to
\textsc{Tsql2}\xspace translation could be formulated without assuming that $f_D$ is
1-1. This assumption, however, bypasses uninteresting details. By the
definition of \ensuremath{\mathit{OBJS^{db}}}\xspace, any element of \ensuremath{\mathit{OBJS^{db}}}\xspace is a world object
denoted by some element of $D$. That is, for every $o \in \ensuremath{\mathit{OBJS^{db}}}\xspace$,
there is a $v \in D$, such that $f_D(v) = o$, i.e.\ $f_D$ is also
surjective. Since $f_D$ is both 1-1 and surjective, the inverse
mapping \ensuremath{f_D^{-1}}\xspace is a function, and \ensuremath{f_D^{-1}}\xspace is also 1-1 and surjective.
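The assumption that $f_D$ is a bijection can be sketched in Python as
follows; the attribute values and world objects shown are purely
hypothetical.
\begin{verbatim}
# Sketch: f_D as a bijection from attribute values to world objects,
# modelled as a dictionary (values and objects are hypothetical).
f_D = {"J.Adams": "the person J. Adams", 17000: "a salary of 17000"}
f_D_inverse = {obj: val for val, obj in f_D.items()}
# Since f_D is 1-1 and surjective, the inverse mapping is again a
# function, and composing the two is the identity:
assert all(f_D_inverse[f_D[v]] == v for v in f_D)
\end{verbatim}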
\subsection{TSQL2's model of time} \label{tsql2_time}
Like \textsc{Top}\xspace, \textsc{Tsql2}\xspace assumes that time is discrete, linear, and bounded.
In effect, \textsc{Tsql2}\xspace models time as consisting of
\emph{chronons}. Chronons are the shortest representable units of
time, and correspond to \textsc{Top}\xspace's time-points.\footnote{\textsc{Tsql2}\xspace
distinguishes between \emph{valid-time chronons},
\emph{transaction-time chronons}, and \emph{bitemporal chronons}
(pairs each comprising a valid-time and a transaction-time chronon;
see chapter 10 of \cite{TSQL2book}). As noted in section
\ref{tdbs_general}, transaction-time is ignored in this thesis. Hence,
transaction-time and bitemporal chronons are not used, and ``chronon''
refers to valid-time chronons.} Depending on the \textsc{Tsql2}\xspace implementation,
a chronon may represent a nanosecond, a day, or a whole century. Let
us call the (implementation-specific) set of chronons \ensuremath{\mathit{CHRONS}}\xspace.
\index{chrons@\ensuremath{\mathit{CHRONS}}\xspace (set of all chronons)}
Although not stated explicitly, it is clear from the discussion in
chapter 6 of \cite{TSQL2book} that $\ensuremath{\mathit{CHRONS}}\xspace \not= \emptyset$, that
chronons are ordered by a binary precedence relation (let us call it
$\prec^{db}$), and that $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$ has the properties
of transitivity, irreflexivity, linearity, left and right boundedness,
and discreteness (section \ref{temporal_ontology}).
I define periods over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$ in the same way as
periods over $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$ (section \ref{temporal_ontology}). A
period over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$ is a non-empty and convex set
of chronons. An instantaneous period over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$
is a set that contains a single chronon.
$\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$
\index{periods2@$\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$ (set of all
periods over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$)}
and $\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$
\index{instants2@$\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$ (set of all instantaneous periods over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$)}
are the sets of all periods and all instantaneous periods over
$\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$ respectively. In section
\ref{resulting_model}, I set the point structure $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}$
of \textsc{Top}\xspace's model to $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$. Hence,
$\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$ and $\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}}$
become $\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$ and
$\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$. As in
chapter \ref{TOP_chapter}, I write \ensuremath{\mathit{PERIODS}}\xspace
\index{periods@$\ensuremath{\mathit{PERIODS}}\xspace$ (set of all periods)}
and \ensuremath{\mathit{INSTANTS}}\xspace
\index{instants@$\ensuremath{\mathit{INSTANTS}}\xspace$ (set of all instantaneous periods)}
to refer to these sets, and $\ensuremath{\mathit{PERIODS}}\xspace^*$
\index{periods*@$\ensuremath{\mathit{PERIODS}}\xspace^*$ ($\ensuremath{\mathit{PERIODS}}\xspace \union \{\emptyset\}$)}
to refer to $\ensuremath{\mathit{PERIODS}}\xspace \union \{\emptyset\}$.
A \emph{temporal element} over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$ is a
non-empty (but not necessarily convex) set of
chronons. $\ensuremath{\mathit{TELEMS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$
\index{telems2@$\ensuremath{\mathit{TELEMS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}}$ (set of all temporal elements over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$)}
(or simply \ensuremath{\mathit{TELEMS}}\xspace)
\index{telems@\ensuremath{\mathit{TELEMS}}\xspace (set of all temporal elements)}
is the set of all temporal elements over $\tup{\ensuremath{\mathit{CHRONS}}\xspace,
\prec^{db}}$. Obviously, $\ensuremath{\mathit{PERIODS}}\xspace \subseteq \ensuremath{\mathit{TELEMS}}\xspace$. For every $l
\in \ensuremath{\mathit{TELEMS}}\xspace$, $mxlpers(l)$
\index{mxlpers@$mxlpers()$ (maximal periods of a set or temporal element)}
is the set of the \emph{maximal periods} of $l$, defined as follows:
\begin{align*}
mxlpers(l) \defeq \{p \subseteq l \mid & \; p \in \ensuremath{\mathit{PERIODS}}\xspace \text{ and for no }
p' \in \ensuremath{\mathit{PERIODS}}\xspace \text{ is it true that } \\
& \; p' \subseteq l \text{ and } p \propsubper p'\}
\end{align*}
The $mxlpers$ symbol is overloaded. When $l \in \ensuremath{\mathit{TELEMS}}\xspace$, $mxlpers(l)$
is defined as above. When $S$ is a set of periods, $mxlpers(S)$ is defined
as in section \ref{temporal_ontology}.
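As an illustration, the following Python sketch computes $mxlpers(l)$
for a temporal element $l$, under the assumption that chronons are
represented by integers and periods by non-empty sets of consecutive
integers; the representation and names are illustrative only.
\begin{verbatim}
# Sketch: maximal periods of a temporal element (a non-empty set of
# integer chronons) are its maximal runs of consecutive chronons.
def mxlpers(telem):
    chronons = sorted(telem)
    periods, current = [], [chronons[0]]
    for c in chronons[1:]:
        if c == current[-1] + 1:        # still contiguous: extend the run
            current.append(c)
        else:                           # gap: close the current maximal period
            periods.append(frozenset(current))
            current = [c]
    periods.append(frozenset(current))
    return set(periods)

# {1,2,3,7,8} has the maximal periods {1,2,3} and {7,8}
assert mxlpers({1, 2, 3, 7, 8}) == {frozenset({1, 2, 3}), frozenset({7, 8})}
\end{verbatim}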
\textsc{Tsql2}\xspace supports multiple \emph{granularities}. These correspond to
\textsc{Top}\xspace complete partitionings. A granularity can be
thought of as a set of periods over $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{db}}$
(called \emph{granules}), such that no two periods overlap, and the
union of all the periods is \ensuremath{\mathit{CHRONS}}\xspace. A lattice is used to capture
relations between granularities (e.g.\ a year-granule contains twelve
month-granules, etc; see chapter 19 of \cite{TSQL2book}). \ensuremath{\mathit{INSTANTS}}\xspace,
also called the \emph{granularity of chronons}, is the finest
available granularity.
\textsc{Tsql2}\xspace allows periods and temporal elements to be specified at any
granularity. For example, one may specify that the first day of a
period is 25/11/95, and the last day is 28/11/95. If the granularity
of chronons is finer than the granularity of days, the exact chronons
within 25/11/95 and 28/11/95 where the period starts and ends are
unknown. Similarly, if a temporal element is specified at a
granularity coarser than \ensuremath{\mathit{INSTANTS}}\xspace, the exact chronon-boundaries of
its maximal periods are unknown.\footnote{To bypass this problem, in
\cite{Androutsopoulos1995b} periods and temporal elements are defined
as sets of granules (of any granularity) rather than sets of
chronons.} These are examples of \emph{indeterminate temporal
information} (see chapter 18 of \cite{TSQL2book}). Information of this
kind is ignored in this thesis. I assume that all periods and temporal
elements are specified at the granularity of chronons, and that we
know exactly which chronons are or are not included in periods and
temporal elements. Granularities other than \ensuremath{\mathit{INSTANTS}}\xspace will be used
only to express durations (see below).
Finally, \textsc{Tsql2}\xspace uses the term \emph{interval} to refer to a
duration (see comments in section \ref{top_intro}). An interval is a
number of consecutive granules of some particular granularity (e.g.\
two day-granules, five minute-granules).
\subsection{BCDM} \label{bcdm}
As noted in section \ref{tdbs_general}, numerous temporal versions of
the relational model have been proposed. \textsc{Tsql2}\xspace is based on a version
called \textsc{Bcdm}\xspace. Apart from the relations of the traditional relational
model (section \ref{relational}), which are called \emph{snapshot
relations} in \textsc{Tsql2}\xspace, \textsc{Bcdm}\xspace provides \emph{valid-time relations},
\emph{transaction-time relations}, and \emph{bitemporal
relations}. Transaction-time and bitemporal relations are not used in
this thesis (see chapter 10 of \cite{TSQL2book}). Valid-time relations
are similar to snapshot relations, except that they have a special
extra attribute (the \emph{implicit attribute}) that shows when the
information of each tuple was/is/will be true.
A special domain $D_T \in {\cal D}_D$
\index{dt@$D_T$ (set of all attribute values that denote temporal elements)}
is assumed, whose elements denote the elements of \ensuremath{\mathit{TELEMS}}\xspace (temporal
elements). For every $v_t \in D_T$, $f_D(v_t) \in
\ensuremath{\mathit{TELEMS}}\xspace$; and for every $l \in \ensuremath{\mathit{TELEMS}}\xspace$, $\ensuremath{f_D^{-1}}\xspace(l) \in D_T$. $D_T$ is
the domain of the implicit attribute. Since $D_T \in {\cal
D}_D$, $D_T \subseteq D$ ($D$ is the union of all
the domains in ${\cal D}_D$). The assumptions of section
\ref{relational} about $f_D$ still hold: I assume that $f_D$
is an injective and surjective function from $D$ (which now includes
$D_T$) to \ensuremath{\mathit{OBJS^{db}}}\xspace. Since the elements of $D_T$ denote all the elements
of \ensuremath{\mathit{TELEMS}}\xspace, $D_T \subseteq D$, and \ensuremath{\mathit{OBJS^{db}}}\xspace contains all the objects
denoted by elements of $D$, it must be the case that $\ensuremath{\mathit{TELEMS}}\xspace
\subseteq \ensuremath{\mathit{OBJS^{db}}}\xspace$. Then, the fact that $\ensuremath{\mathit{PERIODS}}\xspace \subseteq \ensuremath{\mathit{TELEMS}}\xspace$
(section \ref{tsql2_time}) implies that $\ensuremath{\mathit{PERIODS}}\xspace \subseteq
\ensuremath{\mathit{OBJS^{db}}}\xspace$.
A \emph{valid-time relation} $r$ over a relation-schema $R = \tup{A_1,
A_2, \dots, A_n}$ is a subset of $D(A_1) \times D(A_2) \times \dots
\times D(A_n) \times D_T$, where $D(A_1)$, $D(A_2)$,~\dots,
$D(A_n)$ are the domains of $A_1$, $A_2$,~\dots, $A_n$. $A_1$,
$A_2$,~\dots, $A_n$ are the \emph{explicit attributes} of $r$. I use
the notation $\tup{v_1, v_2, \dots, v_n; v_t}$ to refer to tuples of
valid-time relations. If $r$ is as above and $\tup{v_1, v_2, \dots,
v_n; v_t} \in r$, then $v_1 \in D(A_1)$, $v_2 \in D(A_2)$,~\dots,
$v_n \in D(A_n)$, and $v_t \in D_T$. $v_1$, $v_2$,~\dots, $v_n$ are
the \emph{values of the explicit attributes}, while $v_t$ is the
\emph{value of the implicit attribute} and the \emph{time-stamp} of
the tuple. In snapshot relations, all attributes count as explicit.
In the rest of this thesis, ``valid-time relation'' on its own
refers to a valid-time relation over any relation-schema.
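For later illustrations it is convenient to fix a concrete,
purely hypothetical Python representation of valid-time relations;
this is a sketch only and plays no role in the translation itself.
\begin{verbatim}
# Sketch: a valid-time tuple <v1, ..., vn; vt> as a Python tuple whose
# last element is its time-stamp; for simplicity the time-stamp is
# identified with the temporal element it denotes, i.e. with a frozenset
# of integer chronons (all values below are hypothetical).
r = {
    ("J.Adams", 17000, frozenset(range(0, 50)) | frozenset(range(80, 120))),
    ("J.Adams", 18000, frozenset(range(50, 80)) | frozenset(range(120, 200))),
    ("T.Smith", 21000, frozenset(range(60, 200))),
}
for *explicit_values, time_stamp in r:
    assert isinstance(time_stamp, frozenset)   # the implicit attribute
\end{verbatim}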
\textsc{Tsql2}\xspace actually distinguishes between \emph{state valid-time relations}
and \emph{event valid-time relations} (see chapter 16 of
\cite{TSQL2book}). These are intended to model situations that have
duration or are instantaneous respectively. This distinction seems
particularly interesting, because it appears to capture some facets of
the aspectual taxonomy of chapter
\ref{linguistic_data}. Unfortunately, it is also one of the most
unclear and problematically defined features of \textsc{Tsql2}\xspace. The time-stamps
of state and event valid-time relations are supposed to denote
``temporal elements'' and ``instant sets'' respectively. ``Temporal
elements'' are said to be unions of periods, while ``instant sets''
are simply sets of chronons (see p.314 of \cite{TSQL2book}). This
distinction between ``temporal elements'' and ``instant sets'' is
problematic. A union of periods is a union of convex sets of chronons,
i.e.\ simply a set of chronons. (The union of two convex sets of
chronons is not necessarily convex.) Hence, one cannot distinguish
between unions of periods and sets of chronons (see also section 2 of
\cite{Androutsopoulos1995b}). In section 3.3 of
\cite{Androutsopoulos1995b} we also argue that \textsc{Tsql2}\xspace does not allow
specifying whether a computed valid-time relation should be state or
event. Given these problems, I chose to drop the distinction between
state and event valid-time relations. I assume that the time-stamps of
all valid-time relations denote temporal elements, with temporal
elements being sets of chronons.
For example, assuming that the domains of $employee$ and $salary$ are
as in section \ref{relational}, $val\_salaries$ below is a valid-time
relation over $\tup{employee, salary}$, shown in its tabular form (the
double vertical line separates the explicit attributes from the
implicit one). According to chapter 10 of \cite{TSQL2book}, the
elements of $D_T$ are non-atomic. Each element $v_t$ of $D_T$ is in
turn a set, whose elements denote the chronons that belong to the
temporal element represented by $v_t$.
\adbtable{3}{|l|l||l|}{$val\_salaries$}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $\{c^1_1, c^1_2, c^1_3, \dots, c^1_{n_1}\}$ \\
$J.Adams$ & $18000$ & $\{c^2_1, c^2_2, c^2_3, \dots, c^2_{n_2}\}$ \\
$J.Adams$ & $18500$ & $\{c^3_1, c^3_2, c^3_3, \dots, c^3_{n_3}\}$ \\
$T.Smith$ & $19000$ & $\{c^4_1, c^4_2, c^4_3, \dots, c^4_{n_4}\}$ \\
$T.Smith$ & $21000$ & $\{c^5_1, c^5_2, c^5_3, \dots, c^5_{n_5}\}$
}
For example, $c^1_1, c^1_2, c^1_3, \dots, c^1_{n_1}$ in the first
tuple for J.Adams above represent all the chronons where the salary of
J.Adams was/is/will be 17000. $\{c^1_1, c^1_2, c^1_3, \dots,
c^1_{n_1}\}$ is an element of $D_T$. For simplicity, when depicting
valid-time relations I often show (in an informal manner) the temporal
elements denoted by the time-stamps rather than the time-stamps
themselves. $val\_salaries$ would be shown as below, meaning that the
time-stamp of the first tuple represents a temporal element of two
maximal periods, 1/1/92 to 12/6/92 and 8/5/94 to 30/10/94. (I assume
here that chronons correspond to days. $now$ refers to the current
chronon.)
\begin{examps}
\item \label{tlang:4}
\dbtable{3}{|l|l||l|}{$val\_salaries$}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $[1/1/92, \; 12/6/92] \union [8/5/94, \; 30/10/94]$ \\
$J.Adams$ & $18000$ & $[13/6/92, \; 7/5/94] \union [31/10/94, \; now]$ \\
$T.Smith$ & $21000$ & $[15/6/92, \; now]$
}
\end{examps}
Two tuples $\tup{v_1^1, \dots, v_n^1; v_t^1}$ and $\tup{v^2_1, \dots,
v_n^2; v_t^2}$ are \emph{value-equivalent} iff $v^1_1 = v^2_1$,
\dots, $v^1_n = v^2_n$. A valid-time relation is \emph{coalesced} iff
it contains no value-equivalent tuples. \textsc{Bcdm}\xspace requires all valid-time
relations to be coalesced (see p.188 of \cite{TSQL2book}). For
example, \pref{bcdm:1} is not allowed (its first and third tuples are
value-equivalent). In this thesis, this \textsc{Bcdm}\xspace restriction is dropped,
and \pref{bcdm:1} is allowed.
\begin{examps}
\item \label{bcdm:1}
\dbtablec{|l|l||l|}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $[1/1/92, \; 12/6/92]$ \\
$J.Adams$ & $18000$ & $[13/6/92, \; 7/5/94]$ \\
$J.Adams$ & $17000$ & $[8/5/94, \; 30/10/94]$ \\
$J.Adams$ & $18000$ & $[31/10/94, \; now]$ \\
$T.Smith$ & $21000$ & $[15/6/92, \; now]$
}
\end{examps}
By the definition of $D_T$, the elements of $D_T$ denote all the
elements of $\ensuremath{\mathit{TELEMS}}\xspace$ (temporal elements). Since $\ensuremath{\mathit{PERIODS}}\xspace \subseteq
\ensuremath{\mathit{TELEMS}}\xspace$, some of the elements of $D_T$ denote periods. $D_P$
\index{dp@$D_P$ (set of all attribute values that denote periods)}
is the subset of all elements of $D_T$ that denote
periods.\footnote{\cite{TSQL2book} seems to adopt a different
approach, where $D_P \intersect D_T = \emptyset$.} I also assume that
there is a special value $\ensuremath{v_\varepsilon}\xspace \in D$,
\index{ve@$\ensuremath{v_\varepsilon}\xspace$ (attribute value denoting $\emptyset$)}
that is used to denote the empty set (of chronons). For example, a
\textsc{Tsql2}\xspace expression that computes the intersection of two non-overlapping
periods evaluates to \ensuremath{v_\varepsilon}\xspace.\footnote{Table 8.3 of \cite{TSQL2book}
implies that \ensuremath{v_\varepsilon}\xspace is the special ``null'' value. In \textsc{Sql}\xspace, null has
several roles. Here, I assume that there is a special value \ensuremath{v_\varepsilon}\xspace
whose only role is to denote the empty set.} I use $D_P^*$
\index{dp*@$D_P^*$ ($D_P \union \emptyset$)}
to refer to $D_P \union \{\ensuremath{v_\varepsilon}\xspace\}$.
The following notation will prove useful:
\begin{itemize}
\item \ensuremath{\mathit{VREL}_P}\xspace
\index{vrelp@\ensuremath{\mathit{VREL}_P}\xspace (set of all valid-time relations time-stamped by elements of $D_P$)}
is the set of all valid-time relations whose time-stamps
are all elements of $D_P$ (all the time-stamps denote periods).
\item \ensuremath{\mathit{NVREL}_P}\xspace
\index{nvrelp@\ensuremath{\mathit{NVREL}_P}\xspace (``normalised'' elements of \ensuremath{\mathit{VREL}_P}\xspace)}
is the set of all the (intuitively, ``normalised'') relations $r \in
\ensuremath{\mathit{VREL}_P}\xspace$ with the following property: if $\tup{v_1, \dots, v_n; v^1_t}
\in r$, $\tup{v_1, \dots, v_n; v^2_t} \in r$, and $f_D(v^1_t) \union
f_D(v^2_t) \in \ensuremath{\mathit{PERIODS}}\xspace$, then $v^1_t = v^2_t$. This definition
ensures that in any $r \in \ensuremath{\mathit{NVREL}_P}\xspace$, there is no pair of different
value-equivalent tuples whose time-stamps $v^1_t$ and $v^2_t$ denote
overlapping or adjacent periods (because if the periods of $v^1_t$ and
$v^2_t$ overlap or they are adjacent, their union is also a period,
and then it must be true that $v^1_t = v^2_t$, i.e.\ the
value-equivalent tuples are not different).
\item \ensuremath{\mathit{SREL}}\xspace
\index{srel@\ensuremath{\mathit{SREL}}\xspace (set of all snapshot relations)}
is the set of all snapshot relations.
\item For every $n \in \{1,2,3,\dots\}$, $\ensuremath{\mathit{VREL}_P}\xspace(n)$
\index{vrelpn@$\ensuremath{\mathit{VREL}_P}\xspace(n)$ (relations in \ensuremath{\mathit{VREL}_P}\xspace with $n$ explicit attributes)}
contains all the relations of \ensuremath{\mathit{VREL}_P}\xspace that have $n$ explicit
attributes. Similarly, $\ensuremath{\mathit{NVREL}_P}\xspace(n)$
\index{nvrelpn@$\ensuremath{\mathit{NVREL}_P}\xspace(n)$ (set of all relations in \ensuremath{\mathit{NVREL}_P}\xspace with $n$ explicit attributes)}
and $\ensuremath{\mathit{SREL}}\xspace(n)$
\index{sreln@$\ensuremath{\mathit{SREL}}\xspace(n)$ (set of all snapshot relations of $n$ attributes)}
contain all the relations of \ensuremath{\mathit{NVREL}_P}\xspace and \ensuremath{\mathit{SREL}}\xspace respectively that have
$n$ explicit attributes.
\end{itemize}
To simplify the proofs in the rest of this chapter, I include the
empty relation in all $\ensuremath{\mathit{VREL}_P}\xspace(n)$, $\ensuremath{\mathit{NVREL}_P}\xspace(n)$, $\ensuremath{\mathit{SREL}}\xspace(n)$, for $n=
1,2,3,\dots$.
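Using the representation of the earlier sketches (time-stamps as
frozensets of integer chronons), the defining property of \ensuremath{\mathit{NVREL}_P}\xspace
can be sketched in Python as follows; the helper names are
illustrative only.
\begin{verbatim}
# Sketch: check the defining property of NVREL_P for a relation whose
# time-stamps are frozensets of integer chronons.
def is_period(s):
    return len(s) > 0 and max(s) - min(s) + 1 == len(s)   # non-empty and convex

def in_nvrel_p(r):
    for t1 in r:
        for t2 in r:
            # two distinct value-equivalent tuples whose time-stamps denote
            # overlapping or adjacent periods violate the property
            if t1[:-1] == t2[:-1] and t1 != t2 and is_period(t1[-1] | t2[-1]):
                return False
    return all(is_period(t[-1]) for t in r)    # must also be in VREL_P
\end{verbatim}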
\subsection{The TSQL2 language} \label{tsql2_lang}
This section is an introduction to the features of \textsc{Tsql2}\xspace that
are used in this thesis.
\subsubsection*{SELECT statements}
As noted in section \ref{tdbs_general}, \textsc{Tsql2}\xspace is an extension of
\textsc{Sql-92}\xspace. Roughly speaking, \textsc{Sql-92}\xspace queries (e.g.\ \pref{tlang:1}) consist
of three clauses: a \sql{SELECT},
\index{select@\sql{SELECT} (\textsc{Tsql2}\xspace keyword, introduces a \textsc{Tsql2}\xspace query)}
a \sql{FROM},
\index{from@\sql{FROM} (\textsc{Tsql2}\xspace keyword, shows the relations on which a \sql{SELECT} operates)}
and a \sql{WHERE}
\index{where@\sql{WHERE} (\textsc{Tsql2}\xspace keyword, introduces restrictions)}
clause. (The term \emph{\sql{SELECT} statement} will
be used to refer to the whole of a \textsc{Sql-92}\xspace or \textsc{Tsql2}\xspace query.)
\begin{examps}
\item
\index{as@\sql{AS} (\textsc{Tsql2}\xspace keyword, introduces correlation names)}
\index{and@\sql{AND} (\textsc{Tsql2}\xspace's conjunction)}
\label{tlang:1}
\select{SELECT DISTINCT sal.salary \\
FROM salaries AS sal, managers AS mgr \\
WHERE mgr.manager = 'J.Adams' AND sal.employee = mgr.managed}
\end{examps}
Assuming that $salaries$ and $managers$ are as below, \pref{tlang:1}
generates the third relation below.
\begin{examps}
\item[]
\dbtable{2}{|l|l|}{$salaries$}
{$employee$ & $salary$ }
{$J.Adams$ & $17000$ \\
$T.Smith$ & $18000$ \\
$G.Papas$ & $14500$ \\
$B.Hunter$ & $17000$ \\
$K.Kofen$ & $16000$
}
\ \
\dbtable{2}{|l|l|}{$managers$}
{$manager$ & $managed$ }
{$J.Adams$ & $G.Papas$ \\
$J.Adams$ & $B.Hunter$ \\
$J.Adams$ & $J.Adams$ \\
$T.Smith$ & $K.Kofen$ \\
$T.Smith$ & $T.Smith$
}
\ \
\dbtable{1}{|l|}{$(result)$}
{$salary$}
{$17000$ \\
$14500$
}
\end{examps}
\pref{tlang:1} generates a snapshot one-attribute relation that
contains the salaries of all employees managed by J.Adams. The
\sql{FROM} clause of \pref{tlang:1} shows that the query operates on
the $salaries$ and $managers$ relations. \sql{sal} and \sql{mgr} are
\emph{correlation names}.
They can be thought of as tuple-variables ranging over the tuples of
$salaries$ and $managers$ respectively. The (optional) \sql{WHERE}
clause imposes restrictions on the possible combinations of
tuple-values of \sql{sal} and \sql{mgr}. In every combination, the
$manager$ value of \sql{mgr} must be $J.Adams$, and the $managed$
value of \sql{mgr} must be the same as the $employee$ value of
\sql{sal}. For example, $\tup{J.Adams, G.Papas}$ and $\tup{G.Papas,
14500}$ is an acceptable combination of
\sql{mgr} and \sql{sal} values respectively, while $\tup{J.Adams,
G.Papas}$ and $\tup{B.Hunter, 17000}$ is not.
In \textsc{Sql-92}\xspace (and \textsc{Tsql2}\xspace), correlation names are optional, and relation
names can be used to refer to attribute values. In \pref{tlang:1}, for
example, one could omit \sql{AS mgr}, and replace \sql{mgr.manager} and
\sql{mgr.managed} by \sql{managers.manager} and
\sql{managers.managed}. To simplify the definitions of section
\ref{additional_tsql2} below, I treat correlation names as
mandatory, and I do not allow relation names to be used to refer to
attribute values.
The \sql{SELECT} clause specifies the contents of the resulting
relation. In \pref{tlang:1}, it specifies that the resulting relation
should have only one attribute, $salary$, and that for each acceptable
combination of \sql{sal} and \sql{mgr} values, the corresponding tuple
of the resulting relation should contain the $salary$ value of
\sql{sal}'s tuple. The \sql{DISTINCT}
\index{distinct@\sql{DISTINCT} (\textsc{Tsql2}\xspace keyword, removes duplicate tuples)}
in \pref{tlang:1} causes duplicates of tuples to be removed from the
resulting relation. Without the \sql{DISTINCT}, duplicates are not
removed; the result of \pref{tlang:1} would then contain two identical
tuples $\tup{17000}$, deriving from the tuples for J.Adams and
B.Hunter in $salaries$. This is against the set-theoretic definition
of relations of sections \ref{relational} and \ref{bcdm} (relations were
defined to be \emph{sets} of tuples, and hence cannot contain
duplicates). To ensure that relations contain no duplicates, in this
thesis \sql{SELECT} statements always have a
\sql{DISTINCT} in their \sql{SELECT} clauses.
\textsc{Tsql2}\xspace allows \sql{SELECT} statements to operate
on valid-time relations as well. A \sql{SNAPSHOT}
\index{snapshot@\sql{SNAPSHOT} (\textsc{Tsql2}\xspace keyword, signals that a snapshot relation is to be created)}
keyword in the \sql{SELECT} statement indicates that the resulting
relation is snapshot. When the resulting relation is valid-time, an
additional \sql{VALID} clause is present. In the latter case, the
\sql{SELECT} clause specifies the values of the explicit attributes of
the resulting relation, while the
\sql{VALID} clause specifies the time-stamps of the resulting tuples.
Assuming, for example, that $val\_salaries$ is as in
\pref{tlang:4}, \pref{tlang:7} returns \pref{tlang:8}.
\begin{examps}
\item \label{tlang:7}
\sql{SELECT DISTINCT sal.employee, sal.salary \\
VALID PERIOD(BEGIN(VALID(sal)), END(VALID(sal))) \\
FROM val\_salaries AS sal}
\item \label{tlang:8}
\dbtablec{|l|l||l|}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $[1/1/92, \; 30/10/94]$ \\
$J.Adams$ & $18000$ & $[13/6/92, \; now]$ \\
$T.Smith$ & $21000$ & $[15/6/92, \; now]$
}
\end{examps}
The \sql{VALID}
\index{valid@\sql{VALID} (\textsc{Tsql2}\xspace keyword, refers to time-stamps of tuples)}
keyword is used both to start a \sql{VALID}-clause (a clause that
specifies the time-stamps of the resulting relation) and to refer to
the time-stamp of the tuple-value of a correlation name. In
\pref{tlang:7}, \sql{VALID(sal)} refers to the time-stamp of
\sql{sal}'s tuple (i.e.\ to the time-stamp of a tuple from
$val\_salaries$). \sql{BEGIN(VALID(sal))}
\index{begin2@\sql{BEGIN} (\textsc{Tsql2}\xspace keyword, returns the start-point of a temporal element)}
refers to the first chronon of the temporal element represented by
that time-stamp, and \sql{END(VALID(sal))}
\index{end2@\sql{END} (\textsc{Tsql2}\xspace keyword, returns the end-point of a temporal element)}
to the last chronon of that temporal
element.\footnote{Section 30.5 of \cite{TSQL2book} allows
\sql{BEGIN} and \sql{END} to be used only with periods. I see no
reason for this limitation. I allow \sql{BEGIN} and \sql{END} to be
used with any temporal element.} The
\sql{PERIOD}
\index{period@\sql{PERIOD} (\textsc{Tsql2}\xspace keyword, constructs periods or introduces period literals)}
function generates a period that starts at the
chronon of its first argument, and ends at the chronon of its second
argument. Hence, each time-stamp of \pref{tlang:8} represents a
period that starts/ends at the earliest/latest chronon of the temporal
element of the corresponding time-stamp of $val\_salaries$.
\subsubsection*{Literals}
\textsc{Tsql2}\xspace provides \sql{PERIOD}
\index{period@\sql{PERIOD} (\textsc{Tsql2}\xspace keyword, constructs periods or introduces period literals)}
literals, \sql{INTERVAL}
\index{interval@\sql{INTERVAL} (\textsc{Tsql2}\xspace keyword, returns intervals or introduces interval literals)}
literals, and \sql{TIMESTAMP}
\index{timestamp@\sql{TIMESTAMP} (\textsc{Tsql2}\xspace keyword, introduces chronon-denoting literals)}
literals (the use of ``\sql{TIMESTAMP}'' in this case is unfortunate;
these literals specify time-points, not time-stamps of valid-time
relations, which denote temporal elements). For example,
\sql{PERIOD '[March 3, 1995 - March 20, 1995]'} is a literal that
specifies a period at the granularity of days. If
chronons are finer than days, the assumption in
\textsc{Tsql2}\xspace is that the exact chronons within March 3 and March 20 where the period
starts and ends are unknown (section \ref{tsql2_time}). In this
thesis, \sql{PERIOD} literals that refer to granularities other than
that of chronons are abbreviations for literals that refer to the
granularity of chronons. The denoted period contains all the chronons
that fall within the granules specified by the literal. For example,
if chronons correspond to minutes,
\sql{PERIOD '[March 3, 1995 - March 20, 1995]'} is an
abbreviation for \sql{PERIOD '[00:00 March 3, 1995 - 23:59 March 20,
1995]'}.
\textsc{Tsql2}\xspace supports multiple calendars (e.g.\ Gregorian, Julian, lunar
calendar; see chapter 7 of \cite{TSQL2book}). The strings that
can appear between the quotes of \sql{PERIOD} literals (e.g.\
\sql{'[March 3, 1995 - March 20, 1995]'}, \sql{'(3/4/95 - 20/4/95]'})
depend on the available calendars and the selected formatting
options (see chapter 7 of \cite{TSQL2book}). The convention seems to
be that the boundaries are separated by a dash, and that the first and last
characters of the quoted string are square or round brackets,
depending on whether the boundaries are to be included or not.
I also assume that \sql{PERIOD 'today'} can be used (provided
that chronons are at least as fine as days) to refer to the
period that covers all the chronons of the present
day. (There are other \textsc{Tsql2}\xspace expressions that can be used to refer to
the current day, but I would have to discuss \textsc{Tsql2}\xspace
granularity-conversion commands to explain these.
Assuming that \sql{PERIOD 'today'} is available allows
me to avoid these commands.)
\sql{TIMESTAMP} literals specify chronons. Only the following special
\sql{TIMESTAMP} literals are used in this thesis: \sql{TIMESTAMP
'beginning'}, \sql{TIMESTAMP 'forever'}, \sql{TIMESTAMP 'now'}. These
refer to the beginning of time, the end of time, and the present chronon.
An example of an \sql{INTERVAL} literal is \sql{INTERVAL '5' DAY},
which specifies a duration of five consecutive day-granules. The
available granularities depend on the calendars that are active. The
granularities of years, months, days, hours, minutes, and seconds are
supported by default. Intervals can also be used to shift periods or
chronons towards the past or the future. For example, \sql{PERIOD
'[1991 - 1995]' + INTERVAL '1' YEAR} is the same as \sql{PERIOD '[1992
- 1996]'}. If chronons correspond to minutes, \sql{PERIOD(TIMESTAMP
'beginning', TIMESTAMP 'now' - INTERVAL '1' MINUTE)} specifies the
period that covers all the chronons from the beginning of time up to
(but not including) the current chronon.
\subsubsection*{Other TSQL2 functions and predicates}
The \sql{INTERSECT}
\index{intersect@\sql{INTERSECT} (\textsc{Tsql2}\xspace keyword, computes the intersection of two sets of chronons)}
function computes the intersection of two sets of
chronons.\footnote{Section 8.3.3 of \cite{TSQL2book} requires both
arguments of \sql{INTERSECT} to denote periods, but section 30.14
allows the arguments of \sql{INTERSECT} to denote temporal elements. I
follow the latter. I also allow the arguments of \sql{INTERSECT} to
denote the empty set.} For example, \sql{INTERSECT(PERIOD '[May 1,
1995 - May 10, 1995]', PERIOD '[May 3, 1995 - May 15, 1995]')} is the
same as \sql{PERIOD '[May 3, 1995 - May 10, 1995]'}. If the intersection is
the empty set, \sql{INTERSECT} returns \ensuremath{v_\varepsilon}\xspace (section
\ref{bcdm}).
The \sql{CONTAINS}
\index{contains@\sql{CONTAINS} (\textsc{Tsql2}\xspace keyword, checks if a chronon belongs to a set of chronons)}
predicate checks if a chronon belongs to a
set of chronons. For example, if $val\_salaries$ is as in
\pref{tlang:4}, \pref{tlang:9} generates a snapshot relation showing
the current salary of each employee. \sql{CONTAINS} can also be used
to check if a set of chronons is a subset of another set of
chronons.\footnote{Table 8.7 in section 8.3.6 and additional syntax
rule 3 in section 32.4 of \cite{TSQL2book} allow the arguments of
\sql{CONTAINS} to denote periods but not generally temporal
elements. Table 32.1 in section 32.4 of \cite{TSQL2book}, however,
allows the arguments of \sql{CONTAINS} to denote temporal elements. I
follow the latter. I also allow the arguments of \sql{CONTAINS} to
denote the empty set. The same comments apply in the case of
\sql{PRECEDES}.}
\begin{examps}
\item \label{tlang:9}
\select{SELECT DISTINCT SNAPSHOT sal.employee, sal.salary \\
FROM val\_salaries AS sal \\
WHERE VALID(sal) CONTAINS TIMESTAMP 'now'}
\end{examps}
The \sql{PRECEDES}
\index{precedes@\sql{PRECEDES} (\textsc{Tsql2}\xspace keyword, checks temporal precedence)}
predicate checks if a chronon or set of chronons strictly precedes
another chronon or set of chronons. Section 8.3.6 of
\cite{TSQL2book} specifies the semantics of \sql{PRECEDES} only in
cases where its arguments are chronons or periods. I assume that
$expr_1$ \sql{PRECEDES} $expr_2$ is true, iff the chronon of $expr_1$
(if $expr_1$ specifies a single chronon) or all the chronons of
$expr_1$ (if $expr_1$ specifies a set of chronons) strictly precede
the chronon of $expr_2$ (if $expr_2$ specifies a single chronon) or
all the chronons of $expr_2$ (if $expr_2$ specifies a set of
chronons). For example,
\sql{PERIOD '[1/6/95 - 21/6/95]' PRECEDES PERIOD '[24/6/95 -
30/6/95]'} is true, but \sql{PERIOD '[1/6/95 - 21/6/95]' PRECEDES
PERIOD '[19/6/95 - 30/6/95]'} is not.
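The \sql{PRECEDES} semantics assumed here can be sketched in Python as
follows, with chronons represented by integers (a single chronon as an
int, a set of chronons as any iterable of ints); the names are
illustrative only.
\begin{verbatim}
# Sketch of the PRECEDES semantics assumed in this thesis: every chronon
# of the first argument strictly precedes every chronon of the second.
def precedes(expr1, expr2):
    c1 = {expr1} if isinstance(expr1, int) else set(expr1)
    c2 = {expr2} if isinstance(expr2, int) else set(expr2)
    return all(a < b for a in c1 for b in c2)

# with chronons numbered by day of June 1995:
assert precedes(range(1, 22), range(24, 31))      # [1/6 - 21/6] before [24/6 - 30/6]
assert not precedes(range(1, 22), range(19, 31))  # the periods overlap
\end{verbatim}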
\subsubsection*{Embedded SELECT statements}
\textsc{Tsql2}\xspace (and \textsc{Sql-92}\xspace) allow embedded \sql{SELECT} statements to be used
in the \sql{FROM} clause, in the same way that relation names are
used (e.g.\ \pref{tlang:10}).
\begin{examps}
\item \label{tlang:10}
\select{SELECT DISTINCT SNAPSHOT sal2.salary \\
FROM (\select{SELECT DISTINCT sal1.salary \\
VALID VALID(sal1) \\
FROM val\_salaries AS sal1} \\
\ \ \ \ \ ) AS sal2 \\
WHERE sal2.salary > 17500}
\end{examps}
Assuming that $val\_salaries$ is as in \pref{tlang:4}, the embedded
\sql{SELECT} statement above simply drops the $employee$ attribute of
$val\_salaries$, generating \pref{tlang:11}. \sql{sal2} ranges over
the tuples of \pref{tlang:11}. \pref{tlang:10} generates a relation
that is the same as \pref{tlang:11}, except that tuples whose $salary$
values are not greater than 17500 are dropped.
\begin{examps}
\item \label{tlang:11}
\dbtablec{|l||l|}
{$salary$ &}
{$17000$ & $[1/1/92, \; 12/6/92] \union [8/5/94, \; 30/10/94]$ \\
$18000$ & $[13/6/92, \; 7/5/94] \union [31/10/94, \; now]$ \\
$21000$ & $[15/6/92, \; now]$
}
\end{examps}
\subsubsection*{Partitioning units}
In \textsc{Tsql2}\xspace, relation names and embedded \sql{SELECT} statements in the
\sql{FROM} clause can be followed by \emph{partitioning
units}.\footnote{Section 30.3 of \cite{TSQL2book} allows relation
names but not embedded \sql{SELECT} statements to be followed by partitioning
units in \sql{FROM} clauses. \cite{Snodgrass1994d} (queries
Q.1.2.2, Q.1.2.5, Q.1.7.6), however, shows \sql{SELECT} statements
embedded in \sql{FROM} clauses and followed by partitioning units. I
follow \cite{Snodgrass1994d}.}
\textsc{Tsql2}\xspace currently provides two partitioning units: \sql{(PERIOD)} and
\sql{(INSTANT)}
\index{instant@\sql{(INSTANT)} (\textsc{Tsql2}\xspace partitioning unit)}
(see section 30.3 and chapter 12 of \cite{TSQL2book}). \sql{(INSTANT)}
is not used in this thesis. Previous \textsc{Tsql2}\xspace
versions (e.g.\ the September 1994 version
of chapter 12 of \cite{TSQL2book}) provided an additional
\sql{(ELEMENT)}. For reasons explained below, \sql{(ELEMENT)} is still
used in this thesis.
\sql{(ELEMENT)}
\index{element@\sql{(ELEMENT)} (\textsc{Tsql2}\xspace partitioning unit)}
merges value-equivalent tuples.\footnote{The semantics of
\sql{(ELEMENT)} was never clear. The discussion here reflects my
understanding of the September 1994 \textsc{Tsql2}\xspace documentation, and the
semantics that is assigned to \sql{(ELEMENT)} in this thesis.} For
example, if $rel1$ is the relation of \pref{pus:1}, \pref{pus:2}
generates the coalesced relation of
\pref{pus:3}.
\begin{examps}
\item \label{pus:1}
\dbtable{3}{|l|l||l|}{$rel1$}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $[1986, \; 1988]$ \\
$J.Adams$ & $17000$ & $[1987, \; 1990]$ \\
$J.Adams$ & $17000$ & $[1992, \; 1994]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1990]$ \\
$G.Papas$ & $14500$ & $[1990, \; 1992]$
}
\item \label{pus:2}
\select{SELECT DISTINCT r1.employee, r1.salary \\
VALID VALID(r1) \\
FROM rel1(ELEMENT) AS r1}
\item \label{pus:3}
\dbtablec{|l|l||l|}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $[1986, \; 1990] \union [1992, \; 1994]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1992]$
}
\end{examps}
The effect of \sql{(ELEMENT)} on a valid-time relation $r$ is captured
by the $coalesce$ function:
\index{coalesce@$coalesce()$ (effect of \sql{(ELEMENT)})}
\[
\begin{aligned}
coalesce(r) \defeq
\{&\tup{v_1, \dots, v_n; v_t} \mid
\tup{v_1, \dots, v_n; v_t'} \in r \text{ and} \\
&f_D(v_t) = \bigcup_{\tup{v_1, \dots, v_n; v_t''} \in r}f_D(v_t'') \}
\end{aligned}
\]
\sql{(ELEMENT)} has no effect on already coalesced valid-time relations.
Hence, in the \textsc{Bcdm}\xspace version of
\cite{TSQL2book}, where all valid-time relations are
coalesced, \sql{(ELEMENT)} is redundant (and this is probably why it
was dropped). In this thesis, valid-time relations are not necessarily
coalesced (section \ref{bcdm}), and \sql{(ELEMENT)} plays an important role.
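As an illustration, the effect of $coalesce$ can be sketched in Python
using the valid-time representation of the earlier sketches (tuples
whose last element is a time-stamp, identified with a frozenset of
integer chronons).
\begin{verbatim}
# Sketch of coalesce(): merge value-equivalent tuples by taking the
# union of their time-stamps.
def coalesce(r):
    merged = {}
    for *values, ts in r:
        key = tuple(values)
        merged[key] = merged.get(key, frozenset()) | ts
    return {(*key, ts) for key, ts in merged.items()}
\end{verbatim}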
\sql{(PERIOD)}
\index{period2@\sql{(PERIOD)} (\textsc{Tsql2}\xspace partitioning unit)}
intuitively breaks each tuple of a valid-time relation into
value-equivalent tuples, each corresponding to a maximal period of the
temporal element of the original time-stamp. Assuming, for example,
that $rel2$ is the relation of \pref{pus:3}, \pref{pus:4} generates
\pref{pus:5}.
\begin{examps}
\item \label{pus:4}
\select{SELECT DISTINCT r2.employee, r2.salary \\
VALID VALID(r2) \\
FROM rel2(PERIOD) AS r2}
\item \label{pus:5}
\dbtablec{|l|l||l|}
{$employee$ & $salary$ &}
{$J.Adams$ & $17000$ & $[1986, \; 1990]$ \\
$J.Adams$ & $17000$ & $[1992, \; 1994]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1992]$
}
\end{examps}
As the example shows, \sql{(PERIOD)} may generate non-coalesced
relations. This is mysterious in the \textsc{Bcdm}\xspace version of
\cite{TSQL2book}, where non-coalesced valid-time
relations are not allowed. The assumption seems to be that
although non-coalesced valid-time relations are not allowed, during
the execution of \sql{SELECT} statements temporary non-coalesced
valid-time relations may be generated. Any resulting
valid-time relations, however, are coalesced automatically at the end of
the statement's execution. \pref{pus:5} would be coalesced
automatically at the end of the execution of \pref{pus:4} (cancelling,
in this particular example, the effect of \sql{(PERIOD)}).
In this thesis, no automatic coalescing takes place, and the result of
\pref{pus:4} is \pref{pus:5}.
To preserve the spirit of \sql{(PERIOD)} in the \textsc{Bcdm}\xspace version of this
thesis where valid-time relations are not necessarily coalesced, I
assume that \sql{(PERIOD)} operates on a coalesced copy of the original
relation. Intuitively, \sql{(PERIOD)} first causes \pref{pus:1} to
become \pref{pus:3}, and then generates \pref{pus:5}.
The effect of \sql{(PERIOD)} on a
valid-time relation $r$ is captured by the $pcoalesce$ function:
\index{pcoalesce@$pcoalesce()$ (effect of \sql{(PERIOD)})}
\[
\begin{aligned}
pcoalesce(r) \defeq
\{&\tup{v_1, \dots, v_n; v_t} \mid
\tup{v_1, \dots, v_n; v_t'} \in coalesce(r) \text{ and} \\
&f_D(v_t) \in mxlpers(f_D(v_t'))\}
\end{aligned}
\]
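Correspondingly, $pcoalesce$ can be sketched as below; the sketch
builds on (and is not self-contained without) the $coalesce$ and
$mxlpers$ sketches given earlier.
\begin{verbatim}
# Sketch of pcoalesce(), reusing the coalesce() and mxlpers() sketches
# above: one tuple per maximal period of each coalesced time-stamp.
def pcoalesce(r):
    result = set()
    for *values, ts in coalesce(r):
        for period in mxlpers(ts):
            result.add((*values, period))
    return result
\end{verbatim}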
\section{Modifications of TSQL2} \label{TSQL2_mods}
This thesis adopts some modifications of \textsc{Tsql2}\xspace. Some of the
modifications were mentioned in section \ref{TSQL2_intro}. The main
ones were:
\begin{itemize}
\item The requirement that all valid-time relations must be
coalesced was dropped.
\item The distinction between state and event valid-time relations was
abandoned.
\item \sql{(ELEMENT)} was re-introduced.
\item The semantics of \sql{(PERIOD)} was
enhanced, to reflect the fact that in this thesis
valid-time relations are not necessarily coalesced.
\item All periods and temporal
elements are specified at the granularity of chronons. Literals
referring to other granularities are used as abbreviations for
literals that refer to the granularity of chronons.
\end{itemize}
This section describes the remaining \textsc{Tsql2}\xspace modifications of this
thesis.
\subsection{Referring to attributes by number} \label{by_num}
In \textsc{Tsql2}\xspace (and \textsc{Sql-92}\xspace) explicit attributes are referred to by their
names. In \pref{tlang:1b}, for example, \sql{sal.salary} refers to the
$salary$ attribute of $val\_salaries$.
\begin{examps}
\item \label{tlang:1b}
\select{SELECT DISTINCT sal.salary \\
VALID VALID(sal) \\
FROM val\_salaries AS sal}
\end{examps}
In the \textsc{Tsql2}\xspace version of this thesis, explicit attributes are referred
to by number, with numbers corresponding to the order in which the
attributes appear in the relation schema (section
\ref{relational}). For example, if the relation schema of
$val\_salaries$ is $\tup{employee, salary}$, $employee$ is the first
explicit attribute and $salary$ the second one.
\pref{tlang:1c} would be used instead of \pref{tlang:1b}. To refer to
the implicit attribute, one still uses \sql{VALID} (e.g.\ \sql{VALID(sal)}).
\begin{examps}
\item \label{tlang:1c}
\select{SELECT DISTINCT sal.2 \\
	VALID VALID(sal) \\
	FROM val\_salaries AS sal}
\end{examps}
Referring to explicit attributes by number simplifies the \textsc{Top}\xspace to
\textsc{Tsql2}\xspace translation, because this way there is no need to keep track of
the attribute names of the various relations.
\subsection{(SUBPERIOD) and (NOSUBPERIOD)} \label{new_pus}
Two new partitioning units, \sql{(SUBPERIOD)} and \sql{(NOSUBPERIOD)},
were introduced for the purposes of this thesis. \sql{(SUBPERIOD)} is
designed to be used with relations from \ensuremath{\mathit{VREL}_P}\xspace (section \ref{bcdm}).
The effect of \sql{(SUBPERIOD)} \index{subperiod2@\sql{(SUBPERIOD)}
(\textsc{Tsql2}\xspace partitioning unit)} on a relation $r$ is captured by the
$subperiod$ function: \index{subperiod@$subperiod()$ (effect of
\sql{(SUBPERIOD)})}
\[
subperiod(r) \defeq
\{ \tup{v_1, \dots, v_n; v_t} \mid
\tup{v_1, \dots, v_n; v_t'} \in r \text{ and }
f_D(v_t) \subper f_D(v_t') \}
\]
For each tuple $\tup{v_1, \dots, v_n; v_t'} \in r$, the resulting
relation contains many value-equivalent tuples of the form $\tup{v_1,
\dots, v_n; v_t}$, one for each period $f_D(v_t)$ that is a subperiod of
$f_D(v_t')$. Assuming, for example, that chronons correspond to years, and
that $rel$ is the relation of \pref{subper:0}, \pref{subper:1}
returns the relation of \pref{subper:2}.
\begin{examps}
\item \label{subper:0}
\dbtableb{|l|l||l|}
{$J.Adams$ & $17000$ & $[1992, \; 1993]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1990]$ \\
$G.Papas$ & $14500$ & $[1990, \; 1991]$
}
\item \label{subper:1}
\sql{SELECT DISTINCT r.1, r.2 \\
VALID VALID(r) \\
FROM rel(SUBPERIOD) AS r}
\item \label{subper:2}
\dbtableb{|l|l||l|}
{$J.Adams$ & $17000$ & $[1992, \; 1993]$ \\
$J.Adams$ & $17000$ & $[1992, \; 1992]$ \\
$J.Adams$ & $17000$ & $[1993, \; 1993]$ \\
& & \\
$G.Papas$ & $14500$ & $[1988, \; 1990]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1988]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1989]$ \\
$G.Papas$ & $14500$ & $[1989, \; 1989]$ \\
$G.Papas$ & $14500$ & $[1989, \; 1990]$ \\
$G.Papas$ & $14500$ & $[1990, \; 1990]$ \\
& & \\
$G.Papas$ & $14500$ & $[1990, \; 1991]$ \\
$G.Papas$ & $14500$ & $[1991, \; 1991]$
}
\end{examps}
The first three tuples of \pref{subper:2} correspond to the
first tuple of \pref{subper:0}. The following six tuples correspond
to the first tuple of $G.Papas$ in \pref{subper:0}. The remaining
tuples of \pref{subper:2} derive from the second tuple of $G.Papas$
in \pref{subper:0} (the tuple for the subperiod $[1990, \;
1990]$ has already been included in \pref{subper:2}).
Notice that \sql{(SUBPERIOD)} does not coalesce the original relation
before generating the result (this is why
there is no tuple for G.Papas time-stamped by $[1988, \; 1991]$ in
\pref{subper:2}).
Obviously, the cardinality of the resulting relations can be very
large (especially if chronons are very fine, e.g.\ seconds). The
cardinality, however, is never infinite (assuming that the cardinality
of the original relation is finite): given that time is discrete,
linear, and bounded, any period $p$ is a finite set of chronons, and
there is at most a finite number of periods (convex sets of chronons)
that are subperiods (subsets) of $p$\/; hence, for any tuple in the
original relation whose time-stamp represents a period $p$, there will
be at most a finite number of tuples in the resulting relation whose
time-stamps represent subperiods of $p$. It remains, of course, to be
examined if \sql{(SUBPERIOD)} can be supported efficiently in
\textsc{Dbms}\xspace{s}. It is obviously very inefficient to store (or print)
individually all the tuples of the resulting relation. A more
space-efficient encoding of the resulting relation is needed. I have
not explored this issue.
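For illustration only, $subperiod$ can be sketched in Python as below,
again with period-denoting time-stamps represented as frozensets of
consecutive integer chronons; as noted above, materialising every
subperiod in this way is inefficient.
\begin{verbatim}
# Sketch of subperiod(): one tuple per subperiod (contiguous subset) of
# each period-denoting time-stamp.
def subperiods(period):
    chronons = sorted(period)
    for i in range(len(chronons)):
        for j in range(i, len(chronons)):
            yield frozenset(chronons[i:j + 1])

def subperiod(r):
    return {(*values, sp)
            for *values, ts in r
            for sp in subperiods(ts)}
\end{verbatim}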
Roughly speaking, \sql{(SUBPERIOD)} is needed because during the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace translation every \textsc{Top}\xspace formula
is mapped to a valid-time relation whose time-stamps
denote the event-time periods where the formula is true. Some (but not
all) formulae are homogeneous (section
\ref{denotation}). For these formulae we need to ensure that
if the valid-time relation contains a tuple for
an event-time $et$, it also contains tuples for
all the subperiods of $et$. This will become clearer in section
\ref{trans_rules}.
\sql{(NOSUBPERIOD)}
\index{nosubperiod2@\sql{(NOSUBPERIOD)} (\textsc{Tsql2}\xspace partitioning unit)}
is roughly speaking used when the effect of
\sql{(SUBPERIOD)} needs to be cancelled.
\sql{(NOSUBPERIOD)} is designed to be used with
relations from \ensuremath{\mathit{VREL}_P}\xspace. It eliminates any tuple $\tup{v_1, \dots, v_n;
v_t}$, for which there is a value-equivalent tuple $\tup{v_1, \dots,
v_n; v_t'}$, such that $f_D(v_t) \propsubper f_D(v_t')$.
The effect of \sql{(NOSUBPERIOD)} on a valid-time relation $r$
is captured by the $nosubperiod$ function:
\index{nosubperiod@$nosubperiod()$ (effect of \sql{(NOSUBPERIOD)})}
\[
\begin{aligned}
nosubperiod(r) \defeq
\{ \tup{v_1, \dots, v_n; v_t} \in r \mid
&\text{ there is no } \tup{v_1, \dots, v_n; v_t'} \in r \\
&\text{ such that } f_D(v_t) \propsubper f_D(v_t') \}
\end{aligned}
\]
Applying \sql{(NOSUBPERIOD)} to \pref{subper:2} generates
\pref{subper:3}.
\begin{examps}
\item \label{subper:3}
\dbtableb{|l|l||l|}
{$J.Adams$ & $17000$ & $[1992, \; 1993]$ \\
$G.Papas$ & $14500$ & $[1988, \; 1990]$ \\
$G.Papas$ & $14500$ & $[1990, \; 1991]$
}
\end{examps}
Although \sql{(SUBPERIOD)} and \sql{(NOSUBPERIOD)} are designed to be
used (and in practice will always be used) with relations from \ensuremath{\mathit{VREL}_P}\xspace,
I allow \sql{(SUBPERIOD)} and \sql{(NOSUBPERIOD)} to be used with any
valid-time relation. In the proofs of appendix
\ref{trans_proofs}, this saves me having to prove that the
original relation is an element of \ensuremath{\mathit{VREL}_P}\xspace whenever \sql{(SUBPERIOD)}
and \sql{(NOSUBPERIOD)} are used.
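A corresponding Python sketch of $nosubperiod$ follows, with the proper
subset relation on frozensets of chronons modelling the proper
subperiod relation; as before, the representation is illustrative only.
\begin{verbatim}
# Sketch of nosubperiod(): drop each tuple whose time-stamp denotes a
# proper subperiod of the time-stamp of some value-equivalent tuple.
def nosubperiod(r):
    return {t for t in r
            if not any(t[:-1] == o[:-1] and t[-1] < o[-1] for o in r)}
\end{verbatim}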
\subsection{Calendric relations} \label{calrels}
As mentioned in section \ref{tsql2_lang}, \textsc{Tsql2}\xspace supports multiple
calendars. Roughly speaking, a \textsc{Tsql2}\xspace calendar describes a system that
people use to measure time (Gregorian calendar, Julian calendar,
etc.). \textsc{Tsql2}\xspace calendars also specify the meanings of strings within
the quotes of temporal literals, and the available
granularities. According to section 3.2 of \cite{TSQL2book}, \textsc{Tsql2}\xspace
calendars are defined by the database administrator, the \textsc{Dbms}\xspace vendor,
or third parties. In this thesis, I assume that \textsc{Tsql2}\xspace calendars can
also provide \emph{calendric relations}. Calendric relations behave
like ordinary relations in the database, except that they are
defined by the creator of the \textsc{Tsql2}\xspace calendar, and cannot be updated.
The exact purpose and contents of each calendric relation are left to
the calendar creator. I assume, however, that a calendric relation
provides information about the time-measuring system of the
corresponding \textsc{Tsql2}\xspace calendar.\footnote{Future work could establish a
more systematic link between calendric relations and
\textsc{Tsql2}\xspace calendars. For example, calendric relations could be required to
reflect (as a minimum) the lattice that shows how the granularities of
the calendar relate to each other (section \ref{tsql2_time}).} The
Gregorian \textsc{Tsql2}\xspace calendar could, for example, provide the calendric
valid-time relation $gregorian$ below. (I assume here that chronons are
finer than minutes.)
\adbtable{7}{|c|c|c|c|c|c||c|}{$gregorian$}
{$year$ & $month$ & $dnum$ & $dname$ & $hour$ & $minute$ &}
{
\ \dots & \ \dots & \ \dots & \ \dots & \ \dots & \ \dots & \ \dots \\
$1994$ & $Sept$ & $4$ & $Sun$ & $00$ & $00$ & $\{c_{n_1}, \dots, c_{n_2}\}$ \\
$1994$ & $Sept$ & $4$ & $Sun$ & $00$ & $01$ & $\{c_{n_3}, \dots, c_{n_4}\}$ \\
\ \dots & \ \dots & \ \dots & \ \dots & \ \dots & \ \dots & \ \dots \\
$1995$ & $Dec$ & $5$ & $Tue$ & $21$ & $35$ & $\{c_{n_5}, \dots, c_{n_6}\}$ \\
\ \dots & \ \dots & \ \dots & \ \dots & \ \dots & \ \dots & \ \dots
}
The relation above means that the first minute (00:00) of September
4th 1994 (which was a Sunday) covers exactly the period that starts at
the chronon $c_{n_1}$ and ends at the chronon $c_{n_2}$. Similarly,
the period that starts at $c_{n_3}$ and ends at $c_{n_4}$ is the
second minute (00:01) of September 4th 1994. Of course, the
cardinality of $gregorian$ is very large, though not infinite (time in
\textsc{Tsql2}\xspace is bounded, and hence there is at most a finite number of
minute-granules). It is important, however, to realise that although
$gregorian$ behaves like a normal relation in the database, it does
not need to be physically present in the database. Its tuples could be
computed dynamically, whenever they are needed, using some algorithm
specified by the \textsc{Tsql2}\xspace calendar. Other calendric relations may list
the periods that correspond to seasons (spring-periods,
summer-periods, etc.), special days (e.g.\ Easter days), etc.
Calendric relations like $gregorian$ can be used to construct
relations that represent the periods of partitionings. \pref{calrels:6}, for
example, constructs a one-attribute snapshot relation, that contains
all the time-stamps of $gregorian$ that correspond to
21:36-minutes. The resulting relation represents all the
periods of the partitioning of 21:36-minutes.
\begin{examps}
\item \select{SELECT DISTINCT SNAPSHOT VALID(greg) \\
FROM gregorian AS greg \\
WHERE greg.5 = 21 AND greg.6 = 36}
\label{calrels:6}
\end{examps}
Similarly, \pref{calrels:5} generates a one-attribute snapshot
relation that represents the periods of the partitioning of
Sunday-periods. The embedded \sql{SELECT} statement generates a
valid-time relation of one explicit attribute (whose value is
$\mathit{Sun}$ in all tuples). The time-stamps of this relation are
all the time-stamps of $gregorian$ that correspond to
Sundays (there are many tuples for each Sunday). The
\sql{(PERIOD)} coalesces tuples that correspond to the same Sunday,
leading to a single period-denoting time-stamp for each Sunday. These
time-stamps become the attribute values of the
relation generated by the overall \pref{calrels:5}.
\begin{examps}
\item \select{SELECT DISTINCT SNAPSHOT VALID(greg2) \\
FROM (\select{SELECT DISTINCT greg1.4 \\
VALID VALID(greg1) \\
FROM gregorian AS greg1 \\
WHERE greg1.4 = 'Sun'} \\
\ \ \ \ \ )(PERIOD) AS greg2}
\label{calrels:5}
\end{examps}
In \cite{Androutsopoulos1995b} we argue that calendric relations
constitute a generally useful addition to \textsc{Tsql2}\xspace, and that unless
appropriate calendric relations are available, it
is not possible to formulate \textsc{Tsql2}\xspace queries for questions involving
existential or universal quantification or counts over day-names,
month names, season-names, etc.\ (e.g.\ \pref{calrels:1} --
\pref{calrels:3}).
\begin{examps}
\item Which technicians were at some site on a Sunday?
\label{calrels:1}
\item Which technician was at Glasgow Central on every Monday in 1994?
\label{calrels:2}
\item On how many Sundays was J.Adams at Glasgow Central in 1994?
\label{calrels:3}
\end{examps}
\subsection{The INTERVAL function} \label{interv_fun}
\index{interval@\sql{INTERVAL} (\textsc{Tsql2}\xspace keyword, returns intervals or introduces interval literals)}
\textsc{Tsql2}\xspace provides a function \sql{INTERVAL} that accepts a
period-denoting expression as its argument, and returns an interval
reflecting the duration of the period. The assumption seems to be that
the resulting interval is specified at whatever granularity the period
is specified. For example, \sql{INTERVAL(PERIOD '[1/12/95 -
3/12/95]')} is the same as \sql{INTERVAL '3' DAY}. In this thesis, all
periods are specified at the granularity of chronons, and if chronons
correspond to minutes, \sql{PERIOD '[1/12/95 - 3/12/95]'} is an
abbreviation for \sql{PERIOD '[00:00 1/12/95 - 23:59 3/12/95]'}
(sections \ref{tsql2_time} and \ref{tsql2_lang}). Hence, the results of
\sql{INTERVAL} are always specified at the granularity of chronons.
When translating from \textsc{Top}\xspace to \textsc{Tsql2}\xspace, however, there are cases where
we want the results of \sql{INTERVAL} to be specified at other granularities.
This could be achieved by converting the results of
\sql{INTERVAL} to the desired granularities. The \textsc{Tsql2}\xspace mechanisms for
converting intervals from one granularity to another, however, are
very obscure (see section 19.4.6 of \cite{TSQL2book}). To avoid these
mechanisms, I introduce an additional version of the \sql{INTERVAL}
function. If $expr_1$ is a \textsc{Tsql2}\xspace expression that specifies a period
$p$, and $expr_2$ is the \textsc{Tsql2}\xspace name (e.g.\
\sql{DAY}, \sql{MONTH}) of a granularity $G$, then
\sql{INTERVAL(}$expr_1$, $expr_2$\sql{)} specifies an interval of $n$
granules (periods) of $G$, where $n$ is as follows. If there
are $k$ consecutive granules $g_1, g_2, g_3, \dots, g_k$ in $G$ such
that $g_1 \union g_2 \union g_3 \union \dots \union g_k = p$, then $n
= k$. Otherwise, $n = 0$. For example,
\sql{INTERVAL(PERIOD '[May 5, 1995 - May 6, 1995]', DAY)} is
the same as \sql{INTERVAL '2' DAY}, because the period covers exactly
2 consecutive day-granules. Similarly, \sql{INTERVAL(PERIOD '[May 1,
1995 - June 30, 1995]', MONTH)} is the same as \sql{INTERVAL '2'
MONTH}, because the period covers exactly two consecutive
month-granules. In contrast, \sql{INTERVAL(PERIOD '[May 1, 1995 - June
15, 1995]', MONTH)} is the same as \sql{INTERVAL '0' MONTH} (zero
duration), because there is no union of consecutive month-granules
that covers exactly the period of \sql{PERIOD '[May 1, 1995 -
June 15, 1995]'}.
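To illustrate how this version of \sql{INTERVAL} could be used ($rel$ is a
hypothetical valid-time relation, and the statement below is only an
illustrative sketch), \pref{intervfun:1} selects the first explicit
attribute of those tuples of $rel$ whose time-stamps cover exactly two
consecutive day-granules.
\begin{examps}
\item \select{SELECT DISTINCT SNAPSHOT r.1 \\
        FROM rel AS r \\
        WHERE INTERVAL(VALID(r), DAY) = INTERVAL '2' DAY}
\label{intervfun:1}
\end{examps}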
\subsection{Correlation names used in the same FROM clause where they
are defined} \label{same_FROM}
The syntax of \textsc{Tsql2}\xspace (and \textsc{Sql-92}\xspace) does not allow a correlation name to
be used in a \sql{SELECT} statement that is embedded in the same
\sql{FROM} clause that defines the correlation name. For example,
\pref{sfrom:1} is not allowed, because the embedded \sql{SELECT}
statement uses \sql{r1}, which is defined by the same \sql{FROM}
clause that contains the embedded \sql{SELECT} statement.
\begin{examps}
\item \label{sfrom:1}
\select{SELECT \dots \\
VALID VALID(r1) \\
FROM rel1 AS r1, \\
\ \ \ \ \ (\select{SELECT \dots \\
VALID VALID(r2) \\
FROM rel2 AS r2 \\
WHERE VALID(r1) CONTAINS VALID(r2)} \\
\ \ \ \ \ ) AS r3 \\
WHERE \dots}
\end{examps}
By \emph{definition of a correlation name} $\alpha$, I mean the expression
``\sql{AS $\alpha$}'' that associates $\alpha$ with a relation. For
example, in \pref{sfrom:1} the definition of \sql{r1} is the ``\sql{AS
r1}''.\footnote{In \sql{SELECT} statements that contain other embedded
\sql{SELECT} statements, multiple definitions of the same correlation
name may be present (there are rules that determine the scope of each
definition). We do not need to worry about such cases, however,
because the generated \textsc{Tsql2}\xspace code of this chapter never contains
multiple definitions of the same correlation name.} A correlation
name $\alpha$ is \emph{defined by a \sql{FROM} clause} $\xi$, if $\xi$
contains the definition of $\alpha$, and this definition is not within
a \sql{SELECT} statement which is embedded in $\xi$. For example, in
\pref{sfrom:1} the \sql{r2} is defined by the ``\sql{FROM rel2 AS
r2}'' clause, not by the ``\sql{FROM rel1 AS r1, (\dots) AS r3}''
clause.
In this thesis, I allow a correlation name to be used in a
\sql{SELECT} statement that is embedded in the same \sql{FROM} clause
that defines the correlation name, provided that the definition of
the correlation name precedes the embedded \sql{SELECT} statement.
\pref{sfrom:1} is acceptable, because the definition of \sql{r1}
precedes the embedded \sql{SELECT} statement where \sql{r1} is
used. In contrast, \pref{sfrom:2} is not acceptable,
because the definition of \sql{r1} follows the
embedded \sql{SELECT} statement where \sql{r1} is used.
\begin{examps}
\item \label{sfrom:2}
\select{SELECT \dots \\
VALID VALID(r1) \\
FROM (\select{SELECT \dots \\
VALID VALID(r2) \\
FROM rel2 AS r2 \\
WHERE VALID(r1) CONTAINS VALID(r2)} \\
\ \ \ \ \ ) AS r3, \\
\ \ \ \ \ rel1 AS r1 \\
WHERE \dots}
\end{examps}
The intended semantics of statements like \pref{sfrom:1}
should be easy to see: when evaluating the embedded \sql{SELECT}
statement, \sql{VALID(r1)} should represent the time-stamp of a tuple
from $rel1$. The restriction that the definition of the correlation
name must precede the embedded \sql{SELECT} is imposed to make this
modification easier to implement.
The modification of this section is used in the \textsc{Top}\xspace to \textsc{Tsql2}\xspace
translation rules for $\ensuremath{\mathit{At}}\xspace[\phi_1,
\phi_2]$, $\ensuremath{\mathit{Before}}\xspace[\phi_1, \phi_2]$, and $\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2]$
(section \ref{trans_rules} below and appendix
\ref{trans_proofs}).
\subsection{Equality checks and different domains} \label{eq_checks}
Using the equality predicate (\sql{=}) with expressions that refer to
values from different domains often causes the \textsc{Tsql2}\xspace (or \textsc{Sql-92}\xspace)
interpreter to report an error. If, for example, the domain of the
first explicit attribute of $rel$ is the set of all integers,
\sql{r.1} in \pref{eqs:1} stands for an integer. \textsc{Tsql2}\xspace (and \textsc{Sql-92}\xspace)
does not allow integers to be compared to strings (e.g.\
``J.Adams''). Consequently, \pref{eqs:1} would be rejected, and an
error message would be generated.
\begin{examps}
\item \select{SELECT DISTINCT SNAPSHOT r.2 \\
FROM rel AS r \\
WHERE r.1 = 'J.Adams'} \label{eqs:1}
\end{examps}
In other cases (e.g.\ if a real number is compared to an integer),
type conversions take place before the comparison. To by-pass
uninteresting details, in this thesis I assume that no type
conversions occur when ``\sql{=}'' is used. The equality predicate is
satisfied iff both of its arguments refer to the same element of $D$
(universal domain). No error occurs if the arguments refer to values
from different domains. In the example of \pref{eqs:1}, \sql{r.1 =
'J.Adams'} is not satisfied, because \sql{r.1} refers to an integer in
$D$, \sql{'J.Adams'} to a string in $D$, and integers are different from
strings. Consequently, in the \textsc{Tsql2}\xspace version of this thesis
\pref{eqs:1} generates the empty relation (no errors occur).
\subsection{Other minor changes}
\textsc{Tsql2}\xspace does not allow partitioning units to follow
\sql{SELECT} statements that are not embedded into other \sql{SELECT}
statements. For example, \pref{pus:10} on its own is not acceptable.
\begin{examps}
\item \label{pus:10}
\sql{(}\select{SELECT DISTINCT r1.1, r1.2 \\
VALID VALID(r1) \\
FROM rel AS r1}\\
\sql{)(PERIOD)}
\end{examps}
\sql{SELECT} statements like \pref{pus:10} can be easily made
acceptable by embedding them into another
\sql{SELECT} statement (e.g.\ \pref{pus:11}).
\begin{examps}
\item \label{pus:11}
\select{SELECT DISTINCT r2.1, r2.2 \\
VALID VALID(r2) \\
FROM (\select{SELECT DISTINCT r1.1, r1.2 \\
VALID VALID(r1) \\
FROM rel AS r1} \\
\ \ \ \ \ )(PERIOD) AS r2}
\end{examps}
For simplicity, I allow stand-alone statements like
\pref{pus:10}. I assume that \pref{pus:10}
generates the same relation as \pref{pus:11}. I also allow stand-alone
\sql{SELECT} statements enclosed in brackets (e.g.\ \pref{pus:12}).
I assume that the enclosing brackets are simply ignored.
\begin{examps}
\item \label{pus:12}
\sql{(}\select{SELECT DISTINCT r1.1, r1.2 \\
VALID VALID(r1) \\
FROM rel AS r1}\\
\sql{)}
\end{examps}
\section{Additional TSQL2 terminology} \label{additional_tsql2}
This section defines some additional terminology that is used to formulate
and prove the correctness of the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation.
\paragraph{Column reference:} A \emph{column
reference} is an expression of the form $\alpha.i$ or
\sql{VALID(}$\alpha$\sql{)}, where $\alpha$ is a correlation name and
$i \in \{1,2,3,\dots\}$ (e.g.\ \sql{sal.2}, \sql{VALID(sal)}).
\paragraph{Binding context:} A \sql{SELECT} statement $\Sigma$ is a
\emph{binding context} for a column reference $\alpha.i$ or
\sql{VALID(}$\alpha$\sql{)} iff:
\begin{itemize}
\item the column reference is part of $\Sigma$,
\item $\alpha$ is defined (in the sense of section \ref{same_FROM}) by
the topmost \sql{FROM} clause of $\Sigma$, and
\item the column reference is not in the topmost \sql{FROM} clause of
$\Sigma$; or it is in the topmost \sql{FROM} clause of $\Sigma$, but
the definition of $\alpha$ precedes the column reference.
\end{itemize}
By \emph{topmost \sql{FROM} clause of $\Sigma$} I mean the (single)
\sql{FROM} clause of $\Sigma$ that does not appear in any \sql{SELECT}
statement embedded in $\Sigma$ (e.g.\ the topmost \sql{FROM} clause of
\pref{fcn:1} is the ``\sql{FROM tab1 AS r1, (\dots) AS r3}''). We will
often have to distinguish between individual \emph{occurrences} of column
references. For example, \pref{fcn:2} is a binding context for the
occurrence of \sql{VALID(r1)} in the \sql{VALID} clause, because that
occurrence is part of \pref{fcn:2}, \sql{r1} is defined by the topmost
\sql{FROM} clause of \pref{fcn:2}, and the occurrence of
\sql{VALID(r1)} is not in the topmost \sql{FROM} clause of
\pref{fcn:2}. \pref{fcn:2}, however, is \emph{not} a binding context
for the occurrence of \sql{VALID(r1)} in the embedded \sql{SELECT}
statement of \pref{fcn:2}, because that occurrence is in the topmost
\sql{FROM} clause, and it does not follow the definition of \sql{r1}.
\begin{examps}
\item \label{fcn:2}
\select{SELECT DISTINCT r1.1, r3.2 \\
VALID VALID(r1) \\
FROM (\select{SELECT DISTINCT SNAPSHOT r2.1, r2.2 \\
FROM tab2 AS r2 \\
WHERE VALID(r2) CONTAINS VALID(r1)} \\
\ \ \ \ \ ) AS r3, \\
\ \ \ \ \ tab1 AS r1 \\
WHERE r1.1 = 'J.Adams'}
\end{examps}
In contrast, \pref{fcn:1} \emph{is} a binding context for the
\sql{VALID(r1)} in the embedded \sql{SELECT}, because the definition
of \sql{r1} precedes that occurrence of \sql{VALID(r1)}.
\begin{examps}
\item \label{fcn:1}
\select{SELECT DISTINCT r1.1, r3.2 \\
VALID VALID(r1) \\
FROM tab1 AS r1, \\
\ \ \ \ \ (\select{SELECT DISTINCT SNAPSHOT r2.1, r2.2 \\
FROM tab2 AS r2 \\
WHERE VALID(r2) CONTAINS VALID(r1)} \\
\ \ \ \ \ ) AS r3 \\
WHERE r1.1 = 'J.Adams'}
\end{examps}
In both \pref{fcn:2} and \pref{fcn:1}, the overall \sql{SELECT}
statement is not a binding context for
\sql{r2.1}, \sql{r2.2}, and \sql{VALID(r2)}, because \sql{r2} is not
defined by the topmost \sql{FROM} clause of the overall \sql{SELECT}
statement. The embedded \sql{SELECT} statement of \pref{fcn:2} and
\pref{fcn:1}, however, \emph{is} a binding context for \sql{r2.1},
\sql{r2.2}, and \sql{VALID(r2)}.
\paragraph{Free column reference:} A column reference $\alpha.i$ or
\sql{VALID($\alpha$)} is a \emph{free column reference} in a \textsc{Tsql2}\xspace
expression $\xi$, iff:
\begin{itemize}
\item the column reference is part of $\xi$, and
\item there is no \sql{SELECT} statement in $\xi$ (possibly being the
whole $\xi$) that is a binding context for the column reference.
\end{itemize}
The \sql{VALID(r1)} in the embedded \sql{SELECT} statement of
\pref{fcn:2} is free in \pref{fcn:2}, because there is no
binding context for that occurrence in \pref{fcn:2}. In contrast, the
\sql{VALID(r1)} in the \sql{VALID} clause of \pref{fcn:2} is not free
in \pref{fcn:2}, because \pref{fcn:2} is a binding context for that
occurrence. The \sql{VALID(r2)} of \pref{fcn:2} is not free in
\pref{fcn:2}, because the embedded \sql{SELECT}
statement is a binding context for \sql{VALID(r2)}.
A correlation name $\alpha$ \emph{has a free column reference in} a
\textsc{Tsql2}\xspace expression $\xi$, iff there is a free column reference $\alpha.i$ or
\sql{VALID($\alpha$)} in $\xi$. For every \textsc{Tsql2}\xspace expression $\xi$,
$\ensuremath{\mathit{FCN}}\xspace(\xi)$
\index{fcn@$\ensuremath{\mathit{FCN}}\xspace(\xi)$ (set of all correlation names with free column references in $\xi$)}
is the set of all correlation names that have a free column reference
in $\xi$. For example, if $\xi$ is \pref{fcn:2}, $\ensuremath{\mathit{FCN}}\xspace(\xi) =
\{$\sql{r1}$\}$ (the \sql{VALID(r1)} of the embedded
\sql{SELECT} statement is free in \pref{fcn:2}).
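In contrast, if $\xi$ is \pref{fcn:1}, then $\ensuremath{\mathit{FCN}}\xspace(\xi) = \emptyset$: for
every occurrence of a column reference in \pref{fcn:1} (including the
\sql{VALID(r1)} of the embedded \sql{SELECT} statement), there is a
\sql{SELECT} statement in \pref{fcn:1} that is a binding context for that
occurrence.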
There must be no free column references in the overall \sql{SELECT}
statements that are submitted to the \textsc{Tsql2}\xspace (or \textsc{Sql-92}\xspace) interpreter
(though there may be free column references in their embedded
\sql{SELECT} statements). Hence, it is important to prove that there
are no free column references in the overall \sql{SELECT} statements
generated by the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation.
\paragraph{Value expression:}
In \textsc{Tsql2}\xspace (and \textsc{Sql-92}\xspace), \emph{value expression} refers to expressions
that normally evaluate to elements of $D$ (universal domain). (The
meaning of ``normally'' will be explained in following paragraphs.)
For example, \sql{'J.Adams'}, \sql{VALID(sal)}, and
\sql{INTERSECT(PERIOD '[1993 - 1995]', PERIOD '[1994 - 1996]')} are
all value expressions.
\paragraph{Assignment to correlation names:} An \emph{assignment to
correlation names} is a function $g^{db}$
\index{gdb@$g^{db}()$, $(g^{db})^{\alpha}_{\tup{v_1, v_2, \dots}}()$ (assignment to correlation names)}
that maps every \textsc{Tsql2}\xspace correlation name to a possible tuple of a
snapshot or valid-time relation. $G^{db}$
\index{Gdb@$G^{db}$ (set of all assignments to correlation names)}
is the set of all assignments to correlation names.
If $\alpha$ is a (particular) correlation name, $\tup{v_1, v_2,
\dots}$ is a (particular) tuple of a snapshot or valid-time relation,
and $g^{db} \in G^{db}$, $(g^{db})^{\alpha}_{\tup{v_1, v_2, \dots}}$
\index{gdb@$g^{db}()$, $(g^{db})^{\alpha}_{\tup{v_1, v_2, \dots}}()$ (assignment to correlation names)}
is the same as $g^{db}$, except that it assigns $\tup{v_1, v_2,
\dots}$ to $\alpha$. (For every other correlation name, the values of
$g^{db}$ and $(g^{db})^{\alpha}_{\tup{v_1, v_2, \dots}}$ are
identical.)
\paragraph{eval:}
\index{eval@$eval()$ (evaluates \textsc{Tsql2}\xspace expressions)}
For every \textsc{Tsql2}\xspace \sql{SELECT} statement
or value expression $\xi$, and every $st \in \ensuremath{\mathit{CHRONS}}\xspace$ and $g^{db} \in
G^{db}$, $eval(st, \xi, g^{db})$ is the relation (if $\xi$ is a
\sql{SELECT} statement) or the element of $D$ (if $\xi$ is a value
expression) that is generated when the \textsc{Tsql2}\xspace interpreter evaluates
$\xi$ in the following way:
\begin{itemize}
\item $st$ is taken to be the current chronon.
\item Every free column reference of the form
$\alpha.i$ is treated as a value expression that evaluates to $v_i$,
where $v_i$ is the $i$-th attribute value in the tuple
$g^{db}(\alpha)$.
\item Every free column reference of the form \sql{VALID($\alpha$)} is
treated as a value expression that evaluates to $v_t$, where $v_t$ is
the time-stamp of $g^{db}(\alpha)$.
\end{itemize}
If $\xi$ cannot be evaluated in this way (e.g.\ $\xi$ contains a free
column reference of the form $\alpha.4$, and $g^{db}(\alpha) =
\tup{v_1, v_2, v_3}$), $eval(st, \xi, g^{db})$ returns the special
value $error$.
\index{error@$error$ (signals evaluation error)}
(I assume that $error \not\in D$.) A value expression $\xi$
\emph{normally} (but not always) evaluates to an element of $D$,
because when errors arise $eval(st, \xi, g^{db}) = error \not\in
D$. If, however, $eval(st, \xi, g^{db}) \not= error$, $eval(st, \xi,
g^{db}) \in D$.
Strictly speaking, $eval$ should also have as its argument the
database against which $\xi$ is evaluated. For simplicity, I
overlook this detail. Finally, if $\ensuremath{\mathit{FCN}}\xspace(\xi) = \emptyset$ ($\xi$
contains no free column references), $eval(st, \xi, g^{db})$ does not
depend on $g^{db}$. In this case, I write simply $eval(st, \xi)$.
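As a brief illustration (the correlation name \sql{r} and the tuple are
hypothetical), assume that $g^{db}$ maps \sql{r} to the tuple
$\tup{v_1, v_2; v_t}$ of some valid-time relation. Then
$eval(st, \sql{r.2}, g^{db}) = v_2$,
$eval(st, \sql{VALID(r)}, g^{db}) = v_t$, and
$eval(st, \sql{r.3}, g^{db}) = error$, because the tuple has only two
explicit attribute values.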
\section{Modifications in TOP and additional TOP terminology} \label{TOP_mods}
In the formulae generated by the English to \textsc{Top}\xspace translation, each
$\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$ is conjoined with a subformula
that is of the form $\ensuremath{\mathit{At}}\xspace[\beta, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\beta, \phi]$, or
$\ensuremath{\mathit{After}}\xspace[\beta, \phi]$, or that contains another such subformula
($\sigma \in \ensuremath{\mathit{PARTS}}\xspace$, $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, $\beta \in
\ensuremath{\mathit{VARS}}\xspace$, and the $\beta$ of \ensuremath{\mathit{Part}}\xspace is the same as that of
\ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, or \ensuremath{\mathit{After}}\xspace). For example, \pref{tmods:1} and
\pref{tmods:3} are mapped to \pref{tmods:2} and \pref{tmods:4}.
Also the reading of \pref{tmods:5} where Monday is the time when the
tank was empty (rather than a reference time; section
\ref{past_perfect}) is mapped to \pref{tmods:6}.
\begin{examps}
\item Tank 2 was empty on a Monday. \label{tmods:1}
\item $\ensuremath{\mathit{Part}}\xspace[monday^g, mon^v] \land \ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$
\label{tmods:2}
\item On which Monday was tank 2 empty? \label{tmods:3}
\item $?mon^v \; \ensuremath{\mathit{Part}}\xspace[monday^g, mon^v] \land
\ensuremath{\mathit{At}}\xspace[mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{tmods:4}
\item Tank 2 had been empty on a Monday. \label{tmods:5}
\item $\ensuremath{\mathit{Part}}\xspace[monday^g, mon^v] \land
\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[mon^v, empty(tank2)]]]$ \label{tmods:6}
\end{examps}
In this chapter, I use a slightly different version of \textsc{Top}\xspace, where the
\ensuremath{\mathit{Part}}\xspace is merged with the corresponding \ensuremath{\mathit{At}}\xspace,
\ensuremath{\mathit{Before}}\xspace, or \ensuremath{\mathit{After}}\xspace. For example, \pref{tmods:2}, \pref{tmods:4}, and
\pref{tmods:6} become \pref{tmods:7}, \pref{tmods:8}, and \pref{tmods:9}
respectively.
\begin{examps}
\item $\ensuremath{\mathit{At}}\xspace[monday^g, mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{tmods:7}
\item $?mon^v \; \ensuremath{\mathit{At}}\xspace[monday^g, mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$
\label{tmods:8}
\item $\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Perf}}\xspace[e2^v, \ensuremath{\mathit{At}}\xspace[monday^g, mon^v, empty(tank2)]]]$
\label{tmods:9}
\end{examps}
The semantics of $\ensuremath{\mathit{At}}\xspace[\sigma, \beta, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\sigma, \beta,
\phi]$, and $\ensuremath{\mathit{After}}\xspace[\sigma, \beta, \phi]$ follow ($f$ is
$\ensuremath{\mathit{f_{gparts}}}\xspace$ if $\sigma \in \ensuremath{\mathit{GPARTS}}\xspace$, and $\ensuremath{\mathit{f_{cparts}}}\xspace$ if $\sigma
\in \ensuremath{\mathit{CPARTS}}\xspace$).
\begin{itemize}
\item $\denot{st,et,lt,g}{\ensuremath{\mathit{At}}\xspace[\sigma, \beta, \phi]} = T$ iff
$g(\beta) \in f(\sigma)$ and
$\denot{st, et, lt \intersect g(\beta), g}{\phi} = T$.
\item $\denot{st,et,lt,g}{\ensuremath{\mathit{Before}}\xspace[\sigma, \beta, \phi]} = T$ iff
$g(\beta) \in f(\sigma)$ and
$\denot{st, et, lt \intersect [t_{first}, minpt(\denot{g}{\beta})), g}
{\phi} = T$.
\item $\denot{st,et,lt,g}{\ensuremath{\mathit{After}}\xspace[\sigma, \beta, \phi]} = T$ iff
$g(\beta) \in f(\sigma)$ and
$\denot{st, et, lt \intersect (maxpt(\denot{g}{\beta}), t_{last}], g}
{\phi} = T$.
\end{itemize}
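As a brief illustration, consider \pref{tmods:7}. Since $monday^g \in
\ensuremath{\mathit{GPARTS}}\xspace$, $f$ is $\ensuremath{\mathit{f_{gparts}}}\xspace$, and the first clause above yields that
$\denot{st,et,lt,g}{\ensuremath{\mathit{At}}\xspace[monday^g, mon^v, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]} = T$
iff $g(mon^v) \in \ensuremath{\mathit{f_{gparts}}}\xspace(monday^g)$ (intuitively, iff $g(mon^v)$ is a
Monday-period) and
$\denot{st, et, lt \intersect g(mon^v), g}{\ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]} = T$.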
In the \textsc{Top}\xspace version of this chapter, $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$,
$\ensuremath{\mathit{At}}\xspace[\beta, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\beta, \phi]$, and $\ensuremath{\mathit{After}}\xspace[\beta, \phi]$
($\beta \in \ensuremath{\mathit{VARS}}\xspace$) are no longer yes/no formulae.
$\ensuremath{\mathit{At}}\xspace[\kappa, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\kappa, \phi]$, and $\ensuremath{\mathit{After}}\xspace[\kappa,
\phi]$ ($\kappa \in \ensuremath{\mathit{CONS}}\xspace$), however, are still
yes/no formulae.
The \textsc{Top}\xspace version of chapter \ref{TOP_chapter} is more convenient for
the English to \textsc{Top}\xspace mapping, while the version of this chapter
simplifies the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation. In the prototype \textsc{Nlitdb}\xspace,
there is a converter between the module that translates from English
to \textsc{Top}\xspace and the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator. The module that translates
from English to \textsc{Top}\xspace maps \pref{tmods:1}, \pref{tmods:3}, and
\pref{tmods:5} to \pref{tmods:2}, \pref{tmods:4}, and \pref{tmods:6}
respectively. The converter turns \pref{tmods:2}, \pref{tmods:4}, and
\pref{tmods:6} into \pref{tmods:7}, \pref{tmods:8}, and
\pref{tmods:9}, which are then passed to the \textsc{Top}\xspace to \textsc{Tsql2}\xspace
translator.
The reader is reminded that the $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta,
\nu_{ord}]$ version of \ensuremath{\mathit{Part}}\xspace is not used
in the translation from English to \textsc{Top}\xspace (section
\ref{TOP_FS}). Hence, only the $\ensuremath{\mathit{Part}}\xspace[\sigma, \beta]$ form of
\ensuremath{\mathit{Part}}\xspace is possible in formulae generated by the
English to \textsc{Top}\xspace translation. In the \textsc{Top}\xspace version of this
chapter, \ensuremath{\mathit{Part}}\xspace operators of this form are merged with \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace,
or \ensuremath{\mathit{After}}\xspace operators. Therefore, no \ensuremath{\mathit{Part}}\xspace operators occur in the
formulae that are passed to the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator.
As with the \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace, and \ensuremath{\mathit{After}}\xspace of chapter
\ref{TOP_chapter} (section \ref{top_syntax}), in every $\ensuremath{\mathit{At}}\xspace[\sigma, \beta,
\phi]$, $\ensuremath{\mathit{Before}}\xspace[\sigma, \beta, \phi]$, and $\ensuremath{\mathit{After}}\xspace[\sigma, \beta,
\phi]$, I require $\beta$ not to occur within
$\phi$. This is needed to prove the correctness of the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace translation.
To avoid complications in the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation, I require
that in any $\ensuremath{\mathit{At}}\xspace[\kappa, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\kappa, \phi]$, or
$\ensuremath{\mathit{After}}\xspace[\kappa, \phi]$ ($\kappa \in \ensuremath{\mathit{CONS}}\xspace$, $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$) that
is passed to the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator, $\ensuremath{\mathit{f_{cons}}}\xspace(\kappa) \in
\ensuremath{\mathit{PERIODS}}\xspace$. (The definitions of section \ref{at_before_after_op} are
more liberal: they allow $\ensuremath{\mathit{f_{cons}}}\xspace(\kappa)$ not to belong to \ensuremath{\mathit{PERIODS}}\xspace,
though if $\ensuremath{\mathit{f_{cons}}}\xspace(\kappa) \not\in \ensuremath{\mathit{PERIODS}}\xspace$, the denotation of
$\ensuremath{\mathit{At}}\xspace[\kappa, \phi]$, $\ensuremath{\mathit{Before}}\xspace[\kappa, \phi]$, or $\ensuremath{\mathit{After}}\xspace[\kappa,
\phi]$ is always $F$.) In practice, formulae generated by the English
to \textsc{Top}\xspace mapping never violate this constraint.
For every $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, $\corn{\phi}$
\index{'`@$\corn{}$ (corners)}
(pronounced ``corners $\phi$'') is the tuple $\tup{\tau_1, \tau_2,
\tau_3, \dots, \tau_n}$, where $\tau_1,
\dots, \tau_n$ are all the constants that are used as arguments
of predicates in $\phi$, and all the variables that occur in $\phi$,
in the same order (from left to right) they appear in $\phi$. If a
constant occurs more than once as a predicate argument in $\phi$, or
if a variable occurs more than once in $\phi$, there are multiple
$\tau_i$s in $\corn{\phi}$ for that constant or variable. If $\corn{\phi} =
\tup{\tau_1, \tau_2, \tau_3, \dots,
\tau_n}$, the \emph{length} of $\corn{\phi}$ is
$n$. For example, if:
\[
\phi = \ensuremath{\mathit{Ntense}}\xspace[t^v, woman(p^v)] \land
\ensuremath{\mathit{At}}\xspace[1991, \ensuremath{\mathit{Past}}\xspace[e^v, manager\_of(p^v, sales)]]
\]
then $\corn{\phi} = \tup{t^v, p^v, e^v, p^v, sales}$, and the
length of $\corn{\phi}$ is 5.
\section{Linking the TOP model to the database} \label{linking_model}
As discussed in section \ref{denotation}, the answer to an English
question submitted at $st$ must report the denotation
$\denot{M,st}{\phi}$ of the corresponding \textsc{Top}\xspace formula $\phi$.
$\denot{M,st}{\phi}$ follows from the semantics of \textsc{Top}\xspace, provided
that the model $M$, which intuitively provides all the necessary
information about the modelled world, has been defined. In a \textsc{Nlidb}\xspace,
the only source of information about the world is the
database.\footnote{This is not entirely true in the framework of this
thesis, as there is also a type-hierarchy of world-entities in the
\textsc{Hpsg}\xspace grammar (section \ref{HPSG_basics}).} Hence, $M$ has to be
defined in terms of the information in the database. This mainly
involves defining \ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace
(which are parts of $M$) in terms of database concepts.
\begin{figure}
\hrule
\begin{center}
\medskip
\includegraphics[scale=.6]{link_paths}
\caption{Paths from basic \textsc{Top}\xspace expressions to the modelled world}
\label{link_paths_fig}
\end{center}
\hrule
\end{figure}
\ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace show how certain basic
\textsc{Top}\xspace expressions (constants, predicates, and partitioning names)
relate to the modelled world. These functions will be defined in terms
of the functions \ensuremath{\mathit{h_{cons}}}\xspace, \ensuremath{\mathit{h_{pfuns}}}\xspace, \ensuremath{\mathit{h_{culms}}}\xspace, \ensuremath{\mathit{h_{cparts}}}\xspace, and
\ensuremath{\mathit{h_{gparts}}}\xspace (to be discussed in section \ref{h_funs}), and $f_D$ (section
\ref{relational}). Roughly speaking, the $h$ functions map basic \textsc{Top}\xspace
expressions to database constructs (attribute values or relations),
and $f_D$ maps the attribute values of these constructs to world
objects (figure \ref{link_paths_fig}). \ensuremath{\mathit{h_{cons}}}\xspace, \ensuremath{\mathit{h_{pfuns}}}\xspace, \ensuremath{\mathit{h_{culms}}}\xspace,
\ensuremath{\mathit{h_{cparts}}}\xspace, and \ensuremath{\mathit{h_{gparts}}}\xspace will in turn be defined in terms of the
functions \ensuremath{\mathit{h'_{cons}}}\xspace, \ensuremath{\mathit{h'_{pfuns}}}\xspace, \ensuremath{\mathit{h'_{culms}}}\xspace, \ensuremath{\mathit{h'_{cparts}}}\xspace, and \ensuremath{\mathit{h'_{gparts}}}\xspace (to be
discussed in section \ref{via_TSQL2}), and $eval$ (section
\ref{additional_tsql2}). The $h'$ functions map basic \textsc{Top}\xspace
expressions to \textsc{Tsql2}\xspace expressions, and $eval$ maps \textsc{Tsql2}\xspace expressions to
database constructs.
After defining the $h'$ functions, one could compute
$\denot{M,st}{\phi}$ using a reasoning system that would contain rules
encoding the semantics of \textsc{Top}\xspace, and that would use the path basic
\textsc{Top}\xspace expressions $\rightarrow$ \textsc{Tsql2}\xspace expressions $\rightarrow$
database constructs $\rightarrow$ modelled world (figure
\ref{link_paths_fig}) to compute any necessary values of \ensuremath{\mathit{f_{cons}}}\xspace,
\ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace. That is, only basic \textsc{Top}\xspace
expressions would be translated into \textsc{Tsql2}\xspace, and the \textsc{Dbms}\xspace would be
used only to evaluate the \textsc{Tsql2}\xspace translations of these expressions. The
rest of the processing to compute $\denot{M,st}{\phi}$
would be carried out by the reasoning system.
This thesis adopts an alternative approach that exploits the
capabilities of the \textsc{Dbms}\xspace to a larger extent, and that requires no
reasoning system. Based on the $h'$ functions (that map only basic
\textsc{Top}\xspace expressions to \textsc{Tsql2}\xspace expressions), a method to translate
\emph{any} \textsc{Top}\xspace formula into \textsc{Tsql2}\xspace will be developed. Each \textsc{Top}\xspace
formula $\phi$ will be mapped to a single \textsc{Tsql2}\xspace query
(figure \ref{trans_paths_fig}). This will be executed by the \textsc{Dbms}\xspace,
generating a relation that represents (via an interpretation function)
$\denot{M,st}{\phi}$. It will be proven formally that this approach
indeed generates $\denot{M,st}{\phi}$ (i.e.\ that paths 1 and 2
of figure \ref{trans_paths_fig} lead to the same result).
\begin{figure}
\hrule
\begin{center}
\medskip
\includegraphics[scale=.6]{trans_paths}
\caption{Paths from TOP formulae to their denotations}
\label{trans_paths_fig}
\end{center}
\hrule
\end{figure}
There is one further complication: the values of \ensuremath{\mathit{f_{cons}}}\xspace,
\ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace will ultimately be obtained
by evaluating \textsc{Tsql2}\xspace expressions returned by \ensuremath{\mathit{h'_{cons}}}\xspace, \ensuremath{\mathit{h'_{pfuns}}}\xspace,
\ensuremath{\mathit{h'_{culms}}}\xspace, \ensuremath{\mathit{h'_{cparts}}}\xspace, and \ensuremath{\mathit{h'_{gparts}}}\xspace. A \textsc{Tsql2}\xspace expression, however, may
generate different results when evaluated at different times (e.g.\ a
\sql{SELECT} statement may return different results after a
database relation on which the statement operates has been
updated). This causes the values of \ensuremath{\mathit{f_{cons}}}\xspace,
\ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace to become sensitive to the
time at which the \textsc{Tsql2}\xspace expressions of the $h'$ functions are
evaluated. We want this time to be $st$, so that the \textsc{Tsql2}\xspace expressions
of the $h'$ functions will operate on the information that is in the
database when the question is submitted, and so that a \textsc{Tsql2}\xspace literal
like \sql{PERIOD 'today'} (section \ref{tsql2_lang}) in the
expressions of the $h'$ functions will be correctly taken to refer to
the day that contains $st$. To accommodate this, \ensuremath{\mathit{f_{cons}}}\xspace,
\ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace must be made sensitive to
$st$:
\ensuremath{\mathit{f_{cons}}}\xspace becomes a function $\ensuremath{\mathit{PTS}}\xspace \mapsto (\ensuremath{\mathit{CONS}}\xspace \mapsto \ensuremath{\mathit{OBJS}}\xspace)$ instead
of $\ensuremath{\mathit{CONS}}\xspace \mapsto \ensuremath{\mathit{OBJS}}\xspace$. This allows the world objects that are
assigned to \textsc{Top}\xspace constants via \ensuremath{\mathit{f_{cons}}}\xspace to be different at different
$st$s. Similarly, \ensuremath{\mathit{f_{pfuns}}}\xspace is now a function over \ensuremath{\mathit{PTS}}\xspace. For every $st
\in \ensuremath{\mathit{PTS}}\xspace$, $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)$ is in turn a function that maps each pair
$\tup{\pi,n}$, where $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in \{1,2,3,\dots\}$, to
another function $(\ensuremath{\mathit{OBJS}}\xspace)^n \mapsto \ensuremath{\mathit{pow}}\xspace(\ensuremath{\mathit{PERIODS}}\xspace)$ (cf.\ the
definition of \ensuremath{\mathit{f_{pfuns}}}\xspace in section \ref{top_model}). The definitions of
\ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace, and \ensuremath{\mathit{f_{gparts}}}\xspace are modified accordingly.
Whatever restrictions applied to \ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace,
and \ensuremath{\mathit{f_{gparts}}}\xspace, now apply to $\ensuremath{\mathit{f_{cons}}}\xspace(st)$, $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)$, $\ensuremath{\mathit{f_{culms}}}\xspace(st)$,
$\ensuremath{\mathit{f_{cparts}}}\xspace(st)$, and $\ensuremath{\mathit{f_{gparts}}}\xspace(st)$, for every $st \in \ensuremath{\mathit{CHRONS}}\xspace$. Also,
wherever \ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{gparts}}}\xspace, and \ensuremath{\mathit{f_{cparts}}}\xspace were used in the
semantics of \textsc{Top}\xspace, $\ensuremath{\mathit{f_{cons}}}\xspace(st)$, $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)$, $\ensuremath{\mathit{f_{culms}}}\xspace(st)$,
$\ensuremath{\mathit{f_{gparts}}}\xspace(st)$, and $\ensuremath{\mathit{f_{cparts}}}\xspace(st)$ should now be used. The \textsc{Top}\xspace model also becomes
sensitive to $st$, and is now defined as follows:
\[
M(st) =
\tup{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}, \ensuremath{\mathit{OBJS}}\xspace,
\ensuremath{\mathit{f_{cons}}}\xspace(st), \ensuremath{\mathit{f_{pfuns}}}\xspace(st), \ensuremath{\mathit{f_{culms}}}\xspace(st), \ensuremath{\mathit{f_{gparts}}}\xspace(st), \ensuremath{\mathit{f_{cparts}}}\xspace(st)}
\]
Intuitively, $M(st)$ reflects the history of the world as recorded in
the database at $st$. (If the database supports both valid and
transaction time, $M(st)$ reflects the ``beliefs'' of the database at
$st$; see section \ref{tdbs_general}.) The answer to an English
question submitted at $st$ must now report the denotation
$\denot{M(st),st}{\phi}$ of the corresponding \textsc{Top}\xspace formula $\phi$.
\section{The $h$ functions} \label{h_funs}
I first discuss \ensuremath{\mathit{h_{cons}}}\xspace, \ensuremath{\mathit{h_{pfuns}}}\xspace, \ensuremath{\mathit{h_{culms}}}\xspace, \ensuremath{\mathit{h_{cparts}}}\xspace, and \ensuremath{\mathit{h_{gparts}}}\xspace, the
functions that -- roughly speaking -- map basic \textsc{Top}\xspace expressions to
database constructs. As with \ensuremath{\mathit{f_{cons}}}\xspace, \ensuremath{\mathit{f_{pfuns}}}\xspace, \ensuremath{\mathit{f_{culms}}}\xspace, \ensuremath{\mathit{f_{cparts}}}\xspace,
and \ensuremath{\mathit{f_{gparts}}}\xspace, the values of \ensuremath{\mathit{h_{cons}}}\xspace, \ensuremath{\mathit{h_{pfuns}}}\xspace, \ensuremath{\mathit{h_{culms}}}\xspace, \ensuremath{\mathit{h_{cparts}}}\xspace, and
\ensuremath{\mathit{h_{gparts}}}\xspace will ultimately be obtained by evaluating \textsc{Tsql2}\xspace expressions
at $st$. The results of these evaluations can be different at different
$st$s, and hence the definitions of the $h$ functions must be
sensitive to $st$.
\paragraph{$\mathbf{h_{cons}}$:}
\index{hcons@$\ensuremath{\mathit{h_{cons}}}\xspace()$ (\textsc{Top}\xspace constants to attribute values)}
\ensuremath{\mathit{h_{cons}}}\xspace is a function $\ensuremath{\mathit{PTS}}\xspace \mapsto (\ensuremath{\mathit{CONS}}\xspace \mapsto D)$. For every $st
\in \ensuremath{\mathit{PTS}}\xspace$, $\ensuremath{\mathit{h_{cons}}}\xspace(st)$ is in turn a function that maps each \textsc{Top}\xspace
constant to an attribute value that represents the same
world-entity. For example, $\ensuremath{\mathit{h_{cons}}}\xspace(st)$ could map the \textsc{Top}\xspace constant
$sales\_department$ to the string attribute value $Sales \;
Department$, and the constant $\mathit{today}$ to the element
of $D_P$ ($D_P \subseteq D$) which denotes the day-period that
contains $st$.
\paragraph{$\mathbf{h_{pfuns}}$:}
\index{hpfuns@$\ensuremath{\mathit{h_{pfuns}}}\xspace()$ (predicates to relations showing maximal periods of situations)}
\ensuremath{\mathit{h_{pfuns}}}\xspace is a function over \ensuremath{\mathit{PTS}}\xspace. For every $st \in \ensuremath{\mathit{PTS}}\xspace$, $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)$
is in turn a function over
$\ensuremath{\mathit{PFUNS}}\xspace \times \{1,2,3,\dots\}$, such that for every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$
and $n \in \{1,2,3,\dots\}$, $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n) \in
\ensuremath{\mathit{NVREL}_P}\xspace(n)$ (section \ref{bcdm}). $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)$ is intended to map every
\textsc{Top}\xspace predicate of functor $\pi$ and arity $n$ to a relation that
shows for which arguments of the predicate and at which maximal
periods the situation represented by the predicate is true, according
to the ``beliefs'' of the database at $st$. For example, if
$circling(ba737)$ represents the situation where BA737 is circling,
and according to the ``beliefs'' of the database at $st$, $p$ is a maximal period where BA737 was/is/will be circling,
$\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(circling, 1)$ must contain a tuple $\tup{v;v_t}$, where
$f_D(v) = \ensuremath{\mathit{f_{cons}}}\xspace(ba737)$ ($v$ denotes the flight BA737), and $f_D(v_t)
= p$. Similarly, if $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(circling, 1)$ contains a tuple
$\tup{v;v_t}$, where $f_D(v) =
\ensuremath{\mathit{f_{cons}}}\xspace(ba737)$ and $f_D(v_t) = p$, $p$ is a maximal
period where BA737 was/is/will be circling, according to the
``beliefs'' of the database at $st$.
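For illustration, assuming (hypothetically) that the database ``believes'' at
$st$ that BA737 was circling during exactly two maximal periods,
$\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(circling, 1)$ could be the following relation of $\ensuremath{\mathit{NVREL}_P}\xspace(1)$
(the particular periods are, of course, invented).
\begin{examps}
\item \label{hpfuns:circ}
\dbtableb{|l||l|}
{$BA737$ & $[9{:}00 \; 12/1/96, \; 9{:}25 \; 12/1/96]$ \\
 $BA737$ & $[19{:}50 \; 16/2/96, \; 20{:}05 \; 16/2/96]$
}
\end{examps}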
\paragraph{$\mathbf{h_{culms}}$:}
\index{hculms@$\ensuremath{\mathit{h_{culms}}}\xspace()$ (predicates to relations showing if situations reach their climaxes)}
\ensuremath{\mathit{h_{culms}}}\xspace is a function over \ensuremath{\mathit{PTS}}\xspace. For every $st \in \ensuremath{\mathit{PTS}}\xspace$,
$\ensuremath{\mathit{h_{culms}}}\xspace(st)$ is in turn a function over
$\ensuremath{\mathit{PFUNS}}\xspace \times \{1,2,3,\dots\}$, such that for every
$\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in \{1,2,3,\dots\}$, $\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n) \in
\ensuremath{\mathit{SREL}}\xspace(n)$. Intuitively, \ensuremath{\mathit{h_{culms}}}\xspace plays the same role as \ensuremath{\mathit{f_{culms}}}\xspace
(section \ref{top_model}). In practice, \ensuremath{\mathit{h_{culms}}}\xspace is consulted only for
predicates that describe situations with inherent
climaxes. $\ensuremath{\mathit{h_{culms}}}\xspace(st)$ maps each \textsc{Top}\xspace predicate of functor $\pi$ and arity
$n$ to a relation that shows for which predicate arguments the situation of
the predicate reaches its climax at the latest time-point where the
situation is ongoing, according to the ``beliefs'' of the database at $st$. If, for example, $inspecting(j\_adams, ba737)$
represents the situation where J.Adams is inspecting BA737,
$\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(inspecting, 2)$ is a relation in $\ensuremath{\mathit{NVREL}_P}\xspace(2)$ and
$\ensuremath{\mathit{h_{culms}}}\xspace(st)(inspecting, 2)$ a relation in $\ensuremath{\mathit{SREL}}\xspace(2)$. If, according to the
``beliefs'' of the database at $st$, the
maximal periods where J.Adams was/is/will be inspecting BA737 are $p_1,
p_2, \dots, p_j$, $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(inspecting, 2)$ contains the tuples
$\tup{v_1, v_2; v^1_t}$, $\tup{v_1, v_2; v^2_t}$, \dots,
$\tup{v_1, v_2; v^j_t}$, where $f_D(v_1) = \ensuremath{\mathit{f_{cons}}}\xspace(j\_adams)$,
$f_D(v_2) = \ensuremath{\mathit{f_{cons}}}\xspace(ba737)$, and $f_D(v^1_t) = p_1$, $f_D(v^2_t) =
p_2$, \dots, $f_D(v^j_t) = p_j$. Let us assume that $p$ is the latest
maximal period among $p_1, \dots, p_j$. $\ensuremath{\mathit{h_{culms}}}\xspace(st)(inspecting, 2)$
contains $\tup{v_1, v_2}$ iff according to the ``beliefs'' of the
database at $st$, the inspection of BA737 by J.Adams reaches its
completion at the end of $p$.
\paragraph{$\mathbf{h_{gparts}}$:}
\index{hgparts@$\ensuremath{\mathit{h_{gparts}}}\xspace()$ (gappy part.\ names to relations representing gappy partitionings)}
\ensuremath{\mathit{h_{gparts}}}\xspace is a function over \ensuremath{\mathit{PTS}}\xspace. For every $st \in \ensuremath{\mathit{PTS}}\xspace$,
$\ensuremath{\mathit{h_{gparts}}}\xspace(st)$ is in turn a function that maps
every element of \ensuremath{\mathit{GPARTS}}\xspace to an $r \in \ensuremath{\mathit{SREL}}\xspace(1)$, such that the set $S
= \{ f_D(v) \mid \tup{v} \in r \}$ is a gappy partitioning.
$\ensuremath{\mathit{h_{gparts}}}\xspace(st)$ is intended to map each \textsc{Top}\xspace gappy partitioning
name $\sigma_g$ to a one-attribute snapshot relation $r$, whose
attribute values represent the periods of the gappy partitioning $S$
that is assigned to $\sigma_g$. For example, $\ensuremath{\mathit{h_{gparts}}}\xspace(st)$ could map
$monday^g$ to a one-attribute snapshot relation whose attribute values
denote all the Monday-periods.
As with the other $h$ functions, the
values of \ensuremath{\mathit{h_{gparts}}}\xspace will ultimately be obtained by evaluating \textsc{Tsql2}\xspace
expressions at $st$ (see section \ref{via_TSQL2} below). The
results of these evaluations can in principle be different at
different $st$s, and this is why \ensuremath{\mathit{h_{gparts}}}\xspace is defined to be sensitive
to $st$. In practice, however, the \textsc{Tsql2}\xspace expressions that are
evaluated to obtain the values of \ensuremath{\mathit{h_{gparts}}}\xspace will be insensitive to their
evaluation time, and hence the values of \ensuremath{\mathit{h_{gparts}}}\xspace will not
depend on $st$. Similar comments apply to \ensuremath{\mathit{h_{cparts}}}\xspace below.
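For example, a statement analogous to \pref{calrels:5} could be used to
obtain the relation for $monday^g$; the sketch below simply replaces
\sql{'Sun'} with \sql{'Mon'} (the $h'$ functions that actually supply such
statements are discussed in section \ref{via_TSQL2}).
\begin{examps}
\item \select{SELECT DISTINCT SNAPSHOT VALID(greg2) \\
        FROM (\select{SELECT DISTINCT greg1.4 \\
              VALID VALID(greg1) \\
              FROM gregorian AS greg1 \\
              WHERE greg1.4 = 'Mon'} \\
        \ \ \ \ \ )(PERIOD) AS greg2}
\label{hgparts:mon}
\end{examps}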
\paragraph{$\mathbf{h_{cparts}}$:}
\index{hcparts@$\ensuremath{\mathit{h_{cparts}}}\xspace()$ (compl.\ part.\ names to relations representing compl.\ partitionings)}
\ensuremath{\mathit{h_{cparts}}}\xspace is a function over \ensuremath{\mathit{PTS}}\xspace. For every $st \in \ensuremath{\mathit{PTS}}\xspace$,
$\ensuremath{\mathit{h_{cparts}}}\xspace(st)$ is in turn a function that maps
every element of \ensuremath{\mathit{CPARTS}}\xspace to an $r \in \ensuremath{\mathit{SREL}}\xspace(1)$, such that the set $S
= \{f_D(v) \mid \tup{v} \in r \}$ is a complete partitioning.
$\ensuremath{\mathit{h_{cparts}}}\xspace(st)$ is intended to map each \textsc{Top}\xspace complete partitioning name
$\sigma_c$ to a one-attribute snapshot relation $r$, whose attribute
values represent the periods of the complete partitioning $S$ that is
assigned to $\sigma_c$. For example, $\ensuremath{\mathit{h_{cparts}}}\xspace(st)$ could map $day^c$ to a
one-attribute snapshot relation whose attribute values denote all the
day-periods.
\section{The TOP model in terms of database concepts}
\label{resulting_model}
The \textsc{Top}\xspace model (see section \ref{top_model} and the revisions of
section \ref{linking_model}) can now be defined in terms
of database concepts as follows.
\paragraph{Point structure:} $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec} \defeq
\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{chrons}}$ \\
As mentioned in section \ref{tsql2_time}, $\ensuremath{\mathit{CHRONS}}\xspace \not= \emptyset$,
and $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{chrons}}$ has the properties of
transitivity, irreflexivity, linearity, left and right boundedness,
and discreteness. Hence, $\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{chrons}}$ qualifies as
a point structure for \textsc{Top}\xspace (section \ref{temporal_ontology}).
Since $\tup{\ensuremath{\mathit{PTS}}\xspace, \prec} =
\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{chrons}}$, $\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace, \prec}} =
\ensuremath{\mathit{PERIODS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{chrons}}}$, and $\ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{PTS}}\xspace,
\prec}} = \ensuremath{\mathit{INSTANTS}}\xspace_{\tup{\ensuremath{\mathit{CHRONS}}\xspace, \prec^{chrons}}}$. I write
simply \ensuremath{\mathit{PERIODS}}\xspace and \ensuremath{\mathit{INSTANTS}}\xspace to refer to these sets.
\paragraph{$\mathbf{OBJS}$:} $\ensuremath{\mathit{OBJS}}\xspace \defeq \ensuremath{\mathit{OBJS^{db}}}\xspace$ \\
Since $\ensuremath{\mathit{PERIODS}}\xspace \subseteq \ensuremath{\mathit{OBJS^{db}}}\xspace$ (section \ref{bcdm}) and $\ensuremath{\mathit{OBJS}}\xspace =
\ensuremath{\mathit{OBJS}}\xspace^{db}$, $\ensuremath{\mathit{PERIODS}}\xspace \subseteq \ensuremath{\mathit{OBJS}}\xspace$, as
required by section \ref{top_model}.
\paragraph{$\mathbf{f_{cons}}$:}
\index{fcons@$\ensuremath{\mathit{f_{cons}}}\xspace()$ (maps \textsc{Top}\xspace constants to world objects)}
For every $st \in \ensuremath{\mathit{PTS}}\xspace$ and $\kappa \in \ensuremath{\mathit{CONS}}\xspace$, I define
$\ensuremath{\mathit{f_{cons}}}\xspace(st)(\kappa) \defeq f_D(\ensuremath{\mathit{h_{cons}}}\xspace(st)(\kappa))$.
Since $\ensuremath{\mathit{h_{cons}}}\xspace(st)$ is a function $\ensuremath{\mathit{CONS}}\xspace \mapsto D$, and $f_D$ is a
function $D \mapsto \ensuremath{\mathit{OBJS^{db}}}\xspace$, and $\ensuremath{\mathit{OBJS}}\xspace = \ensuremath{\mathit{OBJS^{db}}}\xspace$,
$\ensuremath{\mathit{f_{cons}}}\xspace(st)$ is a function $\ensuremath{\mathit{CONS}}\xspace \mapsto \ensuremath{\mathit{OBJS}}\xspace$, as required by section
\ref{top_model} and the revisions of section \ref{linking_model}.
\paragraph{$\mathbf{f_{pfuns}}$:}
\index{fpfuns@$\ensuremath{\mathit{f_{pfuns}}}\xspace()$ (returns the maximal periods where predicates hold)}
According to section \ref{top_model} and the revisions of section
\ref{linking_model}, for every $st \in \ensuremath{\mathit{PTS}}\xspace$, $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)$ must be a
function:
\[ \ensuremath{\mathit{PFUNS}}\xspace \times \{1,2,3,\dots\} \mapsto
((\ensuremath{\mathit{OBJS}}\xspace)^n \mapsto \ensuremath{\mathit{pow}}\xspace(\ensuremath{\mathit{PERIODS}}\xspace))
\]
That is, for every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$, every $n \in \{1,2,3,\dots\}$,
and every $o_1, \dots, o_n \in \ensuremath{\mathit{OBJS}}\xspace$, $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi,n)(o_1, \dots,
o_n)$ must be a set of periods. I define $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi,n)(o_1, \dots,
o_n)$ as follows:
\[
\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi, n)(o_1, \dots, o_n) \defeq
\{ f_D(v_t) \mid
\tup{\ensuremath{f_D^{-1}}\xspace(o_1), \dots, \ensuremath{f_D^{-1}}\xspace(o_n); v_t} \in \ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n) \}
\]
The restrictions of section \ref{h_funs} guarantee that
$\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi,n) \in \ensuremath{\mathit{NVREL}_P}\xspace(n)$, which implies that for every
$\tup{\ensuremath{f_D^{-1}}\xspace(o_1), \dots, \ensuremath{f_D^{-1}}\xspace(o_n); v_t} \in \ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n)$,
$f_D(v_t) \in \ensuremath{\mathit{PERIODS}}\xspace$. Hence, $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi,n)(o_1, \dots, o_n)$ is a
set of periods as wanted.
As discussed in section \ref{h_funs}, if $\pi(\tau_1,
\dots, \tau_n)$ represents some situation, and $\tau_1, \dots, \tau_n$
denote $o_1, \dots, o_n$, then $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n)$ contains $\tup{\ensuremath{f_D^{-1}}\xspace(o_1),
\dots, \ensuremath{f_D^{-1}}\xspace(o_n); v_t}$ iff $f_D(v_t)$ is a maximal period
where the situation of $\pi(\tau_1, \dots, \tau_n)$ holds.
$\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi, n)(o_1, \dots, o_n)$ is supposed to be the set of
the maximal periods where the situation of $\pi(\tau_1, \dots, \tau_n)$
holds. The definition of \ensuremath{\mathit{f_{pfuns}}}\xspace above achieves this.
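For instance, returning to the circling example of section \ref{h_funs}: if
the only tuples of $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(circling, 1)$ whose attribute value
represents BA737 are $\tup{v; v_t^1}$ and $\tup{v; v_t^2}$, then
$\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(circling, 1)(f_D(v)) = \{f_D(v_t^1), f_D(v_t^2)\}$, i.e.\ the
set of all maximal periods where BA737 was/is/will be circling, according to
the ``beliefs'' of the database at $st$.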
According to section \ref{top_model} and the revisions of section
\ref{linking_model}, it must also be the case that:
\[
\text{if } p_1, p_2 \in \ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi, n)(o_1,\dots,o_n) \text{ and }
p_1 \union p_2 \in \ensuremath{\mathit{PERIODS}}\xspace, \text{ then } p_1 = p_2
\]
\ensuremath{\mathit{f_{pfuns}}}\xspace, as defined above, has this property. The proof follows.
Let us assume that $p_1$ and $p_2$ are as above, but $p_1 \not=
p_2$. As discussed above, the assumption that $p_1, p_2 \in
\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi, n)(o_1,\dots,o_n)$ implies that $p_1, p_2 \in
\ensuremath{\mathit{PERIODS}}\xspace$.
Let $v_t^1 = \ensuremath{f_D^{-1}}\xspace(p_1)$ and $v_t^2 = \ensuremath{f_D^{-1}}\xspace(p_2)$ (i.e.\ $p_1 =
f_D(v_t^1)$ and $p_2 = f_D(v_t^2)$). Since $p_1 \not= p_2$ and \ensuremath{f_D^{-1}}\xspace
is 1-1 (section \ref{relational}), $\ensuremath{f_D^{-1}}\xspace(p_1) \not= \ensuremath{f_D^{-1}}\xspace(p_2)$, i.e.
$v_t^1 \not= v_t^2$. The definition of $\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi,
n)(o_1,\dots,o_n)$, the assumptions that $p_1, p_2 \in
\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(\pi, n)(o_1,\dots,o_n)$ and that $p_1 \union p_2 \in
\ensuremath{\mathit{PERIODS}}\xspace$, and the fact that $p_1 = f_D(v_t^1)$ and $p_2 = f_D(v_t^2)$
imply that $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi,n)$ contains the value-equivalent tuples
$\tup{\ensuremath{f_D^{-1}}\xspace(o_1), \dots, \ensuremath{f_D^{-1}}\xspace(o_n); v_t^1}$ and $\tup{\ensuremath{f_D^{-1}}\xspace(o_1), \dots,
\ensuremath{f_D^{-1}}\xspace(o_n); v_t^2}$, where $f_D(v_t^1) \union f_D(v_t^2) \in
\ensuremath{\mathit{PERIODS}}\xspace$. This conclusion, the fact that $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi,n) \in
\ensuremath{\mathit{NVREL}_P}\xspace(n)$ (see previous paragraphs), and the definition of
$\ensuremath{\mathit{NVREL}_P}\xspace(n)$ (section \ref{bcdm}) imply that $v_t^1 = v_t^2$, which is
against the hypothesis. Hence, it cannot be the case that $p_1 \not=
p_2$, i.e.\ $p_1 = p_2$. {\small Q.E.D.}\xspace
\paragraph{$\mathbf{f_{culms}}$:}
\index{fculms@$\ensuremath{\mathit{f_{culms}}}\xspace()$ (shows if the situation of a predicate reaches its climax)}
According to section \ref{top_model} and the revisions of section
\ref{linking_model}, for every $st \in \ensuremath{\mathit{PTS}}\xspace$, $\ensuremath{\mathit{f_{culms}}}\xspace(st)$ must be a
function:
\[ \ensuremath{\mathit{PFUNS}}\xspace \times \{1,2,3,\dots\} \mapsto
((\ensuremath{\mathit{OBJS}}\xspace)^n \mapsto \{T, F\})
\]
For every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$, $n \in \{1,2,3,\dots\}$, and
$o_1, \dots, o_n \in \ensuremath{\mathit{OBJS}}\xspace$, I define:
\[
\ensuremath{\mathit{f_{culms}}}\xspace(st)(\pi, n)(o_1, \dots, o_n) \defeq
\begin{cases}
T, & \text{if } \tup{\ensuremath{f_D^{-1}}\xspace(o_1), \dots, \ensuremath{f_D^{-1}}\xspace(o_n)} \in \ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n) \\
F, & \text{otherwise}
\end{cases}
\]
The restrictions of section \ref{h_funs} guarantee that $\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi,
n) \in \ensuremath{\mathit{SREL}}\xspace(n)$. As discussed in section \ref{h_funs}, if a
predicate $\pi(\tau_1, \dots, \tau_n)$ represents some situation with
an inherent climax, and $\tau_1$, \dots, $\tau_n$
denote $o_1$, \dots, $o_n$, then $\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n)$
contains $\tup{\ensuremath{f_D^{-1}}\xspace(o_1), \dots, \ensuremath{f_D^{-1}}\xspace(o_n)}$ iff the situation
reaches its climax at the end of the latest maximal period where
the situation is ongoing. $\ensuremath{\mathit{f_{culms}}}\xspace(st)(\pi, n)(o_1, \dots, o_n)$
is supposed to be $T$ iff the situation of $\pi(\tau_1, \dots,
\tau_n)$ reaches its climax at the end of the latest maximal
period where it is ongoing. The definition of \ensuremath{\mathit{f_{culms}}}\xspace above achieves
this.
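For instance, in the $inspecting$ example of section \ref{h_funs}: if $o_1$
and $o_2$ are the world objects representing J.Adams and BA737, then
$\ensuremath{\mathit{f_{culms}}}\xspace(st)(inspecting, 2)(o_1, o_2) = T$ iff $\tup{\ensuremath{f_D^{-1}}\xspace(o_1), \ensuremath{f_D^{-1}}\xspace(o_2)}
\in \ensuremath{\mathit{h_{culms}}}\xspace(st)(inspecting, 2)$, i.e.\ iff, according to the ``beliefs'' of
the database at $st$, the inspection of BA737 by J.Adams reaches its
completion at the end of the latest maximal period where it is ongoing.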
\paragraph{$\mathbf{f_{gparts}}$:}
\index{fgparts@$\ensuremath{\mathit{f_{gparts}}}\xspace()$ (assigns gappy partitionings to elements of \ensuremath{\mathit{GPARTS}}\xspace)}
For every $st \in \ensuremath{\mathit{PTS}}\xspace$ and $\sigma_g \in \ensuremath{\mathit{GPARTS}}\xspace$,
$\ensuremath{\mathit{f_{gparts}}}\xspace(st)(\sigma_g) \defeq \{f_D(v) \mid
\tup{v} \in \ensuremath{\mathit{h_{gparts}}}\xspace(st)(\sigma_g)\}$. The restrictions on \ensuremath{\mathit{h_{gparts}}}\xspace
of section \ref{h_funs} guarantee that $\ensuremath{\mathit{f_{gparts}}}\xspace(st)(\sigma_g)$ is always
a gappy partitioning, as required by section \ref{top_model} and the
revisions of section \ref{linking_model}.
\paragraph{$\mathbf{f_{cparts}}$:}
\index{fcparts@$\ensuremath{\mathit{f_{cparts}}}\xspace()$ (assigns complete partitionings to elements of \ensuremath{\mathit{CPARTS}}\xspace)}
For every $st \in \ensuremath{\mathit{PTS}}\xspace$ and $\sigma_c \in \ensuremath{\mathit{CPARTS}}\xspace$,
$\ensuremath{\mathit{f_{cparts}}}\xspace(st)(\sigma_c) \defeq
\{f_D(v) \mid \tup{v} \in \ensuremath{\mathit{h_{cparts}}}\xspace(st)(\sigma_c)\}$.
The restrictions on \ensuremath{\mathit{h_{cparts}}}\xspace of section \ref{h_funs} guarantee that
$\ensuremath{\mathit{f_{cparts}}}\xspace(st)(\sigma_c)$ is always a complete
partitioning, as required by section \ref{top_model} and the
revisions of section \ref{linking_model}.
\section{The $h'$ functions} \label{via_TSQL2}
I now discuss $\ensuremath{\mathit{h'_{cons}}}\xspace$, $\ensuremath{\mathit{h'_{pfuns}}}\xspace$, $\ensuremath{\mathit{h'_{culms}}}\xspace$,
$\ensuremath{\mathit{h'_{gparts}}}\xspace$, and $\ensuremath{\mathit{h'_{cparts}}}\xspace$, the functions that map basic \textsc{Top}\xspace
expressions (constants, predicates, etc.) to \textsc{Tsql2}\xspace expressions. I
assume that these functions are defined by the configurer of
the \textsc{Nlitdb}\xspace (section \ref{domain_config}).
\paragraph{$\mathbf{h_{cons}'}$:}
\index{hconsp@$\ensuremath{\mathit{h'_{cons}}}\xspace()$ (similar to \ensuremath{\mathit{h_{cons}}}\xspace but returns \textsc{Tsql2}\xspace expressions)}
$\ensuremath{\mathit{h'_{cons}}}\xspace$ maps every \textsc{Top}\xspace constant $\kappa$ to a \textsc{Tsql2}\xspace value
expression $\xi$, such that $\ensuremath{\mathit{FCN}}\xspace(\xi) = \emptyset$, and for every $st
\in \ensuremath{\mathit{CHRONS}}\xspace$, $eval(st, \xi) \in D$. (The latter guarantees that
$eval(st, \xi) \not= error$.) $\xi$ is intended to represent the same world
object as $\kappa$. For example, $\ensuremath{\mathit{h'_{cons}}}\xspace$ could map the \textsc{Top}\xspace
constant $sales\_department$ to the \textsc{Tsql2}\xspace value expression \sql{'Sales
Department'}, and the \textsc{Top}\xspace constant $yesterday$ to \sql{PERIOD
'today' - INTERVAL '1' DAY}. In practice, the values of \ensuremath{\mathit{h'_{cons}}}\xspace need
to be defined only for \textsc{Top}\xspace constants that are used in the particular
application domain. The values of \ensuremath{\mathit{h'_{cons}}}\xspace for other constants are not
used, and can be chosen arbitrarily. Similar comments apply to \ensuremath{\mathit{h'_{pfuns}}}\xspace,
\ensuremath{\mathit{h'_{culms}}}\xspace, \ensuremath{\mathit{h'_{gparts}}}\xspace, and \ensuremath{\mathit{h'_{cparts}}}\xspace.
$\ensuremath{\mathit{h_{cons}}}\xspace$ is defined in terms of $\ensuremath{\mathit{h'_{cons}}}\xspace$. For every
$st \in \ensuremath{\mathit{CHRONS}}\xspace$ and $\kappa \in \ensuremath{\mathit{CONS}}\xspace$:
\index{hcons@$\ensuremath{\mathit{h_{cons}}}\xspace()$ (\textsc{Top}\xspace constants to attribute values)}
\[
\ensuremath{\mathit{h_{cons}}}\xspace(st)(\kappa) \defeq eval(st, \ensuremath{\mathit{h'_{cons}}}\xspace(\kappa))
\]
The restrictions above guarantee that $eval(st, \ensuremath{\mathit{h'_{cons}}}\xspace(\kappa)) \in
D$. Hence, $\ensuremath{\mathit{h_{cons}}}\xspace(st)$ is a function $\ensuremath{\mathit{CONS}}\xspace \mapsto D$, as required by
section \ref{h_funs}.
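With the example values of $\ensuremath{\mathit{h'_{cons}}}\xspace$ above, $\ensuremath{\mathit{h_{cons}}}\xspace(st)(sales\_department)$
would be the string attribute value $Sales \; Department$, and
$\ensuremath{\mathit{h_{cons}}}\xspace(st)(yesterday)$ would be the element of $D_P$ that denotes the
day-period immediately before the one that contains $st$.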
\paragraph{$\mathbf{h_{pfuns}'}$:}
\index{hpfunsp@$\ensuremath{\mathit{h'_{pfuns}}}\xspace()$ (similar to \ensuremath{\mathit{h_{pfuns}}}\xspace but returns \textsc{Tsql2}\xspace expressions)}
$\ensuremath{\mathit{h'_{pfuns}}}\xspace$ is a function that maps every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in
\{1,2,3,\dots\}$ to a \textsc{Tsql2}\xspace \sql{SELECT} statement $\Sigma$, such that
$\ensuremath{\mathit{FCN}}\xspace(\Sigma) = \emptyset$, and for every $st \in
\ensuremath{\mathit{CHRONS}}\xspace$, $eval(st, \Sigma) \in \ensuremath{\mathit{NVREL}_P}\xspace(n)$.
$\ensuremath{\mathit{h'_{pfuns}}}\xspace(\pi, n)$ is intended to be a \textsc{Tsql2}\xspace \sql{SELECT} statement
that generates the relation to which $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)$ maps $\pi$ and $n$
(the relation that shows for which arguments and at which maximal
periods the situation described by $\pi(\tau_1, \dots,
\tau_n)$ is true).
\ensuremath{\mathit{h_{pfuns}}}\xspace is defined in terms of $\ensuremath{\mathit{h'_{pfuns}}}\xspace$. For
every $st \in \ensuremath{\mathit{CHRONS}}\xspace$, $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$, and $n \in \{1,2,3,\dots\}$:
\index{hpfuns@$\ensuremath{\mathit{h_{pfuns}}}\xspace()$ (predicates to relations showing maximal periods of situations)}
\[
\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n) \defeq eval(st, \ensuremath{\mathit{h'_{pfuns}}}\xspace(\pi, n))
\]
The restrictions on $\ensuremath{\mathit{h'_{pfuns}}}\xspace$ above guarantee that $eval(st,
\ensuremath{\mathit{h'_{pfuns}}}\xspace(\pi, n)) \in \ensuremath{\mathit{NVREL}_P}\xspace(n)$. Hence, $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n) \in
\ensuremath{\mathit{NVREL}_P}\xspace(n)$, as required by section \ref{h_funs}.
Let us assume, for example, that $manager(\tau)$ means that $\tau$ is
a manager, and that $manager\_of$ is the relation of $\ensuremath{\mathit{NVREL}_P}\xspace(2)$ in
\pref{hpfuns:99a} that shows the maximal periods where somebody is the
manager of a department. (To save space, I often omit the names of the
explicit attributes. These are not needed, since explicit attributes
are referred to by number.)
\begin{examps}
\item \label{hpfuns:99a}
\dbtableb{|l|l||l|}
{$J.Adams$ & $sales$ & $[1/5/93, \; 31/12/94]$ \\
$J.Adams$ & $personnel$ & $[1/1/95, \; 31/3/95]$ \\
$J.Adams$ & $research$ & $[5/9/95, \; 31/12/95]$ \\
$T.Smith$ & $sales$ & $[1/1/95, \; 7/5/95]$ \\
\ \dots & \ \dots & \ \dots
}
\end{examps}
$\ensuremath{\mathit{h'_{pfuns}}}\xspace(manager, 1)$ could be defined to be
\pref{hpfuns:2}, which generates \pref{hpfuns:3} (\pref{hpfuns:3} is
an element of $\ensuremath{\mathit{NVREL}_P}\xspace(1)$, as required by the definition of
$\ensuremath{\mathit{h'_{pfuns}}}\xspace$). The embedded \sql{SELECT} statement of \pref{hpfuns:2}
discards the second explicit attribute of $manager\_of$. The \sql{(PERIOD)}
coalesces tuples that correspond to the same employees (e.g.\ the
three periods for J.Adams), generating one tuple for each maximal
period.
\begin{examps}
\item \select{SELECT DISTINCT mgr2.1 \\
VALID VALID(mgr2) \\
FROM (\select{SELECT DISTINCT mgr1.1 \\
VALID VALID(mgr1) \\
FROM manager\_of AS mgr1} \\
\ \ \ \ \ )(PERIOD) AS mgr2}
\label{hpfuns:2}
\item
\dbtableb{|l||l|}
{$J.Adams$ & $[1/5/93, \; 31/3/95]$ \\
$J.Adams$ & $[5/9/95, \; 31/12/95]$ \\
$T.Smith$ & $[1/1/95, \; 7/5/95]$ \\
\ \dots & \ \dots
}
\label{hpfuns:3}
\end{examps}
\paragraph{$\mathbf{h_{culms}'}$:}
\index{hculmsp@$\ensuremath{\mathit{h'_{culms}}}\xspace()$ (similar to \ensuremath{\mathit{h_{culms}}}\xspace but returns \textsc{Tsql2}\xspace expressions)}
\ensuremath{\mathit{h'_{culms}}}\xspace is a function that maps
every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$ and $n \in \{1,2,3,\dots\}$ to a \textsc{Tsql2}\xspace
\sql{SELECT} statement $\Sigma$, such that $\ensuremath{\mathit{FCN}}\xspace(\Sigma) = \emptyset$,
and for every $st \in \ensuremath{\mathit{CHRONS}}\xspace$, $eval(st, \Sigma) \in \ensuremath{\mathit{SREL}}\xspace(n)$.
$\ensuremath{\mathit{h'_{culms}}}\xspace(\pi, n)$ is intended to be a \textsc{Tsql2}\xspace \sql{SELECT} statement
that generates the relation to which $\ensuremath{\mathit{h_{culms}}}\xspace(st)$ maps $\pi$ and $n$
(the relation that shows for which arguments of
$\pi(\tau_1, \dots, \tau_n)$ the situation of the predicate
reaches its climax at the end of the latest maximal period where
it is ongoing).
\ensuremath{\mathit{h_{culms}}}\xspace is defined in terms of $\ensuremath{\mathit{h'_{culms}}}\xspace$. For
every $st \in \ensuremath{\mathit{CHRONS}}\xspace$, $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$, and $n \in \{1,2,3,\dots\}$:
\index{hculms@$\ensuremath{\mathit{h_{culms}}}\xspace()$ (predicates to relations showing if situations reach their climaxes)}
\[
\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n) \defeq eval(st, \ensuremath{\mathit{h'_{culms}}}\xspace(\pi, n))
\]
The restrictions on $\ensuremath{\mathit{h'_{culms}}}\xspace$ above guarantee that $eval(st,
\ensuremath{\mathit{h'_{culms}}}\xspace(\pi, n)) \in \ensuremath{\mathit{SREL}}\xspace(n)$. Hence, for every $\pi \in \ensuremath{\mathit{PFUNS}}\xspace$
and $n \in \{1,2,3,\dots\}$, $\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n) \in \ensuremath{\mathit{SREL}}\xspace(n)$, as
required by section \ref{h_funs}.
In the airport application, for example, $inspecting(\tau_1,
\tau_2, \tau_3)$ means that an occurrence $\tau_1$ of an inspection of
$\tau_3$ by $\tau_2$ is ongoing. $inspections$ is a relation of the
following form:
\adbtable{5}{|l|l|l|l||l|}{$inspections$}
{$code$ & $inspector$ & $inspected$ & $status$ &}
{$i158$ & $J.Adams$ & $UK160$ & $complete$ &
$[9\text{:}00am \; 1/5/95 - 9\text{:}45am \; 1/5/95]$ \\
&&&& $\;\; \union \;
[10\text{:}10am \; 1/5/95 - 10\text{:}25am \; 1/5/95]$ \\
$i160$ & $J.Adams$ & $UK160$ & $incomplete$ &
$[11\text{:}00pm \; 2/7/95 - 1\text{:}00am \; 3/7/95]$ \\
&&&& $\;\; \union \;
[6\text{:}00am \; 3/7/95 - 6\text{:}20am \; 3/7/95]$ \\
$i205$ & $T.Smith$ & $BA737$ & $complete$ &
$[8\text{:}00am \; 16/11/95 - 8\text{:}20am \; 16/11/95]$ \\
$i214$ & $T.Smith$ & $BA737$ & $incomplete$ &
$[8\text{:}10am \; 14/2/96 - now]$
}
The first tuple above shows that J.Adams started to inspect UK160 at
9:00am on 1/5/95, and continued the inspection up to 9:45am. He
resumed the inspection at 10:10am, and completed the inspection at
10:25am on the same day. $status$ shows whether
or not the inspection reaches its completion at the last time-point of
the time-stamp. In the first tuple, its value is $complete$,
signaling that the inspection was completed at 10:25am on 1/5/95. The
inspection of the second tuple was ongoing from 11:00pm on 2/7/95 to
1:00am on 3/7/95, and from 6:00am to 6:20am on 3/7/95. It did not
reach its completion at 6:20am on 3/7/95 (perhaps it was aborted for
ever). The inspection of the last tuple started at 8:10am on 14/2/96
and is still ongoing. Each inspection is assigned a unique inspection
code, stored as the value of the $code$ attribute. The
inspection codes are useful to distinguish, for example, J.Adams'
inspection of UK160 on 1/5/95 from that on 2-3/7/95 (section
\ref{occurrence_ids}). $\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting, 3)$ and
$\ensuremath{\mathit{h'_{culms}}}\xspace(inspecting, 3)$ are defined to be \pref{hpfuns:4} and
\pref{hpfuns:5} respectively.
\begin{examps}
\item \select{SELECT DISTINCT insp.1, insp.2, insp.3 \\
VALID VALID(insp) \\
FROM inspections(PERIOD) AS insp} \label{hpfuns:4}
\item \select{SELECT DISTINCT SNAPSHOT inspcmpl.1, inspcmpl.2,
inspcmpl.3\\
FROM inspections AS inspcmpl \\
WHERE inspcmpl.4 = 'complete'}\label{hpfuns:5}
\end{examps}
This causes $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(inspecting, 3)$ and $\ensuremath{\mathit{h_{culms}}}\xspace(st)(inspecting, 3)$
to be \pref{hpfuns:6} and \pref{hpfuns:7} respectively.
\begin{examps}
\item
\dbtableb{|l|l|l||l|}
{$i158$ & $J.Adams$ & $UK160$ &
$[9\text{:}00am \; 1/5/95, \; 9\text{:}45am \; 1/5/95]$ \\
$i158$ & $J.Adams$ & $UK160$ &
$[10\text{:}10am \; 1/5/95, \; 10\text{:}25am \; 1/5/95]$ \\
$i160$ & $J.Adams$ & $UK160$ &
$[11\text{:}00pm \; 2/7/95, \; 1\text{:}00am \; 3/7/95]$ \\
$i160$ & $J.Adams$ & $UK160$ &
$[6\text{:}00am \; 3/7/95, \; 6\text{:}20am \; 3/7/95]$ \\
$i205$ & $T.Smith$ & $BA737$ &
$[8\text{:}00am \; 16/11/95, \; 8\text{:}20am \; 16/11/95]$ \\
$i214$ & $T.Smith$ & $BA737$ &
$[8\text{:}10am \; 14/2/96, \; now]$
} \label{hpfuns:6}
\item
\dbtableb{|l|l|l|}
{$i158$ & $J.Adams$ & $UK160$ \\
$i205$ & $T.Smith$ & $BA737$
} \label{hpfuns:7}
\end{examps}
\paragraph{$\mathbf{h_{gparts}'}$:}
\index{hgpartsp@$\ensuremath{\mathit{h'_{gparts}}}\xspace()$ (similar to \ensuremath{\mathit{h_{gparts}}}\xspace but returns \textsc{Tsql2}\xspace expressions)}
\ensuremath{\mathit{h'_{gparts}}}\xspace is a function that maps
every \textsc{Top}\xspace gappy partitioning name $\sigma_g$ to a \textsc{Tsql2}\xspace \sql{SELECT}
statement $\Sigma$, such that $\ensuremath{\mathit{FCN}}\xspace(\Sigma) = \emptyset$, and for
every $st \in \ensuremath{\mathit{CHRONS}}\xspace$, it is true that $eval(st, \Sigma) \in
\ensuremath{\mathit{SREL}}\xspace(1)$ and $\{f_D(v) \mid \tup{v} \in eval(st, \Sigma)\}$ is a
gappy partitioning. $\ensuremath{\mathit{h'_{gparts}}}\xspace(\sigma_g)$ is intended to generate the
relation to which $\ensuremath{\mathit{h_{gparts}}}\xspace(st)$ maps $\sigma_g$ (the relation
that represents the members of the gappy partitioning).
Assuming, for example, that the $gregorian$ calendric relation of
section \ref{calrels} is available, $\ensuremath{\mathit{h'_{gparts}}}\xspace(sunday^g)$ could be
\pref{calrels:5} of page \pageref{calrels:5}.
\ensuremath{\mathit{h_{gparts}}}\xspace is defined in terms of \ensuremath{\mathit{h'_{gparts}}}\xspace. For every
$st \in \ensuremath{\mathit{CHRONS}}\xspace$ and $\sigma_g \in \ensuremath{\mathit{GPARTS}}\xspace$:
\index{hgparts@$\ensuremath{\mathit{h_{gparts}}}\xspace()$ (gappy part.\ names to relations representing gappy partitionings)}
\[
\ensuremath{\mathit{h_{gparts}}}\xspace(st)(\sigma_g) \defeq eval(st, \ensuremath{\mathit{h'_{gparts}}}\xspace(\sigma_g))
\]
The restrictions on \ensuremath{\mathit{h'_{gparts}}}\xspace and the definition of $\ensuremath{\mathit{h_{gparts}}}\xspace(st)$
above satisfy the requirements on \ensuremath{\mathit{h_{gparts}}}\xspace of section \ref{h_funs}.
\paragraph{$\mathbf{h_{cparts}'}$:}
\index{hcpartsp@$\ensuremath{\mathit{h'_{cparts}}}\xspace()$ (similar to \ensuremath{\mathit{h_{cparts}}}\xspace but returns \textsc{Tsql2}\xspace expressions)}
I assume that for each complete partitioning used in the \textsc{Top}\xspace
formulae, there is a corresponding \textsc{Tsql2}\xspace granularity (section
\ref{tsql2_time}). \ensuremath{\mathit{h'_{cparts}}}\xspace is a function that maps each \textsc{Top}\xspace
complete partitioning name to an ordered pair $\tup{\gamma, \Sigma}$,
where $\gamma$ is the name of the corresponding \textsc{Tsql2}\xspace granularity, and
$\Sigma$ is a \sql{SELECT} statement that returns a relation representing the
periods of the partitioning. More precisely, it must be the case that
$\ensuremath{\mathit{FCN}}\xspace(\Sigma) = \emptyset$, and for every $st \in \ensuremath{\mathit{CHRONS}}\xspace$, $eval(st,
\Sigma) \in \ensuremath{\mathit{SREL}}\xspace(1)$ and $\{f_D(v) \mid
\tup{v} \in eval(st, \Sigma)\}$ is a complete partitioning.
For example, if the $gregorian$ relation of section
\ref{calrels} is available, \ensuremath{\mathit{h'_{cparts}}}\xspace could map $day^c$ to
$\langle$\sql{DAY}$,\Sigma\rangle$, where $\Sigma$ is
\pref{hpfuns:8.87}. \pref{hpfuns:8.87} returns a one-attribute
snapshot relation whose attribute values denote all the day-periods.
\begin{examps}
\item \label{hpfuns:8.87}
\select{SELECT DISTINCT SNAPSHOT VALID(greg2) \\
FROM (\select{SELECT DISTINCT greg1.4 \\
VALID VALID(greg1) \\
FROM gregorian AS greg1} \\
\ \ \ \ \ )(PERIOD) AS greg2}
\end{examps}
\ensuremath{\mathit{h_{cparts}}}\xspace is defined in terms of \ensuremath{\mathit{h'_{cparts}}}\xspace. For every
$st \in \ensuremath{\mathit{CHRONS}}\xspace$ and $\sigma_c \in \ensuremath{\mathit{CPARTS}}\xspace$, if $\ensuremath{\mathit{h'_{cparts}}}\xspace(\sigma_c) =
\tup{\gamma, \Sigma}$, then:
\index{hcparts@$\ensuremath{\mathit{h_{cparts}}}\xspace()$ (compl.\ part.\ names to relations representing compl.\ partitionings)}
\[
\ensuremath{\mathit{h_{cparts}}}\xspace(st)(\sigma_c) \defeq eval(st, \Sigma)
\]
The restrictions on \ensuremath{\mathit{h'_{cparts}}}\xspace and the
definition of $\ensuremath{\mathit{h_{cparts}}}\xspace(st)$ above satisfy the requirements on \ensuremath{\mathit{h_{cparts}}}\xspace of
section \ref{h_funs}. The $\gamma$ is used in the translation rule
for $\ensuremath{\mathit{For}}\xspace[\sigma_c, \nu_{qty}, \phi]$ (appendix \ref{trans_proofs}).
\section{Formulation of the translation problem} \label{formulation}
Let us now specify formally what we want the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace translation to achieve. I first define $interp$
(interpretation of a resulting relation). For
every $\phi \in \ensuremath{\mathit{FORMS}}\xspace$ and every relation $r$\/:
\index{interp@$interp()$ (interpretation of resulting relation)}
\begin{equation}
interp(r, \phi) \defeq
\begin{cases}
T, \text{ if } \phi \in \ensuremath{\mathit{YNFORMS}}\xspace \text{ and }
r \not= \emptyset \\
F, \text{ if } \phi \in \ensuremath{\mathit{YNFORMS}}\xspace \text{ and }
r = \emptyset \\
\{\tup{f_D(v_1), \dots, f_D(v_n)} \mid
\tup{v_1, \dots, v_n} \in r \}, \\
\text{\ \ \ \ if } \phi \in \ensuremath{\mathit{WHFORMS}}\xspace
\end{cases}
\label{formulation:2}
\end{equation}
Intuitively, if $\phi$ was translated to a \sql{SELECT} statement that
generated $r$, $interp(r, \phi)$ shows how to interpret $r$.
If $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ (yes/no English question) and $r \not= \emptyset$,
the answer should be affirmative. If $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ and $r =
\emptyset$, the answer should be negative. Otherwise, if $\phi \in
\ensuremath{\mathit{WHFORMS}}\xspace$ (the English question contains interrogatives, e.g.\
\qit{Who~\dots?}, \qit{When~\dots?}), the answer should report all the
tuples of world objects $\tup{f_D(v_1), \dots, f_D(v_n)}$ represented
by tuples $\tup{v_1, \dots, v_n} \in r$.
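
As an illustration only (it plays no role in the formal development), the case
analysis of \pref{formulation:2} could be coded as follows; the representation
of a relation as a Python set of tuples and the argument \texttt{f\_D} are
assumptions of the sketch, not part of \textsc{Tsql2}\xspace or of the prototype.
{\small
\begin{verbatim}
# Illustrative sketch of interp.  A relation r is a set of tuples of
# attribute values, and f_D maps attribute values to world objects.
def interp(r, phi_is_yes_no, f_D):
    if phi_is_yes_no:              # phi is in YNFORMS
        return bool(r)             # T iff r is non-empty
    # phi is in WHFORMS: report the tuples of world objects
    return {tuple(f_D(v) for v in row) for row in r}
\end{verbatim}
}
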
A translation function $tr$ is needed, that
maps every $\phi \in \ensuremath{\mathit{FORMS}}\xspace$ to a \textsc{Tsql2}\xspace \sql{SELECT}
statement $\mathit{tr(\phi)}$,
\index{tr@$tr()$ (\textsc{Top}\xspace to \textsc{Tsql2}\xspace mapping)}
such that for every $st \in \ensuremath{\mathit{PTS}}\xspace$, \pref{formulation:1sq} and
\pref{formulation:1} hold.
\begin{gather}
\ensuremath{\mathit{FCN}}\xspace(tr(\phi)) = \emptyset \label{formulation:1sq} \\
interp(eval(st, tr(\phi)), \phi) = \denot{M(st), st}{\phi}
\label{formulation:1}
\end{gather}
$M(st)$ must be as in section \ref{linking_model}.
As discussed in section \ref{denotation}, each (reading of
an) English question is mapped to a \textsc{Top}\xspace formula $\phi$.
The answer must report $\denot{M(st), st}{\phi}$. If $\mathit{tr}$ satisfies
\pref{formulation:1}, $\denot{M(st), st}{\phi}$ can be computed
as $interp(eval(st, tr(\phi)), \phi)$, by letting the \textsc{Dbms}\xspace
execute $tr(\phi)$ (i.e.\ compute $eval(st, tr(\phi))$).
$\mathit{tr}$ will be defined in terms of an auxiliary
function $\mathit{trans}$. $\mathit{trans}$ is a function of two
arguments:
\index{trans@$trans()$ (auxiliary \textsc{Top}\xspace to \textsc{Tsql2}\xspace mapping)}
\[
trans(\phi, \lambda) = \Sigma
\]
where $\phi \in \ensuremath{\mathit{FORMS}}\xspace$, $\lambda$ is a \textsc{Tsql2}\xspace value expression, and
$\Sigma$ a \textsc{Tsql2}\xspace \sql{SELECT} statement. A set of ``translation
rules'' (to be discussed in section \ref{trans_rules}) specifies the
$\Sigma$-values of $\mathit{trans}$. In practice, $\lambda$ always
represents a period. Intuitively, $\lambda$ corresponds to \textsc{Top}\xspace's $lt$. When
$trans$ is first invoked (by calling $tr$, discussed below) to
translate a formula $\phi$, $\lambda$ is set to \sql{PERIOD(TIMESTAMP
'beginning', TIMESTAMP 'forever')} to reflect the fact that \textsc{Top}\xspace's
$lt$ is initially set to \ensuremath{\mathit{PTS}}\xspace (see the definition of
$\denot{M,st}{\phi}$ in section \ref{denotation}). $trans$ may call
itself recursively to translate subformulae of $\phi$ (this will
become clearer in following sections). When calling $trans$
recursively, $\lambda$ may represent a period that does not cover the
whole time-axis, to reflect the fact that already encountered \textsc{Top}\xspace
operators may have narrowed $lt$.
I define $\mathit{tr}$ as follows:
\begin{equation}
tr(\phi) \defeq trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)
\label{formulation:3}
\end{equation}
where $\ensuremath{\mathit{\lambda_{init}}}\xspace \defeq$ \sql{PERIOD (TIMESTAMP 'beginning', TIMESTAMP
'forever')}. Obviously, \ensuremath{\mathit{\lambda_{init}}}\xspace contains no correlation names, and
hence $\ensuremath{\mathit{FCN}}\xspace(\ensuremath{\mathit{\lambda_{init}}}\xspace) = \emptyset$. This implies that $eval(st, \ensuremath{\mathit{\lambda_{init}}}\xspace,
g^{db})$ does not depend on $g^{db}$. \ensuremath{\mathit{\lambda_{init}}}\xspace evaluates to the element
of $D_P$ that represents the period that covers the whole time-axis,
i.e.\ for every $st \in \ensuremath{\mathit{PTS}}\xspace$, it is true that $eval(st, \ensuremath{\mathit{\lambda_{init}}}\xspace) \in D_P$ and
$f_D(eval(st, \ensuremath{\mathit{\lambda_{init}}}\xspace)) = \ensuremath{\mathit{PTS}}\xspace$. Therefore, lemma \ref{linit_lemma}
holds.
\begin{lemma}
\label{linit_lemma}
{\rm $\ensuremath{\mathit{FCN}}\xspace(\ensuremath{\mathit{\lambda_{init}}}\xspace) = \emptyset$, and for every $st \in \ensuremath{\mathit{PTS}}\xspace$,
$eval(st, \ensuremath{\mathit{\lambda_{init}}}\xspace) \in D_P$ and $f_{D}(eval(st,\ensuremath{\mathit{\lambda_{init}}}\xspace)) = \ensuremath{\mathit{PTS}}\xspace$.}
\end{lemma}
Using \pref{formulation:3}, \pref{formulation:1sq} and
\pref{formulation:1} become \pref{formulation:6} and
\pref{formulation:4} respectively. The translation rules (that specify
the values of $\mathit{trans}$ for each $\phi$ and $\lambda$) must be
defined so that for every $\phi \in \ensuremath{\mathit{FORMS}}\xspace$ and $st \in \ensuremath{\mathit{PTS}}\xspace$,
\pref{formulation:6} and \pref{formulation:4} hold.
\begin{gather}
\ensuremath{\mathit{FCN}}\xspace(trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) = \emptyset \label{formulation:6} \\
interp(eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)), \phi) = \denot{M(st), st}{\phi}
\label{formulation:4}
\end{gather}
Appendix \ref{trans_proofs} proves that theorems \ref{wh_theorem} and
\ref{yn_theorem} hold for the translation rules of this thesis.
\begin{theorem}
\label{wh_theorem}
{\rm If $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$, $st \in \ensuremath{\mathit{PTS}}\xspace$, $trans(\phi,
\ensuremath{\mathit{\lambda_{init}}}\xspace) = \Sigma$, and the total number of interrogative and
interrogative-maximal quantifiers in $\phi$ is $n$, then:
\begin{enumerate}
\item $\ensuremath{\mathit{FCN}}\xspace(\Sigma) = \emptyset$
\item $eval(st, \Sigma) \in \ensuremath{\mathit{SREL}}\xspace(n)$
\item $\{\tup{f_D(v_1), \dots, f_D(v_n)} \mid
\tup{v_1, \dots, v_n} \in eval(st, \Sigma)\}
= \denot{M(st), st}{\phi}$
\end{enumerate}
}
\end{theorem}
That is, the translation $\Sigma$ of $\phi$ contains no free column
references, and it evaluates to a snapshot relation of $n$ attributes,
whose tuples represent $\denot{M(st), st}{\phi}$.
\begin{theorem}
\label{yn_theorem}
{\rm If $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, $st \in \ensuremath{\mathit{PTS}}\xspace$, $\lambda$ is a \textsc{Tsql2}\xspace
expression, $g^{db} \in G^{db}$, $eval(st,
\lambda, g^{db}) \in D_P^*$, $\corn{\phi} = \tup{\tau_1, \dots,
\tau_n}$, and $\Sigma = trans(\phi, \lambda)$, then:
\begin{enumerate}
\item $\ensuremath{\mathit{FCN}}\xspace(\Sigma) \subseteq \ensuremath{\mathit{FCN}}\xspace(\lambda)$
\item $eval(st, \Sigma, g^{db}) \in \ensuremath{\mathit{VREL}_P}\xspace(n)$
\item $\tup{v_1, \dots, v_n; v_t} \in
eval(st, \Sigma, g^{db})$ iff for some $g \in G$: \\
$\denot{M(st), g}{\tau_1} = f_D(v_1)$, \dots,
$\denot{M(st), g}{\tau_n} = f_D(v_n)$, and \\
$\denot{M(st), st, f_D(v_t), f_D(eval(st, \lambda, g^{db})), g}{\phi} = T$
\end{enumerate}
}
\end{theorem}
$\tau_1, \tau_2, \dots, \tau_n$ are all the constants in
predicate argument positions and all the variables in $\phi$ (section
\ref{TOP_mods}). Clause 3 intuitively means that the tuples of
$eval(st, \Sigma, g^{db})$ represent all the possible combinations of
values of $\tau_1, \dots, \tau_n$ and event times $et$, such that
$\denot{M(st), st, et, lt, g}{\phi} = T$, where $lt$ is the element of
$\ensuremath{\mathit{PERIODS}}\xspace^*$ represented by $\lambda$.
I now prove that theorems \ref{wh_theorem} and \ref{yn_theorem} imply that
\pref{formulation:6} and \pref{formulation:4} hold for every $st \in
\ensuremath{\mathit{PTS}}\xspace$ and $\phi \in \ensuremath{\mathit{FORMS}}\xspace$, i.e.\ that $trans$ has the desired properties.
\textbf{Proof of \pref{formulation:6}:} Let $st \in
\ensuremath{\mathit{PTS}}\xspace$ and $\phi \in \ensuremath{\mathit{FORMS}}\xspace$. We need to show that
\pref{formulation:6} holds. Since $\ensuremath{\mathit{FORMS}}\xspace = \ensuremath{\mathit{WHFORMS}}\xspace
\union \ensuremath{\mathit{YNFORMS}}\xspace$, the hypothesis that $\phi \in \ensuremath{\mathit{FORMS}}\xspace$ implies that
$\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$ or $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$. In both cases
\pref{formulation:6} holds:
\begin{itemize}
\item If $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$, then by theorem \ref{wh_theorem},
$\ensuremath{\mathit{FCN}}\xspace(trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) = \emptyset$, i.e.\ \pref{formulation:6}
holds.
\item If $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, then by theorem \ref{yn_theorem} and
lemma \ref{linit_lemma}, the following holds, which implies that
\pref{formulation:6} also holds.
\[
\ensuremath{\mathit{FCN}}\xspace(trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) \subseteq \ensuremath{\mathit{FCN}}\xspace(\ensuremath{\mathit{\lambda_{init}}}\xspace) = \emptyset
\]
\end{itemize}
\textbf{Proof of \pref{formulation:4}:} Let $st \in
\ensuremath{\mathit{PTS}}\xspace$ and $\phi \in \ensuremath{\mathit{FORMS}}\xspace$. Again, it will either be the case
that $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$ or $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$.
If $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$, then by theorem \ref{wh_theorem} the following
is true:
\[
\{\tup{f_D(v_1), \dots, f_D(v_n)} \mid
\tup{v_1, \dots, v_n} \in eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace))\}
= \denot{M(st), st}{\phi}
\]
The definition of $interp$, the hypothesis that $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$,
and the equation above imply \pref{formulation:4}.
It remains to prove \pref{formulation:4} for
$\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$. Let $\corn{\phi} = \tup{\tau_1, \dots,\tau_n}$.
By lemma \ref{linit_lemma}, for every $g^{db} \in G^{db}$, $eval(st,
\ensuremath{\mathit{\lambda_{init}}}\xspace, g^{db}) = eval(st, \ensuremath{\mathit{\lambda_{init}}}\xspace) \in D_P$ and $f_D(eval(st,
\ensuremath{\mathit{\lambda_{init}}}\xspace)) = \ensuremath{\mathit{PTS}}\xspace$. Also, \pref{formulation:6} (proven above) implies
that $eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace), g^{db})$ does not depend on
$g^{db}$. Then, from theorem \ref{yn_theorem} we get
\pref{formulation:10} and \pref{formulation:12}.
\begin{examples}
\item \label{formulation:10}
$eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) \in \ensuremath{\mathit{VREL}_P}\xspace(n)$
\item \label{formulation:12}
$\tup{v_1, \dots, v_n; v_t} \in eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace))$ iff for
some $g \in G$: \\
$\denot{M(st), g}{\tau_1} = f_D(v_1), \dots,
\denot{M(st), g}{\tau_n} = f_D(v_n)$, and \\
$\denot{M(st), st, f_D(v_t), \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T$
\notag
\end{examples}
The hypothesis that $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ and the definition of
$\mathit{interp}$ imply that the left-hand side of
\pref{formulation:4} has the following values:
\[
\begin{cases}
T, & \text{if } eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) \not= \emptyset \\
F, & \text{if } eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) = \emptyset
\end{cases}
\]
The hypothesis that $\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$ and the definition of
$\denot{M(st), st}{\phi}$ (section \ref{denotation}) imply that the
right-hand side of \pref{formulation:4} has the following values:
\[
\begin{cases}
T, & \text{if for some } g \in G \text{ and } et \in \ensuremath{\mathit{PERIODS}}\xspace, \;
\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T \\
F, & \text{otherwise}
\end{cases}
\]
Hence, to prove \pref{formulation:4} it is enough to prove
\pref{formulation:15}.
\begin{examples}
\item \label{formulation:15}
$eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) \not= \emptyset$ iff \\
for some $g \in G$ and $et \in \ensuremath{\mathit{PERIODS}}\xspace$,
$\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T$
\end{examples}
I first prove the forward direction of \pref{formulation:15}. If
it is true that $eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) \not= \emptyset$,
by \pref{formulation:10} $eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace))$ contains at least
one tuple of the form $\tup{v_1, \dots, v_n; v_t}$, i.e.\
\pref{formulation:20} is true.
\begin{equation}
\label{formulation:20}
\tup{v_1, \dots, v_n; v_t} \in eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace))
\end{equation}
\pref{formulation:20} and \pref{formulation:12} imply that for some $g
\in G$, \pref{formulation:21} holds.
\begin{equation}
\label{formulation:21}
\denot{M(st), st, f_D(v_t), \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T
\end{equation}
\pref{formulation:10} and \pref{formulation:20} imply that $v_t$ is
the time-stamp of a tuple in a relation of \ensuremath{\mathit{VREL}_P}\xspace, which implies that
$f_D(v_t) \in \ensuremath{\mathit{PERIODS}}\xspace$. Let $et = f_D(v_t)$. Then,
\pref{formulation:21} becomes \pref{formulation:22}, where $g \in G$
and $et = f_D(v_t) \in \ensuremath{\mathit{PERIODS}}\xspace$. The forward direction of
\pref{formulation:15} has been proven.
\begin{equation}
\label{formulation:22}
\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T
\end{equation}
I now prove the backwards direction of \pref{formulation:15}. I
assume that $g \in G$, $et \in \ensuremath{\mathit{PERIODS}}\xspace$, and $\denot{M(st),
st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T$. Let $v_t = \ensuremath{f_D^{-1}}\xspace(et)$, which implies that
$et = f_D(v_t)$. Then \pref{formulation:23} holds.
\begin{equation}
\label{formulation:23}
\denot{M(st), st, f_D(v_t), \ensuremath{\mathit{PTS}}\xspace, g}{\phi} = T
\end{equation}
Let $v_1 = \ensuremath{f_D^{-1}}\xspace(\denot{M(st),g}{\tau_1})$, \dots, $v_n =
\ensuremath{f_D^{-1}}\xspace(\denot{M(st),g}{\tau_n})$. This implies that
\pref{formulation:24} also holds.
\begin{equation}
\label{formulation:24}
\denot{M(st), g}{\tau_1} = f_D(v_1), \; \dots, \; \denot{M(st),
g}{\tau_n} = f_D(v_n)
\end{equation}
\pref{formulation:24}, \pref{formulation:23}, the hypothesis that $g
\in G$, and \pref{formulation:12} imply \pref{formulation:25}, which
in turn implies that $eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace)) \not= \emptyset$. The
backwards direction of \pref{formulation:15} has been proven.
\begin{equation}
\label{formulation:25}
\tup{v_1, \dots, v_n; v_t} \in eval(st, trans(\phi, \ensuremath{\mathit{\lambda_{init}}}\xspace))
\end{equation}
This concludes the proof of \pref{formulation:4}. I have proven that
$trans$ satisfies \pref{formulation:6} and \pref{formulation:4} for
every $\phi \in \ensuremath{\mathit{FORMS}}\xspace$ and $st \in \ensuremath{\mathit{PTS}}\xspace$, i.e.\ that $trans$ has all the
desired properties.
\section{The translation rules} \label{trans_rules}
The values (\sql{SELECT} statements) of $trans$ are specified by a set
of ``translation rules''. These rules are of two kinds: (a) base
(non-recursive) rules that specify $trans(\phi, \lambda)$ when $\phi$
is an atomic formula or a formula of the form $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1,
\dots, \tau_n)]$; and (b) recursive rules that specify
$trans(\phi, \lambda)$ in all other cases, by recursively calling
other translation rules to translate subformulae of $\phi$. In this
section, I attempt to convey the intuitions behind the design of the
translation rules, and to illustrate the functionality of some
representative rules.
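
Before looking at individual rules, the overall control structure of
$\mathit{trans}$ can be pictured as a recursive case analysis on the top-level
operator of $\phi$. The Python skeleton below is only an illustrative sketch
of that structure (the tuple representation of formulae and the helper
functions are assumptions, not the prototype's code); its \ensuremath{\mathit{Past}}\xspace and
\ensuremath{\mathit{At}}\xspace cases anticipate the rules discussed later in this section and in
appendix \ref{trans_proofs}, and it assumes that chronons correspond to days.
{\small
\begin{verbatim}
# Illustrative sketch of the recursive structure of trans (not the
# prototype's code).  Formulae are nested tuples; the helpers below are
# stubs standing for the base rule for predicates/Culm, the SELECT
# wrapper of the Past rule, and h'_cons.  Chronons are assumed to be days.
def base_rule(op, phi, lam):          # stub: rules for predicates and Culm
    return "(SELECT ...)"

def wrap_past(beta, inner_select):    # stub: SELECT wrapper of the Past rule
    return "(SELECT DISTINCT VALID(t), ... FROM %s AS t)" % inner_select

def h_cons_prime(kappa):              # stub: h'_cons
    return "PERIOD 'today' - INTERVAL '1' DAY"   # e.g. for 'yesterday'

def trans(phi, lam):
    op = phi[0]
    if op in ("pred", "culm"):        # base (non-recursive) rules
        return base_rule(op, phi, lam)
    if op == "past":                  # Past[beta, phi']: narrow lt, recurse
        lam2 = ("INTERSECT(%s, PERIOD(TIMESTAMP 'beginning', "
                "TIMESTAMP 'now' - INTERVAL '1' DAY))" % lam)
        return wrap_past(phi[1], trans(phi[2], lam2))
    if op == "at_const":              # At[kappa, phi']: narrow lt, recurse
        lam2 = "INTERSECT(%s, %s)" % (lam, h_cons_prime(phi[1]))
        return trans(phi[2], lam2)
    raise ValueError("no rule sketched for %r" % (op,))
\end{verbatim}
}
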
In the case of a yes/no formula $\phi$, the aim is for the resulting
\sql{SELECT} statement to return a relation of $\ensuremath{\mathit{VREL}_P}\xspace(n)$ that shows all
the combinations of event-times $et$ and values of $\tau_1,
\dots, \tau_n$ ($\tup{\tau_1, \dots, \tau_n} = \corn{\phi}$)
for which $\phi$ is satisfied. More precisely, the tuples of the
relation must represent all the combinations of event times $et$ and
world objects assigned (by $\ensuremath{\mathit{f_{cons}}}\xspace(st)$ and some variable assignment
$g$) to $\tau_1, \dots, \tau_n$, for which $\denot{M(st), st, et, lt,
g}{\phi} = T$, where $lt$ is the element of $\ensuremath{\mathit{PERIODS}}\xspace^*$ represented
by $\lambda$. In each tuple $\tup{v_1, \dots, v_n; v_t}$, $v_t$
represents $et$, while $v_1, \dots, v_n$ represent the world objects
of $\tau_1, \dots, \tau_n$. For example, the rule for predicates is as
follows:
\textbf{Translation rule for predicates:} \\
$trans(\pi(\tau_1, \dots, \tau_n), \lambda) \defeq$\\
\sql{(}\select{SELECT DISTINCT $\alpha.1$, $\alpha.2$, \dots, $\alpha.n$ \\
VALID VALID($\alpha$) \\
FROM ($\ensuremath{\mathit{h'_{pfuns}}}\xspace(\pi, n)$)(SUBPERIOD) AS $\alpha$ \\
WHERE \dots \\
\ \ AND \dots \\
\ \ \vdots \\
\ \ AND \dots \\
\ \ AND $\lambda$ CONTAINS VALID($\alpha$))}
where the ``\dots''s in the \sql{WHERE} clause stand for all the
strings in $S_1 \union S_2$, and:
\begin{gather*}
S_1 =
\{\text{``}\alpha.i = \ensuremath{\mathit{h'_{cons}}}\xspace(\tau_i)\text{''} \mid
i \in \{1,2,3,\dots,n\} \text{ and } \tau_i \in \ensuremath{\mathit{CONS}}\xspace\} \\
S_2 =
\{\text{``}\alpha.i = \alpha.j\text{''} \mid
i,j \in \{1,2,3,\dots,n\}, \; i < j, \; \tau_i = \tau_j, \text{ and }
\tau_i, \tau_j \in \ensuremath{\mathit{VARS}}\xspace\}
\end{gather*}
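
To make the construction of the \sql{WHERE} clause concrete, the following
Python fragment (an illustrative sketch only; the pair representation of
predicate arguments and the function names are assumptions) builds the strings
of $S_1 \union S_2$ for a given argument tuple and correlation name.
{\small
\begin{verbatim}
# Illustrative sketch: building the WHERE-clause strings S1 and S2 of
# the rule for predicates.  Each argument is a ("const", name) or
# ("var", name) pair; h_cons_prime maps a TOP constant to its TSQL2
# value expression; alpha is the correlation name used in the rule.
def where_constraints(args, alpha, h_cons_prime):
    s1 = ["%s.%d = %s" % (alpha, i, h_cons_prime(name))
          for i, (kind, name) in enumerate(args, start=1)
          if kind == "const"]
    s2 = ["%s.%d = %s.%d" % (alpha, i, alpha, j)
          for i, (ki, ni) in enumerate(args, start=1)
          for j, (kj, nj) in enumerate(args, start=1)
          if i < j and ki == "var" and kj == "var" and ni == nj]
    return s1 + s2
\end{verbatim}
}
For $inspecting(i158, j\_adams, uk160)$ and $\alpha =$ \sql{t1}, this yields
the three equality constraints that appear in the worked example
\pref{trans:1} below (with the values that $\ensuremath{\mathit{h'_{cons}}}\xspace$ assigns to
the three constants).
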
I assume that whenever the
translation rule is invoked, a new correlation name $\alpha$ is used,
that is obtained by calling a
\emph{generator of correlation names}. Whenever called,
the generator returns a new correlation name that has never
been generated before. I assume that the correlation names
of the generator are of some distinctive form (e.g.\
\sql{t1}, \sql{t2}, \sql{t3},~\dots), and that the correlation
names in the \sql{SELECT} statements returned by \ensuremath{\mathit{h'_{pfuns}}}\xspace,
\ensuremath{\mathit{h'_{culms}}}\xspace, \ensuremath{\mathit{h'_{cparts}}}\xspace, and \ensuremath{\mathit{h'_{gparts}}}\xspace are not of this
distinctive form. I also assume that some mechanism is in
place to ensure that no correlation name of the distinctive form of
the generator can be used before it has been generated.
The use of the generator means that $\mathit{trans}$ is strictly
speaking not a pure function, since the same $\pi$ and $\tau_1, \dots,
\tau_n$ lead to slightly different \sql{SELECT} statements whenever
$trans(\pi(\tau_1, \dots, \tau_n), \lambda)$ is computed: each time
the resulting statement contains a different $\alpha$ (similar
comments apply to other translation
rules). There are ways to make $\mathit{trans}$ a pure function, but
these complicate the translation rules and
the proof of their correctness, without offering any practical
advantage.
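
A generator of the assumed kind can be pictured as a simple counter; the
sketch below (Python, illustrative only, not the prototype's code) produces
names of the distinctive form mentioned above.
{\small
\begin{verbatim}
# Illustrative sketch of the correlation names generator: every call
# returns a fresh name of the distinctive form t1, t2, t3, ...
import itertools

_counter = itertools.count(1)

def new_correlation_name():
    return "t%d" % next(_counter)
\end{verbatim}
}
Two successive calls return, for example, \sql{t1} and then \sql{t2}, which is
why two computations of the same $\mathit{trans}$ call yield \sql{SELECT}
statements that differ only in their correlation names.
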
Let us consider, for example, the predicate $inspecting(i158,
j\_adams, uk160)$. According to section \ref{denotation},
$\denot{M(st), st, et, lt, g}{inspecting(i158, j\_adams, uk160)} = T$ iff $et \subper lt$ and $et \subper p$,
where:
\[
p \in \ensuremath{\mathit{f_{pfuns}}}\xspace(st)(inspecting, 3)(\denot{M(st),g}{i158},
\denot{M(st),g}{j\_adams}, \denot{M(st),g}{uk160})
\]
Let us assume that $\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting, 3)$ and
$\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(inspecting, 3)$ are \pref{hpfuns:4} and \pref{hpfuns:6}
respectively (p.~\pageref{hpfuns:4}), that $i158$, $j\_adams$, and
$uk160$ correspond to the obvious attribute values of \pref{hpfuns:6},
and that $\lambda$ is \sql{PERIOD '[9:00am 1/5/95 - 9:30pm 1/5/95]'}.
$lt$ is the period represented by $\lambda$. By the definition of
\ensuremath{\mathit{f_{pfuns}}}\xspace of section \ref{resulting_model}:
\[
\ensuremath{\mathit{f_{pfuns}}}\xspace(st)(inspecting,3)(\denot{M(st),g}{i158},
\denot{M(st),g}{j\_adams}, \denot{M(st),g}{uk160}) = \{p_1, p_2\}
\]
where $p_1$ and $p_2$ are the periods of the first two
tuples of \pref{hpfuns:6}. The
denotation of $inspecting(i158, j\_adams, uk160)$ is $T$ for all
the $et$s that are subperiods of $p_1$ or $p_2$ and also
subperiods of $lt$.
The translation rule above maps $inspecting(i158, j\_adams,
uk160)$ to \pref{trans:1}, where $\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting, 3)$ is the
\sql{SELECT} statement of \pref{hpfuns:4} (that returns \pref{hpfuns:6}).
\begin{examps}
\item \label{trans:1}
\sql{(}\select{SELECT DISTINCT t1.1, t1.2, t1.3 \\
VALID VALID(t1) \\
FROM ($\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting, 3)$)(SUBPERIOD) AS t1 \\
WHERE t1.1 = 'i158' \\
\ \ AND t1.2 = 'J.Adams' \\
\ \ AND t1.3 = 'UK160' \\
\ \ AND PERIOD '[9:00am 1/5/95 - 9:30pm 1/5/95]'
CONTAINS VALID(t1))}
\end{examps}
\pref{trans:1} returns \pref{trans:2}, where the time-stamps
correspond to all the subperiods of $p_1$ and $p_2$ ($p_1$ and $p_2$
are the periods of the first two time-stamps of
\pref{hpfuns:6}) that are also subperiods of $lt$ (the period
represented by $\lambda$).
\begin{examps}
\item
\dbtableb{|l|l|l||l|}
{$i158$ & $J.Adams$ & $UK160$ & $[9\text{:}00am \; 1/5/95, \;
9\text{:}30am \; 1/5/95]$ \\
$i158$ & $J.Adams$ & $UK160$ & $[9\text{:}10am \; 1/5/95, \; 9\text{:}15am \; 1/5/95]$ \\
$i158$ & $J.Adams$ & $UK160$ & $[9\text{:}20am \; 1/5/95, \; 9\text{:}25am \; 1/5/95]$ \\
\ \dots & \ \dots & \ \dots & \ \dots
}
\label{trans:2}
\end{examps}
In other words, the time-stamps of \pref{trans:2} represent correctly
all the $et$s where the denotation of $inspecting(i158, j\_adams,
uk160)$ is $T$. In this example, all the predicate arguments are
constants. Hence, there can be no variation in the values of the
arguments, and the values of the explicit attributes in \pref{trans:2}
are the same in all the tuples. When some of the predicate arguments
are variables, the values of the corresponding explicit attributes are
not necessarily fixed.
The $S_2$ constraints in the \sql{WHERE} clause of the translation
rule are needed when the predicate contains the same
variable in more than one argument position. In those cases,
$S_2$ requires the attributes that correspond to the
argument positions where the variable appears to have the same
values. $S_2$ contains redundant
constraints when some variable appears in more than two
argument positions. For example, in $\pi(\beta,
\beta, \beta)$ ($\beta \in \ensuremath{\mathit{VARS}}\xspace$), $S_2$ requires the tuples
$\tup{v_1, v_2, v_3; v_t}$ of the resulting relation to satisfy:
$v_1 = v_2$, $v_1 = v_3$, and $v_2 = v_3$. The third
constraint is redundant, because it follows from the others. The
prototype \textsc{Nlitdb}\xspace employs a slightly more complex
definition of $S_2$ that does not generate the third
constraint. Similar comments apply to the rule for
$\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$ below, and the rules
for conjunction, $\ensuremath{\mathit{At}}\xspace[\phi_1, \phi_2]$, $\ensuremath{\mathit{Before}}\xspace[\phi_1, \phi_2]$, and
$\ensuremath{\mathit{After}}\xspace[\phi_1, \phi_2]$ (appendix \ref{trans_proofs}).
\textbf{Translation rule for $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$:}\\
$trans(\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)], \lambda) \defeq$\\
\sql{(}\select{SELECT DISTINCT
$\alpha_1.1$, $\alpha_1.2$, \dots, $\alpha_1.n$ \\
VALID PERIOD(BEGIN(VALID($\alpha_1$)),
END(VALID($\alpha_1$))) \\
FROM ($\ensuremath{\mathit{h'_{pfuns}}}\xspace(\pi, n)$)(ELEMENT) AS $\alpha_1$, \\
\ \ \ \ \ ($\ensuremath{\mathit{h'_{culms}}}\xspace(\pi, n)$) AS $\alpha_2$ \\
WHERE $\alpha_1.1 = \alpha_2.1$ \\
\ \ AND $\alpha_1.2 = \alpha_2.2$ \\
\ \ \ \ \vdots \\
\ \ AND $\alpha_1.n = \alpha_2.n$ \\
\ \ AND \dots \\
\ \ \ \ \vdots \\
\ \ AND \dots \\
\ \ AND $\lambda$ CONTAINS
PERIOD(BEGIN(VALID($\alpha_1$)),
END(VALID($\alpha_1$))))}
Whenever the rule is used, $\alpha_1$ and $\alpha_2$ are
two new different correlation names, obtained by calling the
correlation names generator after $\lambda$ has been supplied. The
``\dots'' in the \sql{WHERE} clause stand for all the strings in
$S_1 \union S_2$, where $S_1$ and $S_2$ are as in the translation rule
for predicates, except that $\alpha$ is now $\alpha_1$.
The rule for $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$ is
similar to that for $\pi(\tau_1, \dots, \tau_n)$. The resulting
\sql{SELECT} statement returns an element of $\ensuremath{\mathit{VREL}_P}\xspace(n)$ that shows
the $et$s and the values of the predicate arguments for which the
denotation of $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$ is $T$. In the case
of $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$, however, the generated
relation contains only tuples $\tup{v_1, \dots, v_n; v_t}$, for which
$\tup{v_1, \dots, v_n}$ appears in $\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n)$ (the relation
returned by $\ensuremath{\mathit{h'_{culms}}}\xspace(\pi, n)$). That is, the situation of
$\pi(\tau_1, \dots, \tau_n)$ must reach its climax at the latest
time-point where it is ongoing. Also, $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n)$
(the relation returned by $\ensuremath{\mathit{h'_{pfuns}}}\xspace(\pi,n)$) is coalesced using
\sql{(ELEMENT)}. This causes all tuples of $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n)$
that refer to the same situation to be merged into one tuple,
time-stamped by a temporal element that is the union of all the
periods where the situation is ongoing. Let us refer to this coalesced
version of $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(\pi, n)$ as $r$. $\alpha_1$ ranges
over the tuples of $r$, while $\alpha_2$ over the tuples of
$\ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi,n)$. The relation returned by
$trans(\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)], \lambda)$ contains all
tuples $\tup{v_1, \dots, v_n; v_t}$, such that $\tup{v_1, \dots, v_n;
v_t'} \in r$, $v_t$ represents the period that starts at the beginning
of the temporal element of $v_t'$ and ends at the end of the temporal
element of $v_t'$, $\tup{v_1, \dots, v_n} \in \ensuremath{\mathit{h_{culms}}}\xspace(st)(\pi, n)$,
and $v_t$'s period (i.e.\ $et$) is a subperiod of $\lambda$'s period
(i.e.\ $lt$). $S_1$ and $S_2$ play the same role as in the translation
rule for predicates.
Let us assume that $\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting,3)$ and
$\ensuremath{\mathit{h'_{culms}}}\xspace(inspecting,3)$ are \pref{hpfuns:4} and \pref{hpfuns:5}
respectively, that $\ensuremath{\mathit{h_{pfuns}}}\xspace(st)(inspecting,3)$ and
$\ensuremath{\mathit{h_{culms}}}\xspace(st)(inspecting,3)$ are \pref{hpfuns:6} and
\pref{hpfuns:7}, and that $\lambda =$ \sql{PERIOD '[1/5/95 -
18/11/95]'}. The translation rule above maps $\ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v,
person^v, flight^v)]$ to \pref{trans:5}.
\begin{examps}
\item \label{trans:5}
\sql{(}\select{SELECT DISTINCT t1.1, t1.2, t1.3 \\
VALID PERIOD(BEGIN(VALID(t1)),
END(VALID(t1))) \\
FROM ($\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting,3)$)(ELEMENT) AS t1, \\
\ \ \ \ \ ($\ensuremath{\mathit{h'_{culms}}}\xspace(inspecting,3)$) AS t2 \\
WHERE t1.1 = t2.1 \\
\ \ AND t1.2 = t2.2 \\
\ \ AND t1.3 = t2.3 \\
\ \ AND PERIOD '[1/5/95 - 18/11/95]' CONTAINS \\
\ \ \ \ \ \ PERIOD(BEGIN(VALID(t1)),
END(VALID(t1))))}
\end{examps}
\pref{trans:5} returns \pref{trans:6}. There is (correctly) no tuple
for inspection $i160$: the semantics of \ensuremath{\mathit{Culm}}\xspace (section
\ref{culm_op}) requires the inspection to reach its completion at the
latest time-point where it is ongoing; according to \pref{hpfuns:7},
this is not the case for $i160$. There is also (correctly) no tuple
for $i214$: the semantics of \ensuremath{\mathit{Culm}}\xspace
requires $et$ (the time of the inspection) to be a
subperiod of $lt$ ($\lambda$'s period), but $i214$ does not occur
within $lt$. Finally, \pref{trans:6} does not
contain tuples for the subperiods of [9:00am 1/5/95, 10:25am
1/5/95] and [8:00am 16/11/95, 8:20am 16/11/95]. This is in accordance
with the semantics of \ensuremath{\mathit{Culm}}\xspace, that allows $\ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, person^v,
flight^v)]$ to be true only at $et$s that cover entire inspections
(from start to completion).
\begin{examps}
\item
\dbtableb{|l|l|l||l|}
{$i158$ & $J.Adams$ & $UK160$ & $[9\text{:}00am \; 1/5/95, \;
10\text{:}25am \; 1/5/95]$ \\
$i205$ & $T.Smith$ & $BA737$ & $[8\text{:}00am \; 16/11/95, \;
8\text{:}20am \; 16/11/95]$
}
\label{trans:6}
\end{examps}
All the other translation rules for yes/no formulae are recursive. For
example, $\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$ is translated using the following:
\textbf{Translation rule for $\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$:}\\
\label{past_trans_discuss}
$trans(\ensuremath{\mathit{Past}}\xspace[\beta, \phi'], \lambda) \defeq$\\
\sql{(}\select{SELECT DISTINCT VALID($\alpha$),
$\alpha$.1, $\alpha$.2, \dots, $\alpha$.$n$ \\
VALID VALID($\alpha$) \\
FROM $trans(\phi', \lambda')$ AS $\alpha$)}
$\lambda'$ is the expression \sql{INTERSECT($\lambda$, PERIOD(TIMESTAMP
'beginning', TIMESTAMP 'now' - INTERVAL '1' $\chi$))}, $\chi$
stands for the \textsc{Tsql2}\xspace name of the granularity of chronons (e.g.\
\sql{DAY}), and $n$ is the length of $\corn{\phi'}$. Whenever the rule
is used, $\alpha$ is a new correlation name obtained by calling the
correlation names generator.
The rule for
$\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$ calls recursively $\mathit{trans}$ to translate
$\phi'$. $\phi'$ is translated with
respect to $\lambda'$, which represents the intersection of
the period of the original $\lambda$ with the period that covers all
the time up to (but not including) the present chronon. This
reflects the semantics of \ensuremath{\mathit{Past}}\xspace (section \ref{past_op}),
that narrows $lt$ to $lt
\intersect [t_{first}, st)$. The relation returned by
$trans(\ensuremath{\mathit{Past}}\xspace[\beta, \phi'], \lambda)$ is the same as that of
$trans(\phi', \lambda')$, except that the relation of
$trans(\ensuremath{\mathit{Past}}\xspace[\beta, \phi'], \lambda)$ contains an additional explicit
attribute, that corresponds to the $\beta$ of $\ensuremath{\mathit{Past}}\xspace[\beta,\phi']$. The
values of that attribute are the same as the corresponding time-stamps
(that represent $et$). This reflects the semantics of
$\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$, that requires the value of $\beta$ to be $et$.
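
For example, with chronons corresponding to minutes (so that $\chi$ is
\sql{MINUTE}) and $\lambda = \ensuremath{\mathit{\lambda_{init}}}\xspace$, $\lambda'$ would be
\sql{INTERSECT(PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'forever'),
PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'now' - INTERVAL '1' MINUTE))}; the
same narrowing appears as the outer \sql{INTERSECT} in the code of figure
\ref{optimise_code} (section \ref{tsql2_opt}).
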
As a further example, $\ensuremath{\mathit{At}}\xspace[\kappa, \phi']$ ($\kappa \in \ensuremath{\mathit{CONS}}\xspace$) is
translated using the following:
\textbf{Translation rule for $\ensuremath{\mathit{At}}\xspace[\kappa, \phi']$:}\\
$trans(\ensuremath{\mathit{At}}\xspace[\kappa, \phi'], \lambda) \defeq trans(\phi', \lambda')$,
where $\lambda'$ is \sql{INTERSECT($\lambda$,
$\ensuremath{\mathit{h'_{cons}}}\xspace(\kappa)$)}.
The translation of $\ensuremath{\mathit{At}}\xspace[\kappa, \phi']$ is the same as the
translation of $\phi'$, but $\phi'$ is translated with respect to
$\lambda'$, which represents the intersection of $\lambda$'s period
with that of $\kappa$. This reflects the fact
that in $\ensuremath{\mathit{At}}\xspace[\kappa, \phi']$, the \ensuremath{\mathit{At}}\xspace narrows $lt$ to the
intersection of the original $lt$ with $\kappa$'s period.
There are separate translation rules for
$\ensuremath{\mathit{At}}\xspace[\sigma_c, \beta, \phi']$, $\ensuremath{\mathit{At}}\xspace[\sigma_g, \beta, \phi']$, and
$\ensuremath{\mathit{At}}\xspace[\phi_1, \phi_2]$ ($\sigma_c \in \ensuremath{\mathit{CPARTS}}\xspace$, $\sigma_g \in
\ensuremath{\mathit{GPARTS}}\xspace$, and $\phi', \phi_1, \phi_2 \in \ensuremath{\mathit{YNFORMS}}\xspace$).
The complete set of translation rules for yes/no formulae is given in
appendix \ref{trans_proofs}, along with a formal proof that
$trans(\phi, \lambda)$ satisfies theorem \ref{yn_theorem}. Theorem
\ref{yn_theorem} is proven by induction on the syntactic complexity of
$\phi$. I first prove that theorem \ref{yn_theorem} holds if $\phi$ is
a predicate or $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$. For all other
$\phi \in \ensuremath{\mathit{YNFORMS}}\xspace$, $\phi$ is non-atomic. In those cases, I prove
that theorem \ref{yn_theorem} holds if it holds for the
subformulae of $\phi$.
Let us now consider wh-formulae. These have
the form $?\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \; ?\beta_k
\; \phi'$ or $?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \;
\dots \; ?\beta_k \; \phi'$, where $\phi' \in \ensuremath{\mathit{YNFORMS}}\xspace$ (section
\ref{top_syntax}). The first case is covered by the following rule.
(The rules for wh-formulae define $trans(\phi, \lambda)$
only for $\lambda = \ensuremath{\mathit{\lambda_{init}}}\xspace$. The values of $\mathit{trans}$ for $\phi
\in \ensuremath{\mathit{WHFORMS}}\xspace$ and $\lambda \not= \ensuremath{\mathit{\lambda_{init}}}\xspace$ are not used anywhere and can
be chosen arbitrarily. Intuitively, for $\phi \in \ensuremath{\mathit{WHFORMS}}\xspace$ the goal is
to define $trans(\phi, \lambda)$ so that it satisfies theorem
\ref{wh_theorem}. That theorem is indifferent to the values of
$\mathit{trans}$ for $\lambda \not= \ensuremath{\mathit{\lambda_{init}}}\xspace$.)
\textbf{Translation rule for $?\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \;
?\beta_k \; \phi'$:} \\
$trans(?\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \;
?\beta_k \; \phi', \ensuremath{\mathit{\lambda_{init}}}\xspace) \defeq$ \\
\sql{(}\select{SELECT DISTINCT SNAPSHOT $\alpha.\omega_1$,
$\alpha.\omega_2$, \dots, $\alpha.\omega_k$ \\
FROM $trans(\phi', \ensuremath{\mathit{\lambda_{init}}}\xspace)$ AS $\alpha$)}
Whenever the rule is used, $\alpha$ is a new correlation
name, obtained by calling the correlation names generator. Assuming
that $\corn{\phi'} = \tup{\tau_1, \dots,
\tau_n}$, for every $i \in \{1,2,3, \dots, k\}$:
\[
\omega_i = min(\{j \mid
j \in \{1,2,3,\dots,n\} \text{ and } \tau_j = \beta_i\})
\]
That is, the first position (from left to right) where $\beta_i$
appears in $\tup{\tau_1, \dots, \tau_n}$ is the $\omega_i$-th one.
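
A sketch of this computation (illustrative Python only; \texttt{corn} stands
for $\corn{\phi'}$ as a list of variable and constant names, and the
interrogative variables are assumed to occur in it, as the syntax of
\textsc{Top}\xspace guarantees):
{\small
\begin{verbatim}
# Illustrative sketch: omega_i is the first (1-based, left-to-right)
# position at which the interrogative variable beta_i occurs in corn(phi').
def omegas(betas, corn):
    return [corn.index(beta) + 1 for beta in betas]

# E.g. omegas(["w1", "w2"], ["e", "occr", "w1", "w2"]) returns [3, 4],
# as in the worked example below.
\end{verbatim}
}
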
Intuitively, we want $?\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \;
?\beta_k \; \phi'$ to be translated to a \sql{SELECT} statement
that returns a snapshot relation, whose tuples represent
$\denot{M(st), st}{?\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \; ?\beta_k
\; \phi'}$. According to section
\ref{denotation}, $\denot{M(st), st}{?\beta_1 \; ?\beta_2 \; ?\beta_3
\dots \; ?\beta_k \; \phi'}$ is the set of all tuples that represent
combinations of values assigned to $\beta_1, \dots,
\beta_k$ by some $g \in G$, such that for some $et \in \ensuremath{\mathit{PERIODS}}\xspace$,
$\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi'} = T$.
By theorem \ref{yn_theorem}, the relation returned
by $trans(\phi', \ensuremath{\mathit{\lambda_{init}}}\xspace)$ (see the translation rule) is a valid-time
relation, whose tuples show all the possible combinations of $et$s and
values assigned (by $\ensuremath{\mathit{f_{cons}}}\xspace(st)$ and some $g \in G$) to $\tau_1, \dots,
\tau_n$, for which $\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi'} = T$. The
syntax of \textsc{Top}\xspace (section \ref{top_syntax}) guarantees that $\beta_1, \dots,
\beta_k$ appear within $\phi'$. This in turn guarantees that $\beta_1,
\dots, \beta_k$ appear among $\tau_1, \dots, \tau_n$, i.e.\ the
relation of $trans(\phi',\ensuremath{\mathit{\lambda_{init}}}\xspace)$ contains attributes for
$\beta_1,\dots,\beta_k$. To find
all the possible combinations of values of $\beta_1, \dots, \beta_k$
for which (for some $et$) $\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi'} = T$,
we simply need to pick (to ``project'' in relational terms) from the
relation of $trans(\phi',
\ensuremath{\mathit{\lambda_{init}}}\xspace)$ the attributes that correspond to
$\beta_1, \dots,
\beta_k$. For $i \in \{1,2,3,\dots,k\}$, $\beta_i$ may appear
more than once in $\phi'$. In this case, the relation of $trans(\phi',
\ensuremath{\mathit{\lambda_{init}}}\xspace)$ contains more than one attribute for $\beta_i$ (these
attributes have the same values in each tuple). We only need to
project one of the attributes that correspond to $\beta_i$. The
translation rule projects only the first one; this is the
$\omega_i$-th attribute of $trans(\phi', \ensuremath{\mathit{\lambda_{init}}}\xspace)$, the attribute that
corresponds to the first (from left to right) $\tau_j$ in
$\tup{\tau_1, \dots,
\tau_n}$ that is equal to $\beta_i$.
Let us consider, for example, the following wh-formula (\qit{Who
inspected what?}):
\begin{equation}
\label{trans:10.1}
?w1^v \; ?w2^v \; \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w1^v, w2^v)]]
\end{equation}
Here, $\phi' = \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w1^v,
w2^v)]]$ and $\corn{\phi'} = \tup{e^v, occr^v, w1^v, w2^v}$. Let us
assume that $trans(\phi', \ensuremath{\mathit{\lambda_{init}}}\xspace)$ returns
\pref{trans:10}. \pref{trans:10} shows all the possible combinations
of $et$s and values that can be assigned by some $g
\in G$ to $e^v$, $occr^v$, $w1^v$, and $w2^v$, such that
$\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\phi'} = T$. In every tuple, the
time-stamp is the same as the value of the first explicit attribute,
because the semantics of \ensuremath{\mathit{Past}}\xspace requires the value of $e^v$
(represented by the first explicit attribute) to be $et$ (represented
by the time-stamp). To save space, I omit the time-stamps of
\pref{trans:10}.
\begin{examps}
\item \label{trans:10}
\dbtableb{|l|l|l|l||l|}
{$[9\text{:}00am \; 1/5/95, \; 3\text{:}00pm \; 1/5/95]$ & $i158$ & $J.Adams$ & $UK160$ & \dots \\
$[10\text{:}00am \; 4/5/95, \; 11\text{:}30am \; 4/5/95]$ & $i165$ & $J.Adams$ & $BA737$ & \dots \\
$[7\text{:}00am \; 16/11/95, \; 7\text{:}30am \; 16/11/95]$ & $i204$ & $T.Smith$ & $UK160$ & \dots
}
\end{examps}
To generate the snapshot relation that represents $\denot{M(st),
st}{?w1^v \; ?w2^v \; \phi'}$, i.e.\ the relation that shows the
combinations of values of $w1^v$ and $w2^v$ for
which (for some $et$ and $g$) $\denot{M(st), st, et, \ensuremath{\mathit{PTS}}\xspace, g}{\ensuremath{\mathit{Past}}\xspace[e^v,
\ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w1^v, w2^v)]]} = T$, we simply need to
project the explicit attributes of \pref{trans:10} that correspond to
$w1^v$ and $w2^v$. The first positions where $w1^v$ and $w2^v$ appear
in $\corn{\phi'} = \tup{e^v, occr^v, w1^v, w2^v}$ are the
third and fourth (i.e.\ $\omega_1 = 3$ and $\omega_2 =
4$). Hence, we need to project the third and fourth explicit
attributes of \pref{trans:10}. The translation rule for $?\beta_1 \;
\dots \; ?\beta_k \; \phi'$ maps \pref{trans:10.1} to \pref{trans:11},
which achieves exactly that (it returns \pref{trans:12}).
\begin{examps}
\item \label{trans:11}
\sql{(}\select{SELECT DISTINCT SNAPSHOT t1.3, t1.4 \\
FROM $trans(\ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w1^v, w2^v)]],
\ensuremath{\mathit{\lambda_{init}}}\xspace)$ AS t1)}
\item \label{trans:12}
\dbtableb{|l|l|}
{$J.Adams$ & $UK160$ \\
$J.Adams$ & $BA737$ \\
$T.Smith$ & $UK160$
}
\end{examps}
Wh-formulae of the form $?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \;
\dots \; ?\beta_k \; \phi'$ ($\phi' \in \ensuremath{\mathit{YNFORMS}}\xspace$) are
translated using the following:
\textbf{Translation rule for $?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \;
?\beta_k \; \phi'$:} \\
$trans(?_{mxl}\beta_1 \; ?\beta_2 \; ?\beta_3 \dots \;
?\beta_k \; \phi', \ensuremath{\mathit{\lambda_{init}}}\xspace) \defeq$ \\
\sql{(}\select{SELECT DISTINCT SNAPSHOT VALID($\alpha_2$),
$\alpha_2$.2,
$\alpha_2$.3, \dots, $\alpha_2$.$k$ \\
FROM (\select{SELECT DISTINCT 'dummy',
$\alpha_1.\omega_2$,
$\alpha_1.\omega_3$, \dots,
$\alpha_1.\omega_k$ \\
VALID $\alpha_1.\omega_1$ \\
FROM $trans(\phi', \ensuremath{\mathit{\lambda_{init}}}\xspace)
$ AS $\alpha_1$}\\
\ \ \ \ \ )(NOSUBPERIOD) AS $\alpha_2$)}
Whenever the rule is used, $\alpha_1$ and $\alpha_2$ are
two different new correlation names, obtained by calling the
correlation names generator. Assuming that $\corn{\phi'} =
\tup{\tau_1, \dots, \tau_n}$, $\omega_1, \dots, \omega_k$ are as
in the rule for $?\beta_1 \; \dots \; ?\beta_k \; \phi'$.
That is, the first position (from left to right) where $\beta_i$
appears in $\tup{\tau_1, \dots, \tau_n}$ is the $\omega_i$-th one.
Let us consider, for example, \pref{trans:14} (\qit{What circled when?}).
\begin{equation}
?_{mxl}e^v \; ?w^v \; \ensuremath{\mathit{Past}}\xspace[e^v, circling(w^v)] \label{trans:14}
\end{equation}
Let us also assume that $trans(\ensuremath{\mathit{Past}}\xspace[e^v, circling(w^v)], \ensuremath{\mathit{\lambda_{init}}}\xspace)$
returns \pref{trans:15}. In this case, $\phi' = \ensuremath{\mathit{Past}}\xspace[e^v,
circling(w^v)]$ and $\corn{\phi'} = \tup{e^v, w^v}$. \pref{trans:15}
shows all the combinations of $et$s and values of $e^v$ and $w^v$, for
which the denotation of $\ensuremath{\mathit{Past}}\xspace[e^v, circling(w^v)]$ is $T$. In each
tuple, the value of the first explicit attribute (that corresponds to
$e^v$) is the same as the time-stamp, because the semantics of \ensuremath{\mathit{Past}}\xspace
requires the value of $e^v$ to be the same as $et$ (represented by the
time-stamp). To save space, I omit the time-stamps.
\begin{examps}
\item \label{trans:15}
\dbtableb{|l|l||l|}
{
$[5\text{:}02pm \; 22/11/95, \; 5\text{:}17pm \; 22/11/95]$ & $BA737$ & \ \dots \\
$[5\text{:}05pm \; 22/11/95, \; 5\text{:}15pm \; 22/11/95]$ & $BA737$ & \ \dots \\
$[5\text{:}07pm \; 22/11/95, \; 5\text{:}13pm \; 22/11/95]$ & $BA737$ & \ \dots \\
\ \dots & \ \dots & \ \dots \\
$[4\text{:}57pm \; 23/11/95, \; 5\text{:}08pm \; 23/11/95]$ & $BA737$ & \ \dots \\
$[4\text{:}59pm \; 23/11/95, \; 5\text{:}06pm \; 23/11/95]$ & $BA737$ & \ \dots \\
$[5\text{:}01pm \; 23/11/95, \; 5\text{:}04pm \; 23/11/95]$ & $BA737$ & \ \dots \\
\ \dots & \ \dots & \ \dots \\
$[8\text{:}07am \; 22/11/95, \; 8\text{:}19am \; 22/11/95]$ & $UK160$ & \ \dots \\
$[8\text{:}08am \; 22/11/95, \; 8\text{:}12am \; 22/11/95]$ & $UK160$ & \ \dots \\
$[8\text{:}09am \; 22/11/95, \; 8\text{:}10am \; 22/11/95]$ & $UK160$ & \ \dots \\
\ \dots & \ \dots & \ \dots
}
\end{examps}
BA737 was circling from 5:02pm to 5:17pm on 22/11/95, and from 4:57pm
to 5:08pm on 23/11/95. UK160 was circling from 8:07am to 8:19am on
22/11/95. \pref{trans:15} also contains tuples for the subperiods of
these periods, because $circling(w^v)$ (like all \textsc{Top}\xspace predicates) is
homogeneous (section \ref{denotation}). $\ensuremath{\mathit{Past}}\xspace[e^v, circling(w^v)]$ is
true at all these subperiods that end before the present chronon.
In our example, the embedded \sql{SELECT} statement of $trans(?_{mxl}\beta_1 \;
?\beta_2 \; ?\beta_3 \dots \; ?\beta_k \; \phi', \ensuremath{\mathit{\lambda_{init}}}\xspace)$ is:
\begin{examps}
\item \label{trans:16}
\sql{(}\select{SELECT DISTINCT 'dummy', t1.2 \\
VALID t1.1 \\
FROM $trans(\ensuremath{\mathit{Past}}\xspace[e^v, circling(w^v)], \ensuremath{\mathit{\lambda_{init}}}\xspace)$ AS t1)}
\end{examps}
\pref{trans:16} generates \pref{trans:17}, where
the time-stamps are the values of the first explicit attribute of
\pref{trans:15} (i.e.\ they correspond to $e^v$). The \sql{'dummy'} in
the embedded \sql{SELECT} statement (\pref{trans:16} in our example)
means that the first explicit attribute of that statement's resulting
relation should have the string ``$dummy$'' as its value in all
tuples. This is needed when $k = 1$. If, for example, \pref{trans:14}
were $?_{mxl}e^v \; \ensuremath{\mathit{Past}}\xspace[e^v, circling(ba737)]$, without the
\sql{'dummy'} the \sql{SELECT} clause of \pref{trans:16} would contain
nothing after \sql{DISTINCT} (this is not allowed in \textsc{Tsql2}\xspace).
\begin{examps}
\item \label{trans:17}
\dbtableb{|l|l||l|}
{
$dummy$ & $BA737$ & $[5\text{:}02pm \; 22/11/95, \; 5\text{:}17pm \; 22/11/95]$ \\
$dummy$ & $BA737$ & $[5\text{:}05pm \; 22/11/95, \; 5\text{:}15pm \; 22/11/95]$ \\
$dummy$ & $BA737$ & $[5\text{:}07pm \; 22/11/95, \; 5\text{:}13pm \; 22/11/95]$ \\
\ \dots & \ \dots & \ \dots \\
$dummy$ & $BA737$ & $[4\text{:}57pm \; 23/11/95, \; 5\text{:}08pm \; 23/11/95]$ \\
$dummy$ & $BA737$ & $[4\text{:}59pm \; 23/11/95, \; 5\text{:}06pm \; 23/11/95]$ \\
$dummy$ & $BA737$ & $[5\text{:}01pm \; 23/11/95, \; 5\text{:}04pm \; 23/11/95]$ \\
\ \dots & \ \dots & \ \dots \\
$dummy$ & $UK160$ & $[8\text{:}07am \; 22/11/95, \; 8\text{:}19am \; 22/11/95]$ \\
$dummy$ & $UK160$ & $[8\text{:}08am \; 22/11/95, \; 8\text{:}12am \; 22/11/95]$ \\
$dummy$ & $UK160$ & $[8\text{:}09am \; 22/11/95, \; 8\text{:}10am \; 22/11/95]$ \\
\ \dots & \ \dots & \ \dots
}
\end{examps}
The \sql{(NOSUBPERIOD)} of the translation rule removes from
\pref{trans:17} any tuples that do not correspond to maximal
periods. That is, \pref{trans:17} becomes \pref{trans:18}.
\begin{examps}
\item \label{trans:18}
\dbtableb{|l|l||l|}
{
$dummy$ & $BA737$ & $[5\text{:}02pm \; 22/11/95, \; 5\text{:}17pm \; 22/11/95]$ \\
$dummy$ & $BA737$ & $[4\text{:}57pm \; 23/11/95, \; 5\text{:}08pm \; 23/11/95]$ \\
$dummy$ & $UK160$ & $[8\text{:}07am \; 22/11/95, \; 8\text{:}19am \; 22/11/95]$
}
\end{examps}
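To illustrate the intended effect of \sql{(NOSUBPERIOD)}, the
following Prolog sketch (my own illustration, not the \textsc{Tsql2}\xspace
construct itself; the tuple representation and predicate names are
assumptions) keeps only those tuples of a list whose periods are
maximal, i.e.\ not proper subperiods of another tuple's period with
the same explicit attribute value.
\singlespace\small
\begin{verbatim}
% t(Key, Start, End) stands for a tuple with explicit attribute Key and
% time-stamp [Start, End]; Start and End are integers (e.g. chronons).

:- use_module(library(lists)).           % member/2

% maximal_tuples(+Tuples, -Maximal): drop every tuple whose period is a
% proper subperiod of another tuple's period with the same Key.
maximal_tuples(Tuples, Maximal) :-
    findall(T, (member(T, Tuples), maximal(Tuples, T)), Maximal).

maximal(Tuples, t(Key, S, E)) :-
    \+ ( member(t(Key, S2, E2), Tuples),
         S2 =< S, E =< E2,               % [S,E] lies within [S2,E2] ...
         (S2, E2) \== (S, E) ).          % ... and is a proper subperiod

% ?- maximal_tuples([t(ba737, 902, 917), t(ba737, 905, 915),
%                    t(uk160, 487, 499), t(uk160, 488, 492)], M).
% M = [t(ba737, 902, 917), t(uk160, 487, 499)].
\end{verbatim}
\normalsize\doublespace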
The overall \pref{trans:14} is mapped to \pref{trans:19}, which
generates \pref{trans:20}. \pref{trans:20}
represents the denotation of \pref{trans:14} w.r.t.\
$M(st)$ and $st$ (pairs of maximal circling periods and the
corresponding flights).
\begin{examps}
\item \label{trans:19}
\sql{(}\select{SELECT DISTINCT SNAPSHOT VALID(t2), t2.2 \\
FROM (\select{SELECT DISTINCT 'dummy', t1.2 \\
VALID t1.1 \\
FROM $trans(\ensuremath{\mathit{Past}}\xspace[e^v, circling(w^v)],
\ensuremath{\mathit{\lambda_{init}}}\xspace)$ AS t1} \\
\ \ \ \ \ )(NOSUBPERIOD) AS t2)}
\item \label{trans:20}
\dbtableb{|l|l|}
{
$[5\text{:}02pm \; 22/11/95, \; 5\text{:}17pm \; 22/11/95]$ & $BA737$ \\
$[4\text{:}57pm \; 23/11/95, \; 5\text{:}08pm \; 23/11/95]$ & $BA737$ \\
$[8\text{:}07am \; 22/11/95, \; 8\text{:}19am \; 22/11/95]$ & $UK160$
}
\end{examps}
Appendix \ref{trans_proofs} proves that the translation rules for
wh-formulae satisfy theorem \ref{wh_theorem}.
\section{Optimising the generated TSQL2 code} \label{tsql2_opt}
The generated \textsc{Tsql2}\xspace code is often verbose. There are usually ways in
which it could be shortened and still return the same results.
Figure \ref{optimise_code}, for example, shows the code that is
generated by the translation of \pref{opt:1}, if chronons correspond
to minutes. (\pref{opt:1} expresses
the reading of \qit{Who inspected UK160 yesterday?} where the
inspection must have both started and been completed on the
previous day.)
\begin{eqnarray}
&&?w^v \; \ensuremath{\mathit{At}}\xspace[yesterday, \ensuremath{\mathit{Past}}\xspace[e^v, \ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w^v, uk160)]]]
\label{opt:1}
\end{eqnarray}
\begin{figure}
\hrule
\medskip
{\small
\begin{verbatim}
(SELECT DISTINCT SNAPSHOT t4.3
FROM (SELECT DISTINCT VALID(t3), t3.1, t3.2, t3.3
VALID VALID(t3)
FROM (SELECT DISTINCT t1.1, t1.2, t1.3
VALID PERIOD(BEGIN(VALID(t1)), END(VALID(t1)))
FROM (SELECT DISTINCT insp.1, insp.2, insp.3
VALID VALID(insp)
FROM inspections(PERIOD) AS insp)(ELEMENT) AS t1,
(SELECT DISTINCT SNAPSHOT inspcmpl.1, inspcmpl.2, inspcmpl.3
FROM inspections AS inspcmpl
WHERE inspcmpl.4 = 'complete') AS t2
WHERE t1.1 = t2.1 AND t1.2 = t2.2
AND t1.3 = t2.3 AND t1.3 = 'UK160'
AND INTERSECT(
INTERSECT(
PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'forever'),
PERIOD 'today' - INTERVAL '1' DAY),
PERIOD(TIMESTAMP 'beginning',
TIMESTAMP 'now' - INTERVAL '1' MINUTE))
CONTAINS PERIOD(BEGIN(VALID(t1)), END(VALID(t1)))
) AS t3
) AS t4)
\end{verbatim}
}
\vspace*{-5mm}
\caption{Example of generated \textsc{Tsql2}\xspace code}
\label{optimise_code}
\medskip
\hrule
\end{figure}
I assume here that $\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting, 3)$ and $\ensuremath{\mathit{h'_{culms}}}\xspace(inspecting,
3)$ are \pref{hpfuns:4} and \pref{hpfuns:5} respectively. The embedded
\sql{SELECT} statements of figure \ref{optimise_code} that are
associated with \sql{t1} and \sql{t2} are \pref{hpfuns:4} and
\pref{hpfuns:5}. The embedded \sql{SELECT} statement that is
associated with \sql{t3} corresponds to $\ensuremath{\mathit{Culm}}\xspace[inspecting(occr^v, w^v,
uk160)]$ (see the rule for $\ensuremath{\mathit{Culm}}\xspace[\pi(\tau_1, \dots, \tau_n)]$ in section
\ref{trans_rules}). It generates a relation whose explicit
attributes show all the combinations of codes,
inspectors, and inspected objects that correspond to complete
inspections. The time-stamps of this relation represent periods that
cover whole inspections (from start to completion). The last
constraint in the \sql{WHERE} clause (the one with
\sql{CONTAINS}) admits only tuples whose time-stamps (whole
inspections) are subperiods of $lt$. The two nested \sql{INTERSECT}s
before \sql{CONTAINS} represent $lt$. The \ensuremath{\mathit{At}}\xspace has narrowed $lt$ to the
intersection of its original value (whole time-axis) with the previous
day (\sql{PERIOD 'today' - INTERVAL '1' DAY}). The \ensuremath{\mathit{Past}}\xspace has
narrowed $lt$ further to the intersection with
$[t_{first}, st)$ (\sql{PERIOD(TIMESTAMP 'beginning', TIMESTAMP
'now' - INTERVAL '1' MINUTE)}).
The embedded \sql{SELECT} statement that is associated with \sql{t4}
is generated by the translation rule for $\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$
(section \ref{trans_rules}). It returns the same relation as the
statement that is associated with \sql{t3}, except that the relation
of \sql{t4}'s statement has an additional explicit attribute that
corresponds to the first argument of \ensuremath{\mathit{Past}}\xspace. In
each tuple, the value of this extra attribute is the same as the
time-stamp ($et$). The topmost
\sql{SELECT} clause projects only the third explicit attribute of the
relation returned by \sql{t4}'s statement (this attribute corresponds
to $w^v$ of \pref{opt:1}).
The code of figure \ref{optimise_code} could be shortened in several
ways. \sql{t4}'s statement, for example, simply adds an extra attribute for the
first argument of \ensuremath{\mathit{Past}}\xspace. In this particular case, this extra
attribute is not used, because
\pref{opt:1} contains no interrogative quantifier for the first
argument of \ensuremath{\mathit{Past}}\xspace. Hence, \sql{t4}'s statement could be replaced by
\sql{t3}'s (the topmost \sql{SELECT} clause would have to become
\sql{SELECT DISTINCT SNAPSHOT t3.2}). One could also drop the
top-level \sql{SELECT} statement, and replace the \sql{SELECT} clause
of \sql{t3}'s statement with \sql{SELECT DISTINCT SNAPSHOT
t1.2}. Furthermore, the intersection of the whole time-axis
(\sql{PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'forever')}) with any
period $p$ is simply $p$. Hence, the second \sql{INTERSECT(\dots,
\dots)} could be replaced by its second argument. The resulting
code is shown in figure \ref{optimise_code2}. Further simplifications
are possible.
\begin{figure}
\hrule
\medskip
{\small
\begin{verbatim}
(SELECT DISTINCT SNAPSHOT t1.2
FROM (SELECT DISTINCT insp.1, insp.2, insp.3
VALID VALID(insp)
FROM inspections(PERIOD) AS insp)(ELEMENT) AS t1,
(SELECT DISTINCT SNAPSHOT inspcmpl.1, inspcmpl.2, inspcmpl.3
FROM inspections AS inspcmpl
WHERE inspcmpl.4 = 'complete') AS t2
WHERE t1.1 = t2.1 AND t1.2 = t2.2
AND t1.3 = t2.3 AND t1.3 = 'UK160'
AND INTERSECT(PERIOD 'today' - INTERVAL '1' DAY,
PERIOD(TIMESTAMP 'beginning',
TIMESTAMP 'now' - INTERVAL '1' MINUTE))
CONTAINS PERIOD(BEGIN(VALID(t1)), END(VALID(t1))))
\end{verbatim}
}
\vspace*{-5mm}
\caption{Shortened \textsc{Tsql2}\xspace code}
\label{optimise_code2}
\medskip
\hrule
\end{figure}
Most \textsc{Dbms}\xspace{s} employ optimisation techniques. A commercial \textsc{Dbms}\xspace
supporting \textsc{Tsql2}\xspace would probably be able to carry out at least some of
the above simplifications. Hence, the reader may wonder why the
\textsc{Nlitdb}\xspace should attempt to optimise the \textsc{Tsql2}\xspace code, rather than delegate the
optimisation to the \textsc{Dbms}\xspace. First, as mentioned in section
\ref{tdbs_general}, only a prototype \textsc{Dbms}\xspace currently supports
\textsc{Tsql2}\xspace. Full-scale \textsc{Tsql2}\xspace \textsc{Dbms}\xspace{s} with optimisers may not appear in the
near future. Second, long database language queries (like the ones
generated by the framework of this thesis) can often confuse generic
\textsc{Dbms}\xspace optimisers, causing them to produce inefficient code. Hence,
shortening the \textsc{Tsql2}\xspace code before submitting it to the \textsc{Dbms}\xspace is again
important. It would be interesting to examine whether optimisations like
the ones discussed above could be automated, and integrated into the
framework of this thesis as an additional layer between the \textsc{Top}\xspace to
\textsc{Tsql2}\xspace translator and the \textsc{Dbms}\xspace. I have not explored this issue.
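As a hint of what such an optimisation layer might look like, the
following Prolog sketch implements the simplest of the simplifications
discussed above: it avoids wrapping the current $lt$ expression in an
\sql{INTERSECT} when $lt$ is still the whole time-axis. (The predicate
names are my own assumptions, and the \textsc{Tsql2}\xspace fragments are
represented as Prolog atoms for brevity.)
\singlespace\small
\begin{verbatim}
whole_axis('PERIOD(TIMESTAMP ''beginning'', TIMESTAMP ''forever'')').

% narrow_lt(+Lt, +Period, -NewLt): NewLt denotes the intersection of Lt
% and Period, with the trivial case simplified away.
narrow_lt(Lt, Period, Period) :-
    whole_axis(Lt), !.                   % INTERSECT(whole axis, p) = p
narrow_lt(Lt, Period, NewLt) :-
    concat_all(['INTERSECT(', Lt, ', ', Period, ')'], NewLt).

% concat_all(+Atoms, -Atom): concatenate a list of atoms.
concat_all([], '').
concat_all([A|As], Result) :-
    concat_all(As, Rest),
    atom_concat(A, Rest, Result).

% While Lt is still the whole time-axis, narrow_lt/3 simply returns the
% new period itself, so no redundant INTERSECT(...) is generated.
\end{verbatim}
\normalsize\doublespace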
\section{Related work}
Various mappings from forms of logic to and from relational algebra
(e.g.\ \cite{Ullman}, \cite{VanGelder1991}), from logic programming
languages to \textsc{Sql}\xspace (e.g.\ \cite{Lucas1988}, \cite{Draxler1992}), and
from logic formulae generated by \textsc{Nlidb}\xspace{s} to \textsc{Sql}\xspace (\cite{Lowden1},
\cite{Androutsopoulos}, \cite{Androutsopoulos3}, \cite{Rayner93}) have
been discussed in the past. The mapping which is most relevant to the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace translation of this chapter is that of \cite{Boehlen1996}.
Boehlen et al.\ study the relation between \textsc{Tsql2}\xspace and an extended
version of first order predicate logic (henceforth called \textsc{Sul}\xspace), that
provides the additional temporal operators $\mathbf{\bullet}\xspace$ (previous),
$\mathbf{\circ}\xspace$ (next), $\ensuremath{\mathbf{since}\xspace}$, and $\ensuremath{\mathbf{until}\xspace}$. \textsc{Sul}\xspace is point-based, in the
sense that \textsc{Sul}\xspace formulae are evaluated with respect to single
time-points. \textsc{Sul}\xspace assumes that time is discrete. Roughly speaking,
$\mathbf{\bullet}\xspace \phi$
\index{.@$\mathbf{\bullet}\xspace$ (\textsc{Sul}\xspace operator, previous time-point)}
is true at a time-point $t$ iff $\phi$ is true at the
time-point immediately before $t$. Similarly, $\mathbf{\circ}\xspace \phi$
\index{..@$\mathbf{\circ}\xspace$ (\textsc{Sul}\xspace operator, next time-point)}
is true at $t$ iff $\phi$ is true at the time-point immediately after
$t$. $\phi_1 \; \ensuremath{\mathbf{since}\xspace} \; \phi_2$
\index{since@$\ensuremath{\mathbf{since}\xspace}$ (\textsc{Sul}\xspace operator)}
is true at $t$ iff there is some
$t'$ before $t$, such that $\phi_2$ is true at $t'$, and for every
$t''$ between $t'$ and $t$, $\phi_1$ is true at $t''$. Similarly,
$\phi_1 \; \ensuremath{\mathbf{until}\xspace} \; \phi_2$
\index{until@$\ensuremath{\mathbf{until}\xspace}$ (\textsc{Sul}\xspace operator)}
is true at $t$ iff there is some $t'$ after $t$, such that $\phi_2$ is
true at $t'$, and for every $t''$ between $t$ and $t'$, $\phi_1$ is
true at $t''$. Various other operators are also defined, but these are
all definable in terms of $\mathbf{\bullet}\xspace$, $\mathbf{\circ}\xspace$, $\ensuremath{\mathbf{since}\xspace}$, and
$\ensuremath{\mathbf{until}\xspace}$. For example, $\lozenge\xspace \phi$
\index{<>@$\lozenge\xspace$ (\textsc{Sul}\xspace's past operator)}
is equivalent to $true \; \ensuremath{\mathbf{since}\xspace} \; \phi$ ($true$ is a special formula
that is true at all time-points). In effect, $\lozenge\xspace \phi$ is
true at $t$ iff there is a $t'$ before $t$ such that $\phi$ is true at $t'$.
For example, \pref{sul:1a} and \pref{sul:2a} can be expressed as
\pref{sul:1} and \pref{sul:2} respectively.
\begin{examps}
\item BA737 departed (at some past time). \label{sul:1a}
\item $\lozenge\xspace depart(ba737)$ \label{sul:1}
\item Tank 2 has been empty (all the time) since BA737 departed. \label{sul:2a}
\item $empty(tank2) \; \ensuremath{\mathbf{since}\xspace} \; depart(ba737)$ \label{sul:2}
\end{examps}
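To make the point-based semantics concrete, the following Prolog
sketch (my own illustration; the representation of models is an
assumption) evaluates $\ensuremath{\mathbf{since}\xspace}$ and the derived $\lozenge\xspace$ operator
over a finite, discrete timeline, with time-points represented as
integers starting at 0.
\singlespace\small
\begin{verbatim}
% A model is a list of holds_at(AtomicFormula, TimePoint) facts.

:- use_module(library(lists)).          % member/2

% holds(+Model, +T, +Phi): Phi is true at time-point T.
holds(_, _, true) :- !.
holds(Model, T, since(Phi1, Phi2)) :- !,
    THigh is T - 1,
    earlier(THigh, TPrime),             % some TPrime < T ...
    holds(Model, TPrime, Phi2),         % ... where Phi2 is true ...
    all_between(Model, TPrime, T, Phi1).% ... and Phi1 true in between
holds(Model, T, diamond(Phi)) :- !,     % <>Phi  ==  true since Phi
    holds(Model, T, since(true, Phi)).
holds(Model, T, Atom) :-                % atomic formula
    member(holds_at(Atom, T), Model).

earlier(T, T) :- T >= 0.
earlier(T, TPrime) :- T > 0, T1 is T - 1, earlier(T1, TPrime).

% all_between(+Model, +TPrime, +T, +Phi): Phi holds at every T2 with
% TPrime < T2 < T (vacuously true if there is no such T2).
all_between(Model, TPrime, T, Phi) :-
    Lo is TPrime + 1, Hi is T - 1,
    \+ ( point_between(Lo, Hi, T2), \+ holds(Model, T2, Phi) ).

point_between(Lo, Hi, Lo) :- Lo =< Hi.
point_between(Lo, Hi, X) :-
    Lo < Hi, Lo1 is Lo + 1, point_between(Lo1, Hi, X).

% With depart(ba737) true at point 2 and empty(tank2) true at points
% 2-5, both since(empty(tank2), depart(ba737)) and
% diamond(depart(ba737)) succeed at point 5.
\end{verbatim}
\normalsize\doublespace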
Boehlen et al.\ provide rules that translate from \textsc{Sul}\xspace to
\textsc{Tsql2}\xspace. (They also show how to translate from a fragment of
\textsc{Tsql2}\xspace back to \textsc{Sul}\xspace, but this direction is irrelevant here.) The
underlying ideas are very similar to those of this chapter. Roughly
speaking, there are non-recursive rules for atomic formulae, and
recursive rules for non-atomic formulae. For example, the translation
rule for $\phi_1 \; \ensuremath{\mathbf{since}\xspace} \; \phi_2$ calls recursively the
translation algorithm to translate $\phi_1$ and $\phi_2$. The result
is a \sql{SELECT} statement, that contains two embedded \sql{SELECT}
statements corresponding to $\phi_1$ and $\phi_2$. Devising
rules to map from \textsc{Sul}\xspace to \textsc{Tsql2}\xspace is much easier
than in the case of \textsc{Top}\xspace, mainly because \textsc{Sul}\xspace formulae are evaluated
with respect to only one time-parameter (\textsc{Top}\xspace formulae are
evaluated with respect to three parameters, $st$, $et$, and $lt$),
\textsc{Sul}\xspace is point-based (\textsc{Top}\xspace is period-based; section \ref{top_intro}), and
\textsc{Sul}\xspace provides only four temporal operators whose semantics are very
simple (the \textsc{Top}\xspace version of this chapter has eleven
operators, whose semantics are more complex). Consequently, proving
the correctness of the \textsc{Sul}\xspace to \textsc{Tsql2}\xspace mapping is much simpler than in
the case of \textsc{Top}\xspace.
It has to be stressed, however, that \textsc{Top}\xspace and \textsc{Sul}\xspace were designed for
very different purposes. \textsc{Sul}\xspace is interesting from a theoretical
temporal-logic point of view. Roughly speaking, it has been proven
that whatever can be expressed in traditional first-order
predicate logic with a temporal precedence connective by treating time
as an extra predicate argument (e.g.\ \pref{tlogi:2} of page
\pageref{tlogi:2}) can also be expressed in first-order predicate
logic enhanced with only a $\ensuremath{\mathbf{since}\xspace}$ and an $\ensuremath{\mathbf{until}\xspace}$ operator, subject
to some continuity conditions (the reverse is not true; see chapter
II.2 of \cite{VanBenthem}). The mapping from \textsc{Sul}\xspace to \textsc{Tsql2}\xspace (and the
reverse mapping from a fragment of \textsc{Tsql2}\xspace to \textsc{Sul}\xspace) is part of a study
of the relative expressiveness of \textsc{Sul}\xspace and \textsc{Tsql2}\xspace. The existence of a
mapping from \textsc{Sul}\xspace to \textsc{Tsql2}\xspace shows that \textsc{Tsql2}\xspace is at least as expressive
as \textsc{Sul}\xspace. (The reverse is not true. Full \textsc{Tsql2}\xspace is more expressive than
\textsc{Sul}\xspace; see \cite{Boehlen1996}.)
In contrast, \textsc{Top}\xspace was not designed to study expressiveness issues,
but to facilitate the mapping from (a fragment of) English to logical
form. Chapter \ref{English_to_TOP} showed how to translate
systematically from a non-trivial fragment of English temporal questions into
\textsc{Top}\xspace. No such systematic translation has been shown to exist in the
case of \textsc{Sul}\xspace, and it is not at all obvious how temporal English
questions (e.g.\ containing progressive and perfect tenses, temporal
adverbials, temporal subordinate clauses) could be mapped
systematically to appropriate \textsc{Sul}\xspace formulae.
Although the study of expressiveness issues is not a goal of this
thesis, I note that the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation of this chapter
implies that \textsc{Tsql2}\xspace is at least as expressive as \textsc{Top}\xspace (every \textsc{Top}\xspace
formula can be mapped to an appropriate \textsc{Tsql2}\xspace query). The reverse is not
true: it is easy to think of \textsc{Tsql2}\xspace queries (e.g.\ queries
that report cardinalities of sets) that cannot be expressed in (the
current version of) \textsc{Top}\xspace. Finally, neither
\textsc{Top}\xspace nor \textsc{Sul}\xspace can be said to be more expressive than the other, as
there are English sentences that can be expressed in \textsc{Sul}\xspace but not
\textsc{Top}\xspace, and vice-versa. For example, the \textsc{Sul}\xspace formula \pref{sul:2}
expresses \pref{sul:2a}, a sentence that cannot be expressed in
\textsc{Top}\xspace. Also, the \textsc{Top}\xspace formula \pref{sul:11} expresses
\pref{sul:10}. There does not seem to be any way to express
\pref{sul:10} in \textsc{Sul}\xspace.
\begin{examps}
\item Tank 2 was empty for two hours. \label{sul:10}
\item $\ensuremath{\mathit{For}}\xspace[hour^c, 2, \ensuremath{\mathit{Past}}\xspace[e^v, empty(tank2)]]$ \label{sul:11}
\end{examps}
\section{Summary}
\textsc{Tsql2}\xspace is an extension of \textsc{Sql-92}\xspace that provides special
facilities for manipulating temporal information. Some modifications
of \textsc{Tsql2}\xspace were adopted in this chapter. Some of these are minor, and
were introduced to bypass uninteresting details (e.g.\ referring to
explicit attributes by number) or obscure points in the \textsc{Tsql2}\xspace definition
(e.g.\ the new version of the \sql{INTERVAL} function). Other
modifications are more significant, and were introduced to facilitate the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace translation (e.g.\ \sql{(SUBPERIOD)} and
\sql{(NOSUBPERIOD)}). One of these more significant modifications
(calendric relations) is generally useful. Some minor modifications of
\textsc{Top}\xspace were also adopted in this chapter.
A method to translate from \textsc{Top}\xspace to \textsc{Tsql2}\xspace was
presented. Each \textsc{Top}\xspace formula $\phi$ is mapped to a \textsc{Tsql2}\xspace query. This is
executed by the \textsc{Dbms}\xspace, generating a relation that represents (via an
interpretation function) $\denot{M(st),st}{\phi}$. Before the
translation method can be used, the configurer of the \textsc{Nlitdb}\xspace must
specify some functions (\ensuremath{\mathit{h'_{cons}}}\xspace, \ensuremath{\mathit{h'_{pfuns}}}\xspace, \ensuremath{\mathit{h'_{culms}}}\xspace, \ensuremath{\mathit{h'_{gparts}}}\xspace,
\ensuremath{\mathit{h'_{cparts}}}\xspace) that link certain basic \textsc{Top}\xspace expressions to
\textsc{Tsql2}\xspace expressions. The \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation
is then carried out by a set of translation rules. The rules have to
satisfy two theorems (\ref{wh_theorem} and \ref{yn_theorem}) for the
translation to be correct (i.e.\ for the \textsc{Tsql2}\xspace query to generate a
relation that represents $\denot{M(st),st}{\phi}$). An informal
description of the functionality of some of the rules was given. The
full set of the translation rules, along with a proof that they
satisfy theorems \ref{wh_theorem} and \ref{yn_theorem}, is given in
appendix \ref{trans_proofs}. Further work could explore how to
optimise the generated \textsc{Tsql2}\xspace code.
The \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation is in principle similar to the \textsc{Sul}\xspace to \textsc{Tsql2}\xspace
translation of \cite{Boehlen1996}. \textsc{Top}\xspace and \textsc{Sul}\xspace, however, were designed for
very different purposes, and the \textsc{Sul}\xspace to \textsc{Tsql2}\xspace translation is
much simpler than the \textsc{Top}\xspace to \textsc{Tsql2}\xspace one.
\chapter{The prototype NLITDB} \label{implementation}
\proverb{Time works wonders.}
\section{Introduction}
This chapter discusses the architecture of the prototype \textsc{Nlitdb}\xspace,
provides some information on how the modules of the system were
implemented, and explains which modules would have to be added if the
prototype \textsc{Nlitdb}\xspace were to be used in real-life applications. A
description of the hypothetical airport database is also given,
followed by sample questions from the airport domain and the
corresponding output of the \textsc{Nlitdb}\xspace. The chapter ends with information
on the speed of the system.
\section{Architecture of the prototype NLITDB} \label{prototype_arch}
Figure \ref{simple_arch_fig} shows the architecture of the prototype
\textsc{Nlitdb}\xspace. Each English question is first parsed using the \textsc{Hpsg}\xspace grammar
of chapter \ref{English_to_TOP}, generating an \textsc{Hpsg}\xspace sign. Multiple
signs are generated for questions that the parser understands to be
ambiguous. A \textsc{Top}\xspace formula is then extracted from each sign, as
discussed in section \ref{extraction_hpsg}. Each extracted formula
subsequently undergoes the post-processing of section
\ref{post_processing}. (The post-processor also converts the formulae
from the \textsc{Top}\xspace version of chapters \ref{TOP_chapter} and
\ref{English_to_TOP} to the version of \ref{tdb_chapter}; see section
\ref{TOP_mods}.) As discussed in section \ref{post_processing}, the
post-processing sometimes generates multiple formulae from the same
original formula.
\begin{figure}
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{simple_archit}
\caption{Architecture of the prototype NLITDB}
\label{simple_arch_fig}
\end{center}
\hrule
\end{figure}
Each one of the formulae that are generated at the end of the
post-processing captures what the \textsc{Nlitdb}\xspace understands to be a possible
reading of the English question. Many fully-fledged \textsc{Nlidb}\xspace{s} use
preference measures to guess which reading among the possible ones the
user had in mind, or generate ``unambiguous'' English paraphrases of
the possible readings, asking the user to select one (see
\cite{Alshawi}, \cite{Alshawi2}, \cite{DeRoeck1986} and
\cite{Lowden1986}). No such mechanism is currently present in the
prototype \textsc{Nlitdb}\xspace. All the formulae that are generated at the end of
the post-processing are translated into \textsc{Tsql2}\xspace. The \textsc{Nlitdb}\xspace prints all
the resulting \textsc{Tsql2}\xspace queries along with the corresponding \textsc{Top}\xspace
formulae. The \textsc{Tsql2}\xspace queries would be executed by the \textsc{Dbms}\xspace to
retrieve the information requested by the user. As mentioned in
section \ref{tdbs_general}, however, the prototype \textsc{Nlitdb}\xspace has not
been linked to a \textsc{Dbms}\xspace. Hence, the \textsc{Tsql2}\xspace queries are currently not
executed, and no answers are produced.
The following sections provide more information about the grammar and
parser, the module that extracts \textsc{Top}\xspace formulae from \textsc{Hpsg}\xspace signs, the
post-processor, and the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator.
\section{The grammar and parser} \label{parser}
The \textsc{Hpsg}\xspace version of chapter \ref{English_to_TOP} was coded in the
formalism of \textsc{Ale}\xspace \cite{Carpenter1992} \cite{Carpenter1994}, building
on previous \textsc{Ale}\xspace encodings of \textsc{Hpsg}\xspace fragments by Gerald Penn, Bob
Carpenter, Suresh Manandhar, and Claire Grover.\footnote{The prototype
\textsc{Nlitdb}\xspace was implemented using \textsc{Ale}\xspace version 2.0.2 and Sicstus Prolog
version 2.1.9. Chris Brew provided additional \textsc{Ale}\xspace code for
displaying feature structures. The software of the prototype
\textsc{Nlitdb}\xspace, including the \textsc{Ale}\xspace grammar, is available from
\texttt{http://www.dai.ed.ac.uk/groups/nlp/NLP\_home\_page.html}. An
earlier version of the prototype \textsc{Nlitdb}\xspace was implemented using the
\textsc{Hpsg-Pl} and \textsc{Pleuk} systems \cite{Popowich}
\cite{Calder}.} \textsc{Ale}\xspace can be thought of as a grammar-development
environment. It provides a chart parser (which is the one used in the
prototype \textsc{Nlitdb}\xspace; see \cite{Gazdar1989} for an introduction to chart
parsers) and a formalism that can be used to write
unification-grammars based on feature structures.
Coding the \textsc{Hpsg}\xspace version of chapter \ref{English_to_TOP} in \textsc{Ale}\xspace's
formalism proved straightforward. \textsc{Ale}\xspace's formalism allows one to specify
grammar rules, definite constraints (these are similar to Prolog
rules, except that predicate arguments are feature structures),
lexical entries, lexical rules, and a hierarchy of sorts of feature
structures. The schemata and principles of the \textsc{Hpsg}\xspace version of
chapter \ref{English_to_TOP} were coded using \textsc{Ale}\xspace grammar rules and
definite constraints. \textsc{Ale}\xspace's lexical entries, lexical rules, and sort
hierarchy were used to code \textsc{Hpsg}\xspace's lexical signs, lexical rules, and
sort hierarchy respectively.
The \textsc{Ale}\xspace grammar rules and definite constraints that encode the \textsc{Hpsg}\xspace
schemata and principles are domain-independent, i.e.\ they require no
modifications when the \textsc{Nlitdb}\xspace is configured for a new application
domain. The lexical rules of the prototype \textsc{Nlitdb}\xspace are also intended
to be domain-independent, though their morphology parts need to be
extended (e.g.\ more information about the morphology of irregular
verbs is needed) before they can be used in arbitrary application
domains. The lexical entries of the system that correspond to
determiners (e.g.\ \qit{a}, \qit{some}), auxiliary verbs,
interrogative words (e.g.\ \qit{who}, \qit{when}), prepositions,
temporal subordinators (e.g.\ \qit{while}, \qit{before}), month names,
day names, etc.\ (e.g.\ \qit{January}, \qit{Monday}) are also
domain-independent. The person configuring the \textsc{Nlitdb}\xspace, however, needs
to provide lexical entries for the nouns, adjectives, and
(non-auxiliary) verbs that are used in the particular application
domain (e.g.\ \qit{flight}, \qit{open}, \qit{to land}). The largest
part of the \textsc{Nlitdb}\xspace's sort hierarchy is also domain-independent. Two
parts of it need to be modified when the system is configured for a
new domain: the hierarchy of world entities that is mounted under
{\srt ind}\/ (section \ref{more_ind} and figure
\vref{ind_hierarchy}), and the subsorts of {\srt predicate}\/ that
correspond to \textsc{Top}\xspace predicates used in the domain (section
\ref{TOP_FS} and figure \vref{psoa_fig}). As will be discussed in
section \ref{modules_to_add}, tools could be added to help the
configurer modify the domain-dependent modules.
\section{The extractor of TOP formulae and the post-processor}
\label{extraction_impl}
The module that extracts \textsc{Top}\xspace formulae from \textsc{Hpsg}\xspace signs actually
generates \textsc{Top}\xspace formulae written as Prolog terms. For example, it
generates \pref{extr:5} instead of \pref{extr:3}. The correspondence
between the two notations should be obvious. The Prolog-like notation
of \pref{extr:5} is also used in the formulae that are passed to the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace translator, and in the output of the \textsc{Nlitdb}\xspace (see
section \ref{samples} below).
\begin{examps}
\item $?x1^v \; \ensuremath{\mathit{Ntense}}\xspace[x3^v, president(x1^v)] \land
\ensuremath{\mathit{Past}}\xspace[x2^v, located\_at(x1^v, terminal2)]$ \label{extr:3}
\item \texttt{\small interrog(x1\^{}v, and(\hspace*{-2mm}\begin{tabular}[t]{l}
ntense(x3\^{}v, president(x1\^{}v)),\\
past(x2\^{}v, located\_at(x1\^{}v, terminal2))))
\end{tabular}}
\label{extr:5}
\end{examps}
The extractor of the \textsc{Top}\xspace formulae is implemented using Prolog rules
and \textsc{Ale}\xspace definite constraints (Prolog-like rules whose
predicate-arguments are feature structures). Although the
functionality of the extractor's code is simple, the code itself is
rather complicated (it has to manipulate the internal data structures
that \textsc{Ale}\xspace uses to represent feature structures) and will not be
discussed.
As mentioned in section \ref{prototype_arch}, the post-processor of
figure \ref{simple_arch_fig} implements the post-processing phase of
section \ref{post_processing}. The post-processor also eliminates
\ensuremath{\mathit{Part}}\xspace operators by merging them with the corresponding \ensuremath{\mathit{At}}\xspace, \ensuremath{\mathit{Before}}\xspace,
or \ensuremath{\mathit{After}}\xspace operators, to convert the formulae into the \textsc{Top}\xspace version of
the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator (section \ref{TOP_mods}). The
post-processor's code, which is written in Prolog, presents no
particular interest and will not be discussed.
\section{The TOP to TSQL2 translator} \label{translator_module}
Implementing in Prolog the \textsc{Top}\xspace to \textsc{Tsql2}\xspace mapping of chapter
\ref{tdb_chapter} proved easy. The code of the \textsc{Top}\xspace to \textsc{Tsql2}\xspace
translator of figure \ref{simple_arch_fig} is basically a collection
of Prolog rules for the predicate \texttt{trans}. Each one of these
rules implements one of the translation rules of section
\ref{trans_rules} and appendix \ref{trans_proofs}. For example, the
following Prolog rule implements the translation rule for
$\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$ (p.~\pageref{past_trans_discuss}). (I omit some
uninteresting details of the actual Prolog rule.)
\singlespace\small
\begin{verbatim}
trans(past(_^v, PhiPrime), Lambda, Sigma):-
chronons(Chronon),
multiappend([
"INTERSECT(", Lambda, ", ",
"PERIOD(TIMESTAMP 'beginning', ",
"TIMESTAMP 'now' - INTERVAL '1' ", Chronon, "))"
], LambdaPrime),
trans(PhiPrime, LambdaPrime, SigmaPrime),
new_cn(Alpha),
corners(PhiPrime, CList),
length(CList, N),
generate_select_list(Alpha, N, SelectList),
multiappend([
"(SELECT DISTINCT VALID(", Alpha, "), ", SelectList,
"VALID VALID(", Alpha, ")",
"FROM ", SigmaPrime, " AS ", Alpha, ")",
], Sigma).
\end{verbatim}
\normalsize\doublespace The first argument of \texttt{trans} is the \textsc{Top}\xspace formula to be
translated (in the notation of \pref{extr:5}). \texttt{Lambda} is a
string standing for the $\lambda$ argument of the $trans$ function of
section \ref{formulation} (initially \texttt{"PERIOD(TIMESTAMP
'beginning', TIMESTAMP 'forever')"}). The generated \textsc{Tsql2}\xspace code is
returned as a string in \texttt{Sigma}.
The \texttt{chronons(Chronon)} causes \texttt{Chronon} to become a
string holding the \textsc{Tsql2}\xspace name of the granularity of chronons (e.g.\
\sql{"MINUTE"}). The \texttt{chronons} predicate is supplied by the
configurer of the \textsc{Nlitdb}\xspace, along with Prolog predicates that define
the $h'$ functions of section \ref{via_TSQL2}. For example, the
following predicate defines $\ensuremath{\mathit{h'_{pfuns}}}\xspace(inspecting,3)$ to be the
\sql{SELECT} statement of \pref{hpfuns:4} on page
\pageref{hpfuns:4}. The \texttt{chronons} predicate and the predicates
that define the $h'$ functions are the only domain-dependent parts of
the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator.
\singlespace\small
\begin{verbatim}
h_prime_pfuns_map(inspecting, 3,
["SELECT DISTINCT insp.1, insp.2, insp.3",
"VALID VALID(insp)",
"FROM inspections(PERIOD) AS insp"]).
\end{verbatim}
\normalsize\doublespace The first \texttt{multiappend} in the $trans$ rule above generates the
$\lambda'$ string of the translation rule for $\ensuremath{\mathit{Past}}\xspace[\beta, \phi']$
(p.~\pageref{past_trans_discuss}). It concatenates the string-elements
of the list provided as first argument to \texttt{multiappend}, and
the resulting string ($\lambda'$) is returned in
\texttt{LambdaPrime}. As in the translation rule for $\ensuremath{\mathit{Past}}\xspace[\beta,
\phi']$, the translation mapping is then invoked recursively to
translate $\phi'$ (\texttt{PhiPrime}). The result of this translation
is stored in \texttt{SigmaPrime}.
\texttt{new\_cn(Alpha)} returns in \texttt{Alpha} a string
holding a new correlation name (\texttt{new\_cn}
implements the correlation names generator of section
\ref{trans_rules}). The \texttt{corners(PhiPrime, CList)} causes
\texttt{CList} to become $\corn{\phi'}$, and \texttt{length(CList,
N)} returns in \texttt{N} the length of $\corn{\phi'}$. The
\texttt{generate\_select\_list(Alpha, N, SelectList)} returns in
\texttt{SelectList} a string of the form \texttt{Alpha.1, Alpha.2, \dots,
Alpha.N}. Finally, the second \texttt{multiappend} returns in
\texttt{Sigma} a string that holds the overall \textsc{Tsql2}\xspace code.
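The auxiliary predicates used by the rule above are not listed here;
the following sketch shows one possible definition of two of them
(\texttt{multiappend} and \texttt{generate\_select\_list}), together
with an example \texttt{chronons} fact. The definitions are
assumptions for illustration, not necessarily the prototype's actual
code, and they take the \textsc{Tsql2}\xspace fragments to be lists of character
codes (which is what the double-quoted strings below denote).
\singlespace\small
\begin{verbatim}
:- set_prolog_flag(double_quotes, codes).    % "..." as lists of codes
:- use_module(library(lists)).               % append/3

chronons("MINUTE").                          % supplied by the configurer

% multiappend(+Strings, -String): concatenate a list of strings.
multiappend([], []).
multiappend([S|Ss], Result) :-
    multiappend(Ss, Rest),
    append(S, Rest, Result).

% generate_select_list(+Alpha, +N, -SelectList): SelectList is the
% string "Alpha.1, Alpha.2, ..., Alpha.N ".
generate_select_list(Alpha, N, SelectList) :-
    select_items(1, N, Alpha, Items),
    multiappend(Items, SelectList).

select_items(I, N, _, []) :- I > N, !.
select_items(I, N, Alpha, [Item|Items]) :-
    number_codes(I, ICodes),
    ( I < N -> Sep = ", " ; Sep = " " ),
    multiappend([Alpha, ".", ICodes, Sep], Item),
    I1 is I + 1,
    select_items(I1, N, Alpha, Items).

% ?- generate_select_list("t2", 3, S), atom_codes(A, S).
% A = 't2.1, t2.2, t2.3 '.
\end{verbatim}
\normalsize\doublespace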
\section{Modules to be added} \label{modules_to_add}
\begin{figure}
\hrule
\medskip
\begin{center}
\includegraphics[scale=.6]{architecture}
\caption{Extended architecture of the prototype NLITDB}
\label{arch_fig}
\end{center}
\hrule
\end{figure}
The prototype \textsc{Nlitdb}\xspace is intended to demonstrate that the mappings
from English to \textsc{Top}\xspace and from \textsc{Top}\xspace to \textsc{Tsql2}\xspace are
implementable. Consequently, the architecture of the prototype \textsc{Nlitdb}\xspace
is minimal. Several modules, to be sketched in the following sections,
would have to be added if the system were to be used in real-life
applications. Figure \ref{arch_fig} shows how these modules would fit
into the existing system architecture. (Modules drawn with dashed
lines are currently not present.)
\subsection{Preprocessor} \label{preprocessor}
The \textsc{Ale}\xspace parser requires its input sentence to be provided as a Prolog
list of symbols (e.g.\ \pref{prepro:2}).
\begin{examps}
\item \texttt{[was,ba737,circling,at,pm5\_00]} \label{prepro:2}
\end{examps}
As there is essentially no interface between the user and the \textsc{Ale}\xspace
parser in the prototype \textsc{Nlitdb}\xspace, English questions have to be
typed in this form. This does not allow words to start with capital
letters or numbers, or to contain characters like ``\texttt{/}'' and
``\texttt{:}'' (Prolog symbols cannot contain these characters and
must start with lower case letters). To bypass these constraints,
proper names, dates, and times currently need to be typed in unnatural
formats (e.g.\ ``\texttt{london}'', ``\texttt{d1\_5\_92}'',
``\texttt{pm5\_00}'', ``\texttt{y1991}'' instead of
``\texttt{London}'', ``\texttt{1/5/92}'', ``\texttt{5:00pm}'',
``\texttt{1991}'').
A preprocessing module is needed, that would allow English questions
to be typed in more natural formats (e.g.\ \pref{prepro:4}), and would
transform the questions into the format required by the parser (e.g.\
\pref{prepro:2}).
\begin{examps}
\item Was BA737 circling at 5:00pm? \label{prepro:4}
\end{examps}
Similar preprocessing modules are used in several natural language
front-ends (e.g.\ \textsc{Cle} \cite{Alshawi} and \textsc{Masque}
\cite{Lindop}). These modules typically also merge parts of the input
sentence that need to be processed as single words. For example, the
lexicon of the airport domain has a single lexical entry for \qit{gate
2}. The preprocessor would merge the two words of \qit{gate 2} in
\pref{prepro:7}, generating \pref{prepro:8}. (Currently, \qit{gate 2}
has to be typed as a single word.)
\begin{examps}
\item Which flights departed from gate 2 yesterday? \label{prepro:7}
\item \texttt{[which,flights,departed,from,gate2,yesterday]} \label{prepro:8}
\end{examps}
The preprocessing modules typically also handle proper names that
cannot be included in the lexicon because they are too many, or
because they are not known when creating the lexicon. In a large
airport, for example, there would be hundreds of flight names
(\qit{BA737}, \qit{UK1751}, etc.). Having a different lexical entry
for each flight name is impractical, as it would require hundreds of
entries to be added into the lexicon. Also, new flights (and hence
flight names) are created occasionally, which means that the
lexicon would have to be updated whenever a new flight is
created. Instead, the lexicon could contain entries for a small
number of pseudo-flight names (e.g.\ \qit{flight\_name1},
\qit{flight\_name2}, \dots, \qit{flight\_nameN}; N is the maximum
number of flight names that may appear in a question, e.g.\
5). Each one of these lexical entries would map a
pseudo-flight name to a \textsc{Top}\xspace constant (e.g.\ $flight1$, $flight2$,
\dots, $flightN$).\footnote{In the \textsc{Hpsg}\xspace grammar of chapter
\ref{English_to_TOP}, these constants would be represented using sorts
like {\srt flight1}, {\srt flight2}, \dots, {\srt flightN}, which
would be daughters of {\srt flight\_ent}\/ and sisters of {\srt
flight\_ent\_var}\/ in figure \vref{ind_hierarchy}.} The
preprocessor would use domain-dependent formatting conventions to identify
flight names in the English question (e.g.\ that any word that starts
with two or three capital letters and is followed by three or four
digits is a flight name). Each flight name in the question
would be replaced by a pseudo-flight name. For example, the
preprocessor would turn \pref{prepro:4.1} into \pref{prepro:4.2}.
\begin{examps}
\item Did BA737 depart before UK160 started to land? \label{prepro:4.1}
\item \texttt{[did,flight\_name1,depart,before,flight\_name2,started,to,land]}
\label{prepro:4.2}
\end{examps}
\pref{prepro:4.2} would then be parsed, giving rise to
\pref{prepro:4.3} ($flight1$ and $flight2$ are \textsc{Top}\xspace constants
introduced by the lexical entries of \qit{flight\_name1} and
\qit{flight\_name2}). An extra step would be added to the
post-processing phase of section \ref{post_processing}, to substitute
$flight1$ and $flight2$ with \textsc{Top}\xspace constants that reflect the original
flight names. For example, the preprocessor could pass to the
post-processor the original flight names (\qit{BA737} and
\qit{UK160}), and the post-processor could replace $flight1$ and
$flight2$ by the original flight names in lower case. This would cause
\pref{prepro:4.3} to become \pref{prepro:4.4}. Similar problems arise
in the case of dates, times, numbers, etc.
\begin{examps}
\item $\ensuremath{\mathit{Before}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Begin}}\xspace[landing(flight2)]],
\ensuremath{\mathit{Past}}\xspace[e2^v, depart(flight1)]]$ \label{prepro:4.3}
\item $\ensuremath{\mathit{Before}}\xspace[\ensuremath{\mathit{Past}}\xspace[e1^v, \ensuremath{\mathit{Begin}}\xspace[landing(uk160)]],
\ensuremath{\mathit{Past}}\xspace[e2^v, depart(ba737)]]$ \label{prepro:4.4}
\end{examps}
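The following Prolog sketch (my own illustration; the predicate names
and the exact formatting convention are assumptions) shows how the
flight-name recognition and replacement step of such a preprocessor
could work.
\singlespace\small
\begin{verbatim}
:- use_module(library(lists)).      % append/3, length/2

% preprocess(+WordsIn, -WordsOut, -Substitutions): replace every flight
% name by a pseudo-flight name, recording the substitutions so that the
% post-processor can later restore the original names.
preprocess(WordsIn, WordsOut, Subs) :-
    preprocess(WordsIn, 1, WordsOut, Subs).

preprocess([], _, [], []).
preprocess([W|Ws], N, [Pseudo|Rest], [Pseudo-W|Subs]) :-
    flight_name(W), !,
    number_codes(N, NC),
    atom_codes(NAtom, NC),
    atom_concat(flight_name, NAtom, Pseudo),
    N1 is N + 1,
    preprocess(Ws, N1, Rest, Subs).
preprocess([W|Ws], N, [W|Rest], Subs) :-
    preprocess(Ws, N, Rest, Subs).

% flight_name(+Word): Word consists of 2-3 upper-case letters followed
% by 3-4 digits (e.g. 'BA737', 'UK1751').
flight_name(Word) :-
    atom_codes(Word, Codes),
    append(Letters, Digits, Codes),
    length(Letters, LL), LL >= 2, LL =< 3,
    length(Digits, LD), LD >= 3, LD =< 4,
    all_upper(Letters),
    all_digits(Digits).

all_upper([]).
all_upper([C|Cs]) :- C >= 0'A, C =< 0'Z, all_upper(Cs).

all_digits([]).
all_digits([C|Cs]) :- C >= 0'0, C =< 0'9, all_digits(Cs).

% ?- preprocess([did,'BA737',depart,before,'UK160',started,to,land],
%               Out, Subs).
% Out  = [did,flight_name1,depart,before,flight_name2,started,to,land],
% Subs = [flight_name1-'BA737',flight_name2-'UK160'].
\end{verbatim}
\normalsize\doublespace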
No preprocessing mechanism is currently present in the prototype
\textsc{Nlitdb}\xspace. The lexicon contains (for demonstration purposes) only a few
entries for particular (not pseudo-) flight names, times, dates, and
numbers (e.g.\ \qit{BA737}, \qit{9:00am}). For example, there is no
entry for \qit{9:10am}. This causes the parsing of \qit{Which tanks
were empty at 9:10am?} to fail. In contrast, the parsing of
\qit{Which tanks were empty at 9:00am?} succeeds, because there
\emph{is} a lexical entry for \qit{9:00am}.
\subsection{Quantifier scoping} \label{quantif_scoping}
When both words that introduce existential quantification (e.g.\
\qit{a}, \qit{some}) and words that introduce universal
quantification (e.g.\ \qit{every}, \qit{each}) are allowed, it is
often difficult to decide which quantifiers should be given scope over
which other quantifiers. For example, \pref{qsco:1} has two possible
readings. Ignoring the temporal information of \pref{qsco:1},
these readings would be expressed in the traditional first-order
predicate logic (\textsc{Fopl}) using formulae like \pref{qsco:2} and
\pref{qsco:3}.
\begin{examps}
\item A guard inspected every gate. \label{qsco:1}
\item $\exists x \; (guard(x) \land
\forall y \; (gate(y) \rightarrow inspect(x, y)))$
\label{qsco:2}
\item $\forall y \; (gate(y) \rightarrow
\exists x \; (guard(x) \land inspect(x, y)))$
\label{qsco:3}
\end{examps}
In \pref{qsco:2}, the existential quantifier (introduced by \qit{a})
is given wider scope over the universal one (introduced by
\qit{every}). According to \pref{qsco:2}, all the gates were inspected
by the same guard. In contrast, in \pref{qsco:3} where the universal
quantifier is given wider scope over the existential one, each gate
was inspected by a possibly different guard.
In \pref{qsco:1}, both scopings seem possible (at least in the absence
of previous context). In many cases, however, one of the possible
scopings is the preferred one, and there are heuristics to determine
this scoping (see chapter 8 of \cite{Alshawi}). For example, universal
quantifiers introduced by \qit{each} tend to have wider scope over
existential quantifiers (e.g.\ if the \qit{every} of \pref{qsco:1} is
replaced by \qit{each}, the scoping of \pref{qsco:3} becomes more
likely than that of \pref{qsco:2}).
In this thesis, words that introduce universal quantifiers were
deliberately excluded from the linguistic coverage (section
\ref{ling_not_supported}). This leaves only existential quantification
and by-passes the quantifier scoping problem, because if all
quantifiers are existential ones, the relative scoping of the
quantifiers does not matter. (In \textsc{Top}\xspace, existential quantification is
expressed using free variables. There are also interrogative and
interrogative-maximal quantifiers, but these are in effect existential
quantifiers that have the additional side-effect of including the
values of their variables in the answer.)
If the linguistic coverage of the prototype \textsc{Nlitdb}\xspace were to be
extended to support words that introduce universal quantification, an
additional scoping module would have to be added (figure
\ref{arch_fig}). The input to that module would be an
``underspecified'' \textsc{Top}\xspace formula (see the discussion in chapter 2 of
\cite{Alshawi}), a formula that would not specify the exact scope of
each quantifier. In a \textsc{Fopl}-like formalism, an underspecified
formula for \pref{qsco:1} could look like \pref{qsco:10}.
\begin{examps}
\item $inspect((\exists x \; guard(x)), (\forall y \; gate(y)))$
\label{qsco:10}
\end{examps}
The scoping module would generate all the possible scopings, and
determine the most probable ones, producing formulae where the scope
of each quantifier is explicit (e.g.\ \pref{qsco:2} or \pref{qsco:3}).
Alternatively, one could attempt to use the \textsc{Hpsg}\xspace quantifier
scoping mechanism (see \cite{Pollard2} and relevant comments in
section \ref{TOP_FS}) to reason about the possible scopings
during the parsing. That mechanism, however, is not yet a fully
developed part of \textsc{Hpsg}\xspace.
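As a minimal illustration of the former option (a separate scoping
module), the following Prolog sketch (not part of the prototype; the
representation of underspecified formulae is an assumption) generates
one scoped formula for each possible ordering of the quantifiers,
ignoring the heuristics and the temporal operators.
\singlespace\small
\begin{verbatim}
:- use_module(library(lists)).      % permutation/2

% scopings(+Quantifiers, +Matrix, -Scoped): Quantifiers is a list of
% terms exists(Var, Restriction) or forall(Var, Restriction); Scoped is
% a fully scoped formula, one solution per quantifier ordering.
scopings(Quantifiers, Matrix, Scoped) :-
    permutation(Quantifiers, Ordering),
    wrap(Ordering, Matrix, Scoped).

wrap([], Matrix, Matrix).
wrap([exists(X, R)|Qs], Matrix, exists(X, and(R, Body))) :-
    wrap(Qs, Matrix, Body).
wrap([forall(X, R)|Qs], Matrix, forall(X, implies(R, Body))) :-
    wrap(Qs, Matrix, Body).

% ?- scopings([exists(X, guard(X)), forall(Y, gate(Y))],
%             inspect(X, Y), F).
% F = exists(X, and(guard(X),
%                   forall(Y, implies(gate(Y), inspect(X, Y))))) ;
% F = forall(Y, implies(gate(Y),
%                       exists(X, and(guard(X), inspect(X, Y))))).
\end{verbatim}
\normalsize\doublespace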
\subsection{Anaphora resolution} \label{anaphora_module}
As discussed in sections \ref{no_issues} and \ref{temporal_anaphora},
nominal anaphora (e.g.\ \qit{she}, \qit{his salary}) and most cases of
temporal anaphora (e.g.\ \qit{in January}, tense anaphora) are
currently not supported. An anaphora resolution module would be needed
if phenomena of this kind were to be supported. As in the case of
quantifier scoping, I envisage a module that would accept
``underspecified'' \textsc{Top}\xspace formulae, formulae that would not name
explicitly the entities or times to which anaphoric expressions refer
(figure \ref{arch_fig}). The module would determine the most probable
referents of these expressions, using a discourse model that would
contain descriptions of previously mentioned entities and times,
information showing in which segments of the previous discourse the
entities or times were mentioned, etc.\ (see \cite{Barros1994} for a
description of a similar module). The output of this module would be a
formula that names explicitly the referents of anaphoric expressions.
\subsection{Equivalential translator} \label{equiv_translat}
The translation from \textsc{Top}\xspace to \textsc{Tsql2}\xspace of chapter \ref{tdb_chapter}
assumes that each pair $\tup{\pi,n}$ of a \textsc{Top}\xspace predicate functor
$\pi$ and an arity $n$ can be mapped to a valid-time relation (stored
directly in the database, or computed from information in the
database) that shows the event times where $\pi(\tau_1, \dots,
\tau_n)$ is true, for all the possible world entities denoted by the
\textsc{Top}\xspace terms $\tau_1, \dots, \tau_n$. The configurer of the \textsc{Nlitdb}\xspace
specifies this mapping when defining \ensuremath{\mathit{h'_{pfuns}}}\xspace (section
\ref{via_TSQL2}). Although the assumption that each $\tup{\pi,n}$ can
be mapped to a suitable relation is valid in most situations, there
are cases where this assumption does not hold. The ``doctor on board''
problem \cite{Rayner93} is a well-known example of such a case.
Let us imagine a database that contains only the following coalesced valid-time
relation, which shows the times when a (any) doctor was on board each ship of
a fleet.
\adbtable{2}{|l||l|}{$doctor\_on\_board$}
{$ship$ & }
{$Vincent$ & $[8\text{:}30am \; 22/1/96 -
11\text{:}45am \; 22/1/96]$ \\
& $\;\;\union \; [3\text{:}10pm \; 23/1/96 -
5\text{:}50pm \; 23/1/96]$ \\
& $\;\; \union \; [9\text{:}20am \; 24/1/96 -
2\text{:}10pm \; 24/1/96]$ \\
$Invincible$ & $[8\text{:}20am \; 22/1/96 -
10\text{:}15am \; 22/1/96]$ \\
& $\;\; \union \; [1\text{:}25pm \; 23/1/96 -
3\text{:}50pm \; 23/1/96]$ \\
\; \dots & \; \dots
}
Let us also consider a question like \pref{doct:1}, which would be
mapped to the \textsc{Top}\xspace formula \pref{doct:2}. I assume here that
\qit{doctor} and \qit{ship} introduce predicates of the form
$doctor(\tau_1)$ and $ship(\tau_2)$, and that the predicative
preposition \qit{on} introduces a predicate of the form
$located\_on(\tau_3, \tau_4)$ ($\tau_1, \dots, \tau_4 \in \ensuremath{\mathit{TERMS}}\xspace$).
For simplicity, I assume that \qit{doctor} and \qit{ship} do not
introduce \ensuremath{\mathit{Ntense}}\xspace operators (section \ref{non_pred_nps}).
\begin{examps}
\item Is there a doctor on some ship? \label{doct:1}
\item $doctor(d^v) \land ship(s^v) \land \ensuremath{\mathit{Pres}}\xspace[located\_on(d^v,
s^v)]$ \label{doct:2}
\end{examps}
To apply the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation method of chapter
\ref{tdb_chapter}, one needs to map $\tup{doctor,1}$ to a valid-time
relation (computed from information in the database) that shows the
event times where $doctor(\tau_1)$ is true, i.e.\ when the entity
denoted by $\tau_1$ was a doctor. Unfortunately, the database (which
contains only $doctor\_on\_board$) does not show when particular
entities were doctors, and hence such a relation cannot be computed.
In the same manner, $\tup{ship,1}$ has to be mapped to a relation that
shows the ships that existed at each time. This relation cannot be
computed: $doctor\_on\_board$ does not list all the ships that existed
at each time; it shows only ships that had a doctor on board at each
time. Similarly, $\tup{located\_on,2}$ has to be mapped to a relation
that shows when $located\_on(\tau_3,\tau_4)$ is true, i.e.\ when the
entity denoted by $\tau_3$ was on the entity denoted by $\tau_4$.
Again, this relation cannot be computed. If, for example, $\tau_3$
denotes a doctor (e.g.\ Dr.\ Adams) and $\tau_4$ denotes Vincent,
there is no way to find out when that particular doctor was on
Vincent: $doctor\_on\_board$ shows only when some (any) doctor was on
each ship; it does not show when particular doctors (e.g.\ Dr.\ Adams)
were on each ship. Hence, the translation method of chapter
\ref{tdb_chapter} cannot be used.
It should be easy to see, however, that \pref{doct:2} is equivalent to
\pref{doct:3}, if $doctor\_on\_ship(\tau_5)$ is true at event times
where the entity denoted by $\tau_5$ is a ship, and a doctor of that
time is on that ship. What is interesting about \pref{doct:3} is that
there \emph{is} enough information in the database to map
$\tup{doctor\_on\_ship,1}$ to a relation that shows the event times
where $doctor\_on\_ship(\tau_5)$ holds. Roughly speaking, one simply
needs to map $\tup{doctor\_on\_ship,1}$ to the
$doctor\_on\_board$ relation.
Hence, the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translation method of chapter
\ref{tdb_chapter} \emph{can} be applied to \pref{doct:3},
and the answer to \pref{doct:1} can be found by evaluating the
resulting \textsc{Tsql2}\xspace code.
\begin{examps}
\item $\ensuremath{\mathit{Pres}}\xspace[doctor\_on\_ship(s^v)]$ \label{doct:3}
\end{examps}
The problem is that \pref{doct:1} cannot be mapped directly to
\pref{doct:3}: the English to \textsc{Top}\xspace mapping of chapter
\ref{English_to_TOP} generates \pref{doct:2}. We need to
convert \pref{doct:2} (whose predicates are introduced by the lexical
entries of nouns, prepositions, etc.) to \pref{doct:3} (whose
predicates are chosen to be mappable to relations computed from
information in the database). An ``equivalential translator''
similar to the ``abductive equivalential translator'' of
\cite{Rayner93} and \cite{Alshawi2} could be used to carry out this
conversion. Roughly speaking, this would be an
inference module that would use domain-dependent conversion rules, like
\pref{doct:4} which allows any formula of the form $doctor(\tau_1)
\land ship(\tau_2) \land \ensuremath{\mathit{Pres}}\xspace[located\_on(\tau_1,\tau_2)]$ ($\tau_1,
\tau_2 \in \ensuremath{\mathit{TERMS}}\xspace$) to be replaced by
$\ensuremath{\mathit{Pres}}\xspace[doctor\_on\_ship(\tau_2)]$. \pref{doct:4} would license the
conversion of \pref{doct:2} into \pref{doct:3}.
\begin{examps}
\item $doctor(\tau_1) \land ship(\tau_2) \land
\ensuremath{\mathit{Pres}}\xspace[located\_on(\tau_1, \tau_2)] \equiv
\ensuremath{\mathit{Pres}}\xspace[doctor\_on\_ship(\tau_2)]$
\label{doct:4}
\end{examps}
There would be two kinds of pairs $\tup{\pi,n}$ ($\pi$ is a predicate
functor and $n$ an arity): pairs that are mapped to relations, and
pairs for which this mapping is impossible (the value of \ensuremath{\mathit{h'_{pfuns}}}\xspace
would be undefined for the latter). The formula generated after the
scoping and anaphora resolution would be passed to the equivalential
translator (figure \ref{arch_fig}). If all the predicate functors and
arities in the formula are among the pairs that are mapped to
relations, the equivalential translator would have no effect.
Otherwise, the equivalential translator would attempt to convert the
formula into another one that contains only predicate functors and
arities that are mapped to relations (an error would be reported if
the conversion is impossible). The new formula would then be passed to
the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator.
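The following Prolog sketch (my own illustration, far simpler than the
abductive equivalential translator of \cite{Rayner93}; the
representation and predicate names are assumptions) shows how a
conversion rule like \pref{doct:4} could be applied to the conjuncts
of a formula.
\singlespace\small
\begin{verbatim}
:- use_module(library(lists)).      % permutation/2

% conversion(LHS, RHS): a domain-dependent rule stating that a
% conjunction matching LHS may be replaced by RHS (cf. (doct:4)).
conversion([doctor(T1), ship(T2), pres(located_on(T1, T2))],
           pres(doctor_on_ship(T2))).

% equiv_translate(+Conjuncts, -Formula): if some conversion rule covers
% the conjuncts (in any order), return its right-hand side; otherwise
% leave the conjunction unchanged.
equiv_translate(Conjuncts, RHS) :-
    conversion(LHS, RHS),
    permutation(Conjuncts, LHS), !.
equiv_translate(Conjuncts, and(Conjuncts)).

% ?- equiv_translate([doctor(D), ship(S), pres(located_on(D, S))], F).
% F = pres(doctor_on_ship(S)).
\end{verbatim}
\normalsize\doublespace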
\subsection{Response generator} \label{response_generator}
The execution of the \textsc{Tsql2}\xspace code produces the information that is
needed to answer the user's question. A response generator is needed
to report this information to the user. In the simplest case, if the
question is a yes/no one, the response generator would simply print a
\qit{yes} or \qit{no}, depending on whether or not the \textsc{Tsql2}\xspace code
retrieved at least one tuple (section \ref{formulation}). Otherwise,
the response generator would print the tuples retrieved by the \textsc{Tsql2}\xspace
code.
Ideally, the response generator would also attempt to provide
\emph{cooperative responses} (section \ref{no_issues}; see also
section \ref{to_do} below). In \pref{respgen:1}, for example, if BA737 is at
gate 4, the response generator would produce \pref{respgen:2} rather
than a simple \qit{no}. That is, it would report the answer
to \pref{respgen:1} along with the answer to \pref{respgen:3}.
\begin{examps}
\item Is BA737 at gate 2? \label{respgen:1}
\item \sys{No, BA737 is at gate 4.} \label{respgen:2}
\item Which gate is BA737 at? \label{respgen:3}
\end{examps}
In that case, the architecture of the \textsc{Nlitdb}\xspace would have to be more
elaborate than that of figure \ref{arch_fig}, as the response
generator would have to submit questions (e.g.\ \pref{respgen:3}) on
its own, in order to collect the additional information that is needed
for the cooperative responses.
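In its simplest form, such a response generator could be sketched in
Prolog as follows (the interface is an assumption; cooperative
responses would require a considerably more elaborate design).
\singlespace\small
\begin{verbatim}
% report(+QuestionType, +Tuples): QuestionType is yes_no or wh; Tuples
% is the list of tuples retrieved by executing the TSQL2 code.
report(yes_no, Tuples) :- !,
    ( Tuples = [] -> write('No.') ; write('Yes.') ),
    nl.
report(wh, Tuples) :-
    report_tuples(Tuples).

report_tuples([]).
report_tuples([T|Ts]) :-
    write(T), nl,
    report_tuples(Ts).

% ?- report(yes_no, []).          prints:  No.
% ?- report(wh, [[gate4]]).       prints:  [gate4]
\end{verbatim}
\normalsize\doublespace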
\subsection{Configuration tools}
As already noted, there are several parts of the prototype \textsc{Nlitdb}\xspace
that need to be modified whenever the \textsc{Nlitdb}\xspace is configured for a new
application. Most large-scale \textsc{Nlidb}\xspace{s} provide tools that automate
these modifications, ideally allowing people that are not aware of the
details of the \textsc{Nlitdb}\xspace's code to configure the system (see section 6
of \cite{Androutsopoulos1995}, and chapter 11 of \cite{Alshawi}). A
similar tool is needed in the prototype \textsc{Nlitdb}\xspace of this thesis. Figure
\ref{arch_fig} shows how this tool would fit into the \textsc{Nlitdb}\xspace's
architecture.\footnote{Some of the heuristics of the quantifier
scoping module and parts of the anaphora resolution module may in
practice be also domain-dependent. In that case, parts of these
modules would also have to be modified during the configuration. For
simplicity, this is not shown in figure \ref{arch_fig}.}
\section{The airport database}
This section provides more information about the hypothetical airport
database, for which the prototype \textsc{Nlitdb}\xspace was configured.
\begin{figure}
\hrule
\medskip
\begin{center}
$\begin{array}{l}
gates(gate, availability) \\
runways(runway, availability) \\
queues(queue, runway) \\
servicers(servicer) \\
inspectors(inspector) \\
sectors(sector) \\
flights(flight) \\
tanks(tank, content) \\
norm\_departures(flight, norm\_dep\_time, norm\_dep\_gate) \\
norm\_arrivals(flight, norm\_arr\_time, norm\_arr\_gate) \\
norm\_servicer(flight, servicer) \\
flight\_locations(flight, location) \\
circling(flight) \\
inspections(code, inspector, inspected, status) \\
services(code, servicer, flight, status) \\
boardings(code, flight, gate, status) \\
landings(code, flight, runway, status) \\
\mathit{takeoffs}(code, flight, runway, status) \\
taxiings(code, flight, origin, destination, status)
\end{array}$
\caption[Relations of the airport database]{Relations of the airport database}
\label{db_relations}
\end{center}
\hrule
\end{figure}
The airport database contains nineteen relations, all valid-time and
coalesced (section \ref{bcdm}). Figure \ref{db_relations} shows the
names and explicit attributes of the relations. For simplicity, I
assume that the values of all the explicit attributes are strings. I
also assume that chronons correspond to minutes, and that the
$gregorian$ calendric relation of section \ref{calrels} is available.
The $runways$ relation has the following form:
\adbtable{3}{|l|l||l|}{$runways$}
{$runway$ & $\mathit{availability}$ & }
{$runway1$ & $open$ & $[8\text{:}00am \; 1/1/96, \; 7\text{:}30pm \; 3/1/96]$ \\
& & $\;\; \union \;
[6\text{:}00am \; 4/1/96, \; 2\text{:}05pm \; 8/1/96]
\; \union \; \dots$ \\
$runway1$ & $closed$ & $[7\text{:}31pm \; 3/1/96, \; 5\text{:}59am \; 4/1/96]$ \\
& & $\;\; \union \;
[2\text{:}06pm \; 8/1/96, \; 5\text{:}45pm \; 8/1/96]$ \\
$runway2$ & $open$ & $[5\text{:}00am \; 1/1/96, \; 9\text{:}30pm \; 9/1/96]
\; \union \; \dots$ \\
$runway2$ & $closed$ & $[9\text{:}31pm \; 9/1/96, \;
10\text{:}59am \; 10/1/96] \; \union \; \dots$
}
The $\mathit{availability}$ values are always $open$ or $closed$.
There are two tuples for each runway: one showing the times when the
runway was open, and one showing the times when it was closed. If a
runway did not exist at some time (e.g.\ a runway may not have been
constructed yet at that time), both tuples of that runway exclude this
time from their time-stamps. The $gates$ relation is similar. Its
$availability$ values are always $open$ or $closed$, and there are two
tuples for each gate, showing the times when the gate was open
(available) or closed (unavailable) respectively.
Runways that are used for landings or take-offs have queues, where
flights wait until they are given permission to enter the runway. The
$queues$ relation lists the names of the queues that exist at various
times, along with the runways where the queues lead to. The
$servicers$ relation shows the names of the servicing companies that
existed at any time. The $inspectors$, $sectors$, and $flights$
relations are similar. The $tanks$ relation shows the contents
($water$, $foam$, etc., or $empty$ if the tank was empty) of each tank
at every time where the tank existed.
Each outgoing flight is assigned a normal departure time and gate (see
also section \ref{aspect_examples}). The $norm\_departures$ relation
shows these times and gates. For example, if $norm\_departures$ were
as follows, this would mean that from 9:00am on 1/1/92 to 5:30pm on
30/11/95 BA737 normally departed each day from gate 2 at 2:05pm. (For
simplicity, I assume that all flights are daily.) At 5:31pm on
30/11/95, the normal departure time of BA737 was changed to 2:20pm,
while the normal departure gate remained gate 2. No further change to the
normal departure time or gate of BA737 was made since then.
\adbtable{4}{|l|c|c||l|}{$norm\_departures$}
{$flight$ & $norm\_dep\_time$ & $norm\_dep\_gate$ &}
{$BA737$ & $2\text{:}05pm$ & $gate2$ &
$[9\text{:}00am \; 1/1/92, \; 5\text{:}30pm \; 30/11/95]$ \\
$BA737$ & $2\text{:}20pm$ & $gate2$ &
$[5\text{:}31pm \; 30/11/95, \; now]$
}
Similarly, each incoming flight is assigned a normal arrival time and
a gate, listed in $norm\_arrivals$. Flights are also assigned normal
servicers, servicing companies that over a period of time normally
service the flights whenever they arrive or depart. This information
is stored in $norm\_servicer$. The
$flight\_locations$ relation shows the location of each flight over
time. Possible $location$ values are the names of airspace
sectors, gates, runways, or queues of runways. The $circling$ relation
shows the flights that were circling at each time.
As discussed in section \ref{aspect_examples}, flights, gates, and
runways are occasionally inspected. The $inspections$ relation was
discussed in section \ref{via_TSQL2}. It shows the inspection code,
inspector, inspected object, status (completed or not), and time of
each inspection. The $services$, $boardings$, $landings$,
$\mathit{takeoffs}$, and $taxiings$ relations are very similar. They
provide information about actual services, boardings, landings,
take-offs, and taxiings from one location ($origin$) to another
($destination$). Each service, boarding, landing, take-off, or taxiing
is assigned a unique code, stored in the $code$ attribute. The
$status$ attribute shows whether the climax was reached by the latest
time-point of the time-stamp. The values of the $origin$ and
$destination$ attributes of $taxiings$ are names of gates, runways,
and queues.
Apart from relations, a database would in practice also contain
\emph{integrity constraints} (see \cite{Gertz1995} and
\cite{Wijsen1995}). There would be, for example, a constraint saying
that if the $circling$ relation shows a flight as circling at some
time, the $flights$ relation must show that flight as existing at the
same time. I do not discuss integrity constraints, as they are not
directly relevant to this thesis.
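Although integrity constraints are not discussed further, the
constraint just mentioned could be written informally (outside the
framework of this thesis) as follows, where $ts_{circling}(f)$ and
$ts_{\mathit{flights}}(f)$ stand for the time-stamps of the tuples of
flight $f$ in the $circling$ and $flights$ relations respectively;
this notation is introduced here only for illustration:
\[
\forall f \;\; \forall t \quad
t \in ts_{circling}(f) \;\Rightarrow\; t \in ts_{\mathit{flights}}(f)
\]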
\section{Sample questions and output} \label{samples}
This section presents sample questions from the airport domain, along
with the corresponding output of the prototype \textsc{Nlitdb}\xspace. The questions
are chosen to demonstrate that the \textsc{Nlitdb}\xspace behaves according to the
specifications of the previous chapters. The questions are \emph{not}
intended to be (and are probably not) a representative sample of
questions that a real user might want to submit in the airport domain
(see comments about Wizard of Oz experiments in section \ref{wizard}
below). The user submits questions using the \texttt{nli} Prolog
predicate:
\singlespace\small
\begin{verbatim}
| ?- nli([which,flight,left,sector3,at,pm5_00,yesterday]).
\end{verbatim}
\normalsize\doublespace
The system parses the question and reports the generated \textsc{Hpsg}\xspace sign.
\singlespace\small
\begin{verbatim}
HPSG Sign:
(phrase,
qstore:(ne_set_quant,
elt:(det:exists,
restind:(index:(_10148,
minute_ent,
tvar:plus),
restr:(ne_set_psoa,
elt:(part,
part_var:_10148,
partng:pm5_00),
elts:e_set))),
elts:(ne_set_quant,
elt:(det:interrog,
restind:(index:(_10486,
flight_ent,
tvar:plus),
restr:(ne_set_psoa,
elt:(flight,
arg1:_10486),
elts:e_set))),
elts:e_set)),
synsem:(synsem,
loc:(cat:(aspect:point,
comps:e_list,
head:(aux:minus,
inv:minus,
mod:none,
prd:minus,
vform:fin),
spr:e_list,
subj:e_list),
cont:(at_op,
main_psoa:(at_op,
main_psoa:(past,
et_handle:(temp_ent,
tvar:plus),
main_psoa:(leave_something,
arg1:_10486,
arg2:sector3)),
time_spec:_10148),
time_spec:yesterday)),
nonloc:(inherited:slash:e_set,
to_bind:slash:e_set)))
\end{verbatim}
\normalsize\doublespace The sign above is written in \textsc{Ale}\xspace's notation. The sign is of
sort {\srt phrase}\/ (it corresponds to a phrase rather than a single
word), and it has the features {\feat qstore} and {\feat synsem}. The
{\feat qstore} value represents a non-empty set of quantifiers ({\srt
ne\_set\_quant}). Its {\feat elt} feature describes the first
element of that set, which is an existential quantifier. The
quantifier ranges over a \textsc{Top}\xspace variable, represented by an \textsc{Hpsg}\xspace index
of sort {\srt minute\_ent} (see figure \vref{ind_hierarchy}) whose
{\feat tvar} is $+$ (the index represents a \textsc{Top}\xspace variable rather than
a constant). The {\feat elt} value represents the \textsc{Top}\xspace-like
expression $\exists \, x2^v \; \ensuremath{\mathit{Part}}\xspace[pm5\_00^g, x2^v]$. The Prolog
variable \texttt{\_10148} is a pointer to the index of the quantifier,
i.e.\ it plays the same role as the boxed numbers (e.g.\ \avmbox{1},
\avmbox{2}) in the \textsc{Hpsg}\xspace formalism of chapter \ref{English_to_TOP}.
The {\feat elts} value describes the rest of the set of quantifiers,
using in turn an {\feat elt} feature (second element of the overall
set), and an {\feat elts} feature (remainder of the set, in this case
the empty set). The second element of the overall set represents the
\textsc{Top}\xspace expression $?x1^v \; flight(x1^v)$. In the airport application,
the lexical entries of non-predicative nouns do not introduce \ensuremath{\mathit{Ntense}}\xspace
operators (this generates appropriate readings in most cases; see the
discussion in section \ref{non_pred_nps}). This is why no \ensuremath{\mathit{Ntense}}\xspace
operator is present in the second quantifier of the sign. (The effect
of \ensuremath{\mathit{Ntense}}\xspace{s} can still be seen in the airport application in the
case of non-predicative adjectives, that do introduce \ensuremath{\mathit{Ntense}}\xspace{s}.)
The features of the {\feat synsem} value are as in chapter
\ref{English_to_TOP}. The {\feat cont} value represents the \textsc{Top}\xspace
expression $\ensuremath{\mathit{At}}\xspace[yesterday, \ensuremath{\mathit{At}}\xspace[x2^v, \ensuremath{\mathit{Past}}\xspace[x3^v,
leave\_something(x1^v, sector3)]]]$. The extractor of section
\ref{extraction_impl} extracts a \textsc{Top}\xspace formula from the sign, and
prints it as a Prolog term.
\singlespace\small
\begin{verbatim}
TOP formula extracted from HPSG sign:
interrog(x1^v,
and(part(pm5_00^g, x2^v),
and(flight(x1^v),
at(yesterday,
at(x2^v,
past(x3^v,
leave_something(x1^v, sector3)))))))
\end{verbatim}
\normalsize\doublespace
The Prolog term above stands for:
\begin{examps}
\item[] $?x1^v \;
\ensuremath{\mathit{Part}}\xspace[pm5\_00^g, x2^v] \land flight(x1^v) \; \land$ \\
$\ensuremath{\mathit{At}}\xspace[yesterday,\ensuremath{\mathit{At}}\xspace[x2^v, \ensuremath{\mathit{Past}}\xspace[x3^v, leave\_something(x1^v, sector3)]]]$
\end{examps}
The extracted formula then goes through the post-processor of section
\ref{extraction_impl}. The post-processor
eliminates the \ensuremath{\mathit{Part}}\xspace operator, adding the $pm5\_00^g$ as an
extra argument to the corresponding \ensuremath{\mathit{At}}\xspace operator:
\singlespace\small
\begin{verbatim}
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
at(yesterday,
at(pm5_00^g, x2^v,
past(x3^v,
leave_something(x1^v, sector3))))))
\end{verbatim}
\normalsize\doublespace
The Prolog term above stands for:
\begin{examps}
\item $?x1^v \; flight(x1^v) \land \ensuremath{\mathit{At}}\xspace[yesterday,$ \\
$\ensuremath{\mathit{At}}\xspace[pm5\_00^g, x2^v, \ensuremath{\mathit{Past}}\xspace[x3^v, leave\_something(x1^v, sector3)]]]$
\label{log:1}
\end{examps}
\nspace{1.4}
The post-processed formula is then translated into \textsc{Tsql2}\xspace:
\singlespace\small
\begin{verbatim}
TSQL2 query:
(SELECT DISTINCT SNAPSHOT t8.1
FROM (SELECT DISTINCT t6.1, t7.1, t7.2, t7.3, t7.4
VALID VALID(t6)
FROM (SELECT DISTINCT t1.1
VALID VALID(t1)
FROM (SELECT DISTINCT fl.1
VALID VALID(fl)
FROM flights(PERIOD) AS fl
)(SUBPERIOD) AS t1
WHERE PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'forever')
CONTAINS VALID(t1)
) AS t6,
(SELECT DISTINCT t2.1, t5.1, t5.2, t5.3
VALID VALID(t5)
FROM (SELECT DISTINCT SNAPSHOT VALID(cp2)
FROM gregorian AS cp2
WHERE cp2.5 = '17' AND cp2.6 = '00'
) AS t2,
(SELECT DISTINCT VALID(t4), t4.1, t4.2
VALID VALID(t4)
FROM (SELECT DISTINCT t3.1, t3.2
VALID VALID(t3)
FROM (SELECT DISTINCT flocs.1, flocs.2
VALID PERIOD(END(VALID(flocs)),
END(VALID(flocs)))
FROM flight_locations(PERIOD) AS flocs
)(SUBPERIOD) AS t3
WHERE t3.2 = 'sector3'
AND INTERSECT(
INTERSECT(t2.1,
INTERSECT(
PERIOD(TIMESTAMP 'beginning',
TIMESTAMP 'forever'),
PERIOD 'today' - INTERVAL '1' DAY)),
PERIOD(TIMESTAMP 'beginning',
TIMESTAMP 'now' -
INTERVAL '1' MINUTE))
CONTAINS VALID(t3)
) AS t4
) AS t5
) AS t7
WHERE t6.1 = t7.3
AND VALID(t6) = VALID(t7)
) AS t8
)
\end{verbatim}
\normalsize\doublespace The ``\sql{SELECT DISTINCT fl.1}~\dots \sql{FROM
flights(PERIOD) AS fl}'' that starts at the sixth line of the \textsc{Tsql2}\xspace
code is the \sql{SELECT} statement to which \ensuremath{\mathit{h'_{pfuns}}}\xspace maps predicates
of the form $flight(\tau_1)$. This statement returns a relation that
shows the flights that existed at each time. The embedded \sql{SELECT}
statement that is associated with the correlation name \sql{t6} is the
result of applying the translation rule for predicates (section
\ref{trans_rules}) to the $flight(x1^v)$ of \pref{log:1}. The
``\sql{WHERE PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'forever')
CONTAINS VALID(t1)}'' corresponds to the restriction that $et$ must
fall within $lt$. (At this point, no constraint has been imposed on
$lt$, and hence $lt$ covers the whole time-axis.) This \sql{WHERE}
clause has no effect and could be removed during an optimisation phase
(section \ref{tsql2_opt}).
The ``\sql{SELECT DISTINCT flocs.1}~\dots
\sql{flight\_locations(PERIOD) AS flocs}'' that starts at the 23rd
line of the \textsc{Tsql2}\xspace code is the \sql{SELECT} statement to which \ensuremath{\mathit{h'_{pfuns}}}\xspace
maps predicates of the form $leave\_something(\tau_1, \tau_2)$. This
statement generates a relation that for each flight and location,
shows the end-points of maximal periods where the flight was at that
location. The embedded \sql{SELECT} statement that is associated with
\sql{t4} is the result of applying the translation rule for predicates
to the $leave\_something(x1^v, sector3)$ of \pref{log:1}.
\sql{VALID(t3)} is the leaving-time, which has to fall within $lt$.
The three nested \sql{INTERSECT}s represent constraints that have been
imposed on $lt$: the \ensuremath{\mathit{Past}}\xspace operator requires $lt$ to be a subperiod of
$[p_{first}, st)$ (i.e.\ a subperiod of \sql{TIMESTAMP 'beginning',
TIMESTAMP 'now' - INTERVAL '1' MINUTE}), the $\ensuremath{\mathit{At}}\xspace[pm5\_00^g, \dots]$
requires $lt$ to be a subperiod of a 5:00pm-period (\sql{t2.1} ranges
over 5:00pm-periods), and the $\ensuremath{\mathit{At}}\xspace[yesterday,\dots]$ requires the
localisation time to be a subperiod of the previous day (\sql{PERIOD
'today' - INTERVAL '1' DAY}).
The \sql{SELECT} statement that is associated with \sql{t5} is
generated by the translation rule for \ensuremath{\mathit{Past}}\xspace (section
\ref{trans_rules}), and the \sql{SELECT} statement that is associated
with \sql{t7} is introduced by the translation rule for $\ensuremath{\mathit{At}}\xspace[\sigma_g,
\beta, \phi']$ (section \ref{atsg_rule}). (The $\ensuremath{\mathit{At}}\xspace[yesterday, \dots]$
of \pref{log:1} does not introduce its own \sql{SELECT} statement, it
only restricts $lt$; see the translation rule for $\ensuremath{\mathit{At}}\xspace[\kappa, \phi']$
in section \ref{trans_rules}.) The \sql{SELECT} statement that is
associated with \sql{t8} is introduced by the translation rule for
conjunction (section \ref{conj_rule}). It requires the attribute
values that correspond to the $x1^v$ arguments of $flight(x1^v)$ and
$leave\_something(x1^v, sector3)$, and the event times where the two
predicates are true to be identical. Finally, the top-level
\sql{SELECT} statement is introduced by the translation rule for
$?\beta_1 \; ?\beta_2 \; ?\beta_3 \; \dots \; ?\beta_k \; \phi'$ (section
\ref{wh1_rule}). It returns a snapshot relation that
contains the attribute values that correspond to $x1^v$ (the flights).
No further comments need to be made about the generated \textsc{Hpsg}\xspace signs
and \textsc{Tsql2}\xspace queries. To save space, I do not show these in the rest of
the examples. I also do not show the \textsc{Top}\xspace formulae before the
post-processing, unless some point needs to be made about them.
As noted in section \ref{progressives}, no attempt is made to block
progressive forms of state verbs. The progressive forms of these verbs
are taken to have the same meanings as the corresponding
non-progressive ones. This causes the two questions below to receive
the same \textsc{Top}\xspace formula.
\singlespace\small
\begin{verbatim}
| ?- nli([which,tanks,contain,water]).
TOP formula after post-processing:
interrog(x1^v,
and(tank(x1^v),
pres(contains(x1^v, water))))
| ?- nli([which,tanks,are,containing,water]).
TOP formula after post-processing:
[same formula as above]
\end{verbatim}
\normalsize\doublespace There are two lexical entries for the base form of \qit{to
service}, one for the habitual homonym, and one for the non-habitual
one. The habitual entry introduces the predicate functor
$hab\_servicer\_of$ and classifies the base form as state. The
non-habitual entry introduces the functor
$actl\_servicing$ and classifies the base form as culminating
activity. The simple present lexical rule (section
\ref{single_word_forms}) generates a simple present lexical entry for
only the habitual homonym (whose base form is state). Hence, the
\qit{services} below is treated as the simple present of the habitual
homonym (not as the simple present of the non-habitual homonym), and
only a formula that contains the $hab\_servicer\_of$ functor is
generated. This captures the fact that the question
can only have a habitual meaning (it cannot refer to a servicer that is
actually servicing BA737 at the present; the reader is reminded that
the scheduled-to-happen reading of the simple present is ignored in
this thesis -- see section \ref{simple_present}).
\singlespace\small
\begin{verbatim}
| ?- nli([which,servicer,services,ba737]).
TOP formula after post-processing:
interrog(x1^v,
and(servicer(x1^v),
pres(hab_servicer_of(x1^v, ba737))))
\end{verbatim}
\normalsize\doublespace In contrast, the present participle lexical rule (section
\ref{single_word_forms}) generates progressive entries for both the
non-habitual (culminating activity base form) and the habitual (state
base form) homonyms. This causes the question below to receive two
parses, one where the \qit{is servicing} is the present continuous of
the non-habitual homonym, and one where it is the
present continuous of the habitual homonym. This gives
rise to two formulae, one involving the $actl\_servicing$
functor (the servicer must be servicing BA737 at the present),
and one involving the $hab\_servicer\_of$ functor (the servicer must
be the current normal servicer of BA737). (The $x2^v$ in the first formula is
an occurrence identifier; see section \ref{occurrence_ids}.) The
habitual reading of the second formula seems rather unlikely in this
case.
\singlespace\small
\begin{verbatim}
| ?- nli([which,servicer,is,servicing,ba737]).
TOP formula after post-processing:
interrog(x1^v,
and(servicer(x1^v),
pres(actl_servicing(x2^v, x1^v, ba737))))
TOP formula after post-processing:
interrog(x1^v,
and(servicer(x1^v),
pres(hab_servicer_of(x1^v, ba737))))
\end{verbatim}
\normalsize\doublespace There are also different lexical entries for the actual \qit{to
depart} and the habitual \qit{to depart} (at some time). The
habitual entry introduces the functor $hab\_dep\_time$, requires an
\qit{at~\dots} complement, and classifies the base form as state. The
non-habitual entry introduces the functor $actl\_depart$, requires no
complement, and classifies the base form as point. When \qit{BA737
departed at 5:00pm.} is taken to involve the habitual homonym,
\qit{at 5:00pm} is treated as the complement that specifies the
habitual departure time (the second argument of
$hab\_dep\_time(\tau_1, \tau_2)$). When the sentence is taken to
involve the non-habitual homonym, \qit{at 5:00pm} is treated as a
temporal modifier, and it introduces an \ensuremath{\mathit{At}}\xspace operator (section
\ref{hpsg:punc_adv}). In the following question, this analysis leads
to two formulae: one where each reported flight must have actually
departed at 5:00pm at least once in 1993, and one where the habitual
departure time of each reported flight must have been 5:00pm some time
in 1993. The second reading seems the preferred one in this example.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flights,departed,at,pm5_00,in,y1993]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
at(y1993,
at(pm5_00^g, x2^v,
past(x3^v,
actl_depart(x1^v))))))
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
at(y1993,
past(x2^v,
hab_dep_time(x1^v, pm5_00)))))
\end{verbatim}
\normalsize\doublespace The first question below receives no parse, because \qit{to
circle} is classified as an activity verb (there is no habitual state
homonym in this case), and the simple present lexical rule does not
generate simple present lexical entries for activity verbs. In
contrast, the present participle lexical rule does generate
progressive entries for activity verbs. This causes the second
question below to be mapped to the formula one would expect. The
failure to parse the first question is justified, in the sense that
the question seems to be asking about flights that have some circling
habit, and the \textsc{Nlitdb}\xspace has no access to information on circling
habits. A more cooperative response, however, is needed to
explain this to the user.
\singlespace\small
\begin{verbatim}
| ?- nli([does,ba737,circle]).
**No (more) parses.
| ?- nli([is,ba737,circling]).
TOP formula after post-processing:
pres(circling(ba737))
\end{verbatim}
\normalsize\doublespace Following the arrangements of section \ref{hpsg:per_advs}, in
the following question where a culminating activity combines with a
period adverbial, two formulae are generated: one where the inspection
must have simply been completed on 1/5/92, and one where the whole
inspection (from start to completion) must have been carried out on
1/5/92. The first reading seems unlikely in this example, though as
discussed in section \ref{period_adverbials}, there are sentences
where the first reading is the intended one.
\singlespace\small
\begin{verbatim}
| ?- nli([who,inspected,uk160,on,d1_5_92]).
TOP formula after post-processing:
interrog(x1^v,
at(d1_5_92,
end(past(x2^v,
culm(inspecting(x3^v, x1^v, uk160))))))
TOP formula after post-processing:
interrog(x1^v,
at(d1_5_92,
past(x2^v,
culm(inspecting(x3^v, x1^v, uk160)))))
\end{verbatim}
\normalsize\doublespace
In the following question, the punctual adverbial \qit{at 5:00pm}
combines with a culminating activity. According to section
\ref{point_adverbials}, two readings arise: one where the taxiing
starts at 5:00pm, and one where it finishes at 5:00pm. In both cases,
the punctual adverbial causes the aspect of \qit{which flight taxied
to gate 2 at 5:00pm} to become point. That point sentence then
combines with the period adverbial \qit{yesterday}. According to
section \ref{period_adverbials}, the instantaneous situation of the
point phrase (the start or end of the taxiing) must occur within the
period of the adverbial. This analysis leads to two formulae: one
where the taxiing starts at 5:00pm on the previous day, and one where
the taxiing finishes at 5:00pm on the previous day. These formulae
capture the most likely readings of the question. Unfortunately, if
the order of \qit{at 5:00pm} and \qit{yesterday} is reversed, the
generated formulae are not equivalent to the ones below (see the
discussion in section \ref{hpsg:mult_mods}).
\singlespace\small
\begin{verbatim}
| ?- nli([which,flight,taxied,to,gate2,at,pm5_00,yesterday]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
at(yesterday,
at(pm5_00^g, x2^v,
end(past(x3^v,
culm(taxiing_to(x4^v, x1^v, gate2))))))))
\end{verbatim}
\newpage
\begin{verbatim}
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
at(yesterday,
at(pm5_00^g, x2^v,
begin(past(x3^v,
culm(taxiing_to(x4^v, x1^v, gate2))))))))
\end{verbatim}
\normalsize\doublespace In the sentence below (which is treated as a yes/no question),
the treatment of past perfects and punctual adverbials of section
\ref{hpsg:punc_adv} allows \qit{at 5:00pm} to modify either the verb
phrase \qit{left gate 2}, or the entire \qit{BA737 had left gate 2}.
This gives rise to two \textsc{Top}\xspace formulae: one where 5:00pm is the time at
which BA737 left gate 2, and one where 5:00pm is a reference time at
which BA737 had already left gate 2. The two formulae capture the two
most likely readings of the sentence.
\singlespace\small
\begin{verbatim}
| ?- nli([ba737,had,left,gate2,at,pm5_00]).
TOP formula after post-processing:
past(x2^v,
perf(x3^v,
at(pm5_00^g, x1^v,
leave_something(ba737, gate2))))
TOP formula after post-processing:
at(pm5_00^g, x1^v,
past(x2^v,
perf(x3^v,
leave_something(ba737, gate2))))
\end{verbatim}
\normalsize\doublespace Similarly, in the following question, the \qit{at 5:00pm} is
allowed to modify either the verb phrase \qit{taken off}, or the
entire \qit{BA737 had taken off}. In the first case, the verb phrase
still has the aspectual class of the base form, i.e.\ culminating
activity. According to section \ref{point_adverbials}, 5:00pm is the
time where the taking off was completed or started. These two readings
are captured by the first and second formulae below. (The second
reading seems unlikely in this example.) In the case where \qit{at
5:00pm} modifies the entire \qit{BA737 had taken off}, the \qit{had}
has already caused the aspect of \qit{BA737 had taken off} to become
(consequent) state. According to section \ref{point_adverbials}, in
that case 5:00pm is simply a time-point where the situation of the
sentence (having departed) holds. This reading is captured by the
third formula.
\singlespace\small
\begin{verbatim}
| ?- nli([ba737,had,taken,off,at,pm5_00]).
TOP formula after post-processing:
past(x2^v,
perf(x3^v,
at(pm5_00^g, x1^v,
end(culm(taking_off(x4^v, ba737))))))
TOP formula after post-processing:
past(x2^v,
perf(x3^v,
at(pm5_00^g, x1^v,
begin(culm(taking_off(x4^v, ba737))))))
TOP formula after post-processing:
at(pm5_00^g, x1^v,
past(x2^v,
perf(x3^v,
culm(taking_off(x4^v, ba737)))))
\end{verbatim}
\normalsize\doublespace The first question below receives the formula one would expect.
As discussed in section \ref{hpsg:mult_mods}, in the second question
below the grammar of chapter \ref{English_to_TOP} allows two parses:
one where \qit{yesterday} attaches to \qit{BA737 was circling}, and
one where \qit{yesterday} attaches to \qit{BA737 was circling for two
hours}. These two parses give rise to two different but logically
equivalent formulae.
\singlespace\small
\begin{verbatim}
| ?- nli([ba737,was,circling,for,two,hours,yesterday]).
TOP formula after post-processing:
at(yesterday,
for(hour^c, 2,
past(x1^v,
circling(ba737))))
| ?- nli([yesterday,ba737,was,circling,for,two,hours]).
TOP formula after post-processing:
for(hour^c, 2,
at(yesterday,
past(x1^v,
circling(ba737))))
TOP formula after post-processing:
at(yesterday,
for(hour^c, 2,
past(x1^v,
circling(ba737))))
\end{verbatim}
\normalsize\doublespace The following example reveals a problem in the current
treatment of temporal modifiers. The version of \textsc{Hpsg}\xspace adopted in this thesis
(section \ref{hpsg:pupe_adv}) allows temporal modifiers to attach only
to finite sentences (finite verb forms that have already combined with
their subjects and complements) or past participle verb phrases (past
participles that have combined with all their complements but not
their subjects). In both cases, the temporal modifier attaches after
the verb has combined with all its complements. English temporal
modifiers typically appear either at the beginning or the end of the
sentence (not between the verb and its complements), and hence
requiring temporal modifiers to attach after the verb has combined with
its complements is in most cases not a problem. However, in the
following question (which most native English speakers find acceptable) the
temporal modifier (\qit{for two hours}) is between the verb
(\qit{queued}) and its complement (\qit{for runway2}). Therefore,
the temporal modifier cannot attach to the verb after the verb
has combined with its complement, and the system fails to parse
the sentence. (In contrast, \qit{UK160 queued for runway 2 for two
hours.}, where the temporal modifier follows the complement, is parsed
without problems.)
\singlespace\small
\begin{verbatim}
| ?- nli([uk160,queued,for,two,hours,for,runway2]).
**No (more) parses.
\end{verbatim}
\normalsize\doublespace As explained in section \ref{post_processing}, the
post-processing removes \ensuremath{\mathit{Culm}}\xspace operators that are within \ensuremath{\mathit{For}}\xspace operators
introduced by \qit{for~\dots} adverbials. This is demonstrated in the
following example. The \qit{for~\dots} adverbial introduces a
\texttt{for\_remove\_culm} pseudo-operator, which can be thought of as
a \ensuremath{\mathit{For}}\xspace operator with a flag attached to it, that signals that \ensuremath{\mathit{Culm}}\xspace{s}
within the \ensuremath{\mathit{For}}\xspace operator must be removed. The post-processor
removes the \ensuremath{\mathit{Culm}}\xspace, and replaces the
\texttt{for\_remove\_culm} with an ordinary \ensuremath{\mathit{For}}\xspace operator.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flight,boarded,for,two,hours]).
TOP formula extracted from HPSG sign:
interrog(x1^v,
and(flight(x1^v),
for_remove_culm(hour^c, 2,
past(x2^v,
culm(boarding(x3^v, x1^v))))))
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
for(hour^c, 2,
past(x2^v,
boarding(x3^v, x1^v)))))
\end{verbatim}
\normalsize\doublespace Duration \qit{in~\dots} adverbials introduce \ensuremath{\mathit{For}}\xspace operators
that carry no flag to remove enclosed \ensuremath{\mathit{Culm}}\xspace{s}. In the
following question, this leads to a formula that (correctly)
requires the boarding to have been completed.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flight,boarded,in,two,hours]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
for(hour^c, 2,
past(x2^v,
culm(boarding(x3^v, x1^v))))))
\end{verbatim}
\normalsize\doublespace As explained in section \ref{present_perfect}, the present
perfect is treated in exactly the same way as the simple past. This
causes the two questions below to receive the same formula.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flight,has,been,at,gate2,for,two,hours]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
for(hour^c, 2,
past(x2^v,
located_at(x1^v, gate2)))))
| ?- nli([which,flight,was,at,gate2,for,two,hours]).
TOP formula after post-processing:
[same formula as above]
\end{verbatim}
\normalsize\doublespace
As discussed in section \ref{special_verbs}, when \qit{finished}
combines with a culminating activity, the situation must have reached
its completion. In contrast, when \qit{stopped} combines with a
culminating activity, the situation must have simply stopped, without
necessarily reaching its completion. This difference is captured in
the two formulae below by the existence or absence of a \ensuremath{\mathit{Culm}}\xspace.
\singlespace\small
\begin{verbatim}
| ?- nli([j_adams,finished,inspecting,uk160,at,pm5_00]).
TOP formula after post-processing:
at(pm5_00^g, x1^v,
past(x2^v,
end(culm(inspecting(x3^v, j_adams, uk160)))))
| ?- nli([j_adams,stopped,inspecting,uk160,at,pm5_00]).
TOP formula after post-processing:
at(pm5_00^g, x1^v,
past(x2^v,
end(inspecting(x3^v, j_adams, uk160))))
\end{verbatim}
\normalsize\doublespace In the airport domain, non-predicative adjectives (like
\qit{closed} below) introduce \ensuremath{\mathit{Ntense}}\xspace operators. In the question
below, the formula that is extracted from the \textsc{Hpsg}\xspace sign contains an
\ensuremath{\mathit{Ntense}}\xspace whose first argument is a variable. As explained in section
\ref{post_processing}, this leads to two different formulae after the
post-processing, one where \qit{closed} refers to the present, and one
where \qit{closed} refers to the time of the verb tense.
\singlespace\small
\begin{verbatim}
| ?- nli([was,any,flight,on,a,closed,runway,yesterday]).
TOP formula extracted from HPSG sign:
and(flight(x1^v),
and(and(ntense(x2^v, closed(x3^v)),
runway(x3^v)),
at(yesterday,
past(x4^v,
located_at(x1^v, x3^v)))))
**Post processing of TOP formula generated 2 different formulae.
TOP formula after post-processing:
and(flight(x1^v),
and(and(ntense(now, closed(x3^v)),
runway(x3^v)),
at(yesterday,
past(x4^v,
located_at(x1^v, x3^v)))))
TOP formula after post-processing:
and(flight(x1^v),
and(and(ntense(x4^v, closed(x3^v)),
runway(x3^v)),
at(yesterday,
past(x4^v,
located_at(x1^v, x3^v)))))
\end{verbatim}
\normalsize\doublespace In the following question, the \qit{currently} clarifies that
\qit{closed} refers to the present. The \ensuremath{\mathit{Ntense}}\xspace in the formula
extracted from the \textsc{Hpsg}\xspace sign has $now^*$ as its first
argument. The post-processing has no effect.
\singlespace\small
\begin{verbatim}
| ?- nli([was,any,flight,on,a,currently,closed,runway,yesterday]).
TOP formula extracted from HPSG sign:
and(flight(x1^v),
and(and(ntense(now, closed(x2^v)),
runway(x2^v)),
at(yesterday,
past(x3^v,
located_at(x1^v, x2^v)))))
\end{verbatim}
\normalsize\doublespace In the following question, the verb tense refers to the
present, and hence \qit{closed} can only refer to a currently closed
runway. The post-processor generates only one formula,
where the first argument of \ensuremath{\mathit{Ntense}}\xspace is $now^*$.
\singlespace\small
\begin{verbatim}
| ?- nli([is,any,flight,on,a,closed,runway]).
TOP formula extracted from HPSG sign:
and(flight(x1^v),
and(and(ntense(x2^v, closed(x3^v)),
runway(x3^v)),
pres(located_at(x1^v, x3^v))))
TOP formula after post-processing:
and(flight(x1^v),
and(and(ntense(now, closed(x3^v)),
runway(x3^v)),
pres(located_at(x1^v, x3^v))))
\end{verbatim}
\normalsize\doublespace Predicative adjectives do not introduce \ensuremath{\mathit{Ntense}}\xspace{s} (section
\ref{hpsg:adjectives}), and \textsc{Top}\xspace predicates introduced by these
adjectives always end up within the operator(s) of the verb
tense. This captures the fact that predicative adjectives always refer
to the time of the verb tense.
\singlespace\small
\begin{verbatim}
| ?- nli([was,gate2,open,on,monday]).
TOP formula after post-processing:
at(monday^g, x1^v,
past(x2^v,
open(gate2)))
\end{verbatim}
\normalsize\doublespace
For reasons explained in section \ref{pred_nps}, the system fails to
parse sentences that contain proper names or names of days, months,
etc.\ when these are used as predicative noun phrases (e.g.\ the first
two questions below). Other predicative noun phrases pose no problem
(e.g.\ the third question below).
\singlespace\small
\begin{verbatim}
| ?- nli([d1_1_91,was,a,monday]).
**No (more) parses.
| ?- nli([ba737,is,uk160]).
**No (more) parses.
| ?- nli([ba737,is,a,flight]).
TOP formula after post-processing:
pres(flight(ba737))
\end{verbatim}
\normalsize\doublespace
Multiple interrogative words can be handled, as demonstrated below.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flight,is,at,which,gate]).
TOP formula after post-processing:
interrog(x1^v,
interrog(x2^v,
and(gate(x1^v),
and(flight(x2^v),
pres(located_at(x2^v, x1^v))))))
\end{verbatim}
\normalsize\doublespace
In the first question below, the grammar of chapter
\ref{English_to_TOP} allows \qit{yesterday} to attach to either
\qit{BA737 was circling} or to the whole \qit{did any flight leave a
gate while BA737 was circling}. Two \textsc{Hpsg}\xspace signs are generated as a
result of this, from which two different but logically equivalent
formulae are extracted. In contrast, in the second question below, the
\qit{yesterday} cannot attach to \qit{BA737 was circling}, because of
the intervening \qit{while} (\qit{while BA737 was circling} is treated
as an adverbial, and \qit{yesterday} cannot attach to another
adverbial). Consequently, only one formula is generated.
\singlespace\small
\begin{verbatim}
| ?- nli([did,any,flight,leave,a,gate,while,ba737,was,circling,yesterday]).
TOP formula after post-processing:
and(flight(x1^v),
and(gate(x2^v),
at(at(yesterday,
past(x3^v,
circling(ba737))),
past(x4^v,
leave_something(x1^v, x2^v)))))
TOP formula after post-processing:
and(flight(x1^v),
and(gate(x2^v),
at(yesterday,
at(past(x3^v,
circling(ba737)),
past(x4^v,
leave_something(x1^v, x2^v))))))
| ?- nli([did,any,flight,leave,a,gate,yesterday,while,ba737,was,circling]).
TOP formula after post-processing:
and(flight(x1^v),
and(gate(x2^v),
at(past(x3^v,
circling(ba737)),
at(yesterday,
past(x5^v,
leave_something(x1^v, x2^v))))))
\end{verbatim}
\normalsize\doublespace In the questions below, the subordinate clause is a
(progressive) state. According to section \ref{before_after_clauses},
in the first question the flights must have arrived before a
time-point where BA737 started to board (\qit{to arrive} is a point
verb in the airport domain). In the second question, section
\ref{before_after_clauses} allows two readings: the flights must
have arrived after a time-point where BA737 started or stopped
boarding. The generated formulae capture these readings.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flights,arrived,before,ba737,was,boarding]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
before(past(x2^v,
boarding(x3^v, ba737)),
past(x4^v,
actl_arrive(x1^v)))))
| ?- nli([which,flights,arrived,after,ba737,was,boarding]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
after(begin(past(x2^v,
boarding(x3^v, ba737))),
past(x4^v,
actl_arrive(x1^v)))))
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
after(past(x2^v,
boarding(x3^v, ba737)),
past(x4^v,
actl_arrive(x1^v)))))
\end{verbatim}
\normalsize\doublespace Below, the subordinate clause is a culminating activity. In the
first question, according to section \ref{before_after_clauses} the
flights must have arrived before a time-point where BA737
finished or started to board. In the second question,
the flights must have arrived after a time-point where BA737
finished boarding. These readings are captured by the generated
formulae.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flights,arrived,before,ba737,boarded]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
before(end(past(x2^v,
culm(boarding(x3^v, ba737)))),
past(x4^v,
actl_arrive(x1^v)))))
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
before(past(x2^v,
culm(boarding(x3^v, ba737))),
past(x4^v,
actl_arrive(x1^v)))))
| ?- nli([which,flights,arrived,after,ba737,boarded]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
after(past(x2^v,
culm(boarding(x3^v, ba737))),
past(x4^v,
actl_arrive(x1^v)))))
\end{verbatim}
\normalsize\doublespace In the next two questions, the subordinate clause is a
consequent state. According to section \ref{before_after_clauses}, in
the first question the flights must have arrived before the situation
of the subordinate clause (having boarded) began, i.e.\ before BA737
finished boarding. In the second question, the flights must have
arrived after the situation of the subordinate clause (having boarded)
began, i.e.\ after BA737 finished boarding. These readings are
captured by the generated \textsc{Top}\xspace formulae.
\singlespace\small
\begin{verbatim}
| ?- nli([which,flights,arrived,before,ba737,had,boarded]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
before(past(x2^v,
perf(x3^v,
culm(boarding(x4^v,
ba737)))),
past(x5^v,
                         actl_arrive(x1^v)))))
| ?- nli([which,flights,arrived,after,ba737,had,boarded]).
TOP formula after post-processing:
interrog(x1^v,
and(flight(x1^v),
after(begin(past(x2^v,
perf(x3^v,
culm(boarding(x4^v,
ba737))))),
past(x5^v,
                        actl_arrive(x1^v)))))
\end{verbatim}
\normalsize\doublespace The question below combines a
\qit{when} interrogative and a \qit{while~\dots} clause. The generated
formula asks for maximal past circling-periods of BA737 that fall
within maximal past periods where UK160 was located at gate 2.
\singlespace\small
\begin{verbatim}
| ?- nli([when,while,uk160,was,at,gate2,was,ba737,circling]).
TOP formula after post-processing:
interrog_mxl(x3^v,
at(past(x2^v,
located_at(uk160, gate2)),
past(x3^v,
circling(ba737))))
\end{verbatim}
\normalsize\doublespace Finally, the question below receives two formulae: the first
one asks for times of past actual departures; the second one asks for
past normal departure times. (The latter reading is easier to accept
if an adverbial like \qit{in 1992} is attached.) In the second
question, only a formula for the habitual reading is generated,
because the simple present lexical rule (section
\ref{single_word_forms}) does not generate a simple present lexical
entry for the non-habitual \qit{to depart} (which is a point verb).
\singlespace\small
\begin{verbatim}
| ?- nli([when,did,ba737,depart]).
TOP formula after post-processing:
interrog_mxl(x2^v,
past(x2^v,
actl_depart(ba737)))
TOP formula after post-processing:
interrog(x1^v,
past(x2^v,
hab_dep_time(ba737, x1^v)))
| ?- nli([when,does,ba737,depart]).
TOP formula after post-processing:
interrog(x1^v,
pres(hab_dep_time(ba737, x1^v)))
\end{verbatim}
\normalsize\doublespace
\section{Speed issues}
As already noted, the prototype \textsc{Nlitdb}\xspace was developed simply to
demonstrate that the mappings from English to \textsc{Top}\xspace and from \textsc{Top}\xspace to
\textsc{Tsql2}\xspace are implementable. Execution speed was not a priority, and the
\textsc{Nlitdb}\xspace code is by no means optimised for fast execution. On a lightly
loaded Sun \textsc{Sparc}station 5, single-clause questions with
single parses are typically mapped to \textsc{Tsql2}\xspace queries in about 15--30
seconds. Longer questions with subordinate clauses and multiple parses
usually take 1--2 minutes to process. (These times include the
printing of all the \textsc{Hpsg}\xspace signs, \textsc{Top}\xspace formulae, and \textsc{Tsql2}\xspace queries.)
The system's speed seems acceptable for a research prototype, but it is
unsatisfactory for real-life applications.
Whenever a modification is made in the software, the code of the
affected modules has to be recompiled. This takes only a few seconds
in the case of modules that are written in Prolog (the post-processor
and the \textsc{Top}\xspace to \textsc{Tsql2}\xspace translator), but it is very time-consuming in
the case of modules that are written in \textsc{Ale}\xspace's formalism (the
components of the \textsc{Hpsg}\xspace grammar and the extractor of \textsc{Top}\xspace formulae).
This becomes particularly annoying when experimenting with the
grammar, as in many cases after modifying the grammar all its
components (sort hierarchy, lexical rules, etc.) have to be
recompiled, and this recompilation takes approximately 8 minutes on
the above machine.
\section{Summary}
The framework of this thesis was tested by developing a prototype
\textsc{Nlitdb}\xspace, implemented using Prolog and \textsc{Ale}\xspace. The prototype was
configured for the hypothetical airport application. A number of
sample questions were used to demonstrate that the system behaves
according to the specifications of the previous chapters. The
architecture of the prototype is currently minimal. A preprocessor,
mechanisms for quantifier-scoping and anaphora resolution, an
equivalential translator, a response generator, and configuration
tools would have to be added if the system were to be used in
real-life applications. Execution speed would also have to be
improved.
\chapter{Comparison with Previous Work on NLITDBs} \label{comp_chapt}
\proverb{Other times other manners.}
This chapter begins with a discussion of previous work on
\textsc{Nlitdb}\xspace{s}. The discussion identifies six problems from which previous
proposals on \textsc{Nlitdb}\xspace{s} suffer. I then examine whether the
framework of this thesis overcomes these problems.
\section{Previous work on NLITDBs} \label{previous_nlitdbs}
This section discusses previous work on \textsc{Nlitdb}\xspace{s}. Clifford's work,
which is the most significant and directly relevant to this thesis, is
presented first.
\subsection{Clifford} \label{Clifford_prev}
Clifford \cite{Clifford} defined a temporal version of the relational
model. He also showed how a fragment of English questions involving
time can be mapped systematically to logical expressions whose
semantics are defined in terms of a database structured according to
his model.\footnote{Parts of \cite{Clifford} can be found in
\cite{Clifford4}, \cite{Clifford5}, and \cite{Clifford3}. The
database model of this section is that of \cite{Clifford}. A
previous version of this model appears in \cite{Clifford2}.}
Clifford's approach is notable in that both the semantics of the
English fragment and of the temporal database are defined within
a common model-theoretic framework, based on Montague semantics
\cite{Dowty}.
Clifford extended the syntactic coverage of Montague's \textsc{Ptq}\xspace grammar,
to allow past, present, and future verb forms, some temporal connectives
and adverbials (e.g.\ \qit{while}, \qit{during}, \qit{in 1978},
\qit{yesterday}), and questions. \pref{cliff:2} -- \pref{cliff:7} are
all within Clifford's syntactic coverage. (Assertions like
\pref{cliff:4} are treated as yes/no questions.)
\begin{examps}
\item Is it the case that Peter earned 25K in 1978? \label{cliff:2}
\item Does Rachel manage an employee such that he earned 30K? \label{cliff:3}
\item John worked before Mary worked. \label{cliff:4}
\item Who manages which employees? \label{cliff:6}
\item When did Liz manage Peter? \label{cliff:7}
\end{examps}
Clifford does not allow progressive verb forms. He also claims that no
distinction between progressive and non-progressive forms is necessary
in the context of \textsc{Nlitdb}\xspace{s} (see p.12 of \cite{Clifford4}).
According to Clifford's view, \pref{cliff:1a} can be treated in
exactly the same manner as \pref{cliff:1b}. This ignores the fact
that \pref{cliff:1b} most probably refers to a company that habitually
or normally services BA737, or to a company that will service BA737
according to some plan, not to a company that is actually servicing
BA737 at the present. In contrast, \pref{cliff:1a} most probably
refers to a company that is actually servicing BA737 at the present,
or to a company that is going to service BA737. Therefore, the \textsc{Nlitdb}\xspace
should not treat the two questions as identical, if its responses are
to be appropriate to the meanings users have in mind.
\begin{examps}
\item Which company is servicing flight BA737? \label{cliff:1a}
\item Which company services flight BA737? \label{cliff:1b}
\end{examps}
Clifford also does not discuss perfect tenses (present perfect, past
perfect, etc.), which do not seem to be allowed in his framework.
Finally, he employs no aspectual taxonomy (this will be discussed in
section \ref{sem_assess}).
Following the Montague tradition, Clifford employs an intensional
higher order language (called \textsc{Il}$_s$\xspace) to represent the meanings of
English questions. There is a set of syntactic rules that determine the
syntactic structure of each sentence, and a set of semantic rules
that map syntactic structures to expressions of \textsc{Il}$_s$\xspace. For example,
\pref{cliff:7} is mapped to the \textsc{Il}$_s$\xspace expression of \pref{cliff:8}.
\begin{examps}
\item $\begin{aligned}[t]
\lambda i_1 [[i_1 < i] \land \exists y [&EMP'_*(i_1)(Peter) \land \\
&MGR'(i_1)(y) \land
y(i_1) = Liz \land
AS\_1(Peter, y)]]
\end{aligned}$
\label{cliff:8}
\end{examps}
Roughly speaking, \pref{cliff:8} has the following meaning.
$EMP'_*(i_1)(Peter)$ means that Peter must be an employee at the
time-point $i_1$. $MGR'(i_1)(y)$ means that $y$ must be a partial
function from time-points to managers (an \emph{intension} in Montague
semantics terminology) which is defined for (at least) the time-point
$i_1$. $AS\_1(Peter, y)$ requires $y$ to represent the history of
Peter's managers (i.e.\ the value $y(i_1)$ of $y$ at each time-point
$i_1$ must be the manager of Peter at that time-point). The $y(i_1) =
Liz$ requires the manager of Peter at $i_1$ to be Liz. Finally, $i$ is
the present time-point, and $i_1 < i$ means that $i_1$ must precede
$i$. \pref{cliff:8} requires all time-points $i_1$ to be
reported, such that $i_1$ precedes the present time-point, Peter is an
employee at $i_1$, and Peter's manager at $i_1$ is Liz.
The following (from \cite{Clifford}) is a relation in
Clifford's database model (called \textsc{Hrdm}\xspace\ -- Historical Relational
Database Model).
\nspace{1.0}
\begin{center}
{\small
\begin{tabular}{|l|l|l|l|l|}
\hline
\multicolumn{5}{|l|}{$emprel$} \\
\hline \hline
$EMP$ & $MGR$ & $DEPT$ & $SAL$ & $lifespan$ \\
\hline
&&&& \\
$Peter$ & $ \left[ \begin{array}{l}
S2 \rightarrow Elsie\\
S3 \rightarrow Liz
\end{array} \right] $
& $ \left[ \begin{array}{l}
S2 \rightarrow Hardware\\
S3 \rightarrow Linen
\end{array} \right] $
& $ \left[ \begin{array}{l}
S2 \rightarrow 30K \\
S3 \rightarrow 35K
\end{array} \right] $
& $\{S2,S3\}$\\
&&&& \\
\hline
&&&& \\
$Liz$ & $ \left[ \begin{array}{l}
S2 \rightarrow Elsie\\
S3 \rightarrow Liz
\end{array} \right] $
& $ \left[ \begin{array}{l}
S2 \rightarrow Toy\\
S3 \rightarrow Hardware
\end{array} \right] $
& $ \left[ \begin{array}{l}
S2 \rightarrow 35K \\
S3 \rightarrow 50K
\end{array} \right] $
& $\{S2,S3\}$\\
&&&& \\
\hline
&&&& \\
$Elsie$ & $ \left[ \begin{array}{l}
S1 \rightarrow Elsie\\
S2 \rightarrow Elsie
\end{array} \right] $
& $ \left[ \begin{array}{l}
S1 \rightarrow Toy\\
S2 \rightarrow Toy
\end{array} \right] $
& $ \left[ \begin{array}{l}
S1 \rightarrow 50K \\
S2 \rightarrow 50K
\end{array} \right] $
& $\{S1,S2\}$\\
&&&& \\
\hline
\end{tabular}
}
\end{center}
\nspace{1.4} The $\mathit{lifespan}$ of each tuple shows the
time-points (``states'' in Clifford's terminology) for which the tuple
carries information. In \textsc{Hrdm}\xspace, attribute values are not necessarily
atomic. They can also be sets of time-point denoting symbols (as in
the case of $\mathit{lifespan}$), or partial functions from time-point
denoting symbols to atomic values. The relation above means that at
the time-point $S2$ the manager of Peter was Elsie, and that at $S3$
the manager of Peter was Liz. \textsc{Hrdm}\xspace uses additional time-stamps to
cope with schema-evolution (section \ref{no_issues}). I do not discuss
these here.
Clifford shows how the semantics of \textsc{Il}$_s$\xspace expressions can be defined in
terms of an \textsc{Hrdm}\xspace database (e.g.\ how the semantics of \pref{cliff:8}
can be defined in terms of information in $emprel$). He also defines
an algebra for \textsc{Hrdm}\xspace, similar to the relational algebra of the
traditional relational model \cite{Ullman}. (Relational algebra is a
theoretical database query language. Most \textsc{Dbms}\xspace{s} do not support it
directly. \textsc{Dbms}\xspace users typically specify their requests in more
user-friendly languages, like \textsc{Sql}\xspace. \textsc{Dbms}\xspace{s}, however, often use
relational algebra internally, to represent operations that need to be
carried out to satisfy the users' requests.) The answer to
\pref{cliff:7} can be found using \pref{cliff:9}, which is an
expression in Clifford's algebra.
\begin{examps}
\item $\mathit{\omega(\sigma\text{-}WHEN_{EMP = Peter, MGR = Liz}(emprel))}$
\label{cliff:9}
\end{examps}
$\sigma\text{-}WHEN_{EMP = Peter, MGR = Liz}(emprel)$
\index{swhen@$\sigma\text{-}WHEN$ (operator of Clifford's algebra)}
generates a single-tuple relation (shown below as
$emprel2$) that carries the information of Peter's tuple from
$emprel$, restricted to when his manager was Liz. The $\omega$
\index{o@$\omega$ (operator of Clifford's algebra)}
operator returns a set of time-point denoting symbols, that represents
all the time-points for which there is information in the
relation-argument of $\omega$. In our example, \pref{cliff:9}
returns $\{S3\}$.
\nspace{1.0}
\begin{center}
{\small
\begin{tabular}{|l|l|l|l|l|}
\hline
\multicolumn{5}{|l|}{$emprel2$} \\
\hline \hline
$EMP$ & $MGR$ & $DEPT$ & $SAL$ & $lifespan$ \\
\hline
&&&& \\
$Peter$ & $[S3 \rightarrow Liz] $
& $[S3 \rightarrow Linen]$
& $[S3 \rightarrow 35K] $
& $\{S3\}$\\
&&&& \\
\hline
\end{tabular}
}
\end{center}
\nspace{1.4}
Clifford outlines an algorithm for mapping \textsc{Il}$_s$\xspace expressions to
appropriate algebraic expressions (e.g.\ mapping \pref{cliff:8} to
\pref{cliff:9}; see p.170 of \cite{Clifford}). The description of this
algorithm, however, is very sketchy and informal.
According to Clifford
(\cite{Clifford4}, p.16), a parser for his version of the \textsc{Ptq}\xspace grammar
(that presumably also maps English questions to \textsc{Il}$_s$\xspace
expressions) has been developed. Clifford, however, does not provide any
information on whether or not a translator from \textsc{Il}$_s$\xspace to his algebra
was ever implemented (as noted above, this mapping is not
fully defined), and there is no indication that Clifford's framework
was ever used to implement an actual \textsc{Nlitdb}\xspace.
\subsection{Bruce} \label{Bruce_prev}
Bruce's \textsc{Chronos} \cite{Bruce1972} is probably the first
natural language question-answering system that attempted to address
specifically time-related issues. \textsc{Chronos} is not really
an interface to a stand-alone database system. When invoked, it has no
information about the world. The user ``teaches'' \textsc{Chronos}
various facts (using statements like \pref{Bruce:1} and
\pref{Bruce:2}), which are stored internally as expressions of a
Lisp-like representation language. Questions about the stored facts
can then be asked (e.g.\ \pref{Bruce:3}, \pref{Bruce:4}).
\begin{examps}
\item The American war for independence began in 1775. \label{Bruce:1}
\item The articles of confederation period was from 1777 to 1789.
\label{Bruce:2}
\item Does the American war for independence coincide with the time
from 1775 to 1781? \label{Bruce:3}
\item Did the time of the American war for independence overlap the
articles of confederation period? \label{Bruce:4}
\end{examps}
Bruce defines formally a model of time, and explores how relations
between time-segments of that model can represent the semantics of
some English temporal mechanisms (mainly verb tenses). Bruce's
time-model and temporal relations seem to underlie \textsc{Chronos}'
Lisp-like representation language. Bruce, however, provides no
information about the representation language itself. With the
exception of verb tenses, there is very little information on the
linguistic coverage of the system and the linguistic assumptions on
which the system is based, and scarcely any information on the mapping
from English to representation language. (The discussion in
\cite{Bruce1972} suggests that the latter mapping may be based on
simplistic pattern-matching techniques.) Finally, Bruce does not
discuss exactly how the stored facts are used to answer questions like
\pref{Bruce:3} and \pref{Bruce:4}.
\subsection{De, Pan, and Whinston}
De, Pan, and Whinston \cite{De} \cite{De2} describe a
question-answering system that can handle a fragment of English
questions involving time. The ``temporal database'' in this case is a
rather ad hoc collection of facts and inference rules (that can be
used to infer new information from the facts), rather than a
principled database built on a well-defined database model. Both the
grammar of the linguistic processor and the facts and rules of the
database are specified in ``equational logic'' (a kind of
logic-programming language). There is no clear intermediate
representation language, and it is very difficult to distinguish the
part of the system that is responsible for the linguistic processing
from the part of the system that is responsible for retrieving
information from the ``database''. De et al.\ consider this an
advantage, but it clearly sacrifices modularity and portability. For
example, it is very hard to see which parts of the software would have
to be modified if the natural language processor were to be used with
a commercial \textsc{Dbms}\xspace.
The system of De et al.\ does not seem to be based on any clear
linguistic analysis. There is also very little information in
\cite{De} and \cite{De2} on exactly which temporal linguistic
mechanisms are supported, and which semantics are assigned to these
mechanisms. Furthermore, no aspectual classes are used (see related
comments in section \ref{sem_assess}).
\subsection{Moens} \label{Moens_prev}
Moens' work on temporal linguistic phenomena \cite{Moens}
\cite{Moens2} has been highly influential in the area of tense and
aspect theories (some ideas from Moens' work were mentioned in chapter
\ref{linguistic_data}). In the last part of \cite{Moens} (see also
\cite{Moens3}), Moens develops a simplistic \textsc{Nlitdb}\xspace. This has a very
limited linguistic coverage, and is mainly intended to illustrate
Moens' tense and aspect theory, rather than to constitute a detailed
exploration of issues related to \textsc{Nlitdb}\xspace{s}.
As in the case of Bruce and De et al., Moens' ``database'' is not a
stand-alone system built according to an established (e.g.\
relational) database model. Instead, it is a collection of Prolog
facts of particular forms, that record information according to an
idiosyncratic and unclearly defined database model. Apart from purely
temporal information (that shows when various events took place),
Moens' database model also stores information about \emph{episodes}.
According to Moens, an episode is a sequence of ``contingently''
related events. Moens uses the term ``contingency'' in a rather vague
manner: in some cases it denotes a consequence relation (event A was a
consequence of event B); in other cases it is used to refer to events
that constitute steps towards the satisfaction of a common goal. The
intention is, for example, to be able to store an event where John
writes chapter 1 of his thesis together with an event where John
writes chapter 2 of his thesis as constituting parts of an episode
where John writes his thesis. Some episodes may be parts of larger
episodes (e.g.\ the episode where John writes his thesis could be part
of a larger episode where John earns his PhD).
Moens claims that episodic information of this kind is necessary if
certain time-related linguistic mechanisms (e.g.\ \qit{when~\dots}
clauses, present perfect) are to be handled appropriately. Although I
agree that episodic information seems to play an important role in how
people perceive temporal information, it is often difficult to see how
Moens' episodic information (especially when events in an episode are
linked with consequence relations) can be used in a practical \textsc{Nlitdb}\xspace
(e.g.\ in section \ref{present_perfect}, I discussed common claims
that the English present perfect involves a consequence relation, and
I explained why an analysis of the present perfect that posits a
consequence relation is impractical in \textsc{Nlitdb}\xspace{s}). By assuming that
the database contains episodic information, one also moves away from
current proposals in temporal databases, that do not consider
information of this kind. For these reasons, I chose not to assume
that the database provides episodic information. As was demonstrated
in the previous chapters, even in the absence of such information
reasonable responses can be generated in a large number of cases.
Moens' database model is also interesting in that it provides some
support for \emph{imprecise temporal information}. One may know, for
example, that two events A and B occurred, and that B was a
consequence of A, without knowing the precise times where A and B
occurred. Information of this kind can be stored in Moens' database,
because in his model events are not necessarily associated with times.
One can store events A and B as a sequence of contingently related
events (here contingency would have its consequence meaning) without
assigning them specific times. (If, however, there is no contingency
relation between the two events and their exact times are unknown,
Moens' model does not allow the relative order of A and B to be
stored.) Although there has been research on imprecise temporal
information in databases (e.g.\ \cite{Brusoni1995},
\cite{Koubarakis1995}), most of the work on temporal databases assumes
that events are assigned specific times. To remain compatible with
this work, I adopted the same assumption.
Moens' system uses a subset of Prolog as its meaning representation
language. English questions are translated into expressions of this
subset using a \textsc{Dcg} grammar \cite{Pereira1980}, and there are
Prolog rules that evaluate the resulting expressions against the
database. Moens provides no information about the \textsc{Dcg}
grammar. Also, the definition of the meaning representation language
is unclear. It is difficult to see exactly which Prolog expressions
are part of the representation language, and the semantics of the
language is defined in a rather informal way (by listing Prolog code
that evaluates some of the possible expressions of the representation
language against the database).
\subsection{Spenceley} \label{Spenceley_prev}
Spenceley \cite{Spenceley1989} developed a version of the
\textsc{Masque} natural language front-end \cite{Auxerre2} that can
cope with certain kinds of imperatives and temporal questions. The
front-end was used to interface to a Prolog database that modelled a
blocks-world similar to that of Winograd's
\textsc{Shrdlu} \cite{Winograd1973}. The dialogue in \pref{Spenc:1} --
\pref{Spenc:5.5} illustrates the capabilities of Spenceley's
system. The user can type imperatives, like \pref{Spenc:1} and
\pref{Spenc:2}, that cause the database to be updated
to reflect the new state of the blocks-world. At any point, questions
like \pref{Spenc:3} and \pref{Spenc:5} can be issued, to ask about
previous actions or about the current state of the world.
\begin{examps}
\item Take Cube1. \label{Spenc:1}
\item Put Cube1 on Cube2. \label{Spenc:2}
\item What was put on Cube2? \label{Spenc:3}
\item \sys{Cube1.} \label{Spenc:4}
\item Is Cube2 on Cube1? \label{Spenc:5}
\item \sys{No.} \label{Spenc:5.5}
\end{examps}
A simplistic aspectual taxonomy is adopted, which distinguishes between
\emph{states} and \emph{actions} (the latter containing Vendler's
activities, accomplishments, and achievements; see section
\ref{asp_taxes}). The linguistic coverage is severely
restricted. For example, the user can ask about past actions
and present states (e.g.\ \pref{Spenc:3}, \pref{Spenc:5}), but not
about past states (\pref{Spenc:6} is rejected). Only
\qit{while~\dots}, \qit{before~\dots}, and \qit{after~\dots}
subordinate clauses can be used to specify past times, and subordinate
clauses can refer only to actions, not states (e.g.\ \pref{Spenc:7} is
allowed, but \pref{Spenc:8} is not). Temporal adverbials, like \qit{at
5:00pm} in \pref{Spenc:9}, are not supported. Spenceley also
attempts to provide some support for \emph{tense anaphora} (section
\ref{temporal_anaphora}), but her tense anaphora mechanism is very
rudimentary.
\begin{examps}
\item \rej Where was Cube1? \label{Spenc:6}
\item What was taken before Cube1 was put on Cube2? \label{Spenc:7}
\item \rej What was taken before Cube1 was on Cube2? \label{Spenc:8}
\item \rej What was taken at 5:00pm? \label{Spenc:9}
\end{examps}
The English requests are parsed using an ``extraposition grammar''
\cite{Pereira}, and they are translated into a subset of Prolog that
acts as a meaning representation language.\footnote{The syntax and
semantics of a similar Prolog subset, that is used as the meaning
representation language of another version of \textsc{Masque}, are
defined in \cite{Androutsopoulos}.} The resulting Prolog expressions
are then executed by the Prolog interpreter to update the database or
to retrieve the requested information. The ``database'' is a
collection of ad hoc Prolog facts (and in that respect similar to the
``databases'' of Bruce, De et al., and Moens). It stores information
about past actions, but not states (this is probably why questions
like \pref{Spenc:8} are not allowed). Also, the database records
temporal relations between actions (which action followed which
action, which action happened during some other action), but not the
specific times where the actions happened. Hence, there is no
information in the database to answer questions like \pref{Spenc:9},
which require the specific times where the actions happened to be
known.
\subsection{Brown}
Brown \cite{Brown1994} describes a question-answering system that can
handle some temporal linguistic phenomena. As in Bruce's system,
the user first ``teaches'' the system various facts (e.g.\ \qit{Pedro
is beating Chiquita.}), and he/she can then ask questions about
these facts (e.g.\ \qit{Is he beating her?}). Brown's system is
interesting in that it is based on Discourse Representation Theory
(\textsc{Drt}\xspace), a theory in which tense and aspect have received
particular attention \cite{Kamp1993}. Brown's system, however, seems
to implement the tense and aspect mechanisms of \textsc{Drt}\xspace to a very limited
extent. Brown shows only how simple present, simple past, present
continuous, and past continuous verb forms can be handled. Other tenses,
temporal adverbials, temporal subordinate clauses, etc.\ do not seem
to be supported.
Brown's system transforms the English sentences into \textsc{Drt}\xspace discourse
representation structures, using a grammar written in an extended
\textsc{Dcg} version \cite{Covington1993}. Brown provides very little
information about this grammar. The relation of Brown's grammar to
that sketched in \cite{Kamp1993} is also unclear. The discourse
representation structures are then translated into Prolog facts (this
turns out to be relatively straightforward). As in Moens' and
Spenceley's systems, the ``database'' is a collection of Prolog facts,
rather than a principled stand-alone system.
\subsection{Other related work} \label{other_related_prev}
Hafner \cite{Hafner} considers the inability of existing \textsc{Nlidb}\xspace{s} to
handle questions involving time to be a major weakness. Observing that there
is no consensus among database researchers on how the notion of time
should be supported in databases (this was true when \cite{Hafner} was
written), Hafner concludes that \textsc{Nlidb}\xspace designers who wish their
systems to handle questions involving time cannot look to the
underlying \textsc{Dbms}\xspace for special temporal support. She therefore proposes
a temporal reasoning model (consisting of a temporal ontology, a
Prolog-like representation language, and inference rules written in
Prolog), intended to be incorporated into a hypothetical \textsc{Nlidb}\xspace to
compensate for the lack of temporal support from the \textsc{Dbms}\xspace. Hafner,
however, does not describe exactly how her reasoning model would be
embedded into a \textsc{Nlidb}\xspace (e.g.\ how the semantics of verb tenses,
temporal adverbials, etc.\ could be captured in her representation
language, how English questions could be translated systematically
into her representation language, and exactly how her inference rules
would interact with the \textsc{Dbms}\xspace). Also, although it was true when
\cite{Hafner} was written that there was no consensus among temporal
database researchers, and that the \textsc{Nlidb}\xspace designer could not expect
special temporal support from the \textsc{Dbms}\xspace, this is (at least to some
extent) no longer true. A temporal database query language
(\textsc{Tsql2}\xspace) that was designed by a committee comprising most leading
temporal database researchers now exists, and a prototype \textsc{Dbms}\xspace
(\textsc{TimeDB}; section \ref{tdbs_general}) that supports \textsc{Tsql2}\xspace has
already appeared. Instead of including into the \textsc{Nlitdb}\xspace a
temporal reasoning module (as sketched by Hafner), in this thesis I
assumed that a \textsc{Dbms}\xspace supporting \textsc{Tsql2}\xspace is available, and I exploited
\textsc{Tsql2}\xspace's temporal facilities.
Mays \cite{Mays1986} defines a modal logic which can be used to reason
about possible or necessary states of the world (what may or will
become true, what was or could have been true; see also the discussion
on modal questions in section \ref{no_issues}). Mays envisages a
reasoning module based on his logic that would be used when a \textsc{Nlidb}\xspace
attempts to generate cooperative responses. In \pref{Mays:2}, for
example, the system has offered to monitor the database, and to inform
the user when the Kitty Hawk reaches Norfolk. In order to avoid responses
like \pref{Mays:4}, the system must be able to reason that the
distance between the two cities will never change. Mays, however, does
not discuss exactly how that reasoning module would be embedded into a
\textsc{Nlidb}\xspace (e.g.\ how English questions would be mapped to expressions of
his logic, and how the reasoning module would interact with the
database).
\begin{examps}
\item Is the Kitty Hawk in Norfolk? \label{Mays:1}
\item \sys{No, shall I let you know when she is?} \label{Mays:2}
\item Is New York less than 50 miles from Philadelphia? \label{Mays:3}
\item \sys{No, shall I let you know when it is?} \label{Mays:4}
\end{examps}
Hinrichs \cite{Hinrichs} proposes methods to address some time-related
linguistic phenomena, reporting on experience from a natural language
understanding system that, among other things, allows the user to
access time-dependent information stored in a database. Although
Hinrichs' methods are interesting (some of them were discussed in
section \ref{noun_anaphora}), \cite{Hinrichs} provides little
information on the actual natural language understanding system, and
essentially no information on the underlying \textsc{Dbms}\xspace and how the
intermediate representation language expressions are evaluated against
the database. There is also no indication that any aspectual taxonomy
is used, and the system uses a version of Montague's \textsc{Ptq}\xspace grammar (see
related comments in section \ref{eval_Grammar} below).
Finally, in \textsc{Cle} (a generic natural language front-end
\cite{Alshawi}) verb tenses introduce into the logical expressions
temporal operators, and variables that are intended to represent
states or events. The semantics of these operators and variables,
however, are left undefined. In \textsc{Clare} (roughly speaking, a
\textsc{Nlidb}\xspace based on \textsc{Cle}; see \cite{Alshawi2}) the temporal
operators are dropped, and verb tenses are expressed using
predications over event or state variables. The precise semantic
status of these variables remains obscure. Neither \cite{Alshawi} nor
\cite{Alshawi2} discusses temporal linguistic phenomena in any
detail.
\section{Assessment} \label{evaluation}
It follows from the discussion in section \ref{previous_nlitdbs} that
previous approaches to \textsc{Nlitdb}\xspace{s} suffer from one or more of
the following: (i) they ignore important English temporal mechanisms,
or assign to them over-simplified semantics (e.g.\ Clifford,
Spenceley, Brown), (ii) they lack clearly defined meaning representation
languages (e.g.\ Bruce, De et al., Moens), (iii) they do not provide
complete descriptions of the mappings from natural language to meaning
representation language (e.g.\ Bruce, Moens, Brown), or (iv) from meaning
representation language to database language (e.g.\ Clifford), (v)
they adopt idiosyncratic and often not well-defined database models or
languages (e.g.\ Bruce, De et al., Moens, Spenceley, Brown), (vi) they do not
demonstrate that their ideas are implementable (e.g.\ Clifford,
Hafner, Mays). In this section I assess the work of this thesis with
respect to (i) -- (vi), comparing mainly to Clifford's work, which
constitutes the most significant previous exploration of \textsc{Nlitdb}\xspace{s}.
\subsection{English temporal mechanisms and their semantics} \label{sem_assess}
In section \ref{Clifford_prev}, I criticised Clifford's lack of an
aspectual taxonomy. It should be clear from the discussion in chapter
\ref{linguistic_data} that the distinction between aspectual classes
pertains to the semantics of most temporal linguistic mechanisms, and
that without an aspectual taxonomy important semantic distinctions
cannot be captured (e.g.\ the fact that the simple past of a
culminating activity verb normally implies that the climax was
reached, while the simple past of a point, state, or activity verb
carries no such implication; the fact that an \qit{at~\dots} adverbial
typically has an inchoative or terminal meaning with a culminating
activity, but an interjacent meaning with a state, etc.). The aspectual
taxonomy of this thesis allowed me to capture many distinctions of
this kind, which cannot be accounted for in Clifford's framework.
Generally, this thesis examined the semantics of English temporal
mechanisms at a much more detailed level compared to Clifford's work.
Particular care was also taken to explain clearly which temporal
linguistic mechanisms this thesis attempts to support, which
simplifications were introduced in the semantics of these mechanisms,
and which phenomena remain to be considered (see table
\vref{coverage_table} for a summary). This information is difficult to
obtain in the case of Clifford's work.
In terms of syntactic coverage of time-related phenomena, the grammar
of this thesis is similar to Clifford's. Both grammars, for example,
support only three kinds of temporal subordinate clauses:
\qit{while~\dots}, \qit{before~\dots}, and
\qit{after~\dots} clauses. Clifford's grammar allows simple-future
verb forms (these are not supported by the grammar of this
thesis), but it does not allow progressive or perfect forms
(which are partially supported by the grammar of this thesis).
The two grammars allow similar temporal adverbials (e.g.\
\qit{in~1991}, \qit{before 3/5/90}, \qit{yesterday}), though there are
adverbials that are supported by Clifford's grammar but not by the
grammar of this thesis (e.g.\ \qit{never}, \qit{always}), and
adverbials that are supported by the grammar of this thesis but not by
Clifford's (e.g.\ \qit{for five hours}, \qit{in two days}). Both
grammars support yes/no questions, \qit{Who/What/Which~\dots?} and
\qit{When~\dots?} questions, multiple interrogatives
(e.g.\ \qit{Who inspected what on 1/1/91?}), and assertions (which are
treated as yes/no questions). The reader is reminded, however, that
Clifford assigns to temporal linguistic mechanisms semantics which
are typically much shallower than the semantics of this thesis.
Although the framework of this thesis can cope with an interesting set
of temporal linguistic phenomena, there are still many English temporal
mechanisms that are not covered (e.g.\ \qit{since~\dots} adverbials,
\qit{when~\dots} clauses, tense anaphora). Hence,
the criticism about previous approaches, that important
temporal linguistic mechanisms are not supported, applies to the
work of this thesis as well. (It also applies to Clifford's framework, where
most of these mechanisms are also not covered.) I claim, however, that
the temporal mechanisms that are currently supported are assigned
sufficiently elaborate semantics, to the extent that the other criticism
about previous approaches, that they use over-simplified semantics, does
not apply to the work of this thesis. I hope that further work on the
framework of this thesis will extend its coverage of temporal
phenomena (see section \ref{to_do} below).
\subsection{Intermediate representation language}
From the discussion in section \ref{previous_nlitdbs}, it follows that
some previous proposals on \textsc{Nlitdb}\xspace{s} (e.g.\ Bruce, De et al., Moens) use
meaning representation languages that are not clearly defined.
(Clifford's work does not suffer from this problem; his \textsc{Il}$_s$\xspace language
is defined rigorously.) This is a severe problem. Without a detailed
description of the syntax of the representation language, it is very
difficult to design a mapping from the representation language to a
new database language (one may want to use the linguistic front-end
with a new \textsc{Dbms}\xspace that supports another database language), and to
check that existing mappings to database languages cover all the
possible expressions of the representation language. Also, without a
rigorously defined semantics of the representation language, it is
difficult to see the exact semantics that the linguistic front-end
assigns to natural language expressions, and it is impossible to prove
formally that the mapping from representation language to database
language preserves the semantics of the representation language
expressions. This pitfall was avoided in this thesis: both
the syntax and the semantics of \textsc{Top}\xspace are completely and formally
defined (chapter \ref{TOP_chapter}).
\subsection{Mapping from English to representation language}
\label{eval_Grammar}
In section \ref{previous_nlitdbs}, I noted that some previous \textsc{Nlitdb}\xspace
proposals (e.g.\ Bruce, Moens) provide very little or no information
on the mapping from English to meaning representation language.
(Again, this criticism does not apply to Clifford's work; his mapping
from English to \textsc{Il}$_s$\xspace is well-documented.) In this thesis, this pitfall
was avoided: I adopted \textsc{Hpsg}\xspace, a well-documented and currently
widely-used grammar theory, and I explained in detail (in chapter
\ref{English_to_TOP}) all the modifications that were introduced to
\textsc{Hpsg}\xspace, and how \textsc{Hpsg}\xspace is used to map from English to \textsc{Top}\xspace. I consider
the fact that this thesis adopts \textsc{Hpsg}\xspace to be an improvement over
Clifford's framework, which is based on Montague's ageing \textsc{Ptq}\xspace
grammar, and certainly a major improvement over other previous \textsc{Nlitdb}\xspace
proposals (e.g.\ Bruce, Spenceley, De et al., Moens) that employ ad
hoc grammars which are not built on any principled grammar theory.
\subsection{Mapping from representation language to database language}
As mentioned in section \ref{Clifford_prev}, Clifford outlines an
algorithm for translating from \textsc{Il}$_s$\xspace (his intermediate representation
language) to a version of relational algebra. This algorithm, however,
is described in a very sketchy manner, and there is no proof that the
algorithm is correct (i.e.\ that the generated algebraic expressions
preserve the semantics of the \textsc{Il}$_s$\xspace expressions). In contrast, the
\textsc{Top}\xspace to \textsc{Tsql2}\xspace mapping of this thesis is defined rigorously, and I
have proven formally that it generates appropriate \textsc{Tsql2}\xspace queries
(chapter \ref{tdb_chapter} and appendix \ref{trans_proofs}).
\subsection{Temporal database model and language}
Several previous proposals on \textsc{Nlitdb}\xspace{s} (e.g.\ De et al., Spenceley,
Moens) adopt temporal database models and languages that are
idiosyncratic (not based on established database models and languages)
and often not well-defined. Although Clifford's database model and
algebra are well-defined temporal versions of the traditional
relational database model and algebra, they constitute just one of
numerous similar proposals in temporal databases, and it is unlikely
that \textsc{Dbms}\xspace{s} supporting Clifford's model and algebra will ever
appear. This thesis adopted \textsc{Tsql2}\xspace and its underlying \textsc{Bcdm}\xspace model. As
already noted, \textsc{Tsql2}\xspace was designed by a committee comprising most
leading temporal database researchers, and hence it has much better
chances of being supported by forthcoming temporal \textsc{Dbms}\xspace{s}, or at
least of influencing the models and languages that these \textsc{Dbms}\xspace{s} will
support. As mentioned in section \ref{tdbs_general}, a prototype \textsc{Dbms}\xspace
for a version of \textsc{Tsql2}\xspace has already appeared. Although I had to
introduce some modifications to \textsc{Tsql2}\xspace and \textsc{Bcdm}\xspace (and hence the
database language and model of this thesis diverge from the
committee's proposal), these modifications are relatively few and
well-documented (chapter \ref{tdb_chapter}).
\subsection{Implementation}
As mentioned in section \ref{Clifford_prev}, although a parser for
Clifford's \textsc{Ptq}\xspace version has been implemented, there is no indication
that a translator from \textsc{Il}$_s$\xspace to his relational algebra was ever
constructed, or that his framework was ever used to build an actual
\textsc{Nlitdb}\xspace. (Similar comments apply to the work of Hafner and Mays of
section \ref{other_related_prev}.) In contrast, the framework of this
thesis was used to implement a prototype \textsc{Nlitdb}\xspace. Although several
modules need to be added to the prototype \textsc{Nlitdb}\xspace (section
\ref{modules_to_add}), the existence of this prototype constitutes an
improvement over Clifford's work. Unfortunately, the \textsc{Nlitdb}\xspace of this
thesis still suffers from the fact that it has never been linked to a
\textsc{Dbms}\xspace (section \ref{prototype_arch}). I hope that this will be
achieved in future (see section \ref{to_do} below).
\section{Summary}
In terms of syntactic coverage of temporal linguistic mechanisms, the
framework of this thesis is similar to Clifford's. The semantics that
Clifford assigns to these mechanisms, however, are much shallower than
those of this thesis. In both frameworks, there are several
time-related phenomena that remain to be covered. Unlike some of the
previous \textsc{Nlitdb}\xspace proposals, the intermediate representation language
of this thesis (\textsc{Top}\xspace) is defined rigorously, and the mapping from
English to \textsc{Top}\xspace is fully documented. Unlike Clifford's and other
previous proposals, this thesis adopts a temporal database model and
language (\textsc{Tsql2}\xspace) that were designed by a committee comprising most
leading temporal database researchers, and that are more likely to be
supported by (or at least to influence) forthcoming temporal \textsc{Dbms}\xspace{s}.
The mapping from \textsc{Top}\xspace to \textsc{Tsql2}\xspace is fully defined and formally proven correct.
In contrast, Clifford's corresponding mapping is specified in a
sketchy way, with no proof of its correctness. Also, unlike Clifford's
and other previous proposals, the framework of this thesis was used to
implement a prototype \textsc{Nlitdb}\xspace. The implementation of this thesis
still suffers from the fact that the prototype \textsc{Nlitdb}\xspace has not been
linked to a \textsc{Dbms}\xspace. I hope, however, that this will be achieved in
future.
\chapter{Conclusions} \label{conclusions_chapt}
\proverb{Times change and we with time.}
\section{Summary of this thesis}
This thesis has proposed a principled framework for constructing
natural language interfaces to temporal databases (\textsc{Nlitdb}\xspace{s}). This
framework consists of:
\begin{itemize}
\item a formal meaning representation language (\textsc{Top}\xspace), used to represent the
semantics of English questions involving time,
\item an \textsc{Hpsg}\xspace version that maps a wide range of English temporal
questions to appropriate \textsc{Top}\xspace expressions,
\item a set of translation rules that turn \textsc{Top}\xspace expressions into
suitable \textsc{Tsql2}\xspace queries.
\end{itemize}
The framework of this thesis is principled, in the sense that it is
clearly defined and based on current ideas from tense and aspect
theories, grammar theories, temporal logics, and temporal databases.
To demonstrate that it is also workable, it was employed to construct a
prototype \textsc{Nlitdb}\xspace, implemented using \textsc{Ale}\xspace and Prolog.
Although several issues remain to be addressed (these are discussed in
section \ref{to_do} below), the work of this thesis constitutes an
improvement over previous work on \textsc{Nlitdb}\xspace{s}, in that: (i) the
semantics of English temporal mechanisms are generally examined at a
more detailed level, (ii) the meaning representation language is
completely and formally defined, (iii) the mapping from English to
meaning representation language is well-documented and based on a
widely-used grammar theory, (iv) a temporal database language and
model that were designed by a committee comprising most leading
temporal database researchers are adopted, (v) the mapping from
meaning representation language to database language is clearly
defined and formally proven, (vi) it was demonstrated that the
theoretical framework of this thesis is implementable, by constructing
a prototype \textsc{Nlitdb}\xspace on which more elaborate systems can be based.
\section{Further work} \label{to_do}
There are several ways in which the work of this thesis could be
extended:
\paragraph{Extending the linguistic coverage:}
\label{wizard}
In section \ref{evaluation}, I noted that although the framework of
this thesis can handle an interesting set of temporal linguistic
mechanisms, there are still many time-related linguistic phenomena
that are not supported (see table \vref{coverage_table}). One could
explore how some of these phenomena could be handled. The temporal
anaphoric phenomena of section \ref{temporal_anaphora} are among those
that seem most interesting to investigate: several researchers have
examined temporal anaphoric phenomena, e.g.\ \cite{Partee1984},
\cite{Hinrichs1986}, \cite{Webber1988}, \cite{Eberle1989}, and it
would be interesting to explore the applicability of their proposals
to \textsc{Nlitdb}\xspace{s}. A Wizard of Oz experiment could also be carried out to
determine which temporal phenomena most urgently need to be added to
the linguistic coverage, and to collect sample questions that could be
used as a test suite for \textsc{Nlitdb}\xspace{s} \cite{King1996}. (In a Wizard of
Oz experiment, users interact through terminals with a person who
pretends to be a natural language front-end; see \cite{Diaper1986}.)
\paragraph{Cooperative responses:} In section \ref{no_issues}, I noted
that the framework of this thesis provides no mechanism for
cooperative responses. It became evident during the work of this
thesis that such a mechanism is particularly important in \textsc{Nlitdb}\xspace{s}
and should be added (cases where cooperative responses are needed were
encountered in sections \ref{simple_past}, \ref{progressives},
\ref{special_verbs}, \ref{period_adverbials}, \ref{while_clauses},
\ref{before_after_clauses}, \ref{at_before_after_op}, and
\ref{samples}). To use an example from section \ref{simple_past},
\pref{coop:2} is assigned a \textsc{Top}\xspace formula that requires BA737 to have
\emph{reached} gate 2 for the answer to be affirmative. This causes a
negative response to be generated if BA737 was taxiing to gate 2 but
never reached it. While a simple negative response is strictly
speaking correct, it is hardly satisfactory in this case. A more
cooperative response like \pref{coop:4} is needed.
\begin{examps}
\item Did BA737 taxi to gate 2? \label{coop:2}
\item \sys{BA737 was taxiing to gate 2 but never reached it.} \label{coop:4}
\end{examps}
In other cases, the use of certain English expressions reveals a
misunderstanding of how situations are modelled in the database and
the \textsc{Nlitdb}\xspace. In \pref{coop:3}, for example, the \qit{for~\dots}
adverbial shows that the user considers departures to have durations
(perhaps because he/she considers the boarding to be part of the departure;
see section \ref{point_criterion}). In the airport application,
however, departures are treated as instantaneous (they include only
the time-points where the flights leave the gates), and \qit{to taxi}
is classified as a point verb. The \qit{for~\dots} adverbial combines
with a point expression, which is not allowed in the framework of this
thesis (see table \vref{for_adverbials_table}). This causes
\pref{coop:3} to be rejected without any explanation to the user. It
would be better if a message like \pref{coop:3a} could be generated.
\begin{examps}
\item Which flight was departing for twenty minutes? \label{coop:3}
\item \sys{Departures of flights are modelled as instantaneous.}
\label{coop:3a}
\end{examps}
\paragraph{Paraphrases:} As explained in section \ref{prototype_arch},
a mechanism is needed to generate English paraphrases of possible
readings in cases where the \textsc{Nlitdb}\xspace considers a question to be ambiguous.
\paragraph{Optimising the TSQL2 queries:} As discussed in section
\ref{tsql2_opt}, there are ways in which the generated \textsc{Tsql2}\xspace queries
could be optimised before submitting them to the \textsc{Dbms}\xspace. One could
examine exactly how these optimisations would be carried out.
\paragraph{Additional modules in the prototype NLITDB:}
Section \ref{modules_to_add} identified several modules that would
have to be added to the prototype \textsc{Nlitdb}\xspace if this were to be used in
real-life applications: a preprocessor, modules to handle quantifier
scoping and anaphora resolution, an equivalential translator, and a
response generator. Adding a preprocessor and a simplistic response
generator (as described at the beginning of section
\ref{response_generator}) should be easy, though developing a response
generator that would produce cooperative responses is more complicated
(see the discussion above and section \ref{response_generator}). It
should also be possible to add an equivalential translator without introducing
major revisions in the work of this thesis. In contrast, adding
modules to handle quantifier scoping and anaphora requires extending
first the theoretical framework of this thesis: one has to modify
\textsc{Top}\xspace to represent universal quantification, unresolved quantifiers,
and unresolved anaphoric expressions (sections \ref{quantif_scoping}
and \ref{anaphora_module}), and to decide how to determine the scopes
or referents of unresolved quantifiers and anaphoric expressions.
\paragraph{Linking to a DBMS:} As explained in sections
\ref{tdbs_general} and \ref{contribution}, a prototype \textsc{Dbms}\xspace
(\textsc{TimeDb}) that supports a version of \textsc{Tsql2}\xspace was released
recently, but the prototype \textsc{Nlitdb}\xspace of this thesis has not been linked
to that system (or any other \textsc{Dbms}\xspace). Obviously, it would be
particularly interesting to connect the \textsc{Nlitdb}\xspace of this thesis to
\textsc{TimeDb}. This requires bridging the differences between the
versions of \textsc{Tsql2}\xspace that the two systems adopt (section \ref{contribution}).
\paragraph{Embedding ideas from this thesis into existing NLIDBs:}
Finally, one could explore if ideas from this thesis can be used in
existing natural language front-ends. In section
\ref{other_related_prev}, for example, I noted that \textsc{Cle}'s
formulae contain temporal operators whose semantics are undefined. One
could examine if \textsc{Top}\xspace operators (whose semantics are formally
defined) could be used instead. Ideas from the \textsc{Top}\xspace to \textsc{Tsql2}\xspace mapping
of chapter \ref{tdb_chapter} could then be employed to translate the
resulting \textsc{Cle} formulae into \textsc{Tsql2}\xspace.
\newpage
\nspace{1.0}
\addcontentsline{toc}{chapter}{Bibliography}
\subsection*{1. Introduction}
Supersymmetric theories (SUSY) \cite{R0,R9} are the best motivated
extensions of the Standard Model (SM) of the electroweak and strong
interactions. They provide an elegant way to stabilize the huge
hierarchy between the Grand Unification or Planck scale and the Fermi
scale, and its minimal
version, the Minimal Supersymmetric Standard Model (MSSM) allows for a
consistent unification of the gauge coupling constants and a natural
solution of the Dark Matter problem \cite{R1a}. \\ \vspace*{-3mm}
Supersymmetry predicts the existence of a left-- and right--handed scalar
partner to each SM quark. The current eigenstates,
$\tilde{q}_L$ and $\tilde{q}_R$, mix to give the mass eigenstates
$\tilde{q}_1$ and $\tilde{q}_2$; the mixing angle is proportional to the
quark mass and is therefore important only in the case of the third
generation squarks \cite{R1}. In particular, due to the large value of
the top mass $m_t$, the mixing between the left-- and right--handed
scalar partners of the top quark, $\tilde{t}_L$ and $\tilde{t}_R$, is
very large and after diagonalization of the mass matrix, the lightest
scalar top quark mass eigenstate $\tilde{t}_1$ can be much lighter than the top
quark and all the scalar partners of the light quarks \cite{R1}. \\ \vspace*{-3mm}
If the gluinos [the spin $1/2$ superpartners of the gluons] are heavy
enough, scalar quarks will mainly decay into quarks and charginos and/or
neutralinos [mixtures of the SUSY partners of the electroweak gauge
bosons and Higgs bosons]. These are in general tree--level two--body
decays, except in the case of the lightest top squark which could decay
into a charm quark and a neutralino through loop diagrams if the decay
into a chargino and a bottom quark is not overwhelming \cite{R2}. These
decays have been extensively discussed in the Born approximation
\cite{R3}. In this paper we will extend these analyses by including the
${\cal O}(\alpha_s)$ corrections, which, due to the relatively large
value of the strong coupling constant, might be large and might
significantly affect the decay rates and the branching ratios\footnote{If
the gluinos are lighter than squarks, then squarks will mainly decay
into quarks plus gluinos; the QCD corrections to these processes have
been recently discussed in Refs.\cite{R5,R5a}.}. \\ \vspace*{-3mm}
The particular case of the QCD corrections to scalar quark decays into
massless quarks and photinos has been discussed in Refs.~\cite{R4,R5}.
In the general case that we
will address here, there are three [related] features which complicate
the analysis, the common denominator of all these features being the
finite value of quark masses: (i) In the case of the decays of top and bottom
squarks, one needs to take into account the finite value of the top
quark mass in
the phase space as well as in the loop diagrams. (ii) Scalar quark mixing will
introduce a new parameter which will induce additional contributions;
since the mixing angle appears in the Born approximation, it needs to be
renormalized. (iii) The finite quark mass [which enters the coupling
between scalar quarks, quarks and the neutralino/chargino states] also
needs to be renormalized. \\ \vspace*{-3mm}
The QCD corrections to the reaction $\tilde{q} \rightarrow q \chi$ analyzed in
the present paper are very similar to the case of the reverse process,
$t \rightarrow \tilde{t} \chi^0$ and $t \rightarrow \tilde{b} \chi^+$ recently discussed
in Ref.~\cite{R6} (see also Ref.~\cite{R7}). During the preparation of
this paper, we received a report by Kraml et al. \cite{R8}, where a
similar analysis has been conducted. Our analytical results agree with
those given in this paper\footnote{We thank the Vienna group and in
particular S. Kraml for their cooperation
in resolving some discrepancies with some of the formulae and plots
given in the early version of the paper Ref.~\cite{R8}. We also thank T.
Plehn for checking independently the results.}. We extend their
numerical analysis, which focused on the decay of the lightest top
squark into the lightest charginos and neutralinos, by discussing the
decays into the heavier charginos and neutralinos and by studying the
case of bottom squarks and the SUSY partners of light squarks.
\subsection*{2. Born Approximation}
In the Minimal Supersymmetric Standard Model \cite{R0,R9}, there are two
charginos $\chi_i^+ [i=1,2$] and four neutralinos $\chi_{i}^0$
[$i=1$--4]. Their masses and their couplings to squarks and quarks are
given in terms of the Higgs--higgsino mass parameter $\mu$, the ratio
$\tan\beta$ of the vacuum expectation values of the two Higgs doublet fields of
the MSSM needed to break the electroweak symmetry, and the wino mass parameter
$M_2$. The bino and gluino masses are related to the parameter $M_2$
[$M_1 \sim M_2/2$ and $m_{\tilde{g}} \sim 3.5 M_2$] when the gaugino
masses and the three coupling constants of
SU(3)$\times$SU(2)$\times$U(1) are unified at the Grand Unification
scale. \\ \vspace*{-3mm}
The squark masses are given in terms of the parameters $\mu$ and $\tan\beta$,
as well as the left-- and right--handed scalar masses $M_{\tilde{q}_L}$
and $M_{\tilde{q}_R}$ [which in general are taken to be equal] and the
soft--SUSY breaking trilinear coupling $A_q$. The top and bottom squark mass
eigenstates, and their mixing angles, are determined by diagonalizing
the following mass matrices
\begin{equation}
{\cal M}^2_{\tilde{t}} =
\left(
\begin{array}{cc} M_{\tilde{t}_L}^2 + m_t^2 + \cos 2 \beta (\frac{1}{2}
- \frac{2}{3}s_W^2) \, M_Z^2 & m_t \, M^{LR}_t \\
m_t \, M^{LR}_t & M_{\tilde{t}_R}^2 + m_t^2
+ \frac{2}{3}\cos 2 \beta \; s_W^2 \, M_Z^2
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}^2_{\tilde{b}} =
\left(
\begin{array}{cc} M_{\tilde{t}_L}^2 + m_b^2 + \cos 2 \beta (-\frac{1}{2}
+\frac{1}{3}s_W^2) \, M_Z^2 & m_b \, M^{LR}_b \\
m_b \, M^{LR}_b & M_{\tilde{b}_R}^2 + m_b^2
- \frac{1}{3}\cos 2 \beta \; s_W^2 \, M_Z^2
\end{array}
\right)
\end{equation}
where $M^{LR}_{t,b}$ in the off--diagonal terms read: $M^{LR}_t = A_t - \mu
\, \cot \beta$ and $M^{LR}_b = A_b - \mu \tan\beta$. \\ \vspace*{-3mm}
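For later reference, diagonalizing either of these matrices [written
generically with diagonal entries $m_{LL}^2$ and $m_{RR}^2$ and
off--diagonal entry $m_q\, M_q^{LR}$] gives the standard expressions for
the mass eigenvalues and the mixing angle; note that the sign convention
for $\theta_q$ varies in the literature:
\begin{displaymath}
m_{\tilde{q}_{1,2}}^2 = \frac{1}{2} \left[ m_{LL}^2 + m_{RR}^2 \mp
\sqrt{ (m_{LL}^2-m_{RR}^2)^2 + 4\, m_q^2\, (M_q^{LR})^2 } \; \right]
\ , \qquad
\tan 2\theta_q = \frac{2\, m_q\, M_q^{LR}}{m_{LL}^2 - m_{RR}^2} \ .
\end{displaymath}
For nearly degenerate diagonal entries the mixing is close to maximal,
$\theta_q \simeq \pm \pi/4$. \\ \vspace*{-3mm}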
In the Born approximation, the partial widths for the decays
$\tilde{t}_i \rightarrow t\chi^0_j$, $\tilde{t}_i \rightarrow b\chi^+_j$ can be
written as $[q\equiv t$ or $b$, and we drop the indices of the
neutralino/chargino states]
\begin{eqnarray}
\Gamma_0( \tilde{t}_i \rightarrow q \chi) = \frac{\alpha}{4\,m_{\tilde{t}_i}^3}
\bigg[ ( {c_L^i}^2 + {c_R^i}^2 ) \,
( m_{\tilde{t}_i}^2 - m_{q}^2 - m_{\chi}^2 )
- 4 \, c_L^i\,c_R^i \, m_{q}\,m_{\chi}\,
\epsilon_\chi
\bigg] \,\lambda^{1/2}(m_{\tilde{t}_i}^2,m_{q}^2,m_{\chi}^2)
\end{eqnarray}
where $\lambda (x,y,z)=x^2+y^2+z^2-2\,(xy+xz+yz)$ is the usual two--body
phase space function and $\epsilon_\chi$ is the sign of the eigenvalue
of the neutralino $\chi$. The couplings $c_{L,R}^i$ for the neutral
current process, $\tilde{t}_i \rightarrow t \chi^0$, are given by
\begin{eqnarray}
\left\{ \begin{array}{c} c_R^1 \\ c_R^2 \end{array} \right\}
&=& b\,m_t\, \left\{ \begin{array}{c} \st{t} \\ \ct{t} \end{array}
\right\}
+ f_L\, \left\{ \begin{array}{c} \ct{t} \\ -\st{t} \end{array}
\right\} \nonumber \\
\left\{ \begin{array}{c} c_L^1 \\ c_L^2 \end{array} \right\}
&=& b\,m_t\, \left\{ \begin{array}{c} \ct{t} \\ -\st{t} \end{array}
\right\}
+ f_R\, \left\{ \begin{array}{c} \st{t} \\ \ct{t} \end{array} \right\}
\end{eqnarray}
\begin{eqnarray}
b & = & \frac{1}{\sqrt{2}\, M_W \sin\beta\,s_W} \; N_{j4} \nonumber \\
f_L & = & \sqrt{2}\left[ \frac{2}{3} \; N_{j1}'
+ \left(\frac{1}{2} - \frac{2}{3}\, s_W^2 \right)
\frac{1}{c_W s_W}\;N_{j2}' \right] \nonumber \\
f_R & = &-\sqrt{2}\left[ \frac{2}{3} \; N_{j1}'
- \frac{2}{3} \frac{s_W}{c_W} \; N_{j2}' \right] \ ,
\end{eqnarray}
and for the charged current process, $\tilde{t}_i \rightarrow b\chi^+$,
\begin{eqnarray}
\left\{ \begin{array}{c} c_L^1 \\ c_L^2 \end{array} \right\} & = &
\frac{m_b\,U_{j2}}{\sqrt{2}\,s_W\,M_W\,\cos\beta}
\left\{ \begin{array}{c} -\ct{t} \\ \st{t} \end{array} \right\}
\nonumber \\
\left\{ \begin{array}{c} c_R^1 \\ c_R^2 \end{array} \right\} & = &
\frac{V_{j1}}{s_W} \,
\left\{ \begin{array}{c} \ct{t} \\ -\st{t} \end{array} \right\}
- \frac{m_t\,V_{j2}}{\sqrt{2}\,s_W\,M_W\,\sin\beta}
\left\{ \begin{array}{c} \st{t} \\ \ct{t} \end{array} \right\} \ .
\end{eqnarray}
In these equations, $\theta_t$ is the $\tilde{t}$ mixing angle [which as
discussed previously can be expressed in terms of the Higgs--higgsino SUSY mass
parameter $\mu$, $\tan\beta$ and the soft--SUSY breaking trilinear coupling
$A_t$] with $s_{\theta}=\sin\theta$, $c_{\theta}=\cos\theta$ etc.;
$s_W^2=1-c_W^2\equiv \sin^2\theta_W$ and $N, U/V$ are the diagonalizing
matrices for the neutralino and chargino states \cite{R10} with
\begin{eqnarray}
N'_{j1}= c_W N_{j1} +s_W N_{j2} \ \ \ , \ \ \
N'_{j2}= -s_W N_{j1} +c_W N_{j2} \ .
\end{eqnarray}
An expression similar to eq.~(3) can be obtained for the neutral and charged
decays of bottom squarks, $\tilde{b}_i \rightarrow b \chi_j^0$ and $ \tilde{b}
\rightarrow t \chi_j^-$
\begin{eqnarray}
\Gamma_0( \tilde{b}_i \rightarrow q \chi) = \frac{\alpha}{4\,m_{\tilde{b}_i}^3}
\bigg[ ( {c_L^i}^2 + {c_R^i}^2 ) \,
( m_{\tilde{b}_i}^2 - m_{q}^2 - m_{\chi}^2 )
- 4 \, c_L^i\,c_R^i \, m_{q}\,m_{\chi}\,
\epsilon_\chi
\bigg] \,\lambda^{1/2}(m_{\tilde{b}_i}^2,m_{q}^2,m_{\chi}^2)
\end{eqnarray}
with the couplings $c_{L,R}^i$ in the neutral decay $\tilde{b} \rightarrow b
\chi^0$ given by [$\theta_b$ is the $\tilde{b}$ mixing angle]
\begin{eqnarray}
\left\{ \begin{array}{c} c_R^1 \\ c_R^2 \end{array} \right\}
&=& b\,m_b\, \left\{ \begin{array}{c} \st{b} \\ \ct{b} \end{array}
\right\}
+ f_L\, \left\{ \begin{array}{c} \ct{b} \\ -\st{b} \end{array}
\right\} \nonumber \\
\left\{ \begin{array}{c} c_L^1 \\ c_L^2 \end{array} \right\}
&=& b\,m_b\, \left\{ \begin{array}{c} \ct{b} \\ -\st{b} \end{array}
\right\}
+ f_R\, \left\{ \begin{array}{c} \st{b} \\ \ct{b} \end{array} \right\}
\end{eqnarray}
\begin{eqnarray}
b & = & \frac{1}{\sqrt{2}\, M_W \cos\beta\,s_W} \; N_{j3} \nonumber \\
f_L & = & \sqrt{2}\left[ -\frac{1}{3} \; N_{j1}'
+ \left(-\frac{1}{2} +\frac{1}{3}\, s_W^2 \right)
\frac{1}{c_W s_W}\;N_{j2}' \right] \nonumber \\
f_R & = &-\sqrt{2}\left[ -\frac{1}{3} \; N_{j1}'
+ \frac{1}{3} \frac{s_W}{c_W} \; N_{j2}' \right] \ ,
\end{eqnarray}
and for the charged current process, $\tilde{b}_i \rightarrow t\chi^-$,
\begin{eqnarray}
\left\{ \begin{array}{c} c_L^1 \\ c_L^2 \end{array} \right\} & = &
\frac{m_t\,V_{j2}}{\sqrt{2}\,s_W\,M_W\,\sin\beta}
\left\{ \begin{array}{c} -\ct{b} \\ \st{b} \end{array} \right\}
\nonumber \\
\left\{ \begin{array}{c} c_R^1 \\ c_R^2 \end{array} \right\} & = &
\frac{U_{j1}}{s_W} \,
\left\{ \begin{array}{c} \ct{b} \\ -\st{b} \end{array} \right\}
- \frac{m_b\,U_{j2}}{\sqrt{2}\,s_W\,M_W\,\cos\beta}
\left\{ \begin{array}{c} \st{b} \\ \ct{b} \end{array} \right\} \ .
\end{eqnarray}
In the case where the mass of the final quark and the squark mixing
angle are neglected [as is the case for the first and second
generation squarks], the decay widths simplify to
\begin{eqnarray}
\Gamma_0(\tilde{q}_i \rightarrow q \chi) = \frac{\alpha}{4}\,m_{\tilde{q}_i}\,
\left( 1- \frac{m_{\chi}^2}{m_{\tilde{q}_i}^2} \right)^2 f_i^2
\end{eqnarray}
where the $f_i$'s
[with now $i=L,R$ since there is no squark mixing] in the case
of the neutral decays, $\tilde{q} \rightarrow q \chi^0$, are given in terms of
the quark isospin $I_{3L}^q$ and charge $e_q$, by
\begin{eqnarray}
f_L & = & \sqrt{2}\left[ e_q \; N_{j1}'
+ \left(I_{3L}^q - e_q s_W^2 \right)
\frac{1}{c_W s_W}\;N_{j2}' \right] \nonumber \\
f_R & = &-\sqrt{2}\left[ e_q \; N_{j1}'
-e_q\, \frac{s_W}{c_W} \; N_{j2}' \right] \ ,
\end{eqnarray}
while for the charged decays, $\tilde{q} \rightarrow q' \chi^+$ one has
for up--type (down--type) squarks:
\begin{eqnarray}
f_L= V_{j1}/s_W \ (U_{j1}/s_W) \ \ , \ \ f_R=0 \ .
\end{eqnarray}
\subsection*{3. QCD corrections to Top Squark Decays}
The QCD corrections to the top squark decay width, eq.~(3), consist of virtual
corrections, Figs.~1a--d, and real corrections with an additional gluon
emitted off the initial $\tilde{t}$ or final $t$ [for the neutral decay] or
$b$ [for the charged decay] quark states, Fig.~1e. The ${\cal
O}(\alpha_s)$ virtual contributions can be split into gluon and gluino
exchange in the $q$--$\tilde{t}$--$\chi$ [$q=t,b$] vertex as well as
mixing diagrams and the $\tilde{t}$ and $t/b$ wave function
renormalization constants. The renormalization of the $q$--$\tilde{t}$--$\chi$
coupling is achieved by renormalizing the top/bottom quark masses and
the $\tilde{t}$ mixing angle. We will use the dimensional reduction
scheme\footnote{The quark mass and wave-function counterterms will
be different in the dimensional regularization \cite{R11a} and dimensional
reduction schemes \cite{R11}. Since dimensional reduction is
the scheme which preserves supersymmetry, we will present our results in
this scheme.} to regularize the ultraviolet divergencies, and
a fictitious gluon mass $\lambda$ is introduced to regularize the infrared
divergencies.
\subsubsection*{3.1 Virtual Corrections}
The QCD virtual corrections to the $\tilde{t}_i$--$\chi$--$q$ interaction
vertex can be cast into the form
\begin{eqnarray}
\delta \Gamma^i = ie \ \frac{\alpha_s}{3\pi} \, \sum_{j=g,
\tilde{g}, {\rm mix}, {\rm ct} } \left[ G_{j,L}^i P_L + G_{j,R}^i P_R \right]
\end{eqnarray}
where $G^i_{g}, \,G^i_{\tilde{g}}, \,G^i_{\rm mix}$ and $G^i_{\rm ct}$ denote
the gluon and gluino exchanges in the vertex, and the mixing and counterterm
contributions, respectively. \\ \vspace*{-3mm}
The contribution of the gluonic exchange [Fig.~1a] can be written as
\begin{eqnarray}
G^i_{g,L,R} = c_{L,R}^i \, F_1^i + c_{R,L}^i \,F_2^i
\end{eqnarray}
with the form factors $F^i_{1,2}$ given by
\begin{eqnarray}
F_1^i & = & B_0
+ 2 \, m_{q}^2 \, C_0 - 2 \, m_{\tilde{t}_i}^2 \, (C_{11}-C_{12})
+ 2 \, m_{\chi}^2 \, C_{11} \nonumber \\
F_2^i & = & -2 \, m_{q} \, m_{\chi} \, (C_0+C_{11})
\end{eqnarray}
with $q \equiv t$ for the neutral and $q \equiv b$ for the charged
decays; the two and three--point Passarino--Veltman functions, $B_0
\equiv B_0(m_{\tilde{t}_i}^2,\lambda,m_{\tilde{t}_i})$ and
$C_{..} \equiv C_{..}(m_{q}^2, m_{\tilde{t}_i}^2, m_{\chi}^2,$ $m_{q}^2,
\lambda^2,m_{\tilde{t}_i}^2)$ can be found in Ref.~\cite{R12}. \\ \vspace*{-3mm}
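For completeness, we recall that in a common convention the scalar
two--point function is defined as
\begin{displaymath}
B_0(p^2,m_0,m_1) = \frac{(2 \pi \mu)^{4-n}}{i \pi^2} \int {\rm d}^n q \;
\frac{1}{(q^2-m_0^2+i\epsilon)\,[(q+p)^2-m_1^2+i\epsilon]} \ ,
\end{displaymath}
with $\mu$ the 't~Hooft mass scale; the one--point function $A_0$ and the
three--point functions $C_{0}, C_{11}, C_{12}$ are defined analogously,
with the precise normalization and argument ordering as in
Ref.~\cite{R12}. \\ \vspace*{-3mm}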
The gluino exchange contributions [Fig.~1b] are given by
\begin{eqnarray}
G_{\tilde{g},L,R}^i & = & -2 \sum_{k=1,2} \,
d_{L,R}^k \bigg[
(v_{\tilde{q}}^k v_{\tilde{t}}^i+a_{\tilde{q}}^k a_{\tilde{t}}^i)
F_4^{ik}
\mp (a_{\tilde{q}}^k v_{\tilde{t}}^i+v_{\tilde{q}}^k a_{\tilde{t}}^i)
F_5^{ik}
\nonumber \\
& & \hspace{1.5cm}
+ (v_{\tilde{q}}^k v_{\tilde{t}}^i-a_{\tilde{q}}^k a_{\tilde{t}}^i)
F_6^{ik}
\mp (a_{\tilde{q}}^k v_{\tilde{t}}^i-v_{\tilde{q}}^k a_{\tilde{t}}^i)
F_7^{ik}
\bigg]
\nonumber \\
& & \hspace{1.5cm}
+ d_{R,L}^k \bigg[
(v_{\tilde{q}}^k v_{\tilde{t}}^i+a_{\tilde{q}}^k a_{\tilde{t}}^i)
F_1^{ik}
\mp (a_{\tilde{q}}^k v_{\tilde{t}}^i+v_{\tilde{q}}^k a_{\tilde{t}}^i)
F_1^{ik}
\nonumber \\
& & \hspace{1.5cm}
+ (v_{\tilde{q}}^k v_{\tilde{t}}^i-a_{\tilde{q}}^k a_{\tilde{t}}^i)
F_2^{ik}
\mp (a_{\tilde{q}}^k v_{\tilde{t}}^i-v_{\tilde{q}}^k a_{\tilde{t}}^i)
F_3^{ik}
\bigg]
\end{eqnarray}
with again $q=t$ for the neutral decay and $q=b$ for the charged one; the
form factors $F^{ik}_{1,..,7}$ read
\begin{eqnarray}
F_1^{ik} & = & m_{\tilde{g}}\,m_{\chi}\, [C_0+C_{12}] \nonumber \\
F_{2,3}^{ik} & = & m_{\chi}\,
[\pm m_{q}\, (C_0+C_{11})+m_t\,C_{12}] \nonumber \\
F_{4,5}^{ik} & = & m_{\tilde{g}}\, [m_t \,C_0
\pm m_{q}\,(C_{11}-C_{12})] \nonumber \\
F_{6,7}^{ik} & = & m_{\tilde{q}_k}^2\,C_0 \pm
m_t \,m_{q}\, [C_0+C_{11}-C_{12}]
+m_{q}^2\, [C_{11}-C_{12}] +m_{\chi}^2\,C_{12}+B_0
\end{eqnarray}
with the two-- and three--point functions
$B_0\equiv B_0(m_{\tilde{t}_i}^2,m_{\tilde{g}},m_t)$ and $C_{..} \equiv
C_{..} (m_{q}^2,m_{\tilde{t}_i}^2,m_{\chi}^2,m_{\tilde{q}}^2,$
$m_{\tilde{g}}^2, m_t^2)$. The couplings $d_{R,L}^k$ are given by
\begin{eqnarray}
d_{L,R}^k \hspace{0.3cm} = c_{R,L}^k \
\end{eqnarray}
for neutralinos, while for the charginos one has
\begin{eqnarray}
\left\{\begin{array}{c} d_L^1 \\ d_L^2 \end{array} \right\}
& = &
\frac{U_{j1}}{s_W}\,
\left\{\begin{array}{c} \ct{b} \\ -\st{b} \end{array} \right\}
-\frac{m_b\,U_{j2}}{\sqrt{2}\,s_W\,M_W\,\cos\beta}\,
\left\{\begin{array}{c} \st{b} \\ \ct{b} \end{array} \right\} \nonumber \\
\left\{\begin{array}{c} d_R^1 \\ d_R^2 \end{array} \right\}
& = &
\frac{m_t\,V_{j2}}{\sqrt{2}\,s_W\,M_W\,\sin\beta}\,
\left\{\begin{array}{c} -\ct{b} \\ \st{b} \end{array} \right\} \ .
\end{eqnarray}
The $v_{\tilde{q}}^i$ and $a_{\tilde{q}}^i$ couplings read
\begin{eqnarray}
v_{\tilde{q}}^1 & = & {\textstyle\frac{1}{2}}\,( \ct{q}-\st{q} ) \ ,
\hspace{1.cm}
v_{\tilde{q}}^2 \; = \; {\textstyle - \frac{1}{2}}\,( \ct{q}+\st{q} )
\ , \nonumber \\
a_{\tilde{q}}^1 & = & {\textstyle\frac{1}{2}}\,( \ct{q}+\st{q} )
\ , \hspace{1.cm}
a_{\tilde{q}}^2 \; = \; {\textstyle\frac{1}{2}}\,( \ct{q}-\st{q} ) \ .
\end{eqnarray}
\vspace*{3mm}
Finally, the mixing contributions due to the diagrams in Fig.~1c yield
the expressions
\begin{eqnarray}
G_{\rm mix,L,R}^i & = & \frac{(-1)^i\,(\delta_{1i}\,c_{L,R}^2
+ \delta_{2i}\,c_{L,R}^1)}
{m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2} \,
\bigg[ 4 m_t \,m_{\tilde{g}}\, c_{2 \theta_t}\,B_0(m_{\tilde{t}_i}^2,
m_t,m_{\tilde{g}}) \nonumber \\
& & \hspace{4.5cm} + \, c_{2 \theta_t} s_{2\theta_t} (
A_0(m_{\tilde{t}_2}^2)- A_0(m_{\tilde{t}_1}^2) ) \bigg] \ .
\end{eqnarray}
Therein, $A_0$ is the Passarino--Veltman one--point function.
Note that all these contributions are the same in both the dimensional
reduction and dimensional regularization schemes.
\subsubsection*{3.2 Counterterms}
The counterterm contributions in eq.~(15) are due to the $\tilde{t}$
and $t/b$
wave function renormalizations [Fig.~1d] as well as the renormalization of the
quark mass $m_t$ or $m_b$ and the mixing angle $\theta_t$, which appear in the
Born couplings. \\ \vspace*{-3mm}
For the neutral decay process, $\tilde{t}_i \rightarrow t\chi^0_j$, the
counterterm contribution is given by
\begin{eqnarray}
G^{1,2}_{\rm ct,L} & = & \frac{1}{2}\,c^{1,2}_L\,( \delta Z^t_R
+ \delta Z_{\tilde{t}_{1,2}})
+ b \, \{\ct{t},-\st{t}\} \, \delta m_t
- b\,m_t \, \{\st{t},\ct{t}\} \, \delta \theta_t
+ f_R \, \{\ct{t},-\st{t}\} \, \delta \theta_t \nonumber \\
G^{1,2}_{\rm ct,R} & = & \frac{1}{2}\,c^{1,2}_R\,( \delta Z^t_L
+ \delta Z_{\tilde{t}_{1,2}})
+ b \, \{\st{t},\ct{t}\} \, \delta m_t
+ b\,m_t \, \{\ct{t},-\st{t}\} \, \delta \theta_t
- f_L \, \{\st{t},\ct{t}\} \, \delta \theta_t
\ , \nonumber \\ && \end{eqnarray}
whereas for the charged current process, $\tilde{t}_i \rightarrow
b\chi^+_j$, one obtains,
\begin{eqnarray}
G^{1,2}_{\rm ct,L} & = & \frac{1}{2}\,c^{1,2}_L\, \left[ \delta Z^b_R
+ \delta Z_{\tilde{t}_{1,2}}
+ 2\,\frac{\delta m_b}{m_b} \right]
+ \frac{ m_b U_{j2}}{\sqrt{2}\,s_W\,M_W\,\cos\beta}
\,\{\st{t},\ct{t}\}\,\delta \theta_t \nonumber \\
G^{1,2}_{\rm ct,R} & = & \frac{1}{2}\,c^{1,2}_R\, \left[ \delta Z^b_L
+ \delta Z_{\tilde{t}_{1,2}} \right]
- \,\frac{ \delta m_t \, V_{j2}}{\sqrt{2}\,s_W\,M_W\,
\sin\beta}
\,\{\st{t},\ct{t}\} \nonumber \\ & & \vspace{0.5cm}
- \frac{V_{j1}}{s_W}\,\{\st{t},\ct{t}\}\,\delta \theta_t
- \frac{m_t V_{j2}}{\sqrt{2}\,s_W\,M_W\,\sin\beta}
\,\{\ct{t},-\st{t}\}\,\delta \theta_t \ .
\end{eqnarray}
In the on--shell scheme, the quark and squark masses are defined as
the poles of the propagators and the wave--function renormalization
constants follow from the residues at the poles; the corresponding
counterterms are given by (see also Refs.~\cite{R6,R8})
\begin{eqnarray}
\frac{\delta m_q}{m_q} & = & \frac{1}{2}
\bigg[ \Sigma^q_R(m_q^2)+\Sigma^q_L(m_q^2)\bigg]
+ \Sigma^q_S(m_q^2) \nonumber \\
\delta Z^q_L & = & - \Sigma^q_L(m_q^2)
- m_q^2 \bigg[ {\Sigma^q_L}^{\prime}(m_q^2)
+{\Sigma^q_R}^{\prime}(m_q^2)+2\,{\Sigma^q_S}^{\prime}(m_q^2)
\bigg] \nonumber \\
\delta Z^q_R & = & - \Sigma^q_R(m_q^2)
- m_q^2 \bigg[ {\Sigma^q_L}^{\prime}(m_q^2)
+{\Sigma^q_R}^{\prime}(m_q^2)+2\,{\Sigma^q_S}^{\prime}(m_q^2)
\bigg] \nonumber \\
\delta Z_{\tilde{t}_i} & = & - \left(\Sigma_{\tilde{t}}^{ii}
\right)'(m_{\tilde{t}_i}^2)
\end{eqnarray}
In the dimensional reduction scheme, the self--energies $\Sigma$ and their
derivatives $\Sigma'$, up to a factor $\alpha_s /3\pi$ which has been
factorized out, are given by \cite{R6,R8}
\begin{eqnarray}
\Sigma^q_L(k^2) & = & - \bigg[ 2 \,B_1(k^2,m_q,\lambda)
+ (1+c_{2 \theta_q}) B_1(k^2,m_{\tilde{g}},m_{\tilde{q}_1})
+ (1-c_{2 \theta_q}) B_1(k^2,m_{\tilde{g}},m_{\tilde{q}_2}) \bigg]
\nonumber \\
\Sigma^q_R(k^2) & = & - \bigg[ 2 \,B_1(k^2,m_q,\lambda)
+ (1-c_{2 \theta_q}) B_1(k^2,m_{\tilde{g}},m_{\tilde{q}_1})
+ (1+c_{2 \theta_q}) B_1(k^2,m_{\tilde{g}},m_{\tilde{q}_2}) \bigg]
\nonumber \\
\Sigma^q_S(k^2) & = & - \bigg[ 4 \,B_0(k^2,m_q,\lambda)
+ \frac{m_{\tilde{g}}}{m_q}\,s_{2 \theta_q}\,
( B_0(k^2,m_{\tilde{g}},m_{\tilde{q}_1})
- B_0(k^2,m_{\tilde{g}},m_{\tilde{q}_2}) ) \bigg]
\nonumber \\
(\Sigma_{\tilde{t}}^{ii})'(k^2) & = & - 2 \bigg[
- 2\,B_1(k^2,m_{\tilde{t}_i},\lambda)
- 2\,k^2 \,B_1'(k^2,m_{\tilde{t}_i},\lambda)
+ (m_t^2+m_{\tilde{g}}^2-k^2)\,B_0'(k^2,m_t,m_{\tilde{g}})
\nonumber \\ & & \hspace{0.8cm}
- \,B_0(k^2,m_t,m_{\tilde{g}}) + (-1)^i\,2\,s_{2 \theta}\,m_t\,m_{\tilde{g}}
B_0'(k^2,m_t,m_{\tilde{g}}) \bigg] \ .
\end{eqnarray}
Using dimensional regularization, the quark self--energies differ from the
previous expressions by a constant; in terms of their values in the
dimensional reduction scheme, they are given by
\begin{eqnarray}
\left. \Sigma_{L,R}^q \right|_{\rm dim.~reg.} = \Sigma_{L,R}^q -2
\ \ , \ \
\left. \Sigma_{S}^q \right|_{\rm dim.~reg.} = \Sigma_{S}^q + 2 \ .
\end{eqnarray}
Finally, we need a prescription to renormalize the $\tilde{t}$ mixing angle
$\theta_t$. Following Ref.~\cite{R13}, we choose this condition in such a way
that it cancels exactly the mixing contributions of eq.~(23) for the decay
$\tilde{t}_2 \rightarrow t \chi^0$
\begin{eqnarray}
\delta\theta_t & = & \frac{1} {m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2}
\left[4 \, m_t \,m_{\tilde{g}} \,c_{2 \theta_t} \,B_0(m_{\tilde{t}_2}^2,m_t,
m_{\tilde{g}}) + c_{2 \theta_t} s_{2\theta_t} (A_0(m_{\tilde{t}_2}^2)-
A_0(m_{\tilde{t}_1}^2) ) \right] \ .
\end{eqnarray}
Alternatively, since the lightest top squark $\tilde{t}_1$ can be lighter
than the top quark and is then more likely to be discovered first
in the top decay $t \rightarrow \tilde{t}_1 \chi^0$, one can choose the
renormalization condition such that
the mixing contributions are cancelled in the latter process; this
leads to a counterterm similar to eq.~(29) but with $B_0(m_{\tilde{t}_2}^2,m_t,
m_{\tilde{g}})$ replaced by $B_0(m_{\tilde{t}_1}^2,m_t, m_{\tilde{g}})$.
The difference between the two renormalization conditions,
\begin{eqnarray}
\Delta \delta\theta_t = \frac{4 m_t \,m_{\tilde{g}} \,c_{2 \theta_t}}
{m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2}
\left[ B_0(m_{\tilde{t}_1}^2,m_t,m_{\tilde{g}})
- B_0(m_{\tilde{t}_2}^2,m_t,m_{\tilde{g}}) \right]
\end{eqnarray}
is, however, very small numerically. Indeed, if $m_{\tilde{t}_1}$ is a few GeV
away from $m_{\tilde{t}_2}$, one has $\theta_t \simeq -\pi/4$ and therefore
$c_{2 \theta_t} \sim 0$, leading to a difference which is less than one
permille for the scenario of Figs.~2a/b. For degenerate top squarks, one
has $\Delta \delta \theta_t =4m_t m_{\tilde{g}} c_{2 \theta_t} B_0'
(m_{\tilde{t}_2}^2,m_t,m_{\tilde{g}})$ which is also very small numerically
[less than $\sim 1\% $ for the scenarios of Fig.~2.] \\ \vspace*{-3mm}
The complete virtual correction to the $\tilde{t}_i \rightarrow q \chi$
decay width is then given by
\begin{eqnarray}
\Gamma^V(\tilde{t}_i \rightarrow q \chi) & = &
\frac{\alpha}{6 \, m_{\tilde{t}_i}^3} \frac{\alpha_s}{\pi}
\; \mbox{Re} \; \bigg\{
(c_L^i \, G_L^i + c_R^i \, G_R^i)\,
( m_{\tilde{t}_i}^2 - m_{q}^2 - m_{\chi}^2 )
\nonumber \\
& & \hspace{1.6cm}
- \; 2 \; ( c_L^i \, G_R^i + c_R^i \, G_L^i )
\, m_{q} \, m_{\chi} \epsilon_\chi \,
\bigg\} \,
\lambda^{1/2}(m_{\tilde{t}_i}^2,m_{q}^2,m_{\chi}^2) \ .
\end{eqnarray}
The sum of all virtual contributions, including the counterterms, is
ultraviolet finite as it should be, but it is still infrared divergent;
the infrared divergencies are cancelled after adding the real corrections.
\subsubsection*{3.3 Real Corrections}
The contributions to the squark decay widths from the real corrections,
with an additional gluon emitted from the initial $\tilde{t}$ or final
$t/b$ states, can be cast into the form
\begin{eqnarray}
\Gamma_{\rm real}^i & = & \frac{2\,\alpha}{3 \, m_{\tilde{t}_i}}
\frac{\alpha_s}{\pi}
\bigg\{
8 \; c_L^i \, c_R^i \; m_{q} \, m_{\chi} \epsilon_\chi \,
\big[\; ( m_{\tilde{t}_i}^2 + m_{q}^2 - m_{\chi}^2) \, I_{01}
+ m_{\tilde{t}_i}^2 \, I_{00}
+ m_{q}^2 \, I_{11}
+ I_0 + I_1 \big]
\nonumber \\
& & \hspace{1.6cm} +\; ({c_L^i}^2+{c_R^i}^2) \,
\big[\; 2 \, ( m_{q}^2 + m_{\chi}^2 - m_{\tilde{t}_i}^2 )
\, ( m_{\tilde{t}_i}^2 \, I_{00} + m_{q}^2 \, I_{11}
+ I_0 + I_1 )
\nonumber \\
& & \hspace{4.1cm}
+ 2 \, ( m_{q}^4 - \; ( m_{\chi}^2 - m_{\tilde{t}_i}^2 )^2 ) \, I_{01}
- I
- I_1^0 \big]
\bigg\}
\end{eqnarray}
where the phase space integrals $ I(m_{\tilde{t}_i},m_{q},m_{\chi})
\equiv I $ are given by \cite{R14}
\begin{eqnarray}
I_{00} & = & \frac{1}{4\,m_{\tilde{t}_i}^4}\bigg[ \kappa \, \ln \bigg(
\frac{\kappa^2}{\lambda\,m_{\tilde{t}_i}\,m_{q}\,m_{\chi}}\bigg)
-\kappa-(m_{q}^2-m_{\chi}^2) \ln
\bigg(\frac{\beta_1}{\beta_2}\bigg)-m_{\tilde{t}_i}^2\,\ln (\beta_0)
\bigg] \nonumber \\
I_{11} & = & \frac{1}{4\,m_{q}^2\,m_{\tilde{t}_i}^2}\bigg[ \kappa \, \ln
\bigg(
\frac{\kappa^2}{\lambda\,m_{\tilde{t}_i}\,m_{q}\,m_{\chi}}\bigg)
-\kappa-(m_{\tilde{t}_i}^2-m_{\chi}^2)\ln
\bigg(\frac{\beta_0}{\beta_2}\bigg)-m_{q}^2\,\ln (\beta_1)
\bigg] \nonumber \\
I_{01} & = & \frac{1}{4\,m_{\tilde{t}_i}^2}
\bigg[ -2\,\ln \bigg(\frac{\lambda\,m_{\tilde{t}_i}\,
m_{q}\,m_{\chi}}{\kappa^2} \bigg)\,\ln (\beta_2)
+ 2\,\ln^2(\beta_2) - \ln^2(\beta_0) - \ln^2(\beta_1) \nonumber \\
& & + 2\,\mbox{Li}_2\,(1-\beta_2^2) - \mbox{Li}_2 \,(1-\beta_0^2)
- \mbox{Li}_2\,(1-\beta_1^2) \bigg] \nonumber \\
I & = & \frac{1}{4\,m_{\tilde{t}_i}^2} \bigg[ \frac{\kappa}{2}(m_{\tilde{t}_i}^2
+m_{q}^2+m_{\chi}^2)
+2\,m_{\tilde{t}_i}^2\,m_{q}^2\,\ln (\beta_2)
+2\,m_{\tilde{t}_i}^2\,m_{\chi}^2\,\ln (\beta_1)
+2\,m_{q}^2\,m_{\chi}^2\,\ln (\beta_0) \bigg] \nonumber \\
I_0 & = & \frac{1}{4\,m_{\tilde{t}_i}^2} \bigg[ -2\,m_{q}^2\,\ln (\beta_2)
-2\,m_{\chi}^2\,\ln (\beta_1)-\kappa \bigg] \nonumber \\
I_1 & = & \frac{1}{4\,m_{\tilde{t}_i}^2}\bigg[ -2\,m_{\tilde{t}_i}^2\,
\ln (\beta_2)
-2\,m_{\chi}^2\,\ln (\beta_0)-\kappa \bigg] \nonumber \\
I_1^0 & = & \frac{1}{4\,m_{\tilde{t}_i}^2}\bigg[ m_{\tilde{t}_i}^4 \,
\ln (\beta_2)
-m_{\chi}^2 \,(2\,m_{q}^2-2\,m_{\tilde{t}_i}^2
+m_{\chi}^2) \, \ln (\beta_0)
-\frac{\kappa}{4}\,(m_{q}^2-3\,m_{\tilde{t}_i}^2+5\,m_{\chi}^2)
\bigg] \ .
\end{eqnarray}
with $\kappa = \lambda^{1/2}(m_{\tilde{t}_i}^2,m_{q}^2,m_{\chi}^2)$ and
\begin{equation}
\beta_0 = \frac{m_{\tilde{t}_i}^2-m_{q}^2-m_{\chi}^2+\kappa}
{2\,m_{q}\,m_{\chi}},\;\;
\beta_1 = \frac{m_{\tilde{t}_i}^2-m_{q}^2+m_{\chi}^2-\kappa}
{2\,m_{\tilde{t}_i}\,m_{\chi}},\;\;
\beta_2 = \frac{m_{\tilde{t}_i}^2+m_{q}^2-m_{\chi}^2-\kappa}
{2\,m_{\tilde{t}_i}\,m_{q}} \ .
\end{equation}
\bigskip
\noindent Our analytical results agree with the results obtained recently in
Ref.~\cite{R8}.
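For orientation, the kinematics entering the real corrections can be checked
numerically. The short Python sketch below is an illustration only; the sample
masses (in GeV) are arbitrary choices and not values used in the text. It
evaluates $\kappa=\lambda^{1/2}(m_{\tilde{t}_i}^2,m_q^2,m_\chi^2)$ and the
variables $\beta_{0,1,2}$, and one finds numerically that
$\beta_0\,\beta_1\,\beta_2=1$, so that only two of the three are independent.
\begin{verbatim}
# Sketch (illustration only): kinematic inputs of the real corrections.
# The sample masses in GeV below are arbitrary choices.
from math import sqrt

def kallen(a, b, c):
    # Kallen function lambda(a, b, c) of squared masses a, b, c
    return a*a + b*b + c*c - 2.0*(a*b + b*c + c*a)

m_st, m_q, m_chi = 300.0, 5.0, 70.0
a, b, c = m_st**2, m_q**2, m_chi**2
kappa = sqrt(kallen(a, b, c))                  # kappa = lambda^{1/2}

beta0 = (a - b - c + kappa) / (2.0*m_q*m_chi)
beta1 = (a - b + c - kappa) / (2.0*m_st*m_chi)
beta2 = (a + b - c - kappa) / (2.0*m_st*m_q)

print(kappa, beta0, beta1, beta2)
print(beta0*beta1*beta2)          # -> 1.0 up to rounding
\end{verbatim}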
\subsection*{4. QCD corrections to other squark decays}
\subsubsection*{4.1 Bottom Squark Decays}
In the case of the bottom squark decays, $\tilde{b}_i \rightarrow b \chi^0$ and
$\tilde{b}_i \rightarrow t \chi^-$, the analytical expressions of the QCD
corrections are just the same as in the previous section once the proper
changes of the squark masses [$m_{\tilde{t}_i} \rightarrow m_{\tilde{b}_i}$], the quark
masses $[q\equiv b$ and $q\equiv t$ for the neutral and charged decays,
respectively] and the mixing angle $[\theta_t \rightarrow \theta_b$] are performed. The couplings
for $\tilde{b}$ decays are as given in section 2: for the $d^k_{L,R}$
couplings, one has in the case of the neutral decay
$\tilde{b}_i \rightarrow b \chi^0$
\begin{eqnarray}
d_{L,R}^k \hspace{0.3cm} = c_{R,L}^k \ ,
\end{eqnarray}
with $c_{L,R}^k$ of eq.~(11), while in the charged decay $\tilde{b}_i
\rightarrow t \chi^-$, they read
\begin{eqnarray}
\left\{\begin{array}{c} d_L^1 \\ d_L^2 \end{array} \right\}
& = &
\frac{V_{j1}}{s_W}\,
\left\{\begin{array}{c} \ct{t} \\ -\st{t} \end{array} \right\}
-\frac{m_t\,V_{j2}}{\sqrt{2}\,s_W\,M_W\,\sin\beta}\,
\left\{\begin{array}{c} \st{t} \\ \ct{t} \end{array} \right\} \nonumber \\
\left\{\begin{array}{c} d_R^1 \\ d_R^2 \end{array} \right\}
& = &
\frac{m_b\,U_{j2}}{\sqrt{2}\,s_W\,M_W\,\cos\beta}\,
\left\{\begin{array}{c} -\ct{t} \\ \st{t} \end{array} \right\} \ .
\end{eqnarray}
The counterterm contributions are the same as in eq.~(24) with the
change $(t, \tilde{t}) \rightarrow (b, \tilde{b})$ in the neutral
decay; in the charged decay mode they are different due to different
couplings (see also Refs.~\cite{R6,R8}):
\begin{eqnarray}
G^{1,2}_{\rm ct,L} & = & \frac{1}{2}\,c^{1,2}_L\, \left[ \delta Z^t_R
+ \delta Z_{\tilde{b}_{1,2}}
+ 2\,\frac{\delta m_t}{m_t} \right]
+ \frac{ m_t V_{j2}}{\sqrt{2}\,s_W\,M_W\,\sin\beta}
\,\{\st{b},\ct{b}\}\,\delta \theta_b \nonumber \\
G^{1,2}_{\rm ct,R} & = & \frac{1}{2}\,c^{1,2}_R\, \left[ \delta Z^t_L
+ \delta Z_{\tilde{b}_{1,2}} \right]
- \,\frac{ \delta m_b \, U_{j2}}{\sqrt{2}\,s_W\,M_W\,
\cos\beta}
\,\{\st{b},\ct{b}\} \nonumber \\ & & \vspace{0.5cm}
- \frac{U_{j1}}{s_W}\,\{\st{b},\ct{b}\}\,\delta \theta_b
- \frac{m_b U_{j2}}{\sqrt{2}\,s_W\,M_W\,\cos\beta}
\,\{\ct{b},-\st{b}\}\,\delta \theta_b \ .
\end{eqnarray}
where again the $c_{L,R}^k$ are given by eq.~(11). Except for very large
values of $\tan\beta$, the $\tilde{b}$ mixing angle [as well as the bottom quark
mass] can be set to zero and the analytical expressions simplify
considerably\footnote{In the absence of mixing, the left-- and
right--handed bottom squarks are, to a very good approximation,
degenerate if $M_{\tilde{q}_L} = M_{\tilde{q}_R}$. In the rest of the
discussion, ${\tilde{b}_L}$ and ${\tilde{b}_R}$ [and {\it a fortiori}
the partners of the light quarks ${\tilde{q}_L}$ and ${\tilde{q}_R}$]
will be considered as degenerate.}.
The case of the neutral decay $\tilde{b} \rightarrow b \chi^0$ is
even simpler since one can also neglect the mass of the final $b$ quark.
In fact, the latter situation corresponds to the case of decays of first
and second generation squarks into light quarks and charginos/neutralinos,
which will be discussed now.
\subsubsection*{4.2 Light Quark Partners Decays}
Neglecting the squark mixing angle as well as the mass of the final
quarks, the QCD corrections to the processes $\tilde{q}_i \rightarrow q
\chi$ [where the subscript $i$ stands now for the chirality of the
squark, since in the absence of squark mixing one has $\tilde{q}_{L,R}
=\tilde{q}_{1,2}$] are given by the sum of the gluon and gluino
exchange vertices and the wave--function counterterm, plus the real
correction. The total width can then be written as
\begin{eqnarray}
\Gamma^i = \Gamma^i_0
\bigg[ 1 \,+ \, \frac{4}{3} \frac{\alpha_s}{\pi} \,
\left( F_{\rm g}+ F_{\rm \tilde{g}}+ F_{\rm ct} + F_{\rm r} \right) \bigg]
\end{eqnarray}
where the decay width in the Born approximation $\Gamma^i_0$ has been given
in eq.~(12).
In terms of the ratio $\kappa= m_{\chi}^2/m_{\tilde{q}}^2$, the gluon
exchange corrections are given by [$\Delta =1/(4-n)$ with $n$ the
space-time dimension, and $\mu$ is the renormalization scale]
\begin{eqnarray}
F_{\rm g} &=& \frac{\Delta}{2} + 1 - \frac{1}{2} \ln \frac{ m_ {\tilde{q}}^2 }
{\mu^2} -\frac{1}{4} \ln^2 \frac{ \lambda^2/ m_ {\tilde{q}}^2 } {(1-\kappa)^2 }
- \ln \frac{ \lambda^2/ m_ {\tilde{q}}^2 } {1-\kappa}
-{\rm Li_{2}}(\kappa) \ .
\end{eqnarray}
The gluino exchange contribution, with $\gamma= m_{\tilde{g}}^2
/m_{\tilde{q}}^2$, is given by
\begin{eqnarray}
F_{\rm \tilde{g}} = \sqrt{ \kappa \gamma} \left[ \frac{1}{ \kappa} \ln
(1-\kappa)+ \frac{1}{1-\kappa} \left[ \gamma \ln \gamma -(\gamma-1)
\ln (\gamma-1) \right] + \frac{ \kappa +\gamma -2}{(1-\kappa)^2} \, I \,
\right]
\end{eqnarray}
with
\begin{eqnarray}
I & \equiv & \frac{1}{m_{\tilde{q}_i}^2\,(1-\kappa)}\,
C_0 (0,m_{\tilde{q}}^2, m_{\chi}^2,m_{\tilde{q}}^2,m_{\tilde{g}}^2, 0) \ .
\end{eqnarray}
In terms of dilogarithms, the function $I$ is given for
$\kappa \gamma <1$ by
\begin{eqnarray}
I= {\rm Li_{2}} \left( \frac{\gamma-1}{\gamma \kappa-1} \right)
- {\rm Li_{2}} \left( \kappa \frac{\gamma-1}{\gamma \kappa-1} \right)
- {\rm Li_{2}} \left( \frac{\gamma+\kappa-2}{\gamma \kappa-1} \right)
+ {\rm Li_{2}} \left( \kappa \frac{\gamma+\kappa-2}{\gamma \kappa-1} \right)
\end{eqnarray}
and for $\kappa \gamma > 1$ one has
\begin{eqnarray}
I & = &-{\rm Li_{2}}\left( \frac{\gamma \kappa-1}{\gamma-1} \right)
+{\rm Li_{2}}\left( \frac{\gamma \kappa-1}{\gamma+\kappa-2} \right)
+{\rm Li_{2}}\left( \frac{\gamma \kappa-1}{\kappa(\gamma-1)} \right)
-{\rm Li_{2}}\left( \frac{\gamma \kappa-1}{\kappa(\gamma+\kappa-2)}
\right) \nonumber \\
& & -\ln (\kappa)\,\ln \frac{\gamma+\kappa-2}{\gamma-1} \ .
\end{eqnarray}
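The function $I$ is thus given by two different dilogarithmic representations
according to the sign of $\kappa\gamma-1$. As a quick consistency check (an
illustration only, with $\gamma$ chosen arbitrarily), the sketch below
evaluates both branches just below and just above $\kappa\gamma=1$ and
verifies that they match.
\begin{verbatim}
# Sketch: the two branches of I(kappa, gamma) and a continuity check
# across kappa*gamma = 1.  The dilogarithm is evaluated by quadrature.
from math import log
from scipy.integrate import quad

def li2(x):
    # Li_2(x) = -int_0^x ln(1-t)/t dt  (real dilogarithm, x <= 1)
    if x == 0.0:
        return 0.0
    val, _ = quad(lambda t: -log(1.0 - t) / t, 0.0, x, limit=200)
    return val

def I_func(kappa, gamma):
    a, b = gamma - 1.0, gamma + kappa - 2.0
    d = gamma * kappa - 1.0
    if kappa * gamma < 1.0:
        return (li2(a / d) - li2(kappa * a / d)
                - li2(b / d) + li2(kappa * b / d))
    return (-li2(d / a) + li2(d / b) + li2(d / (kappa * a))
            - li2(d / (kappa * b)) - log(kappa) * log(b / a))

gamma = 2.0
print(I_func(0.499, gamma), I_func(0.501, gamma))  # nearly equal values
\end{verbatim}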
The counterterm contribution, consisting of the sum of the squark and
quark wave--function renormalization constants, reads
\begin{eqnarray}
F_{\rm ct} &=& - \frac{\Delta}{2} + \frac{\gamma}{4\,(1-\gamma)} -
\frac{\gamma}{2}
- \frac{15}{8}
+ \frac{1}{2} \ln \frac{ m_ {\tilde{q}}^2}{\mu^2}
- \frac{1}{4} \ln \frac{ \lambda^2}{ m_ {\tilde{q}}^2 } \nonumber \\ & &
- \frac{1}{2}(\gamma^2-1) \ln \frac{\gamma -1}{\gamma}
+ \frac{1}{4}\left[ \frac{2\,\gamma-1}{(1-\gamma)^2}+3 \right] \ln \gamma
\ . \end{eqnarray}
Finally, the real corrections with massless quarks in the final
state contribute
\begin{eqnarray}
F_{\rm r} &=&
\frac{1}{4} \ln^2 \frac{ \lambda^2/ m_{\tilde{q}}^2 }{(1-\kappa)^2}
+ \frac{5}{4} \ln \frac{ \lambda^2/ m_{\tilde{q}}^2 }{(1-\kappa)^2}
- \frac{\kappa\,(4-3 \kappa)}{4\,(1-\kappa)^2}\ln \kappa
\nonumber \\ & &
- {\rm Li_{2}}(\kappa) -\ln \kappa \ln (1-\kappa)
- \frac{3 \kappa-5}{8\,(\kappa-1)} - \frac{\pi^2}{3} + 4 \ .
\end{eqnarray}
We see explicitly that the ultraviolet divergences $\Delta/2$ and the
scale $\mu$ cancel
when $F_{\rm g}$ and $F_{\rm ct}$ are added, and that the infrared
divergences $\ln^2(\lambda^2/ m_{\tilde{q}}^2)$ and $\ln (\lambda^2/
m_{\tilde{q}}^2)$ disappear when $F_{\rm g}$, $F_{\rm ct}$ and $F_{\rm
r}$ are summed. The gluino exchange contribution eq.~(40) does not contain
any ultraviolet or infrared divergences. The total correction in
eq.~(38) then reads
\begin{eqnarray}
F_{\rm tot} & = & F_{\rm g}+ F_{\rm \tilde{g}}+ F_{\rm ct} + F_{\rm r} \nonumber \\
& = & - \frac{1}{8}\left( \frac{4\,\gamma^2-27\,\gamma+25}{\gamma-1}
+ \frac{3\,\kappa-5}{\kappa-1} \right)
- \frac{\pi^2}{3} - 2\, {\rm Li_2}(\kappa)
- \frac{1}{2}\,(\gamma^2-1)\,\ln \frac{\gamma-1}{\gamma} \nonumber \\
& &
+ \frac{3\,\gamma^2-4\,\gamma+2}{4\,(1-\gamma)^2}\,\ln \gamma
- \frac{3}{2}\,\ln (1- \kappa)
+ \frac{3\,\kappa^2-4\,\kappa}{4\,(\kappa-1)^2}\,\ln \kappa
- \ln \kappa\,\ln (1-\kappa) \nonumber \\
& & +\sqrt{ \kappa \gamma} \left[ \frac{1}{ \kappa} \ln
(1-\kappa)+ \frac{1}{1-\kappa} \left[ \gamma \ln \gamma -(\gamma-1)
\ln (\gamma-1) \right] + \frac{ \kappa +\gamma -2}{(1-\kappa)^2} \, I \,
\right] .
\end{eqnarray}
\smallskip
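The ultraviolet and infrared cancellations noted above can also be checked
numerically. The sketch below is illustrative only: $\kappa$ and $\gamma$ are
arbitrary sample values, all masses are expressed in units of $m_{\tilde q}$,
and the pole terms $\pm\Delta/2$, which cancel between $F_{\rm g}$ and
$F_{\rm ct}$, are dropped. The sum $F_{\rm g}+F_{\rm ct}+F_{\rm r}$ is
evaluated for two very different values of the gluon-mass regulator
$\lambda$ and of the scale $\mu$, and the same number is obtained.
\begin{verbatim}
# Sketch: regulator independence of F_g + F_ct + F_r (poles dropped).
from math import log, pi
from scipy.integrate import quad

def li2(x):
    if x == 0.0:
        return 0.0
    val, _ = quad(lambda t: -log(1.0 - t) / t, 0.0, x, limit=200)
    return val

def F_g(kap, lam2, mu2):
    # gluon exchange contribution (Delta/2 omitted); lam2 = lambda^2/m_sq^2
    return (1.0 - 0.5*log(1.0/mu2)
            - 0.25*log(lam2/(1.0-kap)**2)**2
            - log(lam2/(1.0-kap)) - li2(kap))

def F_ct(kap, gam, lam2, mu2):
    # counterterm contribution (-Delta/2 omitted)
    return (gam/(4.0*(1.0-gam)) - gam/2.0 - 15.0/8.0
            + 0.5*log(1.0/mu2) - 0.25*log(lam2)
            - 0.5*(gam**2 - 1.0)*log((gam-1.0)/gam)
            + 0.25*((2.0*gam-1.0)/(1.0-gam)**2 + 3.0)*log(gam))

def F_r(kap, lam2):
    # real (gluon emission) contribution with massless final quark
    return (0.25*log(lam2/(1.0-kap)**2)**2
            + 1.25*log(lam2/(1.0-kap)**2)
            - kap*(4.0-3.0*kap)/(4.0*(1.0-kap)**2)*log(kap)
            - li2(kap) - log(kap)*log(1.0-kap)
            - (3.0*kap-5.0)/(8.0*(kap-1.0)) - pi**2/3.0 + 4.0)

kap, gam = 0.4, 1.5
for lam2, mu2 in [(1e-6, 1.0), (1e-12, 100.0)]:
    print(F_g(kap, lam2, mu2) + F_ct(kap, gam, lam2, mu2) + F_r(kap, lam2))
# both numbers agree (up to rounding): the lambda and mu dependence cancels
\end{verbatim}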
In the limit where the mass of the final neutralino or chargino is much
smaller than the mass of the initial squark, the analytical expression
of the QCD correction further simplifies:
\begin{eqnarray}
F_{\rm tot}= \frac{3 \gamma^2-4\gamma+2}{4(\gamma-1)^2} \ln \gamma
- \frac{1}{2}
(\gamma^2-1) \ln \frac{\gamma-1}{\gamma} -\frac{2 \gamma^2-11 \gamma
+10}{4(\gamma-1)} -\frac{\pi^2}{3} \ .
\end{eqnarray}
Note the explicit logarithmic dependence on the gluino mass in the
correction. This logarithmic behaviour, leading
to a non-decoupling of the gluinos for very large masses,
\begin{eqnarray}
F_{\rm tot} = \frac{3}{4} \ln \frac{ m_{\tilde{g}}^2} {m_{\tilde{q}}^2}
+\frac{5}{2} -\frac{\pi^2}{3} \ \ \ {\rm for} \ \
m_{\tilde{g}} \gg m_{\tilde{q}}
\end{eqnarray}
is due to the wave function renormalization and is a consequence of the
breakdown of SUSY as discussed in Ref.~\cite{R4}. Had we chosen
the $\overline{\rm MS}$ scheme when renormalizing the squark/quark wave
functions [i.e. subtracting only the poles and the related constants in
the expression eq.~(23)] we would have been left with contributions which
increase linearly with the gluino mass. \\ \vspace*{-3mm}
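As a small numerical illustration (not part of the original analysis), the
sketch below evaluates the massless-neutralino expression for $F_{\rm tot}$
at increasing values of $\gamma=m_{\tilde g}^2/m_{\tilde q}^2$ and compares
it with the asymptotic form $\frac{3}{4}\ln\gamma+\frac{5}{2}-\frac{\pi^2}{3}$,
exhibiting the non-decoupling logarithm.
\begin{verbatim}
# Sketch: massless-neutralino F_tot versus its large-gluino asymptotics.
from math import log, pi

def F_tot_massless_chi(g):
    return ((3.0*g*g - 4.0*g + 2.0)/(4.0*(g - 1.0)**2)*log(g)
            - 0.5*(g*g - 1.0)*log((g - 1.0)/g)
            - (2.0*g*g - 11.0*g + 10.0)/(4.0*(g - 1.0))
            - pi**2/3.0)

def F_tot_asymptotic(g):
    return 0.75*log(g) + 2.5 - pi**2/3.0

for g in (10.0, 100.0, 1000.0):
    print(g, F_tot_massless_chi(g), F_tot_asymptotic(g))
# the two columns converge as gamma grows (logarithmic non-decoupling)
\end{verbatim}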
Our analytical results in the case of massless final quarks agree with the
corresponding results obtained in Refs.\cite{R4,R5}, where the QCD
corrections to the decay of a squark into a massless quark and a photino
have been derived, after correcting the sign of $F_{\tilde{g}}$ in
Ref.~\cite{R4}; see also the discussion given in Ref.~\cite{R5}.
\subsection*{5. Numerical Analysis and Discussion}
In the numerical analysis of the QCD corrections to squark decays, we
will choose $m_t=180$ GeV (consistent with \cite{R17}) and $m_b=5$ GeV
for the top and bottom quark masses and a constant value for the strong
coupling constant $\alpha_s =0.12$ [the value of $\alpha_s$ does not change
significantly when running from a scale of 0.1 TeV to 1 TeV]; the
other fixed input parameters are $\alpha=1/128$, $M_Z=91.187$ GeV and
$s_W^2=0.23$ \cite{R18}. For the SUSY parameters, we will take into
account the experimental bounds from the Tevatron and LEP1.5 data
\cite{R16}, and in some cases use the values favored by fits
of the electroweak precision data from LEP1 \cite{R15}. \\ \vspace*{-3mm}
Fig.~2 shows the partial widths for the decays of the lightest top
squark into the two charginos $\chi_{1,2}^+$ and a bottom quark [2a]
and into the lightest neutralino $\chi_1^0$ and the sum of all neutralinos
[the opening of the neutralino thresholds can be seen in the curves] and a
top quark [2b]. In these figures, $\tan\beta$ is fixed to $\tan\beta=1.6$, a value
favored by $b$--$\tau$ Yukawa coupling unification \cite{R19}. The
solid, dashed and dot--dashed curves correspond to the $(M_2, \mu$)
values [in GeV]: $(70, -500), (70, -70)$ and $(300,-70)$ in Fig.~2a
[which give approximately the same value for the lightest chargino mass,
$m_{\chi_1^+} \simeq 70$ GeV] and $(100, -500), (100, -100)$ and
$(250,-50)$ in Fig.~2b [giving an LSP mass of $m_{\chi_1^0} \sim 50$
GeV]. These values correspond to the scenarios $M_2 \ll |\mu|$, $M_2
\simeq \mu$ and $M_2 \gg |\mu|$, and have been chosen to allow for a
comparison with the numerical analysis given in \cite{R8}. The
parameters in the $\tilde{t}$ mass matrix are fixed by requiring
$m_{\tilde{t}_2} =600$ GeV and varying $M_{\tilde{t}_L}$. The mixing
angle is then completely fixed assuming $M_{\tilde{t}_R}=
M_{\tilde{t}_L}$ ($\theta_{\tilde{t}}\approx -\pi/4$ except
for $m_{\tilde{t}_1}$ very close to $m_{\tilde{t}_2}$);
in the bottom squark sector we have
$m_{\tilde{b}_1}= 220$ GeV, $m_{\tilde{b}_2} \sim 230$ GeV and
$\theta_{\tilde{b}} \simeq 0$.
\\ \vspace*{-3mm}
Fig.~3 shows the magnitude of the QCD corrections relative to the Born
width to the decays of the lightest top squark into charginos+bottom
[3a/b] and neutralinos+top [3c/d] for the scenarios described in
Fig.~2a [for Figs.3a/b] and Fig.~2b [for Figs.3c/d]. For both the
neutral and charged decays,
the QCD corrections can be rather large and vary in a wide
margin: from $\sim \pm 10\%$ for light top squarks up to $\sim -40\%$
for $m_{\tilde{t}_1} \sim m_{\tilde{t}_2}$ and some $(M_2, \mu)$ values.
\\ \vspace*{-3mm}
The small spikes near $m_{\tilde{t}_1} \sim 425$ (530) GeV for $\chi^+ b$
($\chi^0 t$) decays are due to thresholds in the top squark wave function
renormalization constants from the channel $\tilde{t}_1 \rightarrow \tilde{g} t$.
For the depicted $m_{\tilde{t}_1}$ range, this happens only for the
value $M_2 =70$ (100) GeV which leads to $m_{\tilde{g}} \simeq 3.5 M_2
\sim 245 (350)$ GeV. Note, however, that when this occurs, the channel
$\tilde{t}_1 \rightarrow \tilde{g} t$ becomes by far the main decay mode,
and the chargino/neutralino modes are very rare. \\ \vspace*{-3mm}
In Fig.~4 the variation of the QCD corrections for the decay
$\tilde{t}_1 \rightarrow b \chi_1^+$ [4a] and $\tilde{t}_1 \rightarrow t \chi_1^0$ [4b]
is displayed as a function of the gluino mass, for two values of $\mu=-50$
and $-500$ GeV and $\tan\beta=1.6$ and $20$. The top squark masses are fixed to
$m_{\tilde{t}_1} =300$ and $m_{\tilde{t}_2}=600$ GeV
($\theta_{\tilde{t}}= -\pi/4$) and the $\tilde{b}$
masses are as in Fig.~2. $M_2$ and hence the chargino and neutralino
masses are fixed by $m_{\tilde{g}}$. The figure exhibits a slight
dependence of the QCD correction on the gluino mass. For the chosen
set of squark mass parameters, the variation of the QCD correction
with $\mu$ is rather pronounced, while the variation with $\tan\beta$
is milder. \\ \vspace*{-3mm}
Fig.~5 shows the partial decay widths for the decays of the
lightest bottom squark [which in our convention is denoted by
$\tilde{b}_1$ and is almost left--handed] into the lightest chargino
$\chi_{1}^-$ and a top quark [5a] and into the lightest neutralino
$\chi_1^0$ and a bottom quark [5b]. As in Fig.~2, $\tan\beta$ is fixed to
$\tan\beta=1.6$ and $m_{\tilde{t}_1}=600$ GeV; the mass difference between the
two squarks is $\simeq 10$ GeV and we have for the mixing angle
$\theta_{\tilde{b}} \simeq 0$. The solid, dashed and dot--dashed curves
correspond to the $(M_2, \mu$) values [in GeV]: $(60, -500), (70, -60)$
and $(300,-60)$ in Fig.~5a and $(100, -500), (100, -100)$ and
$(250,-50)$ in
Fig.~5b. The decay $\tilde{b}_1 \rightarrow t \chi_1^-$ is by far dominant when
the channel $\tilde{b}_1 \rightarrow \tilde{g}b$ is closed, since its decay
width is almost two orders of magnitude larger than the $\tilde{b}_1
\rightarrow$ LSP+bottom decay width. \\ \vspace*{-3mm}
Fig.~6 presents the magnitude of the relative QCD corrections
to the decays $\tilde{b}_1
\rightarrow t \chi_1^-$ [6a] and $\tilde{b}_1 \rightarrow b \chi_1^0$ [6b] as a function
of the bottom squark mass, for the same scenarios as in Fig.~5. Again, depending
on the values of $\mu, M_2$ and $m_{\tilde{b}_1}$, the QCD corrections
vary from ($\pm$) a few percent up to $-50\%$. \\ \vspace*{-3mm}
Finally, Fig.~7 displays the QCD corrections to the decays of the SUSY
partners of massless quarks into neutralinos, $\tilde{q} \rightarrow q \chi_0$,
as a function of the ratio $\kappa=m_{\chi}^2/ m_{\tilde{q}}^2$ for
several values of the ratio $\gamma=m_{\tilde{g}}^2 /m_{\tilde{q}}^2,
\gamma=1.2, 1.5$ and 2 [7a] and as a function of $\gamma$ for several
values of $\kappa, \kappa=0.2, 0.5$ and $0.8$ [7b]. The quark mass and
the squark mixing angle are set to zero and all squarks are taken to
be degenerate. The corrections then depend
only on the two parameters, $\kappa$ and $\gamma$ since the dependence
on the other SUSY parameters factorizes in the Born term. The QCD
corrections vary from small [most of the time negative] values for small
$\kappa$ values and small gluino masses, up to $\sim 20\%$ near
threshold. \\ \vspace*{-3mm}
For the decays $\tilde{q}_L \rightarrow q' \chi^\pm_j$ [the right--handed squark
does not decay into charginos], the matrix elements in the chargino
mass matrix do not factorize in the Born expressions and the QCD
corrections further depend on the ratios $U_{j1}/V_{j1}$ through
the contribution $F_{\tilde{g}}$. This dependence is, however, rather mild
since first the ratio $U_{j1}/V_{j1}$ is of order unity in most of the
relevant SUSY parameter space [in particular for $|\mu| > M_2$] and second
the contribution $F_{\tilde{g}}$ is small compared to the other contributions
for gluino masses below 1 TeV. The QCD corrections for the decays $\tilde{q}_L
\rightarrow q' \chi^\pm$ are thus approximately the same as in the case of the decays
into neutralinos. \\ \vspace*{-3mm}
In conclusion: we have calculated the ${\cal O}(\alpha_s)$ QCD corrections
to decay modes of scalar squarks into quarks plus charginos or neutralinos
in the Minimal Supersymmetric Standard Model. We have paid particular
attention to the case of $\tilde{t}$ [and also $\tilde{b}$] squarks, where
mixing effects are important. In the case of top squark decays, the QCD
corrections can reach values of the order of a few tens of percent, depending on
the various SUSY parameters.
They can be either positive or negative and increase logarithmically with
the gluino mass. For the scalar partners of light quarks, the corrections
do not exceed the level of ten to twenty percent for gluino masses less than
1 TeV.
\vspace*{2cm}
\noindent {\bf Acknowledgements}: \\ \vspace*{-3mm}
\noindent We thank Tilman Plehn and Peter Zerwas for discussions and for the
comparison between their results, Ref.~\cite{R5a}, and ours, and the
Vienna group, in particular Sabine Kraml, for discussions about Ref.~\cite{R8}.
\newpage
\section{Introduction}
In this paper we use a new real space renormalization-group map to
study renormalization of SUSY $\phi^4$ theories.
Symanzik's \cite{Sy} work proved that $\phi^4$ theories can be
represented as weakly self-avoiding random walk models. They are
in the same universality class as the self-avoiding random walk \cite{La}.\
Our renormalization-group transformation is carried out on the space of
probabilities. Real space renormalization-group methods have proved to be
useful in the study of a wide class of phenomena and are used here
to provide a new stochastic meaning to the parameters involved in the flow of
the interaction constant and the mass, as well as the $\beta$ function for
SUSY $\phi^4$.\
The hierarchical models introduced by Dyson \cite{Dy} have the feature of
having a simple renormalization-group transformation. We use a hierarchical
lattice where the points are labeled by elements of a countable, abelian
group {\it G} with ultrametric $\delta$; i.e. the metric space
$({\it G},\delta)$ is hierarchical. The hierarchical structure of this metric space induces a renormalization-group
map that is ``local"; i.e. instead of studying the space of random functions
on the whole lattice, we can descend to the study of random functions on
L-blocks (cosets of {\it G}) \cite{Ev}. Our method provides a probabilistic
meaning to every parameter appearing in the flow of the interaction constant and
the mass for any $\phi^n$.\
This paper is organized as follows: in Section 2 we present
the lattice and the corresponding class of L\'{e}vy processes which are
studied here. In Section 3 we define the renormalization-group map and apply
this to SUSY $\phi^4$, in random walk representation. In Section 4 we use
the results obtained in the previous Section to shed new light on the stochastic
meaning of the map, and the results are presented.\
\section{The lattice with ultrametric and the L\'{e}vy process.}
The hierarchical lattice used in this paper was introduced by Brydges, Evans
and Imbrie \cite{Ev}.Here, we present a slight variant. Fix an integer
$L\geq 2$. The points of the lattice are labeled by elements of the countable,
abelian group ${\it G}=\oplus ^{\infty}_{k=0}{\bf Z}_{L^d}$, d being the
dimension of the lattice. A one-dimensional example can be found in Brydges
et al \cite{Ev} and a two dimensional example in Rodr\'{\i}guez-Romo \cite{Su}.
An element $X_i$ in ${\it G}$ is an infinite sequence
$$
X_i\equiv (...,y_k,...,y_2, y_1, y_0 )\;\;;\;\;y_i\in {\bf Z}_{L^d}\;\;
\mbox{ thus }\;\; X_i\in {\it G}=\oplus^{\infty}_{k=0}{\bf Z}_{L^d},
$$
where only finitely many $y_i$ are non-zero.\
Let us define subgroups
\begin{equation}
\{0\}={\it G}_0\subset {\it G}_1\subset ...\subset {\it G}
\;\;\;\mbox{where }
{\it G}_k=\{X_i\in {\it G}| y_i=0, i\geq k \}
\end{equation}
and the norm $|\cdot|$ as
\begin{equation}
|X_i|=\left\{\begin{array}{cc}
0 & \mbox{ if $X_i=0$} \\
L^p & \mbox{where $p=\inf\{k| X_i\in {\it G}_k\}$ if $X_i\neq 0$}
\end{array}
\right.
\end{equation}
Then, the map $\delta:(X_{i+1}, X_i)\rightarrow |X_{i+1}-X_i|$ defines a
metric on ${\it G}$. In this metric the subgroups ${\it G}_k$ are balls
$|X_i|\leq L^k$ containing $L^{dk}$ points. Here the operation + (hence - as
well) is defined componentwise.\
The metric defined by eq(2) satisfies a stronger condition than the triangle
inequality, i.e.
\begin{equation}
|X_i+X_{i+1}|\leq \mbox{ Max}(|X_i|,|X_{i+1}|).
\end{equation}
From eq(3), it is clear that the metric introduced is an ultrametric. \
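As an aside, the group ${\it G}$, the norm and the ultrametric property are
easy to realize on a computer. The sketch below is a minimal illustration:
the infinite direct sum is truncated to finitely many coordinates, and the
values of $L$, $d$ and the truncation depth are arbitrary sample choices. It
checks the inequality of eq(3) on randomly drawn pairs.
\begin{verbatim}
# Sketch: truncated hierarchical group G, the norm of eq(2), and a random
# check of the ultrametric inequality eq(3).
import random

L, d, depth = 2, 2, 6           # scale factor, dimension, truncation depth
M = L**d                        # each coordinate lives in Z_{L^d}

def add(x, y):                  # componentwise addition mod L^d
    return tuple((a + b) % M for a, b in zip(x, y))

def norm(x):                    # |x| = L^p, p = inf{k : y_i = 0 for i >= k}
    p = 0
    for k, yk in enumerate(x):  # x[0] = y_0, x[1] = y_1, ...
        if yk != 0:
            p = k + 1
    return 0 if p == 0 else L**p

random.seed(0)
for _ in range(1000):
    x = tuple(random.randrange(M) for _ in range(depth))
    y = tuple(random.randrange(M) for _ in range(depth))
    assert norm(add(x, y)) <= max(norm(x), norm(y))
print("ultrametric inequality holds on all sampled pairs")
\end{verbatim}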
For the purposes of this paper we introduce the L\'{e}vy process as a
continuous time random walk $w$. This is the following ordered sequence of
sites in {\it G};
\begin{equation}
(w(t_0),...,w(t_0+...+t_n))\;\;,w(t_0+...+t_i)=X_i\in {\it G},
\;\;T=\sum ^n_{i=0}t_i ,\; n\geq 0
\end{equation}
where $t_i$ is the time spent in $X_i\in {\it G}$ (waiting time at $X_i$)
and T, fixed at this point, is the running time for the process. For convenience we take
$X_0=0$.\
We are not dealing with nearest neighbour random walks on the lattice,
provided we mean neighbourhood with respect to the ultrametric distance
$\delta$ previously defined. We take the L\'{e}vy process we are dealing
with to have probability $P(w)=r^ne^{-rT}\prod^{n-1}_{i=0}q(X_{i+1},X_{i})$.
Namely, the continuous-time random walk has a probability $r\,dt$ ($r$ is the
jumping rate) of making a step in time $(t,dt)$ and, given that it jumps, the
probability of jumping from $X_i$ to $X_{i+1}$ is $q(X_{i+1},X_i)$,
conditioned on a fixed running time $T$ for the process. Here $q(X_{i+1},X_i)$ is
a so far unspecified function of the initial and final sites of the jumps in the lattice
${\it G}$. Here we define $Dw$ by $\int(\cdot)Dw$=\newline
$\sum_n\sum_{[X_i]^n_{i=0}}\int^T_0\prod^n_{i=0}dt_i
\delta(\sum^n_{i=0}t_i-T)(\cdot)$. From this follows $\int P(w)Dw$=$1$.\
Let $\Lambda_n$ be the space of simple random walks of length $n$, with
probability measure $P(w)$; we construct on this space the weakly SARW model
that represents SUSY $\phi^4$ (through the McKane-Parisi-Sourlas theorem
\cite{MKPS}). We take advantage of this feature to provide a better
understanding of SUSY $\phi^4$ renormalization in terms of stochastic
processes. This method can be straightforwardly generalized to any SUSY
$\phi^n$ with ultrametric.
\section{The renormalization-group map on the random walk representation of
SUSY $\phi^4$.}
We propose a renormalization-group map on the lattice $R(X_i)=LX'_i$ where
$X_i\in {\it G}$ and $LX'_i\in {\it G}'$=${\it G}/{\it G}_1\sim {\it G}$; i.e.
the renormalized lattice ${\it G}'$ is isomorphic to the original lattice
${\it G}$. Here $LX'_i=(...,y_2,y_1)$.\
Besides, we propose the action of the renormalization-group map on the space of
random walks, $R(w)=w'$, from $w$
as defined above, to $w'$. Here,
$w'$ is the following ordered sequence of sites in ${\it G}'=
{\it G}/{\it G}_1\approx {\it G}$;
\begin{equation}
(w'(t'_0),...,w'(t'_0+...+t'_k))\;\;,\mbox{ where}
\end{equation}
$$
w'(t'_0,...,t'_{i'})=X'_{i'}\in {\it G},\;\;T'=\sum ^k_{i'=0}t'_i ,
\;0\leq k\leq n,
\;\;T=L^{\varphi}T'.
$$
$R$ maps $w(t_0)+{\it G}_1,w(t_0+t_1)+{\it G}_1,...,w(t_0+...+t_n)+{\it G}_1$
to cosets $Lw(t_0)$,$Lw(t_0+t_1)$,...,$Lw(t_0+...+t_n)$ respectively. If two
or more successive cosets in the image are the same, they are listed only as one
site in $w'(t'_0),...,w'(t'_0+...+t'_k)$, and the times $t'_j$ are sums of the
corresponding $t_i$ for which successive cosets are the same, rescaled by
$L^{\varphi}$. For $\varphi=2$, we are dealing with normal diffusion (this is
the standard version of SUSY $\phi^4$), in case $\varphi<2$ with superdiffusion,
and subdiffusion for $\varphi>2$. In the following this parameter is arbitrary,
so we can study general cases.\
We can now work out probability measures at the $(p+1)^{th}$ stage in the
renormalization provided only that we know the probabilities at the $p^{th}$
stage. We integrate the probabilities of all the paths $w^{(p)}$
consistent with a fixed path $w^{(p+1)}$ in accordance with the following.
Let $R(w)=w'$ be the renormalization-group map above as stated, then
$P'(w')$= \newline
$L^{\varphi k}\int Dw P(w)\chi (R(w)=w')$. Here $R(w)=w'$ is a
renormalization-group transformation that maps an environment $P(w)$ to a new
one, $P'(w')$, thereby implementing the scaling.\
Hereafter
\begin{equation}
m_{j'}=\sum^{j'}_{i'=0}n_{i'}+j' \;\;\mbox{ and}
\end{equation}
$$
n=\sum^{k}_{i'=0}n_{i'}+k\;\;\;\;\;\;0\leq j'\leq k, \mbox{ being}
$$
$n_{i'}$=$max\{i|w(t_0+...+t_j)\in LX_{i'}, \forall j\leq i\}$; i.e. the
number of steps (for paths $w$) in the contracting ${\it G}_1$ coset that, once
the renormalization-group map is applied, has the image $LX'_{i'}$.\
Concretely $P'(w')$ can be written like
\begin{equation}
P'(w')=
L^{\varphi k}\;\sum_{\left[ n_{i'}\right]^{k}_{i'=0}}
\;\sum_{\left[ X_{i}\right]^{n}_{i=0}}
\int \prod^{n}_{i=0}dt_{i}\;
\prod^{k}_{j'=0}\;
\delta (\sum^{m_{j'}}_{{i}=m_{j'-1}+1}\;\;t_{i}-
L^{\varphi }t'_{j'})\times
\end{equation}
$$
\times
\prod^{k}_{j'=0}\;
\prod^{m_{j'}}_{{i}=m_{j'-1}+1}\;
\chi (X_{i}\in LX'_{j'}\;\;)P(w).
$$
It is straightforward to prove that the probability $P(w)$ where we substitute
$q(X_{i+1},X_i)$ by $c|X_{i+1}-X_i|^{-\alpha}$ $\forall X_i,X_{i+1}\in {\it G}$
($c$ is a constant fixed up to normalization and $\alpha$ another constant),
is a fixed point of the renormalization-group map provided $\varphi$=
$\alpha-d$. Even more, if in $P(w)$ we substitute $q(X_{i+1},X_i)$ by
$c\left(|X_{i+1}-X_i|^{-\alpha}+|X_{i+1}-X_i|^{-\gamma}\right)$
$\forall X_i,X_{i+1}\in {\it G}$, $\gamma\gg\alpha$ ($\gamma$ is an additional constant),
then this flows to the very same fixed point of the renormalization-group
map as in the first case. This holds provided
$log\left(\frac{L^{-\alpha}-L^{-\gamma}-2L^{d-\gamma-\alpha}}
{L^{-\alpha}-2L^{d-\gamma}}\right)\rightarrow 0$ and $\varphi$=$\alpha-d$.\
Let us substitute $q(X_{i+1},X_i)$ by
$q_1(|X_{i+1}-X_i|)+\epsilon b(X_{i+1},X_i)$ where $q_1(|X_{i+1}-X_i|)$ is any
function of the distance between $X_{i+1}$ and $X_i$, both sites in the
lattice; $b(X_{i+1},X_i)$ is a random function and $\epsilon$ a small
parameter. We can impose on $b(X_{i+1},X_i)$ the following conditions.\
a) $\sum_{X_{i+1}}b(X_{i+1},X_i)$=$0$.\
b) Independence. We take $b(X_{i+1},X_i)$ and $b(X'_{i+1},X'_i)$ to be
independent if $X_{i+1}\neq X'_i$.\
c) Isotropy.\
d) Weak randomness.\
In this case, $P(w)$ is still a fixed point of the renormalization-group map
provided\newline
\begin{equation}
\sum_{(n_j)^k_0}\sum_{(X_i)^n_0}\prod^{n-1}_{i=0}b(X_{i+1},X_i)
\prod^k_{j=0}\prod^{m_j}_{i=m_{j-1}+1}\chi(X_i\in LX'_j)=
\end{equation}
$$
\prod^k_{j=0}b(X'_{j+1},X'_j)L^{-\varphi k}(b(1)(L^d-1))^{n_j}
$$
where $b(1)=\frac{1-L^{-\varphi}}{L^d-1}$ is the probability of jumping from
one specific site to another specific site inside the ${\it G}_1$ coset.\
One formal solution to eq(8) is the following
\begin{equation}
b(X_{i+1},X_i)=\left\{\begin{array}{cc}
\frac{1-L^{\varphi}}{L^d-1} & \mbox{ if $|X_{i+1}-X_i|=L$} \\
\sum_t\left(
\begin{array}c
d+\varphi\\
t
\end{array}
\right)
f(X_{i+1})^tf(X_i)^{d+\varphi-t} & \mbox{where $|X_{i+1}-X_i|>L$}
\end{array}
\right.
\end{equation}
up to proper normalization. Here $f(X_i)$ and $f(X_{i+1})$ are homogeneous
functions of sites in the lattice, of order $-1$. Besides, they add up to one and
are positive definite. Since in the limit $d+\varphi\rightarrow \infty$ (provided
the mean remains finite) the binomial probability distribution tends to the Poisson
distribution, we think a nontrivial SUSY $\phi^4$ theory could be included in
this case \cite{Kl}.\
The random walk representation of the SUSY $\phi^4$ is a weakly SARW
that penalizes two-body interactions; this is a configurational measure model.
Configurational measures are measures on $\Lambda_n$. Let $P_U(w)$ be the
probability on this space such that
\begin{equation}
P_U(w)=\frac{U(w)P(w)}{Z}
\end{equation}
where $Z=\int U(w)P(w)Dw$ and $P(w)$ is the probability above described, thus
a fixed point of the renormalization-group map. Besides $U(w)$ is the energy
of the walks. To study the effect of the renormalization-group map on $P_U(w)$ we need to follow the trajectory of $U(w)$ after applying
the renormalization-group map several times.\
Therefore, from previous definition of the renormalization-group map follows;
$$
P'_{U'}(w')=L^{\varphi k}\int
P_{U}(w)
\chi (R(w)=w')Dw
$$
where $Z'=Z$, thus
\begin{equation}
U'(w')=
\frac{\int Dw P(w)\chi (R(w)=w')U(w)}
{\int Dw P(w)\chi (R(w)=w')}
\end{equation}
Note that eq(11) can be viewed as the conditional expected value of $U(w)$
given that the renormalization-group map is imposed. Therefore and hereafter,
to simplify notation, we write eq(11) as $U'(w')=< U(w) >_{w'}$.\
In the random walk representation of the SUSY $\phi^4$ model with
interaction $\lambda$, and mass (killing rate in the stochastic framework)
$m$, $U$ is as follows.
\begin{equation}
U(w)=\prod_{X\in {\it G}}
e^{-m\sum_{i\in J_{X}}t_{i}-\lambda \sum_{i<j\in J_{X}}
t_{i}t_{j}
{\bf 1}_{\left\{w(t_i)=w(t_j)\right\} }},
\end{equation}
being $m<0$ and $\lambda\stackrel{>}{\scriptscriptstyle<}0$ (small) constants.
Here we let the running time $T$ of the process be free (random). The probability
$P_U(w)$, where $U(w)$ is defined as in eq(12), flows to a fixed form after
the renormalization-group map is applied. This fixed form is characterized by
the renormalized energy
\begin{equation}
U'(w')=
\prod_{X'\in{\it G}}
e^{
-m'
\sum_{i'\in J_{X'}}
t'_{i'}-
\lambda'
\sum_
{\stackrel{i'< j'}
{\left\{i',j'\right\}\in J_{X'}}}
t'_{i'}t'_{j'}
{\bf 1}_{(w(t'_{i'})=w(t'_{j'}))}}
\times
\end{equation}
$$
\left\{1+\eta'_{1}
\sum_{
\stackrel{i'< j'< k'}
{\left\{i',j',
k'\right\}\in J_{X'}}}
t'_{i'}t'_{j'}
t'_{k'}
{\bf 1}_{(w(t'_{i'})=w(t'_{j'})
=w(t'_{k'}))}+
\right.
$$
$$
+\left.
\eta'_2
\sum_
{\stackrel{i'< j'}
{\left\{i',j'\right\}\in J_{X'}}}
t'_{i'}t'_{j'}
{\bf 1}_{(w(t'_{i'})=w(t'_{j'}))}
+\eta'_{3}\sum_{i'\in J_{X'}}
t'_{i'}\right\}+r'.
$$
Here
\begin{equation}
m' = L^{\varphi}m+m'_1 \mbox{$\;\;\;where$}
\end{equation}
\begin{equation}
m'_{1}= \gamma_{1}\lambda-\gamma_{2}\lambda^2+r_{m'_1}.
\end{equation}
\begin{equation}
\lambda'=L^{2\varphi-d}\lambda-\chi\lambda^2+r_{\lambda'}.
\end{equation}
\begin{equation}
\eta'_{1}=\eta_{1}L^{3\varphi-2d}+\eta \lambda^2.
\end{equation}
\begin{equation}
\eta'_2=\eta_1A+L^{(2\varphi-d)}\eta_2 \mbox{$\;\;\;and$}
\end{equation}
\begin{equation}
\eta'_3=\eta_1B+\eta_2\gamma_{1}+L^{\varphi}\eta_{3}.
\end{equation}
All parameters involved in eq(15), eq(16), eq(17), eq(18) and eq(19),
namely $\gamma_1$, $\gamma_2$, $\chi$, $\eta$, $A$ and $B$, have precise, well
defined formulae \cite{Su}. They are linearized conditional expectations of
events inside contracting ${\it G}_1$ cosets which, upon renormalization, map
to a fixed random walk with totally arbitrary topology. Moreover, we also have
precise formulae for all the remainders \cite{Su}. Concretely speaking,
$\gamma_1$ and $\gamma_2$ are contributions to renormalized local times
coming from one and two two-body interactions inside the contracting
${\it G}_1$ cosets, respectively. $\chi$ is the two two-body interaction
(inside the contracting ${\it G}_1$ cosets) contribution to renormalized
two-body interaction. $\eta$ is the contribution to renormalized three-body
interaction coming from two two-body interactions. Finally $A$ and $B$ are the
one three-body interaction (inside the contracting ${\it G}_1$ cosets)
contribution to renormalized two-body interaction and local time,
respectively.\
In the SUSY representation of $\phi^4$ we can say that $\gamma_1$
and $\gamma_2$ are first and second order contributions of SUSY $\phi^4$ to
renormalized SUSY mass; $\chi$ is the second order contribution of SUSY
$\phi^4$ to renormalized SUSY $\phi^4$; $\eta$ is the second order contribution
(the first order contribution is null due to topological restriccions) of SUSY
$\phi^4$ to renormalized SUSY $\phi^6$. Finally, $A$ and $B$ are first order
contributions of SUSY $\phi^6$ (already generated at this stage by previous
renormalization stages) to renormalized SUSY $\phi^4$ and mass, respectively.\
Eq(13) is presented in terms of the product of two factors. The first one
(exponential) involves only: a) trivial flow of mass and interaction constant;
b) $\lambda\phi^4$ contribution (inside contracting ${\it G}_1$ cosets) to
renormalized mass and $\lambda'\phi^4$ up to leading order. The second factor
involves mixed terms; namely $\lambda\phi^4$ and $\phi^6$ contributions
(inside the contracting ${\it G}_1$ cosets) to renormalized mass, $\phi^4$ and
$\phi^6$. $\phi^6$ terms come into the scheme because they are produced from
$\lambda\phi^4$ due to the fixed topology of the continuous-time random walk
on the hierarchical lattice. This arrangement allows us to distinguish the
physically meaningful (leading order) magnitudes. From this, we analyze some
results in the next section.\
We can choose either representation to obtain the final formulae for
parameters and remainders. Here and in Rodr\'{\i}guez-Romo \cite{Su} we
choose the one that provides new stochastic meaning to the renormalization of SUSY field
theories.\
We claim that this result is the space-time renormalization-group
trajectory, for the weakly SARW energy interaction studied by Brydges, Evans
and Imbrie \cite{Ev} provided $\varphi=2$ and $d=4$. In their paper the
trajectory of a SUSY $\phi^{4}$ was studied (recall that this can be
understood in terms of intersection of random walks due to the McKane-Parisi-Sourlas
theorem) from a SUSY field-theoretical version of the
renormalization-group map, on almost the same hierarchical lattice we are
studying here. We improve the model by providing exact expressions for
$\lambda$ and $m$ for each step the renormalization-group is applied in the
stochastic framework, among others.\
To obtain eq(13) we have introduced an initial mass term $m$, $O(\lambda^2)$
(this allows a factorization whose errors are included in the remainder $r'$,
automatically). We use Duplantier's hypothesis \cite{Du} and assume that all
divergences of the SUSY $\phi^4$ with ultrametric come from the vertices,
or interactions per site, of the lattice. This hypothesis has been proved to be
correct in dimension 2 by means of conformal field theory. Then, a formal
Taylor series expansion is applied which is analyzed for each particular
topology in the renormalized field theory (this is done in random walk
representation) per site of the new lattice. Putting everything together and
by induction, we obtain the final result.\
We can apply the very same method to study any SUSY $\phi^n$ model on this
ultrametric space.\
\section{Renormalized SUSY $\phi^4$ with ultrametric. The stochastic
approach.}
To start with, we write the physically meaningful (leading order) part of
eq(13); namely eq(14), eq(15) and eq(16) in parameter space. Let us define the
following vector
\begin{equation}
{\bf H}=(m,\lambda)
\end{equation}
Here we have approximated up to the most probable events (first order in SUSY
representation). The action of the renormalization-group map (RG) is expressed
as
\begin{equation}
{\bf H'}=R({\bf H})=(m',\lambda').
\end{equation}
The fixed points in our theory, $(m^*_1,\lambda^*_1)$ and
$(m^*_2,\lambda^*_2)$ are as follows.\\
a) The trivial $m^*_1$=$\lambda^*_1$=$0$.\\
b) $\lambda^*_2$=$\frac{L^{2\varphi-d}-1}{\chi}$ ;
$m^*_2$=$\frac{\gamma_1(L^{2\varphi-d}-1)}{\chi(1-L^{\varphi})}-
\frac{\gamma_2(L^{2\varphi-d}-1)^2}{\chi^2(1-L^{\varphi})}$.\\
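As a consistency check, the nontrivial point b) above can be verified
symbolically to be fixed under the leading-order flow of eq(14)--eq(16),
i.e. with the remainder terms $r_{m'_1}$ and $r_{\lambda'}$ dropped. A
minimal sketch:
\begin{verbatim}
# Sketch: (m*_2, lambda*_2) is a fixed point of the leading-order flow
# m' = L^phi m + gamma_1 lam - gamma_2 lam^2,
# lam' = L^(2 phi - d) lam - chi lam^2   (remainders dropped).
import sympy as sp

L, phi, d = sp.symbols('L varphi d', positive=True)
g1, g2, chi, m, lam = sp.symbols('gamma_1 gamma_2 chi m lambda')

lam_star = (L**(2*phi - d) - 1)/chi
m_star = (g1*(L**(2*phi - d) - 1)/(chi*(1 - L**phi))
          - g2*(L**(2*phi - d) - 1)**2/(chi**2*(1 - L**phi)))

m_new = L**phi*m + g1*lam - g2*lam**2
lam_new = L**(2*phi - d)*lam - chi*lam**2

subs = {m: m_star, lam: lam_star}
print(sp.simplify(lam_new.subs(subs) - lam_star))   # -> 0
print(sp.simplify(m_new.subs(subs) - m_star))       # -> 0
\end{verbatim}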
The nontrivial fixed point involves a renormalized two-body interaction which
is inversely proportional to the conditional expectation of two two-body interactions that
renormalize to a two-body interaction inside the contracting ${\it G}_1$
cosets ($\chi$), given that the RG map is applied. Meanwhile, the renormalized
mass at this point is given in terms of two ratios. The first one is the
ratio of the conditional expectation of one two-body interaction that
renormalizes to local times ($\gamma_1$) inside a contracting ${\it G}_1$
coset and $\chi$. The second ratio involves the conditional expectation of
two two-body interactions that renormalize to local times ($\gamma_2$) inside
a contracting ${\it G}_1$ coset and $\chi^2$. Both $\lambda^*_2$ and $m^*_2$ are
independent of the scaling factor $L$ for large lattices.\
As we come infinitesimally close to a particular fixed point
(call this ${\bf H^*}$), the trajectory is given completely by the single
matrix $M$ (its eigenvalues and eigenvectors). Namely
\begin{equation}
M_{ij}=\left.\frac{\partial R_i({\bf H})}
{\partial H_j}\right|_{{\bf H}={\bf H}^*}
\end{equation}
From the random walk representation of SUSY $\phi^4$ with ultrametric, within
the most probable event approximation (leading order in the SUSY representation), we
obtain
\begin{equation}
M=
\left(
\begin{array}{cc}
L^{\varphi} & \gamma_1-2\gamma_2\lambda^* \\
0 & L^{2\varphi-d}-2\chi\lambda^*
\end{array}
\right),
\end{equation}
where $\lambda^*$ can be either $\lambda^*_1$ or $\lambda^*_2$.\
The eigenvalues and eigenvectors of this matrix are as follows.\\
a) $l_1=L^{\varphi}$ with eigenvector $(m,0)$.\\
b) $l_2=L^{2\varphi-d}-2\chi\lambda^*$ with eigenvector
$\left(m, -\frac{L^{\varphi}-2L^{2\varphi-d}+2\chi\lambda^*}
{\gamma_1-2\gamma_2\lambda^*}m\right)$, where $\lambda^*$ can be either
$\lambda^*_1$ or $\lambda^*_2$.\\
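A short symbolic sketch (again for the leading-order flow with the remainders
dropped) reproduces the matrix $M$ and its eigenvalues, and shows that at
$\lambda^*_2$ the second eigenvalue reduces to $2-L^{2\varphi-d}$:
\begin{verbatim}
# Sketch: Jacobian of the leading-order map and its eigenvalues.
import sympy as sp

L, phi, d = sp.symbols('L varphi d', positive=True)
g1, g2, chi, m, lam = sp.symbols('gamma_1 gamma_2 chi m lambda')

m_new = L**phi*m + g1*lam - g2*lam**2
lam_new = L**(2*phi - d)*lam - chi*lam**2

M = sp.Matrix([[sp.diff(m_new, m), sp.diff(m_new, lam)],
               [sp.diff(lam_new, m), sp.diff(lam_new, lam)]])
print(M.eigenvals())        # {L^phi: 1, L^(2 phi - d) - 2 chi lambda: 1}

lam_star = (L**(2*phi - d) - 1)/chi
l2 = (L**(2*phi - d) - 2*chi*lam).subs(lam, lam_star)
print(sp.simplify(l2))      # -> 2 - L^(2 phi - d)
\end{verbatim}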
For $L\geq 2$ and $\varphi> 0$; both fixed points $(m^*_1, \lambda^*_1)$
and $(m^*_2, \lambda^*_2)$ are repulsive in the direction of the eigenvector
$(m,0)$, marginal if $\varphi=0$ and attractive if $\varphi<0$. The trivial
fixed point $(m^*_1, \lambda^*_1)$ is repulsive in the direction of the
eigenvector
$\left(m,-\frac{L^{\varphi}-2L^{2\varphi-d}+2\chi}{\gamma_1} m\right)$
provided $\varphi>d/2$, marginal if $\varphi=d/2$ and attractive otherwise.
Finally, the fixed point $(m^*_2, \lambda^*_2)$ is repulsive in the direction
of the eigenvector
$\left(m,-\frac{L^{\varphi}-2L^{2\varphi-d}+2\chi}{\gamma_1} m\right)$
provided $d/2>\varphi$, marginal if $d/2=\varphi$ and
attractive otherwise. This means that the only critical line which forms the
basin of attraction for both fixed points is given only for $0<\varphi<d/2$
and is locally defined by $g_1=0$. Here $g_1$ is the linear scaling field
associated with the eigenvector $(m,0)$.\
The largest eigenvalue defines the critical exponent $\nu$. In the trivial
fixed point $(m^*_1,\lambda^*_1)$, $\nu=1/\varphi$ provided $d\geq \varphi$.
If $d< \varphi$ then $\nu=\frac{1}{2\varphi-d}$. Here the eigenvalue
$l_1=L^{\varphi}>1$ provided $2\varphi<d$; i.e. this fixed point is
repulsive in the direction of the eigenvector $(m,0)$ if and only if
$2\varphi<d$. Although our results are rather general, let us consider
Flory's case as an example \cite{Fl}. For $d\geq 5$ this trivial fixed point,
in the Flory's case, is attractive in the direction of the eigenvector
$(m,0)$, marginal in dimension four and repulsive otherwise.\
In the fixed point $(m^*_2,\lambda^*_2)$, $\nu=\frac{1}{\varphi}$, provided
$\beta>-log_L\left(\frac{1+L^{\beta-d}}{2}\right)$. Returning to the example we are
considering here (Flory's case) \cite{Fl}: for $d\geq 5$ this fixed point is
repulsive, marginal in $d=4$ and attractive otherwise.\
We cannot explain, from this first order approach (the most probable event),
logarithmic corrections to the end-to-end distance in the critical dimension.
This is correctly explained, although heuristically, elsewhere \cite{Su}.\
Using the spin representation, we find the following.\\
a) For the trivial fixed point $(m^*_1,\lambda^*_1)$.\\
$\alpha$=$2-d/\varphi$ ; $\beta$=$\frac{2(d-\varphi)}{\varphi}$ ;
$\gamma$=$\frac{4\varphi-3d}{\varphi}$ ;
$\delta$=$\frac{2\varphi-d}{2d-2\varphi}$ ;
$\nu$=$\frac{1}{\varphi}$ and finally $\eta$=$2-4\varphi+3d$.\\
b) For the fixed point $(m^*_2, \lambda^*_2)$.\\
$\alpha$=$2-d/\varphi$ ; $\beta$=$\frac{d-log_L(2-L^{2\varphi-d})}{\varphi}$ ;
$\gamma$=$\frac{2log_L(2-L^{2\varphi-d})-d}{\varphi}$ ;
$\delta$=$\frac{Log_L(2-L^{2\varphi-d})}{d-log_L(2-L^{2\varphi-d})}$ ;
$\nu$=$\frac{1}{\varphi}$ and finally $\eta$=$2+d-2Log_L(2-L^{2\varphi-d})$.\\
Besides, if we introduce the mass critically, as was done in Brydges et al.
\cite{Ev} in $d=4$ and $\varphi=2$, the critical exponents look as follows.\\
$\alpha$=$0$ ; $\beta$=$\frac{1}{2}$ ; $\gamma$=$1$ ; $\delta$=$3$ ;
$\nu$=$\frac{1}{2}$ and finally $\eta$=0.\\
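One may check that both sets of exponents listed above satisfy the standard
scaling relations $\alpha+2\beta+\gamma=2$, $\gamma=\beta(\delta-1)$ and
$\gamma=\nu(2-\eta)$. A minimal symbolic sketch (illustrative only, with
$\Lambda$ standing for $\log_L(2-L^{2\varphi-d})$):
\begin{verbatim}
# Sketch: scaling relations for the two sets of critical exponents.
import sympy as sp

d, phi, Lam = sp.symbols('d varphi Lambda')   # Lam = log_L(2 - L^(2 phi - d))

def check(alpha, beta, gamma, delta, nu, eta):
    return [sp.simplify(alpha + 2*beta + gamma - 2),
            sp.simplify(gamma - beta*(delta - 1)),
            sp.simplify(gamma - nu*(2 - eta))]

# fixed point a)
print(check(2 - d/phi, 2*(d - phi)/phi, (4*phi - 3*d)/phi,
            (2*phi - d)/(2*d - 2*phi), 1/phi, 2 - 4*phi + 3*d))
# fixed point b)
print(check(2 - d/phi, (d - Lam)/phi, (2*Lam - d)/phi,
            Lam/(d - Lam), 1/phi, 2 + d - 2*Lam))
# both lines print [0, 0, 0]
\end{verbatim}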
On the other hand, we know that for the SUSY $\lambda\phi^4$,
$\beta(\lambda')$=$\mu\frac{\partial \lambda'}{\partial \mu}$, where $\mu$ is
a parameter with the dimensions of mass; namely $\mu$ is an arbitrary mass
parameter.\
Since we know the fixed points of the theory in the random walk representation,
these must be the zeros of $\beta(\lambda)$ in the
SUSY $\lambda\phi^4$ representation. Using this criterion we obtain the
following expression for $\beta(\lambda)$:
\begin{equation}
\beta(\lambda)=\frac{\gamma_2}{1-L^{\varphi}}\lambda^2-
\frac{\gamma_1}{1-L^{\varphi}}\lambda+m
\end{equation}
up to a multiplicative constant.\
An interesting pictorial interpretation of the renormalization-group equation was
suggested by S. Coleman \cite{Co}. The equation can be viewed as a flow of
bacteria in a fluid streaming in a one dimensional channel. Here we provide a
new interpretation of the velocity of the fluid at the point $\lambda$,
$\beta(\lambda)$, in terms of stochastic events (a function of conditional
expectations of two-body interactions inside contracting ${\it G}_1$ cosets).
For large lattices $\beta(\lambda)$ is independent of the lattice parameter
$L$.\
Concretely, $\beta(\lambda)$ (or the velocity of the fluid at the point
$\lambda$) is written in terms of one two-body and two two-body contributions
to renormalized local times (stochastic approach) or mass (field theory
approach). The first contribution is $O(\lambda)$ and the second,
$O(\lambda^2)$.
Let us call $\beta'(\lambda)
=\left(\frac{\partial \beta(\lambda)}{\partial \lambda}\right)_m$, then
\begin{equation}
\beta'(\lambda)=\frac{2\gamma_2\lambda-\gamma_1}{1-L^{\varphi}}
\end{equation}
In the trivial fixed point, $\beta'(\lambda^*_1)> 0$ (infrared stable),
provided $\varphi> 0$ and $L\geq 2$; besides $\beta'(\lambda^*_1)< 0$
(ultraviolet stable), provided $\varphi< 0$ and $L\geq 2$. In the fixed point
$(m^*_2,\lambda^*_2)$, $\beta'(\lambda^*_2)\geq 0$ (infrared stable), provided
$\varphi\geq 1/2\left[d+log_L\left(\frac{\gamma_1\chi}{2\gamma_2}+
1\right)\right]$, and $\beta'(\lambda^*_2)< 0$ (ultraviolet stable) otherwise.
Here we define $ d_H=log_L\left(\frac{\gamma_1\chi}{2\gamma_2}+1\right)$ which
is given in terms of the ratio of conditional expectations of two-body
interactions which renormalize to local times and to two-body interactions. From
this, the following estimates are obtained \\
a) d=4; $\beta'(\lambda^*_2)\leq 0$, provided $d_H\geq 0$.\
b) d=3; $\beta'(\lambda^*_2)\leq 0$, provided $d_H\geq 1/3$.\
c) d=2; $\beta'(\lambda^*_2)\leq 0$, provided $d_H\geq 2/3$.\
d) d=1; $\beta'(\lambda^*_2)\leq 0$, provided $d_H\geq 1$.\
\section{Summary}
Because of the equivalence between the polymer and SAW problems, functional
integration methods were employed in the majority of theoretical approaches to
these problems. It should, however, be remarked that the critical exponents
for the SAW obtained by this method are only meaningful if the spatial
dimensionality $d$ is close to its formal value $d=4$, and it is not yet clear
how to get results for real space in this way. There is another method based
on the search for a solution to the exact equation for the probability density
of the end-to-end distance of the random walk \cite{Al}. By defining the
self-consistent field explicitly, the density could be found with the help of the
Fokker-Planck equation. In this paper we provide another alternative view
where the probability density, as a random function of the random walk, is
proposed.\
Discrete random walks approximate diffusion processes, and many of the
continuous equations of mathematical physics can be obtained, under suitable
limit conditions, from such walks. Besides we can look at this relation the
other way around; that is, suppose that the movement of an elementary particle
can be described as a random walk on the lattice, which represents the
medium it traverses, and that the distance between two neighbouring vertices,
though very small, is of a definite size; therefore the continuous equations
can be considered as merely approximations which may not hold at very small
distances. We show in this paper how the mathematical results are easily
derived by standard methods. The main interest lies in the interpretation of
the results.\
In our approach the properties of the medium will be described by the lattice
and the transition probabilities. We obtain $m'$, the ``mass'' of the field as observed
on this particular hierarchical lattice. The lattice is characterized by the
ultrametric space used to label it.\
We propose to obtain renormalized $n$-body interactions out of a set of
stochastic diagrams with a fixed totally arbitrary topology.\
Here we would like to stress that the search for a proper mathematical
foundation of a physical theory does not mean only a concession to the quest
for aesthetic beauty and clarity but is intended to meet an essential physical
requirement. The mathematical control of the theory plays a crucial role to
allow estimates on the proposed approximations and neglected terms.\
Usually approximations must be introduced which often have the drawback that,
although they can work well, they are uncontrolled: there is no small
parameter which allows an estimate of the error.\
Explicit mathematical formulae for all the parameters and remainders in the
method can be provided. In sake of brevity we present these elsewhere
\cite{Su}. All of them are expressed in terms of conditional expectations of
events inside contracting ${\it G}_1$ cosets.\
Once a successful theoretical scheme has been found it is conceivable that it
is possible to reformulate its structure in equivalent terms but in different
forms in order to isolate, in the most convenient way, some of its aspects and
eventually find the road to successive developments.\
Let us remark that we are talking of a particle picture even when we deal with
systems containing many particles or even field systems.\
We hope our method and ideas may help in the proper understanding of the
association of stochastic processes to the quantum states of a dynamical
system; i.e. stochastic quantization.\
Summarizing, in this paper we present a heuristic space-time \newline
renormalization-group map, on the space of probabilities, to study SUSY
$\phi^4$ in random walk representation, on a hierarchical metric space defined
by a countable, abelian group ${\it G}$ and an ultrametric $\delta$. We
present the L\'evy process on $\Lambda_n$ that corresponds to the random walk
representation of SUSY $\phi^4$, which is a configurational measure model
from the point of view of stochastic processes. We apply the
renormalization-group map on the random walk representation and work out
explicitly the weakly SARW case for double intersecting paths which
corresponds to SUSY $\phi^4$, as an example. The generalization to SUSY
$\phi^n$, for any $n$, is straightforward. New conclusions are derived from
our analysis.\
Our result improves the field-theoretical approach \cite{Ev} by obtaining an
exact probabilistic formula for the flow of the interaction constant and the
mass under the map.
\section{Acknowledgments}
This research was partially supported by CONACYT, Ref 4336-E, M\'exico.
\section{Introduction}
The problem of classification of trivial Lagrangians i.e. Lagrangians
leading to trivial Euler-Lagrange equations has a long history and it seems
that it is not solved in full generality even today in the framework of
classical field theory. The basic input to the rigorous formulation of this
problem is a fiber bundle
$
\pi: Y \rightarrow X
$
where
$X$
is usually interpreted as the space-time manifold and the fibres describe the
internal degrees of freedom of the problem. To formulate the Lagrangian
formalism one has to include the ``velocities" i.e. one must work on some
jet-bundle extension of
$
\pi: Y \rightarrow X.
$
There are two approaches to this problem. The first one uses the infinite jet
bundle extension formalism. In this framework one can prove that, quite
generally, a trivial Lagrangian is what is usually called, a total derivative
\cite{T}, \cite{A}. The second approach, which is more suitable for the needs
of specific physical problems is to use a finite jet bundle extension. This
corresponds to field theories with a finite order of partial derivatives. The
most common physical theories are, at most, of second order. The r\^ole of the
finite-order approach has been particularly emphasised by Krupka (see for
instance \cite{K1}). In the finite-order approach there is no general solution
of the trivial Lagrangian problem. However, there are a number of partial
results which deserve mentioning. For instance, the first-order case has
been completely elucidated (see \cite{E}-\cite{GP}; needless to say, this
rather simple case has been rediscovered independently by many people from
time to time). For the second and higher order Lagrangians only partial
results are known to the author. More precisely, we refer to Olver's papers
\cite{BCO}, \cite{O} where one studies the case of a trivial Lagrangian of
order
$r$
which depends only on the highest order derivatives in a polynomially
homogeneous way. In this case one can show that the Lagrangian is a linear
combination of some polynomial expressions called hyper-Jacobians. The
existence of some linear dependence between the hyper-Jacobians is emphasised
but the general problem of finding all possible linear dependencies is not
solved. It is worth mentioning that the results of Olver are based on a very
complex algebraic machinery of higher dimensional determinants, invariant
theory, etc. combined with a generalization of Gel'fand-Dikii transform.
In this paper we will present a rather complete analysis of case of the
second-order trivial Lagrangian. Without any limitations we will be able to
prove that the dependence on the second order derivatives is through the
hyper-Jacobian polynomials. In particular the polynomiality emerges naturally
from the proof and the homogeneity condition is not needed. Moreover we will
elucidate the question of possible linear dependencies between the
hyper-Jacobians and it will emerge that only the tracelessness conditions
already appearing in \cite{BCO}, \cite{O} are sufficient, i.e. no other
constraints are independent of these. All these results will be obtained
from a long but quite elementary proof. The methods used are essentially
those used in \cite{G1}, \cite{G2} for the analysis of the most general
form of a Euler-Lagrange expression: they consist in complete induction and
some Fock space techniques.
We feel that this completely new method may offer a way for the analysis of
the general case of
$r$-th
order Lagrangians and so it deserves some attention.
The structure of the paper is the following one. In Section 2 we present the
jet-bundle formalism to fix the notations and illustrate our problem on the
well-known first-order trivial Lagrangian case. We also present the equations
to be solved in the second-order case and comment on the best strategy to solve
them. In Section 3 we consider the second-order case for a scalar field theory.
The proof is based on induction on the dimension of the space-time manifold and
is contained in the proof of the main theorems in \cite{G1}, \cite{G2}. We give
it, nevertheless, in this Section because we will later need some notations,
results and techniques. We also start the analysis for the general case by an
obvious corollary of the result obtained for the scalar field and we will
formulate the general result we want to prove. In Section 4 the case of a
two component classical field is analysed and one is able to perceive the
nature of the difficulties involved in this analysis. In Section 5 the
case of a field with three or more components is settled completing the
analysis. In Section 6 we combine the results from the preceding two
Sections in the main theorem and make some final comments.
\section{The Jet-Bundle Formalism in the Lagrangian Theory}
2.1 As we have said in the Introduction, the kinematical structure of
classical field theory is based on a fibre bundle structure
$
\pi: Y \mapsto X
$
where
$Y$
and
$X$
are differentiable manifolds of dimensions
$
dim(X) = n, \quad dim(Y) = N + n
$
and
$\pi$
is the canonical projection of the fibration. Usually $X$
is interpreted as the ``space-time" manifold and the fibres of $Y$
as the field variables. Next, one considers the $r$-th jet bundle
$
J^{r}_{n}(Y) \mapsto X \quad (r \in \rm I \mkern -3mu N).
$
A
$r$-th
order jet with source
$x \in X$,
and target
$y \in Y$
is, by definition, an equivalence class of all the smooth maps
$
\zeta: X \rightarrow Y
$
verifying
$\zeta(x) = y$
and having the same partial derivatives in $x$ up to order $r$ (in any
chart on $X$ and respectively on $Y$).
We denote the equivalence class of
$\zeta$
by
$
j^{r}\zeta
$
and the factor set by
$
J^{r}_{x,y}.
$
Then the
$r$-th order
jet bundle extension is, by definition
$
J^{r}_{n}(Y) \equiv \cup J^{r}_{x,y}.
$
One usually must take
$
r \in \rm I \mkern -3mu N
$
sufficiently large such that all formulas make sense. Let us consider a
local system of coordinates in the chart
$
U \subseteq X: \quad
(x^{\mu}) \quad (\mu = 1,...,n).
$
Then on some chart
$
V \subseteq \pi^{-1}(U) \subset Y
$
we take a local coordinate system adapted to the fibration structure:
$
(x^{\mu},\psi^{A}) \quad (\mu = 1,...,n, \quad A = 1,...,N)
$
such that the canonical projection is
$
\pi(x^{\mu},\psi^{A}) = (x^{\mu}).
$
Then one can extend this system of coordinates to
$
J^{r}_{n}(Y)
$
as follows: on the open set
$
V^{r} \equiv (\pi^{r,0})^{-1}(V)
$
we define the coordinates of
$
j^{r}_{x}\zeta
$
to be
$
(x^{\mu},\psi^{A},\psi^{A}_{\mu},...,\psi^{A}_{\mu_{1},...,\mu_{r}})
$
where
$
\mu_{1} \leq \cdots \leq \mu_{s} \qquad (s \leq r).
$
Explicitly
\begin{equation}
\psi^{A}_{\mu_{1},...,\mu_{s}}(j^{r}_{x}\zeta) \equiv
\prod_{i=1}^{s} {\partial\over \partial x^{\mu_{i}}} \zeta(x)
\quad (s=1,...,r).
\end{equation}
If
$
\mu_{1},...,\mu_{s}
$
are arbitrary numbers belonging to the set
$
\{1,...,n\}
$
then by the expression
$
\{\mu_{1},...,\mu_{s}\}
$
we understand the result of the operation of increasing ordering. Then
the notation
$
\psi^{A}_{\{\mu_{1},...,\mu_{s}\}}
$
becomes meaningful for all set of numbers
$
\mu_{1},...,\mu_{s}.
$
If
$
I = \{\mu_{1},...,\mu_{s}\}
$
is an arbitrary set from
$
\{1,...,n\}^{\times s}
$
then we define
\begin{equation}
\psi^{A}_{I} = \psi^{A}_{\mu_{1},...,\mu_{s}} \equiv
\psi^{A}_{\{\mu_{1},...,\mu_{s}\}}.
\end{equation}
This notation makes sense whenever the cardinality of $I$ satisfies:
$
|I| \leq r
$
where if
$
I = \emptyset
$
then we put
$
\psi^{A}_{\emptyset} = \psi^{A}.
$
With this convention the expression
$
\psi^{A}_{I}
$
is completely symmetric in the individual
$
\mu_{1},...,\mu_{s}
$
which make up the multi-index $I$.
2.2 Let us consider
$
s \leq r
$
and
$T$
a
$(n + 1)$-form
which can be written in the local coordinates introduced above as:
\begin{equation}
T = {\cal T}_{A} \quad d\psi^{A} \wedge dx^{1} \wedge \cdots \wedge dx^{n}
\label{edif}
\end{equation}
with
$
{\cal T}_{A}
$
some smooth functions of
$
(x^{\mu},\psi^{A}_{I}) \qquad (|I| \leq s).
$
Then
$T$
is a globally defined object. We call such a
$T$
a {\it differential equation of order s}.
2.3 To introduce some special type of differential equations we need some
very useful notations \cite{AD}. We define the differential operators:
\begin{equation}
\partial^{I}_{A} \equiv {r_{1}! \cdots r_{n}! \over |I|!}
{\partial \over \partial \psi^{A}_{I}}
\label{pdif}
\end{equation}
where
$
r_{i}
$
is the number of times the index
$i$
appears in $I$.
The combinatorial factor in (\ref{pdif}) avoids possible overcounting in the
computations which will appear in the following. One has then:
\begin{equation}
\partial^{\mu_{1},...,\mu_{l}}_{A} \psi^{B}_{\nu_{1},...,\nu_{m}} =
\cases{ { 1\over l!} \delta^{A}_{B}
perm(\delta^{\mu_{i}}_{\nu_{j}}), & for $l = m$ \cr
0, & for $l \not= m$ \cr}
\end{equation}
where
\begin{equation}
perm\left( \delta^{\mu_{i}}_{\nu_{j}} \right) \equiv
\sum_{P \in {\cal P}_{l}} \delta^{\mu_{1}}_{\nu_{P(1)}}\cdots
\delta^{\mu_{l}}_{\nu_{P(l)}}
\end{equation}
is a permanent. (In general we denote by
$
perm(A)
$
the permanent of the matrix
$A$).
Next, we define the total derivative operators:
\begin{equation}
D_{\mu} = {\partial\over \partial x^{\mu}} + \sum_{l=0}^{r-1}
\psi^{A}_{\nu_{1},...,\nu_{l},\mu} \partial^{\nu_{1},...,\nu_{l}}_{A}
= {\partial\over \partial x^{\mu}} + \sum_{|I|\leq r-1} \psi^{A}_{I\mu}~
\partial^{I}_{A}
\label{tdif}
\end{equation}
where we use the convention
$
IJ \equiv I \cup J.
$
One can check that
\begin{equation}
D_{\mu}\psi^{A}_{I} = \psi^{A}_{I\mu}, \qquad |I| \leq r-1
\label{der}
\end{equation}
and
\begin{equation}
[D_{\mu}, D_{\nu}] = 0.
\label{com}
\end{equation}
Finally we define the differential operators
\begin{equation}
D_{I} \equiv \prod_{i=1}^{|I|} D_{\mu_{i}}.
\label{tdifs}
\end{equation}
Because of (\ref{com}) the order of the factors in the right hand side is
irrelevant.
2.4 A differential equation
$T$
is called {\it locally variational} (or of the {\it Euler-Lagrange type})
{\it iff} there exists a local real function
${\cal L}$
such that the functions
$
{\cal T}_{A}
$
from (\ref{edif}) are of the form:
\begin{equation}
{\cal T}_{A} = {\cal E}_{A}({\cal L}) \equiv \sum_{l=0}^{r} (-1)^{l}
D_{\mu_{1},...,\mu_{l}} (\partial^{\mu_{1},...,\mu_{l}}_{A} {\cal L}).
\label{Eop}
\end{equation}
One calls
${\cal L}$
a {\it local Lagrangian} and:
\begin{equation}
L \equiv {\cal L}~dx^{1}\wedge\cdots \wedge dx^{n}
\label{Lform}
\end{equation}
a {\it local Lagrange form}.
If the differential equation
$T$
is constructed as above then we denote it by
$
E(L).
$
A local Lagrangian is called a {\it total divergence} if it is of the form:
\begin{equation}
{\cal L} = D_{\mu} V^{\mu}.
\end{equation}
A Lagrangian is called {\it trivial} (or {\it null} in the terminology of
\cite{BCO}, \cite{O}) if it satisfies:
\begin{equation}
E(L) = 0.
\label{trEL}
\end{equation}
One can check that a total divergence Lagrangian is trivial. The converse of
this statement has been proved only in the infinite jet bundle approach
\cite{T}.
2.5 Let us briefly review the case of trivial first-order Lagrangians.
One must take in the equation (\ref{trEL}) above the function
${\cal L}$
depending only on the variables
$
(x^{\mu}, \psi^{A}, \psi^{A}_{\mu}).
$
Then we obtain the condition of triviality as follows:
\begin{equation}
\partial_{A} {\cal L} - D_{\mu} \partial_{A}^{\mu} {\cal L} = 0.
\label{trEL1}
\end{equation}
One can easily find out that this equation is equivalent to the following
two equations:
\begin{equation}
\left(\partial^{\mu}_{A}~\partial^{\nu}_{B} + \mu \leftrightarrow \nu
\right) {\cal L} = 0
\label{I1}
\end{equation}
and
\begin{equation}
\partial_{A} {\cal L} -
(\partial_{\mu} + \psi^{B}_{\mu} \partial_{B})
\partial_{A}^{\mu} {\cal L} = 0.
\label{I2}
\end{equation}
From (\ref{I1}) one easily discovers that
${\cal L}$
is a polynomial of maximal degree
$n$
in the first-order derivatives of the following form:
\begin{equation}
{\cal L} = \sum_{k=1}^{n} {1 \over k!}
L^{\mu_{1},...,\mu_{k}}_{A_{1},...,A_{k}}
\prod_{i=1}^{k} \psi^{A_{i}}_{\mu_{i}}
\label{TrI}
\end{equation}
where the functions
$
L^{...}_{...}
$
depend only on the variables
$
(x^{\mu}, \psi^{A})
$
and are completely antisymmetric in the upper indices
$
\mu_{1},...,\mu_{k}
$
and in the lower indices
$
A_{1},...,A_{k}.
$
To exploit the equation (\ref{I2}) one defines the form
\begin{equation}
\Lambda = \varepsilon_{\mu_{1},...,\mu_{n}} \quad
\sum_{k=1}^{n} {1 \over k!} L^{\mu_{1},...,\mu_{k}}_{A_{1},...,A_{k}} \quad
d\psi^{A_{1}} \wedge \cdots \wedge d\psi^{A_{k}} \wedge
dx^{\mu_{k+1}} \wedge \cdots \wedge dx^{\mu_{n}}
\label{lam}
\end{equation}
and shows that (\ref{I2}) is equivalent to
\begin{equation}
d\Lambda = 0.
\end{equation}
2.6 Finally we give similar details for the second-order case.
Let us consider a second-order Lagrangian:
$
{\cal L}(x^{\mu} ,\psi^{A},\psi_{\nu}^{A},\psi_{\nu \rho }^{A})
$
and impose the triviality condition (\ref{trEL}). One obtains explicitly:
\begin{equation}
\partial_{A} {\cal L} - D_{\mu} \partial_{A}^{\mu} {\cal L} +
D_{\mu} D_{\nu} \partial_{A}^{\mu\nu} {\cal L} = 0.
\label{trEL2}
\end{equation}
Writing this equation out in detail, we note a linear dependence on the
fourth-order derivatives; applying the operator
$
\partial^{\zeta_{1}\zeta_{2}\zeta_{3}\zeta_{4}}_{B}
$
we obtain after some rearrangements:
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})}
\partial_{A}^{\mu\rho_{1}} \partial_{B}^{\rho_{2}\rho_{3}}
{\cal L} + (A \leftrightarrow B) = 0.
\label{II1}
\end{equation}
We will use many times from now on a convenient notation, namely by
$
\sum_{(\mu,\nu,\rho)}
$
we will mean the sum over all the {\it cyclic} permutations of the indices
$
\mu, \nu, \rho.
$
We take these equations into account in the original equation (\ref{trEL2})
to simplify it a little by eliminating the dependence on the fourth-order
derivatives. What remains is an equation with a quadratic dependence on the
third-order derivatives. We differentiate this equation twice with respect to
the third-order derivatives and obtain, as before:
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})} \sum_{(\zeta_{1},\zeta_{2},\zeta_{3})}
\partial^{\zeta_{1}\zeta_{2}}_{D}\partial^{\zeta_{3}\rho_{3}}_{A}
\partial^{\rho_{1}\rho_{2}}_{B} {\cal L} = 0.
\label{II2}
\end{equation}
Taking into account (\ref{II1}) and (\ref{II2}) the initial equation becomes
linear in the third-order derivatives; differentiating once with respect to
the third-order derivatives one gets:
\begin{equation}
\sum_{(\zeta_{1},\zeta_{2},\zeta_{3})}
\left[ \left(\partial_{D}^{\zeta_{1}\zeta_{2}} \partial_{A}^{\zeta_{3}} -
\partial_{A}^{\zeta_{1}\zeta_{2}} \partial_{D}^{\zeta_{3}} \right) +
2 \left(\partial_{\mu} + \psi^{B}_{\mu} \partial_{B} +
\psi^{B}_{\mu\rho} \partial^{\rho}_{B}\right)
\partial^{\zeta_{1}\zeta_{2}}_{D} \partial^{\zeta_{3}\mu}_{A} \right]
{\cal L} = 0.
\label{II3}
\end{equation}
From the initial equation what is left is:
\begin{equation}
\begin{array}{c}
\partial_{A} {\cal L} -
(\partial_{\mu} + \psi^{B}_{\mu} \partial_{B}+
\psi^{B}_{\mu\rho} \partial_{B}^{\rho})
\partial_{A}^{\mu} {\cal L} + \nonumber \\
(\partial_{\mu} + \psi^{B}_{\mu} \partial_{B}+
\psi^{B}_{\mu\rho} \partial_{B}^{\rho})
(\partial_{\nu} + \psi^{C}_{\nu} \partial_{C}+
\psi^{C}_{\mu\sigma} \partial_{C}^{\sigma}) {\cal L} \equiv 0.
\end{array}
\label{II4}
\end{equation}
So, equation (\ref{trEL2}) is equivalent to the identities
(\ref{II1})-(\ref{II4}).
Our strategy in the following will be to find the most general solution of the
equations involving only the second-order derivatives i.e. (\ref{II1}) and
(\ref{II2}). Some comments related to the dependence on the first-order
derivatives will be made at the end. Inspecting more carefully the
equation (\ref{II2}), it becomes clear that for
$
N = 1, 2
$
it follows from (\ref{II1}). Also, (\ref{II1}) for
$
A = B
$
coincides with the same equation for
$
N = 1.
$
This is the reason for studying separately the cases
$
N = 1, 2
$
and
$
N \geq 3.
$
\section{Trivial Second-Order Lagrangians in the Case $N = 1$}
3.1 In this case we will omit completely the field indices
$
A, B,...
$
because they take only one value. The dependence on the second-order
derivatives is encoded in the equation (\ref{II1}) which becomes in this case:
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})}
\partial^{\mu\rho_{1}} \partial^{\rho_{2}\rho_{3}} {\cal L} = 0.
\label{N=1}
\end{equation}
As we have said in the Introduction, we intend to find the most general
solution of this equation using induction over
$n$.
To be able to formulate the induction hypothesis we introduce the following
polynomial expressions, called {\it hyper-Jacobians} \cite{BCO}, \cite{O}
(see also \cite{G1}, \cite{G2}) which in this case have the following form:
\begin{equation}
J^{\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}} \equiv
\varepsilon^{\rho_{1},...,\rho_{n}}
\prod_{i=1}^{r} \psi_{\rho_{i}\sigma_{i}} \qquad (r = 0,...,n)
\label{hyperJ}
\end{equation}
We will consistently use the Bourbaki conventions:
$
\sum_{\emptyset} \cdots = 0
$
and
$
\prod_{\emptyset} \cdots = 1.
$
We note the following symmetry properties:
\begin{equation}
J^{\rho_{Q(r+1)},...,\rho_{Q(n)}}_{\sigma_{P(1)},...,\sigma_{P(r)}} =
(-1)^{|P|+|Q|}
J^{\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}}
\quad (r = 0,...,n)
\label{antisym}
\end{equation}
where
$P$
is a permutation of the numbers
$1,...,r$
and
$Q$
is a permutation of the numbers
$r+1,...,n$.
We also note that the following identities are true (see \cite{BCO}):
\begin{equation}
J^{\rho_{r+1},...,\rho_{n-1},\zeta}_{\sigma_{1},...,\sigma_{r-1},\zeta} = 0
\quad (r = 1,...,n-1).
\label{trace}
\end{equation}
In other words, the hyper-Jacobians are completely antisymmetric in the upper
indices, in the lower indices and are also traceless.
In the following we will need the expression for the derivatives
of the hyper-Jacobians. One easily finds the following formula:
\begin{equation}
\begin{array}{c}
\partial^{\mu\nu}
J^{\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}}
= \nonumber \\ {1 \over 2}
\quad \sum_{i=1}^{r} (-1)^{n-i} \delta_{\sigma_{i}}^{\nu}
J^{\rho_{r+1},...,\rho_{n},\mu}_{\sigma_{1},...,\hat{\sigma_{i}},...,\sigma_{r}}
+ (\mu \leftrightarrow \nu)
\quad (r = 0,...,n).
\end{array}
\label{derJ}
\end{equation}
This formula suggests the use of Fock space techniques. Let us spell
this point out in detail. We will consider the functions
$
J^{\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}}
$
as the components of a tensor
$
\{J_{r}\} \in {\cal H} \equiv {\cal F}^{-}(\rm I \mkern -3mu R^{n}) \otimes {\cal F}^{-}(\rm I \mkern -3mu R^{n})
$
where
$
J_{r}
$
belongs to the subspace of homogeneous tensors
$
{\cal H}_{n-r,r}
$
(where
$
{\cal H}_{p,q}
$
is the subspace of homogeneous tensors of degree
$
p, q
$
respectively.)
We will denote by
$
b^{*(\mu)}, c^{*}_{(\mu)}, b_{(\mu)}, c^{(\mu)}
$
the fermionic creation and annihilation operators, respectively, acting in
$
{\cal H}.
$
With these notations one can rewrite (\ref{derJ}) in a more compact way,
namely:
\begin{equation}
\partial^{\mu\nu} J_{r} = \alpha_{r}
[b^{*(\mu)} c^{(\nu)} + b^{*(\nu)} c^{(\mu)}]
J_{r-1}; \qquad \alpha_{r} \equiv (-1)^{r-1} {1 \over 2}
\times \sqrt{r \over n-r+1} \qquad (r = 0,...,n).
\label{derJF}
\end{equation}
Also, the identities (\ref{trace}) can be compactly written as follows:
\begin{equation}
C J_{r} = 0 \qquad (r = 0,...,n)
\label{traceF}
\end{equation}
where we have defined
\begin{equation}
C \equiv b^{(\mu)} c_{(\mu)}.
\label{constraint}
\end{equation}
We need one more notation for our Fock space machinery, namely
$
<\cdot,\cdot>
$
which is the duality form between
$
{\cal H}
$
and
$
{\cal H}^{*}.
$
3.2 We prove now the main result.
\begin{thm}
The general solution of the equations (\ref{N=1}) is of the following form:
\begin{equation}
{\cal L} = \sum_{r=0}^{n}
{\cal L}^{\sigma_{1},...,\sigma_{r}}_{\rho_{r+1},...,\rho_{n}}
J^{\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}}
\label{polyn}
\end{equation}
where the functions
$
{\cal L}^{...}_{...}
$
are independent of
$
\psi_{\mu\nu}
$:
\begin{equation}
\partial^{\mu\nu}
{\cal L}^{\sigma_{1},...,\sigma_{r}}_{\rho_{r+1},...,\rho_{n}} = 0
\quad (r = 0,...,n),
\end{equation}
and have properties analogous to those of the hyper-Jacobians, namely the
(anti)symmetry property:
\begin{equation}
{\cal L}^{\sigma_{P(1)},...,\sigma_{P(r)}}_{\rho_{Q(r+1)},...,\rho_{Q(n)}}
= (-1)^{|P|+|Q|}
{\cal L}^{\sigma_{1},...,\sigma_{r}}_{\rho_{r+1},...,\rho_{n}}
\quad (r = 0,...,n)
\label{antisym-l}
\end{equation}
(where
$P$
is a permutation of the numbers
$1,...,r$
and
$Q$
is a permutation of the numbers
$r+1,...,n$)
and also verify the identities:
\begin{equation}
{\cal L}_{\rho_{r+1},...,\rho_{n-1},\zeta}^{\sigma_{1},...,\sigma_{r-1},\zeta}
= 0
\quad (r = 1,...,n-1)
\label{trace-l}
\end{equation}
(i.e. they are traceless). The function coefficients
$
{\cal L}^{...}_{...}
$
are uniquely determined by
$
{\cal L}
$
and the properties (\ref{antisym-l}) and (\ref{trace-l}) above.
\label{structure}
\end{thm}
{\bf Proof:}
It is a particular case of the main theorem from \cite{G2}. It is convenient
to consider that
$
{\cal L}^{\sigma_{1},...,\sigma_{r}}_{\rho_{r+1},...,\rho_{n}}
$
are the components of a tensor
$
\{{\cal L}^{r}\}
$
in the dual space
$
{\cal H}^{*};
$
explicitly:
$
{\cal L}^{r} \in {\cal H}^{*}_{n-r,r}
$
(where
$
{\cal H}^{*}_{p,q}
$
is the subspace of homogeneous tensors of degree
$
p, q
$
respectively.)
With this trick, formula (\ref{polyn}) can be written in
compact notations as:
\begin{equation}
{\cal L} = \sum_{r=0}^{n} <{\cal L}^{r},J_{r}>.
\label{compact}
\end{equation}
We will denote by
$
b^{*}_{(\mu)}, c^{*(\mu)}, b^{(\mu)}, c_{(\mu)}
$
the fermionic creation and annihilation operators, respectively, acting in
$
{\cal H}^{*}.
$
(i) We now prove the uniqueness statement. So we must show that if
\begin{equation}
\sum_{r=0}^{n} <{\cal L}^{r},J_{r}> = 0
\label{uniqueness}
\end{equation}
then
$
{\cal L}^{r} = 0 \quad r = 0,...,n.
$
To prove this, we apply to the equation (\ref{uniqueness}) the operator
$
\prod_{i=1}^{p} \partial^{\rho_{i}\sigma_{i}} \quad (p \leq n)
$
and then we will make
$
\psi_{\mu\nu} \rightarrow 0.
$
Using (\ref{derJF}) one easily discovers the following equations:
\begin{equation}
\prod_{i=1}^{p} \quad \left[ b^{(\rho_{i})} c^{*(\sigma_{i})} +
b^{(\sigma_{i})} c^{*(\rho_{i})} \right] \quad {\cal L}^{p} = 0,
\qquad (p = 0,...,n).
\label{unicity}
\end{equation}
To analyze this system we first define the operator:
\begin{equation}
{\cal C} \equiv b^{(\rho)} c_{(\rho)}
\end{equation}
and prove by elementary computations that the condition
(\ref{trace-l}) can be rewritten as:
\begin{equation}
{\cal C} {\cal L}^{r} = 0 \quad (r = 0,...,n).
\label{trace-l-compact}
\end{equation}
At this point it is convenient to define the dual tensors
\begin{equation}
\tilde{\cal L}^{\sigma_{1},...,\sigma_{r};\rho_{1},...,\rho_{r}} \equiv
{(-1)^{r} \over \sqrt{r! (n-r)!}} \varepsilon^{\rho_{1},...,\rho_{n}}
{\cal L}^{\sigma_{1},...,\sigma_{r}}_{\rho_{r+1},...,\rho_{n}}.
\label{dual}
\end{equation}
Because we have
\begin{equation}
\tilde{\tilde{\cal L}} = (-1)^{n} {\cal L}
\end{equation}
it is equivalent and more convenient to work in the dual space
$
\tilde{\cal H}.
$
We will denote by
$
b^{*}_{(\mu)}, c^{*}_{(\mu)}
$
and
$
b^{(\mu)}, c^{(\mu)}
$
the fermionic creation and annihilation operators, respectively, acting in
$
\tilde{\cal H}.
$
Then the condition (\ref{trace-l-compact}) rewrites as:
\begin{equation}
\tilde{\cal C} \tilde{\cal L}^{r} = 0 \qquad (r = 0,...,n)
\label{trace-tilde}
\end{equation}
where
\begin{equation}
\tilde{\cal C} \equiv b^{(\mu)} c^{*}_{(\mu)}
\label{constraint-tilde}
\end{equation}
and the equation (\ref{unicity}) becomes:
\begin{equation}
\prod_{i=1}^{p} \quad \left[ b^{(\rho_{i})} c^{(\sigma_{i})} +
b^{(\sigma_{i})} c^{(\rho_{i})} \right] \tilde{\cal L}^{p} = 0,
\qquad (p = 0,...,n).
\label{unicity-tilde}
\end{equation}
Finally, we need the following number operators:
\begin{equation}
N_{b} \equiv b^{*(\rho)} b_{(\rho)}; \quad
N_{c} \equiv c^{*(\rho)} c_{(\rho)}.
\end{equation}
Then one knows that:
\begin{equation}
N_{b} \vert_{\tilde{\cal H}_{p,q}} = p {\bf 1}, \quad
N_{c} \vert_{\tilde{\cal H}_{p,q}} = q {\bf 1}.
\label{number}
\end{equation}
We analyze the system (\ref{unicity-tilde}) using some simple lemmas.
The proofs are elementary and are omitted.
\begin{lemma}
The following formula is true:
\begin{equation}
b^{*}_{(\mu)} c^{*}_{(\nu)} \left[ b^{(\mu)} c^{(\nu)} +
b^{(\nu)} c^{(\mu)} \right] = N_{b} (N_{c} + {\bf 1}) -
\tilde{\cal C}^{*} \tilde{\cal C}.
\end{equation}
\label{inverse}
\end{lemma}
\begin{lemma}
The operator
$\tilde{\cal C}$
commutes with all the operators of the form
$$
b^{(\mu)} c^{(\nu)} + b^{(\nu)} c^{(\mu)}.
$$
Explicitly:
\begin{equation}
\left[ \tilde{\cal C}, b^{(\mu)} c^{(\nu)} + b^{(\nu)} c^{(\mu)} \right] = 0.
\end{equation}
\label{commute}
\end{lemma}
\begin{lemma}
If the tensor
$t$
verifies the identity
$
\tilde{\cal C} t = 0
$
then the tensors
$$
\left[ b^{(\mu)} c^{(\nu)} + b^{(\nu)} c^{(\mu)}
\right] t
$$
also verify this identity.
\label{iteration}
\end{lemma}
We now have:
\begin{prop}
Suppose the tensor
$
t \in \tilde{\cal H}_{r,r} \quad (r = 0,...,n)
$
verifies the system:
\begin{equation}
\prod_{i=1}^{p} \quad \left[ b^{(\rho_{i})} c^{(\sigma_{i})} +
b^{(\sigma_{i})} c^{(\rho_{i})} \right] \quad t = 0.
\end{equation}
Then we have
$
t = 0.
$
\end{prop}
{\bf Proof:} We apply to this system the operator
$
\prod_{i=1}^{p} b^{*}_{(\rho_{i})} c^{*}_{(\sigma_{i})}
$
and make repeated use of the lemmas above.
$\nabla$
The argument involved in the proof above will be called {\it the
unicity argument}.
In conclusion the system (\ref{unicity-tilde}) has the solution
$
\tilde{\cal L}^{p} = 0 \quad (p = 0,...,n).
$
(ii) We start to prove the formula (\ref{polyn}) by induction over
$n$.
For
$
n = 1
$
the derivation of (\ref{polyn}) is elementary.
We suppose that we have the assertion of the theorem for a given $n$
and we prove it for
$
n + 1.
$
In this case the indices
$
\mu,\nu, ...
$
take values (for notational convenience)
$
\mu,\nu, ...= 0,...,n
$
and
$
i,j,...= 1,...,n.
$
If we consider in (\ref{N=1}) that
$
\mu,\rho_{1},\rho_{2},\rho_{3} = 1,...,n
$
then we can apply the induction hypothesis and we get:
\begin{equation}
{\cal L} = \sum_{r=0}^{n} l^{i_{1},...,i_{r}}_{j_{r+1},...,j_{n}}
I_{i_{1},...,i_{r}}^{j_{r+1},...,j_{n}}.
\label{polyn'}
\end{equation}
Here
$
l^{...}_{...}
$
have properties of the type (\ref{antisym-l}) and (\ref{trace-l}) and can
depend on
$
x, \psi^{A}, \psi^{A}_{\mu}
$
{\it and}
$
\psi^{A}_{0\mu}.
$
The expressions
$
I^{...}_{...}
$
are constructed from
$
\psi_{ij}
$
according to the prescription (\ref{hyperJ}).
(iii) We still have at our disposal the relations (\ref{N=1}) where at
least one index takes the value $0$. The computations are rather easy
to do using instead of (\ref{polyn'}) the compact tensor notation
(see (\ref{compact})) and the unicity argument. We give the results directly
for the dual tensors
$
\tilde{l}^{r}.
$
\begin{equation}
(\partial^{00})^{2} \tilde{l}^{r} = 0 \qquad (r = 0,...,n),
\label{eq1}
\end{equation}
\begin{equation}
\partial^{00} \partial^{0i} \tilde{l}^{r} = 0 \qquad (r = 0,...,n),
\label{eq2}
\end{equation}
\begin{equation}
\alpha_{r+1} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\partial^{00} \tilde{l}^{r+1} +
2 \partial^{0i} \partial^{0j} \tilde{l}^{r} = 0 \quad (r = 0,...,n-1),
\label{eq3}
\end{equation}
\begin{equation}
\partial^{0i} \partial^{0j} \tilde{l}^{n} = 0,
\label{eq4}
\end{equation}
\begin{equation}
\sum_{(i,j,k)} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\partial^{0k} \tilde{l}^{r} = 0 \qquad (r = 1,...,n).
\label{eq5}
\end{equation}
The expressions
$
\tilde{l}^{r}
$
are obviously considered as tensors from
$
\tilde{\cal H}_{r,r}
$
verifying the restriction:
\begin{equation}
\tilde{\cal C} \tilde{l}^{r} = 0 \quad (r = 0,...,n).
\label{id}
\end{equation}
As in \cite{G2}, these equations can be solved i.e. one can describe the
most general solution.
From (\ref{eq1}) we have:
\begin{equation}
\tilde{l}^{r} = \tilde{l}^{r}_{(0)} + \psi_{00} \tilde{l}^{r}_{(1)}
\label{sol}
\end{equation}
where the functions
$
\tilde{l}^{r}_{(\alpha)} \quad (\alpha = 0,1; \quad r = 0,...,n)
$
verify:
\begin{equation}
\partial^{00} \tilde{l}^{r}_{(\alpha)} = 0 \quad
(\alpha = 0,1; \quad r = 0,...,n)
\end{equation}
and also verify identities of the type (\ref{id}):
\begin{equation}
\tilde{\cal C} \tilde{l}^{r}_{(\alpha)} = 0 \quad (\alpha = 0,1;
\quad r = 0,...,n).
\label{id-alpha}
\end{equation}
From (\ref{eq2}) we also get:
\begin{equation}
\partial^{0i} \tilde{l}^{r}_{(1)} = 0, \quad (r = 0,...,n)
\label{restr1}
\end{equation}
and finally (\ref{eq3}) - (\ref{eq5}) become:
\begin{equation}
\alpha_{r+1} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\tilde{l}^{r+1}_{(1)} + 2 \partial^{0i} \partial^{0j} \tilde{l}^{r}_{(0)}
= 0, \qquad (r = 0,...,n-1)
\label{eq3'}
\end{equation}
\begin{equation}
\partial^{0i} \partial^{0j} \tilde{l}^{n}_{(0)} = 0
\label{4'}
\end{equation}
\begin{equation}
\sum_{(i,j,k)} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\partial^{0k} \tilde{l}^{r}_{(0)} = 0 \qquad
(r = 0,...,n).
\label{eq5'}
\end{equation}
(iv) We proceed further by applying the operator
$
\partial^{0k}
$
to (\ref{eq3'}); taking into account (\ref{restr1}) we obtain:
\begin{equation}
\partial^{0i} \partial^{0j} \partial^{0k} \tilde{l}^{r}_{(0)} = 0
\qquad (r = 0,...,n-1).
\label{eq3''}
\end{equation}
From this relation one obtains a polynomial structure in
$
\psi_{0i}
$
for
$
\tilde{l}^{r}_{(0)} \quad (r = 0,...,n-1):
$
\begin{equation}
\tilde{l}^{r}_{(0)} = \tilde{l}^{r}_{(00)} +
\psi_{0i} \tilde{l}^{r}_{(0i)} + {1 \over 2}
\psi_{0i} \psi_{0j} \tilde{l}^{r}_{(0ij)} \quad (r = 0,...,n-1).
\label{sol1}
\end{equation}
From (\ref{eq4}) one obtains a similar polynomial structure:
\begin{equation}
\tilde{l}^{n}_{(0)} = \tilde{l}^{n}_{(00)} +
\psi_{0i} \tilde{l}^{n}_{(0i)} .
\label{sol2}
\end{equation}
Moreover we have the following restrictions on the various tensors appearing in
the preceding two formulae:
\begin{equation}
\partial^{0i} \tilde{l}^{r}_{(0\mu)} = 0 \quad (r = 0,...,n); \qquad
\partial^{0k} \tilde{l}^{r}_{(0ij)} = 0 \quad (r = 0,...,n-1)
\label{restr1'}
\end{equation}
and
\begin{equation}
\tilde{\cal C} \tilde{l}^{r}_{(0\mu)} = 0 \quad (r = 0,...,n); \qquad
\tilde{\cal C} \tilde{l}^{r}_{(0ij)} = 0 \quad (r = 0,...,n-1)
\label{id''}
\end{equation}
and we can also impose
\begin{equation}
\tilde{l}^{r}_{(0ij)} = \tilde{l}^{r}_{(0ji)} \quad (r = 0,...,n-1).
\label{restr2}
\end{equation}
If we substitute now (\ref{sol1}) into the original equation (\ref{eq3'})
we obtain
\begin{equation}
\tilde{l}^{r}_{(0ij)} = - 2 \alpha_{r+1}
\left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\tilde{l}^{r+1}_{(1)} \qquad (r = 0,...,n-1).
\label{sol3}
\end{equation}
Finally we substitute the expressions (\ref{sol1}) and (\ref{sol2}) into the
equation (\ref{eq5'}) and we obtain:
\begin{equation}
\sum_{(i,j,k)} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\tilde{l}^{r}_{(0k)} = 0 \qquad (r = 0,...,n)
\label{eq6}
\end{equation}
and
\begin{equation}
\sum_{(i,j,k)} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right]
\tilde{l}^{r}_{(0kl)} = 0 \qquad (r = 0,...,n).
\label{eq7}
\end{equation}
One must check that the expression (\ref{sol3}) for
$
\tilde{l}^{r}_{(0kl)}
$
is compatible with the restrictions (\ref{id''}) by applying
the operator
$
\tilde{\cal C}
$
to this relation. Also one notes that (\ref{sol3}) identically verifies the
equation (\ref{eq7}).
In conclusion we are left to solve only
(\ref{eq6}) together with the restrictions (\ref{restr1'})
and (\ref{id''}). We have the following results:
\begin{lemma}
The following formula is valid
\begin{equation}
\left[ \tilde{\cal C}^{*}, \tilde{\cal C} \right] = N_{b} - N_{c}.
\end{equation}
\end{lemma}
\begin{lemma}
If
$
t \in \tilde{\cal H}_{p,p}
$
verifies
$
{\cal C} t = 0
$
then it also verifies
$
{\cal C}^{*} t = 0
$
and conversely.
\label{c-star}
\end{lemma}
The proofs of these lemmas are elementary and are omitted. Based on them
we have
\begin{lemma}
Let
$
t^{k} \in \tilde{\cal H}_{p,p} \quad (k = 1,...,n)
$
be tensors verifying the restriction
\begin{equation}
\tilde{\cal C} t^{k} = 0
\label{Ct}
\end{equation}
and the system:
\begin{equation}
\sum_{(i,j,k)} \left[ b^{(i)} c^{(j)} + b^{(j)} c^{(i)} \right] t^{k} = 0.
\label{permutation}
\end{equation}
Then one can write {\it uniquely}
$t^{k}$
in the following form:
\begin{equation}
t^{k} = b^{(k)} U + c^{(k)} V
\label{T}
\end{equation}
with
$
U \in \tilde{\cal H}_{p+1,p}
$
and
$
V \in \tilde{\cal H}_{p,p+1}
$
verifying
\begin{equation}
\tilde{\cal C} U = V, \quad \tilde{\cal C}^{*} U = 0, \quad
\tilde{\cal C} V = 0, \quad \tilde{\cal C}^{*} V = U.
\label{UV}
\end{equation}
Here we put by convention
$
\tilde{\cal H}_{p,q} \equiv \{0\}
$
if at least one of the indices
$p$
and
$q$
is negative or equal to $n+1$.
\label{permutation-lemma}
\end{lemma}
{\bf Proof:}
We apply to the equation (\ref{permutation}) the operator
$
b^{*}_{(i)} c^{*}_{(j)}
$
and we find out (after summation over $i$ and $j$ and taking
into account (\ref{Ct})):
\begin{equation}
(p+2)t^{k} = b^{(k)} b^{*}_{(l)} t^{l} + c^{(k)} c^{*}_{(l)} t^{l}.
\end{equation}
So we have the formula from the statement with:
$
U = (p+2)^{-1} b^{*}_{(l)} t^{l}
$
and
$
V = (p+2)^{-1} c^{*}_{(l)} t^{l}.
$
These expressions verify the identities (\ref{UV}). Conversely, if
we have (\ref{T}) and (\ref{UV}) it remains to check that the equations
(\ref{permutation}) and (\ref{Ct}) are indeed identically satisfied.
$\nabla$
From this lemma one can write down the most general solution of
(\ref{eq6}). Combining with the previous results one obtains the most general
expression for the tensors
$
\tilde{l}^{r}.
$
Reverting to the original tensors
$
l^{r}
$
one obtains easily that the most general expression for them is:
\begin{equation}
l^{r} = l^{r}_{(0)} + \psi_{0i} \left[ b^{(i)} U^{r} + c^{*(i)} V^{r}
\right] - 2 \alpha_{r+1} \psi_{0i} \psi_{0j} b^{(i)} c^{*(j)}
l^{r+1}_{(1)} + \psi_{00} l^{r}_{(1)} \qquad (r = 0,...,n).
\label{sol-gen}
\end{equation}
The tensors
$
l^{r}_{(0)}, l^{r}_{(1)} \in \tilde{\cal H}_{r,n-r},
U^{r} \in \tilde{\cal H}_{r+1,n-r}, V^{r} \in \tilde{\cal H}_{r,n-r-1}
$
are not completely arbitrary; they must satisfy the following relations:
\begin{equation}
{\cal C} l^{r}_{(\alpha)} = 0, \quad (\alpha = 0,1; \quad r = 0,...,n),
\label{iden1}
\end{equation}
\begin{equation}
{\cal C} U^{r} = V^{r}, \quad {\cal C} V^{r} = 0, \quad
{\cal C}^{*} U^{r} = 0, \quad {\cal C}^{*} V^{r} = U^{r} \quad
(r = 0,...,n)
\label{iden2}
\end{equation}
and
\begin{equation}
\partial^{\mu\nu} l^{r}_{(\alpha)} = 0, \quad
\partial^{\mu\nu} U^{r} = 0, \quad
\partial^{\mu\nu} V^{r} = 0, \qquad
(r = 0,...,n; \quad \alpha = 0,1).
\label{iden3}
\end{equation}
The structure of the tensors
$
l^{r} \quad (r = 0,...,n)
$
is completely elucidated.
(v) It remains to introduce these expressions for
$
l^{r}
$
in (\ref{polyn'}) and regroup the terms. As in \cite{G1}, \cite{G2}
one obtains the desired formula (\ref{polyn}) for
$
n+1
$
with the tensors
$
{\cal L}^{r}
$
expressed in terms of the tensors defined in the proof above. Finally one
must check that the tensors
$
{\cal L}^{r}
$
also verify the induction hypothesis i.e. the identities (\ref{trace-l}).
This is done after some computations using (\ref{iden1}) - (\ref{iden3})
and the induction is finished.
\vrule height 6pt width 6pt depth 6pt
\begin{rem}
We make a last comment concerning the unicity statement from the proof. First,
the non-uniqueness is easy to explain because if one adds to the tensors
$
{\cal L}^{...}_{...}
$
contributions containing at least a factor
$
\delta_{\rho_{j}}^{\sigma_{i}}
$
then it immediately follows from the identity (\ref{trace}) that the right hand
side of the formula (\ref{polyn}) is not changed. So, the constraint
(\ref{trace-l}) is a way of eliminating this type of contribution while
respecting at the same time the antisymmetry properties of the functions
$
{\cal L}^{...}_{...}
$
i.e. to obtain the {\it traceless} part of
$
{\cal L}^{...}_{...}.
$
In this context we mention that such a decomposition of a tensor into a
traceless part and a remainder containing at least one delta factor holds
under extremely general conditions, as proved in \cite{K3}.
\end{rem}
3.3 Let us prepare the ground for the analysis of the more complicated case
$
N \geq 2.
$
First we note that if in analogy to (\ref{dual}) we define:
\begin{equation}
\tilde{J}_{\sigma_{1},...,\sigma_{r};\rho_{1},...,\rho_{r}} \equiv
{(-1)^{r} \over \sqrt{r! (n-r)!}} \varepsilon_{\rho_{1},...,\rho_{n}}
J_{\sigma_{1},...,\sigma_{r}}^{\rho_{r+1},...,\rho_{n}}.
\label{dual-hyperJ}
\end{equation}
then one can rewrite (\ref{polyn}) as follows:
\begin{equation}
{\cal L} = \sum_{r=0}^{n}
\tilde{\cal L}^{\sigma_{1},...,\sigma_{r};\rho_{1},...,\rho_{r}}
\tilde{J}_{\sigma_{1},...,\sigma_{r};\rho_{1},...,\rho_{r}}.
\label{polyn-tilde}
\end{equation}
We intend to use equation (\ref{II1}) first for the case
$
A = B = 1,2,...,N.
$
It is clear that we will be able to apply the theorem above. To do this we
define in analogy to (\ref{hyperJ}) and (\ref{dual-hyperJ}) the expressions
\begin{equation}
J^{(A)\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}} \equiv
\varepsilon^{\rho_{1},...,\rho_{n}}
\prod_{i=1}^{r} \psi_{\rho_{i}\sigma_{i}}^{A} \qquad (r = 0,...,n;
\quad A = 1,...,N)
\label{hyperJN}
\end{equation}
and
\begin{equation}
\tilde{J}^{(A)}_{\sigma_{1},...,\sigma_{r};\rho_{1},...,\rho_{r}} \equiv
{(-1)^{r} \over \sqrt{r! (n-r)!}} \varepsilon_{\rho_{1},...,\rho_{n}}
J_{\sigma_{1},...,\sigma_{r}}^{(A)\rho_{r+1},...,\rho_{n}}.
\label{dual-hyperJN}
\end{equation}
Then the equations (\ref{II1}) for
$
A = B
$
will produce an expression of the following form:
\begin{equation}
{\cal L} = \sum_{r,s,...=0}^{n}
{\cal L}^{\sigma_{1},...,\sigma_{r};\mu_{1},...,\mu_{s};
\cdots}_{\rho_{r+1},...,\rho_{n};\nu_{s+1},...,\nu_{n};\cdots}
J^{\rho_{r+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{r}}
J^{\nu_{s+1},...,\nu_{n}}_{\mu_{1},...,\mu_{s}} \cdots
\label{polynN}
\end{equation}
where the functions
$
{\cal L}^{...}_{...}
$
are verifying the following properties:
\begin{equation}
\partial^{\mu\nu}_{A} {\cal L}^{...}_{...} = 0
\end{equation}
\begin{equation}
{\cal L}^{\cdots;\sigma_{P(1)},...,\sigma_{P(r)};\cdots}_{\cdots;
\rho_{Q(r+1)},...,\rho_{Q(n)};\cdots} =
(-1)^{|P|+|Q|}
{\cal L}^{\cdots;\sigma_{1},...,\sigma_{r};\cdots}_{\cdots;
\rho_{r+1},...,\rho_{n};\cdots}
\end{equation}
(where
$P$
is a permutation of the numbers
$
1,...,r
$
and
$Q$
is a permutation of the numbers
$
r+1,...,n
$)
and
\begin{equation}
{\cal L}^{\cdots;\sigma_{1},...,\sigma_{r-1},\zeta;\cdots}_{\cdots;
\rho_{r+1},...,\rho_{n-1},\zeta;\cdots} = 0.
\end{equation}
Again the analysis is much simplified if one uses tensor notations.
Generalizing in an obvious way the scalar case the functions
$
{\cal L}^{...}_{...}
$
will become the components of a tensor
$
{\cal L} \in ({\cal H}^{*})^{\otimes N}
$
and one can write (\ref{polynN}) in a more compact manner:
\begin{equation}
{\cal L} = \sum_{r_{1},...,r_{N}=0}^{n}
<{\cal L}^{r_{1},...,r_{N}}, J^{(1)}_{r_{1}} \otimes \cdots \otimes J^{(N)}_{r_{N}}>
\label{polynN-compact}
\end{equation}
where
$
J^{(1)}_{r_{1}} \otimes \cdots \otimes J^{(N)}_{r_{N}} \in
{\cal H}^{\otimes N}
$
and
$
<\cdot,\cdot>
$
is the duality form.
Let
$
b^{*(\mu)}_{(A)}, c^{*(A)}_{(\mu)}, b^{(A)}_{(\mu)},c_{(A)}^{(\mu)}
$
be the creation and the annihilation operators acting in
$
{\cal H}^{\otimes N}
$
and
$
b^{*(A)}_{(\mu)}, c^{*(\mu)}_{(A)}, b^{(\mu)}_{(A)},c_{(\mu)}^{(A)}
$
the corresponding operators from
$
({\cal H}^{*})^{\otimes N}.
$
Then the constraints of the type (\ref{trace-l-compact}) can be written as
follows:
\begin{equation}
{\cal C}_{A} {\cal L}^{r_{1},...,r_{N}} = 0 \quad (A = 1,...,N)
\label{trace-lN}
\end{equation}
where we have defined:
\begin{equation}
{\cal C}_{A} \equiv b^{(\mu)}_{(A)} c_{(\mu)}^{(A)} \quad (A = 1,...,N).
\end{equation}
The expressions of the type (\ref{polynN}) or (\ref{polynN-compact}) are unique
in the sense that
$
{\cal L}
$
uniquely determines the function coefficients
$
{\cal L}^{r_{1},...,r_{N}};
$
this follows directly from the uniqueness statement of theorem \ref{structure}.
It is convenient to work with the dual tensors
$
\tilde{\cal L}^{r_{1},...,r_{N}} \in \tilde{\cal H}^{\otimes N}
\quad (A = 1,...,N)
$
defined analogously as in (\ref{dual}) which will verify the constraints:
\begin{equation}
\tilde{\cal C}_{A} \tilde{\cal L}^{r_{1},...,r_{N}} = 0 \quad (A = 1,...,N)
\label{C}
\end{equation}
where
\begin{equation}
\tilde{\cal C}_{A} \equiv b^{(\mu)}_{(A)} c_{(\mu)}^{*(A)} \quad (A = 1,...,N)
\end{equation}
are the expressions of the constraints in the dual space.
Our goal in the next two sections will be to prove the following result:
\begin{thm}
The most general solution of the equations (\ref{II1}) and (\ref{II2}) is of
the form:
\begin{equation}
{\cal L} = \sum_{r_{1},...,r_{N}=0}^{n}
<\tilde{\cal L}^{r_{1},...,r_{N}},
\tilde{J}^{(1)}_{r_{1}} \otimes \cdots \otimes \tilde{J}^{(N)}_{r_{N}}>.
\label{polynN-compact-dual}
\end{equation}
The tensors
$
\tilde{\cal L}^{\sigma_{1},...,\sigma_{r};\rho_{1},...,\rho_{r};
\mu_{1},...,\mu_{s};\nu_{1},...,\nu_{s};\cdots}
$
verify the usual antisymmetry and tracelessness properties, but moreover
they verify the property of complete antisymmetry in {\it all} the indices
$
\sigma_{1},...,\sigma_{r},\mu_{1},...,\mu_{s},...
$
i.e. they verify the identities
\begin{equation}
\left[ b^{(\mu)}_{(A)} b^{(\nu)}_{(B)} + (\mu \leftrightarrow \nu) \right]
\tilde{\cal L}^{r_{1},...,r_{N}} = 0.
\label{antisymmetry}
\end{equation}
\label{structureN}
\end{thm}
To do this we will use the remaining equations i.e. (\ref{II1}) for
$
A \not= B
$
and (\ref{II2}). Using the compact expression (\ref{polynN-compact}) one
obtains from (\ref{II1}):
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})} \left[
b^{(\mu)}_{(A)} c^{(\rho_{1})}_{(A)} + b^{(\rho_{1})}_{(A)} c^{(\mu)}_{(A)}
\right] \left[
b^{(\rho_{2})}_{(B)} c^{(\rho_{3})}_{(B)} +
b^{(\rho_{3})}_{(B)} c^{(\rho_{2})}_{(B)} \right]
\tilde{\cal L}^{r_{1},...,r_{N}} + (A \leftrightarrow B) = 0
\label{II1-Fock}
\end{equation}
and from (\ref{II2}) it follows:
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})} \sum_{(\zeta_{1},\zeta_{2},\zeta_{3})}
\left[ b^{(\zeta_{1})}_{(D)} c^{(\zeta_{2})}_{(D)} +
b^{(\zeta_{2})}_{(D)} c^{(\zeta_{1})}_{(D)} \right]
\left[ b^{(\rho_{1})}_{(B)} c^{(\rho_{2})}_{(B)} +
b^{(\rho_{2})}_{(B)} c^{(\rho_{1})}_{(B)} \right]
\left[ b^{(\zeta_{3})}_{(A)} c^{(\rho_{3})}_{(A)} +
b^{(\rho_{3})}_{(A)} c^{(\zeta_{3})}_{(A)} \right]
\tilde{\cal L}^{r_{1},...,r_{N}} = 0
\label{II2-Fock}
\end{equation}
For the case
$
N = 2
$
to be analysed in the next Section one easily sees that (\ref{II2-Fock})
follows from (\ref{II1-Fock}), so we will have to analyse only the first
equation for
$
A \not= B.
$
For the case
$
N \geq 3
$
to be analysed in Section 5 it will be convenient to start with
(\ref{II2-Fock}).
\section{Trivial Second-Order Lagrangians in the Case $N = 2$}
As we have said before we analyse (\ref{II1-Fock}) in the case
$
A \not= B.
$
It is convenient to redefine
$
b^{(\mu)}_{(A)} \rightarrow d^{\mu}, c^{(\mu)}_{(A)} \rightarrow e^{\mu},
b^{(\mu)}_{(B)} \rightarrow b^{\mu}, c^{(\mu)}_{(B)} \rightarrow c^{\mu}.
$
Then the equation (\ref{II1-Fock}) takes the generic form:
\begin{equation}
\sum b^{\mu} c^{\nu} d^{\rho} e^{\sigma} t = 0
\label{II1-compact}
\end{equation}
where the sum runs over all the permutations of indices
$
\mu, \nu, \rho, \sigma
$
and
$t$
is an arbitrary tensor from
$
\tilde{\cal H}_{k,k',r,r}.
$
Here
$k, k', r, r$ are the eigenvalues of the operators
$
N_{b}, N_{c}, N_{d}
$
and
$
N_{e}
$
respectively. We now define the following operators:
\begin{equation}
A \equiv c^{\rho} d^{*}_{\rho}, \quad
B \equiv d^{\rho} e^{*}_{\rho}, \quad
C \equiv e^{\rho} c^{*}_{\rho}, \quad
Z \equiv b^{\rho} c^{*}_{\rho}
\label{ABC}
\end{equation}
and there are some constraints to be taken into account, namely (see (\ref{C})
and lemma \ref{c-star}):
\begin{equation}
B t = 0, \quad B^{*} t = 0
\label{B}
\end{equation}
and
\begin{equation}
Z t = 0, \quad Z^{*} t = 0.
\label{Z}
\end{equation}
We will use in the following only the constraint (\ref{B}). We start the proof
of theorem \ref{structureN} by a series of lemmas and propositions.
\begin{lemma}
If the tensor
$t$
verifies the equation (\ref{II1-compact}) and the constraints (\ref{B}) then it
also verifies the equation
\begin{equation}
(r+2) \left[ (k+1) (r+3) {\bf 1} - M \right] b^{\mu} t =
c^{\mu} U_{(0)} + d^{\mu} V_{(0)} + e^{\mu} W_{(0)}
\label{II1-contraction}
\end{equation}
where
\begin{equation}
M \equiv A^{*} A + C C^{*}
\label{M}
\end{equation}
and
$
U_{(0)}, V_{(0)}, W_{(0)}
$
are some tensors constructed from
$t$. (We will not need in the following their explicit expressions.)
\end{lemma}
{\bf Proof:} One applies to the equation (\ref{II1-compact}) the operator
$
c^{*}_{\nu} d^{*}_{\rho} e^{*}_{\sigma}
$
and uses the constraints (\ref{B}).
$\nabla$
We want to prove that the operator in the square brackets from
(\ref{II1-contraction}) is invertible. To do this we notice that
$M$
can be restricted to the subspace of
$
\tilde{\cal H}_{k,k',r,r}
$
determined by
$
Ker(B) \cap Ker(B^{*}).
$
We denote this subspace by
$h$
and by
$M'$
the restriction of
$M$
to
$h$.
Then we have
\begin{prop}
The spectrum of the operator
$M'$
is described by:
\begin{equation}
{\it Spec}(M') \subset \{ v(v+r-k+2) | v = 0,...,v_{0}\}
\label{spectrum}
\end{equation}
where
$
v_{0} \equiv min\{k, 2(n-r)\}.
$
In particular we have
\begin{equation}
{\it Spec}(M') \subset [0, k (r+2)].
\label{S}
\end{equation}
\label{spectrum-M}
\end{prop}
{\bf Proof:}
One finds out after some tedious but straightforward computations that if
the tensor
$t$
verifies the eigenvalue equation
\begin{equation}
M' t = \lambda t
\label{X}
\end{equation}
then one also has
\begin{equation}
M' A^{s} (C^{*})^{u} t = \lambda_{s,u} A^{s} (C^{*})^{u} t.
\label{eigen}
\end{equation}
Here
$s$
and
$u$
are natural numbers verifying
$
s, u \leq n-r, \quad s+u \leq k
$
and the expression for the eigenvalue is
\begin{equation}
\lambda_{s,u} \equiv \lambda - \Lambda_{s+u}
\end{equation}
where
\begin{equation}
\Lambda_{v} \equiv v(v+r-k+2).
\end{equation}
The proof of the formula above is most easily done by induction: first one
considers the case
$
s = 0
$
and uses induction over
$u$
and next one proves (\ref{eigen}) by induction over
$s$.
Needless to say, one must make use of the various anticommutation relations
between the operators
$A, B, C$
and their Hermitian conjugates.
Now one supposes that
$
\lambda \not\in \{\Lambda_{v} \,|\, v = 0,...,v_{0}\},
$
i.e.
$
\lambda \not= \Lambda_{v} \quad (v= 0,1,...,v_{0}).
$
In this case it follows that
$
\lambda_{s,u} \not= 0
$
and we will be able to prove that one has
\begin{equation}
A^{s} (C^{*})^{u} t = 0 \quad (s,u \leq n-r, \quad s+u \leq k).
\label{recurrence}
\end{equation}
We analyse separately two cases. If
$
2r + k \leq 2n
$
we can take in (\ref{eigen})
$s$
and
$u$
such that
$
s + u = k.
$
One obtains that
\begin{equation}
\lambda_{s,u} A^{s} (C^{*})^{u} t = 0.
\end{equation}
Because
$
\lambda_{s,u} \not= 0
$
we have (\ref{recurrence}) for
$
s + u = k.
$
Next, one proves this relation for
$
s + u \leq k
$
by recurrence (downwards) over
$
v = s + u
$
using again (\ref{eigen}). In the case
$
2r + k > 2n
$
the proof of (\ref{recurrence}) is similar, only one has to start the induction
downwards from
$
s + u = 2(n-r).
$
The relation (\ref{recurrence}) is proved. If we take in this relation
$
s = u = 0
$
we get
$
t = 0.
$
In conclusion, if
$
\lambda
$
does not belong to the set
$
\{\Lambda_{v} \,|\, v = 0,...,v_{0}\}
$
then the equation (\ref{X}) does not have non-trivial solutions. Because the
operator
$M'$
lives in a finite dimensional Hilbert space the first assertion of the
proposition follows. The second assertion follows from the first and from
$
M \geq 0.
$
$\vrule height 6pt width 6pt depth 6pt$
\begin{cor}
The matrix
$
(k+1) (r+3) {\bf 1} - M'
$
is invertible.
\end{cor}
{\bf Proof:}
$
(k+1) (r+3)
$
does not belong to the spectrum of
$M'$.
$\nabla$
Now we come back to the equation (\ref{II1-contraction}); using the corollary
above and the finite dimensional functional calculus, it is not hard to prove
that one obtains from this equation the following consequence:
\begin{equation}
b^{\mu} t = c^{\mu} U + d^{\mu} V + e^{\mu} W
\label{UVW}
\end{equation}
where
$
U, V, W
$
are some tensors verifying
\begin{equation}
B U = 0, \quad B V = W, \quad B W = 0, \quad B^{*} U = 0, \quad B^{*} V = 0,
\quad B^{*} W = V.
\end{equation}
A structure formula of the type (\ref{UVW}) is valid for every tensor
$
\tilde{\cal L}^{r,k}
$
appearing in the structure formula for the trivial Lagrangian. It is important
to note that in deriving this result we have used only the constraints
(\ref{B}) and not the constraints (\ref{Z}). So, the tensors
$
\tilde{\cal L}^{r,k}
$
are not uniquely fixed by the unicity argument. We use the possibility of
redefining these tensors to our advantage. Indeed, if one inserts formulas of
the type (\ref{UVW}) into the expression of the Lagrangian one can show that
the contribution following from the first term is null (one must use the
tracelessness of the hyper-Jacobians). In other words, one can redefine the
tensors
$
\tilde{\cal L}^{r,k}
$
such that one has:
\begin{equation}
(r+2) b^{\mu} \tilde{\cal L}^{r,k} = d^{\mu} V + e^{\mu} W.
\end{equation}
Now one can make this formula more precise if one uses lemma
\ref{permutation-lemma} in a clever way; namely, the following relation holds:
\begin{equation}
(r+2) b^{\mu} \tilde{\cal L}^{r,k} = (d^{\mu} {\cal D} + e^{\mu} {\cal E})
\tilde{\cal L}^{r,k}
\label{L}
\end{equation}
where we have introduced the new notations:
\begin{equation}
{\cal D} \equiv b^{\mu} d^{*}_{\mu}, \quad {\cal E} \equiv b^{\mu} e^{*}_{\mu}.
\end{equation}
Now we have
\begin{lemma}
The following formula is valid:
\begin{equation}
{(r+p+1)! \over (r+1)!} b^{\mu_{1}} \cdots b^{\mu_{p}} \tilde{\cal L}^{r,k} =
(-1)^{[p/2]} \sum_{s=0}^{p} (-1)^{(p+1)s} C^{s}_{p}
{\cal A}_{p} (d^{\mu_{1}} \cdots d^{\mu_{s}} e^{\mu_{s+1}} \cdots e^{\mu_{p}})
{\cal D}^{s} {\cal E}^{p-s} \tilde{\cal L}^{r,k}.
\label{bbb}
\end{equation}
Here
$
[m]
$
is the integer part of
$m$,
$
C^{s}_{p} \equiv {p! \over s! (p-s)!}
$
and
$
{\cal A}_{p}
$
is the operator of antisymmetrization in the indices
$
\mu_{1},...,\mu_{p}.
$
\end{lemma}
{\bf Proof:} By very long computations using induction over
$p$.
Indeed, for
$
p = 0
$
the formula is trivial and for
$
p = 1
$
we have (\ref{L}).
$\nabla$
In particular, if we take in (\ref{bbb})
$
p = k
$
one obtains after some manipulations
\begin{equation}
{(r+k+1)! \over (r+1)!} b^{\mu_{1}} \cdots b^{\mu_{k}} \tilde{\cal L}^{r,k} =
\sum_{s=0}^{k} (-1)^{(k+1)s+[s/2]} {1 \over (k-s)!}
{\cal A}_{k} (d^{\mu_{1}} \cdots d^{\mu_{s}} e^{\mu_{s+1}} \cdots e^{\mu_{k}})
B^{k-s} L^{r,k}
\label{bcd}
\end{equation}
where
$
L^{r,k} \in \tilde{\cal H}_{r+k,r,0,k}
$
is given by
\begin{equation}
L^{r,k} = {\cal D}^{k} \tilde{\cal L}^{k,r}.
\end{equation}
Using indices the expression of
$
{\cal L}
$
becomes
\begin{equation}
\begin{array}{c}
{\cal L} = \sum_{r,k=0}^{n} {(r+1)! \over (r+k+1)!} \sum_{s=0}^{k}
(-1)^{(k+1)s+[s/2]} {1 \over (k-s)!}
(B^{k-s} L)^{\mu_{1},...\mu_{s}\rho_{1},...,\rho_{r};
\mu_{s+1},...,\mu_{k}\sigma_{1},...,\sigma_{r};\emptyset;\nu_{1},...,\nu_{k}}
\nonumber \\
\tilde{J}^{(1)}_{\rho_{1},...,\rho_{r};\sigma_{1},...,\sigma_{r}}
\tilde{J}^{(2)}_{\mu_{1},...,\mu_{k};\nu_{1},...,\nu_{k}}
\end{array}
\end{equation}
Now one uses the explicit expression for the operator
$B$
and the tracelessness of the hyper-Jacobians to show that one can replace in
the formula above
$
B^{k-s}
$
by a sum of lower powers of
$B$.
In the end one finds out by recurrence that the sum over
$s$
disappears and the formula above is transformed into:
\begin{equation}
{\cal L} = \sum_{r,k=0}^{n} {(r+1)! \over (r+k+1)!} (-1)^{k(k-1)/2}
L^{\mu_{1},...\mu_{k}\rho_{1},...,\rho_{r};
\sigma_{1},...,\sigma_{r};\emptyset;\nu_{1},...,\nu_{k}}
\tilde{J}^{(1)}_{\rho_{1},...,\rho_{r};\sigma_{1},...,\sigma_{r}}
\tilde{J}^{(2)}_{\mu_{1},...,\mu_{k};\nu_{1},...,\nu_{k}}.
\label{structureII}
\end{equation}
In other words, by redefining
$
{\cal L}^{r,k} \rightarrow {\cal L}^{r,k}_{1}
$
where:
\begin{equation}
{\cal L}_{1}^{\rho_{1},...,\rho_{r};\sigma_{1},...,\sigma_{r};
\mu_{1},...\mu_{k};\nu_{1},...,\nu_{k}} \equiv
(-1)^{k(k-1)/2} {(r+1)! \over (r+k+1)!}
L^{\mu_{1},...\mu_{k}\rho_{1},...,\rho_{r};
\sigma_{1},...,\sigma_{r};\emptyset;\nu_{1},...,\nu_{k}}
\label{redefine}
\end{equation}
one preserves the formula (\ref{structureII}) and has moreover
\begin{equation}
(b^{\mu} d^{\nu} + b^{\nu} d^{\mu}) {\cal L}^{r,k}_{1} = 0.
\end{equation}
This observation finishes the proof of theorem \ref{structureN} for the
case
$
N = 2.
$
\section{Trivial Second-Order Lagrangians in the Case $N \geq 3$}
In this case we start with the equation (\ref{II2-Fock}) and note that it
gives something non-trivial {\it iff} all three indices
$
A, B, D
$
are distinct from one another. In this case it is convenient to redefine
$$
b^{(\mu)}_{(D)} \rightarrow d^{\mu},
c^{(\mu)}_{(D)} \rightarrow e^{\mu},
b^{(\mu)}_{(B)} \rightarrow f^{\mu},
c^{(\mu)}_{(B)} \rightarrow g^{\mu},
b^{(\mu)}_{(A)} \rightarrow b^{\mu},
c^{(\mu)}_{(A)} \rightarrow c^{\mu}
$$
and to obtain an equation of the following type:
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})} \sum_{(\zeta_{1},\zeta_{2},\zeta_{3})}
\left( d^{\zeta_{1}} e^{\zeta_{2}} +
d^{\zeta_{2}} e^{\zeta_{1}} \right)
\left( f^{\rho_{1}} g^{\rho_{2}} +
f^{\rho_{2}} g^{\rho_{1}} \right)
\left( b^{\zeta_{3}} c^{\rho_{3}} +
b^{\rho_{3}} c^{\zeta_{3}} \right) t = 0
\label{II2-compact}
\end{equation}
where
$
t \in \tilde{\cal H}_{r,r,t,t,k,k};
$
here
$
r_{D} = r, r_{B} = t, r_{A} = k.
$
Moreover, the tensor
$t$
must satisfy the constraints (\ref{B}) and (\ref{Z}). It is convenient to
define
\begin{equation}
t_{1} \equiv \sum_{(\rho_{1},\rho_{2},\rho_{3})}
\left( f^{\rho_{1}} g^{\rho_{2}} +
f^{\rho_{2}} g^{\rho_{1}} \right) b^{\rho_{3}} t
\label{tensor-X}
\end{equation}
and
\begin{equation}
t_{2} \equiv \sum_{(\rho_{1},\rho_{2},\rho_{3})}
\left( f^{\rho_{1}} g^{\rho_{2}} +
f^{\rho_{2}} g^{\rho_{1}} \right) c^{\rho_{3}} t.
\label{tensor-Y}
\end{equation}
Let us note that
$
t_{1} \in \tilde{\cal H}_{r,r,t-1,t-1,k-1,k}
$
and
$
t_{2} \in \tilde{\cal H}_{r,r,t-1,t-1,k,k-1}.
$
Then we can rewrite (\ref{II2-compact}) in an equivalent way if we use lemma
\ref{permutation-lemma} and the constraint (\ref{B}), namely:
\begin{equation}
(r+2) (c^{\mu} t_{1} + b^{\mu} t_{2}) =
d^{\mu} (A t_{1} + {\cal D} t_{2}) + c^{\mu} (C^{*} t_{1} + {\cal E} t_{2}).
\label{II2-XY}
\end{equation}
We can formulate now the central result of this Section:
\begin{prop}
The equation (\ref{II2-XY}) above together with the constraints (\ref{B}) and
(\ref{Z}) has only the trivial solution
$
t_{1} = 0.
$
\end{prop}
{\bf Proof:} The proof is extremely tedious and we give only a minimal number
of details.
(i) We apply to the equation (\ref{II2-XY}) the operator
$
c^{*}_{\mu}.
$
Let us define the following operators
\begin{equation}
Q \equiv A^{*} {\cal D} + C {\cal E}
\label{Q}
\end{equation}
and
\begin{equation}
R \equiv M + Q Z^{*}.
\label{R}
\end{equation}
Then one can write the result in a very simple form:
\begin{equation}
R t_{1} = (r+2) (k+1) t_{1}
\label{RX}
\end{equation}
so one must investigate this eigenvalue problem.
(ii) We denote by
$h'$
the subspace of
$
\tilde{\cal H}_{r,r,t-1,t-1,k-1,k}
$
defined by
$
Ker(B) \cap Ker(B^{*}) \cap Ker(Z)
$
and note that
$
t_{1} \in h'.
$
Next one shows by simple computations that the operator
$R$
commutes with the operators
$
B, B^{*}, Z
$
so it makes sense to restrict it to the subspace
$h'.$
The same assertion is true with respect to the operator
$R^{*}.
$
We will denote these restrictions by
$R'$
and
$R'^{*}$
respectively. From these observations it follows in particular that the operator
$
R - R^{*}
$
leaves the subspace
$h'$
invariant. But one computes that
\begin{equation}
R - R^{*} = Q Z^{*} - Z Q^{*} = Z^{*} Q - Q^{*} Z
\label{RQZ}
\end{equation}
and from the last equality it easily follows that
\begin{equation}
(t_{1}, (R - R^{*}) t_{1}') = 0 \quad \forall t_{1}, t_{1}' \in h'.
\end{equation}
Because
$
R - R^{*}
$
leaves the subspace
$h'$
invariant we conclude that
\begin{equation}
R' - R'^{*} = 0.
\end{equation}
Combining with (\ref{RQZ}) one discovers that:
\begin{equation}
(Q Z^{*} - Z Q^{*}) t_{1} = 0 \quad \forall t_{1} \in h'.
\end{equation}
Applying to this relation the operator
$Z$
produces after some computations:
\begin{equation}
Q t_{1} = 0.
\label{QX}
\end{equation}
Let us introduce the following notation:
\begin{equation}
N \equiv {\cal D}^{*} {\cal D} + {\cal E}^{*} {\cal E}.
\label{N}
\end{equation}
Then, taking into account (\ref{QX}) one can obtain from (\ref{RX}) the
following relation:
\begin{equation}
(2M - N) t_{1} = (r+2) (k+1) t_{1}.
\label{MN}
\end{equation}
(iii) In the same way one can obtain similar results concerning the tensor
$t_{2}$.
More precisely one denotes by
$h''$
the subspace of
$
\tilde{\cal H}_{r,r,t-1,t-1,k,k-1}
$
defined by
$
Ker(B) \cap Ker(B^{*}) \cap Ker(Z^{*})
$
and notes that
$
t_{2} \in h''.
$
Then one shows that similarly to (\ref{QX}) and (\ref{MN}) one has:
\begin{equation}
Q^{*} t_{2} = 0
\label{Q-star}
\end{equation}
and
\begin{equation}
(2N - M) t_{2} = (r+2) (k+1) t_{2}.
\label{NM}
\end{equation}
(iv) The relation (\ref{QX}) suggests investigating whether it is possible to
restrict the operator
$
2M - N
$
to the subspace
$
h_{Q} \equiv h' \cap Ker(Q).
$
One shows by elementary but rather long computations that this assertion is
true for the operators
$M, N$
so it makes sense to define by
$
M_{Q}, N_{Q}
$
the corresponding restrictions. So, indeed the operator
$
2M - N
$
leaves the subspace
$h_{Q}$
invariant.
(v) Moreover one computes the commutator of
$M$
and
$N$
and shows that:
\begin{equation}
[M_{Q}, N_{Q}] = 0.
\end{equation}
It follows that the operators
$
M_{Q}, N_{Q}
$
can be simultaneously diagonalized. Let us suppose that the equation (\ref{MN})
can have a non-trivial solution. Then it is easy to show that there must exist
at least one non-trivial solution
$t_{1}$
of this equation such that
$t_{1}$
is a simultaneous eigenvector of the operators
$M, N.$
Explicitly:
\begin{equation}
M t_{1} = \lambda t_{1}, \quad N t_{1} = \mu t_{1}
\label{MNX}
\end{equation}
and
\begin{equation}
2 \lambda - \mu = (k+1) (r+2).
\end{equation}
(vi) Let us notice now that the relation (\ref{Q-star}) can be written as
follows:
\begin{equation}
Q^{*} Z^{*} t_{1} = 0.
\end{equation}
One applies to this relation, first the operator
$Z$
and next the operator
$Q$;
after some computations one gets:
\begin{equation}
(M - N)^{2} t_{1} = [Q, Q^{*}] t_{1}.
\end{equation}
One evaluates explicitly the commutator and uses (\ref{MNX}) to obtain the
following restriction on the eigenvalue
$
\lambda
$:
\begin{equation}
(\lambda - \lambda_{0})^{2} + (r-k+2) (\lambda - \lambda_{0}) + \lambda = 0
\label{lambda}
\end{equation}
where we have denoted for simplicity
\begin{equation}
\lambda_{0} \equiv (k+1) (r+2)
\end{equation}
i.e. the eigenvalue appearing in the right hand side of the equation
(\ref{MN}). Now it is easy to prove that the solutions of the equation
(\ref{lambda}) (if they exist at all) must be greater than
$
k (r+2).
$
But this conflicts with Proposition \ref{spectrum-M} (see formula (\ref{S}))
so we conclude that the equation (\ref{MN}) has only the trivial solution in
the subspace
$
h_{Q}.
$
This finishes the proof of the assertion from the statement.
$\vrule height 6pt width 6pt depth 6pt$
Remembering the definition of the tensor
$t_{1}$
we have just found out that we have
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})}
\left( f^{\rho_{1}} g^{\rho_{2}} +
f^{\rho_{2}} g^{\rho_{1}} \right) b^{\rho_{3}} \tilde{\cal L}^{...} = 0.
\label{fgb}
\end{equation}
Applying
$
Z^{*}
$
one finds out that also
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})}
\left( f^{\rho_{1}} g^{\rho_{2}} +
f^{\rho_{2}} g^{\rho_{1}} \right) c^{\rho_{3}} \tilde{\cal L}^{...} = 0.
\label{fgc}
\end{equation}
In particular these relations make the starting relation
(\ref{II2-compact}) identically satisfied. Because the indices
$A, B, D$
in the relation (\ref{II2-Fock}) are arbitrary we have proved in fact that this
relation is equivalent to the following two relations:
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})}
\left[ b^{(\rho_{1})}_{(B)} c^{(\rho_{2})}_{(B)} +
b^{(\rho_{2})}_{(B)} c^{(\rho_{1})}_{(B)} \right] b^{(\rho_{3})}_{(A)}
\tilde{\cal L}^{...} = 0
\label{II-final1}
\end{equation}
and
\begin{equation}
\sum_{(\rho_{1},\rho_{2},\rho_{3})}
\left[ b^{(\rho_{1})}_{(B)} c^{(\rho_{2})}_{(B)} +
b^{(\rho_{2})}_{(B)} c^{(\rho_{1})}_{(B)} \right] c^{(\rho_{3})}_{(A)}
\tilde{\cal L}^{...} = 0
\label{II-final2}
\end{equation}
for any
$
A \not= B.
$
It is easy to see that the preceding two relations make (\ref{II1-Fock}) and
(\ref{II2-Fock}) identities.
Now, starting for instance from (\ref{II-final1}) one can use recursively
the redefinition trick from the preceding Section to show that the
$
\tilde{\cal L}^{...}
$
can be changed such that it verifies:
\begin{equation}
\left[ b^{(\mu)}_{(1)} b^{(\nu)}_{(B)} +
b^{(\nu)}_{(1)} b^{(\mu)}_{(B)} \right] \tilde{\cal L}^{...} = 0
\end{equation}
for
$
B = 2,3,...,N.
$
The preceding relation immediately implies the relation (\ref{antisymmetry})
and accordingly theorem \ref{structureN}.
This finishes the proof of the main theorem.
\section{Main Theorem}
6.1 In this Section we will exhibit the result of theorem \ref{structureN} in a
more compact way. To do this we introduce the second-order hyper-Jacobians
in full generality:
\begin{equation}
J^{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n}}_{\nu_{1},...,\nu_{k}} \equiv
\varepsilon^{\mu_{1},...,\mu_{n}} \prod_{i=1}^{k}
\psi^{A_{i}}_{\mu_{i}\nu_{i}} \quad (k = 0,...,n).
\label{hJ}
\end{equation}
One notices that these expressions are the natural generalization of the
expressions (\ref{hyperJ}) defined in Section 3 and that they have
properties of the same kind; namely an antisymmetry property:
\begin{equation}
J^{A_{P(1)},...,A_{P(k)};\mu_{Q(k+1)},...,\mu_{Q(n)}}_{\nu_{P(1)},...,
\nu_{P(k)}} = (-1)^{|P|+|Q|}
J^{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n}}_{\nu_{1},...,\nu_{k}}
\quad (k = 0,...,n)
\label{antisym-hJ}
\end{equation}
(where
$P$
is a permutation of the numbers
$1,...,k$
and
$Q$
is a permutation of the numbers
$k+1,...,n$)
and a tracelessness property (see \cite{O}):
\begin{equation}
J^{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n-1},\zeta}_{\nu_{1},...,
\nu_{k-1},\zeta} = 0
\quad (k = 1,...,n-1).
\label{trace-hJ}
\end{equation}
The relations (\ref{antisym}) and (\ref{trace}) are particular cases of these
relations (for the case of the scalar field).
Then we have
\begin{thm}
The most general solution of the equations (\ref{II1}) and (\ref{II2}) is of
the following form:
\begin{equation}
{\cal L} = \sum_{k=0}^{n}
{\cal L}^{\nu_{1},...,\nu_{k}}_{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n}}
J^{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n}}_{\nu_{1},...,\nu_{k}}
\label{polyn-hJ}
\end{equation}
where the functions
$
{\cal L}^{...}_{...}
$
are independent of
$
\psi_{\mu\nu}^{B}
$:
\begin{equation}
\partial^{\mu\nu}_{B}
{\cal L}^{\nu_{1},...,\nu_{k}}_{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n}} = 0
\quad (k = 0,...,n),
\end{equation}
and have properties analogous to those of the hyper-Jacobians, namely the
(anti)symmetry property:
\begin{equation}
{\cal L}_{A_{P(1)},...,A_{P(k)};\mu_{Q(k+1)},...,\mu_{Q(n)}}^{\nu_{P(1)},...,
\nu_{P(k)}} = (-1)^{|P|+|Q|}
{\cal L}_{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n}}^{\nu_{1},...,\nu_{k}}
\quad (k = 0,...,n)
\label{antisym-l-hJ}
\end{equation}
(where
$P$
is a permutation of the numbers
$1,...,k$
and
$Q$
is a permutation of the numbers
$k+1,...,n$)
and also verify the identities:
\begin{equation}
{\cal L}_{A_{1},...,A_{k};\mu_{k+1},...,\mu_{n-1},\zeta}^{\nu_{1},...,
\nu_{k-1},\zeta} = 0.
\label{trace-l-hJ}
\end{equation}
(i.e. they are traceless). The function coefficients
$
{\cal L}^{...}_{...}
$
are uniquely determined by
$
{\cal L}
$
and the properties (\ref{antisym-l-hJ}) and (\ref{trace-l-hJ}) above.
\label{structure-hJ}
\end{thm}
{\bf Proof:} For
$
N = 1
$
this result coincides with theorem \ref{structure}. For
$
N \geq 2
$
we will prove it using the results from the preceding two sections. Namely, we
will show that the expression (\ref{polynN-compact-dual}) can be rearranged
such that it coincides with (\ref{polyn-hJ}) above. In fact, it is easier to
start with (\ref{polyn-hJ}) and to obtain the previous expression
(\ref{polynN-compact-dual}). To this purpose we will make
$
N \rightarrow N+1
$
and suppose that the indices
$
A, B, ...
$
run from
$0$
to
$N$;
the indices
$
a, b, ...
$
will run from
$1$
to
$N$.
We separate in the expression (\ref{polyn-hJ}) the contributions in which
$
n - r
$
indices take values from
$1$
to
$N$
and
the rest take the value
$0$.
One obtains
\begin{equation}
{\cal L} = \sum_{k=0}^{n} \sum_{r=k}^{n} C^{n-r}_{n-k}
{\cal L}^{\mu_{k+1},...,\mu_{n}}_{0,...,0,a_{r+1},...,a_{k};
\nu_{1},...,\nu_{k}}
J^{0,...,0,a_{r+1},...,a_{k};\nu_{1},...,\nu_{k}}_{\mu_{k+1},...,\mu_{n}}
\label{polyn-doublesum}
\end{equation}
where it is understood that there are
$
r - k
$
entries equal to
$0$.
One can rearrange this expression as follows:
\begin{equation}
{\cal L} = \sum_{k=0}^{n}
l^{\mu_{k+1},...,\mu_{n}}_{a_{k+1},...,a_{n};\nu_{1},...,\nu_{k}}
J^{a_{k+1},...,a_{n};\nu_{1},...,\nu_{k}}_{\mu_{k+1},...,\mu_{n}}
\label{polyn-a}
\end{equation}
where
\begin{equation}
l^{\mu_{k+1},...,\mu_{n}}_{a_{k+1},...,a_{n};\nu_{1},...,\nu_{k}} \equiv
\sum_{r=0}^{k} C^{n-r}_{n-k}
{\cal A}_{\nu_{1},...,\nu_{k}}
\left({\cal L}^{\mu_{r+1},...,\mu_{n}}_{0,...,0,a_{k+1},...,a_{k};
\nu_{1},...,\nu_{r}} \prod_{l=r+1}^{k} \psi^{0}_{\mu_{l}\nu_{l}} \right)
\end{equation}
where
$
{\cal A}_{\nu_{1},...,\nu_{k}}
$
is the projector which antisymmetrizes in the indices
$
\nu_{1},...,\nu_{k}
$
and there are
$k-r$
entries equal to
$0$.
One defines the dual tensor
$
\tilde{l}^{...}_{...}
$
by analogy to (\ref{dual}) and discovers after some combinatorics that it is
given by the following relation:
\begin{equation}
\tilde{l}^{\mu_{k+1},...,\mu_{n};\nu_{k+1},...,\nu_{n}}_{a_{k+1},...,a_{n}}
= \sum_{s=0}^{k}
L^{\mu_{k+1},...,\mu_{n};\nu_{k+1},...,\nu_{n};
\sigma_{1},...,\sigma_{s}}_{a_{k+1},...,a_{n};\rho_{s+1},...,\rho_{n}}
J^{(0)\rho_{s+1},...,\rho_{n}}_{\sigma_{1},...,\sigma_{s}}
\end{equation}
where
\begin{equation}
L^{\mu_{k+1},...,\mu_{n};\nu_{k+1},...,\nu_{n};
\sigma_{1},...,\sigma_{s}}_{a_{k+1},...,a_{n};\rho_{s+1},...,\rho_{n}} \equiv
{\rm const.} {\cal A}_{\rho_{s+1},...,\rho_{n}} \left(
{\cal L}^{\mu_{k+1},...,\mu_{n};\nu_{k+1},...,\nu_{n};
\sigma_{1},...,\sigma_{s},\mu_{k+1},...,\mu_{n}}_{0,...,0,
a_{k+1},...,a_{n};\rho_{s+1},...,\rho_{k}}
\prod_{l=k+1}^{n} \delta^{\nu_{l}}_{\rho_{l}} \right)
\end{equation}
and we have
$s$
entries equal to
$0$.
If one defines the dual tensor
$
\tilde{L}^{....}
$
as in (\ref{dual}), one finally finds that
\begin{equation}
L^{\mu_{k+1},...,\mu_{n};\nu_{k+1},...,\nu_{n};
\sigma_{1},...,\sigma_{s}}_{a_{k+1},...,a_{n};\rho_{s+1},...,\rho_{n}} =
{\rm const}
\tilde{\cal L}^{\sigma_{1},...,\sigma_{s}\mu_{k+1},...,\mu_{n};
\rho_{1},...,\rho_{s}\nu_{k+1},...,\nu_{n}}_{0,...,0,a_{k+1},...,a_{n}}
\end{equation}
and we have
$s$
entries equal to
$0$.
So, finally, after some relabelling we get the following expression from
(\ref{polyn-hJ}):
\begin{equation}
{\cal L} = \sum_{k=0}^{n} \sum_{r=0}^{n-k} {\rm const.}
\tilde{\cal L} ^{\sigma_{1},...,\sigma_{r}\mu_{1},...,\mu_{k};
\rho_{1},...,\rho_{r},\nu_{1},...,\nu_{k}}_{0,...,0,a_{1},...,a_{k}}
\tilde{J}^{a_{1},...,a_{k}}_{\mu_{1},...,\mu_{k};\nu_{1},...,\nu_{k}}
\tilde{J}^{(0)}_{\rho_{1},...,\rho_{r};\sigma_{1},...,\sigma_{r}}
\end{equation}
It is clear that we can iterate the procedure with the indices
$1,...,N$
and we will obtain finally the expression (\ref{polynN-compact-dual}).
$\vrule height 6pt width 6pt depth 6pt$
6.2 It is interesting to define the so-called {\it horizontalisation}
operation
${\bf h}$
on the space of differential forms on the jet-bundle space (see for
instance \cite{K4}). In particular, it is defined by linearity,
multiplicativity and:
\begin{equation}
{\bf h} dx^{\mu} \equiv dx^{\mu}, \quad
{\bf h} d \psi^{A} \equiv \psi^{A}_{\mu} dx^{\mu}, \quad
{\bf h} d \psi^{A}_{\mu} \equiv \psi^{A}_{\mu\nu} d x^{\nu}, \quad
{\bf h} d \psi^{A}_{\mu\nu} \equiv \psi^{A}_{\mu\nu\rho} d x^{\rho}.
\end{equation}
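For example, on a typical factor appearing in the forms used below,
linearity and multiplicativity give
\begin{equation}
{\bf h} \left( d \psi^{A}_{\mu} \wedge d x^{\nu} \right) =
\psi^{A}_{\mu\rho}\, d x^{\rho} \wedge d x^{\nu}.
\end{equation}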
Let us now define the differential form
\begin{equation}
\Lambda \equiv \sum_{k=0}^{n} (-1)^{k(n+1)}
{\cal L}^{\mu_{k+1},...,\mu_{n}}_{A_{k+1},...,A_{n};\nu_{1},...,\nu_{k}}
d \psi^{A_{k+1}}_{\mu_{k+1}} \wedge \cdots \wedge d \psi^{A_{n}}_{\mu_{n}}
\wedge d x^{\nu_{1}} \wedge \cdots \wedge d x^{\nu_{k}}.
\end{equation}
Then it is elementary to prove
\begin{prop}
The Euler-Lagrange form associated to the trivial Lagrangian (\ref{polyn-hJ})
is given by
\begin{equation}
L = {\bf h} \Lambda.
\label{hor}
\end{equation}
\end{prop}
In other words, any second-order trivial Lagrangian can be obtained as a
result of the horizontalisation operation.
\begin{rem}
It is elementary to prove the same assertion for the first-order trivial
Lagrangians. In fact, relation (\ref{hor}) remains true if instead of the form
$\Lambda$
above one uses the expression (\ref{lam}).
\end{rem}
6.3 We have not yet exploited the other two conditions of triviality, namely
(\ref{II3}) and (\ref{II4}). The analysis of these relations proves to be
rather inconclusive. Indeed, it is very hard to obtain relations involving
directly the coefficients
$
{\cal L}^{...}_{...}
$
from (\ref{polyn-hJ}). The best we could do was to re-express this formula as
\begin{equation}
{\cal L} = \sum_{k=0}^{n} {1 \over k!}
L^{\mu_{1},...,\mu_{k};\nu_{1},...,\nu_{k}}_{A_{1},...,A_{k}}
\prod_{i=1}^{k} \psi^{A_{i}}_{\mu_{i}\nu_{i}}
\end{equation}
where the functions
$
L^{...}_{...}
$
are completely symmetric in the couples
$
(A_{l},\mu_{l},\nu_{l})
$
and are invariant at the interchanges
$
\mu_{l} \leftrightarrow \nu_{l} \quad (l = 1,...,k).
$
Then one can obtain from (\ref{II3}) and (\ref{II4}) some rather
complicated differential equations on the functions
$
L^{...}_{...}.
$
These equations seem to be rather hard to analyse. In particular, it is not
clear whether a polynomial structure in the first-order derivatives can be
established, although this does not seem very plausible.
\section{Final Comments}
It is quite reasonable to expect that in the same way one can prove a formula
of the type (\ref{polynN-compact}) for the case of a trivial Lagrangian
of arbitrary order. In the
$p$-th
order case one must have hyper-Jacobians (\ref{dual-hyperJN}) with
$p$
groups of indices of the type
$
\rho_{1},...,\rho_{r}.
$
One must also expect that a trivial Lagrangian, more precisely its
dependence on the highest order derivatives, is also a result of the
horizontalisation operation.
These problems deserve further investigation, and we intend to address them in the future.
\vskip 1cm
\section{Introduction}
Over the years it has been found that there exist many two-dimensional
classical spin models, discrete and continuous alike, whose ground-state
manifolds are macroscopically degenerate and, more interestingly, also
exhibit critical behaviours, i.e., spin-spin correlation functions
within the ground-state ensembles decay with distance as power laws.
The classification of universality classes for these models has always
been a challenging problem\cite{Liebmann}.
An earlier example of this kind is the antiferromagnetic Ising model
on the triangular lattice. The exact solution for this model by
Stephenson\cite{Stephenson} showed that although this model remains paramagnetic
at nonzero temperature, its ground state is critical. Later works
by Bl\"ote {\sl et al} revealed yet another remarkable property of the
ground-state ensemble of this model, namely, it permits a
Solid-on-Solid (SOS) representation in which spin fluctuations
are subsequently described by the fluctuating interface in the
SOS model\cite{Blote}. Recent studies also demonstrated that this interfacial
representation provides a valuable avenue for studying the
ground-state ordering of quantum
magnets\cite{quantum,Henley1}
and the ground-state roughness of oriented elastic manifolds in
random media\cite{Zeng}. Other recently studied models with
critical ground states include the three-state antiferromagnetic Potts
model on the Kagom\'e lattice\cite{Huse,Chandra}, the $O(n)$ model
on the honeycomb lattice\cite{Blote2,Kondev1}, the Four-Coloring model
on the square lattice\cite{Kondev2,Kondev3}, and the square-lattice
non-crossing dimer model and dimer-loop model\cite{Raghavan}.
On the other hand, some very similar models with degenerate
ground states exhibit long-range order, such as the
constrained 4-state Potts antiferromagnet\cite{Burton}.
In this article we study the ground-state properties of antiferromagnetic
Ising model of general spin on a triangular lattice which also belongs
to the class of models mentioned above. Recent numerical studies of this
model include Monte Carlo simulations\cite{Nagai,Honda} and transfer matrix
calculations\cite{Lipowski}. Here we revisit this model by performing
Monte Carlo simulations. The motivation of the present work is two-fold:
(1) unlike previous simulations, we utilize the interfacial
representation directly in analyzing
the simulation results, for example, we compute the stiffness constant
of the fluctuating interface which, in turn, yields more accurate
critical exponents of various operators; and (2) we also study the
dynamical properties of this model for the first time making use of
the interfacial representation.
The body of this paper is organized as follows.
Section \ref{Model} describes the model Hamiltonian and maps it
onto a spin-1 problem whose interfacial representation is then
described. In Section \ref{height-rep},
we propose an effective continuum theory for
the long-wavelength fluctuations of the interface. Here we also
show how to relate scaling dimensions of various operators
to the stiffness constant of the interface, and derive some
other analytical results based on this ``height representation.''
This allows analytical understanding of the phase diagram
(Sec.~\ref{PhaseDiag}).
Details of Monte Carlo simulations and numerical results
on both dynamical and static
properties are presented in Section \ref{MC-results},
including a comparison of the new and old approaches
to determining the exponents.
As a conclusion, the paper is summarized and various possible extensions
are outlined, in Section \ref{Conc-Disc}.
\section{The Model}
\label{Model}
The antiferromagnetic Ising model of spin-$S$ on a triangular
lattice can be described by the following Hamiltonian:
\begin{equation}
H = J \sum_ {{\bf r}} \sum_{{\bf e}} s({\bf r}) s({\bf r}+{\bf e})
\label{eq1}
\end{equation}
where the spin variable $s({\bf r})$ defined on lattice site
${\bf r}$ of the triangular lattice can take any value from a
discrete set $[-S, -S+1, \cdot\cdot\cdot, S-1, S]$,
and the sum over ${\bf e}$ runs over three nearest-neighbor vectors
${\bf e}_1$, ${\bf e}_2$ and ${\bf e}_3$ as shown in Fig.~\ref{fig1}.
Here the coupling constant $J$ is positive
describing the antiferromagnetic exchange interaction between
two nearest-neighbor spins: $s({\bf r})$ and $s({\bf r}+{\bf e})$.
One important reason for interest in this model is that the
$S\to \infty $ limit\cite{FNSinfty}
is the same as the Ising limit of the
(classical or quantum) Heisenberg antiferromagnet on the triangular lattice
with Ising-like anisotropic exchange.
That model was shown to exhibit a continuous classical ground state
degeneracy and unusual features of the selection by fluctuations
of ground states\cite{Heisenberg}.
The ground-state configurations of the above model given
by Eq. (\ref{eq1}) consist entirely of triangles on which
one spin is $+S$, another is $-S$, and the third can be anything in $[-S,+S]$.
Thus, if spin $s(\bf r)$ takes an intermediate value $-S<s({\bf r})<S$, this
forces the six surrounding spins to alternate $+S$ and $-S$;
exactly which intermediate value $s(\bf r)$ takes does not matter in
determining whether a configuration is allowed.
\subsection{Spin-1 mapping}
This observation allows
us to reduce each state $\{s({\bf r})\}$ to a state $\{\sigma({\bf r})\}$
of a {\sl spin-1} model, by mapping $s({\bf r})=+S$ to $\sigma({\bf r})=+1$,
$s({\bf r})=-S$ to $\sigma({\bf r})=-1$, and intermediate values
$-S<s({\bf r})<+S$ to $\sigma({\bf r})=0$.
In this {\sl spin-1} representation of the model,
the rules for allowed configurations are exactly the same as for
the $S=1$ model; however instead of being equal, the statistical weights
have a factor $2S-1$ for each spin with $\sigma({\bf r})=0$.
It should be noted that in the $S=1/2$ case, $s({\bf r})=\pm 1/2$
simply maps to $\sigma({\bf r})=\pm 1$.
It can also be shown that the expectation of any polynomial
in $\{ s({\bf r}) \}$,
in the ground-state ensemble of the spin-$S$ model, can be
written in terms of a similar expectation in the spin-1 model.
Specifically, one must simply replace
\begin{equation}
s({\bf r})^m \to \cases {S^m \sigma({\bf r}), & m odd \cr
S^m [ (1-C_m(S)) \sigma({\bf r})^2 + C_m(S)], &
m even \cr}
\end {equation}
where (e.g.) $C_2(S) = {1\over 3} (1-S^{-1})$; indeed $S^2 C_2(S)=S(S-1)/3$
is just the equal-weight average of $s^2$ over the $2S-1$ intermediate values.
Thus there is no loss of information in this mapping.
Indeed, in some sense, the extra freedom to have
$s({\bf r})$ vary from $-(S-1)$ to $S-1$ is trivial:
once given that $s({\bf r})$ and $s({\bf r}')$ are
intermediate spin values, there is no correlation between these values.
So we henceforth restrict ourselves
to the spin-1 mapped model whose partition function for its ground-state
ensemble can be written as:
\begin{equation}
Z=\sum_{\{\sigma(\bf r)\}} (2S-1)^{n_s}
\;\; ,
\label{eq2}
\end{equation}
where $n_s$ denotes the number of zero spins in a ground-state
configuration ${\{\sigma(\bf r)\}}$.
By varying the weight factor continuously in the spin-1 model,
it would be possible to give a precise meaning to {\it any} real value of $S$,
and to simulate such an ensemble. However, in this article we perform
Monte Carlo simulations for an ensemble in which $2S$ takes only integer
values.
The spin-1 representation could be further reduced to a spin-1/2
representation $\tilde \sigma({\bf r})$ as described
in Refs.~\onlinecite{Lipowski2,Lipowski,Honda}.
They let
\begin{equation}
\tilde \sigma ({\bf r}) \equiv \sigma({\bf r})+ k({\bf r})
\label{spinhalf}
\end{equation}
Here $k({\bf r})=0$ if $\sigma({\bf r})=\pm 1$ and
if $\sigma({\bf r})=0$,
$k({\bf r})=+1$ or $-1$ according to whether
the surrounding spins are $(+1,-1,+1,-1,+1,-1)$ or the reverse.
Note this mapping is not invertible.
The spin-$1/2$ representation is less satisfactory in that it
arbitrarily breaks the up-down symmetry of correlation functions,
but it was desirable for the transfer-matrix calculations
of Lipowski {\it et al}\cite{Lipowski}
since it reduced the number of degrees of freedom.
\subsection{Height mapping}
We define a {\sl microscopic}, discrete-valued height function
$z({\bf r})$ living on the vertex of the triangular lattice
such that the step in $z({\bf r})$ between adjacent vertices
is a function of the adjacent spins:
\begin{equation}
z({\bf r+e}) - z({\bf r}) =
{1\over 2}+{3\over 2} \sigma({\bf r+e})\sigma({\bf r})
\;\; ,
\label{eq3}
\end{equation}
where $\sigma({\bf r})$ is the spin-1 operator and
$\bf e$ can be any of the three nearest-neighbor vectors
${\bf e}_{1,2,3}$. It is easy to show that the total
change in the height function around any smallest loop,
i.e., an elementary triangle, is zero.
Therefore, $z({\bf r})$ is well-defined everywhere
for the ground-state configurations, but it is not well-defined
in any excited state.
This prescription generalizes
that originally introduced by
Bl\"ote et al for the case $S=1/2$\cite{Blote,Ogawa,FN1}
(the prescriptions agree in that case).
This type of height mapping differs from other sorts of mapping
(e.g. dualities) in a crucial way: since the spin microstates
of the spin-1 model are mapped essentially one-to-one to
the height microstates, it is possible to perform Monte Carlo simulations
and construct configurations $z({\bf r})$ after each time step.
We have found that analysis of the $z({\bf r})$ correlations is
much more efficient for extracting critical exponents than
analysis of the spin correlations directly as was done in
previous Monte Carlo simulations\cite{Nagai}.
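As a concrete illustration (a minimal sketch, not part of the original
analysis; the array layout, with ${\bf e}_1$ and ${\bf e}_2$ taken as the two
index directions of an $L\times L$ array in triangular-lattice coordinates,
and the function name are our own choices), the height field $z({\bf r})$ can
be reconstructed from a ground-state spin-1 configuration by integrating the
step rule of Eq. (\ref{eq3}) along a spanning tree:
\begin{verbatim}
import numpy as np

def heights_from_spins(sigma):
    # Integrate z(r+e) - z(r) = 1/2 + (3/2) sigma(r+e) sigma(r) along a
    # spanning tree (first column along e2, then each row along e1).  For a
    # valid ground-state configuration the result is path independent.
    L = sigma.shape[0]
    z = np.zeros((L, L))
    for y in range(1, L):                       # first column, steps along e2
        z[0, y] = z[0, y-1] + 0.5 + 1.5*sigma[0, y]*sigma[0, y-1]
    for y in range(L):                          # each row, steps along e1
        for x in range(1, L):
            z[x, y] = z[x-1, y] + 0.5 + 1.5*sigma[x, y]*sigma[x-1, y]
    return z
\end{verbatim}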
\section{Height Representation Theory}
\label{height-rep}
In this section we propose an effective
continuum theory which describes the
long-wavelength fluctuations of the interface. We also demonstrate
how the critical exponents of various operators are determined
by the stiffness constant of the interface.
\subsection{Effective free energy}
To describe the interface in the rough phase, we must define a smooth
height field $h({\bf x})$ by coarse-graining the discrete field $z({\bf r})$.
As a first stage, on every triangular plaquette formed by sites
${\bf r}_1, {\bf r}_2, {\bf r}_3$, define a new discrete height
\begin{equation}
h({\bf R}) \equiv {1\over 3} (z({\bf r}_1)+z({\bf r}_2)+z({\bf r}_3))
\;\; ,
\label{eq4}
\end{equation}
where ${\bf R}$ is the center of a triangle. The possible values of
the $h({\bf R})$ are $\{ n/2 \}$, for any integer $n$.
(For the case $S=1/2$, the only possible values are integers.)
To each of these values corresponds a {\it unique}
ground-state spin configuration of the spin-1 model
on that triangle, i.e.,
\begin{equation}
s({\bf r})=\Phi_s( h({\bf r+u})-h_0({\bf r}) )
\;\; ,
\label{eq5}
\end{equation}
where ${\bf u}$ is any vector from a site to
the center of any adjoining triangle.
The mapping is many-to-one: the function $\Phi_s(h)$ has period 6.
Notice that the r.h.s. of Eq.(\ref{eq5}) turns out to be independent
of $\bf u$, but the periodic dependence on $h$ is phase-shifted by a
function $h_0({\bf r})$ which takes different
values on each of the three $\sqrt3\times\sqrt3$ sublattices.
Essentially, we have mapped the $T=0$ ensemble of the
spin-1 problem into an equivalent interface problem.
Note that, given a configuration of $\{ h({\bf R}) \}$,
each $\sigma({\bf r})$
is specified (via Eq. (\ref{eq5})),
once for each adjoining triangle.
The requirement that these six values of $\sigma({\bf r})$
coincide translates to a somewhat complicated set of constraints
between pairs $h({\bf R})$ and $h({\bf R'})$
on adjoining triangles;
the difference $h({\bf R}) - h({\bf R'})$
may be 0, $\pm 1/2$, or $\pm 1$,
but some of these are disallowed (depending on
which $h()$ values are integer or half-odd-integer,
and on the orientation of ${\bf R}-{\bf R'}$).
The weight of each configuration is given, as in (\ref{eq2}),
by $(2S-1)^{n_s}$.
Fig.~\ref{fig1} shows the $h({\bf R})$ mapping explicitly where the
spins $\sigma({\bf r})$ take values from $\{+1,0,-1\}$.
The twelve states are arranged in a circle because the pattern
repeats when $h\to h\pm6$.
There are certain special ``flat states'' in which $h({\bf R})$
is uniform on all triangles. Each of these is periodic
with a $\sqrt3 \times \sqrt3$ unit cell -- in effect it
is a repeat of one of the triangles in Fig.~\ref{fig1}.
We shall name these states by writing the spins on the
three sublattices,``$(+,+, -)$'' and ``$(+,-,0)$'';
here ``$\pm$'' stands for $\sigma=\pm 1$. It should be
noted that there are two non-equivalent species of flat state
corresponding to integer, and half-integer valued $h({\bf R})$
respectively. They are non-equivalent in the sense that they are
not related by {\sl lattice} symmetries. One of the species
that is favored by the {\sl locking potential}
(see Eq. (\ref{eq-Vlock}) below)
is what is previously called ``ideal''
states\cite{Kondev2,Kondev3,Raghavan,Burton}.
Thus we can imagine that all states can be described
as domains of uniform $h({\bf R})$ separated by domain walls.
Finally, by coarse-graining $h({\bf R})$ over distances
large compared to the lattice constant, one obtains $h({\bf x})$
which enters the conjectured continuum formula for
the free energy, which is entropic in origin\cite{Blote},
\begin{equation}
F(\{ h({\bf x}) \}) = \int d{\bf x}
\left\lbrack
{K\over2} |\nabla h({\bf x})|^2 + V(h({\bf x}))
\right\rbrack
\;\; ,
\label{eq6}
\end{equation}
where $K$ is the stiffness constant of the fluctuating interface.
A lattice shift by one lattice constant
leaves the free energy invariant, but induces global shifts
in height space $h({\bf x}) \to h({\bf x})\pm 1$; hence
the potential $V(\cdot)$ in (\ref{eq6}) must
have period one.
It is typically approximated as
\begin{equation}
V(h) \approx h_V \cos (2\pi h).
\label{eq-Vlock}
\end{equation}
Such a periodic potential, usually
referred to as the {\sl locking term}\cite{Jose}, favors
locking of the heights into one of the two types
of flat state, depending on the sign of $h_V$.
For large $S$ we expect $h_V<0$, favoring the $(+,-,0)$ states,
in view of the large entropy of flippable spins; it is not so
clear which state is favored at smaller $S$, but this does not matter
for the critical exponents (see Secs.~\ref{Scaling} and~\ref{Operators} below).
\subsection{Fluctuations and correlation functions}
\label{Fluctuations}
In the {\sl rough phase}, by definition,
the locking term is irrelevent,
and so the long-wavelength fluctations
of height variable $h({\bf x})$
are governed by the Gaussian term of Eq. (\ref{eq6}):
\begin{equation}
F(\{ h({\bf x}) \}) = \int d{\bf x}
{K\over2} |\nabla h({\bf x})|^2
=\sum_{\bf q} {K\over2} {\bf q}^2 |h({\bf q})|^2
\;\; ,
\label{eq7}
\end{equation}
where we have performed the Fourier transform.
Hence by equipartition theorem,
\begin{equation}
S_h({\bf q}) \equiv \langle |h({\bf q})|^2 \rangle = {1\over {K {\bf q}^2}}
\;\; .
\label{eq8}
\end{equation}
Similarly, we can also measure
the {\sl height-height difference function} in the real space as:
\begin{eqnarray}
C_h({\bf R})
& \equiv & \frac{1}{2} \langle [h({\bf R})-h({\bf 0})]^2\rangle
\nonumber \\
& = & \frac{1}{2\pi K} \ln(\pi R/a) + ... \;\; (R \gg 1)
\;\; ,
\label{eq9}
\end{eqnarray}
where $a$ is the lattice spacing cutoff.
\subsection{Scaling dimensions}
\label{Scaling}
Using Eq. (\ref{eq9}), we can compute the scaling dimension
$x_O$ of any {\sl local} operator $O({\bf r})$,
which is defined as in the correlation function,
\begin{equation}
\langle O^*({\bf r}) O({\bf 0}) \rangle
\sim r^{-2x_O}
\;\; .
\label{eq10}
\end{equation}
By local operator, we mean that $O({\bf r})$ is a local
function of spin operators in the vicinity of $\bf r$.
Now, the same spin configuration is recovered when the height variable
$h({\bf R})$ is increased by 6.\cite{ferroJ2}
Thus any local operator $O({\bf r})$
is also a periodic function in the height space, and can consequently
be expanded as a Fourier series:
\begin{equation}
O({\bf r}) = \sum_{G} O_{G} e^{i G h({\bf r})}
\sim e^{i G_O h({\bf r})}
\;\; ,
\label{eq11}
\end{equation}
where $G$ runs over height-space reciprocal-lattice vectors
(i.e. multiples of $2\pi/6$).
The last step of simplification in (\ref{eq11}) follows because
the scaling dimension $x_O$ of the operator $O({\bf r})$
is determined by the leading relevant operator in the above
expansion, i.e., $G_O$ is the smallest $G$ with nonzero coefficient
in the sum.
Inserting Eq. (\ref{eq11}) into Eq. (\ref{eq10}) and making use
of Eq.~(\ref{eq9}), we obtain the following:
\begin{eqnarray}
\langle O^*({\bf r}) O({\bf 0}) \rangle
& = & \langle e^{-i G_O h({\bf r})} e^{i G_O h({\bf 0})} \rangle \nonumber \\
& = & e^{-G_O^2 C_h({\bf r})} \sim r^{-\eta_O}
\;\; .
\label{eq12}
\end{eqnarray}
Therefore, the critical exponent $\eta_O$ associated with the
operator $O({\bf r})$ is given by:
\begin{equation}
\eta_O \equiv 2 x_O = { 1 \over {2\pi K}} |G_O|^2
\;\; .
\label{eq13}
\end{equation}
\subsection{Definition of operators}
\label{Operators}
In this paper, besides the usual spin operator $\sigma({\bf r})$,
we also study the bond-energy operator $E({\bf r}+{\bf e}/2)$
for the reason that will become clear in the next section:
\begin{equation}
E({\bf r}+{\bf e}/2) =
{1\over 2}+{3\over 2} \sigma({\bf r+e})\sigma({\bf r})
\;\; ,
\label{eq14}
\end{equation}
where ${\bf e}$ denotes one of the three nearest-neighbor vectors
as before.
As discussed already, the spin operator on a given site
has a periodicity of $6$
in the height space, from which a simple inspection shows
that the bond-energy operator is also periodic in
the height space with a periodicity of $3$. Therefore,
the reciprocal lattice vectors of the most relevant operator
in the Fourier expansion in Eq. (\ref{eq11}) are
\begin{equation}
G_{\sigma} = {2\pi\over6} , \;\;\;\; G_{E} = {2\pi\over3}
\;\; ,
\label{eq15}
\end{equation}
for spin and bond-energy operators respectively.
If a magnetic field is implemented by adding a term
$-H \sum _{\bf r} \sigma({\bf r}) $ to the Hamiltonian,
then our dimensionless uniform ``magnetic field'' is defined
by $H' \equiv H/T$. The exponents associated with
$H'$ (and with the uniform magnetic susceptibility),
are easily related to the correlation
exponents of the uniform magnetization operator,
\begin{equation}
M({\bf R}) = {1\over 3}
( \sigma({\bf r}_1) + \sigma({\bf r}_2) +\sigma({\bf r}_3))
\;\; ,
\label{eq-M}
\end{equation}
where ${\bf R}$ is the center of a triangle formed by sites
${\bf r}_1, {\bf r}_2, {\bf r}_3$. A simple inspection of
Fig.~\ref{fig1} shows that such an operator has a periodicity
of $2$ in the height space, thus yielding:
\begin{equation}
G_{M} = {2\pi\over2}
\;\; .
\label{eq-GM}
\end{equation}
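Combining Eq. (\ref{eq13}) with Eqs. (\ref{eq15}) and (\ref{eq-GM}) gives the
explicit exponent--stiffness relations used throughout the rest of the paper,
\begin{equation}
\eta_\sigma = {\pi \over {18 K}}, \qquad
\eta_E = {{2\pi} \over {9 K}}, \qquad
\eta_M = {\pi \over {2 K}}
\;\; ,
\end{equation}
so that, for example, the exact $S=1/2$ stiffness $K=\pi/9$ quoted below
yields $\eta_\sigma = 1/2$, $\eta_E = 2$, and $\eta_M = 9/2$.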
\subsection {Zone-corner singularities}
\label{Zone-corner}
Observe that the microscopic height variable
$z({\bf r})$ in any flat state is not uniform but is rapidly modulated
with the wave vector ${\bf Q}={4\pi\over3}(1,0)$. The amplitude of modulation
itself is a periodic function of the {\sl coarse-grained} height
field $h({\bf x})$ which in turn implies
that the correlation function decays with distance as a power-law,
and consequently that its structure factor
has a power-law singularity at ${\bf Q}$.
Such a zone-corner singularity is also directly connected to the
singularity in the structure factor of the bond-energy operator.
To see this, recall that there is a linear relation between the
microscopic height variables and the bond-energy operator given
by Eqs. (\ref{eq3}) and (\ref{eq14}), i.e.,
\begin{equation}
E({\bf r}+{{\bf e}\over 2}) = z({\bf r}+{\bf e}) - z({\bf r})
\;\; .
\label{eq-bond}
\end{equation}
Then it is interesting to note that the Fourier transform
$E_{\bf e} ({\bf q})$ of bond-energy operator given above
turns out to be
\begin{eqnarray}
E_{\bf e} ({\bf q})
& \equiv &
N^{-1/2} \sum_{\bf r}
e^{ i{\bf q}\cdot({\bf r}+ {{\bf e}\over2})} E({\bf r}+{{\bf e}\over2})
\nonumber \\
&=&
-2i\sin ({1\over 2} {\bf q} \cdot {\bf e}) z({\bf q})
\;\; .
\label{eq-Eq}
\end{eqnarray}
In other words, as a byproduct of measuring
$\langle |z({\bf q})|^2\rangle$, we have at the same time measured
the structure factor of, say, the bond-energy operator of the same
orientation specified by the nearest-neighbor vector ${\bf e}$:
\begin{equation}
S_E({\bf q}) \sim \langle |E_{\bf e}({\bf q})|^2\rangle
= 4 \sin^2 ({1\over 2} {\bf q} \cdot {\bf e})
\langle |z({\bf q})|^2\rangle
\;\; .
\label{eq-SE}
\end{equation}
We will utilize this relation in Sec.~\ref{Structure} to extract
the exponent of bond-energy operator from the Monte Carlo
simulations.
\subsection {Exact solution for $S=1/2$}
The $S=1/2$ triangular Ising antiferromagnet
is exactly soluble, by the same techniques
which solve the ferromagnetic two-dimensional Ising model,
and was immediately recognized to have critical behavior as $T\to 0$.
The spin and energy correlation functions were computed exactly by
Stephenson; it transpires that $\eta_\sigma=1/2$ and $\eta_E=2$
exactly, implying through the arguments of Bl\"ote et al
(see Sec.~\ref{Scaling} and~\ref{Operators})
that the effective stiffness in Eq.~(\ref{eq7}) is $K= \pi/9$ exactly.
The exponents implied by the interface scenario\cite{Blote}
-- in particular, the magnetic field exponent $\eta_M$ --
are fully confirmed by numerical transfer-matrix
computations.\cite{Blote3}
The Coulomb gas picture of Kondev {\em et al}\cite{Kondev4},
wherein the $S=1/2$ triangular Ising antiferromagnet
is viewed as a fully-packed loop model\cite{Blote2} with fugacity 1,
also predicts the exact exponents.
\section {Phase Diagram}
\label {PhaseDiag}
In this section, we collect some consequences of
the height representation for the phase diagram and the nature of
the various phases within it.\cite{Chandra2}
\subsection{Kosterlitz-Thouless and locking transitions}
\label{Locking}
The locking potential $V(\cdot)$ in (\ref{eq6}) favors the flat states.
In view of (\ref{eq-Vlock}), its leading reciprocal-lattice vector
is $G_V=2\pi$,
corresponding to a scaling index
$ x_V = |G_V|^2 /(4\pi K) = \pi/K $
for the conjugate field $h_V$.
It is well known that if $ 2 - x_V >0$, then $h_V$ becomes relevant
(under renormalization) and the interface locks into one of
the flat states.\cite{Jose}
Since $K$ grows monotonically with $S$,
such a locking transition occurs at a critical $S_L$ where
$K_L =\pi/2=1.57079...$\cite{Blote,Lipowski}.
In this ``smooth'' phase, any spin operator $O({\bf r})$
has long-range order, by arguments as in Sec.~\ref{Scaling}.
\subsection{Fluctuations in smooth phase}
\label{Smooth}
One of our aims in this paper was to pinpoint the
locking transition $S_L$,
which demands that we have a criterion to distinguish these phases.
We must supplement Eq.~(\ref{eq8}), which shows the expected
qualitative behavior of height fluctuations $\langle |h({\bf q})|^2\rangle$
in the rough phase,
with a parallel understanding of the smooth phase.
In the smooth state, the symmetry (of height shifts) is broken
and a fully equilibrated system has long-range order, such that
$\langle h({\bf x}) \rangle$
is well defined and uniform throughout the system.
Fluctuations around this height, then, have at most short-range
(exponentially decaying) correlations.
Thus we expect them to have
a spatial ``white noise'' spectrum:
\begin{equation}
\langle |h({\bf q})|^2 \rangle \sim {\rm const}
\label{eq-smooth}
\end{equation}
for small $\bf q$.
A phase with ``hidden'' order was suggested
by Lipowski and Horiguchi\cite{Lipowski,Lipowski2}.
Numerical transfer-matrix calculations\cite{Lipowski}
using the spin-1/2 representation indicated
$0 < \eta_{\sigma} <1/9$ for $2S>6$, which is impossible if the
spin correlations are derived from height fluctuations,\cite{Blote}
as we reviewed in Sec.~\ref{height-rep}.
To reconcile these facts, an exotic phase
was postulated, in which the interface is smooth and
$\langle \tilde\sigma({\bf r}) \rangle\neq 0$,
yet for the real spins $\langle \sigma({\bf r}) \rangle = 0$
as suggested by spin correlation measurements.
What does this imply for our height variable $h({\bf R})$, which
has a one-to-one correspondence with the real spin configuration
$\{ \sigma({\bf r}) \}$?
If the interface is smooth, then the probability distribution
of height values on a given plaquette, $P(h({\bf R}))$,
is well defined.
In order to ``hide'' the order, it is necessary that $P(h)$
correspond to zero expectations of the spins.
Now, reversing $s({\bf r})$ on all three sites in the plaquette requires
$h \to h\pm 3$, as seen from Fig.~\ref{fig1}.
One can convince oneself that,
to have ensemble average $\langle \sigma({\bf r})\rangle =0$,
the distribution $P(h)$ must be at least as broad
as ${1\over 2} \delta (h-h_1) + {1\over 2} \delta (h-h_2)$, with
$h_{1,2}=\langle h \rangle \pm 3/2$ (so that $h_1-h_2= 3$), implying
the bound
\begin{equation}
{Var}[h({\bf R})] \equiv \langle h({\bf R})^2 \rangle
- \langle h({\bf R}) \rangle ^2 \ge (3/2)^2.
\label{eq-dhbound}
\end{equation}
\subsection{Finite temperature behavior}
\label{FiniteT}
At $T>0$,
plaquettes with non-minimal energy are present and they
correspond to vortices in the function $h({\bf x})$.
Thus, unfortunately, the height approach of analyzing
simulations more or less breaks down.
Nevertheless, one can still predict the $T>0$ phase diagram
from knowledge of the $T=0$ stiffness constant derived from
our simulations. The shape of this phase diagram has already
been explained in Ref.~\cite{Lipowski}; here we note some
additional interesting behaviors which can be predicted
(following Ref.~\onlinecite{Blote}(b)) using the exponents associated with
vortices.
The other exponents in Kosterlitz-Thouless (KT)
theory are associated with elementary defects (often called vortices).
Indeed, it is easy to check (in this system) that
the excess energy of a non-ground-state plaquette
is directly proportional to its vortex charge
(a Burgers vector in height-space), so the effect of nonzero
temperature is simply to make the vortex fugacity nonzero.
The vortex exponent is $\eta_v= 1/\eta_\sigma$, so as usual
the vortex fugacity becomes relevant and defects unbind,
destroying the critical state, at the KT transition
defined by a spin exponent taking the critical value
$\eta_{\sigma}=1/4$.
If $\eta_{\sigma}>1/4$ at zero temperature,
i.e. $K < K_{KT}\equiv 2\pi/9=0.69813...$,
then defects unbind as soon as $T>0$.
Thus a zero-temperature KT transition occurs at
$S_{KT}$ defined by $K=K_{KT}$.\cite{Lipowski}
Ref.~\onlinecite{Lipowski} did not, however, address the
critical exponents of the correlation length $\xi(T)$
and the specific heat $C(T)$ as a function of temperature, which
are also controlled by vortex exponents.
Naively, if the energy cost of creating one vortex is $E_c$,
and if the minimum excitation is a vortex pair,
then one would expect the low-temperature specific
heat to behave as
$C(T) \sim \exp (-2 E_c/T)$
and at $S=1/2$ this is indeed true\cite{Wannier}.
However, the renormalization group\cite{Blote}
shows the singular specific heat behaves as
\begin{equation}
f(T) \sim y(T)^{4/(4-\eta_v)}
\end{equation}
where $y(T) = \exp (-E_c/T)$ is the vortex fugacity;
consequently when $\eta_v < 2$, the true behavior is
\begin{equation}
C(T) \sim \exp (-2 E_1/T)
\end{equation}
with $E_1 = 2 E_c /(4-\eta_v) < E_c$.
(Physically, part of the excitation energy is cancelled by
the large entropy due to the many places where the vortex pair
could be placed.)
This behavior has been observed in the 3-state Potts antiferromagnet
on the Kagom\'e lattice\cite{Huse}, and should occur
in the present system for all $S>1/2$.
\subsection {Finite magnetic field}
It is interesting to consider the effect of a nonzero magnetic field $H'$.
It is known already that at $S=1/2$,\cite{Blote}
such a field is an irrelevant perturbation,
so that the system remains in a critical state,
yet at sufficiently large $H$ it undergoes a
locking into a smooth phase,\cite{Blote3}
approximated by any of the three symmetry-equivalent
flat states of type ``$(+,+,-)$'' with magnetization $S/3$.
As also already noted\cite{Lipowski},
there is a critical value $S_{cH}$
defined by $\eta_\sigma(S_{cH})= 4/9$,
beyond which $\eta_M = 9 \eta_\sigma < 4$
so that the system locks into long-range
order as soon as $H'$ is turned on.
Within this regime, there are still two subregimes
with different behavior of $M(H')$ near $H'=0$.
For $2 < \eta_M < 4$, the initial slope is zero, i.e.,
the susceptibility is not divergent;
when $\eta_M < 2$, as occurs for $S \ge 2$,
there is a divergent susceptibility and correspondingly
there should be a singularity at ${\bf q}=0$
in the spin structure factor
$\langle |\sigma({\bf q})|^2 \rangle$.
What do we expect in the locked phase at $S> S_{L}$?
Here the difference between the two kinds of flat states becomes crucial.
The $H'$ field favors the $(+,+,-)$ type of flat state, but entropy favors
the $(+,-,0)$ type of flat state.
Thus we expect a transition to the $(+,+,-)$ state only at a nonzero
critical field $H'_c$.
On reducing $H'$ through $H'_c$, a twofold symmetry breaking occurs, in which
one of the $+$ sublattices becomes the $0$ (disordered) sublattice; hence,
this transition should be in the Ising universality class.
Presumably the line $H'_c(S)$ meets the $H'=0$ axis at $S=S_{L}$.
There must also be line of locking transitions $S_{cH}(H')$,
which terminates on the $H'=0$ axis at $S_{cH}$.
For $S=1/2$, the effect of the magnetic field was confirmed
numerically in Ref.~\onlinecite{Blote3}.
\section{Monte Carlo Simulations and Results}
\label{MC-results}
In this section we describe the implementation details
of Monte Carlo simulations performed for spin-1 model
in which $2S$ takes only integer values from $1$ to $8$.
We then present numerical results for the relaxation
times of slow modes in the Monte Carlo dynamics.
Two different methods of computing the critical exponents
of the spin, bond-energy, and uniform-magnetization operators
are described
in different sub-sections: one in terms of the extrapolated
stiffness constants of the interface and the other in
terms of the singularities of the corresponding structure
factors.
\subsection{Details of Monte Carlo Simulations}
A spin is called {\sl flippable} if its six surrounding nearest-neighbor
spins alternate between $+1$ and $-1$. Clearly, changing the
value of this flippable spin results in another new spin configuration
in the ground-state ensemble, provided that we start with a spin
configuration in the ensemble. Moreover, such an update
maintains the global tilt of the interface due to the
{\sl local} nature of this update. This update will be used
as our Monte Carlo update in this paper. Two slightly different
cases arise for different values of $2S$: (1) for $2S=1$,
the local update is precisely equivalent to a spin flip
i.e., $\sigma({\bf r}) \rightarrow -\sigma({\bf r})$ due to
the absence of zero spin; and (2) for all other values of $2S$,
a random choice must be made in the local update: for
example, $\sigma({\bf r})=0 \rightarrow \sigma({\bf r})=1$
or $-1$. (Recall $S$ denotes the spin magnitude of the
original model.)
Let $n_s$ and $n_f$ denote the number of zero-spins and
flippable spins of configuration $\phi$.
If an attempted single-spin update for $\phi$ results in
a new configuration $\phi^{\prime}$ with $n_s^{\prime}$ and $n_f^{\prime}$,
then the transition probability $W$ in accordance
with the detailed balance principle is:
\begin{equation}
W= W_0 \cdot \min \{ 1, {n_f\over{n_f^{\prime}}} \}
\cdot \min\{1, (2S-1)^{n_s^{\prime}-{n_s}} \}
\;\; ,
\label{eq16}
\end{equation}
where $W_0$ denotes the {\sl bare} transition probability:
$W_0={1\over n_f}$ for $2S=1$, and $W_0={1\over 2 n_f}$ for
$2S \ge 2$ which reflects the random choice to be made in
the local update as discussed above. With the transition probability
given in Eq. (\ref{eq16}), it is straightforward to show that
the detailed balance principle is satisfied, i.e.,
$P(\phi) W(\phi \rightarrow \phi^{\prime})
=P(\phi^{\prime}) W(\phi^{\prime} \rightarrow \phi)$,
where $P(\phi)$ denotes the probability for
configuration $\phi$ to occur and $P(\phi) \sim (2S-1)^{n_s}$
since each spin configuration in the original spin-$S$ model
has equal probability to occur.
Note also that $n_f/n_f' = 1 + O(1/N)$ for large $N$,
so this rule is important only because of the finite system
size.
To implement in practice the transition probability given above,
we randomly select a site out of a list of the $n_f$ flippable sites, and
randomly update this spin to one of the two possible new spin values
if $2S\ge 2$ or simply flip this spin if $2S=1$. The total numbers
of zero spins $n_s^{\prime}$ and flippable spins $n_f^{\prime}$
in the resulting configuration are then computed.
This update is subsequently accepted with a probability:
$\min \{ 1, {n_f/{n_f^{\prime}}} \}\cdot
\min \{ 1, (2S-1)^{n_s^{\prime}-{n_s}} \}$.
A practical implementation of the transition probability given in
Eq. (\ref{eq16}) is thus achieved.
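A minimal sketch of one such attempted update is given below (the function
names \verb|analyse| and \verb|attempt_update|, the array layout and the
bookkeeping are our own illustrative choices; a production code would of
course maintain $n_s$, $n_f$ and the flippable-site list incrementally rather
than rebuilding them at every attempt):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()
NBRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]  # cyclic order

def analyse(sigma):
    # Return n_s (number of zero spins), n_f, and the list of flippable
    # sites, i.e. sites whose six neighbours alternate between +1 and -1.
    L = sigma.shape[0]
    flippable = []
    for x in range(L):
        for y in range(L):
            ring = [sigma[(x + dx) % L, (y + dy) % L] for dx, dy in NBRS]
            if all(abs(s) == 1 for s in ring) and \
               all(ring[i] == -ring[(i + 1) % 6] for i in range(6)):
                flippable.append((x, y))
    return int(np.sum(sigma == 0)), len(flippable), flippable

def attempt_update(sigma, two_S):
    # One attempted single-spin update obeying Eq. (eq16).
    n_s, n_f, flippable = analyse(sigma)
    x, y = flippable[rng.integers(n_f)]
    old = sigma[x, y]
    if two_S == 1:
        new = -old                                    # plain spin flip
    else:
        new = [s for s in (-1, 0, 1) if s != old][rng.integers(2)]
    sigma[x, y] = new
    n_s2, n_f2, _ = analyse(sigma)
    accept = min(1.0, n_f / n_f2) * min(1.0, (two_S - 1.0) ** (n_s2 - n_s))
    if rng.random() >= accept:
        sigma[x, y] = old                             # rejected: restore
\end{verbatim}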
Throughout this paper, a unit time or one Monte Carlo Sweep (MCS)
is defined such that there are $N_s$ attempts of updating within
this unit of time (or one attempt per spin on average). Here $N_s$
denotes the total number of spins in the simulation cell. The
simulation cell always contains $N_s=72 \times 72$ spins in this
paper unless explicitly mentioned otherwise. Periodic boundary
conditions are adopted. Since we always start with a flat state,
the simulations are thus performed in the sector with a zero
global tilt of the interface.
\subsection{Dynamical scaling: the relaxation time $\tau_{\bf q}$ }
\label{Dynamical}
We now discuss the correlations between the configurations generated
sequentially in the Monte Carlo simulations by studying the relaxation
time of the slow modes in the model, namely, the Fourier modes
$h_{\bf q}$ which play the role of an order parameter\cite{Henley1}.
The linear-response dynamics of such a mode is usually formulated
as a Langevin equation,
\begin{equation}
{ {dh({\bf x},t)}\over{dt} } =
-\Gamma { {\delta F(\{ h({\bf x}) \}) }\over{\delta h({\bf x})}}
+\xi({\bf x}, t)
\;\; ,
\label{eq17}
\end{equation}
where $\Gamma$ is the dissipation constant, and the static
free energy functional $F(\{ h({\bf x}) \})$ is given by
Eq. (\ref{eq6}). Here $\xi({\bf x}, t)$ is a stochastic
noise generated in the Markov chain of Monte Carlo
simulations. As it is expected that the correlation time
of the slow mode under consideration is much longer than
that of the noise, and since the update steps are local and independent,
it is proper to model $\xi({\bf x}, t)$ as
Gaussian noise, uncorrelated in space or time:
\begin{equation}
\langle \xi({\bf x}, t) \xi({\bf x}^{\prime}, t^{\prime}) \rangle
=2 \Gamma \delta ({\bf x} - {\bf x}^{\prime}) \delta (t- t^{\prime})
\;\; ,
\label{eq18}
\end{equation}
in which the choice of $2\Gamma$ ensures that the steady-state of the
interface under the Langevin equation (\ref{eq17})
agrees with its equilibrium state
under the free energy (\ref{eq6}).
This linear stochastic differential equation can be solved easily by
performing Fourier transform. Eq. (\ref{eq17}) thus reduces to
\begin{equation}
{ {dh({\bf q},t)}\over{dt} } =
-\Gamma K |{\bf q}|^2 h({\bf q},t) +\xi({\bf q},t)
\;\; ,
\label{eq19}
\end{equation}
which implies an exponentially decaying correlation function of
$\langle h^{*}({\bf q},t) h({\bf q},0) \rangle
\sim e^{-t/{\tau_{\bf q}}} $ with the relaxation time $\tau_{\bf q}$
given by
\begin{equation}
\tau_{\bf q} = {1\over{\Gamma K}} |{\bf q}|^{-2}
\;\; .
\label{eq20}
\end{equation}
Therefore, the dynamical scaling exponent for the Monte Carlo dynamics,
defined by $\tau_{\bf q} \sim |{\bf q}|^{-z}$,
is always $z=2$ in the rough phase.
To check this prediction on the dynamical scaling exponent
in practice where the above continuum theory is regularized
on a lattice, we compute the following auto-correlation function
$C({\bf q},t)$ of the {\sl microscopic} height variable $z({\bf q})$:
\begin{equation}
C({\bf q},t) =
{
{\langle z^*({\bf q},0) z({\bf q},t) \rangle
-|\langle z({\bf q},0)\rangle|^2}
\over
{\langle z^*({\bf q},0) z({\bf q},0) \rangle
-|\langle z({\bf q},0)\rangle|^2}
}
\;\; ,
\label{eq21}
\end{equation}
Here $\langle \rangle$ stands for the dynamical average, and the
time $t$ is measured in units of MCS. For each integer-valued
$2S=1,2,...,8$, we perform $10^5$ MCS's with a flat initial configuration
and compute the auto-correlation functions up to $t \le 50$ for modes
that correspond to the five smallest $|{\bf q}|^2$ values.
In Fig.~\ref{fig2}, we display the results so obtained for $2S=1$.
Other cases of $2S$ are found to have very similar features.
It is clear from Fig.~\ref{fig2} that $\log_{10} C({\bf q},t)$
can be fitted very well by $a - t/\tau_{\bf q}$ where $a$ and the
relaxation time $\tau_{\bf q}$ are the fitting parameters.
In other words, the relaxation is strictly exponential in all cases.
Note that we used a cutoff $t=10$ in our fitting. The same fitting
procedure is carried out for other cases of $2S$.
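A minimal sketch of this fit (the names are ours; \verb|C_qt[t]| holds the
measured auto-correlation of Eq. (\ref{eq21}) for a single mode at integer
$t$, in MCS):
\begin{verbatim}
import numpy as np

def relaxation_time(C_qt, t_cut=10):
    # Fit log10 C(q,t) = a - t/tau_q for t <= t_cut (the cutoff used in the
    # text) and return tau_q.
    t = np.arange(t_cut + 1)
    slope, a = np.polyfit(t, np.log10(C_qt[:t_cut + 1]), 1)
    return -1.0 / slope
\end{verbatim}
The values of $\tau_{\bf q}$ obtained in this way for the five smallest
$|{\bf q}|^2$ are then fitted against $|{\bf q}|^{-2}$ to test the
prediction $z=2$.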
The final results of the relaxation time $\tau_{\bf q}$ as a function
of $|{\bf q}|^2$ for $2S=1, ..., 6$ are shown in Fig.~\ref{fig3}; and for
$2S=6,7,8$ as an insert. The fact that $\tau_{\bf q}$ scales as
$|{\bf q}|^2$ for $2S=1, ..., 5$ as indicated by the fitting in
Fig.~\ref{fig3} thus shows that the ground-state ensembles for $2S=1, ..., 5$
are in the rough phase. On the other hand, it is indeed clear from
the insert that for $2S=7$ and $8$, $\tau_{\bf q}$ curves downward as
$|{\bf q}|^2 \rightarrow 0$, which is in sharp contrast to
those of $2S=1, ..., 5$. From this, we conclude that ground-state
ensembles for $2S=7$ and $8$ are in the flat phase. As for $2S=6$,
it is not conclusive from the data available whether $\tau_{\bf q}$
scales as $|{\bf q}|^2$ or curves downward as $|{\bf q}|^2\rightarrow 0$.
Nonetheless, the fact that the relaxation time of the slowest mode
for $2S=6$ is longer than for any smaller {\it or larger} value of $S$,
suggests that $2S=6$ is very close
to the locking transition. Further support for this phase diagram
is also obtained by explicit calculations of stiffness constants
and critical exponents which is discussed in the next section.
\subsection {Stiffness constants and critical exponents}
\label{Stiffness}
As implied by Sec.~\ref{Fluctuations}, the stiffness constant of
the fluctuating interface can be directly measured by studying
the long-wavelength fluctuations of the height variables, i.e.,
their structure factor as given by Eq. (\ref{eq8}). It should
be noted that concerning the task of calculating the Fourier
components $h({\bf q})$ in Eq. (\ref{eq8}), it can be replaced
by the approximation in terms of the {\sl microscopic} height
variables $z({\bf q})$ given by
\begin{equation}
h({\bf q}) \approx z({\bf q}) \equiv {w_0\over\sqrt{N_s}}
\sum_{\bf r} e^{-i{\bf q}\cdot{\bf r}} z({\bf r})
\;\; ,
\label{eq22}
\end{equation}
where $\bf r$ labels a lattice site of the finite triangular
lattice of total $N_s$ lattice sites used in the simulation.
Here $w_0=\sqrt{3}/2$ is the {\sl weight} of a lattice site,
i.e., the area of its Voronoi region, which is introduced
so that the {\sl microscopic} height variable $z({\bf q})$
coincides with the {\sl coarse-grained} height variable
$h({\bf q})$ in the long-wavelength limit (${\bf q} \rightarrow 0$).
But unlike $h({\bf q})$, $z({\bf q})$ still contains features
such as zone-corner singularities discussed in Sec.~\ref{Zone-corner}
that are only manifested in the microscopic height variables.
Starting with a flat state, we perform $2\times 10^3$
MCS's as the equilibration time; subsequent measurements
of physical quantities are carried out at intervals of
$20$ MCS's. This separation is a compromise between
the correlation times of small $\bf q$ modes and of larger $\bf q$ modes,
which are respectively much longer and somewhat shorter than 20 MCS
-- see Fig.~\ref{fig2}. Each run consisted of $8 \times 10^5$
MCS, i.e. $4\times 10^4$ measurements were taken;
these were subdivided into $20$ independent
groups for the purpose of estimating statistical errors.
The same procedure is used for all $2S=1,2,...,8$ reported
in this paper.
In Fig.~\ref{fig4}, we plot $\langle |z({\bf q})|^2\rangle ^{-1}$ vs.
${\bf q}^{2}$ for $2S=1$, including all ${\bf q}$ in the first
Brillouin zone. From the plot, we observe that
$\langle |z({\bf q})|^2\rangle ^{-1}$ is remarkably isotropic up
to about ${\bf q}^{2} \sim 1.5$. This comes about because of
the 6-fold rotational symmetry of the triangular lattice
which ensures that {\sl anisotropy} occurs only in $q^6$
and higher order terms, assuming that the function is analytic.
This is in constrast to other models
defined on the square lattice where anisotropy already sets
in at the order of $q^4$\cite{Raghavan,Kondev1}. The lower
envelope of the data points in Fig.~\ref{fig4} corresponds to the line of
$q_y=0$ in the $q$-vector space. Other cases of $2S$ are found
to have very similar features, as illustrated in the insert of
Fig.~\ref{fig4} where we plot the lower envelope for all $2S=1,2,...,8$.
The structure factor of the height variables appears to
diverge in the long-wavelength limit $|{\bf q}|^2
\rightarrow 0$ for all $S$ values, even
for the largest $S$ values.
(In the latter case, however, we believe one would
see the plot asymptote to a constant value, in a
sufficiently large system; see below.)
Two other interesting features of the structure factor are also
revealed in the insert in Fig.~\ref{fig4}: (1) for $2S\ge 2$, it appears
to indicate yet another singularity at the zone corner
${\bf q} \rightarrow {\bf Q} \equiv {4\pi\over 3}(1,0)$
in the thermodynamic limit $N_s\rightarrow \infty$; and (2)
for $2S=1$, it approaches a constant instead. As already
discussed in Sec.~\ref{Zone-corner}, the appearance of
zone-corner singularities is expected; the precise nature of
such singularities, however, is discussed in the next section.
In the remainder of this section, we analyze the zone-center
singularity to check if height variables behave as required by
Eq. (\ref{eq8}) for the rough phase and consequently extract
the stiffness constants.
To further study the nature of zone-center singularity in terms
of how $\langle |z({\bf q})|^2\rangle$ scales as a function of
${\bf q}^{2}$ in the long-wavelength limit, we show the log-log
plot of $\langle |z({\bf q})|^2\rangle^{-1}$ vs. ${\bf q}^{2}$
for $2S=1,...,8$ in Fig.~\ref{fig5}. Comparing the simulation
results for different system sizes of $L=36$, $48$, and $72$,
we notice that the data are well converged down to the smallest
accessible ${\bf q}$ vectors -- except for the cases of $2S=6$ and $7$,
where the finite size effect is still discernible. This is, of course,
consistent with the fact that $2S=6$ and $7$ are close to the
locking transition where the correlation length diverges;
it is interesting, however, to notice that their finite-size trends
are different.
In the case $2S=6$, the data plot for $L=72$
curves upwards less than that for $L=48$, while
in the case $2S=7$, the $L=72$ data show {\em more}
upwards curvature than the $L=48$ data.
By fitting $\langle |z({\bf q})|^2\rangle^{-1}$ to a function
$q^{2\alpha}$ with $\alpha$ being the fitting parameter, we obtain,
using the data of system size $L=72$ and a cutoff ${\bf q}^2 \le 0.5$,
the exponent $\alpha=0.990(1), 0.988(1), 0.986(2), 0.984(2), 0.974(2)$
and $0.935(1)$ respectively for $2S=1, 2, 3, 4, 5$, and $6$.
Apart from the case of $2S=6$, these values agree with
$\alpha=1$ as in the predicted ${\bf q}^{-2}$ power-law singularity
of the structure factor in the rough phase, Eq. (\ref{eq8}).
As for $2S=7$
and $8$, $\langle |z({\bf q})|^2\rangle^{-1}$ clearly deviates from
a power-law scaling and instead curves upwards to level off, which
indicates that models with $2S=7$ and $8$ are in the smooth phases
where $\langle |z({\bf q})|^2\rangle$ remains finite
as ${\bf q} \to 0$, as discussed in Sec.~\ref{Smooth}.
This conclusion is in excellent
agreement with that inferred from dynamical scaling analysis presented
in Sec.~\ref{Dynamical}.
It should be noted that in Fig.~\ref{fig5}, as a general procedure adopted
throughout this paper in extracting numerical values of some physical
quantities, we have averaged the data corresponding to the same magnitude
of $|{\bf q}|^2$ to further reduce the effect due to statistical errors.
The relative statistical error on each individual data point
$\langle |z({\bf q})|^2\rangle$ of small ${\bf q}$,
which is measured directly from the variance
among the 20 groups, is found to range from $1\%$ to $3\%$.
This is indeed consistent with the estimates of such relative
errors from the relaxation times of the slowest modes of models
with different values of $2S$ already given in Sec.~\ref{Dynamical}.
It is perhaps also worth noting that another good check on
the statistical errors on each data point is to compare the values
of $\langle |z({\bf q})|^2\rangle$ for three ${\bf q}$ vectors
which are related by $120^\circ$ rotations in reciprocal space,
which ought to be equal by symmetry. For example, in the case of
$2S=1$, the values of $\langle |z({\bf q})|^2\rangle$ for the
three ${\bf q}$ vectors of the same smallest magnitude
${\bf q}^2=0.0101539$ of system size $L=72$ are, respectively,
$285.551$, $280.528$, and $280.566$, from which one thus also
obtains a relative error of about $1\%$. This observation therefore
motivates the averaging procedure used in this paper.
The stiffness constants can be subsequently determined by fitting
${\bf q}^{-2} \langle |z({\bf q})|^2\rangle^{-1}$ to the
function $K + C_1 {\bf q}^2$ for the isotropic part of
the data in which the stiffness constant $K$ and $C_1$ are the
fitting parameters. The final fitting on the averaged data is shown
in Fig.~\ref{fig6} where we used a cutoff ${\bf q}^2 \le 0.5$
in the fitting. We also tried other different cutoffs of ${\bf q}^2\le 0.1$
and ${\bf q}^2 \le 1.0$, and found as expected that the stiffness
is not sensitive to the value of cutoff as long as it falls into the
isotropic part of the data. For example, we obtain, in the case of
$2S=1$, $K=0.3488\pm0.0022, 0.3490\pm0.0008$, and $0.3488\pm0.0006$
for cutoff ${\bf q}^2 \le 0.1, 0.5$, and $1.0$ respectively.
Therefore, taking into account the uncertainty introduced
by the cutoff, our final estimate for the stiffness constant
is then $K=0.349\pm0.001$ which is in excellent agreement
with the exact value $K_{\mbox{exact}}=0.349065...$.
A similar procedure is carried out for the other cases of $2S$, and
the results are tabulated in Table I. In the same table, we also
give the value for the critical exponents of spin, bond-energy
and uniform magnetization operators which are obtained
straightforwardly according to Eqs. (\ref{eq13}), (\ref{eq15})
and (\ref{eq-GM}).
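A minimal sketch of the stiffness fit and of the conversion to exponents
(the names are ours; \verb|q2| and \verb|Sz| are arrays holding the
magnitude-averaged values of ${\bf q}^{2}$ and $\langle|z({\bf q})|^2\rangle$
for the isotropic part of the data):
\begin{verbatim}
import numpy as np

def stiffness_and_exponents(q2, Sz, q2_cut=0.5):
    # Fit q^{-2} <|z(q)|^2>^{-1} = K + C1 q^2 for q^2 <= q2_cut, then use
    # Eq. (eq13) with G_sigma = pi/3, G_E = 2 pi/3, G_M = pi.
    mask = (q2 <= q2_cut) & (q2 > 0)
    y = 1.0 / (q2[mask] * Sz[mask])
    C1, K = np.polyfit(q2[mask], y, 1)
    G = {'sigma': np.pi/3, 'E': 2*np.pi/3, 'M': np.pi}
    eta = {name: g**2 / (2*np.pi*K) for name, g in G.items()}
    return K, eta
\end{verbatim}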
The agreement of our $\eta_\sigma^{(K)}$ values
with the ``$\eta_\sigma$'' values from transfer-matrix eigenvalues
(see Table~I of Ref.~\onlinecite{Lipowski})
is quite close and becomes better as $S$ grows (until $2S=6$).
As already discussed in Sec.~\ref{FiniteT},
a Kosterlitz-Thouless (KT) transition occurs at a critical value $S_{KT}$
where $\eta_{\sigma}=1/4$, such that for $S>S_{KT}$ algebraic correlations
persist even at small finite temperatures.
It is clear from our data that $S_{KT}>3/2$.
As for $2S=6$, the value of ${\bf q}^{-2} \langle |z({\bf q})|^2\rangle^{-1}
=1.75\pm 0.06$ at the smallest nonzero ${\bf q}^2=0.010153$
is already larger than $K_L=\pi/2=1.57079$. That is, even if
the system may have a ``rough'' behavior at the length scales probed in
the simulation, the stiffness constant
is such that the locking potential is relevant and must dominate
at sufficiently large length scales, as discussed in Sec.~\ref{Locking}.
A similar observation has already been used to argue that the constrained
Potts antiferromagnet is in a smooth phase\cite{Burton}.
This fact together with the poor fitting using the formula suitable
for the rough phase (see the top curve of Fig.~\ref{fig6})
leave us little choice but to conclude that the ground-state ensemble
for $2S=6$ also falls into the smooth phase, or possibly is
exactly at the locking transition.
Just as the finite-size effect for $2S=6$ was severe both for the
spin-spin correlations (measured via Monte Carlo\cite{Nagai,Honda})
and also in spin-operator eigenvalues
(measured via transfer-matrix,\cite{Lipowski})
we similarly find it is severe for height fluctuations.
However, in view of the exponential relationship between
the exponents and the stiffness constant, the latter measurements
are much more decisive as to the true phase of the system.
To sum up, based on the analysis on the nature of the singularity
in the height structure factor at the long-wavelength limit and
the numerical results on the stiffness constants, we thus conclude
that the model exhibits three phases with a KT phase
transition at ${3\over 2}<S_{KT}<2$ and a locking phase transition
at ${5\over 2} < S_{L} \le 3$.
\subsection{Structure factor and zone-corner singularity}
\label{Structure}
Another more traditional approach\cite{Nagai} in calculating the critical
exponents of various operators is to compute the corresponding
structure factors and analyze the power-law singularities at the
appropriate ordering wave vectors. Namely, if the correlation function
of an operator $O$ decays with distance as power-law (thus critical)
\begin{equation}
\langle O({\bf r}) O({\bf 0}) \rangle \sim
{ {e^{i{\bf Q}\cdot{\bf r}}}\over r^{\eta_O} }
\;\; ,
\label{eq23}
\end{equation}
then its structure factor near the ordering vector ${\bf Q}$ shows a
power-law singularity
\begin{equation}
S_O({\bf q=Q+k}) \sim {\bf k}^{2(x_O-1)}
\;\; ,
\label{eq24}
\end{equation}
from which the critical exponent $\eta_O \equiv 2x_O$ can be
numerically extracted. Here in this section, we adopt this
approach to calculate the critical exponents of spin,
bond-energy, and uniform-magnetization
operators so as to compare with those obtained
from the stiffness constant.
As given by Eq. (\ref{eq-SE}),
$S_E({\bf q=Q+k}) \sim \langle |z({\bf q=Q+k})|^2\rangle$.
Here ${\bf Q}={4\pi\over3}(1,0)$ is the ordering vector of
the bond-energy operator. Therefore the interesting feature
of the structure factor of the height variables, namely the appearance
of the zone-corner singularity shown in Fig.~\ref{fig4},
is not only expected but also very useful in extracting the critical
exponent $\eta_E$.
Of course, such a zone-corner singularity
can also be understood within the framework of interfacial
representation, as in Sec.~\ref{height-rep}, particularly
Subsec.~\ref{Zone-corner}.
(Similar zone-corner singularities have been studied in
Refs.~\onlinecite{Kondev2} and \onlinecite{Raghavan}.)
Finally, according to the exact result $\eta_E=2$ ($x_E=1$) in the case of
$2S=1$, i.e., $S_E({\bf q=Q+k}) \sim {\bf k}^{2(x_E-1)} \rightarrow
const.$, the puzzling absence of the zone-corner singularity
for $2S=1$ as shown in Fig.~\ref{fig4} is also resolved.
In Fig.~\ref{fig7}, we plot $\log_{10}S_E({\bf q})$
vs. $\log_{10}|{\bf q-Q}|^2$
where we have averaged data points with the same magnitude of
$|{\bf q-Q}|^2$. Fitting $S_E({\bf q})$ to the function
$|{\bf q-Q}|^{2(x_E-1)}(C_1+C_2 |{\bf q-Q}|)$ where $x_E, C_1$ and $C_2$
are the fitting parameters, we obtain the critical exponents
$\eta^{(S)}_E$ which are tabulated in Table I.
In practice, we used two different
cutoffs in the fitting: $|{\bf q-Q}|^2 \le 0.1$ and $\le 0.5$. The
fitting for the latter is shown in Fig.~\ref{fig7}, and the final quoted
errors take into account the uncertainty due to the cutoffs.
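For illustration, a fit of this singular form can be set up as follows
(a schematic sketch in which synthetic numbers stand in for the binned
Monte Carlo averages of $S_E$; all names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Singular form fitted to the bond-energy structure factor near Q:
#   S_E(k) ~ k^{2(x_E - 1)} * (C1 + C2*k),  with k = |q - Q|
def s_bond(k, x_E, C1, C2):
    return k**(2.0 * (x_E - 1.0)) * (C1 + C2 * k)

rng = np.random.default_rng(0)
k = np.sqrt(np.linspace(0.01, 0.5, 40))      # |q-Q| with |q-Q|^2 <= 0.5
data = s_bond(k, 0.47, 1.0, 0.3)             # synthetic "measurement"
data *= 1.0 + 0.05 * rng.standard_normal(k.size)

popt, pcov = curve_fit(s_bond, k, data, p0=[0.5, 1.0, 0.0],
                       sigma=0.05 * data, absolute_sigma=True)
eta_E = 2.0 * popt[0]                        # eta_E = 2 x_E
d_eta = 2.0 * np.sqrt(pcov[0, 0])
print(eta_E, d_eta)
\end{verbatim}
Repeating the fit with the two cutoffs quoted above then gives a handle
on the systematic part of the error.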
Similarly, we also computed the structure factor for the spin
operator $S_{\sigma}(\bf q)$ using the fast Fourier transform while
computing the height-height correlation function within the same
Monte Carlo simulations.
Results are shown in Fig.~\ref{fig8} and the extracted exponents are also
tabulated in Table I. The fitting procedure used is exactly the same
as that for the bond-energy operator, except that we fit $S_{\sigma}({\bf q})$
to the function $C_1 |{\bf q-Q}|^{2(x_{\sigma}-1)}$ with $C_1$ and
$x_{\sigma}$ being the fitting parameters. From Table I,
we note that the critical exponents extracted in this way
are in good agreement with those obtained from the stiffness
constant utilizing the interfacial representation; however,
the latter yields statistical errors smaller by an order of
magnitude using the same Monte Carlo simulation data.
This clearly demonstrates the superiority of the interfacial
representation in extracting critical exponents from numerical
data. Similar points were made regarding other models, but based on
much less extensive simulation data, in Refs.~\onlinecite{Kondev2}
and \onlinecite{Raghavan}.
Similar fits were attempted for $2S=6$,
yielding $\eta^{(S)}_E (2S=6) = 0.53\pm0.41$ and
$\eta^{(S)}_\sigma (2S=6)= 0.236\pm0.036$.
While the statistical error on $\eta^{(S)}_E (2S=6)$
is too large to render the fitting meaningful, the increase in
the value of $\eta^{(S)}_\sigma (2S=6)$ when compared with
$\eta^{(S)}_\sigma (2S=5)$ is added evidence that $2S=6$ is {\sl not}
in the rough phase; if it were still rough at that
value of $S$, we would have expected a continuation of
the decreasing trend of $\eta^{(S)}_\sigma$ with $S$.
As for the cases of $2S=7$ and $8$,
the structure factors of both the spin
and bond-energy operators
show {\it weaker} than power-law behavior as ${\bf q} \to {\bf Q}$, as in
Figs.~\ref{fig7} and~\ref{fig8},
but they increase to a larger value (not seen in these logarithmic plots)
right {\it at} $\bf Q$.
This is indeed consistent with the
$\delta$-function singularity
expected if these cases fall into the smooth phase
with long-ranged order of the spin and bond-energy operators.
Finally, we consider the uniform magnetization correlation exponent $\eta_M$.
When $S>3/2$, it can be predicted (see $\eta^{(K)}_M$ in Table I) that
$\eta_M< 2$, implying a divergent (ferromagnetic)
susceptibility and a divergent structure factor $S_M({\bf q})$
as ${\bf q}\to 0$.
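The implication is the standard one: with
$\langle M({\bf 0})M({\bf r})\rangle \sim r^{-\eta_M}$, the
susceptibility of a system of linear size $L$ scales as
$$
\chi \sim \int^L d^2r \; r^{-\eta_M} \sim L^{2-\eta_M} \;,
$$
which diverges for $\eta_M<2$; equivalently,
$S_M({\bf q}) \sim |{\bf q}|^{\eta_M-2}$ blows up as ${\bf q}\to 0$.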
Now, due to the linear relation (\ref{eq-M})
between $\{ M({\bf R}) \}$ and $\{ \sigma({\bf r}) \}$,
we immediately obtain
$S_M({\bf q}) \sim S_\sigma({\bf q})$ near ${\bf q}=0$,
just as $S_E({\bf q}) \sim \langle |z({\bf q})|^2\rangle$
near ${\bf q}={\bf Q}$
(see Sec.~\ref{Zone-corner} and Eq.~(\ref{eq-SE})).
Thus, a singularity at ${\bf q}=0$
is expected in the structure factor of the spin operator, which is plotted in
Fig.~\ref{fig9}. From this figure, it appears
that only for $2S=4$, $5$, and $6$ does $S_M({\bf q})$ show a
power-law singularity indicated by a straight line in this
log-log plot. This confirms the prediction based
on the stiffness constant; however,
the numerical values of $\eta_M$
extracted this way (see Table I) differ considerably from those
calculated from the stiffness constant in the case of
$2S=5$ and $6$.
It is also apparent from Table I that $\eta_\sigma^{(S)}$ is
systematically overestimated as compared with the more accurate
value derived from height fluctuations.
We suspect that a similar overestimation affected the
values of $\eta_\sigma$ that
were deduced from the finite-size scaling of the
susceptibility of the staggered magnetization\cite{Nagai,Honda}
(this obviously measures
the same fluctuations seen in $S_\sigma({\bf q})$ near $\bf Q$.)
Those data (also quoted in Ref.~\onlinecite{Lipowski})
have quoted errors about four times as large as ours
for $\eta_\sigma^{(K)}$.
Their exponent values are all noticeably larger than the
accurate value ($\eta_\sigma^{(K)}$ or $\eta_\infty$ from
Ref.~\onlinecite{Lipowski}) -- becoming {\it worse} as $S$ grows
(for $2S=4,5$ the difference is twice their quoted error.)
Clearly the systematic contribution to their errors was underestimated.
The transfer-matrix method\cite{Lipowski}
ought to provide the
effective exponent $\eta_\sigma$ for spin correlations
on length scales comparable to the strip width, and hence
is likewise expected to overestimate $\eta_\sigma$;
indeed, every $\eta_\sigma$ value found in Ref.~\onlinecite{Lipowski}
is slightly larger than our corresponding $\eta_\sigma^{(K)}$ value.
\subsection{Smooth Phase}
\label{MC-Smooth}
Which type of flat state is actually selected in the smooth phase?
Fig.~\ref{fig10} shows the measured expectation of
$n_s$, the number of zero spins in the spin-1 representation,
for $1 \leq 2 S \leq 8$.
As $S$ grows, it is found that $\langle n_s\rangle$ approaches its
maximum allowed value $N_s/3$ as in the $(+,-,0)$ state,
rather than zero, as in the $(+,+,-)$ state.
Thus, the flat states with
half-integer valued $h({\bf R})$ in Fig.~\ref{fig1} are being selected
in the smooth phase.
Translating back to the spin-$S$ model, this means
that spins on two sublattices of the triangular lattice
take the extremal values, $+S$ and $-S$ respectively,
while spins on the third sublattice remain disordered.
It is perhaps more illuminating to study the distribution of height
variables to probe the height fluctuations in the smooth phase.
To this end, we also show, in Fig.~\ref{fig10}, the histogram of
the height variable $h({\bf R})$ in the cases of $2S=2$ and $2S=8$,
which is measured for a {\sl typical} configuration generated in
the Monte Carlo simulations.\cite{FN-fig10}
The broad distribution
observed in the case of $2S=2$ ($S<S_L$)
evolves to a narrowly peaked distribution
in the case of $2S=8$ ($S>S_L$). (It decays as
$\exp(-{\rm const}|h-\langle h \rangle|)$.) This supports the
intuitive picture presented in Sec.~\ref{Smooth}.
Furthermore, the center of this peaked distribution is half-integer
valued. (Numerically, the mean is $\langle h\rangle =0.46$ for
the distribution plotted in Fig.~\ref{fig10}.)
In other words, the locking potential $V(h)$ favors the $(+,0,-)$
type of flat state, in which one sublattice is flippable,
rather than the $(+,+,-)$ type of flat state. (See Fig.~\ref{fig1}).
This kind of flat state was also expected
analytically in the limit of large $S$
\cite{Horiguchi2,Horiguchi}.
We have also computed ${\rm Var} (h)$ for each value of $S$, in two ways.
First, ${\rm Var}(z)$ is just normalization factors times
$\sum _{\bf q \neq 0} \langle |z({\bf q})|^2 \rangle$, which we accumulated
throughout the Monte Carlo run, as described earlier in this
section; then it can be shown that
${\rm Var}(h) = {\rm Var}(z) -{1\over 3} + {1 \over 2} \langle n_s \rangle$ exactly.
For $N_s=72$ this gives ${\rm Var}(h) = 1.06$ and $0.20$
for $2S=2$ and $2S=8$, respectively, showing the contrast of
the rough and smooth behavior.
Secondly, we can compute ${\rm Var}(h)$ directly from the histogram
(from one snapshot) seen in Fig.~\ref{fig10}; this gives respective
values $1.1$ and $0.15$, in satisfactory agreement with the first method.
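As a concrete (if elementary) illustration of the second method, a snapshot
histogram can be converted to a variance as follows; the numbers below are
placeholders mimicking the roughly exponential shape of the $2S=8$
histogram, not the measured data:
\begin{verbatim}
import numpy as np

# Placeholder histogram of h(R): bin centers and occupation counts,
# shaped like exp(-const*|h - 1/2|) to mimic the 2S=8 snapshot.
h_vals = np.arange(-2.5, 3.51, 0.5)
counts = np.exp(-2.5 * np.abs(h_vals - 0.5))

p = counts / counts.sum()
mean_h = np.sum(p * h_vals)
var_h = np.sum(p * (h_vals - mean_h) ** 2)
print(mean_h, var_h)
\end{verbatim}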
The exotic ``hidden order'' phase\cite{Lipowski,Lipowski2}
(see Sec.~\ref{Smooth})
can be ruled out on the basis of these data:
according to Eq.~(\ref{eq-dhbound})
the variance of $h({\bf R})$ should be at least $(3/2)^2=2.25$
in the hidden-order phase,
while our measurements indicate it is at most only $0.20$.
Furthermore, for $2S=7$ and $8$, the structure factor $S_\sigma({\bf Q})$
at the zone-corner wave vector $\bf Q$
(not plotted) was much larger than at nearby $\bf q$;
this directly suggests a $\delta$-function
singularity in the thermodynamic limit, i.e., the existence of
long-ranged spin order in which
$\langle s({\bf r})\rangle \ne 0$ on at least two of the sublattices.
Additionally, the spin structure factor $S_\sigma({\bf q})$
near the zone-corner wave vector $\bf Q$ (Fig.~\ref{fig8})
showed a striking curvature in the ``smooth'' cases $2S=7$ and $8$,
quite different from the behavior at smaller $S$. This makes it plausible that
$S_\sigma({\bf q}) \to {\rm constant}$, so that
spin fluctuations have short-range rather than power-law correlations
for $S>S_L$.
(It was not emphasized in Ref.~\onlinecite{Lipowski},
but power-law correlations are implied if one takes seriously
their measured values $0< \eta_\sigma < 1/9$ for $2S=7,8$.)
We propose, then, that actually $\eta_{\sigma} = \eta_{E} = \eta_{M} =0$
for $S> S_{c2}$, as in the simplest picture of the smooth phase,
and that the observed nonzero values are simply finite-size effects
due to the very slow crossover from rough to smooth behavior near
a roughening transition
(see Sec.~\ref{Disc-crossover}, below, for further discussion.)
\section{Conclusion and Discussion}
\label{Conc-Disc}
To conclude, in this article, we have investigated the
ground-state properties of the antiferromagnetic
Ising model of general spin on the triangular lattice
by performing Monte Carlo simulations.
Utilizing the interfacial representation, we extrapolated
the stiffness constants by studying the long-wavelength
singularity in the height variables, which in turn leads
to a straightforward calculation of the critical exponents
of various operators within the framework of the height
representation. The results so obtained are further
compared with those extracted from a more traditional
method, and demonstrate that the height-representation
method is by far the preferable one
for extracting the critical exponents.
We also analyzed both the dynamical and static properties
of the model in order to map out the phase diagram which consists of
three phases with a Kosterlitz-Thouless phase
transition at ${3\over 2}<S_{KT}<2$ and a locking phase transition
at ${5\over 2} < S_{L} \le 3$.
Even in the smooth state,
analysis of the height fluctuations (as in ${\rm Var}(h)$)
was helpful in resolving questions which are made difficult by
the strong finite-size effects near the locking transition.
\subsection{Rational exponents?}
One of our initial motivations for this study was the possibility
of finding rational exponents even for $S>1/2$.
We believe the results in Table~I are the first
which are accurate enough to rule out this possibility.
Indeed, $\eta_\sigma(2S=4)\approx 3/16$ and $\eta_\sigma(2S=5)\approx4/27$,
with differences similar to the error (0.001).
But {\it any} random number differs from a rational number with
denominator $<30$ by the same typical error.
The exception is that $\eta_\sigma^{(K)}(2S=6)$ is quite close to $1/9$,
but we have given other reasons to be suspicious of this value.
\subsection {What is $S_{L}$?}
\label{Disc-crossover}
Another intriguing question was whether
the critical values $2S_{KT}$ and $2 S_{L}$ are exactly integers.
Previous data\cite{Lipowski} suggested that
$S_L\equiv 3$ exactly, and had large enough errors
that $S_{KT}=3/2$ could not be excluded.
Since $\eta_\sigma(S_{KT})\equiv 1/4$ and $\eta_\sigma(S_L)\equiv 1/9$,
this question was answered by the preceding subsection:
we find that definitely $S_{KT}<3/2$. Furthermore, we suspect
$S_{L} < 3$ as concluded in Sec.~\ref{Stiffness}
since the effective stiffness at the length scale we access
is more than enough to drive the system to the locked phase.
The question of the value of $S_L$ suggests paying closer
attention to the behavior of systems near the locking transition.
It has been noted previously how the locked phase tends to
behave qualitatively like the rough phase in a finite-size system, since the
crossover is a very slow function of size.\cite{Blote3}
This is consistent with the apparent power-law behaviors observed
at $S>S_{L}$ in previous studies\cite{Nagai,Lipowski} and with
the tendency of those studies to overestimate the exponents $\eta_{\sigma}$
and $\eta_{E}$ (as compared with our more accurate estimates.)
This would suggest that, if extensive finite-size
corrections were included in our analysis, they would
reduce our estimate of $S_{L}$ a bit further, i.e.
we would more definitely conclude that $2S=6$ is in the locked phase.
Our analysis near the locking transition at $S_{L}$ suffers from
our ignorance of the expected functional form of the critical behavior
as a function of $S-S_{L}$.
A study of the roughening transition\cite{Evertz}
used the Kosterlitz-Thouless (KT) renormalization group to
derive analytic approximations for the total height fluctuation
(closely analogous to ${\rm Var}(h)$
in our problem), which made it possible to overcome very strong
finite-size effects and fit the roughening temperature precisely.
Use of KT finite-size corrections was also essential in
extracting meaningful numbers from transfer-matrix calculations
near the locking transition induced by a magnetic field in
Ref.~\onlinecite{Blote3}.
Thus, a similar adaptation of the
KT renormalization group to give expressions
for the behavior of $\langle | z({\bf q}) |^2 \rangle $, as a function
of (small) $|{\bf q}|$ and $S-S_{L}$, or the functional form of $K(S)$ near
$S_{L}$, could make possible a
more conclusive answer as to whether $S_{L}=3$ exactly.
\subsection{Possible improved algorithms}
Since the long-wavelength behavior in this model (in its rough phase) is
purely Gaussian with $z=2$ (see Sec.~\ref{Dynamical}), the critical slowing
down is particularly transparent.
It seems feasible to take advantage of the existence of a height
representation to develop an acceleration algorithm.
For example, it might be possible to extend the cluster algorithms
which are known for the $S=1/2$
triangular Ising antiferromagnet.\cite{cluster}
These are well-defined at $T>0$, but their effectiveness seems
to depend in a hidden fashion on the existence of the height representation
when $T\to 0$.
An intriguing alternative approach starts from the observation that
at long wavelengths the system obeys Langevin dynamics
(see Sec.~\ref{Dynamical}
and Ref.~\onlinecite{Henley1}).
Fourier acceleration\cite{Batrouni}, a nonlocal algorithm
(efficient owing to use of the Fast Fourier Transform algorithm),
is known to be effective in such cases.
The key is to replace the uncorrelated noise function $\xi({\bf x},t)$
of Eq.~(\ref{eq18}) with a new correlated noise function having
$\langle |\xi ({\bf q},t)|^2\rangle \sim 1/|{\bf q}|^2$.
This might be implemented by first constructing a random function
with such correlations, and then updating flippable spins
with probabilities determined by that function, in such a fashion as
to satisfy detailed balance.
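One way to construct such a correlated noise field, sketched here for a
single time slice on a square $L\times L$ grid (this illustrates only the
$1/|{\bf q}|^2$ filtering step, not the detailed-balance update itself):
\begin{verbatim}
import numpy as np

L = 64
rng = np.random.default_rng(0)
white = rng.standard_normal((L, L))          # uncorrelated noise

qx = 2.0 * np.pi * np.fft.fftfreq(L)
qy = 2.0 * np.pi * np.fft.fftfreq(L)
qnorm = np.sqrt(qx[:, None]**2 + qy[None, :]**2)
qnorm[0, 0] = 1.0                            # avoid division by zero

xi_q = np.fft.fft2(white) / qnorm            # <|xi(q)|^2> ~ 1/|q|^2
xi_q[0, 0] = 0.0                             # drop the q = 0 mode
xi = np.real(np.fft.ifft2(xi_q))             # correlated noise xi(x)
\end{verbatim}
The flippable-spin update probabilities would then have to be built from
$\xi({\bf x})$ in a way that respects detailed balance, as noted above.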
Additionally, it may be possible to analyze transfer-matrices
using the height representation.
Quite possibly this would yield
an order-of-magnitude improvement in the accuracy
of the numerical results, for the same size system,
similar to the improvement in analysis of Monte Carlo data.
The transfer matrix breaks up into sectors corresponding to the step
made by $z({\bf r})$ upon following a loop transverse to the
strip (across the periodic boundary conditions).
Then the stiffness can be extracted directly from
the ratio of the dominant eigenvalues of two such sectors;
such an analysis is already standard for
quasicrystal random tilings, for which the long-wavelength degree of
freedom is also an effective interface
\cite{quasicrystal}.
\acknowledgements
C.Z. gratefully acknowledges the support
from NSF grant DMR-9419257 at Syracuse University.
C.L.H. was supported by NSF grant DMR-9214943.
\newpage
\widetext
\begin{table}
\caption{Stiffness constant and critical exponents. Here
$\eta_{\sigma}^{(K)}$, $\eta_{E}^{(K)}$ and $\eta_{M}^{(K)}$
are the estimates for the critical exponents of the spin,
bond-energy, and uniform-magnetization operators calculated from the stiffness
constant $K$ as done in Sec. V(C), while $\eta_{\sigma}^{(S)}$,
$\eta_{E}^{(S)}$, and $\eta_{M}^{(S)}$ stand for the same critical
exponents, but extracted from the singularities of
their respective structure factors in Sec. V(D). Estimated
errors are given in parentheses.}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
$2S$ & $K$ & $\eta_{\sigma}^{(K)}$ & $\eta_{E}^{(K)}$ & $\eta_{M}^{(K)}$
& $\eta_{\sigma}^{(S)}$ & $\eta_{E}^{(S)}$ & $\eta_{M}^{(S)}$
\\ \hline\hline
1 & 0.349(0.001) & 0.500(0.002) & 2.001(0.008) & 4.502(0.018)
& 0.511(0.013) & 1.844(0.057) & \\ \hline
2 & 0.554(0.003) & 0.315(0.001) & 1.260(0.006) & 2.836(0.013)
& 0.332(0.016) & 1.340(0.072) & \\ \hline
3 & 0.743(0.004) & 0.235(0.001) & 0.940(0.005) & 2.114(0.011)
& 0.254(0.019) & 1.047(0.082) & \\ \hline
4 & 0.941(0.006) & 0.186(0.001) & 0.742(0.004) & 1.670(0.010)
& 0.203(0.022) & 0.791(0.092) & 1.634(0.014)\\ \hline
5 & 1.188(0.008) & 0.147(0.001) & 0.588(0.004) & 1.322(0.009)
& 0.180(0.026) & 0.504(0.115) & 1.560(0.015)\\ \hline
6 & 1.597(0.015) & 0.109(0.001) & 0.437(0.004) & 0.984(0.009)
& 0.236(0.036) & 0.530(0.410) & 1.527(0.016)\\ \hline\hline
\end{tabular}
\end{table}
\twocolumn
\section{Improved Actions}
In lattice QCD, one of the fundamental challenges is to minimize the
errors caused by discretizing space-time. This is accomplished
through a combination of advances in computer technology, and advances
in the formulation of methods to solve the problem computationally,
including the development of improved numerical algorithms. We show
an example of how an improved gauge field action can be used to
suppress artificial lattice contributions to physical measurements.
We consider gluon actions that are constructed in a gauge-invariant
fashion from a combination of Casimir invariants of closed Wilson
loops. In principle, a lattice action of this type can consist of an
arbitrary sum of Wilson loops, but a truncation to a small set of
localized loops is necessary due to computational expense. We
study actions constructed from $(1\times1)$ and
$(1\times2)$ loops:
\setlength{\tabcolsep}{0.00pc}
\vspace{0.25cm}
\begin{tabular}{@{\hspace{-0.5pc}}lcl@{\hspace{0.0pc}}}
\multicolumn{2}{@{\hspace{-0.5pc}}l@{\hspace{0.0pc}}}{$S=
K^{1\times1} $}&$ \displaystyle\sum {\rm Re}\,{\rm Tr}\, W^{(1 \times 1)} $ \cr
$ + $&$ K^{1\times2} $&$ \displaystyle\sum {\rm Re}\,{\rm Tr}\, W^{(1 \times 2)} $ \cr
$ + $&$ K_6^{1 \times 1} $&$ \displaystyle\sum {\rm Re}\,[
{3 \over 2} ({\rm Tr}\, W^{(1 \times 1)} )^2
- {1 \over 2} {\rm Tr}\, W^{(1 \times 1)} ] $ \cr
$ + $&$ K_8^{1 \times 1} $&$ \displaystyle\sum [ {9 \over 8} |{\rm Tr}\, W^{(1 \times 1)} |^2
- {1 \over 8} ]$ \cr
\end{tabular}
\vspace{0.25cm}
where the actions and coefficients, in order of increasing improvement
(approximation to the renormalization trajectory) are given by:
\vspace{0.5cm}
\setlength{\tabcolsep}{0.60pc}
\begin{tabular}{@{\hspace{-0.5pc}}lcccc@{\hspace{0.0pc}}}
Action &$ K^{1\times1} $&$ K^{1\times2} $&$K_6^{1 \times 1}$&$
K_8^{1 \times 1} $\cr
WA &$ 1 $&$ 0 $&$ 0 $&$ 0 $ \cr
SW &$ 5/3$&$-1/12$&$ 0 $&$ 0 $ \cr
RGT &$ k $&$ -0.04k $&$ -0.12k $&$
-0.12k $ . \cr
\end{tabular}
\vspace{0.5cm}
To compare these three actions, we generate four ensembles:
\vspace{0.5cm}
\setlength{\tabcolsep}{0.45pc}
\begin{tabular}{@{\hspace{-0.5pc}}lrrrr@{\hspace{0.0pc}}}
Action & \multicolumn{1}{c}{Size} &$\beta$&
\multicolumn{1}{r}{$\approx\beta_{Wil}$}&$\ \ N$ \cr
Wilson &$ (16^3\times40) $ &$\ \ 6.0 $&$ 6.0\,\,\, $&$ 35 $\cr
SW &$ (16^3\times32) $ &$\ \ 4.2 $&$ 5.8\,\,\, $&$ 36 $\cr
SW &$ (16^3\times32) $ &$\ \ 4.43$&$ 6.0\,\,\, $&$ 40 $\cr
RGT &$ (18^3\times36) $ &$k=10.58$&$ 6.0\,\,\, $&$ 28 $\cr
\end{tabular}
\vspace{0.5cm}
The two SW ensembles allow us to study the effect of increasing
$\beta$; we have used estimates of the corresponding Wilson action $\beta$
from the deconfining phase transition temperature calculation by Cella
{\it et al.\/}\cite{Cella}. Since we have used a modest number of
configurations in each case, we focus on the qualitative comparison
between Wilson and improved actions. Further calculations with a
larger number of lattices would be needed for quantitative studies,
for example, to determine the consistency and scaling of
$\chi_t/m_\rho$.
\section{Topology: Comparing Actions}
Lattice topology provides a test case for comparing various gauge
field actions. There are several prescriptions for measuring
topological charge
$$ Q = {1\over32\pi^2} \int d^4x F(x) \tilde{F}(x) $$ on the lattice, and
each prescription is subject to a different set of lattice cutoffs and
renormalizations which affect the measurement of the topological
susceptibility $\chi_t=\left\langle Q^2\right\rangle/V$. In the
plaquette method the topological density $F(x)\tilde{F}(x)$ is
constructed from a product of lattice $(1\times1)$ Wilson loops. This
method in general gives noninteger values of the topological charge,
and is affected by large multiplicative and additive lattice
renormalizations\cite{mixing}. The geometric method\cite{Luscher}
does guarantee an integer topological charge (except for
``exceptional'' configurations) but is not guaranteed to obey physical
scaling in the continuum limit, and is in fact known to violate
scaling for the Wilson action\cite{Gockeler89}. Low-action
dislocations which can be suppressed by improving the
action\cite{Gockeler89} contaminate the geometric $\chi_t$.
In the cooling prescription, ultraviolet fluctuations in the fields
are removed by locally minimizing the action in successive sweeps,
isolating instanton-like configurations. After cooling, a single
instanton configuration spanning several lattice spacings has a
computed charge of nearly one using either the geometric or plaquette
formula; we therefore apply the plaquette formula to the cooled
configurations to obtain a value for $Q$. Lattice artifacts are very
different among these methods, and we can in general get different
results for plaquette ($Q_p$), the geometric ($Q_g$), and the cooling
($Q_c$) topological charges computed on the same original
configuration. For improved actions, we expect lattice artifacts such
as dislocations to be suppressed, therefore we test this prediction by
comparing the different topological charge methods {\it with each
other\/}.
The cooling prescription actually encompasses a family of cooling
algorithms. Typically one cools by selecting a link $U$ to minimize
some action $S_c$, and since cooling is merely used as a tool to
isolate instantons, there is no reason to tie $S_c$ to the Monte Carlo
gauge action $S$. The cooling algorithms $S_c$ we consider here are
linear combinations of Wilson loops with coefficients $c_{(1\times1)}$
and $c_{(1\times2)}$, and since action is minimized only the ratio
$r_{12} = c_{(1\times2)}/c_{(1\times1)}$ is significant. The cooling
algorithm with $r_{12}=-0.05$ removes the leading scaling violation
from the classical instanton action; for comparison, we also include
cooling algorithms with $r_{12}=0$ and $r_{12}=-0.093$, the latter
derived from a linear weak-coupling approximation to the RGT action.
For the case $r_{12}=0$, the lack of a barrier to a
decrease in the instanton size causes the instanton to disappear by
implosion during the cooling process, and for $r_{12}=-0.093$ a large
instanton expands until halted by the boundary. We cool for $200$
sweeps for all three algorithms, and the comparison between these
three is an indication of the systematic effect of picking some
particular means of cooling. We note that with $200$ sweeps of Wilson
cooling most of the topological charges are retained, since the large
instantons haven't had enough time to implode. In general, we do not
see any effect from the selection of the cooling algorithm, except
perhaps in one ensemble.
\begin{figure}[t]
\epsfxsize=7.0cm \epsfbox{wagcs1.eps}
\caption{Comparison of geometric and cooled topological charges for
Wilson action lattices at $a \approx 0.1{\rm fm}$. Each point
represents one configuration, with the cooled charge as the abscissa
and the geometric charge on the uncooled lattice as the ordinate. The
least-squares linear fit is shown. Due to close overlaps there appear
to be fewer than $35$ points.}
\label{fig:qgcwa}
\end{figure}
\begin{figure}[t]
\epsfxsize=7.0cm \epsfbox{gagcs1.eps}
\caption{Comparison of geometric and cooled topological charges for RGT
action lattices at $a \approx 0.1{\rm fm}$.}
\label{fig:qgcra}
\end{figure}
\section{Results}
As described above, we compute $Q_p$, $Q_g$, and $Q_c$ on all of our
lattices. We show two scatter plots (Figures \ref{fig:qgcwa},
\ref{fig:qgcra}) highlighting the discrepancy between $Q_g$ and $Q_c$.
The best fit line is constructed through the points on a scatter plot.
The slope of this line is an estimate of the ratio of multiplicative
renormalizations, and should be close to $1$ since both the geometric
and cooling methods give integer charges. The correlation
$$z_{gc} = \frac{\left\langle\left( Q_g - \bar{Q}_g\right)
\left( Q_c - \bar{Q}_c\right)\right\rangle }
{\sqrt{\left\langle\left(Q_g-\bar{Q}_g\right)^2\right\rangle
\left\langle\left(Q_c-\bar{Q}_c\right)^2\right\rangle} } $$
between $Q_g$ and $Q_c$ is a measure of random additive artifacts seen
by one method but not the other, such as dislocations which disappear
during cooling, therefore contributing to $Q_g$ but not to $Q_c$.
The scatter plots show a strong correlation between $Q_g$ and $Q_c$
for the RGT action, and a far weaker correlation for the WA,
suggesting that the effect of lattice artifacts on topological charge
is far less pronounced for the improved action.
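The two numbers extracted from each scatter plot are elementary to compute;
a minimal sketch, with made-up charges standing in for one ensemble, is:
\begin{verbatim}
import numpy as np

# Per-configuration topological charges (illustrative values only)
Qc = np.array([ 1.0, -1.0, 0.0, 3.0, -1.0, 2.0, 1.0, 2.0])   # cooled
Qg = np.array([ 1.0, -2.0, 0.0, 3.0, -1.0, 1.0, 0.0, 2.0])   # geometric

slope, intercept = np.polyfit(Qc, Qg, 1)     # least-squares line

dg, dc = Qg - Qg.mean(), Qc - Qc.mean()
z_gc = np.mean(dg * dc) / np.sqrt(np.mean(dg**2) * np.mean(dc**2))
print(slope, z_gc)
\end{verbatim}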
\begin{figure}[t]
\epsfxsize=7.2cm \epsfbox{corrs.eps}
\vspace{-0.5cm}
\caption{Correlation $z_{gc}$ between geometric and cooled topological
charges, for WA, SW, and RGT ensembles. Results for three different
cooling methods are shown.}
\label{fig:zgc}
\end{figure}
\begin{figure}[t]
\epsfxsize=7.2cm \epsfbox{corrp.eps}
\vspace{-0.5cm}
\caption{Statistical correlation $z_{pc}$ between plaquette and cooled
topological charges, for same ensembles, three cooling methods.}
\label{fig:zpc}
\end{figure}
\begin{table}[b]
\caption{Correlations, SW Cooling Linear Fits}
\setlength{\tabcolsep}{0.39pc}
\begin{tabular}{@{\hspace{0.2pc}}lllll@{\hspace{0.0pc}}}
Corr. & \multicolumn{1}{c}{WA} & \multicolumn{1}{c}{SW, $4.2$} &
\multicolumn{1}{c}{SW, $4.43$} & \multicolumn{1}{c}{RGT} \cr
$z_{gc} $&$ 0.28(13) $&$ 0.77(8) $&$ 0.80(6) $&$ 0.88(5) $\cr
$z_{pc} $&$ 0.04(16) $&$ 0.29(16) $&$ 0.31(13) $&$ 0.32(16)$\cr
\end{tabular}
\label{tab:corr}
\end{table}
\vspace{0.25cm}
In Fig.~\ref{fig:zgc} we show a comparison between the WA, SW, and RGT
actions of the correlation between $Q_g$ and $Q_c$.
Fig.~\ref{fig:zpc} similarly shows the correlation between $Q_p$ and
$Q_c$, computed in the same manner as $z_{gc}$, and numerical values
are in Table \ref{tab:corr}. The SW action serves as an intermediate
point between the other two actions, since the RGT action represents a
better estimate of the renormalization trajectory than the WA and SW
actions. It is unclear whether the spread in $z_{gc}$ at $\beta=4.43$
is due to any systematic effect of the cooling algorithm. It is
possible that for better improved actions, where the exponential
suppression of dislocations is greater than for the S-W action,
increasing $\beta$ will have a more profound effect than we have seen
here for the S-W action. For the plaquette method, we have shown
previously\cite{lat94} that the multiplicative $Z_P$ becomes less
severe as the action improves, and the increased correlation $z_{pc}$
suggests that the additive renormalization also decreases. In
addition, improving the action is far more effective than increasing
$\beta$ for suppressing lattice artifacts.
Future calculations can include a more comprehensive study of other
improved actions. Other methods for $\chi_t$, including the fermionic
method\cite{SmVink}, and an indirect measurement by calculating the
$\eta'$ mass\cite{Tsuk94,lat95}, should also be tested. Having
established a correlation between cooled and uncooled topology and
located individual instantons, we are now prepared to investigate
Shuryak's picture of a dilute instanton gas, and the influence of
instantons on hadronic physics by working directly on uncooled
lattices.
\noindent{\bf Acknowledgement.} The calculations were
performed at the Ohio (OSC) and National Energy Research
(NERSC) supercomputer centers, and the Advanced Computing Laboratory.
\section{Introduction}
In the first part of this paper we shall investigate a
special case of relative continuity of symplectically adjoint maps
of a symplectic space. By this, we mean the following.
Suppose that $(S,\sigma)$ is a symplectic space, i.e.\ $S$
is a real-linear vector space with an anti-symmetric,
non-degenerate bilinear form $\sigma$ (the symplectic form).
A pair $V,W$ of linear maps of $S$ will be called
{\it symplectically adjoint} if
$\sigma(V\phi,\psi) = \sigma(\phi,W\psi)$ for all $\phi,\psi \in S$.
Let $\mu$ and $\mu'$ be two
scalar products on $S$ and assume that,
for each pair $V,W$ of symplectically adjoint
linear maps of $(S,\sigma)$, the boundedness
of both $V$ and $W$ with respect to $\mu$
implies their boundedness with respect to $\mu'$.
Such a situation we refer to as {\it relative $\mu - \mu'$
continuity of symplectically adjoint maps} (of $(S,\sigma)$).
A particular example of symplectically adjoint maps is
provided by the pair $T,T^{-1}$ whenever $T$ is a symplectomorphism
of $(S,\sigma)$. (Recall that a symplectomorphism of $(S,\sigma)$
is a bijective linear map $T : S \to S$ which preserves the
symplectic form, $\sigma(T\phi,T\psi) = \sigma(\phi,\psi)$ for
all $\phi,\psi \in S$.)
In the more specialized case to be considered in the present work,
which will soon be indicated to be relevant in applications,
we show that a certain distinguished relation between a
scalar product $\mu$ on $S$
and a second one, $\mu'$,
is sufficient for the relative $\mu - \mu'$
continuity of symplectically adjoint maps.
(We give further details in Chapter 2,
and in the next paragraph.)
The result will be applied in Chapter 3 to answer
a couple of open questions
concerning the algebraic structure of the quantum theory of the free
scalar field in arbitrary globally hyperbolic spacetimes:
the local definiteness, local primarity and Haag-duality
in representations of the local observable algebras
induced by quasifree Hadamard states,
as well as the determination of the type of the local
von Neumann algebras in such representations.
Technically, what needs to be proved in our approach to this problem
is the continuity of the temporal evolution of the Cauchy-data of
solutions of the scalar Klein-Gordon equation
\begin{equation}
(\nabla^a \nabla_a + r)\varphi = 0
\end{equation}
in a globally hyperbolic spacetime with respect to a certain
topology on the Cauchy-data space.
(Here, $\nabla$ is the covariant derivative of the metric $g$
on the spacetime, and $r$ an arbitrary real-valued,
smooth function.)
The Cauchy-data space is a symplectic space on which the said
temporal evolution is realized by symplectomorphisms. It
turns out that the classical ``energy-norm'' of solutions
of (1.1), which is given by a scalar
product $\mu_0$ on the Cauchy-data space, and the
topology relevant for the required continuity statement
(the ``Hadamard one-particle space norm''), induced by a
scalar product $\mu_1$ on the Cauchy-data space, are precisely
in the relation for which our result on relative $\mu_0 - \mu_1$
continuity of symplectically adjoint maps applies. Since the continuity
of the Cauchy-data evolution in the classical energy norm,
i.e.\ $\mu_0$, is well-known, the desired continuity in the
$\mu_1$-topology follows.
The argument just described may be viewed as the prime example of
application of the relative continuity result. In fact,
the relation between $\mu_0$ and $\mu_1$ is abstracted from
the relation between the classical energy-norm and the
one-particle space norms arising from ``frequency-splitting'' procedures
in the canonical quantization of (linear) fields.
This relation has been made precise
in a recent paper by Chmielowski [11]. It provides the
starting point for our investigation in Chapter 2, where
we shall see that one can associate
with a dominating scalar product $\mu \equiv \mu_0$ on
$S$ in a canonical way a positive, symmetric operator
$|R_{\mu}|$ on the $\mu$-completion of $S$, and a family of scalar
products $\mu_s$, $s > 0$, on $S$, defined as $\mu$ with
$|R_{\mu}|^s$ as an operator kernel. Using abstract
interpolation, it will be shown that
then relative $\mu_0 - \mu_s$ continuity of symplectically adjoint maps
holds for all $0 \leq s \leq 2$. The relative
$\mu_0 - \mu_1$ continuity arises as
a special case.
In fact, it turns out that the indicated interpolation
argument may even be extended to an apparently more general
situation from which the relative $\mu_0 - \mu_s$ continuity
of symplectically adjoint maps derives as a corollary, see
Theorem 2.2.
Chapter 3 will be concerned with the application of the result
of Thm.\ 2.2 as indicated above. In the preparatory Section
3.1, some notions of general relativity will be summarized, along
with the introduction of some notation. Section 3.2 contains a brief
synopsis of the notions of local definiteness, local primarity and
Haag-duality in the context of quantum field theory in curved
spacetime. In Section 3.3 we present the $C^*$-algebraic quantization of the
KG-field obeying (1.1) on a globally hyperbolic spacetime, following [16].
Quasifree Hadamard states will be described in Section 3.4 according
to the definition given in [45]. In the same section we briefly summarize
some properties of Hadamard two-point functions, and derive, in
Proposition 3.5, the result concerning the continuity of the
Cauchy-data evolution maps in the topology of the Hadamard two-point
functions which was mentioned above. It will be seen in the last Section
3.5 that this leads, in combination with results obtained earlier
[64,65,66], to Theorem 3.6 establishing detailed properties of the algebraic
structure of the local von Neumann observable algebras in representations
induced by quasifree Hadamard states of the Klein-Gordon field over
an arbitrary globally hyperbolic spacetime.
\section{Relative Continuity of Symplectically Adjoint Maps}
\setcounter{equation}{0}
Let $(S,\sigma)$ be a symplectic space. A (real-linear) scalar
product $\mu$ on $S$ is said to {\it dominate} $\sigma$
if the estimate
\begin{equation}
|\sigma(\phi,\psi)|^2 \leq 4 \cdot \mu(\phi,\phi)\,\mu(\psi,\psi)\,,
\quad \phi,\psi \in S\,,
\end{equation}
holds; the set of all scalar products on $S$ which dominate $\sigma$
will be denoted by ${\sf q}(S,\sigma)$.
Given $\mu \in {\sf q}(S,\sigma)$, we write $H_{\mu} \equiv
\overline{S}^{\mu}$ for the completion of $S$ with respect to the
topology induced by $\mu$, and denote by $\sigma_{\mu}$ the
$\mu$-continuous extension, guaranteed to uniquely exist by (2.1),
of $\sigma$ to $H_{\mu}$. The estimate (2.1) then extends to
$\sigma_{\mu}$ and all $\phi,\psi \in H_{\mu}$. This entails
that there is a uniquely determined, $\mu$-bounded linear
operator $R_{\mu} : H_{\mu} \to H_{\mu}$ with the property
\begin{equation}
\sigma_{\mu}(x,y) = 2\,\mu(x,R_{\mu}y)\,, \quad x,y \in H_{\mu}\,.
\end{equation}
The antisymmetry of $\sigma_{\mu}$ entails for the
$\mu$-adjoint $R_{\mu}^*$ of $R_{\mu}$
\begin{equation}
R_{\mu}^* = - R_{\mu}\,,
\end{equation}
and by (2.1) one finds that the operator norm of $R_{\mu}$
is bounded by 1, $||\,R_{\mu}\,|| \leq 1$.
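(For instance, setting $x = R_{\mu}y$ in (2.2) and using the extension of
(2.1) to $H_{\mu}$ gives
$$ 2\,\mu(R_{\mu}y,R_{\mu}y) = \sigma_{\mu}(R_{\mu}y,y) \leq
2\,\mu(R_{\mu}y,R_{\mu}y)^{1/2}\,\mu(y,y)^{1/2}\,, $$
whence $\mu(R_{\mu}y,R_{\mu}y)^{1/2} \leq \mu(y,y)^{1/2}$ for all
$y \in H_{\mu}$.)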
The operator $R_{\mu}$ will be called the {\it polarizator} of $\mu$.
In passing, two things should be noticed here:
\\[6pt]
(1) $R_{\mu}|S$ is injective since $\sigma$ is a non-degenerate
bilinear form on $S$, but $R_{\mu}$ need not be injective
on all of $H_{\mu}$, as $\sigma_{\mu}$ may be degenerate.
\\[6pt]
(2) In general, it is not the case that $R_{\mu}(S) \subset S$.
\\[6pt]
Further properties of $R_{\mu}$ will be explored below.
Let us first focus on two significant subsets of ${\sf q}(S,\sigma)$ which
are intrinsically characterized by properties of the
corresponding $\sigma_{\mu}$ or, equivalently, the $R_{\mu}$.
The first is ${\sf pr}(S,\sigma)$, called the set of {\it primary}
scalar products on $(S,\sigma)$, where $\mu \in {\sf q}(S,\sigma)$ is
in ${\sf pr}(S,\sigma)$ if $\sigma_{\mu}$ is a symplectic form
(i.e.\ non-degenerate) on $H_{\mu}$. In view of (2.2) and
(2.3), one can see that this is equivalent to either
(and hence, both) of the following conditions:
\begin{itemize}
\item[(i)] \quad $R_{\mu}$ is injective,
\item[(ii)] \quad $R_{\mu}(H_{\mu})$ is dense in $H_{\mu}$.
\end{itemize}
The second important subset of ${\sf q}(S,\sigma)$ is denoted by
${\sf pu}(S,\sigma)$ and defined as consisting of those $\mu \in {\sf q}(S,\sigma)$
which satisfy the {\it saturation property}
\begin{equation}
\mu(\phi,\phi) = \sup_{\psi \in S\backslash \{0\} } \,
\frac{|\sigma(\phi,\psi)|^2}{4 \mu(\psi,\psi) } \,,\ \ \ \phi \in S \,.
\end{equation}
The set ${\sf pu}(S,\sigma)$ will be called the set of {\it pure} scalar
products on $(S,\sigma)$. It is straightforward to check that
$\mu \in {\sf pu}(S,\sigma)$ if and only if $R_{\mu}$ is a unitary
anti-involution, or complex structure, i.e.\
$R_{\mu}^{-1} = R_{\mu}^*$, $R_{\mu}^2 = - 1$. Hence
${\sf pu}(S,\sigma) \subset {\sf pr}(S,\sigma)$.
\\[10pt]
Our terminology reflects well-known relations between properties of
quasifree states on the (CCR-) Weyl-algebra of a symplectic space
$(S,\sigma)$ and properties of $\sigma$-dominating scalar products
on $S$, which we shall briefly recapitulate. We refer to
[1,3,5,45,49] and also references quoted therein for proofs
and further discussion of the following statements.
Given a symplectic space $(S,\sigma)$, one can associate with it
uniquely (up to $C^*$-algebraic equivalence) a $C^*$-algebra
${\cal A}[S,\sigma]$, which is generated by a family of unitary elements
$W(\phi)$, $\phi \in S$, satisfying the canonical commutation
relations (CCR) in exponentiated form,
\begin{equation}
W(\phi)W(\psi) = {\rm e}^{-i\sigma(\phi,\psi)/2}W(\phi + \psi)\,,
\quad \phi,\psi \in S\,.
\end{equation}
The algebra ${\cal A}[S,\sigma]$ is called the {\it Weyl-algebra}, or
{\it CCR-algebra}, of $(S,\sigma)$. It is not difficult to see that
if $\mu \in {\sf q}(S,\sigma)$, then one can define a state (i.e., a positive,
normalized linear functional) $\omega_{\mu}$ on ${\cal A}[S,\sigma]$ by setting
\begin{equation}
\omega_{\mu} (W(\phi)) : = {\rm e}^{- \mu(\phi,\phi)/2}\,, \quad \phi \in S\,.
\end{equation}
Any state on the Weyl-algebra ${\cal A}[S,\sigma]$ which can be realized in this way
is called a {\it quasifree state}. Conversely, given any quasifree
state $\omega_{\mu}$ on ${\cal A}[S,\sigma]$, one can recover its $\mu \in {\sf q}(S,\sigma)$ as
\begin{equation}
\mu(\phi,\psi) = 2 {\sf Re}\left. \frac{\partial}{\partial t}
\frac{\partial}{\partial \tau} \right|_{t = \tau = 0}
\omega_{\mu} (W(t\phi)W(\tau \psi))\,, \quad \phi,\psi \in S\,.
\end{equation}
So there is a one-to-one correspondence between quasifree states
on ${\cal A}[S,\sigma]$ and dominating scalar products on $(S,\sigma)$.
\\[10pt]
Let us now recall the subsequent terminology. To a state $\omega$
on a $C^*$-algebra $\cal B$ there corresponds (uniquely up to
unitary equivalence) a triple $({\cal H}_{\omega},\pi_{\omega},\Omega_{\omega})$,
called the GNS-representation of $\omega$ (see e.g.\ [5]), characterized by
the following properties: ${\cal H}_{\omega}$ is a complex Hilbertspace,
$\pi_{\omega}$ is a representation of $\cal B$ by bounded linear operators
on ${\cal H}_{\omega}$ with cyclic vector $\Omega_{\omega}$, and
$\omega(B) = \langle \Omega_{\omega},\pi_{\omega}(B)\Omega_{\omega}
\rangle $ for all $B \in \cal B$.
Hence one is naturally led to associate with $\omega$ and
$\cal B$
the $\omega$-induced von Neumann algebra $\pi_{\omega}({\cal B})^-$,
where the bar means taking the closure with respect to the weak
operator topology in the set of bounded linear operators on ${\cal H}_{\omega}$.
One refers to $\omega$ (resp., $\pi_{\omega}$) as {\it primary}
if $\pi_{\omega}({\cal B})^- \cap \pi_{\omega}({\cal B})' = {\bf C} \cdot 1$
(so the center of $\pi_{\omega}({\cal B})^-$ is trivial), where the
prime denotes taking the commutant, and as {\it pure} if
$\pi_{\omega}({\cal B})' = {\bf C}\cdot 1$ (i.e.\ $\pi_{\omega}$ is
irreducible --- this is equivalent to the statement that $\omega$
is not a (non-trivial) convex sum of different states).
In the case where $\omega_{\mu}$ is a quasifree state on a Weyl-algebra
${\cal A}[S,\sigma]$, it is known that (cf.\ [1,49])
\begin{itemize}
\item[(I)] $\omega_{\mu}$ is primary if and only if $\mu \in {\sf pr}(S,\sigma)$,
\item[(II)] $\omega_{\mu}$ is pure if and only if $\mu \in {\sf pu}(S,\sigma)$.
\end{itemize}
${}$\\
We return to the investigation of the properties of the polarizator
$R_{\mu}$ for a dominating scalar product $\mu$ on a symplectic space
$(S,\sigma)$. It possesses a polar decomposition
\begin{equation}
R_{\mu} = U_{\mu} |R_{\mu}|
\end{equation}
on the Hilbertspace $(H_{\mu},\mu)$, where $U_{\mu}$ is an isometry
and $|R_{\mu}|$ is symmetric and has non-negative spectrum. Since
$R_{\mu}^* = - R_{\mu}$, $R_{\mu}$ is normal and thus
$|R_{\mu}|$ and $U_{\mu}$ commute. Moreover, one has
$|R_{\mu}| U_{\mu}^* = - U_{\mu} |R_{\mu}|$, and hence $|R_{\mu}|$ and $U_{\mu}^*$ commute
as well. One readily observes that $(U_{\mu}^* + U_{\mu})|R_{\mu}| = 0$.
The commutativity can by the spectral calculus be generalized to the
statement that, whenever $f$ is a real-valued, continuous function
on the real line, then
\begin{equation}
[f(|R_{\mu}|),U_{\mu}] = 0 = [f(|R_{\mu}|),U_{\mu}^*] \,,
\end{equation}
where the brackets denote the commutator.
In a recent work [11], Chmielowski noticed that if one defines
for $\mu \in {\sf q}(S,\sigma)$ the bilinear form
\begin{equation}
\tilde{\mu}(\phi,\psi) := \mu (\phi,|R_{\mu}| \psi)\,, \quad \phi,\psi \in S,
\end{equation}
then it holds that $\tilde{\mu} \in {\sf pu}(S,\sigma)$. The proof of this is straightforward.
That $\tilde{\mu}$ dominates $\sigma$ will be seen in Proposition 2.1 below.
To check the saturation property (2.4) for $\tilde{\mu}$, it suffices to
observe that for given $\phi \in H_{\mu}$, the inequality in the
following chain of expressions:
\begin{eqnarray*}
\frac{1}{4} | \sigma_{\mu}(\phi,\psi) |^2 & = & |\mu(\phi,U_{\mu} |R_{\mu}| \psi)|^2
\ = \ |\mu(\phi,-U_{\mu}^*|R_{\mu}|\psi) |^2 \\
& = & |\mu(|R_{\mu}|^{1/2}U_{\mu}\phi,|R_{\mu}|^{1/2}\psi)|^2
\\
& \leq & \mu(|R_{\mu}|^{1/2}U_{\mu}\phi,|R_{\mu}|^{1/2}U_{\mu}\phi) \cdot \mu(|R_{\mu}|^{1/2}\psi,
|R_{\mu}|^{1/2}\psi) \nonumber
\end{eqnarray*}
is saturated and becomes an equality upon choosing $\psi \in H_{\mu}$
so that $|R_{\mu}|^{1/2}\psi$ is parallel to $|R_{\mu}|^{1/2}U_{\mu} \phi$.
Therefore one obtains for all $\phi \in S$
\begin{eqnarray*}
\sup_{\psi \in S\backslash\{0\}}\, \frac{|\sigma(\phi,\psi)|^2}
{4 \mu(\psi,|R_{\mu}| \psi) } & = & \mu(|R_{\mu}|^{1/2}U_{\mu} \phi,|R_{\mu}|^{1/2}U_{\mu} \phi)
\\
& = & \mu(U_{\mu}|R_{\mu}|^{1/2}\phi,U_{\mu} |R_{\mu}|^{1/2} \phi) \\
& = & \tilde{\mu}(\phi,\phi)\,,
\end{eqnarray*}
which is the required saturation property.
Following Chmielowski, the scalar product $\tilde{\mu}$ on $S$ associated with
$\mu \in {\sf q}(S,\sigma)$ will be called the {\it purification} of $\mu$.
It appears natural to associate with $\mu \in {\sf q}(S,\sigma)$ the family $\mu_s$,
$s > 0$, of symmetric bilinear forms on $S$ given by
\begin{equation}
\mu_s(\phi,\psi) := \mu(\phi,|R_{\mu}|^s \psi)\,, \quad \phi,\psi \in S\,.
\end{equation}
We will use the convention that $\mu_0 = \mu$.
Observe that $\tilde{\mu} = \mu_1$. The subsequent proposition ensues.
\begin{Proposition}
${}$\\[6pt]
(a) $\mu_s$ is a scalar product on $S$ for each $s \geq 0$. \\[6pt]
(b) $\mu_s$ dominates $\sigma$ for $0 \leq s \leq 1$. \\[6pt]
(c) Suppose that there is some $s \in (0,1)$ such that $\mu_s \in {\sf pu}(S,\sigma)$.
Then $\mu_r = \mu_1$ for all $r > 0$. If it is in addition assumed
that $\mu \in {\sf pr}(S,\sigma)$, then it follows that $\mu_r = \mu_1$ for all
$r \geq 0$, i.e.\ in particular $\mu = \tilde{\mu}$. \\[6pt]
(d) If $\mu_s \in {\sf q}(S,\sigma)$ for some $s > 1$, then $\mu_r = \mu_1$ for
all $r > 0$. Assuming additionally $\mu \in {\sf pr}(S,\sigma)$, one obtains
$\mu_r = \mu_1$ for all $r \geq 0$, entailing $\mu = \tilde{\mu}$.\\[6pt]
(e) The purifications of the $\mu_s$, $0 < s < 1$, are equal
to $\tilde{\mu}$: We have $\widetilde{\mu_s} = \tilde{\mu} = \mu_1$ for all
$0 < s < 1$.
\end{Proposition}
{\it Proof.} (a) According to (b), $\mu_s$ dominates $\sigma$ for
each $0 \leq s \leq 1$, thus it is a scalar product whenever $s$ is
in that range. However, it is known that
$\mu(\phi,|R_{\mu}|^s \phi) \geq \mu(\phi,|R_{\mu}| \phi)^s$ for all vectors
$\phi \in H_{\mu}$ of unit length ($\mu(\phi,\phi) = 1$) and
$1 \leq s < \infty$, cf.\ [60 (p.\ 20)]. This shows that
$\mu_s(\phi,\phi) \neq 0$ for all nonzero $\phi$ in $S$, $s \geq 0$.
\\[6pt]
(b) For $s$ in the indicated range there holds the following estimate:
\begin{eqnarray*}
\frac{1}{4} |\sigma(\phi,\psi)|^2 & = & |\mu(\phi,U_{\mu}|R_{\mu}| \psi)|^2
\ = \ |\mu(\phi,-U_{\mu}^*|R_{\mu}| \psi )|^2 \\
& = & | \mu(|R_{\mu}|^{s/2}U_{\mu} \phi, |R_{\mu}|^{1 - s/2} \psi) |^2 \\
& \leq & \mu(U_{\mu} |R_{\mu}|^{s/2}\phi,U_{\mu}|R_{\mu}|^{s/2} \phi)
\cdot \mu(|R_{\mu}|^{s/2}\psi,|R_{\mu}|^{2(1-s)}|R_{\mu}|^{s/2} \psi) \\
& \leq & \mu_s(\phi,\phi)\cdot \mu_s(\psi,\psi)\,, \quad \phi,\psi \in S\,.
\end{eqnarray*}
Here, we have used that $|R_{\mu}|^{2(1-s)} \leq 1$.
\\[6pt]
(c) If $(\phi_n)$ is a $\mu$-Cauchy-sequence in $H_{\mu}$, then it is,
by continuity of $|R_{\mu}|^{s/2}$, also a $\mu_s$-Cauchy-sequence in
$H_s$, the $\mu_s$-completion of $S$. Via this identification, we obtain
an embedding $j : H_{\mu} \to H_s$. Notice that $j(\psi) = \psi$
for all $\psi \in S$, so $j$ has dense range; however, one has
\begin{equation}
\mu_s(j(\phi),j(\psi)) = \mu(\phi,|R_{\mu}|^s \psi)
\end{equation}
for all $\phi,\psi \in H_{\mu}$. Therefore $j$ need not be injective.
Now let $R_s$ be the polarizator of $\mu_s$. Then we have
\begin{eqnarray*}
2\mu_s(j(\phi),R_s j(\psi))\ = \ \sigma_{\mu}(\phi,\psi) & = &
2 \mu(\phi,R_{\mu}\psi) \\
& = & 2 \mu(\phi,|R_{\mu}|^sU_{\mu}|R_{\mu}|^{1-s}\psi) \\
& = & 2 \mu_s(j(\phi),j(U_{\mu}|R_{\mu}|^{1-s})\psi)
\,,\quad \phi,\psi \in H_{\mu}\,.
\end{eqnarray*}
This yields
\begin{equation}
R_s {\mbox{\footnotesize $\circ$}} j = j {\mbox{\footnotesize $\circ$}} U_{\mu}|R_{\mu}|^{1-s}
\end{equation}
on $H_{\mu}$. Since by assumption $\mu_s$ is pure, we have
$R_s^2 = -1$ on $H_s$, and thus
$$ j = - R_s j U_{\mu}|R_{\mu}|^{1-s} = - j(U_{\mu}|R_{\mu}|^{1-s})^2 \,.$$
By (2.12) we may conclude
$$ |R_{\mu}|^{2s} = - U_{\mu} |R_{\mu}| U_{\mu} |R_{\mu}| = U_{\mu}^*U_{\mu}|R_{\mu}|^2 = |R_{\mu}|^2 \,, $$
which entails $|R_{\mu}|^s = |R_{\mu}|$. Since $|R_{\mu}| \leq 1$, we see that for
$s \leq r \leq 1$ we have
$$ |R_{\mu}| = |R_{\mu}|^s \geq |R_{\mu}|^r \geq |R_{\mu}| \,,$$
hence $|R_{\mu}|^r = |R_{\mu}|$ for $s \leq r \leq 1$. Whence $|R_{\mu}|^r = |R_{\mu}|$
for all $r > 0$. This proves the first part of the statement.
For the second part we observe that $\mu \in {\sf pr}(S,\sigma)$ implies that
$|R_{\mu}|$, and hence also $|R_{\mu}|^s$ for $0 < s < 1$, is injective. Then the
equation $|R_{\mu}|^s = |R_{\mu}|$ implies that $|R_{\mu}|^s(|R_{\mu}|^{1-s} - 1) = 0$,
and by the injectivity of $|R_{\mu}|^s$ we may conclude $|R_{\mu}|^{1-s} =1$.
Since $s$ was assumed to be strictly less than 1, it follows that
$|R_{\mu}|^r = 1$ for all $r \geq 0$; in particular, $|R_{\mu}| =1$.
\\[6pt]
(d) Assume that $\mu_s$ dominates $\sigma$ for some $s > 1$, i.e.\ it
holds that
$$ 4|\mu(\phi,U_{\mu}|R_{\mu}|\psi)|^2 = |\sigma_{\mu}(\phi,\psi)|^2
\leq 4\cdot \mu(\phi,|R_{\mu}|^s\phi)\cdot \mu(\psi,|R_{\mu}|^s\psi)\,, \quad \phi,\psi
\in H_{\mu}\,, $$
which implies, choosing $\phi = U_{\mu} \psi$, the estimate
$$ \mu(\psi,|R_{\mu}| \psi) \leq \mu(\psi,|R_{\mu}|^s \psi) \,,
\quad \psi \in H_{\mu}\,,$$
i.e.\ $|R_{\mu}| \leq |R_{\mu}|^s$. On the other hand, $|R_{\mu}| \geq |R_{\mu}|^r \geq |R_{\mu}|^s$
holds for all $1 \leq r \leq s$ since $|R_{\mu}| \leq 1$. This implies
$|R_{\mu}|^r = |R_{\mu}|$ for all $r > 0$. For the second part of the statement one
uses the same argument as given in (c). \\[6pt]
(e) In view of (2.13) it holds that
\begin{eqnarray*}
|R_s|^2j & = & - R_s^2 j\ =\ - R_s j U_{\mu} |R_{\mu}|^{1-s} \\
& = & - j U_{\mu} |R_{\mu}|^{1-s}U_{\mu}|R_{\mu}|^{1-s}\ =\ - j U_{\mu}^2 (|R_{\mu}|^{1-s})^2 \,.
\end{eqnarray*}
Iterating this one has for all $n \in {\bf N}$
$$ |R_s|^{2n} j = (-1)^n j U_{\mu}^{2n}(|R_{\mu}|^{1-s})^{2n}\,. $$
Inserting this into relation (2.12) yields for all $n \in {\bf N}$
\begin{eqnarray}
\mu_s(j(\phi),|R_s|^{2n}j(\psi)) & = & \mu(\phi,
|R_{\mu}|^s (-1)^n U_{\mu}^{2n}(|R_{\mu}|^{1-s})^{2n}\psi) \\
& = & \mu(\phi,|R_{\mu}|^s(|R_{\mu}|^{1-s})^{2n}\psi)\,,\quad \phi,\psi \in H_{\mu}\,.
\nonumber
\end{eqnarray}
For the last equality we used that $U_{\mu}$ commutes with $|R_{\mu}|^s$
and $U_{\mu}^2|R_{\mu}| = - |R_{\mu}|$. Now let $(P_n)$ be a sequence of polynomials
on the intervall $[0,1]$ converging uniformly to the square root
function on $[0,1]$. From (2.14) we infer that
$$ \mu_s(j(\phi),P_n(|R_s|^2)j(\psi)) = \mu(\phi,|R_{\mu}|^s P_n((|R_{\mu}|^{1-s})^2)
\psi)\,, \quad \phi, \psi \in H_{\mu} $$
for all $n \in {\bf N}$, which in the limit $n \to \infty$ gives
$$ \mu_s(j(\phi),|R_s|j(\psi)) = \mu(\phi,|R_{\mu}|\psi)\,, \quad
\phi,\psi \in H_{\mu}\,, $$
as desired. $\Box$
\\[10pt]
Proposition 2.1 underlines the special role of
$\tilde{\mu} = \mu_1$. Clearly, one has $\tilde{\mu} = \mu$ iff $\mu \in {\sf pu}(S,\sigma)$.
Chmielowski has proved another interesting connection between
$\mu$ and $\tilde{\mu}$ which we briefly mention here. Suppose that
$\{T_t\}$ is a one-parametric group of symplectomorphisms of
$(S,\sigma)$, and let $\{\alpha_t\}$ be the automorphism group
on ${\cal A}[S,\sigma]$ induced by it via $\alpha_t(W(\phi)) = W(T_t\phi)$,
$\phi \in S,\ t \in {\bf R}$. An $\{\alpha_t\}$-invariant quasifree
state $\omega_{\mu}$ on ${\cal A}[S,\sigma]$ is called {\it regular} if the unitary
group which implements $\{\alpha_t\}$ in the GNS-representation
$({\cal H}_{\mu},\pi_{\mu},\Omega_{\mu})$ of $\omega_{\mu}$ is strongly
continuous and leaves no non-zero vector in the one-particle space
of ${\cal H}_{\mu}$ invariant. Here, the one-particle space is spanned
by all vectors of the form $\left. \frac{d}{dt} \right|_{t = 0}
\pi_{\mu}(W(t\phi))\Omega_{\mu}$, $\phi \in S$.
It is proved in [11] that, if $\omega_{\mu}$ is a regular quasifree
KMS-state for $\{\alpha_t\}$, then $\omega_{\tilde{\mu}}$ is the unique
regular quasifree groundstate for $\{\alpha_t\}$. As explained in
[11], the passage from $\mu$ to $\tilde{\mu}$ can be seen as a rigorous
form of ``frequency-splitting'' methods employed in the canonical
quantization of classical fields for which $\mu$ is induced
by the classical energy norm. We shall come back to this in the
concrete example of the Klein-Gordon field in Sec.\ 3.4.
It should be noted that the purification map $\tilde{\cdot} :
{\sf q}(S,\sigma) \to {\sf pu}(S,\sigma)$, $\mu \mapsto \tilde{\mu}$, assigns to a quasifree state
$\omega_{\mu}$ on ${\cal A}[S,\sigma]$ the pure quasifree state $\omega_{\tilde{\mu}}$
which is again a state on ${\cal A}[S,\sigma]$. This is different from the
well-known procedure of assigning to a state $\omega$ on a
$C^*$-algebra ${\cal A}$, whose GNS representation is primary,
a pure state $\omega_0$ on ${\cal A}^{\circ} \otimes
{\cal A}$.
(${\cal A}^{\circ}$ denotes the opposite algebra of ${\cal A}$,
cf.\ [75].) That procedure was introduced by Woronowicz and is an
abstract version of similar constructions for quasifree states on
CCR- or CAR-algebras [45,54,75]. Whether the purification map
$\omega_{\mu} \mapsto \omega_{\tilde{\mu}}$ can be generalized from quasifree states
on CCR-algebras to a procedure of assigning to (a suitable class of)
states on a generic $C^*$-algebra pure states on that same algebra,
is in principle an interesting question, which however we shall not
investigate here.
\begin{Theorem} ${}$\\[6pt]
(a) Let $H$ be a (real or complex) Hilbertspace with
scalar product $\mu(\,.\,,\,.\,)$, $R$ a (not necessarily bounded) normal
operator in $H$, and $V,W$ two $\mu$-bounded linear operators on $H$
which are $R$-adjoint, i.e.\ they satisfy
\begin{equation}
W{\rm dom}(R) \subset {\rm dom}(R) \quad {\it and} \quad V^*R = R W \quad
{\rm on \ \ dom}(R) \,.
\end{equation}
Denote by $\mu_s$ the Hermitean form on ${\rm dom}(|R|^{s/2})$ given by
$$ \mu_s(x,y) := \mu(|R|^{s/2}x,|R|^{s/2} y)\,, \quad
x,y \in {\rm dom}(|R|^{s/2}),\ 0 \leq s \leq 2\,.$$ We write
$||\,.\,||_0 := ||\,.\,||_{\mu} := \mu(\,.\,,\,.\,)^{1/2}$ and
$||\,.\,||_s := \mu_s(\,.\,,\,.\,)^{1/2}$ for the corresponding semi-norms.
Then it holds for all $0 \leq s \leq 2$ that
$$ V{\rm dom}(|R|^{s/2}) \subset {\rm dom}(|R|^{s/2}) \quad
{\it and} \quad W{\rm dom}(|R|^{s/2}) \subset {\rm dom}(|R|^{s/2}) \,, $$
and $V$ and $W$ are $\mu_s$-bounded for $0 \leq s \leq 2$.
More precisely, the estimates
\begin{equation}
||\,Vx\,||_0 \leq v\,||\,x\,||_0 \quad {\rm and} \quad
||\,Wx\,||_0 \leq w\,||\,x\,||_0\,, \quad x \in H\,,
\end{equation}
with suitable constants $v,w > 0$, imply that
\begin{equation}
||\,Vx\,||_s \leq w^{s/2}v^{1 -s/2}\,||\,x\,||_s \quad {\rm and} \quad
||\,Wx\,||_s \leq v^{s/2}w^{1-s/2}\,||\,x\,||_s \,,
\end{equation}
for all
$ x \in {\rm dom}(|R|^{s/2})$ and
$0 \leq s \leq 2$.
\\[6pt]
(b)\ \ \ (Corollary of (a))\ \ \ \
Let $(S,\sigma)$ be a symplectic space, $\mu \in {\sf q}(S,\sigma)$ a dominating
scalar product on $(S,\sigma)$, and $\mu_s$, $0 < s \leq 2$, the
scalar products on $S$ defined in (2.11). Then we have relative
$\mu-\mu_s$ continuity of each pair
$V,W$ of symplectically adjoint linear maps of $(S,\sigma)$
for all $0 < s \leq 2$. More precisely, for each pair $V,W$ of
symplectically adjoint linear maps of $(S,\sigma)$, the estimates
(2.16) for all $x \in S$ imply (2.17) for all $x \in S$.
\end{Theorem}
{\it Remark.} (i) In view of the fact that the operator $R$
of part (a) of the Theorem may be unbounded, part (b) can be
extended to situations where it is not assumed that the scalar
product $\mu$ on $S$ dominates the symplectic form $\sigma$.
\\[6pt]
(ii) When it is additionally assumed that $V = T$ and $W = T^{-1}$
with symplectomorphisms $T$ of $(S,\sigma)$, we refer in that
case to the situation of relative continuity of the pairs
$V,W$ as relative continuity of symplectomorphisms.
In Example 2.3 after the proof of Thm.\ 2.2 we show that
relative $\tilde{\mu} - \mu$ continuity of symplectomorphisms fails in general.
Also, it is not the case that relative $\mu - \mu'$ continuity
of symplectomorphisms holds if $\mu'$ is an arbitrary element
in ${\sf pu}(S,\sigma)$ which is dominated by $\mu$ ($||\,\phi\,||_{\mu'} \leq
{\rm const.}||\,\phi\,||_{\mu}$, $\phi \in S$), see Example 2.4
below. This shows that the special relation between $\mu$ and $\tilde{\mu}$
(resp., $\mu$ and the $\mu_s$) expressed in (2.11,2.15) is important
for the derivation of the Theorem.
\\[10pt]
{\it Proof of Theorem 2.2.} (a)\ \ \ In a first step, let it be
supposed that $R$ is bounded.
From the assumed relation (2.15)
and its adjoint relation $R^*V = W^* R^*$ we obtain, for $\epsilon' > 0$
arbitrarily chosen,
\begin{eqnarray*}
V^* (|R|^2 + \epsilon' 1) V & = & V^*RR^* V + \epsilon' V^*V \ =
\ RWW^*R^* + \epsilon' V^*V \\
& \leq & w^2 |R|^2 + \epsilon' v^21 \ \leq \ w^2(|R|^2 + \epsilon 1)
\end{eqnarray*}
with $\epsilon := \epsilon'v^2/w^2$.
This entails for the operator norms
$$ ||\,(|R|^2 + \epsilon' 1 )^{1/2}V \,||
\ \leq\ w\,||\,(|R|^2 + \epsilon 1)^{1/2}\,|| \,, $$
and since $(|R|^2 + \epsilon 1)^{1/2}$ has a bounded inverse,
$$ ||\,(|R|^2 + \epsilon' 1)^{1/2} V
(|R|^2 + \epsilon 1 )^{-1/2} \,||\ \leq\ w\,. $$
On the other hand, one clearly has
$$ ||\,(|R|^2 + \epsilon' 1)^0 V (|R|^2 + \epsilon 1)^0\,||
\ =\ ||\,V\,||\ \leq\ v\,. $$
Now these estimates are preserved if $R$ and $V$
are replaced by their complexified versions on the complexified
Hilbertspace $H \oplus iH = {\bf C} \otimes H$.
Thus, identifying if necessary
$R$ and $V$ with their complexifications, a standard interpolation
argument (see Appendix A) can be applied to yield
$$ ||\,(|R|^2 + \epsilon' 1)^{\alpha} V
(|R|^2 + \epsilon 1)^{-\alpha} \,||\ \leq\ w^{2\alpha}
v^{1 - 2\alpha} $$
for all $0 \leq \alpha \leq 1/2$. Notice that this inequality
holds uniformly in $\epsilon' > 0$. Therefore we may conclude that
$$ ||\,|R|^{2\alpha}Vx \,||_0\ \leq\ w^{2\alpha}v^{1 - 2\alpha}
\,||\,|R|^{2\alpha}x\,||_0
\,, \quad x \in H\,,\ 0 \leq \alpha \leq 1/2\,,$$
which, upon setting $s = 4\alpha \in [0,2]$, is the required estimate for $V$. The analogous bound for
$W$ is obtained through replacing $V$ by $W$ in the
given arguments.
Now we have to extend the argument to the case that $R$ is unbounded.
Without restriction of generality we may assume that the Hilbertspace
$H$ is complex, otherwise we complexify it and with it all the
operators $R$,$V$,$W$, as above, thereby preserving their assumed properties.
Then let $E$ be the spectral measure of $R$, and denote by
$R_r$ the operator $E(B_r)RE(B_r)$ where $B_r := \{z \in {\bf C}:
|z| \leq r\}$, $r > 0$. Similarly define $V_r$ and $W_r$. From the
assumptions it is seen that $V_r^*R_r = R_rW_r$ holds for all
$r >0$. Applying the reasoning of the first step we arrive, for each
$0 \leq s \leq 2$, at the bounds
$$ ||\,V_r x\,||_s \leq w^{s/2}v^{1-s/2}\,||\,x\,||_s \quad {\rm and}
\quad ||\,W_r x\,||_s \leq v^{s/2}w^{1-s/2}\,||\,x\,||_s \,,$$
which hold uniformly in $r >0$ for all $x \in {\rm dom}(|R|^{s/2})$.
From this, since $V_r x \to Vx$ and $W_r x \to Wx$ in $H$ as $r \to \infty$ and $|R|^{s/2}$ is a closed operator, the statement of part (a) of the Theorem follows.\\[6pt]
(b) This is just an application of (a), identifying $H_{\mu}$ with $H$,
$R_{\mu}$ with $R$ and $V,W$ with their bounded extensions to $H_{\mu}$.
$\Box$
\\[10pt]
{\bf Example 2.3} We exhibit a symplectic space $(S,\sigma)$
with $\mu \in {\sf pr}(S,\sigma)$ and a symplectomorphism $T$ of $(S,\sigma)$
where $T$ and $T^{-1}$ are continuous with respect to $\tilde{\mu}$,
but not with respect to $\mu$. \\[6pt]
Let $S := {\cal S}({\bf R},{\bf C})$, the Schwartz space of rapidly decreasing
testfunctions on ${\bf R}$, viewed as real-linear space.
By $\langle \phi,\psi \rangle := \int \overline{\phi} \psi \,dx$
we denote the standard $L^2$ scalar product. As a symplectic form on
$S$ we choose
$$ \sigma(\phi,\psi) := 2 {\sf Im}\langle \phi,\psi \rangle\,, \quad
\phi,\psi \in S\,. $$
Now define on $S$ the strictly positive, essentially selfadjoint operator
$A\phi := -\frac{d^2}{dx^2}\phi + \phi$,
$\phi \in S$, in $L^2({\bf R})$. Its closure
will again be denoted by $A$; it is bounded below by $1$.
A real-linear scalar product $\mu$ will be defined on $S$ by
$$ \mu(\phi,\psi) := {\sf Re}\langle A\phi,\psi \rangle\,, \quad \phi,\psi \in
S. $$
Since $A$ has lower bound $1$, clearly $\mu$ dominates $\sigma$, and
one easily obtains $R_{\mu} = - i A^{-1}$, $|R_{\mu}| = A^{-1}$.
Hence $\mu \in {\sf pr}(S,\sigma)$ and
$$ \tilde{\mu}(\phi,\psi) = {\sf Re}\langle \phi,\psi \rangle\,,
\quad \phi,\psi \in S\,.$$
Now consider the operator
$$ T : S \to S\,, \quad \ \ (T\phi)(x) := {\rm e}^{-ix^2}\phi(x)\,,
\ \ \ x \in {\bf R},\ \phi \in S\,. $$
Obviously $T$ leaves the $L^2$ scalar product invariant, and hence also
$\sigma$ and $\tilde{\mu}$. The inverse of $T$ is just $(T^{-1}\phi)(x) =
{\rm e}^{i x^2}\phi(x)$, which of course leaves $\sigma$ and $\tilde{\mu}$
invariant as well. However, $T$ is not continuous with respect to $\mu$.
To see this, let $\phi \in S$ be some non-vanishing smooth function
with compact support, and define
$$ \phi_n(x) := \phi(x -n)\,, \quad x \in {\bf R}, \ n \in {\bf N}\,. $$
Then $\mu(\phi_n,\phi_n) = {\rm const.} > 0$ for all $n \in {\bf N}$.
We will show that $\mu(T\phi_n,T\phi_n)$ diverges for $n \to \infty$.
We have
\begin{eqnarray}
\mu(T\phi_n,T\phi_n) & = & \langle A T\phi_n,T\phi_n \rangle
\geq \int \overline{(T\phi_n)'}(T\phi_n)' \, dx \\
& \geq & \int (4x^2|\phi_n(x)|^2 + |\phi_n'(x)|^2)\, dx
- \int 4 |x \phi_n'(x)\phi_n(x)|\,dx \,,\nonumber
\end{eqnarray}
where the primes indicate derivatives and we have used that
$$ |(T\phi_n)'(x)|^2 = 4x^2|\phi_n(x)|^2 + |\phi_n'(x)|^2
+ 4\cdot {\sf Im}(ix \overline{\phi_n}(x)\phi_n'
(x))\,. $$
Using
a substitution of variables, one can see that in the last term
of (2.18) the positive integral grows like $n^2$ for large $n$, thus
dominating eventually the negative integral which grows only like $n$.
So $\mu(T\phi_n,T\phi_n) \to \infty$ for $n \to \infty$, showing that
$T$ is not $\mu$-bounded.
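A short computation makes the growth rates explicit: substituting $y = x-n$ and choosing $R > 0$ with ${\rm supp}\,\phi \subset [-R,R]$, one finds
$$ \int 4x^2|\phi_n(x)|^2\,dx = 4n^2\!\int|\phi|^2\,dx + 8n\!\int y\,|\phi(y)|^2\,dy + 4\!\int y^2|\phi(y)|^2\,dy\,,
\qquad \int 4|x\,\phi_n'(x)\phi_n(x)|\,dx \leq 4(n+R)\!\int|\phi'\phi|\,dx\,, $$
so the right hand side of (2.18) is bounded below by $4n^2\!\int|\phi|^2\,dx$ minus terms growing at most linearly in $n$.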
\\[10pt]
{\bf Example 2.4} We give an example of a symplectic space
$(S,\sigma)$, a $\mu \in {\sf pr}(S,\sigma)$ and a $\mu' \in {\sf pu}(S,\sigma)$, where
$\mu$ dominates $\mu'$ and where there is a symplectomorphism $T$
of $(S,\sigma)$ which together with its inverse is $\mu$-bounded,
but not $\mu'$-bounded.\\[6pt]
We take $(S,\sigma)$ as in the previous example and write for each
$\phi \in S$, $\phi_0 := {\sf Re}\phi$ and $\phi_1 := {\sf Im}\phi$.
The real scalar product $\mu$ will be defined by
$$ \mu(\phi,\psi) := \langle\phi_0,A\psi_0\rangle + \langle \phi_1,
\psi_1 \rangle \,, \quad \phi,\psi \in S\,, $$
where the operator $A$ is the same as in the example before. Since its
lower bound is $1$, $\mu$ dominates $\sigma$, and it is not difficult
to see that $\mu$ is even primary. The real-linear scalar product
$\mu'$ will be taken to be
$$ \mu'(\phi,\psi) = {\sf Re}\langle \phi,\psi \rangle\,, \quad
\phi,\psi \in S\,.$$
We know from the example above that $\mu' \in {\sf pu}(S,\sigma)$. Also, it is
clear that $\mu'$ is dominated by $\mu$. Now consider the
real-linear map $T: S \to S$ given by
$$ T(\phi_0 + i\phi_1) := A^{-1/2} \phi_1 - i A^{1/2}\phi_0\,, \quad
\phi \in S\,.$$
One checks easily that this map is bijective with $T^{-1} = - T$,
and that $T$ preserves the symplectic form $\sigma$. Also,
$||\,.\,||_{\mu}$ is preserved by $T$ since
$$ \mu(T\phi,T\phi) = \langle \phi_1,\phi_1 \rangle +
\langle A^{1/2}\phi_0,A^{1/2}\phi_0 \rangle = \mu(\phi,\phi)\,,
\quad \phi \in S\,.$$
On the other hand, we have for each $\phi \in S$
$$ \mu'(T\phi,T\phi) = \langle \phi_1,A\phi_1 \rangle + \langle \phi_0,
A^{-1}\phi_0 \rangle \,, $$
and this expression is not bounded by a ($\phi$-independent) constant times
$\mu'(\phi,\phi)$, since $A$ is unbounded with respect to the
$L^2$-norm.
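A concrete witness of this unboundedness: take $\phi = i\phi_1$ with real-valued $\phi_1 \in {\cal S}({\bf R},{\bf R})$ and $||\,\phi_1\,||_{L^2} = 1$; then
$$ \mu'(\phi,\phi) = 1 \quad {\rm while} \quad \mu'(T\phi,T\phi) = \langle \phi_1,A\phi_1 \rangle\,, $$
and $\langle \phi_1,A\phi_1 \rangle = \int(|\phi_1'|^2 + |\phi_1|^2)\,dx$ can be made arbitrarily large, e.g.\ by taking $\phi_1(x) = k^{1/2}\psi(kx)$ with fixed real $\psi \in {\cal S}({\bf R},{\bf R})$, $||\,\psi\,||_{L^2} = 1$, and $k \to \infty$.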
\newpage
\section{The Algebraic Structure of Hadamard Vacuum Representations}
\setcounter{equation}{0}
${}$
\\[20pt]
{\bf 3.1 Summary of Notions from Spacetime-Geometry}
\\[16pt]
We recall that a spacetime manifold consists of a pair
$(M,g)$, where $M$ is a smooth, paracompact, four-dimensional
manifold without boundaries, and $g$ is a Lorentzian metric for $M$
with signature $(+ - -\, - )$. (Cf.\ [33,52,70], see these references
also for further discussion of the notions to follow.)
It will be assumed that $(M,g)$ is time-orientable, and moreover,
globally hyperbolic. The latter means that $(M,g)$ possesses
Cauchy-surfaces, where by a Cauchy-surface
we always mean a {\it smooth}, spacelike
hypersurface which is intersected exactly once by each inextendible
causal curve in $M$. It can be shown [15,28] that this is
equivalent to the statement that $M$ can be smoothly foliated in
Cauchy-surfaces. Here, a foliation of $M$ in Cauchy-surfaces is
a diffeomorphism $F: {\bf R} \times \Sigma \to M$, where $\Sigma$ is a
smooth 3-manifold so that $F(\{t\} \times \Sigma)$ is, for each
$t \in {\bf R}$, a Cauchy-surface, and the curves
$t \mapsto F(t,q)$
are timelike for all $q \in\Sigma$.
(One can even show that, if global hyperbolicity had been
defined by requiring only the existence of a not necessarily
smooth or spacelike Cauchy-surface (i.e.\ a topological
hypersurface which is intersected exactly once by each
inextendable causal curve), then it is still true that a globally
hyperbolic spacetime can be smoothly foliated in Cauchy-surfaces,
see [15,28].)
We shall also be interested in
ultrastatic globally hyperbolic spacetimes.
A globally hyperbolic spacetime
is said to be {\it ultrastatic} if a foliation
$F : {\bf R} \times \Sigma \to M$ in Cauchy-surfaces can be found so that
$F_*g$ has the form $dt^2 \oplus (- \gamma)$ with a complete
($t$-independent) Riemannian metric $\gamma$ on $\Sigma$.
This particular foliation will then be called a {\it natural foliation}
of the ultrastatic spacetime. (An ultrastatic spacetime may possess
more than one natural foliation, think e.g.\ of Minkowski-spacetime.)
The notation for the causal sets and domains of dependence
will be recalled: Given a spacetime $(M,g)$ and
${\cal O} \subset M$, the set $J^{\pm}({\cal O})$ (causal future/past of
${\cal O}$) consists of all points $p \in M$ which can be reached by
future/past directed causal curves emanating from ${\cal O}$. The set
$D^{\pm}({\cal O})$ (future/past domain of dependence of ${\cal O}$) is defined
as consisting of all $p \in J^{\pm}({\cal O})$ such that every
past/future inextendible causal curve starting at $p$ intersects ${\cal O}$.
One writes $J({\cal O}) := J^+({\cal O}) \cup J^-({\cal O})$ and $D({\cal O}) :=
D^+({\cal O}) \cup D^-({\cal O})$. They are called the {\it causal set}, and
the {\it domain of dependence}, respectively, of ${\cal O}$.
For ${\cal O} \subset M$, we denote by ${\cal O}^{\perp} := {\rm int}(M
\backslash J({\cal O}))$ the {\it causal complement} of ${\cal O}$,
i.e.\ the largest {\it open}
set of points which cannot be connected to ${\cal O}$ by any causal curve.
A set of the form ${\cal O}_G := {\rm int}\,D(G)$, where $G$ is a subset
of some Cauchy-surface $\Sigma$ in $(M,g)$, will be referred to
as the {\it diamond based on} $G$; we shall also say that
$G$ is the {\it base} of ${\cal O}_G$. We note that if ${\cal O}_G$ is a
diamond, then ${\cal O}_G^{\perp}$ is again a diamond, based on
$\Sigma \backslash \overline{G}$.
A diamond will be called
{\it regular} if $G$ is an open, relatively compact subset of
$\Sigma$ and if the boundary $\partial G$ of $G$ is contained in
the union of finitely many smooth, two-dimensional submanifolds
of $\Sigma$.
Following [45], we say that an open neighbourhood $N$ of a
Cauchy-surface $\Sigma$ in $(M,g)$ is a {\it causal normal neighbourhood}
of $\Sigma$ if (1) $\Sigma$ is a Cauchy-surface for $N$, and
(2) for each pair of points $p,q \in N$ with $p \in J^+(q)$, there
is a convex normal neighbourhood ${\cal O} \subset M$ such that
$J^-(p) \cap J^+(q) \subset {\cal O}$. Lemma 2.2 of [45] asserts the
existence of causal normal neighbourhoods for any Cauchy-surface
$\Sigma$.
\\[20pt]
{\bf 3.2 Some Structural Aspects of
Quantum Field Theory in Curved Spacetime}
\\[16pt]
In the present subsection, we shall address some of the problems
one faces in the formulation of quantum field theory in curved
spacetime, and explain the notions of local definiteness, local
primarity, and Haag-duality. In doing so, we follow our presentation
in [67] quite closely. Standard general references related to the
subsequent discussion are [26,31,45,71].
Quantum field theory in curved spacetime (QFT in CST, for short)
means that one considers quantum fields propagating in a (classical) curved background spacetime
manifold $(M,g)$. In general, such a spacetime need not possess
any symmetries, and so one cannot tie the notion of ``particles''
or ``vacuum'' to spacetime symmetries, as one does in quantum field
theory in Minkowski spacetime. Therefore, the problem of
how to characterize the physical states arises. For the discussion
of this problem, the setting of algebraic quantum field theory is
particularly well suited. Let us thus summarize some of the relevant
concepts of algebraic QFT in CST. Let a spacetime manifold
$(M,g)$ be given. The observables of a quantum system (e.g.\ a quantum
field) situated in $(M,g)$ then have the basic structure of a map
${\cal O} \to {\cal A(O)}$, which assigns to each open, relatively compact
subset ${\cal O}$ of $M$ a $C^*$-algebra ${\cal A(O)}$,\footnote{
Throughout the paper, $C^*$-algebras are assumed to be unital, i.e.\
to possess a unit element, denoted by ${ 1}$. It is further
assumed that the unit element is the same for all the ${\cal A(O)}$.}
with the properties:\footnote{where $[{\cal A}({\cal O}_1),{\cal A}({\cal O}_2)]
= \{A_1A_2 - A_2A_1 : A_j \in {\cal A}({\cal O}_j),\ j =1,2 \}$.}
\begin{equation}
{\it Isotony:}\quad \quad {\cal O}_1 \subset {\cal O}_2
\Rightarrow {\cal A}({\cal O}_1) \subset {\cal A}({\cal O}_2)
\end{equation}
\begin{equation}
{\it Locality:} \quad \quad {\cal O}_1 \subset {\cal O}_2^{\perp}
\Rightarrow [{\cal A}({\cal O}_1),{\cal A}({\cal O}_2)] = \{0 \} \,.
\end{equation}
A map ${\cal O} \to {\cal A(O)}$ having these properties is called a {\it net
of local observable algebras} over $(M,g)$. We recall that the
conditions of locality and isotony are motivated by the idea
that each ${\cal A(O)}$ is the $C^*$-algebra formed by the observables
which can be measured within the spacetime region ${\cal O}$ on the
system. We refer to [31] and references given there for further
discussion.
The collection of all open, relatively compact subsets of
$M$ is directed with respect to set-inclusion, and so we can, in view
of (3.1), form the smallest $C^*$-algebra ${\cal A} := \overline{
\bigcup_{{\cal O}}{\cal A(O)}}^{||\,.\,||}$ which contains all local algebras
${\cal A(O)}$.
For the description of a system we need not only observables but also
states. The set ${\cal A}^{*+}_1$ of all positive, normalized linear
functionals on ${\cal A}$ is mathematically referred to as the set
of {\it states} on ${\cal A}$, but not all elements of ${\cal A}^{*+}_1$
represent physically realizable states of the system. Therefore, given
a local net of observable algebras ${\cal O} \to {\cal A(O)}$ for a physical
system over $(M,g)$, one must specify the set of physically relevant states
${\cal S}$, which is a suitable subset of ${\cal A}^{*+}_1$.
We have already mentioned in Chapter 2 that every state $\omega \in
{\cal A}^{*+}_1$ determines canonically its GNS representation
$({\cal H}_{\omega},\pi_{\omega},\Omega_{\omega})$ and thereby induces
a net of von Neumann algebras (operator algebras on ${\cal H}_{\omega}$)
$$ {\cal O} \to {\cal R}_{\omega}({\cal O}) := \pi_{\omega}({\cal A(O)})^- \,. $$
Some of the mathematical properties of the GNS representations, and of
the induced nets of von Neumann algebras, of states $\omega$ on
${\cal A}$ can naturally be interpreted physically. Thus one obtains
constraints on the states $\omega$ which are to be viewed as
physical states. Following this line of thought, Haag, Narnhofer
and Stein [32] formulated what they called the ``principle of local
definiteness'', consisting of the following three conditions
to be obeyed by any collection ${\cal S}$ of physical states.
\\[10pt]
{\bf Local Definiteness:} ${}\ \ \bigcap_{{\cal O} \owns p}
{\cal R}_{\omega}({\cal O}) = {\bf C} \cdot { 1}$ for all $\omega \in {\cal S}$
and all $p \in M$.
\\[6pt]
{\bf Local Primarity:} \ \ For each $\omega \in {\cal S}$, ${\cal R}_{\omega}
({\cal O})$ is a factor.
\\[6pt]
{\bf Local Quasiequivalence:} For each pair $\omega_1,\omega_2 \in {\cal S}$
and each relatively compact, open ${\cal O} \subset M$, the representations
$\pi_{\omega_1} | {\cal A(O)}$ and $\pi_{\omega_2} | {\cal A(O)}$ of ${\cal A(O)}$ are
quasiequivalent.
\\[10pt]
{\it Remarks.} (i) We recall (cf.\ the first Remark in Section 2) that
${\cal R}_{\omega}({\cal O})$ is a factor if ${\cal R}_{\omega}({\cal O}) \cap {\cal R}_{\omega}
({\cal O})' = {\bf C} \cdot { 1}$ where the prime means taking the
commutant. We have not stated in the formulation of local primarity
for which regions ${\cal O}$ the algebra ${\cal R}_{\omega}({\cal O})$ is required to be
a factor. The regions ${\cal O}$ should be taken from a class of subsets of
$M$ which forms a base for the topology.
\\[6pt]
(ii) Quasiequivalence of representations means unitary equivalence up to
multiplicity. Another characterization of quasiequivalence is to say
that the folia of the representations coincide, where the
{\it folium} of
a representation $\pi$ is defined as the set of all $\omega \in
{\cal A}^{*+}_1$ which can be represented as $\omega(A) = tr(\rho\,\pi(A))$
with a density matrix $\rho$ on the representation Hilbertspace of $\pi$.
\\[6pt]
(iii) Local definiteness and quasiequivalence together express
that physical states have finite (spatio-temporal) energy-density
with respect to each other, and local primarity and quasiequivalence
rule out local macroscopic observables and local superselection
rules. We refer to [31] for further discussion and background
material.
A further, important property which one expects to be satisfied
for physical states $\omega \in {\cal S}$ whose GNS representations are
irreducible \footnote{It is easy to see that,
in the presence of local primarity, Haag-duality will be
violated if $\pi_{\omega}$ is not irreducible.} is
\\[10pt]
{\bf Haag-Duality:} \ \ ${\cal R}_{\omega}({\cal O}^{\perp})' = {\cal R}_{\omega}({\cal O})$, \\
which should hold for the causally complete regions ${\cal O}$, i.e.\ those
satisfying $({\cal O}^{\perp})^{\perp} = {\cal O}$, where ${\cal R}_{\omega}({\cal O}^{\perp})$
is defined as the von Neumann algebra generated by all the ${\cal R}_{\omega}
({\cal O}_1)$ so that $\overline{{\cal O}_1} \subset {\cal O}^{\perp}$.
\\[10pt]
We comment that Haag-duality means that the von Neumann algebra
${\cal R}_{\omega}({\cal O})$ of local observables is maximal in the sense
that no further observables can be added without violating the
condition of locality. It is worth mentioning here that the condition
of Haag-duality plays an important role in the theory of superselection
sectors in algebraic quantum field theory in Minkowski spacetime
[31,59]. For local nets of observables generated by Wightman fields
on Minkowski spacetime it follows from the results of Bisognano and
Wichmann [4] that a weaker condition of ``wedge-duality'' is
always fulfilled, which allows one to pass to a new, potentially
larger local net (the ``dual net'') which satisfies Haag-duality.
In quantum field theory in Minkowski-spacetime where one is given
a vacuum state $\omega_0$, one can define the set of physical states
${\cal S}$ simply as the set of all states on ${\cal A}$ which are locally
quasiequivalent (i.e., the GNS representations of the states are
locally quasiequivalent to the vacuum-representation) to $\omega_0$.
It is obvious that local quasiequivalence then holds for ${\cal S}$.
Also, local definiteness holds in this case, as was proved by Wightman
[72]. If Haag-duality holds in the vacuum representation (which,
as indicated above, can be assumed to hold quite generally), then it
does not follow automatically that all pure states locally quasiequivalent
to $\omega_0$ will also have GNS representations fulfilling Haag-duality;
however, it follows once some regularity conditions are satisfied
which have been checked in certain quantum field models [19,61].
So far there seems to be no general physically motivated criterion
enforcing local primarity of a quantum field theory in algebraic
formulation in Minkowski spacetime. But it is known that many quantum
field theoretical models satisfy local primarity.
For QFT in CST we in general do not know what a vacuum state is, and so
${\cal S}$ cannot be defined in the same way as just described. Yet in some
cases (for some quantum field models) there may be a set
${\cal S}_0 \subset {\cal A}^{*+}_1$ of distinguished states, and if this class
of states satisfies the four conditions listed above, then the set
${\cal S}$, defined as consisting of all states $\omega_1 \in {\cal A}^{*+}_1$
which are locally quasiequivalent to any (and hence all) $\omega
\in {\cal S}_0$, is a good candidate for the set of physical states.
For the free scalar Klein-Gordon field (KG-field) on
a globally hyperbolic spacetime, the following classes of
states have been suggested as distinguished, physically
reasonable states \footnote{The following list is not meant
to be complete; it comprises some prominent families of states
of the KG-field over a generic class of spacetimes for which
mathematically sound results are known. Likewise, the indicated
references are by no means exhaustive.}
\begin{itemize}
\item[(1)] (quasifree) states fulfilling local stability
[3,22,31,32]
\item[(2)] (quasifree) states fulfilling the wave front set (or microlocal)
spectrum condition [6,47,55]
\item[(3)] quasifree Hadamard states [12,68,45]
\item[(4)] adiabatic vacua [38,48,53]
\end{itemize}
The list is ordered in such a way that the less restrictive
condition precedes the stronger one. There are a couple of
comments to be made here. First of all, the specifications
(3) and (4) make use of the information that one deals with
the KG-field (or at any rate, a free field obeying a linear
equation of motion of hyperbolic character), while the
conditions (1) and (2) do not require such input and are
applicable to general -- possibly interacting -- quantum
fields over curved spacetimes.
(It should however be mentioned that only for the KG-field is (2)
known to be stronger than (1). The relation between (1) and (2)
for more general theories is not settled.)
The conditions imposed on the classes of
states (1), (2) and (3) are related in that they are ultralocal remnants
of the spectrum condition requiring a certain regularity of the
short distance behaviour of the respective states which can be
formulated in generic spacetimes. The class of states (4) is
more special and can only be defined for the KG-field (or other
linear fields) propagating in Robertson-Walker-type spacetimes.
Here a distinguished choice of a time-variable can be made,
and the restriction imposed on adiabatic vacua is a regularity
condition on their spectral behaviour with respect to that
special choice of time. (A somewhat stronger formulation of
local stability has been proposed in [34].)
It has been found by Radzikowski [55] that for quasifree states of the
KG-field over generic globally hyperbolic spacetimes
the classes (2) and (3) coincide. The microlocal spectrum condition
is further refined and applied in [6,47]. Recently it was
proved by Junker [38] that adiabatic vacua of the KG-field
in Robertson-Walker spacetimes fulfill the microlocal spectrum
condition and thus are, in fact, quasifree Hadamard states.
The notion of the microlocal spectrum condition and the just mentioned
results related to it draw on pseudodifferential operator
techniques, particularly the notion of the wave front set, see [20,36,37].
Quasifree Hadamard states of the KG-field (see definition
in Sec.\ 3.4 below)
have been investigated for quite some time. One of the early
studies of these states is [12]. The importance of these
states, especially in the context of the semiclassical
Einstein equation, is stressed in [68]. Other significant
references include [24,25] and, in particular, [45] where,
apparently for the first time, a satisfactory definition of
the notion of a globally Hadamard state is given, cf.\
Section 3.4 for more details. In [66] it is proved that the
class of quasifree Hadamard states of the KG-field fulfills
local quasiequivalence in generic globally hyperbolic spacetimes
and local definiteness, local primarity and Haag-duality
for the case of ultrastatic globally hyperbolic spacetimes.
As was outlined in the beginning, the purpose of the present
chapter is to obtain these latter results also for arbitrary
globally hyperbolic spacetimes which are not necessarily
ultrastatic. It turns out that some of our previous results
can be sharpened, e.g.\ the local quasiequivalence specializes
in most cases to local unitary equivalence, cf.\ Thm.\ 3.6.
For a couple of other
results about the algebraic structure of the KG-field as well as other
fields over curved spacetimes we refer to
[2,6,15,16,17,40,41,46,63,64,65,66,74].
\\[24pt]
{\bf 3.3 The Klein-Gordon Field}
\\[18pt]
In the present section we summarize the quantization of the
classical KG-field over a globally hyperbolic spacetime in the
$C^*$-algebraic formalism. This follows in major parts
the work of Dimock [16], cf.\ also references given there.
Let $(M,g)$ be a globally hyperbolic spacetime. The KG-equation
with potential term $r$ is
\begin{equation}
(\nabla^a \nabla_a + r) \varphi = 0
\end{equation}
where $\nabla$ is the Levi-Civita derivative of the metric $g$,
the potential function
$r \in C^{\infty}(M,{\bf R})$ is arbitrary but fixed, and the
sought for solutions $\varphi$ are smooth and real-valued.
Making use of the fact that $(M,g)$ is globally hyperbolic
and drawing on earlier results by Leray, it is shown in
[16] that there are two uniquely determined, continuous
\footnote{ With respect to the usual locally convex topologies
on $C_0^{\infty}(M,{\bf R})$ and $C^{\infty}(M,{\bf R})$, cf.\
[13].}
linear maps $E^{\pm}: C_0^{\infty}(M,{\bf R}) \to C^{\infty}(M,{\bf R})$
with the properties
$$ (\nabla^a \nabla_a + r)E^{\pm}f = f = E^{\pm}(\nabla^a
\nabla_a + r)f\,,\quad f \in C_0^{\infty}(M,{\bf R})\,, $$
and
$$ {\rm supp}(E^{\pm}f) \subset J^{\pm}({\rm supp}(f))\,,\quad
f \in C_0^{\infty}(M,{\bf R})\,. $$
The maps $E^{\pm}$ are called the advanced(+)/retarded(--)
fundamental solutions of the KG-equation with potential term
$r$ in $(M,g)$, and their difference $E := E^+ - E^-$ is referred
to as the {\it propagator} of the KG-equation.
One can moreover show that the Cauchy-problem for the KG-equation
is well-posed. That is to say, if $\Sigma$ is any Cauchy-surface
in $(M,g)$, and
$u_0 \oplus u_1 \in C_0^{\infty}(\Sigma,{\bf R}) \oplus
C_0^{\infty}(\Sigma,{\bf R})$ is any pair of Cauchy-data on $\Sigma$,
then there exists precisely one smooth solution $\varphi$
of the KG-equation (3.3) having the property that
\begin{equation}
P_{\Sigma}(\varphi) := \varphi | \Sigma \oplus
n^a \nabla_a \varphi|\Sigma = u_0 \oplus u_1\,.
\end{equation}
The vectorfield $n^a$ in (3.4) is the future-pointing
unit normalfield of $\Sigma$.
Furthermore, one has ``finite propagation speed'', i.e.\
when the supports of $u_0$ and $u_1$ are contained in a subset
$G$ of $\Sigma$, then ${\rm supp}(\varphi) \subset J(G)$. Notice that
compactness of $G$ implies that $J(G) \cap \Sigma'$ is compact
for any Cauchy-surface $\Sigma'$.
The well-posedness of the
Cauchy-problem is a consequence of the classical energy-estimate
for solutions of second order hyperbolic partial differential
equations, cf.\ e.g.\ [33]. To formulate it, we introduce further
notation. Let $\Sigma$ be a Cauchy-surface for $(M,g)$, and
$\gamma_{\Sigma}$ the Riemannian metric, induced by the ambient
Lorentzian metric, on $\Sigma$. Then denote the Laplacian
operator on $C_0^{\infty}(\Sigma,{\bf R})$ corresponding to
$\gamma_{\Sigma}$ by $\Delta_{\gamma_{\Sigma}}$, and define the
{\it classical energy scalar product} on
$C_0^{\infty}(\Sigma,{\bf R}) \oplus C_0^{\infty}(\Sigma,{\bf R})$ by
\begin{equation}
\mu_{\Sigma}^E(u_0 \oplus u_1,
v_0 \oplus v_1 ) := \int_{\Sigma} (u_0 (- \Delta_{\gamma_{\Sigma}} + 1)v_0
+ u_1 v_1) \, d\eta_{\Sigma} \,,
\end{equation}
where $d\eta_{\Sigma}$ is the metric-induced volume measure on
$\Sigma$. As a special case of the energy estimate
presented in [33] one then obtains
\begin{Lemma}
(Classical energy estimate for the KG-field.)
Let $\Sigma_1$ and $\Sigma_2$ be a pair of Cauchy-surfaces
in $(M,g)$ and $G$ a compact subset of $\Sigma_1$. Then there
are two positive constants $c_1$ and $c_2$ so that there
holds the estimate
\begin{equation}
c_1\,\mu^E_{\Sigma_1}(P_{\Sigma_1}(\varphi),P_{\Sigma_1}
(\varphi)) \leq \mu^E_{\Sigma_2}(P_{\Sigma_2}(\varphi),
P_{\Sigma_2}(\varphi)) \leq
c_2 \, \mu^E_{\Sigma_1}(P_{\Sigma_1}(\varphi),P_{\Sigma_1}
(\varphi))
\end{equation}
for all solutions $\varphi$ of the KG-equation (3.3)
which have the property that the supports of the Cauchy-data
$P_{\Sigma_1}(\varphi)$ are contained in $G$.
\footnote{ {\rm The formulation given here is to some extent
more general than the one appearing in [33] where it is
assumed that $\Sigma_1$ and $\Sigma_2$ are members of a foliation.
However, the more general formulation can be reduced to that case.}}
\end{Lemma}
We shall now indicate that the space of smooth solutions of the
KG-equation (3.3) has the structure of a symplectic space,
locally as well as globally, which comes in several equivalent
versions. To be more specific, observe first that the
Cauchy-data space
$$ {\cal D}_{\Sigma} := C_0^{\infty}(\Sigma,{\bf R}) \oplus C_0^{\infty}(\Sigma,{\bf R}) $$
of an arbitrary given Cauchy-surface $\Sigma$ in $(M,g)$
carries a symplectic form
$$ \delta_{\Sigma}(u_0 \oplus u_1, v_0 \oplus v_1)
:= \int_{\Sigma}(u_0v_1 - v_0u_1)\,d\eta_{\Sigma}\,.$$
It will also be observed that this symplectic form
is dominated by the classical energy scalar product
$\mu^E_{\Sigma}$.
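The dominance is elementary: by the Cauchy-Schwarz inequality (norms taken in $L^2(\Sigma,d\eta_{\Sigma})$) and $-\Delta_{\gamma_{\Sigma}} \geq 0$,
$$ |\delta_{\Sigma}(u_0\oplus u_1,v_0\oplus v_1)| \leq ||\,u_0\,||\,||\,v_1\,|| + ||\,v_0\,||\,||\,u_1\,||
\leq \mu^E_{\Sigma}(u_0\oplus u_1,u_0\oplus u_1)^{1/2}\,\mu^E_{\Sigma}(v_0\oplus v_1,v_0\oplus v_1)^{1/2}\,, $$
since $\mu^E_{\Sigma}(u_0\oplus u_1,u_0\oplus u_1) \geq \int_{\Sigma}(u_0^2 + u_1^2)\,d\eta_{\Sigma}$.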
Another symplectic space is $S$, the set
of all real-valued $C^{\infty}$-solutions $\varphi$ of the
KG-equation (3.3) with the property that, given any Cauchy-surface
$\Sigma$ in $(M,g)$, their Cauchy-data $P_{\Sigma}(\varphi)$
have compact support on $\Sigma$. The symplectic form on
$S$ is given by
$$ \sigma(\varphi,\psi) := \int_{\Sigma}(\varphi n^a \nabla_a\psi
-\psi n^a \nabla_a \varphi)\,d\eta_{\Sigma} $$
which is independent of the choice of the Cauchy-surface $\Sigma$
on the right hand side over which the integral is formed;
$n^a$ is again the future-pointing unit normalfield of $\Sigma$.
One clearly finds that for each Cauchy-surface $\Sigma$ the
map $P_{\Sigma} : S \to {\cal D}_{\Sigma}$ establishes a symplectomorphism
between the symplectic spaces $(S,\sigma)$ and $({\cal D}_{\Sigma},\delta_{\Sigma})$.
A third symplectic space equivalent to the previous ones is obtained
as the quotient $K := C_0^{\infty}(M,{\bf R}) /{\rm ker}(E)$ with symplectic form
$$ \kappa([f],[h]) := \int_M f(Eh)\,d\eta \,, \quad
f,h \in C_0^{\infty}(M,{\bf R})\,, $$
where $[\,.\,]$ is the quotient map $C_0^{\infty}(M,{\bf R}) \to K$ and
$d\eta$ is the metric-induced volume measure on $M$.
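That $\kappa$ is well defined on the quotient and antisymmetric rests on the standard fact that the advanced and retarded fundamental solutions are formally adjoint to each other with respect to the $d\eta$-pairing:
$$ \int_M f(Eh)\,d\eta = \int_M f(E^+h)\,d\eta - \int_M f(E^-h)\,d\eta
= \int_M (E^-f)h\,d\eta - \int_M (E^+f)h\,d\eta = -\int_M h(Ef)\,d\eta\,, $$
$f,h \in C_0^{\infty}(M,{\bf R})$; in particular, $\kappa([f],[h])$ vanishes whenever $Ef = 0$ or $Eh = 0$.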
Then define for any open subset ${\cal O} \subset M$ with compact
closure the set $K({\cal O}) := [C_0^{\infty}({\cal O},{\bf R})]$.
One can see that the space $K$ has naturally the structure of
an isotonous, local net ${\cal O} \to K({\cal O})$ of subspaces, where
locality means that the symplectic form $\kappa([f],[h])$
vanishes for $[f] \in K({\cal O})$ and $[h] \in K({\cal O}_1)$
whenever ${\cal O}_1 \subset {\cal O}^{\perp}$.
Dimock has proved in [16 (Lemma A.3)] that moreover there holds
\begin{equation}
K({\cal O}_G) \subset K(N)
\end{equation}
for all open neighbourhoods $N$ (in $M$) of $G$, whenever
${\cal O}_G$ is a diamond. Using this, one obtains that the map
$(K,\kappa) \to (S,\sigma)$ given by $[f] \mapsto Ef$ is
surjective, and by Lemma A.1 in [16], it is even a
symplectomorphism. Clearly,
$(K({\cal O}_G),\kappa|K({\cal O}_G))$ is a symplectic subspace
of $(K,\kappa)$ for each diamond ${\cal O}_G$
in $(M,g)$. For any such diamond one
then obtains, upon viewing it
(or its connected components separately), equipped with the appropriate
restriction of the spacetime metric $g$, as a globally
hyperbolic spacetime in its own right, local versions
of the just introduced symplectic spaces and the symplectomorphisms
between them. More precisely, if we denote by $S({\cal O}_G)$
the set of all smooth solutions of the KG-equation (3.3)
with the property that their Cauchy-data on $\Sigma$ are
compactly supported in $G$, then the map
$P_{\Sigma}$ restricts to a symplectomorphism $(S({\cal O}_G),
\sigma|S({\cal O}_G)) \to ({\cal D}_{G},\delta_{G})$,
$\varphi \mapsto P_{\Sigma}(\varphi)$. Likewise, the
symplectomorphism $[f] \mapsto Ef$ restricts to a symplectomorphism
$(K({\cal O}_G),\kappa|K({\cal O}_G)) \to
(S({\cal O}_G),\sigma|S({\cal O}_G))$.
To the symplectic space $(K,\kappa)$ we can now associate its
Weyl-algebra ${\cal A}[K,\kappa]$, cf.\ Chapter 2. Using the
aforementioned local net-structure of the symplectic space
$(K,\kappa)$, one arrives at the following result.
\begin{Proposition}
{\rm [16]}. Let $(M,g)$ be a globally hyperbolic
spacetime, and $(K,\kappa)$ the symplectic space, constructed
as above, for the KG-eqn.\ with smooth potential term $r$ on $(M,g)$.
Its Weyl-algebra ${\cal A}[K,\kappa]$ will be called the {\em Weyl-algebra
of the KG-field with potential term $r$ over} $(M,g)$. Define
for each open, relatively compact ${\cal O} \subset M$, the set ${\cal A}({\cal O})$
as the $C^*$-subalgebra of ${\cal A}[K,\kappa]$ generated by all the
Weyl-operators $W([f])$, $[f] \in K({\cal O})$. Then
${\cal O} \to {\cal A}({\cal O})$ is a net of $C^*$-algebras fulfilling
isotony (3.1) and locality (3.2), and moreover {\em primitive
causality}, i.e.\
\begin{equation}
{\cal A}({\cal O}_G) \subset {\cal A}(N)
\end{equation}
for all neighbourhoods $N$ (in $M$) of $G$, whenever ${\cal O}_G$ is
a (relatively compact) diamond.
\end{Proposition}
It is worth recalling (cf.\ [5]) that the Weyl-algebras
corresponding to symplectically equivalent spaces are
canonically isomorphic in the following way: Let
$W(x)$, $x \in K$ denote the Weyl-generators of ${\cal A}[K,\kappa]$
and $W_S(\varphi)$, $\varphi \in S$, the Weyl-generators
of ${\cal A}[S,\sigma]$. Furthermore, let $T$ be a symplectomorphism
between $(K,\kappa)$ and $(S,\sigma)$.
Then there is a uniquely determined
$C^*$-algebraic isomorphism $\alpha_T : {\cal A}[K,\kappa] \to
{\cal A}[S,\sigma]$ given by $\alpha_T(W(x)) = W_S(Tx)$, $x \in K$.
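(Existence and uniqueness of $\alpha_T$ rest on the observation that the $W_S(Tx)$, $x \in K$, satisfy the same Weyl relations as the $W(x)$; schematically, in a standard normalization of the Weyl relations, which may differ from the conventions fixed in Chapter 2 by inessential factors,
$$ W_S(Tx)W_S(Ty) = {\rm e}^{-\frac{i}{2}\sigma(Tx,Ty)}\,W_S(T(x+y)) = {\rm e}^{-\frac{i}{2}\kappa(x,y)}\,W_S(T(x+y))\,,
\quad x,y \in K\,, $$
since $T$ intertwines $\kappa$ and $\sigma$.)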
This shows that if we had associated e.g.\ with $(S,\sigma)$
the Weyl-algebra ${\cal A}[S,\sigma]$ as the algebra of quantum observables
of the KG-field over $(M,g)$, we would have obtained an
equivalent net of observable algebras
(connected to the previous one by a net isomorphism,
see [3,16]), rendering the same
physical information.
\\[24pt]
{\bf 3.4 Hadamard States}
\\[18pt]
We have indicated above that quasifree Hadamard states
are distinguished by their short-distance behaviour which
allows the definition of expectation values of energy-momentum
observables with reasonable properties [26,68,69,71]. If
$\omega_{\mu}$ is a quasifree state on the Weyl-algebra
${\cal A}[K,\kappa]$, then we call
$$ \lambda(x,y) := \mu(x,y) + \frac{i}{2}\kappa(x,y)\,, \quad
x,y \in K\,, $$
its {\it two-point function} and
$$ \Lambda(f,h) := \lambda([f],[h])\,, \quad f,h \in C_0^{\infty}(M,{\bf R})\,, $$
its {\it spatio-temporal} two-point function.
In Chapter 2 we have seen that a quasifree state is entirely
determined through specifying $\mu \in {\sf q}(K,\kappa)$, which
is equivalent to the specification of the two-point function
$\lambda$. Sometimes the notation $\lambda_{\omega}$
or $\lambda_{\mu}$ will be used to indicate the quasifree
state $\omega$ or the dominating scalar product $\mu$ which is
determined by $\lambda$.
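(In a standard normalization of the Weyl relations, possibly differing from the conventions of Chapter 2 by inessential factors, this correspondence takes the familiar form
$$ \omega_{\mu}(W(x)) = {\rm e}^{-\frac{1}{2}\mu(x,x)}\,, \quad x \in K\,, $$
and in that convention $\lambda(x,y) = -\partial_t\partial_s\,\omega_{\mu}(W(tx)W(sy))\,|_{t=s=0}$.)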
For a quasifree Hadamard state, the spatio-temporal two-point
function is of a special form, called Hadamard form. The
definition of Hadamard form which we give here follows that due to
Kay and Wald [45]. Let $N$ be a causal normal neighbourhood
of a Cauchy-surface $\Sigma$ in $(M,g)$. Then a smooth function
$\chi : N \times N \to [0,1]$ is called {\it $N$-regularizing}
if it has the following property: There is an open
neighbourhood, $\Omega_*$, in $N \times N$ of the set of
pairs of causally related points in $N$ such that
$\overline{\Omega_*}$ is contained in a set $\Omega$ to
be described presently, and $\chi \equiv 1$ on $\Omega_*$
while $\chi \equiv 0$ outside of $\overline{\Omega}$. Here,
$\Omega$ is an open neighbourhood in $M \times M$ of the
set of those $(p,q) \in M \times M$ which are causally related
and have the property that (1) $J^+(p) \cap J^-(q)$ and
$J^+(q) \cap J^-(p)$ are contained within a convex normal
neighbourhood, and (2) $s(p,q)$, the square of the geodesic
distance between $p$ and $q$, is a well-defined, smooth
function on $\Omega$. (One observes that there are always
sets $\Omega$ of this type which contain a neighbourhood of the
diagonal in $M \times M$, and that an $N$-regularizing function
depends on the choice of the pair of sets $\Omega_*,\Omega$
with the stated properties.) It is not difficult to check that
$N$-regularizing functions always exist for any causal normal
neighbourhood; a proof of that is e.g.\ given in [55].
Then denote by $U$ the square root of the VanVleck-Morette determinant,
and by $v_m$, $m \in {\bf N}_0$ the sequence determined by the
Hadamard recursion relations for the KG-equation (3.3),
see [23,27] and also [30] for their definition.
They are all smooth functions on $\Omega$.\footnote{For any
choice of $\Omega$ with the properties just described.}
Now set for $n \in {\bf N}$,
$$ V^{(n)}(p,q) := \sum_{m = 0}^n v_m(p,q)(s(p,q))^m \,,
\quad (p,q) \in \Omega\,, $$
and, given a smooth time-function $T: M \to {\bf R}$ increasing
towards the future, define for all $\epsilon > 0$ and
$(p,q) \in \Omega$,
$$ Q_T(p,q;\epsilon) := s(p,q) - 2 i\epsilon (T(p) - T(q)) -
\epsilon^2 \,,$$
and
$$G^{T,n}_{\epsilon}(p,q) := \frac{1}{4\pi^2}\left(
\frac{U(p,q)}{Q_T(p,q;\epsilon)} + V^{(n)}(p,q)ln(Q_T(p,q;
\epsilon)) \right) \,, $$
where $ln$ is the principal branch of the logarithm.
With this notation, one can give the
\begin{Definition}{\rm [45]}. A ${\bf C}$-valued
bilinear form $\Lambda$ on $C_0^{\infty}(M,{\bf R})$ is called an {\em Hadamard form}
if, for a suitable choice of a causal normal neighbourhood
$N$ of some Cauchy-surface $\Sigma$, and for suitable choices of an
$N$-regularizing function $\chi$ and a future-increasing
time-function $T$ on $M$, there exists a sequence
$H^{(n)} \in C^n(N \times N)$, so that
\begin{equation}
\Lambda(f,h) = \lim_{\epsilon \to 0+}
\int_{M\times M} \Lambda^{T,n}_{\epsilon}(p,q)f(p)h(q)\,
d\eta(p) \,d\eta(q)
\end{equation}
for all $f,h \in C_0^{\infty}(N,{\bf R})$, where
\footnote{ The set $\Omega$ on which the functions forming
$G^{T,n}_{\epsilon}$ are defined and smooth is here to
coincide with the $\Omega$ with respect to which $\chi$ is
defined.}
\begin{equation}
\Lambda^{T,n}_{\epsilon}(p,q) := \chi(p,q)G^{T,n}_{\epsilon}(p,q)
+ H^{(n)}(p,q)\,,
\end{equation}
and if, moreover, $\Lambda$ is a global bi-parametrix of
the KG-equation (3.3), i.e.\ it satisfies
$$ \Lambda((\nabla^a\nabla_a + r)f,h) = B_1(f,h)\quad {\it and}
\quad \Lambda(f,(\nabla^a\nabla_a + r)h) = B_2(f,h) $$
for all $f,h \in C_0^{\infty}(M)$, where $B_1$ and $B_2$ are
given by smooth integral kernels on $M \times M$.\footnote{
We point out that statement (b) of Prop.\ 3.4 is wrong if
the assumption that $\Lambda$ is a global bi-parametrix is not made.
In this respect, Def.\ C.1 of [66] is imprecisely formulated as
the said assumption is not stated. There, like in several other
references, it has been implicitly assumed that $\Lambda$ is a
two-point function and thus a bi-solution of (3.3), i.e.\ a
bi-parametrix with $B_1 = B_2 \equiv 0$.}
\end{Definition}
Based on results of [24,25], it is shown in [45] that this is
a reasonable definition. The findings of these works will be
collected in the following
\begin{Proposition} ${}$\\[6pt]
(a) If $\Lambda$ is of Hadamard form on a causal normal neighbourhood
$ N$ of a Cauchy-surface $\Sigma$ for some choice of a time-function
$T$ and some $N$-regularizing function $\chi$ (i.e.\ that
(3.9),(3.10) hold with suitable $H^{(n)} \in C^n(N \times
N)$), then so it is for any other time-function $T'$ and
$N$-regularizing $\chi'$. (This means that these changes
can be compensated by choosing another sequence $H'^{(n)} \in
C^n( N \times N)$.)
\\[6pt]
(b) (Causal Propagation Property of the Hadamard Form)\\
If $\Lambda$ is of Hadamard form on a causal normal neighbourhood
$ N$ of some Cauchy-surface $\Sigma$, then it is of Hadamard form
in any causal normal neighbourhood $ N'$ of any other Cauchy-surface
$\Sigma'$.
\\[6pt]
(c) Any $\Lambda$ of Hadamard form is a regular kernel distribution
on $C_0^{\infty}(M \times M)$.
\\[6pt]
(d) There exist pure, quasifree Hadamard states (these will be
referred to as {\em Hadamard vacua}) on the Weyl-algebra ${\cal A}[K,\kappa]$ of the
KG-field in any globally hyperbolic spacetime. The family of quasifree
Hadamard states on ${\cal A}[K,\kappa]$ spans an infinite-dimensional subspace
of the continuous dual space of ${\cal A}[K,\kappa]$.
\\[6pt]
(e) The dominating scalar products $\mu$ on $K$ arising from quasifree
Hadamard states $\omega_{\mu}$ induce locally the same topology,
i.e.\ if $\mu$ and $\mu'$ are arbitrary such scalar products and
${\cal O} \subset M$ is open and relatively compact, then there are two
positive constants $a,a'$ such that
$$ a\, \mu([f],[f]) \leq \mu'([f],[f]) \leq a'\,\mu([f],[f])\,,
\quad [f] \in K({\cal O})\,.$$
\end{Proposition}
{\it Remark.} Observe that this definition of Hadamard form rules out
the occurrence of spacelike singularities, meaning that the Hadamard form
$\Lambda$ is, when tested on functions $f,h$ in (3.9) whose
supports are acausally separated, given by a $C^{\infty}$-kernel.
For that reason, the definition of Hadamard form as stated
above is also called {\it global} Hadamard form (cf.\ [45]).
A weaker definition of Hadamard form would be to prescribe (3.9),(3.10)
only for sets $N$ which, e.g., are members of an open covering of $M$
by convex normal neighbourhoods, and thereby to require the Hadamard form
locally. In the case that $\Lambda$ is the spatio-temporal two-point
function of a state on ${\cal A}[K,\kappa]$ and thus dominates the symplectic
form $\kappa$ ($|\kappa([f],[h])|^2 \leq 4\,\Lambda(f,f)\Lambda(h,h)$),
it was recently proved by Radzikowski that if $\Lambda$ is locally
of Hadamard form, then it is already globally of Hadamard form [56].
However, if $\Lambda$ does not dominate $\kappa$, this need not hold
[29,51,56]. Radzikowski's proof makes use of a characterization
of Hadamard forms in terms of their wave front sets which was mentioned
above. A definition of Hadamard form which is less technical in appearance
has recently been given in [44].
We should add that the usual Minkowski-vacuum of the free scalar
field with constant, non-negative potential
term is, of course, an Hadamard vacuum.
This holds, more generally, also for ultrastatic spacetimes, see below.
\\[10pt]
{\it Notes on the proof of Proposition 3.4.}
The property (a) is proved in [45]. The argument for (b) is
essentially contained in [25] and in the generality stated here it is
completed in [45]. An alternative proof using the
``propagation of singularities theorem'' for hyperbolic differential
equations is presented in [55].
Also property (c) is proved in [45 (Appendix B)] (cf.\
[66 (Prop.\ C.2)]). The existence of Hadamard vacua (d)
is proved in [24] (cf.\ also [45]); the stated Corollary has
been observed in [66] (and, in slightly different formulation,
already in [24]). Statement (e) has been shown to hold in
[66 (Prop.\ 3.8)].
\\[10pt]
In order to prepare the formulation of the next result, in which we will
apply our result of Chapter 2, we need to collect some more notation.
Suppose that we are given a quasifree state $\omega_{\mu}$ on
the Weyl-algebra ${\cal A}[K,\kappa]$ of the KG-field over some
globally hyperbolic spacetime $(M,g)$, and that $\Sigma$ is a
Cauchy-surface in that spacetime. Then we denote by $\mu_{\Sigma}$
the dominating scalar product on $({\cal D}_{\Sigma},\delta_{\Sigma})$
which is, using the symplectomorphism between $(K,\kappa)$ and
$({\cal D}_{\Sigma},\delta_{\Sigma})$, induced by the dominating scalar
product $\mu$ on $(K,\kappa)$, i.e.\
\begin{equation}
\mu_{\Sigma}(P_{\Sigma}Ef,P_{\Sigma}Eh) = \mu([f],[h])\,, \quad
[f],[h] \in K\,.
\end{equation}
Conversely, to any $\mu_{\Sigma} \in {\sf q}({\cal D}_{\Sigma},\delta_{\Sigma})$
there corresponds via (3.11) a $\mu \in {\sf q}(K,\kappa)$.
Next, consider a complete Riemannian manifold $(\Sigma,\gamma)$, with
corresponding Laplacian $\Delta_{\gamma}$, and as before, consider the
operator $ -\Delta_{\gamma} +1$ on $C_0^{\infty}(\Sigma,{\bf R})$.
Owing to the completeness
of $(\Sigma,\gamma)$ this operator is,
together with all its powers, essentially selfadjoint in
$L^2_{{\bf R}}(\Sigma,d\eta_{\gamma})$ [10],
and we denote its selfadjoint extension
by $A_{\gamma}$. Then one can introduce the
{\it Sobolev scalar products} of $m$-th order,
$$ \langle u,v \rangle_{\gamma,m} := \langle u, A_{\gamma}^m v \rangle\,,
\quad u,v \in C_0^{\infty}(\Sigma,{\bf R}),\ m \in {\bf R}\,, $$
where on the right hand side is the scalar product of $L^2_{{\bf R}}(\Sigma,
d\eta_{\gamma})$. The completion of $C_0^{\infty}(\Sigma,{\bf R})$
in the topology of $\langle\,.\,,\,.\,\rangle_{\gamma,m}$
will be denoted by $H_m(\Sigma,\gamma)$.
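(As an orienting special case, not needed in the sequel: for $\Sigma = {\bf R}^3$ with the flat Euclidean metric $\gamma$ and a unitarily normalized Fourier transform $u \mapsto \hat{u}$, the operator $A_{\gamma}$ acts as multiplication by $|k|^2 + 1$, so that
$$ \langle u,v \rangle_{\gamma,m} = \int_{{\bf R}^3} \overline{\hat{u}(k)}\,(|k|^2+1)^m\,\hat{v}(k)\,d^3k\,, $$
and $H_m({\bf R}^3,\gamma)$ is the familiar Sobolev space of order $m$.)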
It turns out that the topology of $H_m(\Sigma,\gamma)$ is locally
independent of the complete Riemannian metric $\gamma$, and that
composition with diffeomorphisms and multiplication with smooth,
compactly supported functions are continuous operations on these
Sobolev spaces. (See Appendix B for precise formulations of these
statements.) Therefore, whenever $G \subset \Sigma$ is open and
relatively compact, the topology which $\langle \,.\,,\,.\, \rangle_{\gamma,m}$
induces on $C_0^{\infty}(G,{\bf R})$ is independent of the particular
complete Riemannian metric $\gamma$, and we shall refer to the
topology which is thus locally induced on $C_0^{\infty}(\Sigma,{\bf R})$
simply as the (local) {\it $H_m$-topology.}
Let us now suppose that we have an ultrastatic spacetime $(\tilde{M},\tilde{g})$,
given in a natural foliation as $({\bf R} \times \tilde{\Sigma},dt^2 \oplus (-\gamma))$
where $(\tilde{\Sigma},\gamma)$ is a complete Riemannian manifold. We shall
identify $\tilde{\Sigma}$ and $\{0\} \times \tilde{\Sigma}$. Consider again
$A_{\gamma}$ = selfadjoint extension of $- \Delta_{\gamma} + 1$ on
$C_0^{\infty}(\tilde{\Sigma},{\bf R})$ in $L^2_{{\bf R}}(\tilde{\Sigma},d\eta_{\gamma})$ with
$\Delta_{\gamma}$ = Laplacian of $(\tilde{\Sigma},\gamma)$, and the scalar product
$\mu^{\circ}_{\tilde{\Sigma}}$ on ${\cal D}_{\tilde{\Sigma}}$ given by
\begin{eqnarray}
\mu^{\circ}_{\tilde{\Sigma}}(u_0 \oplus u_1,v_0 \oplus v_1) & := & \frac{1}{2} \left (
\langle u_0,A_{\gamma}^{1/2}v_0 \rangle
+ \langle u_1,A_{\gamma}^{-1/2}v_1 \rangle \right) \\
& = & \frac{1}{2} \left( \langle u_0,v_0 \rangle_{\gamma,1/2}
+ \langle u_1,v_1 \rangle_{\gamma,-1/2} \right) \nonumber
\end{eqnarray}
for all $u_0 \oplus u_1,v_0 \oplus v_1 \in {\cal D}_{\tilde{\Sigma}}$. It is now straightforward to
check that $\mu^{\circ}_{\tilde{\Sigma}} \in {\sf pu}({\cal D}_{\tilde{\Sigma}},\delta_{\tilde{\Sigma}})$,
in fact, $\mu^{\circ}_{\tilde{\Sigma}}$ is the purification of the classical energy
scalar product $\mu^E_{\tilde{\Sigma}}$ defined in eqn.\ (3.5). (We refer to
[11] for discussion, and also the treatment of more general situations
along similar lines.) What is furthermore central for the derivation of
the next result is that $\mu^{\circ}_{\tilde{\Sigma}}$ corresponds (via (3.11)) to
an Hadamard vacuum $\omega^{\circ}$ on the Weyl-algebra
of the KG-field with potential term $r \equiv 1$ over the ultrastatic
spacetime $({\bf R} \times \tilde{\Sigma},dt^2 \oplus (-\gamma))$. This has been
proved in [24]. The state $\omega^{\circ}$ is called the
{\it ultrastatic vacuum} for the said KG-field over
$({\bf R} \times \tilde{\Sigma} ,dt^2 \oplus (-\gamma))$; it is the unique pure,
quasifree ground state on the corresponding Weyl-algebra for the
time-translations $(t,q) \mapsto (t + t',q)$ on that ultrastatic
spacetime with respect to the chosen natural foliation (cf.\ [40,42]).
\\[6pt]
{\it Remark.} The passage from $\mu^E_{\tilde{\Sigma}}$ to $\mu^{\circ}_{\tilde{\Sigma}}$,
where $\mu^{\circ}_{\tilde{\Sigma}}$ is the purification of the classical
energy scalar product, may be viewed as a refined form of
``frequency-splitting'' procedures (or Hamiltonian diagonalization),
in order to obtain pure dominating scalar products and hence, pure states
of the KG-field in curved spacetimes, see [11]. However, in the case
that $\tilde{\Sigma}$ is not a Cauchy-surface lying in the natural foliation of
an ultrastatic spacetime, but an arbitrary Cauchy-surface in an
arbitrary globally hyperbolic spacetime, the $\mu^{\circ}_{\tilde{\Sigma}}$
may fail to correspond to a quasifree Hadamard state --- even though,
as the following Proposition demonstrates, $\mu^{\circ}_{\tilde{\Sigma}}$ gives
locally on the Cauchy-data space ${\cal D}_{\tilde{\Sigma}}$ the same topology
as the dominating scalar products induced on it by any quasifree
Hadamard state. More seriously, $\mu^{\circ}_{\tilde{\Sigma}}$ may even correspond to
a state which is no longer locally quasiequivalent to any
quasifree Hadamard state. For an explicit example demonstrating
this in a closed Robertson-Walker universe, and for additional
discussion, we refer to Sec.\ 3.6 in [38].
\\[6pt]
We shall say that a map $T : {\cal D}_{\Sigma} \to {\cal D}_{\Sigma'}$,
with $\Sigma,\Sigma'$ Cauchy-surfaces, is {\it locally continuous} if,
for any open, relatively compact $G \subset \Sigma$, the restriction of
$T$ to $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$ is continuous
(with respect to the topologies under consideration).
\begin{Proposition}
Let $\omega_{\mu}$ be a quasifree Hadamard state on the Weyl-algebra
${\cal A}[K,\kappa]$ of the KG-field with smooth potential term $r$ over the
globally hyperbolic spacetime $(M,g)$, and $\Sigma,\Sigma'$ two
Cauchy-surfaces in $(M,g)$.
Then the Cauchy-data evolution map
\begin{equation}
T_{\Sigma',\Sigma} : = P_{\Sigma'} {\mbox{\footnotesize $\circ$}} P_{\Sigma}^{-1} :
{\cal D}_{\Sigma} \to {\cal D}_{\Sigma'}
\end{equation}
is locally continuous in the $H_{\tau} \oplus H_{\tau -1}$-topology,
$0 \leq \tau \leq 1$, on the Cauchy-data spaces, and the topology
induced by $\mu_{\Sigma}$ on ${\cal D}_{\Sigma}$ coincides locally
(i.e.\ on each $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$
for $G \subset \Sigma$ open and relatively compact) with the
$H_{1/2} \oplus H_{-1/2}$-topology.
\end{Proposition}
{\it Remarks.} (i) Observe that the continuity statement is
reasonably formulated since, as a consequence of the support
properties of solutions of the KG-equation with Cauchy-data
of compact support (``finite propagation speed'') it holds that
for each open, relatively compact $G \subset \Sigma$ there is
an open, relatively compact $G' \subset \Sigma'$ with
$T_{\Sigma',\Sigma}(C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R}))
\subset C_0^{\infty}(G',{\bf R}) \oplus C_0^{\infty}(G',{\bf R})$.
\\[6pt]
(ii) For $\tau =1$, the continuity statement is just the
classical energy estimate.
It should be mentioned here that the claimed continuity can
also be obtained by other methods. For instance, Moreno [50]
proves, under more restrictive assumptions on $\Sigma$ and $\Sigma'$
(among which is their compactness), the continuity of $T_{\Sigma',\Sigma}$
in the topology of $H_{\tau} \oplus H_{\tau -1}$ for all $\tau \in {\bf R}$,
by employing an abstract energy estimate for first order hyperbolic equations
(under suitable circumstances, the KG-equation can be brought into this form).
We feel, however, that our method, using the results of Chapter 2, is
physically more appealing and emphasizes much better the ``invariant''
structures involved, quite in keeping with the general approach to quantum
field theory.
\\[10pt]
{\it Proof of Proposition 3.5.} We note that there is a diffeomorphism
$\Psi : \Sigma \to \Sigma'$. To see this, observe that we may pick
a foliation $F : {\bf R} \times \tilde{\Sigma} \to M$ of $M$ in Cauchy-surfaces. Then
for each $q \in \tilde{\Sigma}$, the curves $t \mapsto F(t,q)$ are inextendible,
timelike curves in $(M,g)$. Each such curve intersects $\Sigma$ exactly
once, at the parameter value $t = \tau(q)$. Hence $\Sigma$ is the set
$\{F(\tau(q),q) : q \in \tilde{\Sigma}\}$. As $F$ is a diffeomorphism and
$\tau: \tilde{\Sigma} \to {\bf R}$ must be $C^{\infty}$ since, by assumption,
$\Sigma$ is a smooth hypersurface in $M$, one can see that $\Sigma$
and $\tilde{\Sigma}$ are diffeomorphic. The same argument shows that
$\Sigma'$ and $\tilde{\Sigma}$ and therefore, $\Sigma$ and $\Sigma'$, are
diffeomorphic.
Now let us first assume that the $g$-induced Riemannian metrics
$\gamma_{\Sigma}$ and $\gamma_{\Sigma'}$ on $\Sigma$, resp.\
$\Sigma'$, are complete. Let $d\eta$ and $d\eta'$ be the induced
volume measures on $\Sigma$ and $\Sigma'$, respectively. The $\Psi$-transformed
measure of $d\eta$ on $\Sigma'$, $\Psi^*d\eta$, is given through
\begin{equation}
\int_{\Sigma} (u {\mbox{\footnotesize $\circ$}} \Psi) \,d\eta = \int_{\Sigma'} u\,(\Psi^*d\eta)\,,
\quad u \in C_0^{\infty}(\Sigma')\,.
\end{equation}
Then the Radon-Nikodym derivative $(\rho(q))^2 :=(\Psi^*d\eta/d\eta')(q)$,
$q \in \Sigma'$, is a smooth, strictly positive function on $\Sigma'$,
and it is now easy to check that the linear map
$$ \vartheta : ({\cal D}_{\Sigma},\delta_{\Sigma})
\to ({\cal D}_{\Sigma'},\delta_{\Sigma'})\,, \quad
u_0 \oplus u_1 \mapsto \rho \cdot (u_0 {\mbox{\footnotesize $\circ$}} \Psi^{-1}) \oplus \rho \cdot
(u_1 {\mbox{\footnotesize $\circ$}} \Psi^{-1}) \,, $$
is a symplectomorphism. Moreover, by the result given in Appendix
B, $\vartheta$ and its inverse are locally continuous maps in the
$H_s \oplus H_t$-topologies on both Cauchy-data spaces, for all
$s,t \in {\bf R}$.
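Explicitly, the symplectomorphism property of $\vartheta$ follows from (3.14) and the definition of $\rho$:
$$ \delta_{\Sigma'}(\vartheta(u_0\oplus u_1),\vartheta(v_0\oplus v_1))
= \int_{\Sigma'} \big((u_0v_1 - v_0u_1){\mbox{\footnotesize $\circ$}}\Psi^{-1}\big)\,\rho^2\,d\eta'
= \int_{\Sigma'} \big((u_0v_1 - v_0u_1){\mbox{\footnotesize $\circ$}}\Psi^{-1}\big)\,\Psi^*d\eta
= \delta_{\Sigma}(u_0\oplus u_1,v_0\oplus v_1)\,. $$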
By the energy estimate, $T_{\Sigma',\Sigma}$ is locally continuous
with respect to the $H_1 \oplus H_0$-topology on the Cauchy-data
spaces, and the same holds for the inverse $(T_{\Sigma',\Sigma})^{-1}
= T_{\Sigma,\Sigma'}$. Hence, the map
$\Theta := \vartheta^{-1} {\mbox{\footnotesize $\circ$}} T_{\Sigma',\Sigma}$ is a symplectomorphism
of $({\cal D}_{\Sigma},\delta_{\Sigma})$, and $\Theta$ together with its inverse
is locally continuous in the $H_1 \oplus H_0$-topology on ${\cal D}_{\Sigma}$.
Here we made use of Remark (i) above. Now pick two sets $G$ and $G'$
as in Remark (i), then there is some open, relatively compact neighbourhood
$\tilde{G}$ of $\Psi^{-1}(G') \cup G$ in $\Sigma$. We can choose a smooth,
real-valued function $\chi$ compactly supported on $\Sigma$ with $\chi \equiv
1$ on $\tilde{G}$. It is then straightforward to check that the maps
$\chi {\mbox{\footnotesize $\circ$}} \Theta {\mbox{\footnotesize $\circ$}} \chi$ and $\chi {\mbox{\footnotesize $\circ$}} \Theta^{-1} {\mbox{\footnotesize $\circ$}} \chi$
($\chi$ to be interpreted as multiplication with $\chi$) form a pair
of symplectically adjoint maps on $({\cal D}_{\Sigma},\delta_{\Sigma})$ which are bounded
with respect to the $H_1 \oplus H_0$-topology, i.e.\ with respect to the
norm of $\mu_{\Sigma}^E$. At this point we use Theorem 2.2(b) and consequently
$\chi {\mbox{\footnotesize $\circ$}} \Theta {\mbox{\footnotesize $\circ$}} \chi$ and $\chi {\mbox{\footnotesize $\circ$}} \Theta^{-1}{\mbox{\footnotesize $\circ$}} \chi$ are
continuous with respect to the norms of the $(\mu^E_{\Sigma})_s$,
$0 \leq s \leq 2$. Inspection shows that
$$ (\mu^E_{\Sigma})_s (u_0 \oplus u_1,v_0 \oplus v_1) =
\frac{1}{2} \left( \langle u_0,A_{\gamma_{\Sigma}}^{1-s/2}v_0 \rangle
+ \langle u_1,A_{\gamma_{\Sigma}}^{-s/2}v_1 \rangle \right)
$$
for $0 \leq s \leq 2$. From this it is now easy to see that
$\Theta$ restricted to $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$
is continuous in the topology of $H_{\tau} \oplus H_{\tau -1}$,
$0 \leq \tau \leq 1$, since
$\chi {\mbox{\footnotesize $\circ$}} \Theta {\mbox{\footnotesize $\circ$}} \chi(u_0 \oplus u_1) = \Theta(u_0 \oplus u_1)$ for all
$u_0 \oplus u_1 \in C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$
by the choice of $\chi$. Using that $\Theta = \vartheta^{-1}{\mbox{\footnotesize $\circ$}} T_{
\Sigma',\Sigma}$ and that $\vartheta$ is locally continuous with respect to
all the $H_s \oplus H_t$-topologies, $s,t \in {\bf R}$, on the Cauchy-data
spaces, we deduce that $T_{\Sigma',\Sigma}$ is locally continuous
in the $H_{\tau} \oplus H_{\tau -1}$-topology, $0 \leq \tau \leq 1$,
as claimed.
If the $g$-induced Riemannian metrics $\gamma_{\Sigma}$, $\gamma_{\Sigma'}$
are not complete, one can make them into complete ones
$\hat{\gamma}_{\Sigma} := f \cdot \gamma_{\Sigma}$, $\hat{\gamma}_{\Sigma'}
:= h \cdot \gamma_{\Sigma'}$ by multiplying them with suitable smooth,
strictly positive functions $f$ on $\Sigma$ and $h$ on $\Sigma'$ [14].
Let $d\hat{\eta}$ and $d\hat{\eta}'$ be the volume measures corresponding
to the new metrics. Consider then the density functions
$(\phi_1)^2 := (d\eta/d\hat{\eta})$,
$(\phi_2)^2 := (d\hat{\eta}'/d\eta')$,
which are $C^{\infty}$ and strictly positive, and define
$({\cal D}_{\Sigma},\hat{\delta}_{\Sigma})$, $({\cal D}_{\Sigma'},\hat{\delta}_{\Sigma'})$
and $\hat{\vartheta}$ like their unhatted counterparts but with
$d\hat{\eta}$ and $d\hat{\eta}'$ in place of $d\eta$ and $d\eta'$.
Likewise define $\hat{\mu}^E_{\Sigma}$ with respect to $\hat{\gamma}_{\Sigma}$.
Then $\hat{T}_{\Sigma',\Sigma} := \phi_2 {\mbox{\footnotesize $\circ$}} T_{\Sigma',\Sigma}
{\mbox{\footnotesize $\circ$}} \phi_1$ (understanding that $\phi_1,\phi_2$ act as
multiplication operators) and its inverse are symplectomorphisms
between $({\cal D}_{\Sigma},\hat{\delta}_{\Sigma})$ and
$({\cal D}_{\Sigma'},\hat{\delta}_{\Sigma'})$ which are locally
continuous in the $H_1 \oplus H_0$-topology. Now we can apply the
argument above showing that $\hat{\Theta} = \hat{\vartheta}^{-1} {\mbox{\footnotesize $\circ$}}
\hat{T}_{\Sigma',\Sigma}$ and, hence, $\hat{T}_{\Sigma',\Sigma}$ is
locally continuous in the $H_{\tau} \oplus H_{\tau -1}$-topology for
$0 \leq \tau \leq 1$. The same follows then for
$T_{\Sigma',\Sigma} = \phi_2^{-1} {\mbox{\footnotesize $\circ$}} \hat{T}_{\Sigma',\Sigma}
{\mbox{\footnotesize $\circ$}} \phi_1^{-1}$.
For the proof of the second part of the statement, we note first
that in [24] it is shown that there exists another globally
hyperbolic spacetime $(\hat{M},\hat{g})$ of the form
$\hat{M} = {\bf R} \times \Sigma$ with the following properties:
\\[6pt]
(1) $\Sigma_0 : = \{0\} \times \Sigma$ is a Cauchy-surface in $(\hat{M},
\hat{g})$, and a causal normal neighbourhood $N$ of $\Sigma$ in $M$
coincides with a causal normal neighbourhood $\hat{N}$ of
$\Sigma_{0}$ in $\hat{M}$, in such a way that $\Sigma = \Sigma_0$
and $g = \hat{g}$ on $N$.
\\[6pt]
(2) For some $t_0 < 0$, the $(-\infty,t_0) \times \Sigma$-part of
$\hat{M}$ lies properly to the past of $\hat{N}$, and on that part,
$\hat{g}$ takes the form $dt^2 \oplus (- \gamma)$ where
$\gamma$ is a complete Riemannian metric on $\Sigma$.
\\[6pt]
This means that $(\hat{M},\hat{g})$ is a globally hyperbolic
spacetime which equals $(M,g)$ on a causal normal neighbourhood of $\Sigma$
and becomes ultrastatic to the past of it.
Then consider the Weyl-algebra ${\cal A}[\hat{K},\hat{\kappa}]$ of the
KG-field with potential term $\hat{r}$ over $(\hat{M},\hat{g})$, where
$\hat{r} \in C_0^{\infty}(\hat{M},{\bf R})$ agrees with $r$ on the
neighbourhood $\hat{N} = N$ and is identically equal to $1$ on the
$(-\infty,t_0) \times \Sigma$-part of $\hat{M}$. Now observe that
the propagators $E$ and $\hat{E}$ of the respective KG-equations
on $(M,g)$ and $(\hat{M},\hat{g})$ coincide when restricted to
$C_0^{\infty}(N,{\bf R})$. Therefore one obtains an identification map
$$ [f] = f + {\rm ker}(E) \mapsto [f]\,\hat{{}} = f + {\rm ker}(\hat{E}) \,,
\quad f \in C_0^{\infty}(N,{\bf R}) \,,$$
between $K(N)$ and $\hat{K}(\hat{N})$ which preserves the
symplectic forms $\kappa$ and $\hat{\kappa}$. Without danger of confusion we may
write this identification as an equality,
$K(N) = \hat{K}(\hat{N})$.
This identification map between $(K(N),\kappa|K(N))$
and $(\hat{K}(\hat{N}),\hat{\kappa}|\hat{K}(\hat{N}))$ lifts to
a $C^*$-algebraic isomorphism between the corresponding
Weyl-algebras
\begin{eqnarray}
{\cal A}[K(N),\kappa|K(N)]& =& {\cal A}[\hat{K}(\hat{N}),\hat{\kappa}|
\hat{K}(\hat{N})]\,, \nonumber \\
W([f])& =& \hat{W}([f]\,\hat{{}}\,)\,,\ \ \
f \in C_0^{\infty}(N,{\bf R})\,.
\end{eqnarray}
Here we followed our just indicated convention to abbreviate
this identification as an equality. Now we have
$D(N) = M$ in $(M,g)$ and $D(\hat{N}) = \hat{M}$ in
$(\hat{M},\hat{g})$, implying that $K(N) = K$ and
$\hat{K}(\hat{N}) = \hat{K}$. Hence ${\cal A}[K(N),\kappa|K(N)] =
{\cal A}[K,\kappa]$ and the same for the ``hatted'' objects.
Thus (3.15) gives rise to an identification between
${\cal A}[K,\kappa]$ and ${\cal A}[\hat{K},\hat{\kappa}]$, and so the
quasifree Hadamard state $\omega_{\mu}$ induces a quasifree
state $\omega_{\hat{\mu}}$ on ${\cal A}[\hat{K},\hat{\kappa}]$
with
\begin{equation}
\hat{\mu}([f]\,\hat{{}},[h]\,\hat{{}}\,) = \mu([f],[h])\,, \quad
f,h \in C_0^{\infty}(N,{\bf R}) \,.
\end{equation}
This state is also an Hadamard state since we have
\begin{eqnarray*}
\Lambda(f,h)& =& \mu([f],[h]) + \frac{i}{2}\kappa([f],[h]) \\
& = & \hat{\mu}([f]\,\hat{{}}\,,[h]\,\hat{{}}\,) + \frac{i}{2}
\hat{\kappa}([f]\,\hat{{}}\,,[h]\,\hat{{}}\,)\,, \quad f,h \in
C_0^{\infty}(N,{\bf R})\,,
\end{eqnarray*}
and $\Lambda$ is, by assumption, of Hadamard form.
However, due to the causal propagation property of the
Hadamard form this means that $\hat{\mu}$ is the dominating
scalar product on $(\hat{K},\hat{\kappa})$ of a quasifree
Hadamard state on ${\cal A}[\hat{K},\hat{\kappa}]$.
Now choose some $t < t_0$, and let $\Sigma_t = \{t\} \times
\Sigma$ be the Cauchy-surface in the ultrastatic part
of $(\hat{M},\hat{g})$ corresponding to this value of the
time-parameter of the natural foliation. As remarked above,
the scalar product
\begin{equation}
\mu^{\circ}_{\Sigma_t}(u_0 \oplus u_1,v_0 \oplus v_1)
= \frac{1}{2}\left( \langle u_0,v_0 \rangle_{\gamma,1/2}
+ \langle u_1,v_1 \rangle_{\gamma,-1/2} \right)\,,
\quad u_0 \oplus u_1,v_0 \oplus v_1 \in {\cal D}_{\Sigma_t} \,,
\end{equation}
is the dominating scalar product on
$({\cal D}_{\Sigma_t},\delta_{\Sigma_t})$ corresponding to the
ultrastatic vacuum state $\omega^{\circ}$ over the
ultrastatic part of $(\hat{M},\hat{g})$, which is an Hadamard
vacuum. Since the dominating scalar products of all
quasifree Hadamard states yield locally the same topology
(Prop.\ 3.4(e)), it follows that the dominating scalar product
$\hat{\mu}_{\Sigma_t}$ on $({\cal D}_{\Sigma_{t}},\delta_{\Sigma_t})$,
which is induced (cf.\ (3.11)) by the dominating scalar product
$\hat{\mu}$ of the quasifree Hadamard state $\omega_{\hat{\mu}}$,
endows ${\cal D}_{\Sigma_t}$ locally with the same topology
as does $\mu^{\circ}_{\Sigma_t}$. As can be read off from (3.17),
this is the local $H_{1/2} \oplus H_{-1/2}$-topology.
To complete the argument, we note that (cf.\ (3.11,3.13))
$$ \hat{\mu}_{\Sigma_0}(u_0 \oplus u_1,v_0 \oplus v_1) =
\hat{\mu}_{\Sigma_t}(T_{\Sigma_t,\Sigma_0}(u_0 \oplus u_1),T_{\Sigma_t,\Sigma_0}
(v_0 \oplus v_1))\,, \quad u_0 \oplus u_1,v_0 \oplus v_1 \in {\cal D}_{\Sigma_0}\,.$$
But since $\hat{\mu}_{\Sigma_t}$ induces locally the
$H_{1/2} \oplus H_{-1/2}$-topology and since the symplectomorphism
$T_{\Sigma_t,\Sigma_0}$ as well as its inverse are locally continuous
on the Cauchy-data spaces in the $H_{1/2}\oplus H_{-1/2}$-topology,
the last equality entails that $\hat{\mu}_{\Sigma_0}$ induces the
local $H_{1/2} \oplus H_{-1/2}$-topology on ${\cal D}_{\Sigma_0}$.
In view of (3.16), the Proposition is now proved. $\Box$
\\[24pt]
{\bf 3.5 Local Definiteness, Local Primarity,
Haag-Duality, etc.}
\\[18pt]
In this section we prove Theorem 3.6 below on the algebraic
structure of the GNS-representations associated with quasifree
Hadamard states on the CCR-algebra of the KG-field on an
arbitrary globally hyperbolic spacetime $(M,g)$. The
results appearing therein extend our previous work [64,65,66].
Let $(M,g)$ be a globally hyperbolic spacetime.
We recall that a subset ${\cal O}$ of $M$
is called a {\it regular diamond} if it is of the form
${\cal O} = {\cal O}_G = {\rm int}\,D(G)$ where
$G$ is an open, relatively compact subset of some Cauchy-surface
$\Sigma$ in $(M,g)$ having the property that the boundary
$\partial G$ of $G$ is contained in the
union of finitely many smooth, closed, two-dimensional submanifolds
of $\Sigma$. We also recall the notation ${\cal R}_{\omega}({\cal O})
= \pi_{\omega}({\cal A}({\cal O}))^-$ for the local von Neumann algebras in
the GNS-representation of a state $\omega$. The $C^*$-algebraic
net of observable algebras ${\cal O} \to {\cal A}({\cal O})$
will be understood as being that associated
with the KG-field in Prop.\ 3.2.
\begin{Theorem}
Let $(M,g)$ be a globally hyperbolic spacetime and
${\cal A}[K,\kappa]$ the Weyl-algebra of the KG-field with smooth, real-valued
potential function $r$ over $(M,g)$. Suppose that
$\omega$ and $\omega_1$ are two quasifree Hadamard states on
${\cal A}[K,\kappa]$. Then the following statements hold.
\\[6pt]
(a) The GNS-Hilbertspace ${\cal H}_{\omega}$
of $\omega$ is infinite dimensional and separable.
\\[6pt]
(b) The restrictions of the GNS-representations $\pi_{\omega}|{\cal A(O)}$
and $\pi_{\omega_1}|{\cal A(O)}$ for any open, relatively compact
${\cal O} \subset M$ are quasiequivalent. They are even unitarily
equivalent when ${\cal O}^{\perp}$ is non-void.
\\[6pt]
(c) For each $p \in M$ we have local definiteness,
$$ \bigcap_{{\cal O} \owns p} {\cal R}_{\omega}({\cal O}) = {\bf C} \cdot 1\, . $$
More generally, whenever $C \subset M$ is the subset of a compact
set which is contained in the union of finitely many smooth, closed,
two-dimensional submanifolds of an arbitrary Cauchy-surface
$\Sigma$ in $M$,
then
\begin{equation}
\bigcap_{{\cal O} \supset C} {\cal R}_{\omega}({\cal O}) = {\bf C} \cdot 1\,.
\end{equation}
\\[6pt]
(d) Let ${\cal O}$ and ${\cal O}_1$ be two relatively compact diamonds, based
on Cauchy-surfaces $\Sigma$ and $\Sigma_1$, respectively, such
that $\overline{{\cal O}} \subset {\cal O}_1$. Then the split-property
holds for the pair ${\cal R}_{\omega}({\cal O})$ and
${\cal R}_{\omega}({\cal O}_1)$, i.e.\ there exists a type ${\rm I}_{\infty}$
factor $\cal N$ such that one has the inclusion
$$ {\cal R}_{\omega}({\cal O}) \subset {\cal N} \subset {\cal R}_{\omega}
({\cal O}_1) \,. $$
\\[6pt]
(e) Inner and outer regularity
\begin{equation}
{\cal R}_{\omega}({\cal O}) = \left( \bigcup_{\overline{{\cal O}_I} \subset {\cal O}}
{\cal R}_{\omega}({\cal O}_I) \right) '' =
\bigcap_{{\cal O}_1 \supset \overline{{\cal O}}} {\cal R}_{\omega}({\cal O}_1)
\end{equation}
holds for all regular diamonds ${\cal O}$.
\\[6pt]
(f) If $\omega$ is pure (an Hadamard vacuum), then we have Haag-Duality
$$ {\cal R}_{\omega}({\cal O})' = {\cal R}_{\omega}({\cal O}^{\perp}) $$
for all regular diamonds ${\cal O}$. (By the same arguments as in {\rm
[65 (Prop.\ 6)]}, Haag-Duality extends to all pure (but not necessarily
quasifree or Hadamard) states $\omega$ which are locally normal
(hence, by (d), locally quasiequivalent) to any Hadamard vacuum.)
\\[6pt]
(g) Local primarity holds for all regular diamonds, that is, for
each regular diamond ${\cal O}$, ${\cal R}_{\omega}({\cal O})$ is a factor.
Moreover, ${\cal R}_{\omega}({\cal O})$ is isomorphic to the unique
hyperfinite type ${\rm III}_1$ factor if ${\cal O}^{\perp}$
is non-void. In this case, ${\cal R}_{\omega}({\cal O}^{\perp})$ is
also hyperfinite and of type ${\rm III}_1$, and if $\omega$ is
pure, ${\cal R}_{\omega}({\cal O}^{\perp})$ is again a factor.
Otherwise, if ${\cal O}^{\perp} = \emptyset$, then
${\cal R}_{\omega}({\cal O})$ is a type ${\rm I}_{\infty}$ factor.
\end{Theorem}
{\it Proof.} The key point in the proof is that, by results which
for the cases relevant here are to a large extent due to Araki [1],
the above statement can be equivalently translated into statements
about the structure of the one-particle space, i.e.\ essentially the
symplectic space $(K,\kappa)$ equipped with the scalar product
$\lambda_{\omega}$. We shall use, however, the formalism of [40,45].
Following that, given a symplectic space $(K,\kappa)$ and
$\mu \in {\sf q}(K,\kappa)$ one calls a real linear map
${\bf k}: K \to H$ a {\it one-particle Hilbertspace structure} for
$\mu$ if (1) $H$ is a complex Hilbertspace, (2) the complex linear
span of ${\bf k}(K)$ is dense in $H$ and (3)
$$ \langle {\bf k}(x),{\bf k}(y) \rangle = \lambda_{\mu}(x,y)
= \mu(x,y) + \frac{i}{2}\kappa(x,y) $$
for all $x,y \in K$. It can then be shown (cf.\ [45 (Appendix A)])
that the GNS-representation of the quasifree state $\omega_{\mu}$
on ${\cal A}[K,\kappa]$ may be realized in the following way:
${\cal H}_{\omega_{\mu}} = F_s(H)$, the Bosonic Fock-space over the one-particle
space $H$, $\Omega_{\omega_{\mu}}$ = the Fock-vacuum, and
$$ \pi_{\omega_{\mu}}(W(x)) = {\rm e}^{i(a({\bf k}(x)) +
a^+({\bf k}(x)))^-}\,, \quad
x \in K\, ,$$
where $a(\,.\,)$ and $a^+(\,.\,)$ are the Bosonic annihilation and
creation operators, respectively.
Now it is useful to define the symplectic complement
$F^{\tt v} := \{\chi \in H : {\sf Im}\,\langle \chi,\phi \rangle = 0
\ \ \forall \phi \in F \}$ for $F \subset H$, since it is known
that
\begin{itemize}
\item[(i)] ${\cal R}_{\omega_{\mu}}({\cal O})$ is a factor \ \ \ iff\ \ \
$ {\bf k}(K({\cal O}))^- \cap {\bf k}(K({\cal O}))^{\tt v} = \{0\}$,
\item[(ii)] ${\cal R}_{\omega_{\mu}}({\cal O})' = {\cal R}_{\omega_{\mu}}
({\cal O}^{\perp})$\ \ \ iff\ \ \
${\bf k}(K({\cal O}))^{\tt v} = {\bf k}(K({\cal O}^{\perp}))^-$,
\item[(iii)] $\bigcap_{{\cal O} \supset C} {\cal R}_{\omega_{\mu}}({\cal O})
= {\bf C} \cdot 1$ \ \ \ iff\ \ \
$\bigcap_{{\cal O} \supset C}{\bf k}(K({\cal O}))^- = \{0\}\,,$
\end{itemize}
cf.\ [1,21,35,49,58].
After these preparations we can commence with the proof of the various
statements of our Theorem.
\\[6pt]
(a) Let ${\bf k}: K \to H$ be the one-particle Hilbertspace structure
of $\omega$. The local one-particle spaces ${\bf k}(K({\cal O}_G))^-$ of
regular diamonds ${\cal O}_G$ based on $G \subset \Sigma$ are topologically
isomorphic to the completions of $C_0^{\infty}(G,{\bf R}) \oplus
C_0^{\infty}(G,{\bf R})$ in the $H_{1/2} \oplus H_{-1/2}$-topology and
these are separable. Hence ${\bf k}(K)^-$, which is generated by a
countable set ${\bf k}(K({\cal O}_{G_n}))$, for $G_n$ a sequence of
locally compact subsets of $\Sigma$ eventually exhausting $\Sigma$,
is also separable. The same holds then for the one-particle
Hilbertspace $H$ in which the complex span of ${\bf k}(K)$ is
dense, and thus separability is implied for ${\cal H}_{\omega} = F_s(H)$.
The infinite-dimensionality is clear.
\\[6pt]
(b) The local quasiequivalence has been proved in [66] and we refer to
that reference for further details. We just indicate that the
proof makes use of the fact that the difference $\Lambda - \Lambda_1$
of the spatio-temporal two-point functions of any pair of
quasifree Hadamard states is on each causal normal neighbourhood
of any Cauchy-surface given by a smooth integral kernel ---
as can be directly read off from the Hadamard form --- and this turns
out to be sufficient for local quasiequivalence. The statement
about the unitary equivalence can be inferred from (g) below,
since it is known that every $*$-preserving isomorphism between
von Neumann algebras of type III acting on separable Hilbertspaces
is given by the adjoint action of a unitary operator which maps
the Hilbertspaces onto each other. See e.g.\ Thm.\ 7.2.9 and
Prop.\ 9.1.6 in [39].
\\[6pt]
(c) Here one uses that there exist Hadamard vacua, i.e.\ pure
quasifree Hadamard states $\omega_{\mu}$. Since by Prop.\ 3.4
the topology of $\mu_{\Sigma}$ in ${\cal D}_{\Sigma}$ is locally that of
$H_{1/2} \oplus H_{-1/2}$, one can show as in [66 (Chp.\ 4 and
Appendix)] that under the stated hypotheses about $C$ it holds
that $\bigcap_{{\cal O} \supset C} {\bf k}(K({\cal O}))^- = \{0\}$ for the
one-particle Hilbertspace structures of Hadamard vacua. From
the local equivalence of the topologies induced by the dominating
scalar products of all quasifree Hadamard states (Prop.\ 3.4(e)),
this extends to the one-particle structures of all quasifree
Hadamard states. By (iii), this yields the statement (c).
\\[6pt]
(d) This is proved in [65] under the additional assumption that
the potential term $r$ is a positive constant. (The result was
formulated in [65] under the hypothesis that $\Sigma = \Sigma_1$,
but it is clear that the present statement without this hypothesis
is an immediate generalization.) To obtain the general case
one needs in the spacetime deformation argument of [65]
the modification that the potential term $\hat{r}$ of the KG-field
on the new spacetime $(\hat{M},\hat{g})$ is equal to a positive
constant on its ultrastatic part while being equal to $r$ in a
neighbourhood of $\Sigma$. We have used that procedure already in
the proof of Prop.\ 3.5, see also the proof of (f) below where
precisely the said modification will be carried out in more detail.
\\[6pt]
(e) Inner regularity follows simply from the definition of the
${\cal A}({\cal O})$; one deduces that for each $A \in {\cal A}({\cal O})$ and each
$\epsilon > 0$ there exists some $\overline{{\cal O}_I} \subset {\cal O}$
and $A_{\epsilon} \in {\cal A}({\cal O}_I)$ so that
$||\,A - A_{\epsilon}\,|| < \epsilon$. It is easy to see that
inner regularity is a consequence of this property.
So we focus now on the outer regularity.
Let ${\cal O} = {\cal O}_G$ be based on the subset $G$ of the Cauchy-surface
$\Sigma$. Consider the symplectic space $({\cal D}_{\Sigma},\delta_{\Sigma})$
and the dominating scalar product $\mu_{\Sigma}$ induced by $\mu
\in {\sf q}({\cal D}_{\Sigma},\delta_{\Sigma})$, where $\omega_{\mu} = \omega$;
the corresponding one-particle Hilbertspace structure we denote by
${\bf k}_{\Sigma}: {\cal D}_{\Sigma} \to H_{\Sigma}$. Then we denote by
${\cal W}({\bf k}_{\Sigma}({\cal D}_G))$ the von Neumann algebra in
$B(F_s(H_{\Sigma}))$ generated by the unitary groups of the
operators $(a({\bf k}_{\Sigma}(u_0 \oplus u_1)) + a^+({\bf k}_{\Sigma}(u_0 \oplus u_1)))^-$
where $u_0 \oplus u_1$ ranges over ${\cal D}_G := C_0^{\infty}(G,{\bf R}) \oplus
C_0^{\infty}(G,{\bf R})$. So ${\cal W}({\bf k}_{\Sigma}({\cal D}_G)) =
{\cal R}_{\omega}({\cal O}_G)$. It holds
generally that $\bigcap_{G_1 \supset \overline{G}} {\cal W}({\bf k}_{\Sigma}
({\cal D}_{G_1})) = {\cal W}(\bigcap_{G_1 \supset \overline{G}}
{\bf k}_{\Sigma}({\cal D}_{G_1})^-)$ [1], hence, to establish outer
regularity, we must show that
\begin{equation}
\bigcap_{G_1 \supset \overline{G}} {\bf k}_{\Sigma}({\cal D}_{G_1})^-
= {\bf k}_{\Sigma}({\cal D}_G)^-\,.
\end{equation}
In [65] we have proved that the ultrastatic vacuum $\omega^{\circ}$
of the KG-field with potential term $\equiv 1$ over the ultrastatic
spacetime $(M^{\circ},g^{\circ}) = ({\bf R} \times \Sigma,dt^2 \oplus
(-\gamma))$ (where $\gamma$ is any complete Riemannian metric on
$\Sigma$) satisfies Haag-duality. That means, we have
\begin{equation}
{\cal R}^{\circ}_{\omega^{\circ}}({\cal O}_{\circ})' =
{\cal R}^{\circ}_{\omega^{\circ}}({\cal O}_{\circ}^{\perp})
\end{equation}
for any regular diamond ${\cal O}_{\circ}$ in $(M^{\circ},g^{\circ})$
which is based on any of the Cauchy-surfaces $\{t\}\times \Sigma$ in
the natural foliation, and we have put a ``$\circ$'' on the local
von Neumann algebras to indicate that they refer to a KG-field
over $(M^{\circ},g^{\circ})$. But since we have inner regularity
for ${\cal R}^{\circ}_{\omega^{\circ}}({\cal O}_{\circ}^{\perp})$ ---
by the very definition --- the outer regularity of ${\cal R}^{\circ}
_{\omega^{\circ}}({\cal O}_{\circ})$ follows from the Haag-duality (3.21).
Translated into conditions on the one-particle Hilbertspace
structure ${\bf k}^{\circ}_{\Sigma} : {\cal D}_{\Sigma} \to H^{\circ}_{\Sigma}$
of $\omega^{\circ}$, this means that the equality
\begin{equation}
\bigcap_{G_1 \supset \overline{G}} {\bf k}^{\circ}_{\Sigma}
({\cal D}_{G_1})^- = {\bf k}^{\circ}_{\Sigma}({\cal D}_G)^-
\end{equation}
holds. Now we know from Prop.\ 3.5 that $\mu_{\Sigma}$ induces
locally the $H_{1/2} \oplus H_{-1/2}$-topology on ${\cal D}_{\Sigma}$. However,
this coincides with the topology locally induced by $\mu^{\circ}_{\Sigma}$
on ${\cal D}_{\Sigma}$ (cf.\ (3.11)) --- even though $\mu^{\circ}_{\Sigma}$ may,
in general, not be viewed as corresponding to an Hadamard vacuum
of the KG-field over $(M,g)$. Thus the required relation (3.20)
is implied by (3.22).
\\[6pt]
(f) In view of outer regularity it is enough to show that, given
any ${\cal O}_1 \supset \overline{{\cal O}}$, it holds that
\begin{equation}
{\cal R}_{\omega}({\cal O}^{\perp})' \subset {\cal R}_{\omega}({\cal O}_1)\,.
\end{equation}
The demonstration of this property relies on a spacetime deformation
argument similar to that used in the proof of Prop.\ 3.5. Let
$G$ be the base of ${\cal O}$ on the Cauchy-surface $\Sigma$ in $(M,g)$.
Then, given any other open, relatively compact subset $G_1$ of
$\Sigma$ with $\overline{G} \subset G_1$, we have shown in
[65] that there exists an ultrastatic spacetime $(\hat{M},\hat{g})$
with the properties (1) and (2) in the proof of Prop.\ 3.5, and with
the additional property that there is some $t < t_0$ such that
$$ \left( {\rm int}\,\hat{J}(G) \cap \Sigma_t \right )^- \subset
{\rm int}\, \hat{D}(G_1) \cap \Sigma_t\,.$$
Here, $\Sigma_t = \{t\} \times \Sigma$ are the Cauchy-surfaces in the
natural foliation of the ultrastatic part of $(\hat{M},\hat{g})$.
The hats indicate that the causal set and the domain of dependence
are to be taken in $(\hat{M},\hat{g})$. This implies that we can find
some regular diamond ${\cal O}^t := {\rm int}\hat{D}(S^t)$ in
$(\hat{M},\hat{g})$ based on a subset $S^t$ of $\Sigma_t$ which
satisfies
\begin{equation}
\left( {\rm int}\, \hat{J}(G) \cap \Sigma_t \right)^-
\subset S^t \subset
{\rm int}\,\hat{D}(G_1) \cap \Sigma_t \,.
\end{equation}
Setting $\hat{{\cal O}} := {\rm int}\, \hat{D}(G)$ and
$\hat{{\cal O}}_1 := {\rm int}\,\hat{D}(G_1)$, one derives from (3.24)
the relations
\begin{equation}
\hat{{\cal O}} \subset {\cal O}^t \subset \hat{{\cal O}}_1
\,.
\end{equation}
These are equivalent to
\begin{equation}
\hat{{\cal O}}_1^{\perp} \subset ({\cal O}^t)^{\perp} \subset \hat{{\cal O}}^{\perp}
\end{equation}
where $\perp$ is the causal complementation in $(\hat{M},\hat{g})$.
Now as in the proof of Prop.\ 3.5, the given Hadamard vacuum $\omega$
on the Weyl-algebra ${\cal A}[K,\kappa]$ of the KG-field over $(M,g)$
induces an Hadamard vacuum $\hat{\omega}$ on the Weyl-algebra
${\cal A}[\hat{K},\hat{\kappa}]$ of the KG-field over $(\hat{M},\hat{g})$
whose potential term $\hat{r}$ is $1$ on the ultrastatic
part of $(\hat{M},\hat{g})$. Then by Prop.\ 6 in [65] we have
Haag-duality
\begin{equation}
\hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}_t}^{\perp}) ' =
\hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}_t})
\end{equation}
for all regular diamonds $\hat{{\cal O}_t}$ with base on
$\Sigma_t$; we have put hats on the von Neumann algebras
to indicate that they refer to ${\cal A}[\hat{K},\hat{\kappa}]$.
(This was proved in [65] assuming that $(\hat{M},\hat{g})$
is globally ultrastatic. However, with the same argument, based on
primitive causality, as we use it next to pass from (3.28) to
(3.30), one can easily establish that (3.27) holds if only
$\Sigma_t$ is, as here, a member in the natural foliation of the
ultrastatic part of $(\hat{M},\hat{g})$.)
Since ${\cal O}^t$ is a regular diamond based on $\Sigma_t$, we obtain
$$\hat{\cal R}_{\hat{\omega}}(({\cal O}^t)^{\perp})' =
\hat{\cal R}_{\hat{\omega}}({\cal O}^t) $$
and thus, in view of (3.25) and (3.26),
\begin{equation}
\hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}^{\perp})'
\subset \hat{\cal R}_{\hat{\omega}}(({\cal O}^t)^{\perp})'
= \hat{\cal R}_{\hat{\omega}}({\cal O}^t) \subset
\hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}_1)\,.
\end{equation}
Now recall (see proof of Prop.\ 3.5) that $(\hat{M},\hat{g})$
coincides with $(M,g)$ on a causal normal neighbourhood $N$ of
$\Sigma$. Primitive causality (Prop.\ 3.2) then entails
\begin{equation}
\hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}^{\perp} \cap N)'
\subset \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}_1 \cap N) \,.
\end{equation}
On the other hand, $\hat{{\cal O}}^{\perp} = {\rm int} \hat{D}(\Sigma
\backslash G)$ and $\hat{{\cal O}}_1$ are diamonds in $(\hat{M},\hat{g})$
based on $\Sigma$. Since $(M,g)$ and $(\hat{M},\hat{g})$
coincide on the causal normal neighbourhood $N$ of $\Sigma$,
one obtains that ${\rm int}\,D(\tilde{G}) \cap N =
{\rm int}\, \hat{D}(\tilde{G}) \cap N$ for all $\tilde{G} \subset
\Sigma$. Hence, with ${\cal O} = {\rm int}\,D(G)$,
${\cal O}_1 = {\rm int}\, D(G_1)$ (in $(M,g)$), we have that (3.23) entails
$$ {\cal R}_{\omega}({\cal O}^{\perp} \cap N)' \subset
{\cal R}_{\omega}({\cal O}_1 \cap N)
$$
(cf.\ the proof of Prop.\ 3.5) where the causal complement $\perp$
is now taken in $(M,g)$. Using primitive causality once more,
we deduce that
\begin{equation}
{\cal R}_{\omega}({\cal O}^{\perp})' \subset {\cal R}_{\omega}({\cal O}_1)\,.
\end{equation}
The open, relatively compact subset $G_1$ of $\Sigma$ was
arbitrary up to the constraint $\overline{G} \subset G_1$.
Therefore, we arrive at the conclusion that the required inclusion
(3.23) holds for all ${\cal O}_1 \supset \overline{{\cal O}}$.
\\[6pt]
(g) Let $\Sigma$ be the Cauchy-surface on which ${\cal O}$ is based.
For the local primarity one uses, as in (c), the existence of
Hadamard vacua $\omega_{\mu}$ and the fact (Prop.\ 3.5) that
$\mu_{\Sigma}$ induces locally the $H_{1/2} \oplus H_{-1/2}$-topology;
then one may use the arguments of [66 (Chp.\ 4 and Appendix)]
to show that due to the regularity of the boundary
$\partial G$ of the base $G$ of ${\cal O}$
there holds
$$ {\bf k}(K({\cal O}))^- \cap {\bf k}(K({\cal O}))^{\tt v} = \{ 0 \}$$
for the one-particle Hilbertspace structures of Hadamard vacua.
As in the proof of (c), this can be carried over to the
one-particle structures of all quasifree Hadamard states since
they induce locally on the one-particle spaces the same topology,
see [66 (Chp.\ 4)]. We note that for Hadamard vacua the local
primarity can also be established using (3.18) together with Haag-duality
and primitive causality purely at the algebraic level, without
having to appeal to the one-particle structures.
The type ${\rm III}_1$-property of ${\cal R}_{\omega}({\cal O})$ is then
derived using Thm.\ 16.2.18 in [3] (see also [73]).
We note that for some points $p$ in the boundary $\partial G$ of $G$, ${\cal O}$
admits domains which are what are called ``$\beta_p$-causal sets'' in
Sect.\ 16.2.4 of [3], as a consequence of the regularity of
$\partial G$ and the assumption ${\cal O}^{\perp} \neq \emptyset$.
We further note that it is straightforward to prove that
the quasifree Hadamard states of the KG-field over $(M,g)$
possess at each point in $M$ scaling limits (in the sense of
Sect.\ 16.2.4 in [3], see also [22,32]) which are equal to the
theory of the massless KG-field in Minkowski-spacetime. Together
with (a) and (c) of the present Theorem this shows that the
assumptions of Thm.\ 16.2.18 in [3] are fulfilled, and the
${\cal R}_{\omega}({\cal O})$ are type ${\rm III}_1$-factors for all
regular diamonds ${\cal O}$ with ${\cal O}^{\perp} \neq \emptyset$.
The hyperfiniteness follows from the split-property (d) and the
regularity (e), cf.\ Prop.\ 17.2.1 in [3]. The same arguments
may be applied to ${\cal R}_{\omega}({\cal O}^{\perp})$, yielding
its type ${\rm III}_1$-property (meaning that in its central
decomposition only type ${\rm III}_1$-factors occur) and
hyperfiniteness. If $\omega$ is an Hadamard vacuum, then
${\cal R}_{\omega}({\cal O}^{\perp}) = {\cal R}_{\omega}({\cal O})'$ is
a factor unitarily equivalent to ${\cal R}_{\omega}({\cal O})$.
For the last statement note that ${\cal O}^{\perp} = \emptyset$ implies
that the spacetime has a compact Cauchy-surface on which ${\cal O}$
is based. In this case ${\cal R}_{\omega}({\cal O}) =
\pi_{\omega}({\cal A}[K,\kappa])''$ (use the regularity of $\partial G$,
and (c), (e) and primitive causality). But since $\omega$ is
quasiequivalent to any Hadamard vacuum by the relative compactness of
${\cal O}$, ${\cal R}_{\omega}({\cal O}) = \pi_{\omega}({\cal A}[K,\kappa])''$
is a type ${\rm I}_{\infty}$-factor. $\Box$
\\[10pt]
We end this section, and with it this work, with a few concluding
remarks.
First we note that the split-property signifies a strong notion of
statistical independence. It can be deduced from constraints on the
phase-space behaviour (``nuclearity'') of the considered quantum
field theory. We refer to [9,31] for further information and
also to [62] for a review, as a discussion of these issues lies
beyond the scope of this article. The same applies to a discussion
of the property of the local von Neumann algebras ${\cal R}_{\omega}({\cal O})$
to be hyperfinite and of type ${\rm III}_1$. We only mention that
for quantum field theories on Minkowski spacetime it can be established
under very general (model-independent)
conditions that the local (von Neumann) observable
algebras are hyperfinite and of type ${\rm III}_1$, and refer the reader to
[7] and references cited therein. However, the property of the
local von Neumann algebras to be of type ${\rm III}_1$, together with
the separability of the GNS-Hilbertspace ${\cal H}_{\omega}$, has an
important consequence which we would like to point out (we have
used it implicitly already in the proof of Thm.\ 3.6(b)):
${\cal H}_{\omega}$ contains a dense subset ${\sf ts}({\cal H}_{\omega})$ of vectors
which are cyclic and separating for all ${\cal R}_{\omega}({\cal O})$
whenever ${\cal O}$ is a diamond with ${\cal O}^{\perp} \neq \emptyset$. But
so far it has only been established in special cases that $\Omega_{\omega}
\in {\sf ts}({\cal H}_{\omega})$, see [64]. At any rate, when
$\Omega \in {\sf ts}({\cal H}_{\omega})$ one may consider for a pair of
regular diamonds ${\cal O}_1,{\cal O}_2$ with $\overline{{\cal O}_1} \subset {\cal O}_2$
and ${\cal O}_2^{\perp}$ nonvoid the modular operator $\Delta_2$
of the pair $({\cal R}_{\omega}({\cal O}_2),\Omega)$ (cf.\ [39]). The split property
and the factoriality of ${\cal R}_{\omega}({\cal O}_1)$ and ${\cal R}_{\omega}
({\cal O}_2)$ imply that the map
\begin{equation}
\Xi_{1,2} : A \mapsto \Delta^{1/4}_2 A \Omega\,, \quad A \in
{\cal R}_{\omega}({\cal O}_1)\,,
\end{equation}
is compact [8]. As explained in [8],
``modular compactness'' or ``modular nuclearity'' may be viewed
as suitable generalizations of ``energy compactness'' or
``energy nuclearity'' to curved spacetimes as notions to measure
the phase-space behaviour of a quantum field theory
(see also [65]). Thus an interesting
question would be whether the maps (3.31) are even nuclear.
Summarizing, it can be said that Thm.\ 3.6 shows that the nets of von
Neumann observable algebras of the KG-field over a globally hyperbolic
spacetime in the representations of quasifree Hadamard states have
all the properties one would expect for physically reasonable
representations. This supports the point of view that quasifree
Hadamard states appear to be a good choice for physical states
of the KG-field over a globally hyperbolic spacetime. Similar results
are expected to hold also for other linear fields.
Finally, the reader will have noticed that we have been considering
exclusively the quantum theory of a KG-field on a {\it globally hyperbolic}
spacetime. For recent developments concerning quantum fields in the
background of non-globally hyperbolic spacetimes, we
refer to [44] and references cited there.
\\[24pt]
{\bf Acknowledgements.} I would like to thank D.\ Buchholz for
valuable comments on a very early draft of Chapter 2. Moreover,
I would like to thank C.\ D'Antoni, R.\ Longo, J.\ Roberts
and L.\ Zsido for
their hospitality, and their interest in quantum field theory
in curved spacetimes. I also appreciated conversations with R.\ Conti,
D.\ Guido and L.\ Tuset on various parts of the material of the
present work.
\\[28pt]
\noindent
{\Large {\bf Appendix}}
\\[24pt]
{\bf Appendix A}
\\[18pt]
For the sake of completeness, we include here the interpolation argument
in the form in which we use it in the proof of Theorem 2.2 and in Appendix B
below. It is a standard argument based on Hadamard's three-line-theorem,
cf.\ Chapter IX in [57].
\\[10pt]
{\bf Lemma A.1}
{\it
Let ${\cal F},{\cal H}$ be complex Hilbertspaces, $X$ and $Y$ two non-negative,
injective, selfadjoint operators in ${\cal F}$ and ${\cal H}$, respectively,
and $Q$ a bounded linear operator ${\cal H} \to {\cal F}$
such that $Q{\rm Ran}(Y) \subset {\rm dom}(X)$.
Suppose that the operator $XQY$ admits a
bounded extension $T :{\cal H} \to {\cal F}$. Then for all $0 \leq \tau \leq 1$,
it holds that $Q{\rm Ran}(Y^{\tau}) \subset {\rm dom}(X^{\tau})$, and
the operators $X^{\tau}QY^{\tau}$ are bounded by $||\,T\,||^{\tau}
||\,Q\,||^{1 - \tau}$. }
\\[10pt]
{\it Proof.} The operators $\ln(X)$ and $\ln(Y)$ are (densely defined)
selfadjoint operators. Let the vectors $x$ and $y$ belong to the
spectral subspaces of $\ln(X)$ and $\ln(Y)$, respectively, corresponding
to an arbitrary finite interval. Then the functions
${\bf C} \owns z \mapsto {\rm e}^{z\ln(X)}x$ and
${\bf C} \owns z \mapsto {\rm e}^{z\ln(Y)}y$ are holomorphic. Moreover,
${\rm e}^{\tau \ln(X)}x = X^{\tau}x$ and ${\rm e}^{\tau \ln(Y)}y =
Y^{\tau}y$ for all real $\tau$. Consider the function
$$ F(z) := \langle {\rm e}^{\overline{z}\ln(X)}x,Q{\rm e}^{z\ln(Y)}y
\rangle_{{\cal F}} \,.$$
It is easy to see that this function is holomorphic on ${\bf C}$, and
also that the function is uniformly bounded for $z$ in the
strip $\{z : 0 \leq {\sf Re}\,z \leq 1 \}$.
For $z = 1 + it$, $t \in {\bf R}$, one has
$$ |F(z)| = |\langle {\rm e}^{-it\ln(X)}x,XQY{\rm e}^{it\ln(Y)}y \rangle_{{\cal F}} |
\leq ||\,T\,||\,||\,x\,||_{{\cal F}}||\,y\,||_{{\cal H}} \,,$$
and for $z = it$, $t \in {\bf R}$,
$$ |F(z)| = |\langle {\rm e}^{-it\ln(X)}x,Q{\rm e}^{it\ln(Y)}y \rangle_{{\cal F}} |
\leq ||\,Q\,||\,||\,x\,||_{{\cal F}}||\,y\,||_{{\cal H}} \,.$$
By Hadamard's three-line-theorem, it follows that for all $z = \tau + it$
in the said strip there holds the bound
$$ |F(\tau + it)| \leq ||\,T\,||^{\tau}||\,Q\,||^{1 - \tau}||\,x\,||_{{\cal F}}
||\,y\,||_{{\cal H}}\,.$$
As $x$ and $y$ were arbitrary members of the finite spectral interval
subspaces, the last estimate extends to all $x$ and $y$ lying in cores
for the operators
$X^{\tau}$ and $Y^{\tau}$, from which
the claimed statement follows. $\Box$
\\[24pt]
{\bf Appendix B}
\\[18pt]
For the convenience of the reader we collect here two well-known
results about Sobolev norms on manifolds which are used in
the proof of Proposition 3.5. The notation is as follows.
$\Sigma$ and $\Sigma'$ will denote smooth, finite dimensional manifolds
(connected, paracompact, Hausdorff); $\gamma$ and $\gamma'$ are
complete Riemannian metrics on $\Sigma$ and $\Sigma'$, respectively.
Their induced volume measures are denoted by $d\eta$ and $d\eta'$.
We abbreviate by $A_{\gamma}$ the selfadjoint extension
in $L^2(\Sigma,d\eta)$
of the operator $-\Delta_{\gamma} +1$ on $C_0^{\infty}(\Sigma)$,
where $\Delta_{\gamma}$ is the Laplace-Beltrami operator on $(\Sigma,\gamma)$;
note that [10] contains a proof that $(-\Delta_{\gamma} + 1)^k$
is essentially selfadjoint on $C_0^{\infty}(\Sigma)$ for all $k \in {\bf N}$.
$A'$ will be defined similarly with respect to the corresponding
objects of $(\Sigma',\gamma')$. As in the main text, the
$m$-th Sobolev scalar product is $\langle u,v \rangle_{\gamma,m}
= \langle u,A_{\gamma}^{m}v \rangle$ for $u,v \in C_0^{\infty}(\Sigma)$ and
$m \in {\bf R}$, where $\langle\,.\,,\,.\, \rangle$ is the scalar product
of $L^2(\Sigma,d\eta)$. Analogously we define $\langle\,.\,,\,.\,\rangle_{
\gamma',m}$. For the corresponding norms we write $||\,.\,||_{\gamma,m}$,
resp., $||\,.\,||_{\gamma',m}$.
\\[10pt]
{\bf Lemma B.1}
{\it (a) Let $\chi \in C_0^{\infty}(\Sigma)$. Then there is for each
$m \in {\bf R}$ a constant $c_m$ so that
$$ ||\,\chi u \,||_{\gamma,m} \leq c_m ||\, u \,||_{\gamma,m}\,,
\quad u \in C_0^{\infty}(\Sigma) \,.$$
\\[6pt]
(b) Let $\phi \in C^{\infty}(\Sigma)$ be strictly positive and
$G \subset \Sigma$ open and relatively compact. Then there are
for each $m \in {\bf R}$ two positive constants $\beta_1,\beta_2$ so that
$$ \beta_1||\,\phi u\,||_{\gamma,m} \leq ||\,u\,||_{\gamma,m}
\leq \beta_2||\,\phi u\,||_{\gamma,m}\,, \quad u \in C_0^{\infty}(G)\,.$$ }
{\it Proof.} (a) We may suppose that $\chi$ is real-valued (otherwise
we treat real and imaginary parts separately). A tedious but straightforward
calculation shows that the claimed estimate is fulfilled for all $m =2k$,
$k \in {\bf N}_0$. Hence $A^k \chi A^{-k}$ extends to a bounded operator
on $L^2(\Sigma,d\eta)$, and the same is true of the adjoint
$A^{-k}\chi A^k$.
Thus by the interpolation argument, cf.\ Lemma A.1,
$A^{\tau k} \chi A^{-\tau k}$ is bounded for all $-1 \leq \tau \leq 1$.
This yields the stated estimate.
\\[6pt]
(b) This is a simple corollary of (a). For the first estimate, note
that we may replace $\phi$ by a smooth function with compact support.
Then note that the second estimate is equivalent to
$||\,\phi^{-1}v\,||_{\gamma,m} \leq \beta_2||\,v\,||_{\gamma,m}$,
$v \in C_0^{\infty}(G)$, and again we use that instead of $\phi^{-1}$
we may take a smooth function of compact support. $\Box$
\\[10pt]
{\bf Lemma B.2}
{\it
Let $(\Sigma,\gamma)$ and $(\Sigma',\gamma')$ be two complete
Riemannian manifolds, $N$ and $N'$ two open subsets of $\Sigma$ and
$\Sigma'$, respectively, and $\Psi : N \to N'$ a diffeomorphism.
Given $m \in {\bf R}$ and some open, relatively compact subset $G$ of
$\Sigma$ with $\overline{G} \subset N$, there are two positive
constants $b_1,b_2$ such that
$$ b_1||\,u\,||_{\gamma,m} \leq ||\,\Psi^*u\,||_{\gamma',m}
\leq b_2||\,u\,||_{\gamma,m} \,, \quad u \in C_0^{\infty}(G)\,,$$
where $\Psi^*u := u {\mbox{\footnotesize $\circ$}} \Psi^{-1}$. }
\\[10pt]
{\it Proof.} Again it is elementary to check that such a result
is true for $m = 2k$ with $k \in {\bf N}_0$. One infers that, choosing
$\chi \in C_0^{\infty}(N)$ with $\chi|G \equiv 1$ and setting
$\chi' := \Psi^*\chi$, there is for each $k \in {\bf N}_0$ a
positive constant $b$ fulfilling
$$ ||\,A^k\chi\Psi_*\chi'v\,||_{\gamma,0} \leq
b\,||\,(A')^kv\,||_{\gamma',0}\,, \quad v \in C_0^{\infty}(\Sigma')\,;$$
here $\Psi_*v := v {\mbox{\footnotesize $\circ$}} \Psi$. Therefore,
$$ A^k{\mbox{\footnotesize $\circ$}} \chi{\mbox{\footnotesize $\circ$}} \Psi_*{\mbox{\footnotesize $\circ$}} \chi'{\mbox{\footnotesize $\circ$}} (A')^{-k} $$
extends to a bounded operator $L^2(\Sigma',d\eta') \to L^2(\Sigma,d\eta)$
for each $k \in {\bf N}_0$. Interchanging the roles of $A$ and $A'$, one
obtains that also
$$ (A')^k{\mbox{\footnotesize $\circ$}} \chi'{\mbox{\footnotesize $\circ$}} \Psi^*{\mbox{\footnotesize $\circ$}} \chi{\mbox{\footnotesize $\circ$}} A^{-k} $$
extends, for each $k \in {\bf N}_0$, to a bounded operator
$L^2(\Sigma,d\eta) \to L^2(\Sigma',d\eta')$. The boundedness transfers
to the adjoints of these two operators. Observe then that for
$(\Psi_*)^{\dagger}$, the adjoint of $\Psi_*$, we have
$(\Psi_*)^{\dagger} = \rho^2{\mbox{\footnotesize $\circ$}} (\Psi^*)$ on $C_0^{\infty}(N)$, and
similarly, for the adjoint $(\Psi^*)^{\dagger}$ of $\Psi^*$ we have
$(\Psi^*)^{\dagger} = \Psi_* {\mbox{\footnotesize $\circ$}} \rho^{-2}$ on $C_0^{\infty}(N')$,
where $\rho^2 = \Psi^*d\eta/d\eta'$ is a smooth density function
on $N'$, cf.\ eqn.\ (3.14).
It can now easily be worked out that the interpolation argument
of Lemma A.1 yields again the claimed result.
\begin{flushright}
$\Box$
\end{flushright}
\section{Introduction}
\label{sec:intro}
An interacting electron gas in one dimension has many unusual
properties, such as the spin-charge separation, the power-law behavior of
correlation functions, and the linear dependence of the electron
relaxation rate on temperature and frequency (see Ref.\
\cite{Firsov85} for a review). These one-dimensional (1D) results are
well established, in many cases exactly, by applying a variety of
mathematical methods including the Bethe Ansatz, the bosonization, and
the parquet, or the renormalization group. To distinguish the exotic
behavior of the 1D electron gas from a conventional Fermi-liquid
behavior, Haldane introduced a concept of the so-called Luttinger
liquid \cite{Haldane81}.
The discovery of high-$T_c$ superconductivity renewed interest in
the Luttinger-liquid concept. Anderson suggested that a
two-dimensional (2D) electron gas behaves like the 1D Luttinger
liquid, rather than a conventional Fermi liquid \cite{Anderson92}. It
is difficult to verify this claim rigorously, because the methods that
prove the existence of the Luttinger liquid in 1D cannot be applied
directly to higher dimensions. The Bethe Ansatz construction does not
work in higher dimensions. The bosonization in higher dimensions
\cite{Haldane92,Khveshchenko93a,Khveshchenko94b,Marston93,Marston,Fradkin,LiYM95,Kopietz95}
converts a system of interacting electrons into a set of harmonic
oscillators representing the electron density modes. This procedure
replaces the exact $W_\infty$ commutation relations
\cite{Khveshchenko94b} with approximate boson commutators, which is a
questionable, uncontrolled approximation. On the other hand, the
parquet method, although not being as exact as the two other methods,
has the advantage of being formulated as a certain selection rule
within a standard many-body diagram technique; thus, it can be applied
to higher dimensions rather straightforwardly. The parquet method has
much in common with the renormalization-group treatment of Fermi
liquids \cite{Shankar94}.
The 1D electron gas has two types of potential instabilities: the
superconducting and the density-wave, which manifest themselves
through logarithmic divergences of the corresponding one-loop
susceptibilities with decreasing temperature. Within the parquet
approach, a sum of an infinite series of diagrams, obtained by adding
and inserting the two basic one-loop diagrams into each other, is
calculated by solving a system of nonlinear differential equations,
which are nothing but the renormalization-group equations
\cite{Solyom79}. This procedure was developed for the first time for
meson scattering \cite{Diatlov57} and later was successfully applied
to the 1D electron gas \cite{Bychkov66,Dzyaloshinskii72a}, as well as
to the Kondo problem \cite{Abrikosov} and the X-ray absorption edge
problem \cite{Nozieres69a}. By considering both the superconducting
and the density-wave instabilities on equal footing and adequately
treating their competition, the parquet approximation differs from a
conventional ladder (or mean-field) approximation, commonly applied in
higher dimensions, where only one instability is taken into account.
Under certain conditions in the 1D case, the superconducting and
density-wave instabilities may cancel each other, giving rise to a
non-trivial metallic ground state at zero temperature, namely the
Luttinger liquid. In this case, the parquet derivation shows that the
electron correlation functions have a power-law structure, which is
one of the characteristic properties of the Luttinger liquid
\cite{Dzyaloshinskii72a,Larkin73}. One may conclude that the
competition between the superconducting and density-wave instabilities
is an important ingredient of the Luttinger liquid theory.
In a generic higher-dimensional case, where the density-wave
instability does not exist or does not couple to the superconducting
instability because of corrugation of the Fermi surface, the parquet
approach is not relevant. Nevertheless, there are a number of
higher-dimensional models where the parquet is applicable and produces
nontrivial results. These include the models of multiple chains
without single-electron hopping \cite{Gorkov74} and with
single-electron hopping but in a magnetic field \cite{Yakovenko87}, as
well as the model of an isotropic electron gas in a strong magnetic
field \cite{Brazovskii71,Yakovenko93a}. In all of these models, the
electron dispersion law is 1D, which makes it possible to apply the parquet
method; at the same time, the interaction between electrons is
higher-dimensional, which makes a nontrivial difference from the
purely 1D case. The particular version of the parquet method used in
these cases is sometimes called the ``fast'' parquet, because, in
addition to a ``slow'', renormalization-group variable, the parquet
equations acquire supplementary, ``fast'' variables, which label
multiple electron states of the same energy.
Taking into account these considerations, it seems natural to start
exploring the possibility of Luttinger-liquid behavior in higher
dimensions by considering a model that combines 1D and
higher-dimensional features. This is the model of an electron gas
whose Fermi surface has flat regions on its opposite sides. The
flatness means that within these regions the electron dispersion law
is 1D: The electron energy depends only on the one component of
momentum that is normal to the flat section. On the other hand, the
size of the flat regions is finite, and that property differentiates
the model from a purely 1D model, where the size is infinite, since
nothing depends on the momenta perpendicular to the direction of a 1D
chain. A particular case of the considered model is one where the 2D
Fermi surface has a square shape. This model describes 2D electrons
on a square lattice with nearest-neighbor hopping at half
filling. It is the simplest model of the high-$T_c$ superconductors.
The model has already attracted the attention of
theorists. Virosztek and Ruvalds studied the ``nested Fermi liquid''
problem within a ladder or mean-field approximation
\cite{Ruvalds90,Ruvalds95}. Taking into account the 1D experience,
this approach may be considered questionable, because it does not
treat properly the competition between the superconducting and the
density-wave channels. Houghton and Marston \cite{Marston93} mapped
the flat parts of the Fermi surface onto discrete points. Such an
oversimplification makes all scattering processes within the flat
portion equivalent and artificially enhances the electron interaction.
Mattis \cite{Mattis87} and Hlubina \cite{Hlubina94} used the
bosonization to treat the interaction between the electron density
modes and claimed to solve the model exactly. However, mapping of the
flat Fermi surface onto quantum chains and subsequent bosonization by
Luther \cite{Luther94} indicated that the treatment of Mattis and
Hlubina is insufficient, because the operators of backward and umklapp
scattering on different quantum chains require a consistent
renormalization-group treatment. Luther did not give a solution to these
problems, and he also missed the interaction between electrons
located on four different quantum chains.
In the present paper, we solve the model consistently, using the
fast parquet approach, where all possible instabilities occurring in
the electron system with the flat regions on the Fermi surface are
treated simultaneously. This approach was applied to the problem
earlier \cite{Dzyaloshinskii72b} in order to explain the
antiferromagnetism of chromium. In the present paper, we advance the
study further by including the order parameters of odd symmetry,
which were missed in \cite{Dzyaloshinskii72b}, by performing detailed numerical
calculations, and by investigating the effect of the curvature of the Fermi
surface. To simplify numerical calculations and to relate to the
high-$T_c$ superconductors, we consider the 2D case, although the
method can be straightforwardly generalized to higher dimensions as
well.
We find that the presence of the boundaries of the flat portions of
the Fermi surface has a dramatic effect on the solutions of the
parquet equations. Even if the initial vertex of interaction between
electrons does not depend on the momenta along the Fermi surface
(which are the ``fast'' variables), the vertex acquires a strong
dependence on these variables upon renormalization, which greatly
reduces the feedback coupling between the superconducting and
density-wave channels relative to the 1D case. Instead of the two
channels canceling each other, the leading channel, which is the
spin-density-wave (SDW) in the case of the repulsive Hubbard
interaction, develops its own phase transition, inducing on the way a
considerable growth of the superconducting $d$-wave susceptibility.
At the same time, the feedback from the superconducting to the SDW
channel, very essential in the 1D case, is found to be negligible in the 2D
case. These results are in qualitative agreement with the picture of
the antiferromagnetically-induced $d$-wave superconductivity, which
was developed within a ladder approximation for the flat Fermi surface
in \cite{Ruvalds95} and for a generic nested Hubbard model in
\cite{Scalapino}. Recent experiments strongly suggest that the
high-$T_c$ superconductivity is indeed of the $d$-wave type
\cite{d-wave}. On the other hand, our results disagree with Refs.\
\cite{Mattis87,Hlubina94}. The origin of the discrepancy is that the
bosonization arbitrarily replaces the exact $W_\infty$ commutation
relations \cite{Khveshchenko94b} by approximate boson commutators;
thus, the renormalization of the electron-electron interaction, which
is an important part of the problem, is neglected.
In addition to having the flat sides, the square Fermi surface also
has sharp corners, where the saddle points of the electron dispersion
law, which produce the van Hove singularity in the density of states,
are located. The presence of the van Hove singularity at the Fermi
level enhances the divergence of the superconducting and density-wave
loops to the square of the temperature logarithm. The fast parquet
problem was formulated in this case in Ref.\ \cite{Dzyaloshinskii87a},
where the contribution from the flat sides, being less divergent than
the contribution from the saddle points, was neglected. The present
paper completes the study by considering a Fermi surface with the flat
sides and rounded corners, that is, without saddle points at the Fermi
level. Our physical conclusions for both models are in qualitative
agreement.
As photoemission experiments \cite{ZXShen93} demonstrate (see also
\cite{Ruvalds95}), many of the high-$T_c$ superconductors indeed have
flat regions on their Fermi surfaces. Hence, some of the results of
this paper may be applicable to these materials. However, the primary
goal of our study is to elucidate general theoretical concepts rather
than to achieve a detailed description of real materials.
In order to distinguish the new features brought into the problem
by introducing higher dimensions, we present the material in an inductive
manner. In Sec.\ \ref{sec:spinless}, we recall the derivation of the
parquet equations in the simplest case of 1D spinless electrons. In
Sec.\ \ref{sec:spin1D}, we generalize the procedure to the case of 1D
electrons with spin \cite{Bychkov66,Dzyaloshinskii72a}. Then, we
derive the parquet equations in the 2D case in Sec.\ \ref{sec:2D} and
solve them numerically in Sec.\ \ref{sec:numerical}. The paper ends
with conclusions in Sec.\ \ref{sec:conclusion}.
\section{Parquet Equations for One-Dimensional Spinless Fermions}
\label{sec:spinless}
Let us consider a 1D electron gas with a Fermi energy $\mu$ and a
generic dispersion law $\varepsilon(k_x)$, where $\varepsilon$ is the
energy and $k_x$ is the momentum of the electrons. As shown in Fig.\
\ref{fig:1D}, the Fermi surface of this system consists of two points
located at $k_x=\pm k_F$, where $k_F$ is the Fermi momentum. Assuming
that the two points are well separated, let us treat the electrons
whose momenta are close to $\pm k_F$ as two independent species and
label them with the index $\pm$. In the vicinity of the Fermi energy,
the dispersion laws of these electrons can be linearized:
\begin{equation}
\varepsilon_{\pm}(k_x) = \pm v_F k_x ,
\label{eps}
\end{equation}
where the momenta $k_x$ are counted from the respective Fermi points
$\pm k_F$ for the two species of the electrons, $\pm v_F$ are the
corresponding Fermi velocities, and the energy $\varepsilon$ is
counted from the chemical potential $\mu$.
First, let us consider the simplest case of electrons without spin.
The bare Hamiltonian of the interaction between the $\pm$ electrons,
$\hat{H}_{\rm int}$, can be written as
\begin{equation}
\hat{H}_{\rm int}= g \int\frac{dk_x^{(1)}}{2\pi}
\frac{dk_x^{(2)}}{2\pi}\frac{dk_x^{(3)}}{2\pi}
\hat{\psi}^+_+(k_x^{(1)}+k_x^{(2)}-k_x^{(3)})
\hat{\psi}^+_-(k_x^{(3)}) \hat{\psi}_-(k_x^{(2)})
\hat{\psi}_+(k_x^{(1)}),
\label{Interaction:spinless}
\end{equation}
where $g$ is the bare vertex of interaction, and the operators
$\hat{\psi}^+_\pm$ and $\hat{\psi}_\pm$ create and destroy the $\pm$
electrons.
The tendencies toward the superconducting or density-wave ($2k_F$)
instabilities in the system are reflected by the logarithmic
divergences of the two one-loop diagrams shown in Fig.\
\ref{fig:loops}, where the solid and dashed lines represent the Green
functions $G_+$ and $G_-$ of the $+$ and $-$ electrons, respectively.
The two diagrams in Fig.\ \ref{fig:loops} differ in the mutual
orientation of the arrows in the loops. In the Matsubara technique,
the integration of the Green functions over the internal momentum
$k_x$ and energy $\omega_n$ produces the following expressions for the
two diagrams:
\begin{eqnarray}
&&\pm T\sum_n\int\frac{dk_x}{2\pi}
G_-(\mp\omega_n,\mp k_x)G_+(\omega_n+\Omega_m,k_x+q_x)
\nonumber \\
&&=-T\sum_n\int\frac{dk_x}{2\pi}\frac{1}
{(i\omega_n+v_Fk_x)(i\omega_n+i\Omega_m -v_F(k_x+q_x))}
\nonumber \\
&&\approx\frac{1}{2\pi v_F}\ln\left(\frac{\mu}
{ \max\{T,|v_Fq_x|,|\Omega_m|\} } \right) \equiv \xi,
\label{1loop}
\end{eqnarray}
where the upper sign corresponds to the superconducting and the lower
to the density-wave susceptibility. In Eq.\ (\ref{1loop}), $T$ is the
temperature, $\Omega_m$ is the external energy passing through the
loop, and $q_x$ is the external momentum for the superconducting loop
and the deviation from $2k_F$ for the density-wave loop. With
logarithmic accuracy, the value of the integral (\ref{1loop}) is
determined by the upper and lower cutoffs of the logarithmic
divergence. In Eq.\ (\ref{1loop}), the upper and lower cutoffs are
written approximately, up to numerical coefficients of the order of
unity, whose logarithms are small compared to $\xi\gg1$. The variable
$\xi$, introduced by Eq.\ (\ref{1loop}), plays a very important role
in the paper. Since $\xi$ is the logarithm of the infrared cutoff,
the increase of $\xi$ represents renormalization toward low
temperature and energy.
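As an illustrative numerical check, not part of the original derivation, one can verify the stated logarithmic accuracy of Eq.\ (\ref{1loop}) in the simplest limit $q_x=\Omega_m=0$, where the Matsubara summation reduces the loop to the familiar form $(1/2\pi v_F)\int_0^{\mu}d\varepsilon\,\tanh(\varepsilon/2T)/\varepsilon$. The short Python sketch below (the parameter values are arbitrary choices made only for this illustration) compares this integral with $(1/2\pi v_F)\ln(\mu/T)$ and shows that the two agree up to a constant of order unity inside the logarithm, as claimed above.
\begin{verbatim}
import numpy as np

# One-loop check at q_x = Omega_m = 0:
#   xi = (1/2 pi v_F) * Integral_0^mu tanh(eps/2T)/eps d(eps)
#      ~ (1/2 pi v_F) * ln(mu/T)   up to a constant of order unity.
v_F, mu = 1.0, 1.0                       # illustrative units

for T in (1e-2, 1e-3, 1e-4):
    eps = np.logspace(-8, 0, 4001) * mu  # logarithmic grid resolves the scale T
    f = np.tanh(eps / (2.0 * T)) / eps
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(eps))  # trapezoid rule
    xi_loop = integral / (2.0 * np.pi * v_F)
    xi_log = np.log(mu / T) / (2.0 * np.pi * v_F)
    print(f"T = {T:.0e}: loop = {xi_loop:.4f}, (1/2 pi v_F) ln(mu/T) = {xi_log:.4f}")
\end{verbatim}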
The two primary diagrams of Fig.\ \ref{fig:loops} generate
higher-order corrections to the vertex of interaction between
electrons, $\gamma$, as illustrated in Fig.\ \ref{fig:sample}. In
this Figure, the dots represent the bare interaction vertex $g$,
whereas the renormalized vertex $\gamma$ is shown as a circle. The
one-loop diagrams in Fig.\ \ref{fig:sample} are the same as in Fig.\
\ref{fig:loops}. The first two two-loop diagrams in Fig.\
\ref{fig:sample} are obtained by repeating the same loop twice in a
ladder manner. The last two diagrams are obtained by inserting one
loop into the other and represent coupling between the two channels.
The diagrams obtained by repeatedly adding and inserting the two basic
diagrams of Fig.\ \ref{fig:loops} in all possible ways are called the
parquet diagrams. The ladder diagrams, where only the addition, but
not the insertion of the loops is allowed, represent a subset of the
more general set of the parquet diagrams. Selection of the parquet
diagrams is justified, because, as one can check by calculating the
diagrams in Fig.\ \ref{fig:sample}, they form a series with the
expansion parameter $g\xi$: $\gamma=g\sum_{n=0}^\infty a_n(g\xi)^n$.
If the bare interaction vertex $g$ is small and the temperature is
sufficiently low, so that $\xi(T)$ is large, one can argue
\cite{Diatlov57,Bychkov66,Dzyaloshinskii72a} that nonparquet diagrams
may be neglected, because their expansion parameter $g$ is small
compared to the parquet expansion parameter $g\xi$.
Every diagram in Fig.\ \ref{fig:sample}, except the bare vertex $g$,
can be divided into two disconnected pieces by cutting one solid and one
dashed line, the arrows of the cut lines being either parallel or
antiparallel. The sum of those diagrams where the arrows of the cut
lines are parallel (antiparallel) is called the superconducting
(density-wave) ``brick''. Thus, the vertex $\gamma$ can be decomposed
into the bare vertex $g$, the superconducting brick $C$, and the
density-wave brick $Z$:
\begin{equation}
\gamma=g+C+Z.
\label{vertex:spinless}
\end{equation}
Eq.\ (\ref{vertex:spinless}) is illustrated in Fig.\
\ref{fig:SpinlessVertex}, where the bricks are represented as
rectangles whose long sides, one being a solid and the other a dashed
line, represent the lines to be cut.
In a general case, the vertices and the bricks depend on the
energies and momenta
($\omega_1,\:\omega_2,\:\omega_3,\:v_Fk_x^{(1)},\:v_Fk_x^{(2)}$, and
$v_Fk_x^{(3)}$) of all incoming and outgoing electrons. Equations for
the bricks can be found in closed form in the case where all their
arguments are approximately equal within the logarithmic accuracy,
that is, the ratios of the arguments and of their linear combinations
are of the order of unity
\cite{Diatlov57,Bychkov66,Dzyaloshinskii72a}. Practically, this means
that all vertices and bricks are considered to be functions of the
single renormalization-group variable $\xi$, defined in Eq.\
(\ref{1loop}). It was proved in \cite{Diatlov57} that the two pieces
obtained by cutting a brick are the full vertices of interaction, as
illustrated graphically in Fig.\ \ref{fig:SpinlessBricks}.
Analytically, the equations for the bricks are
\begin{mathletters}%
\label{integral}
\begin{eqnarray}
C(\xi)&=&-\int_0^\xi d\zeta\,\gamma(\zeta)\gamma(\zeta),
\label{C}\\
Z(\xi)&=&\int_0^\xi d\zeta\,\gamma(\zeta)\gamma(\zeta).
\label{Z}
\end{eqnarray}
\end{mathletters}%
The two vertices $\gamma$ in the r.h.s.\ of Eqs.\ (\ref{integral})
represent the two pieces obtained from a brick by cutting, whereas the
integrals over $\zeta$ represent the two connecting Green functions
being integrated over the internal momentum and energy of the loop.
The value of the renormalized vertex $\gamma(\zeta)$ changes as the
integration over $\zeta$ progresses in Eqs.\ (\ref{integral}). In
agreement with the standard rules of the diagram technique \cite{AGD},
a pair of the parallel (antiparallel) lines in Fig.\
\ref{fig:SpinlessBricks} produces a negative (positive) sign in the
r.h.s.\ of Eq.\ (\ref{C}) [(\ref{Z})].
Eqs.\ (\ref{integral}) can be rewritten in differential,
renormalization-group form:
\begin{mathletters}%
\label{differential}
\begin{eqnarray}
&& \frac{dC(\xi)}{d\xi}=-\gamma(\xi)\gamma(\xi),
\quad\quad C(\xi\!\!=\!\!0)=0; \\
&& \frac{dZ(\xi)}{d\xi}=\gamma(\xi)\gamma(\xi),
\quad\quad Z(\xi\!\!=\!\!0)=0.
\end{eqnarray}
\end{mathletters}%
Combining Eqs.\ (\ref{differential}) with Eq.\
(\ref{vertex:spinless}), we find the renormalization equation for the
full vertex $\gamma$:
\begin{mathletters}%
\label{RG:spinless}
\begin{eqnarray}
&& \frac{d\gamma(\xi)}{d\xi}=\gamma(\xi)\gamma(\xi)
-\gamma(\xi)\gamma(\xi)=0,
\label{cancellation} \\
&& \gamma(\xi\!\!=\!\!0)=g.
\end{eqnarray}
\end{mathletters}%
We see that the two terms in the r.h.s.\ of Eq.\ (\ref{cancellation}),
representing the tendencies toward density-wave and superconducting
instabilities, exactly cancel each other. In a ladder approximation,
where only one term is kept in the r.h.s., the result would be quite
different, because $\gamma(\xi)$ would diverge at a finite $\xi$
indicating an instability or generation of a pseudogap in the system.
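As a simple numerical aside (not part of the original derivation), the contrast between Eq.\ (\ref{cancellation}) and a ladder truncation can be made explicit by integrating both flows directly; the Python sketch below uses a purely illustrative bare coupling $g=0.5$ and a fourth-order Runge--Kutta step of our own choosing.
\begin{verbatim}
# Illustrative sketch (not from the paper): the parquet flow of the
# spinless vertex, d(gamma)/d(xi) = 0, versus a ladder flow in which only
# the density-wave term is kept, d(gamma)/d(xi) = gamma**2.  The coupling
# g = 0.5 and the step size are hypothetical choices.

def rk4_step(f, y, x, h):
    # one fourth-order Runge-Kutta step for dy/dx = f(x, y)
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

g, h = 0.5, 1e-3
gamma_parquet, gamma_ladder, xi = g, g, 0.0
while xi < 3.0 and abs(gamma_ladder) < 1e6:
    gamma_parquet = rk4_step(lambda x, y: 0.0, gamma_parquet, xi, h)
    gamma_ladder = rk4_step(lambda x, y: y ** 2, gamma_ladder, xi, h)
    xi += h

print("parquet vertex (stays at g):", gamma_parquet)
print("ladder vertex diverges near xi = 1/g, at xi =", xi)
\end{verbatim}
The ladder vertex blows up near $\xi_c=1/g$, of the pseudogap type mentioned above, whereas the parquet vertex remains at its bare value.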
In order to study possible instabilities in the system, we need to
calculate corresponding generalized susceptibilities. For that
purpose, let us add to the Hamiltonian of the system two fictitious
infinitesimal external fields $h_{\rm SC}$ and $h_{\rm DW}$ that
create the electron-electron and electron-hole pairs:
\begin{eqnarray}
\hat{H}_{\rm ext}=\int\frac{dq_x}{2\pi}\frac{dk_x}{2\pi}&&\left[
h_{\rm SC}(q_x)\,\hat{\psi}^+_-\left(\frac{q_x}{2}-k_x\right)
\hat{\psi}^+_+\left(\frac{q_x}{2}+k_x\right) \right.
\nonumber \\
&&\left.{}+h_{\rm DW}(q_x)\,\hat{\psi}^+_-\left(k_x+\frac{q_x}{2}\right)
\hat{\psi}_+\left(k_x-\frac{q_x}{2}\right) + {\rm H.c.} \right].
\label{Hext}
\end{eqnarray}
Now we need to introduce triangular vertices ${\cal T}_{\rm SC}$
and ${\cal T}_{\rm DW}$ that represent the response of the system to
the fields $h_{\rm SC}$ and $h_{\rm DW}$. Following the same
procedure as in the derivation of the parquet equations for the bricks
\cite{Bychkov66,Dzyaloshinskii72a,Brazovskii71,Dzyaloshinskii72b}, we
find the parquet equations for the triangular vertices in graphic
form, as shown in Fig.\ \ref{fig:SpinlessTriangle}. In that Figure,
the filled triangles represent the vertices ${\cal T}_{\rm SC}$ and
${\cal T}_{\rm DW}$, whereas the dots represent the fields $h_{\rm
SC}$ and $h_{\rm DW}$. The circles, as in the other Figures,
represent the interaction vertex $\gamma$. Analytically, these
equations can be written as differential equations with given initial
conditions:
\begin{mathletters}%
\label{triangular}
\begin{eqnarray}
\frac{d{\cal T}_{\rm SC}(\xi)}{d\xi}=-\gamma(\xi)
{\cal T}_{\rm SC}(\xi),
&\quad\quad\quad& {\cal T}_{\rm SC}(0)=h_{\rm SC}; \\
\frac{d{\cal T}_{\rm DW}(\xi)}{d\xi}=\gamma(\xi)
{\cal T}_{\rm DW}(\xi),
&\quad\quad\quad& {\cal T}_{\rm DW}(0)=h_{\rm DW}.
\end{eqnarray}
\end{mathletters}%
We will often refer to the triangular vertices ${\cal T}$ as the
``order parameters''. Indeed, they are the superconducting and
density-wave order parameters induced in the system by the external
fields $h_{\rm SC}$ and $h_{\rm DW}$. If, for a finite $h_i$ ($i$=SC,
DW), a vertex ${\cal T}_i(\xi)$, which is proportional to $h_i$,
diverges when $\xi\rightarrow\xi_c$, this indicates that a {\em
spontaneous} order parameter appears in the system, that is, the order
parameter may have a finite value even when the external field $h_i$
is zero. The external fields are introduced here only as auxiliary
tools and are equal to zero in real systems. We also note that the
two terms in the r.h.s.\ of Eq.\ (\ref{Hext}) are not Hermitian by
themselves; thus, the fields $h_i$ are complex fields.
Consequently, the order parameters ${\cal T}_i(\xi)$ are also complex,
so, generally speaking, ${\cal T}$ and ${\cal T}^*$ do not coincide.
According to Eqs.\ (\ref{RG:spinless}), $\gamma(\xi)=g$, so Eqs.\
(\ref{triangular}) have the following solution:
\begin{mathletters}%
\label{triangular:solutions}
\begin{eqnarray}
{\cal T}_{\rm SC}(\xi)&=&h_{\rm SC}\exp(-g\xi), \\
{\cal T}_{\rm DW}(\xi)&=&h_{\rm DW}\exp(g\xi).
\end{eqnarray}
\end{mathletters}%
Now we can calculate the susceptibilities. The lowest order
corrections to the free energy of the system due to the introduction
of the fields $h_{\rm SC}$ and $h_{\rm DW}$, $F_{\rm SC}$ and $F_{\rm
DW}$, obey the parquet equations shown graphically in Fig.\
\ref{fig:SpinlessSusceptibility} and analytically below:
\begin{mathletters}%
\label{FreeEnergy}
\begin{eqnarray}
F_{\rm SC}(\xi)&=&\int_0^\xi d\zeta\;{\cal T}_{\rm SC}(\zeta)
{\cal T}_{\rm SC}^*(\zeta), \\
F_{\rm DW}(\xi)&=&\int_0^\xi d\zeta\;{\cal T}_{\rm DW}(\zeta)
{\cal T}_{\rm DW}^*(\zeta).
\end{eqnarray}
\end{mathletters}%
Substituting expressions (\ref{triangular:solutions}) into Eqs.\
(\ref{FreeEnergy}) and dropping the squares of $h_{\rm SC}$ and
$h_{\rm DW}$, we find the susceptibilities:
\begin{mathletters}%
\label{susceptibilities}
\begin{eqnarray}
\chi_{\rm SC}(\xi)&=&-\bigm[\exp(-2g\xi)-1\bigm]/2g, \\
\chi_{\rm DW}(\xi)&=&\bigm[\exp(2g\xi)-1\bigm]/2g.
\end{eqnarray}
\end{mathletters}%
According to Eqs.\ (\ref{susceptibilities}), when the interaction
between electrons is repulsive (attractive), that is, $g$ is positive
(negative), the density-wave (superconducting) susceptibility increases
as temperature decreases ($T\rightarrow0$ and $\xi\rightarrow\infty$):
\begin{equation}
\chi_{\rm DW(SC)}(\xi)\propto\exp(\pm 2g\xi)
=\left(\frac{\mu}{ \max\{T,|v_Fq_x|,|\Omega_m|\} } \right)^{\pm2g}.
\label{PowerLaw}
\end{equation}
The susceptibilities (\ref{PowerLaw}) have a power-law dependence on
temperature and energy, which is one of the characteristic properties
of the Luttinger liquid. The susceptibilities are finite at finite
temperatures and diverge only at zero temperature, in agreement with
the general theorem \cite{Landau-V} that phase transitions are
impossible at finite temperatures in 1D systems. Mathematically, the
absence of divergence at finite $\xi$ is due to the cancellation of
the two terms in the r.h.s.\ of Eq.\ (\ref{cancellation}) and
subsequent nonrenormalization of $\gamma(\xi)$. This nontrivial 1D
result can be obtained only within the parquet, but not the ladder
approximation.
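For readers who prefer a numerical check, the power laws (\ref{PowerLaw}) can also be reproduced by integrating Eqs.\ (\ref{triangular}) and (\ref{FreeEnergy}) directly with $\gamma(\xi)=g$ and $h_i=1$; the short Python sketch below is our own, uses an illustrative value $g=0.4$, and employs a simple first-order (Euler) step, which is sufficient here.
\begin{verbatim}
# Minimal numerical check of Eqs. (susceptibilities): integrate the
# triangular vertices with gamma(xi) = g and h_i = 1, accumulate
# chi = int |T|^2 d(zeta), and compare with the analytic formulas.
import numpy as np

g, h, xi_max = 0.4, 1e-4, 2.0            # illustrative parameters
T_sc, T_dw, chi_sc, chi_dw, xi = 1.0, 1.0, 0.0, 0.0, 0.0
while xi < xi_max:
    chi_sc += h * T_sc * T_sc            # Eq. (FreeEnergy), SC channel
    chi_dw += h * T_dw * T_dw            # Eq. (FreeEnergy), DW channel
    T_sc += h * (-g) * T_sc              # dT_SC/dxi = -gamma T_SC
    T_dw += h * (+g) * T_dw              # dT_DW/dxi = +gamma T_DW
    xi += h

print("chi_DW:", chi_dw, "analytic:", (np.exp(2*g*xi_max) - 1) / (2*g))
print("chi_SC:", chi_sc, "analytic:", -(np.exp(-2*g*xi_max) - 1) / (2*g))
\end{verbatim}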
\section{Parquet Equations for One-Dimensional Fermions with Spin}
\label{sec:spin1D}
Now let us consider 1D electrons with spin. In this case, there
are three vertices of interaction, conventionally denoted as
$\gamma_1$, $\gamma_2$, and $\gamma_3$, which represent backward,
forward, and umklapp scattering, respectively
\cite{Bychkov66,Dzyaloshinskii72a}. Umklapp scattering should be
considered only when the change of the total momentum of the electrons
in the interaction process, $4k_F$, is equal to the crystal lattice
wave vector, which may or may not be the case in a particular model.
In this paper, we do not consider the vertex $\gamma_4$, which
describes the interaction between the electrons of the same type ($+$
or $-$), because this vertex does not have logarithmic corrections.
The bare Hamiltonian of the interaction, $\hat{H}_{\rm int}$, can be
written as
\begin{eqnarray}
\hat{H}_{\rm int}&=&\sum_{\sigma,\tau,\rho,\nu=\uparrow\downarrow}
\int\frac{dk_x^{(1)}}{2\pi}
\frac{dk_x^{(2)}}{2\pi}\frac{dk_x^{(3)}}{2\pi}
\nonumber \\
&& \times\biggm\{
(-g_1\delta_{\rho\tau}\delta_{\sigma\nu} +
g_2\delta_{\rho\nu}\delta_{\sigma\tau} )
\hat{\psi}^+_{\nu+}(k_x^{(1)}+k_x^{(2)}-k_x^{(3)})
\hat{\psi}^+_{\tau-}(k_x^{(3)})
\hat{\psi}_{\sigma-}(k_x^{(2)}) \hat{\psi}_{\rho+}(k_x^{(1)})
\nonumber \\
&& +\left[ g_3\delta_{\rho\nu}\delta_{\sigma\tau}
\hat{\psi}^+_{\nu-}(k_x^{(1)}+k_x^{(2)}-k_x^{(3)})
\hat{\psi}^+_{\tau-}(k_x^{(3)})
\hat{\psi}_{\sigma+}(k_x^{(2)})
\hat{\psi}_{\rho+}(k_x^{(1)}) + {\rm H.c.} \right]
\biggm\},
\label{Interaction}
\end{eqnarray}
where the coefficients $g_{1-3}$ denote the bare (unrenormalized)
values of the interaction vertices $\gamma_{1-3}$. The operators
$\hat{\psi}^+_{\sigma s}$ and $\hat{\psi}_{\sigma s}$ create and
destroy electrons of the type $s=\pm$ and the spin
$\sigma={\uparrow\downarrow}$. The spin structure of the interaction
Hamiltonian is dictated by conservation of spin. We picture the
interaction vertices in Fig.\ \ref{fig:interaction}, where the solid
and dashed lines represent the $+$ and $-$ electrons. The thin solid
lines inside the circles indicate how spin is conserved: The spins of
the incoming and outgoing electrons connected by a thin line are the
same. According to the structure of Hamiltonian (\ref{Interaction}),
the umklapp vertex $\gamma_3$ describes the process where two $+$
electrons come in and two $-$ electrons come out, whereas the complex
conjugate vertex $\gamma_3^*$ describes the reversed process.
The three vertices of interaction contain six bricks, as shown
schematically in Fig.\ \ref{fig:vertices}:
\begin{mathletters}%
\label{vertices}
\begin{eqnarray}
\gamma_1 &=& g_1+C_1+Z_1, \\
\gamma_2 &=& g_2+C_2+Z_2, \\
\gamma_3 &=& g_3+Z_I+Z_{II},
\end{eqnarray}
\end{mathletters}%
where $C_1$ and $C_2$ are the superconducting bricks, and $Z_1$,
$Z_2$, $Z_I$, and $Z_{II}$ are the density-wave bricks. In Fig.\
\ref{fig:vertices}, the thin solid lines inside the bricks represent
spin conservation. The umklapp vertex has two density-wave bricks
$Z_I$ and $Z_{II}$, which differ in their spin structure.
Parquet equations for the bricks are derived in the same manner as
in Sec.\ \ref{sec:spinless} by adding appropriate spin structure
dictated by spin conservation. It is convenient to derive the
equations graphically by demanding that the thin spin lines are
continuous, as shown in Fig.\ \ref{fig:bricks}. The corresponding
analytic equations can be written using the following rules. A pair
of parallel (antiparallel) lines connecting two vertices in Fig.\
\ref{fig:bricks} produces the negative (positive) sign. A closed loop
of the two connecting lines produces an additional factor $-2$ due to
summation over the two spin orientations of the electrons.
\begin{mathletters}%
\label{bricks}
\begin{eqnarray}
\frac{dC_1(\xi)}{d\xi} &=& -2\gamma_1(\xi)\:\gamma_2(\xi), \\
\frac{dC_2(\xi)}{d\xi} &=& -\gamma_1^2(\xi)-\gamma_2^2(\xi), \\
\frac{dZ_1(\xi)}{d\xi} &=& 2\gamma_1(\xi)\:\gamma_2(\xi)
-2\gamma_1^2(\xi), \\
\frac{dZ_2(\xi)}{d\xi} &=& \gamma_2^2(\xi)
+\gamma_3(\xi)\gamma_3^*(\xi), \\
\frac{dZ_I(\xi)}{d\xi} &=& 2\gamma_3(\xi)
[\gamma_2(\xi)-\gamma_1(\xi)], \\
\frac{dZ_{II}(\xi)}{d\xi} &=& 2\gamma_3(\xi)\:\gamma_2(\xi).
\end{eqnarray}
\end{mathletters}%
Combining Eqs.\ (\ref{vertices}) and (\ref{bricks}), we obtain the
well-known closed equations for renormalization of the vertices
\cite{Dzyaloshinskii72a}:
\begin{mathletters}%
\label{RG1D}
\begin{eqnarray}
\frac{d\gamma_1(\xi)}{d\xi} &=& -2\gamma^2_1(\xi), \\
\frac{d\gamma_2(\xi)}{d\xi} &=& -\gamma_1^2(\xi)
+\gamma_3(\xi)\gamma_3^*(\xi), \\
\frac{d\gamma_3(\xi)}{d\xi} &=& 2\gamma_3(\xi)
[2\gamma_2(\xi)-\gamma_1(\xi)].
\end{eqnarray}
\end{mathletters}%
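Although Eqs.\ (\ref{RG1D}) were solved analytically in Ref.\ \cite{Dzyaloshinskii72a}, they are also straightforward to integrate numerically; the following sketch is our own, takes placeholder bare couplings, assumes a real $\gamma_3$, and uses the same fourth-order Runge--Kutta scheme employed for the calculations of Sec.\ \ref{sec:numerical}.
\begin{verbatim}
# Sketch of a Runge-Kutta integration of Eqs. (RG1D), assuming a real
# umklapp vertex gamma_3.  The bare couplings are placeholders.
import numpy as np

def flow(gamma):
    g1, g2, g3 = gamma
    return np.array([-2.0 * g1**2,
                     -g1**2 + g3**2,
                     2.0 * g3 * (2.0 * g2 - g1)])

def rk4(y, h):
    k1 = flow(y)
    k2 = flow(y + h * k1 / 2)
    k3 = flow(y + h * k2 / 2)
    k4 = flow(y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

gamma = np.array([1.0, 1.0, 1.0])   # bare (g_1, g_2, g_3), illustrative
h, xi = 1e-3, 0.0
while xi < 2.0 and np.max(np.abs(gamma)) < 1e6:
    gamma = rk4(gamma, h)
    xi += h
print("gamma_1,2,3 at xi = %.3f:" % xi, gamma)
\end{verbatim}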
In the presence of spin, the electron operators in Eq.\
(\ref{Hext}) and, correspondingly, the fields $h_i$ and the triangular
vertices ${\cal T}_i(\xi)$ acquire the spin indices. Thus, the
superconducting triangular vertex ${\cal T}_{\rm SC}(\xi)$ becomes a
vector:
\begin{equation}
{\cal T}_{\rm SC}(\xi) = \left( \begin{array}{c}
{\cal T}_{\rm SC}^{\uparrow \uparrow}(\xi) \\
{\cal T}_{\rm SC}^{\uparrow \downarrow}(\xi) \\
{\cal T}_{\rm SC}^{\downarrow \uparrow}(\xi) \\
{\cal T}_{\rm SC}^{\downarrow \downarrow}(\xi)
\end{array} \right).
\label{TSC}
\end{equation}
Parquet equations for the triangular vertices are given by the
diagrams shown in Fig.\ \ref{fig:SpinlessTriangle}, where the spin
lines should be added in the same manner as in Fig.\ \ref{fig:bricks}.
The superconducting vertex obeys the following equation:
\begin{equation}
\frac{d{\cal T}_{\rm SC}(\xi)}{d\xi} =
\Gamma_{\rm SC}(\xi)\;{\cal T}_{\rm SC}(\xi),
\label{MTRX}
\end{equation}
where the matrix $\Gamma_{\rm SC}(\xi)$ is
\begin{equation}
\Gamma_{\rm SC}(\xi) = \left( \begin{array}{cccc}
-\gamma_2 + \gamma_1 & 0 & 0 &0 \\
0 & -\gamma_2 & \gamma_1 & 0 \\
0 & \gamma_1 & -\gamma_2 & 0 \\
0 & 0 & 0 & -\gamma_2 + \gamma_1 \end{array} \right).
\label{GSC}
\end{equation}
Linear equation (\ref{MTRX}) is diagonalized by introducing the
singlet, ${\cal T}_{\rm SSC}$, and the triplet, ${\cal T}_{\rm TSC}$,
superconducting triangular vertices:
\begin{mathletters}%
\label{SC}
\begin{eqnarray}
{\cal T}_{\rm SSC}(\xi) &=&
{\cal T}_{\rm SC}^{\uparrow \downarrow}(\xi) -
{\cal T}_{\rm SC}^{\downarrow \uparrow}(\xi) ,
\label{SCS} \\
{\cal T}_{\rm TSC}(\xi) &=& \left( \begin{array}{c}
{\cal T}_{\rm SC}^{\uparrow \uparrow}(\xi) \\
{\cal T}_{\rm SC}^{\uparrow \downarrow}(\xi) +
{\cal T}_{\rm SC}^{\downarrow \uparrow}(\xi) \\
{\cal T}_{\rm SC}^{\downarrow \downarrow}(\xi)
\end{array} \right),
\label{SCT}
\end{eqnarray}
\end{mathletters}%
which obey the following equations:
\begin{equation}
\frac{d{\cal T}_{\rm SSC(TSC)}(\xi)}{d\xi} =
[\mp\gamma_1(\xi)-\gamma_2(\xi)]
\;{\cal T}_{\rm SSC(TSC)}(\xi).
\label{SCOP}
\end{equation}
In Eq.\ (\ref{SCOP}) the sign $-$ and the index SSC correspond to the
singlet superconductivity, whereas the sign $+$ and the index TSC
correspond to the triplet one. In the rest of the paper, we use the
index SC where discussion applies to both SSC and TSC.
Now let us consider the density-wave triangular vertices, first in
the absence of umklapp. They form a vector
\begin{equation}
{\cal T}_{\rm DW}(\xi) = \left( \begin{array}{c}
{\cal T}_{\rm DW}^{\uparrow \uparrow}(\xi) \\
{\cal T}_{\rm DW}^{\uparrow \downarrow}(\xi) \\
{\cal T}_{\rm DW}^{\downarrow \uparrow}(\xi) \\
{\cal T}_{\rm DW}^{\downarrow \downarrow}(\xi)
\end{array} \right),
\label{TDW}
\end{equation}
which obeys the equation
\begin{equation}
\frac{d{\cal T}_{\rm DW}(\xi)}{d\xi} =
\Gamma_{\rm DW}(\xi)\;{\cal T}_{\rm DW}(\xi)
\label{DWMTRX}
\end{equation}
with the matrix
\begin{equation}
\Gamma_{\rm DW}(\xi) = \left( \begin{array}{cccc}
-\gamma_1 + \gamma_2 & 0 & 0 &-\gamma_1 \\
0 & \gamma_2 & 0 & 0 \\
0 & 0 & \gamma_2 & 0 \\
-\gamma_1 & 0 & 0 & -\gamma_1 + \gamma_2 \end{array} \right).
\label{GDW}
\end{equation}
Eq.\ (\ref{DWMTRX}) is diagonalized by introducing the charge-,
${\cal T}_{\rm CDW}$, and the spin-, ${\cal T}_{\rm SDW}$, density-wave
triangular vertices:
\begin{mathletters}%
\label{DW}
\begin{eqnarray}
{\cal T}_{\rm CDW}(\xi) &=&
{\cal T}_{\rm DW}^{\uparrow \uparrow}(\xi) +
{\cal T}_{\rm DW}^{\downarrow \downarrow}(\xi) ,
\label{DWS} \\
{\cal T}_{\rm SDW}(\xi) &=& \left( \begin{array}{c}
{\cal T}_{\rm DW}^{\uparrow \downarrow}(\xi) \\
{\cal T}_{\rm DW}^{\downarrow \uparrow}(\xi) \\
{\cal T}_{\rm DW}^{\uparrow \uparrow}(\xi) -
{\cal T}_{\rm DW}^{\downarrow \downarrow}(\xi)
\end{array} \right),
\label{DWT}
\end{eqnarray}
\end{mathletters}%
which obey the following equations:
\begin{mathletters}%
\label{DWOP}
\begin{eqnarray}
\frac{d{\cal T}_{\rm CDW}(\xi)}{d\xi} &=&
[-2\gamma_1(\xi)+\gamma_2(\xi)]\;{\cal T}_{\rm CDW}(\xi), \\
\frac{d{\cal T}_{\rm SDW}(\xi)}{d\xi} &=&
\gamma_2(\xi)\;{\cal T}_{\rm SDW}(\xi).
\end{eqnarray}
\end{mathletters}%
When the umklapp vertices $\gamma_3$ and $\gamma_3^*$ are introduced,
they become off-diagonal matrix elements in Eqs.\ (\ref{DWOP}), mixing
${\cal T}_{\rm CDW}$ and ${\cal T}_{\rm SDW}$ with their complex
conjugates. Assuming for simplicity that $\gamma_3$ is real, we find
that the following linear combinations diagonalize the equations:
\begin{equation}
{\cal T}_{{\rm CDW(SDW)}\pm}={\cal T}_{\rm CDW(SDW)}
\pm {\cal T}^*_{\rm CDW(SDW)},
\label{DW+-}
\end{equation}
and the equations become:
\begin{mathletters}%
\label{DWOP+-}
\begin{eqnarray}
\frac{d{\cal T}_{{\rm CDW}\pm}(\xi)}{d\xi} &=&
[-2\gamma_1(\xi)+\gamma_2(\xi)\mp\gamma_3(\xi)]
\;{\cal T}_{{\rm CDW}\pm}(\xi), \\
\frac{d{\cal T}_{{\rm SDW}\pm}(\xi)}{d\xi} &=&
[\gamma_2(\xi)\pm\gamma_3(\xi)]\;{\cal T}_{{\rm SDW}\pm}(\xi).
\end{eqnarray}
\end{mathletters}%
If the external fields $h_i$ are set to unity in the initial
conditions of the type (\ref{triangular}) for all triangular vertices
$i$ = SSC, TSC, CDW$\pm$, and SDW$\pm$, then the corresponding
susceptibilities are equal numerically to the free energy corrections of
the type (\ref{FreeEnergy}):
\begin{equation}
\chi_i(\xi)= \int_0^\xi d\zeta\;
{\cal T}_i(\zeta){\cal T}_i^*(\zeta).
\label{chii}
\end{equation}
Eqs.\ (\ref{RG1D}), (\ref{SCOP}), (\ref{DWOP+-}), and (\ref{chii})
were solved analytically in Ref.\ \cite{Dzyaloshinskii72a}, where a
complete phase diagram of the 1D electron gas with spin was obtained.
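The same solution can also be organized as a small numerical pipeline, which is how the 2D case is handled below: one integrates Eqs.\ (\ref{RG1D}) together with Eqs.\ (\ref{SCOP}) and (\ref{DWOP+-}) and accumulates Eq.\ (\ref{chii}) along the way. The sketch below is only an illustration, with hypothetical couplings and a plain Euler step, and shows which channel grows fastest.
\begin{verbatim}
# Hedged sketch: integrate the 1D vertices and triangular vertices and
# accumulate the susceptibilities of Eq. (chii).  Couplings are placeholders.
g1, g2, g3 = 1.0, 1.0, 0.0
h, xi, xi_max = 1e-3, 0.0, 1.5
channels = ["SSC", "TSC", "CDW+", "CDW-", "SDW+", "SDW-"]
T = {name: 1.0 for name in channels}      # external fields h_i set to unity
chi = {name: 0.0 for name in channels}
while xi < xi_max and max(abs(g1), abs(g2), abs(g3)) < 1e6:
    f = {"SSC": -g1 - g2,          "TSC":  g1 - g2,
         "CDW+": -2*g1 + g2 - g3,  "CDW-": -2*g1 + g2 + g3,
         "SDW+":  g2 + g3,         "SDW-":  g2 - g3}
    for name in channels:
        chi[name] += h * T[name]**2       # Eq. (chii)
        T[name] += h * f[name] * T[name]
    d1, d2, d3 = -2*g1**2, -g1**2 + g3**2, 2*g3*(2*g2 - g1)
    g1, g2, g3 = g1 + h*d1, g2 + h*d2, g3 + h*d3
    xi += h
print(sorted(chi.items(), key=lambda kv: -kv[1]))
\end{verbatim}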
\section{Parquet Equations for Two-Dimensional Electrons}
\label{sec:2D}
Now let us consider a 2D electron gas with the Fermi surface shown
schematically in Fig.\ \ref{fig:2DFS}. It contains two pairs of flat
regions, shown as the thick lines and labeled by the letters $a$ and
$b$. Such a Fermi surface resembles the Fermi surfaces of some
high-$T_c$ superconductors \cite{ZXShen93}. In our consideration, we
restrict the momenta of electrons to the flat sections only. In this
way, we effectively neglect the rounded portions of the Fermi surface,
which are not relevant for the parquet consideration, because the
density-wave loop is not divergent there. One can check also that the
contributions of the portions $a$ and $b$ do not mix with each other
in the parquet manner, so they may be treated separately. For this
reason, we will consider only the region $a$, where the 2D electron
states are labeled by the two momenta $k_x$ and $k_y$, the latter
momentum being restricted to the interval $[-k_y^{(0)},k_y^{(0)}]$.
In our model, the energy of electrons depends only on the momentum
$k_x$ according to Eq.\ (\ref{eps}) and does not depend on the
momentum $k_y$. We neglect possible dependence of the Fermi velocity
$v_F$ on $k_y$; it was argued in Ref.\ \cite{Luther94} that this
dependence is irrelevant in the renormalization-group sense.
In the 2D case, each brick or vertex of interaction between
electrons acquires extra variables $k_y^{(1)}$, $k_y^{(2)}$, and
$k_y^{(3)}$ in addition to the 1D variables
$\omega_1,\:\omega_2,\:\omega_3,\:v_Fk_x^{(1)},\:v_Fk_x^{(2)}$, and
$v_Fk_x^{(3)}$. These two sets of variables play very different
roles. The Green functions, which connect the vertices and produce
the logarithms $\xi$, depend only on the second set of variables.
Thus, following the parquet approach outlined in the previous
Sections, we lump all the $\omega$ and $v_Fk_x$ variables of a vertex
or a brick into a single variable $\xi$. At the same time, the
$k_y^{(1)}$, $k_y^{(2)}$, and $k_y^{(3)}$ variables remain independent
and play the role of indices labeling the vertices, somewhat similar
to the spin indices. Thus, each vertex and brick is a function of
several variables, which we will always write in the following order:
$\gamma(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)};\:\xi)$. It is
implied that the first four variables satisfy the momentum
conservation law $k_y^{(1)}+k_y^{(2)}=k_y^{(3)}+k_y^{(4)}$, and each
of them belongs to the interval $[-k_y^{(0)},k_y^{(0)}]$. The
assignment of the variables $k_y^{(1)}$, $k_y^{(2)}$, $k_y^{(3)}$, and
$k_y^{(4)}$ to the ends of the vertices and bricks is shown in Fig.\
\ref{fig:vertices}, where the labels $k_j$ ($j=1-4$) should be
considered now as the variables $k_y^{(j)}$. To shorten notation, it
is convenient to combine these variables into a single four-component
vector
\begin{equation}
{\cal K}=(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)}),
\label{K}
\end{equation}
so that the relation between the vertices and the bricks can be
written as
\begin{mathletters}%
\label{2Dgammas}
\begin{eqnarray}
\gamma_1({\cal K},\xi) &=&
g_1 + C_1({\cal K},\xi) + Z_1({\cal K},\xi),\\
\gamma_2({\cal K},\xi) &=&
g_2 + C_2({\cal K},\xi) + Z_2({\cal K},\xi),\\
\gamma_3({\cal K},\xi) &=&
g_3 + Z_I({\cal K},\xi) + Z_{II}({\cal K},\xi).
\end{eqnarray}
\end{mathletters}%
After this introduction, we are in a position to write the parquet
equations for the bricks. These equations are shown graphically in
Fig.\ \ref{fig:bricks}, where again the momenta $k_j$ should be
understood as $k_y^{(j)}$. Analytically, the equations are written
below, with the terms in the same order as in Fig.\ \ref{fig:bricks}:
\begin{mathletters}%
\label{2Dbricks}
\begin{eqnarray}
\frac{\partial C_1({\cal K},\xi)}{\partial \xi} &=&
-\gamma_1({\cal K}_1,\xi)\circ\gamma_2({\cal K}_1^{\prime},\xi) -
\gamma_2({\cal K}_1,\xi)\circ\gamma_1({\cal K}_1^{\prime},\xi),
\label{C1} \\
\frac{\partial C_2({\cal K},\xi)}{\partial \xi} &=&
-\gamma_1({\cal K}_1,\xi)\circ\gamma_1({\cal K}_1^{\prime},\xi)
-\gamma_2({\cal K}_1,\xi)\circ\gamma_2({\cal K}_1^{\prime},\xi),
\label{C2} \\
\frac{\partial Z_1({\cal K},\xi)}{\partial \xi} &=&
\gamma_1({\cal K}_2,\xi)\circ\gamma_2({\cal K}_2^{\prime},\xi) +
\gamma_2({\cal K}_2,\xi)\circ\gamma_1({\cal K}_2^{\prime},\xi)
- 2 \gamma_1({\cal K}_2,\xi)\circ\gamma_1({\cal K}_2^{\prime},\xi)
\nonumber \\
&& - 2 \tilde{\gamma}_3({\cal K}_2,\xi)
\circ\tilde{\bar{\gamma}}_3({\cal K}_2^{\prime},\xi)
+ \tilde{\gamma}_3({\cal K}_2,\xi)
\circ\bar{\gamma}_3({\cal K}_2^{\prime},\xi)
+ \gamma_3({\cal K}_2,\xi)
\circ\tilde{\bar{\gamma}}_3({\cal K}_2^{\prime},\xi),
\label{Z1} \\
\frac{\partial Z_2({\cal K},\xi)}{\partial \xi} &=&
\gamma_2({\cal K}_2,\xi)\circ\gamma_2({\cal K}_2^{\prime},\xi)
+\gamma_3({\cal K}_2,\xi)\circ\bar{\gamma}_3({\cal K}_2^{\prime},\xi),
\label{Z2} \\
\frac{\partial Z_I({\cal K},\xi)}{\partial \xi} &=&
\tilde{\gamma}_3({\cal K}_3,\xi)
\circ\gamma_2({\cal K}_3^{\prime},\xi)
+ \gamma_2({\cal K}_3,\xi)
\circ\tilde{\gamma}_3({\cal K}_3^{\prime},\xi) +
\gamma_1({\cal K}_3,\xi)\circ\gamma_3({\cal K}_3^{\prime},\xi)
\nonumber \\
&& + \gamma_3({\cal K}_3,\xi)\circ\gamma_1({\cal K}_3^{\prime},\xi)
- 2\tilde{\gamma}_3({\cal K}_3,\xi)
\circ\gamma_1({\cal K}_3^{\prime},\xi) -
2\gamma_1({\cal K}_3,\xi)
\circ\tilde{\gamma}_3({\cal K}_3^{\prime},\xi),
\label{ZI} \\
\frac{\partial Z_{II}({\cal K},\xi)}{\partial \xi} &=&
\gamma_3({\cal K}_2,\xi)\circ\gamma_2({\cal K}_2'',\xi)
+ \gamma_2({\cal K}_2,\xi)\circ\gamma_3({\cal K}_2'',\xi),
\label{ZII}
\end{eqnarray}
\end{mathletters}%
where
\begin{mathletters}%
\label{2DK}
\begin{eqnarray}
&& {\cal K}_1=(k_y^{(1)},k_y^{(2)};\:k_y^{(A)},k_y^{(B)}),\quad
{\cal K}_1'=(k_y^{(B)},k_y^{(A)};\:k_y^{(3)},k_y^{(4)}),\\
&& {\cal K}_2=(k_y^{(1)},k_y^{(B)};\:k_y^{(3)},k_y^{(A)}),\quad
{\cal K}_2'=(k_y^{(A)},k_y^{(2)};\:k_y^{(B)},k_y^{(4)}),\quad
{\cal K}_2''=(k_y^{(2)},k_y^{(A)};\:k_y^{(4)},k_y^{(B)}),\\
&& {\cal K}_3=(k_y^{(1)},k_y^{(B)};\:k_y^{(4)},k_y^{(A)}),\quad
{\cal K}_3'=(k_y^{(2)},k_y^{(A)};\:k_y^{(3)},k_y^{(B)}),
\end{eqnarray}
\end{mathletters}%
and the tilde and the bar operations are defined as
\begin{mathletters}%
\label{TildeBar}
\begin{eqnarray}
\tilde{\gamma}_j(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)};\:\xi)
&\equiv&\gamma_j(k_y^{(1)},k_y^{(2)};\:k_y^{(4)},k_y^{(3)};\:\xi),
\label{tilde}\\
\bar{\gamma}_3(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)};\:\xi)
&\equiv&\gamma_3^*(k_y^{(4)},k_y^{(3)};\:k_y^{(2)},k_y^{(1)};\:\xi).
\end{eqnarray}
\end{mathletters}%
In Eqs.\ (\ref{2Dbricks}), we introduced the operation $\circ$ that
represents the integration over the internal momenta of the loops in
Fig.\ \ref{fig:bricks}. It denotes the integration over the
intermediate momentum $k_y^{(A)}$ with the restriction that both
$k_y^{(A)}$ and $k_y^{(B)}$, another intermediate momentum determined
by conservation of momentum, belong to the interval
$[-k_y^{(0)},k_y^{(0)}]$. For example, the explicit form of the first
term in the r.h.s.\ of Eq.\ (\ref{C1}) is:
\begin{eqnarray}
&&\gamma_1({\cal K}_1,\xi)\circ\gamma_2({\cal K}_1^{\prime},\xi)=
\displaystyle
\int_{ -k_y^{(0)} \leq k_y^{(A)} \leq k_y^{(0)};\;\;
-k_y^{(0)} \leq k_y^{(1)}+k_y^{(2)}-k_y^{(A)} \leq k_y^{(0)} }
\frac{\displaystyle dk_y^{(A)}}{\displaystyle 2\pi}\,
\nonumber \\ && \times
\gamma_1(k_y^{(1)},k_y^{(2)};\:k_y^{(A)},k_y^{(1)}+k_y^{(2)}-k_y^{(A)};\:\xi)
\,
\gamma_2(k_y^{(1)}+k_y^{(2)}-k_y^{(A)},k_y^{(A)};\:k_y^{(3)},k_y^{(4)};\:\xi).
\label{o}
\end{eqnarray}
Eqs.\ (\ref{2Dbricks}) and (\ref{2Dgammas}) with definitions
(\ref{K}), (\ref{2DK}), and (\ref{TildeBar}) form a closed system of
integrodifferential equations, which will be solved numerically in
Sec.\ \ref{sec:numerical}. The initial conditions for Eqs.\
(\ref{2Dbricks}) and (\ref{2Dgammas}) are that all the $C$ and $Z$
bricks are equal to zero at $\xi=0$.
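In practice, the numerical solution described in Sec.\ \ref{sec:numerical} requires a discrete representation of the vertices and of the $\circ$ operation. The sketch below illustrates one possible bookkeeping, which is our own choice rather than a prescription from the text: a vertex $\gamma({\cal K},\xi)$ is stored as a three-index array over a uniform $k_y$ grid (the fourth momentum being fixed by conservation), and the $\circ$ operation of Eq.\ (\ref{o}) becomes a restricted sum over the internal momentum; the grid size and couplings are arbitrary.
\begin{verbatim}
# Illustrative discretization of the "circle" operation of Eq. (o).
# gamma(K, xi) is stored as gamma[i1, i2, i3] with k4 = k1 + k2 - k3.
import numpy as np

N = 21                                    # grid points on [-k0, k0], k0 = 1
k = np.linspace(-1.0, 1.0, N)
dk = k[1] - k[0]

def index(value):
    # nearest grid index; None if the momentum leaves the flat region
    if value < k[0] - 1e-9 or value > k[-1] + 1e-9:
        return None
    return int(round((value - k[0]) / dk))

def circle(ga, gb, i1, i2, i3):
    # integrate over k_A with both k_A and k_B = k1 + k2 - k_A on the grid
    total = 0.0
    for iA in range(N):
        iB = index(k[i1] + k[i2] - k[iA])
        if iB is None:
            continue
        total += ga[i1, i2, iA] * gb[iB, iA, i3]
    return total * dk / (2.0 * np.pi)

# momentum-independent initial condition gamma_i(K, xi = 0) = g_i
gamma1 = np.full((N, N, N), 1.0)
gamma2 = np.full((N, N, N), 1.0)
print(circle(gamma1, gamma2, index(0.0), index(0.0), index(0.0)))
\end{verbatim}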
Parquet equations for the superconducting triangular vertices can
be found in the 2D case by adding the $k_y$ momenta to the 1D
equations (\ref{SCOP}). The equations are shown graphically in Fig.\
\ref{fig:SpinlessTriangle}, where the momenta $k$ and $q$ should be
interpreted as $k_y$ and $q_y$:
\begin{equation}
\frac{\partial{\cal T}_{\rm SSC(TSC)}(k_y,q_y,\xi)}{\partial\xi}
= f_{\rm SSC(TSC)}({\cal K}_{\rm SC},\xi)\circ
{\cal T}_{\rm SSC(TSC)}(k'_y,q_y,\xi),
\label{2DSCOP}
\end{equation}
where
\begin{eqnarray}
&& f_{\rm SSC(TSC)}({\cal K}_{\rm SC},\xi)=
\mp\gamma_1({\cal K}_{\rm SC},\xi)-
\gamma_2({\cal K}_{\rm SC},\xi),
\label{fSC} \\
&& {\cal K}_{\rm SC}=(k'_y+q_y/2,-k'_y+q_y/2; -k_y+q_y/2,k_y+q_y/2),
\end{eqnarray}
and the operator $\circ$ denotes the integration over $k'_y$ with the
restriction that both $k'_y+q_y/2$ and $-k'_y+q_y/2$ belong to the
interval $[-k_y^{(0)},k_y^{(0)}]$. The $\mp$ signs in front of
$\gamma_1$ in Eq.\ (\ref{fSC}) correspond to the singlet and triplet
superconductivity. As discussed in Sec.\ \ref{sec:spinless}, the
triangular vertex ${\cal T}_{\rm SC}(k_y,q_y,\xi)$ is the
superconducting order parameter, $q_y$ and $k_y$ being the
$y$-components of the total and the relative momenta of the electrons
in a Cooper pair. Indeed, the vertex ${\cal T}_{\rm SC}(k_y,q_y,\xi)$
obeys the linear equation shown in Fig.\ \ref{fig:SpinlessTriangle},
which is the linearized Gorkov equation for the superconducting order
parameter. As the system approaches a phase transition, the vertex
${\cal T}_{\rm SC}(k_y,q_y,\xi)$ diverges in overall magnitude, but
its dependence on $k_y$ for a fixed $q_y$ remains the same, up to a
singular, $\xi$-dependent factor. The dependence of ${\cal T}_{\rm
SC}(k_y,q_y,\xi)$ on $k_y$ describes the distribution of the emerging
order parameter over the Fermi surface. The numerically found
behavior of ${\cal T}_{\rm SC}(k_y,q_y,\xi)$ is discussed in Sec.\
\ref{sec:numerical}.
Due to the particular shape of the Fermi surface, the vertices of
interaction in our 2D model have two special symmetries: with respect to
the sign change of all momenta $k_y$ and with respect to the exchange of
the $+$ and $-$ electrons:
\begin{mathletters}%
\label{symmetry}
\begin{eqnarray}
\gamma_i(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)};\:\xi) &=&
\gamma_i(-k_y^{(1)},-k_y^{(2)};\:-k_y^{(3)},-k_y^{(4)};\:\xi),
\quad i=1,2,3; \\
\gamma_i(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)};\:\xi) &=&
\gamma_i(k_y^{(2)},k_y^{(1)};\:k_y^{(4)},k_y^{(3)};\:\xi),
\quad i=1,2,3; \\
\gamma_3(k_y^{(1)},k_y^{(2)};\:k_y^{(3)},k_y^{(4)};\:\xi) &=&
\gamma_3(k_y^{(4)},k_y^{(3)};\:k_y^{(2)},k_y^{(1)};\:\xi),
\label{*}
\end{eqnarray}
\end{mathletters}%
where in Eq.\ (\ref{*}) we assume for simplicity that $\gamma_3$ is
real. As a consequence of (\ref{symmetry}), Eqs.\ (\ref{2DSCOP}) are
invariant with respect to the sign reversal of $k_y$ in ${\cal T}_{\rm
SC}(k_y,q_y,\xi)$ at a fixed $q_y$. The following combinations of the
triangular vertices form two irreducible representations of this
symmetry, that is, they are independent and do not mix in Eqs.\
(\ref{2DSCOP}):
\begin{equation}
{\cal T}^\pm_{\rm SSC(TSC)}(k_y,q_y,\xi)=
{\cal T}_{\rm SSC(TSC)}(k_y,q_y,\xi)
\pm {\cal T}_{\rm SSC(TSC)}(-k_y,q_y,\xi).
\label{SASC}
\end{equation}
The triangular vertices ${\cal T}^\pm_{\rm SSC(TSC)}(k_y,q_y,\xi)$
describe the superconducting order parameters that are either
symmetric or antisymmetric with respect to the sign change of $k_y$.
When ${\cal T}^+_{\rm SSC}$ is extended over the whole 2D Fermi
surface (see Fig.\ \ref{fig:2DFS}), it acquires the $s$-wave symmetry,
whereas ${\cal T}^-_{\rm SSC}$ the $d$-wave symmetry. The symmetrized
vertices ${\cal T}^\pm_{\rm SSC(TSC)}(k_y,q_y,\xi)$ obey the same
Eqs.\ (\ref{2DSCOP}) as the unsymmetrized ones.
The equations for the density-wave triangular vertices are obtained
in a similar manner:
\begin{mathletters}%
\label{CSDWOPA}
\begin{eqnarray}
\frac{\partial{\cal T}_{{\rm CDW}\pm}^{\pm}(k_y,q_y,\xi)}
{\partial\xi} &=&
f_{{\rm CDW}\pm}({\cal K}_{\rm DW},\xi)\circ
{\cal T}_{{\rm CDW}\pm}^{\pm}(k'_y,q_y,\xi),
\label{CDWOPA} \\
\frac{\partial{\cal T}_{{\rm SDW}\pm}^{\pm}(k_y,q_y,\xi)}
{\partial\xi} &=&
f_{{\rm SDW}\pm}({\cal K}_{\rm DW},\xi)\circ
{\cal T}_{{\rm SDW}\pm}^{\pm}(k'_y,q_y,\xi),
\label{SDWOPA}
\end{eqnarray}
\end{mathletters}%
where
\begin{eqnarray}
&& f_{{\rm CDW}\pm}({\cal K}_{\rm DW},\xi)=
-2\gamma_1({\cal K}_{\rm DW},\xi)
\mp 2\tilde{\gamma}_3({\cal K}_{\rm DW},\xi)
+\gamma_2({\cal K}_{\rm DW},\xi)
\pm \gamma_3({\cal K}_{\rm DW},\xi),
\label{fCDW} \\
&& f_{{\rm SDW}\pm}({\cal K}_{\rm DW},\xi)=
\gamma_2({\cal K}_{\rm DW},\xi) \pm \gamma_3({\cal K}_{\rm DW},\xi),
\label{fSDW} \\
&& {\cal K}_{\rm DW} = (k'_y+q_y/2,k_y-q_y/2; k'_y-q_y/2,k_y+q_y/2).
\end{eqnarray}
The $\pm$ signs in the subscripts of ${\cal T}$ in Eqs.\
(\ref{CSDWOPA}) and in front of $\gamma_3$ in Eqs.\
(\ref{fCDW})--(\ref{fSDW}) refer to the umklapp symmetry discussed in
Sec.\ \ref {sec:spin1D}, whereas the $\pm$ signs in the superscripts
of ${\cal T}$ refer to the symmetry with respect to sign reversal of
$k_y$, discussed above in the superconducting case. The
$k_y$-antisymmetric density waves are actually the waves of charge
current and spin current \cite{Halperin68,Dzyaloshinskii87a}, also
known as the so-called flux phases \cite{FluxPhases}.
Once the triangular vertices ${\cal T}_i$ are found, the
corresponding susceptibilities $\chi_i$ are calculated according to
the following equation, similar to Eq.\ (\ref{chii}):
\begin{equation}
\chi_i(q_y,\xi)=\int_0^\xi d\zeta \int\frac{dk_y}{2\pi}
{\cal T}_i(k_y,q_y,\zeta){\cal T}_i^*(k_y,q_y,\zeta),
\label{2Dchii}
\end{equation}
where the integration over $k_y$ is restricted so that both
$k_y\pm q_y/2$ belong to the interval $[-k_y^{(0)},k_y^{(0)}]$.
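A possible numerical form of Eq.\ (\ref{2Dchii}) is sketched below for the $q_y=0$ case discussed later; the order parameter is assumed to have been stored on a $(\zeta,k_y)$ grid during the integration, and the fabricated input used in the example is only a stand-in for the actual solution.
\begin{verbatim}
# Hedged sketch of the susceptibility integral (2Dchii) at q_y = 0.
import numpy as np

def susceptibility(T_history, k, d_zeta, q_y=0.0, k0=1.0):
    # T_history has shape (n_zeta, n_k): T(k_y, q_y, zeta_j) on a grid
    dk = k[1] - k[0]
    mask = (np.abs(k + q_y / 2) <= k0) & (np.abs(k - q_y / 2) <= k0)
    per_zeta = np.sum(np.abs(T_history[:, mask])**2, axis=1) * dk / (2*np.pi)
    return np.sum(per_zeta) * d_zeta

# toy usage with a fabricated, exponentially growing order parameter
k = np.linspace(-1.0, 1.0, 41)
zeta = np.linspace(0.0, 1.5, 301)
T_history = np.exp(zeta)[:, None] * np.ones_like(k)[None, :]
print(susceptibility(T_history, k, zeta[1] - zeta[0]))
\end{verbatim}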
Using functions (\ref{fSC}), (\ref{fCDW}), and (\ref{fSDW}) and
symmetries (\ref{symmetry}), we can rewrite Eqs.\ (\ref{2Dbricks}) in
a more compact form. For that purpose, we introduce the SSC, TSC,
CDW, and SDW bricks that are the linear combinations of the original
bricks:
\begin{mathletters}%
\label{NEWbricks}
\begin{eqnarray}
C_{\rm SSC(TSC)} &=& C_2 \pm C_1,
\label{CST} \\
Z_{{\rm CDW}\pm} &=& \tilde{Z}_2 - 2 \tilde{Z}_1
\pm (\tilde{Z}_{II} - 2 Z_I),
\label{ZCW} \\
Z_{{\rm SDW}\pm} &=& Z_2 \pm Z_{II},
\label{ZSW}
\end{eqnarray}
\end{mathletters}%
where the tilde operation is defined in Eq.\ (\ref{tilde}). Then,
Eqs.\ (\ref{2Dbricks}) become:
\begin{mathletters}%
\label{NEWRG}
\begin{eqnarray}
\frac{\partial C_{\rm SSC(TSC)}({\cal K},\xi)}{\partial \xi}
&=& -f_{\rm SSC(TSC)}({\cal K}_1,\xi)
\circ f_{\rm SSC(TSC)}({\cal K}_1^{\prime},\xi),
\label{SC1} \\
\frac{\partial Z_{{\rm CDW}\pm}({\cal K},\xi)}{\partial \xi}
&=& f_{{\rm CDW}\pm}({\cal K}_3,\xi)
\circ f_{{\rm CDW}\pm}({\cal K}_3^{\prime},\xi),
\label{CW1} \\
\frac{\partial Z_{{\rm SDW}\pm}({\cal K},\xi)}{\partial \xi}
&=& f_{{\rm SDW}\pm}({\cal K}_2,\xi)
\circ f_{{\rm SDW}\pm}({\cal K}_2^{\prime},\xi).
\label{SW1}
\end{eqnarray}
\end{mathletters}%
The parquet equations in the form (\ref{NEWRG}) were obtained in
Ref.\ \cite{Dzyaloshinskii72b}.
It is instructive to trace the difference between the parquet
equations (\ref{NEWRG}) and the corresponding ladder equations. Suppose
that, for some reason, only one brick, say $C_{\rm SSC}$, among the six
bricks (\ref{NEWbricks}) is appreciable, whereas the other bricks may be
neglected. Using definitions (\ref{2Dgammas}) and (\ref{fSC}), we find
that Eq.\ (\ref{SC1}) becomes a closed equation:
\begin{equation}
\frac{\partial f_{\rm SSC}({\cal K},\xi)}{\partial \xi} =
f_{\rm SSC}({\cal K}_1,\xi)\circ f_{\rm SSC}({\cal K}_1',\xi),
\label{fSCRG}
\end{equation}
where
\begin{equation}
f_{\rm SSC}({\cal K}_1,\xi)=-g_1-g_2-C_{\rm SSC}({\cal K},\xi).
\label{fSSC}
\end{equation}
Eq.\ (\ref{fSCRG}) is the ladder equation for the singlet
superconductivity. When the initial value $-(g_1+g_2)$ of the vertex
$f_{\rm SSC}$ is positive, Eq.\ (\ref{fSCRG}) has a singular solution
($f_{\rm SSC}\rightarrow\infty$ at $\xi\rightarrow\xi_c$), which
describes a phase transition into the singlet superconducting state at
a finite temperature. Repeating this consideration for every channel,
we construct the phase diagram of the system in the ladder
approximation as a list of necessary conditions for the corresponding
instabilities:
\begin{mathletters}%
\label{LadderPhaseDiagram}
\begin{eqnarray}
{\rm SSC:} & \quad & g_1+g_2<0, \\
{\rm TSC:} & \quad & -g_1+g_2<0, \\
{\rm CDW+:} & \quad & -2g_1+g_2-g_3>0, \\
{\rm CDW-:} & \quad & -2g_1+g_2+g_3>0, \\
{\rm SDW+:} & \quad & g_2+g_3>0, \\
{\rm SDW-:} & \quad & g_2-g_3>0.
\end{eqnarray}
\end{mathletters}%
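A small helper function (our own illustration, not code used in the paper) makes these conditions easy to scan over the $(g_1,g_2,g_3)$ space:
\begin{verbatim}
# List the ladder instabilities of Eqs. (LadderPhaseDiagram) that are
# allowed for given bare couplings.
def ladder_instabilities(g1, g2, g3):
    conditions = {
        "SSC":   g1 + g2 < 0,
        "TSC":  -g1 + g2 < 0,
        "CDW+": -2*g1 + g2 - g3 > 0,
        "CDW-": -2*g1 + g2 + g3 > 0,
        "SDW+":  g2 + g3 > 0,
        "SDW-":  g2 - g3 > 0,
    }
    return [name for name, met in conditions.items() if met]

print(ladder_instabilities(1.0, 1.0, 0.0))   # repulsive Hubbard, no umklapp
print(ladder_instabilities(2.0, 1.0, 0.0))   # the g1=2, g2=1, g3=0 case
\end{verbatim}
For the repulsive Hubbard couplings without umklapp, only the SDW conditions are met, consistent with the leading instability found in Sec.\ \ref{sec:numerical}.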
The difference between the ladder and the parquet approximations
shows up when there is more than one appreciable brick in the
problem. Then, the vertex $f_{\rm SSC}$ contains not only the brick
$C_{\rm SSC}$, but other bricks as well, so Eqs.\ (\ref{NEWRG}) get
coupled. This is the case, for example, for the 1D spinless
electrons, where the bricks $C$ and $Z$ are equally big, so they
cancel each other in $\gamma$ (see Sec.\ \ref{sec:spinless}).
\section{Results of Numerical Calculations}
\label{sec:numerical}
The numerical procedure consists of three consecutive steps; each of
them involves solving differential equations by the fourth-order
Runge--Kutta method. First, we solve parquet equations
(\ref{2Dgammas}) and (\ref{2Dbricks}) for the interaction vertices,
which are closed equations. Then, we find the triangular vertices
${\cal T}_i$, whose equations (\ref{2DSCOP}) and (\ref{CSDWOPA})
involve the interaction vertices $\gamma_i$ through Eqs.\ (\ref{fSC}),
(\ref{fCDW}), and (\ref{fSDW}). Finally, we calculate the
susceptibilities $\chi_i$ from Eqs.\ (\ref{2Dchii}), which depend on
the triangular vertices ${\cal T}_i$.
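The skeleton of this three-step procedure can be written compactly; the sketch below is schematic, with the right-hand sides of Eqs.\ (\ref{2Dbricks}), (\ref{2DSCOP})/(\ref{CSDWOPA}), and (\ref{2Dchii}) left as placeholders to be supplied.
\begin{verbatim}
# Schematic Runge-Kutta driver for the three consecutive steps; the
# right-hand-side functions are placeholders for the actual equations.
import numpy as np

def rk4_step(rhs, y, xi, h):
    k1 = rhs(y, xi)
    k2 = rhs(y + h * k1 / 2, xi + h / 2)
    k3 = rhs(y + h * k2 / 2, xi + h / 2)
    k4 = rhs(y + h * k3, xi + h)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(rhs, y0, xi_max, h=1e-2, blow_up=1e8):
    y, xi, history = y0, 0.0, [(0.0, y0)]
    while xi < xi_max and np.max(np.abs(y)) < blow_up:
        y = rk4_step(rhs, y, xi, h)
        xi += h
        history.append((xi, y))
    return history

# Step 1: vertices gamma_i(K, xi); step 2: triangular vertices T_i using
# the stored gamma_i; step 3: susceptibilities accumulated from T_i.
# Tiny demonstration with a scalar flow d(gamma)/d(xi) = gamma**2:
history = integrate(lambda y, xi: y**2, np.array(1.0), xi_max=2.0)
print("stopped at xi =", history[-1][0], "value =", history[-1][1])
\end{verbatim}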
We select the initial conditions for the interaction vertices to be
independent of the transverse momenta ${\cal K}$: $\gamma_i({\cal
K},\,\xi\!\!=\!\!0) = g_i$. The momentum-independent interaction
naturally appears in the Hubbard model, where the interaction is local
in real space. In this section, the results are shown mostly for the
repulsive Hubbard model without umklapp: $g_1=g_2=g,\;g_3=0$ (Figs.\
\ref{fig:GammaData}--\ref{fig:PhaseDiagram110}), or with umklapp:
$g_1=g_2=g_3=g$ (Figs.\ \ref{fig:chi111}--\ref{fig:PhaseDiagram111}),
where $g$ is proportional to the Hubbard interaction constant $U$. The
absolute value of $g$ (but not the sign of $g$) is not essential in
our calculations, because it can be removed from the equations by
redefining $\xi$ to $\xi'=|g|\xi$. After the redefinition, we
effectively have $|g|=1$ in the initial conditions. The actual value
of $|g|$ matters only when the logarithmic variable $\xi'$ is
converted into the temperature according to the formula
$T=\mu\exp(-2\pi v_F\xi'/|g|)$.
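A one-line conversion of this kind is used whenever temperatures are quoted below; the snippet is only illustrative, with $\mu$ and $v_F$ set to unity and a divergence point $\xi'_c=1.76$ of the kind found in the next paragraphs.
\begin{verbatim}
# Convert the rescaled variable xi' = |g| xi into a temperature,
# T = mu * exp(-2 pi v_F xi'/|g|); mu and v_F are set to 1 here.
import numpy as np

def temperature(xi_prime, g, mu=1.0, v_F=1.0):
    return mu * np.exp(-2.0 * np.pi * v_F * xi_prime / abs(g))

for g in (0.3, 0.4, 0.5):
    print(g, temperature(xi_prime=1.76, g=g))
\end{verbatim}
A larger $|g|$ corresponds to an exponentially higher transition temperature for the same rescaled divergence point.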
The initial independence of $\gamma_i({\cal K},\,\xi\!\!=\!\!0)$ on
${\cal K}$ does not imply that this property is preserved upon
renormalization. On the contrary, during renormalization,
$\gamma_i({\cal K},\xi)$ develops a very strong dependence on ${\cal
K}$ and may even change sign in certain regions of the ${\cal
K}$-space. We illustrate this statement in Fig.\ \ref{fig:GammaData}
by showing typical dependences of $\gamma_1({\cal K},\xi)$ and
$\gamma_2({\cal K},\xi)$ on the average momentum
$p_y=(k_y^{(1)}+k_y^{(2)})/2$ of the incoming electrons at $k_1=k_3$
and $k_2=k_4$ after some renormalization ($\xi = 1.4$). In Figs.\
\ref{fig:GammaData}--\ref{fig:TDWData}, the upper and lower limits on
the horizontal axes are the boundaries $\pm k_y^{(0)}$ of the flat
region on the Fermi surface, which are set to $\pm1$ without loss of
generality. One can observe in Fig.\ \ref{fig:GammaData} that the
electron-electron interaction becomes negative (attractive) at large
$p_y$, even though initially it was repulsive everywhere.
Mathematically, the dependence of $\gamma_i({\cal K},\xi)$ on
${\cal K}$ arises because of the finite limits of integration,
$[-k_y^{(0)},k_y^{(0)}]$, imposed on the variables $k_y^{(A)}$ and
$k_y^{(B)}$ in Eqs.\ (\ref{2Dbricks}). For example, in Eq.\
(\ref{C1}), when $p_y=(k_y^{(1)}+k_y^{(2)})/2$ equals zero,
$k_y^{(A)}$ may change from $-k_y^{(0)}$ to $k_y^{(0)}$ while
$k_y^{(B)}$ stays in the same interval. However, when $p_y>0$,
$k_y^{(A)}$ has to be confined to a narrower interval
$[-k_y^{(0)}+2p_y,k_y^{(0)}]$ to ensure that
$k_y^{(B)}=2p_y-k_y^{(A)}$ stays within $[-k_y^{(0)},k_y^{(0)}]$.
This difference in the integration range subsequently generates the
dependence of $\gamma_i({\cal K},\xi)$ on $p_y$ and, more generally,
on ${\cal K}$. Since many channels with different geometrical
restrictions contribute to $\partial\gamma_i({\cal
K},\xi)/\partial\xi$ in Eqs.\ (\ref{2Dbricks}), the resultant
dependence of $\gamma_i({\cal K},\xi)$ on the four-dimensional vector
${\cal K}$ is complicated and hard to visualize. Because of the
strong dependence of $\gamma_i({\cal K},\xi)$ on ${\cal K}$, it is not
possible to describe the 2D system by only three renormalizing charges
$\gamma_1(\xi)$, $\gamma_2(\xi)$, and $\gamma_3(\xi)$, as in the 1D
case. Instead, it is absolutely necessary to consider an infinite
number of the renormalizing charges $\gamma_i({\cal K},\xi)$ labeled
by the continuous variable ${\cal K}$. This important difference was
neglected in Ref.\ \cite{Marston93}, where the continuous variable
${\cal K}$ was omitted.
Having calculated $\gamma_i({\cal K},\xi)$, we solve Eqs.\
(\ref{2DSCOP}) and (\ref{CSDWOPA}) for the triangular vertices (the
order parameters) ${\cal T}(k_y,q_y,\xi)$, which depend on both the
relative ($k_y$) and the total ($q_y$) transverse momenta. We find
numerically that the order parameters with $q_y=0$ diverge faster than
those with $q_y\neq0$. This is a natural consequence of the
integration range restrictions discussed above. For this reason, we
discuss below only the order parameters with zero total momentum
$q_y=0$. We select the initial conditions for the symmetric and
antisymmetric order parameters in the form:
\begin{equation}
{\cal T}_i^+(k_y,\,\xi\!\!=\!\!0)=1,\quad
{\cal T}_i^-(k_y,\,\xi\!\!=\!\!0)=k_y.
\label{SA}
\end{equation}
In Figs.\ \ref{fig:TSCData} and \ref{fig:TDWData}, we present typical
dependences of the superconducting and density-wave order parameters
on the relative momentum $k_y$ at the same renormalization ``time''
$\xi = 1.4$ as in Fig.\ \ref{fig:GammaData}. The singlet
antisymmetric component (${\cal T}_{\rm SSC}^{-}$) dominates among the
superconducting order parameters (Fig.\ \ref{fig:TSCData}), whereas
the symmetric SDW order parameter (${\cal T}_{\rm SDW}^+$) is the highest
in the density-wave channel (Fig.\ \ref{fig:TDWData}).
Having calculated the triangular vertices ${\cal T}$, we find the
susceptibilities from Eq.\ (\ref{2Dchii}). The results are shown in
Fig.\ \ref{fig:chi110}. The symmetric SDW has the fastest growing
susceptibility $\chi^+_{\rm SDW}$, which diverges at $\xi_{\rm
SDW}=1.76$. This divergence indicates that a phase transition from
the metallic to the antiferromagnetic state takes place at the
transition temperature $T_{\rm SDW}=\mu\exp(-2\pi v_F\xi_{\rm
SDW}/g)$. A similar result was obtained in Ref.\
\cite{Dzyaloshinskii72b} by analyzing the convergence radius of the
parquet series in powers of $g\xi$. In the ladder approximation, the
SDW instability would take place at $\xi_{\rm SDW}^{\rm lad}=1/g_2=1$,
as follows from Eqs.\ (\ref{fSDW}) and (\ref{SW1}). Since $\xi_{\rm
SDW}>\xi_{\rm SDW}^{\rm lad}$, the transition temperature $T_{\rm
SDW}$, calculated in the parquet approximation, is lower than the
temperature $T_{\rm SDW}^{\rm lad}$, calculated in the ladder
approximation: $T_{\rm SDW}<T_{\rm SDW}^{\rm lad}$. The parquet
temperature is lower, because competing superconducting and
density-wave instabilities partially suppress each other.
Thus far, we considered the model with ideally flat regions on the
Fermi surface. Suppose now that these regions are only approximately
flat. That is, they can be treated as being flat for the energies
higher than a certain value $E_{\rm cutoff}$, but a curvature or a
corrugation of the Fermi surface becomes appreciable at the smaller
energies $E<E_{\rm cutoff}$. Because of the curvature, the Fermi
surface does not have nesting for $E<E_{\rm cutoff}$; thus, the
density-wave bricks in the parquet equations (\ref{2Dbricks}) stop
renormalizing. Formally, this effect can be taken into account by
introducing a cutoff $\xi_{\rm cutoff}=(1/2\pi v_F)\ln(\mu/E_{\rm
cutoff})$, so that the r.h.s.\ of Eqs.\ (\ref{Z1})--(\ref{ZII}) for
the density-wave bricks are replaced by zeros at $\xi>\xi_{\rm
cutoff}$. At the same time, Eqs.\ (\ref{C1}) and (\ref{C2}) for the
superconducting bricks remain unchanged, because the curvature of the
Fermi surface does not affect the superconducting instability with
$q_y=0$. The change of the renormalization equations at $\xi_{\rm
cutoff}$ is not a completely rigorous way \cite{Luther88} to take into
account the Fermi surface curvature; however, this procedure permits
obtaining explicit results and has a certain qualitative appeal. For
a more rigorous treatment of the corrugated Fermi surface problem see
Ref.\ \cite{Firsov}.
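In the numerical implementation, the cutoff enters only through a conditional switch of the density-wave right-hand sides; the fragment below sketches this, with the functions rhs_sc and rhs_dw standing in for Eqs.\ (\ref{C1})--(\ref{C2}) and (\ref{Z1})--(\ref{ZII}), and with the relation between $\xi_{\rm cutoff}$ and $E_{\rm cutoff}$ taken from the definition given above.
\begin{verbatim}
# Sketch of the cutoff prescription: freeze the density-wave bricks for
# xi > xi_cutoff while the superconducting bricks keep flowing.
import numpy as np

def xi_cutoff_from_energy(E_cutoff, mu=1.0, v_F=1.0):
    # xi_cutoff = (1/2 pi v_F) ln(mu/E_cutoff)
    return np.log(mu / E_cutoff) / (2.0 * np.pi * v_F)

def rhs_with_cutoff(bricks, xi, xi_cutoff, rhs_sc, rhs_dw):
    dC = rhs_sc(bricks, xi)            # Eqs. (C1)-(C2): unchanged
    dZ = rhs_dw(bricks, xi)            # Eqs. (Z1)-(ZII): ...
    if xi >= xi_cutoff:
        dZ = np.zeros_like(dZ)         # ...frozen once nesting is lost
    return dC, dZ

print(xi_cutoff_from_energy(1.5e-4))   # roughly the xi_cutoff = 1.4 used below
\end{verbatim}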
In Fig.\ \ref{fig:chiCutoff}, we show the susceptibilities
calculated using the cutoff procedure with $\xi_{\rm cutoff}=1.4$.
The density-wave susceptibilities remain constant at $\xi>\xi_{\rm
cutoff}$. At the same time, $\chi_{\rm SSC}^-(\xi)$ diverges at
$\xi_{\rm SSC}^-=2.44$ indicating a transition into the singlet
superconducting state of the $d$-wave type. Thus, if the SDW
instability is suppressed, the system is unstable against formation of
the $d$-wave superconductivity. This result is in agreement with the
conclusions of Refs.\ \cite{Ruvalds95,Scalapino,Dzyaloshinskii87a}.
From our numerical results, we deduce that the dependence of
$\xi_{\rm SSC}^-$ on $\xi_{\rm cutoff}$ is linear: $\xi_{\rm
SSC}^-=a-b\,\xi_{\rm cutoff}$ with $b=2.06$, as shown in the inset to
Fig.\ \ref{fig:PhaseDiagram110}. Converting $\xi$ into energy in this
relation, we find a power law dependence:
\begin{equation}
T_{\rm SSC}^- \propto \frac{1}{E_{\rm cutoff}^b}.
\label{TCR1}
\end{equation}
Eq.\ (\ref{TCR1}) demonstrates that increasing the cutoff energy
$E_{\rm cutoff}$ decreases the temperature of the superconducting
transition, $T_{\rm SSC}^-$. Such a relation can be qualitatively
understood in the following way. There is no bare interaction in the
superconducting $d$-wave channel in the Hubbard model, so the
transition is impossible in the ladder approximation. The growth of
the superconducting $d$-wave correlations is induced by the growth of
the SDW correlations, because the two channels are coupled in the
parquet equations (\ref{NEWRG}). If $E_{\rm cutoff}$ is high, the SDW
correlations do not have enough renormalization-group ``time'' $\xi$
to develop themselves because of the early cutoff of the density-wave
channels; thus, $T_{\rm SSC}^-$ is low. Hence, decreasing $E_{\rm
cutoff}$ increases $T_{\rm SSC}^-$. However, when $E_{\rm cutoff}$
becomes lower than $T_{\rm SDW}$, the SDW instability overtakes the
superconducting one. The corresponding phase diagram is shown in Fig.\
\ref{fig:PhaseDiagram110}. Generally speaking, the phase diagram
plotted in the energy variables, as opposed to the logarithmic
variables $\xi$, may depend on the absolute value of the bare
interaction constant $|g|$. In Fig.\ \ref{fig:PhaseDiagram110}, we
placed the points for several values of $g$ = 0.3, 0.4, and 0.5;
the phase boundary does not depend much on the choice of $g$. The
phase diagram of Fig.\ \ref{fig:PhaseDiagram110} qualitatively
resembles the experimental one for the high-$T_c$ superconductors,
where transitions between the metallic, antiferromagnetic, and
superconducting states are observed. The value of $E_{\rm cutoff}$
may be related to the doping level, which controls the shape of the
Fermi surface. Taking into account the crudeness of our
approximations, detailed agreement with the experiment should not be
expected.
We perform the same calculations also for the Hubbard model with
umklapp scattering ($g_1 = g_2 = g_3 =1$). As one can see in Fig.\
\ref{fig:chi111}, where the susceptibilities are shown, the umklapp
process does not modify the qualitative picture. The leading
instability remains the SDW of the symmetric type, which is now also
symmetric with respect to the umklapp scattering, whereas the next
leading instability is the singlet $d$-wave superconductivity. The
SDW has a phase transition at $\xi_{\rm SDW+}^+=0.54$, which is close
to the ladder result $\xi_{\rm SDW+}^{\rm lad}=1/(g_2+g_3)=0.5$. Some
of the susceptibilities in Fig.\ \ref{fig:chi111} coincide exactly,
which is a consequence of a special SU(2)$\times$SU(2) symmetry of the
Hubbard model at half filling \cite{SO(4)}. The phase diagram
with the energy cutoff (Fig.\ \ref{fig:PhaseDiagram111}) is similar to
the one without umklapp (Fig.\ \ref{fig:PhaseDiagram110}), but the
presence of the umklapp scattering decreases the transition
temperature of the $d$-wave superconductivity.
An important issue in the study of the 1D electron gas is the
so-called $g$-ology phase diagram, which was constructed for the first
time by Dzyaloshinskii and Larkin \cite{Dzyaloshinskii72a}. They
found that, in some regions of the $(g_1,g_2,g_3)$ space, the 1D
electron system develops a charge or spin gap, which is indicated by
divergence of $\gamma_i(\xi)$ with increasing $\xi$. In the region
where none of the gaps develops, the Luttinger liquid exists. It is
interesting to ask whether such a region may exist in our 2D model. To study
the phase diagram of the 2D system, we repeat the calculations,
systematically changing relative values of $g_1$, $g_2$, and $g_3$.
From the physical point of view, the relative difference of $g_1$,
$g_2$, and $g_3$ roughly mimics the dependence of the interaction vertex
on the momentum transfer. As an example, we show the susceptibilities
in the case where $g_1=2$, $g_2=1$, and $g_3=0$ in Fig.\
\ref{fig:chi210}. In this case, the leading instabilities are
simultaneously the triplet superconductivity of the symmetric type
(TSC+) and the spin-density wave.
For all studied sets of $g_i$, we find that the leading
instabilities calculated in the parquet and the ladder approximations
always coincide. (We do not introduce the energy cutoff here.) Thus,
the parquet effects do not modify the $g$-ology phase diagram of the
2D model derived in the ladder approximation, even though the
transition temperatures in the parquet approximation are always lower
than those obtained in the ladder approximation. In that sense, the
parquet corrections are much less important in the 2D case than in the
1D case. From the mathematical point of view, this happens because a
leading divergent brick develops a strong dependence on the transverse
momenta ${\cal K}$ and acquires the so-called mobile pole structure
\cite{Gorkov74,Brazovskii71,Dzyaloshinskii72b}:
\begin{equation}
Z({\cal K},\xi)\propto\frac{1}{\xi_c({\cal K})-\xi}.
\label{MovingPole}
\end{equation}
The name ``mobile pole'' is used because the position of the pole in
$\xi$ in Eq.\ (\ref{MovingPole}), $\xi_c({\cal K})$, strongly depends
on the momenta ${\cal K}$. It was shown in Refs.\
\cite{Brazovskii71,Gorkov74,Dzyaloshinskii72b} that, because of the
mobility of the pole, the leading channel decouples from the other
channels, and the parquet description effectively reduces to the
ladder one, as described at the end of Sec.\ \ref{sec:2D}. The phase
diagram of the 2D system in the ladder approximation is given by Eqs.\
(\ref{LadderPhaseDiagram}). It follows from Eqs.\
(\ref{LadderPhaseDiagram}) that every point in the $(g_1,g_2,g_3)$
space has some sort of instability. Thus, the Luttinger liquid,
defined as a nontrivial metallic ground state where different
instabilities mutually cancel each other, does not exist in the 2D
model.
Generally speaking, other models may have different types of
solutions of the fast parquet equations, such as immobile poles
\cite{Gorkov74} or a self-similar solution \cite{Yakovenko93a}, the
latter indeed describing some sort of a Luttinger liquid. In our
study of a 2D model with the van Hove singularities
\cite{Dzyaloshinskii87a}, we found a region in the $g$-space without
instabilities, where the Luttinger liquid may exist
\cite{Dzyaloshinskii}. However, we find only the mobile-pole
solutions in the present 2D model.
\section{Conclusions}
\label{sec:conclusion}
In this paper we derive and numerically solve the parquet equations
for the 2D electron gas whose Fermi surface contains flat regions.
The model is a natural generalization of the 1D electron gas model,
where the Luttinger liquid is known to exist. We find that, because
of the finite size of the flat regions, the 2D parquet equations
always develop the mobile pole solutions, where the leading
instability effectively decouples from the other channels. Thus, a
ladder approximation is qualitatively (but not necessarily
quantitatively) correct for the 2D model, in contrast to the 1D case.
Whatever the values of the bare interaction constants are, the 2D
system always develops some sort of instability. Thus, the Luttinger
liquid, defined as a nontrivial metallic ground state where different
instabilities mutually cancel each other, does not exist in the 2D
model, contrary to the conclusions of Refs.\
\cite{Mattis87,Hlubina94}.
In the case of the repulsive Hubbard model, the leading instability
is the SDW, i.e., antiferromagnetism \cite{Dzyaloshinskii72b}. If the
nesting of the Fermi surface is not perfect, the SDW correlations do
not develop into a phase transition, and the singlet superconductivity
of the $d$-wave type appears in the system instead. These results may
be relevant for the high-$T_c$ superconductors and are in qualitative
agreement with the findings of Refs.\
\cite{Ruvalds95,Scalapino,Dzyaloshinskii87a}.
In the bosonization procedure
\cite{Haldane92,Khveshchenko93a,Khveshchenko94b,Marston93,Marston,Fradkin,LiYM95,Kopietz95},
a higher-dimensional Fermi surface is treated as a collection of flat
patches. Since the results of our paper do not depend qualitatively on
the size of the flat regions on the Fermi surface, the results may be
applicable, to some extent, to the patches as well. A precise relation
is hard to establish because of the infinitesimal size of the patches,
their different orientations, and uncertainties of connections between
them. On the other hand, the bosonization procedure seems to be even
better applicable to a flat Fermi surface, which consists of a few big
patches. Mattis \cite{Mattis87} and Hlubina \cite{Hlubina94} followed
that logic and claimed that the flat Fermi surface model is exactly
solvable by the bosonization and represents a Luttinger liquid. The
discrepancy between this claim and the results of our paper indicates
that some conditions must restrict the validity of the bosonization
approximations. Luther gave a more sophisticated treatment to the flat
Fermi surface problem by mapping it onto multiple quantum chains
\cite{Luther94}. He found that the bosonization converts the
interaction between electrons into the two types of terms, roughly
corresponding to the two terms of the sine-Gordon model: the
``harmonic'' terms $(\partial \varphi/\partial x)^2$ and the
``exponential'' terms $\exp(i\varphi)$, where $\varphi$ is a
bosonization phase. The harmonic terms can be readily diagonalized,
but the exponential terms require a consistent renormalization-group
treatment. If the renormalization-group equations were derived in the
bosonization scheme of \cite{Luther94}, they would be the same as the
parquet equations written in our paper, because the
renormalization-group equations do not depend on whether the boson or
fermion representation is used in their derivation \cite{Wiegmann78}.
A long time ago, Luther bosonized noninteracting electrons on a
curved Fermi surface \cite{Luther79}; however, the interaction between
the electrons remained intractable because of the exponential
terms. The recent bosonization in higher dimensions
\cite{Haldane92,Khveshchenko93a,Khveshchenko94b,Marston93,Marston,Fradkin,LiYM95,Kopietz95}
managed to reformulate the problem in the harmonic terms only. This is
certainly sufficient to reproduce the Landau description of sound
excitations in a Fermi liquid \cite{Landau-IX}; however, it may not be
sufficient to derive the electron correlation functions. The validity
of the harmonic approximation is hard to trace for a curved Fermi
surface, but considerable experience has been accumulated for the flat
Fermi surface models.
In the model of multiple 1D chains without single-electron
tunneling between the chains and with forward scattering between
different chains, the bosonization produces the harmonic terms only,
thus the model can be solved exactly
\cite{Larkin73,Gutfreund76b}. However, a slight modification of the
model by introducing backward scattering between different chains
\cite{Gorkov74,PALee77} or interaction between four different chains
\cite{Yakovenko87} adds the exponential terms, which destroy the exact
solvability and typically lead to a CDW or SDW instability. Even if no
instability occurs, as in the model of electrons in a high magnetic
field \cite{Yakovenko93a}, the fast parquet method shows that the
electron correlation functions have a complicated, nonpower structure,
which is impossible to obtain within the harmonic bosonization.
Further comparison of the fast parquet method and the bosonization in
higher dimensions might help to establish the conditions of
applicability of the two complementary methods.
The work at Maryland was partially supported by the NSF under Grant
DMR--9417451, by the Alfred P.~Sloan Foundation, and by the David and
Lucile Packard Foundation.
\section{Introduction}
\label{I}
The relativistic approach to nuclear physics has attracted much
attention. From a theoretical point of view, it allows one to implement,
in principle, the important requirements of relativity, unitarity,
causality and renormalizability~\cite{Wa74}. From the phenomenological
side, it has also been successful in reproducing a large body of
experimental data~\cite{Wa74,Ho81,Se86,Re89,Se92}. In the context of
finite nuclei a large amount of work has been done at the Hartree
level but considering only the positive energy single particle nucleon
states. The Dirac sea has also been studied since it is required to
preserve the unitarity of the theory. Actually, Dirac sea corrections
have been found to be non negligible using a semiclassical expansion
which, if computed to fourth order, seems to be quickly
convergent~\cite{Ca96a}. Therefore, it would appear that the overall
theoretical and phenomenological picture suggested by the relativistic
approach is rather reliable.
However, it has been known for ten years that such a description is
internally inconsistent. The vacuum of the theory is unstable due to
the existence of tachyonic poles in the meson propagators at high
Euclidean momenta~\cite{Pe87}. Alternatively, a translationally
invariant mean field vacuum does not correspond to a minimum; the
Dirac sea vacuum energy can be lowered by allowing small size mean
field solutions~\cite{Co87}. Being a short distance instability it
does not show up for finite nuclei at the one fermion loop level and
within a semiclassical expansion (which is an asymptotic large size
expansion). For the same reason, it does not appear either in the
study of nuclear matter if translational invariance is imposed as a
constraint. However, the instability sets in either in an exact mean
field valence plus sea (i.e., one fermion loop) calculation for finite
nuclei or in the determination of the correlation energy for nuclear
matter (i.e., one fermion loop plus a boson loop). Unlike quantum
electrodynamics, where the instability takes place far beyond its
domain of applicability, in quantum hadrodynamics it occurs at the
length scale of 0.2~fm, which is comparable to the nucleon size and to
the Compton wavelength associated with its mass. Therefore, the
existence of the instability contradicts the
original motivation that led to the introduction of the field
theoretical model itself. In such a situation several possibilities
arise. Firstly, one may argue that the model is defined only as an
effective theory, subjected to inherent limitations regarding the
Dirac sea. Namely, the sea may at best be handled semiclassically,
hence reducing the scope of applicability of the model. This
interpretation is intellectually unsatisfactory since the
semiclassical treatment would be an approximation to a nonexistent
mean field description. Alternatively, and taking into account the
phenomenological success of the model, one may take more seriously the
spirit of the original proposal~\cite{Wa74}, namely, to use specific
renormalizable Lagrangians where the basic degrees of freedom are
represented by nucleon and meson fields. Such a path has been
explored in a series of papers~\cite{Ta90,Ta91,TB92} inspired by the
early work of Redmond and Bogolyubov on non asymptotically free
theories~\cite{Re58,Bo61}. The key feature of this kind of theories is
that they are only defined in a perturbative sense. According to the
latter authors, it is possible to supplement the theory with a
prescription based on an exact fulfillment of the K\"all\'en-Lehmann
representation of the two point Green's functions. The interesting
aspect of this proposal is that the Landau poles are removed in such a
way that the perturbative content of the theory remains unchanged. In
particular, this guarantees that the perturbative renormalizability is
preserved. It is, however, not clear whether this result can be
generalized to three and higher point Green's functions in order to
end up with a completely well-behaved field theory. Although the
prescription to eliminate the ghosts may seem to be ad hoc, it
certainly agrees more with the original proposal and provides a
workable calculational scheme.
The above mentioned prescription has already been used in the context
of nuclear physics. In ref.~\cite{Lo80}, it was applied to ghost
removal in the $\sigma$ exchange in the $NN$ potential. More recently, it
has been explored to study the correlation energy in nuclear matter in
the $\sigma$-$\omega$ model~\cite{TB92} and also in the evaluation of
response functions within a local density approximation~\cite{Ta90}.
Although this model is rather simple, it embodies the essential field
theoretical aspects of the problem while still providing a reasonable
phenomenological description. We will use the $\sigma$-$\omega$ model
in the present work, to estimate the binding energy of finite nuclei
within a self-consistent mean field description, including the effects
due to the Dirac sea, after explicit elimination of the ghosts. An
exact mean field calculation, both for the valence and sea, does make
sense in the absence of a vacuum instability but in practice it
becomes a technically cumbersome problem. This is due to the presence
of a considerable number of negative energy bound states in addition
to the continuum states\cite{Se92}. Therefore, it seems advisable to
use a simpler computational scheme to obtain a numerical estimate.
This will allow us to see whether or not the elimination of the ghosts
induces dramatic changes in the already satisfactory description of
nuclear properties. In this work we choose to keep the full Hartree
equations for the valence part but employ a semiclassical
approximation for the Dirac sea. This is in fact the standard
procedure~\cite{Se86,Re89,Se92}. As already mentioned, and discussed
in previous work~\cite{Ca96a}, this expansion converges rather quickly
and therefore might be reliably used to estimate the sea energy up to
possible corrections due to shell effects.
The paper is organized as follows. In section~\ref{II} we present the
$\sigma$-$\omega$ model of nuclei in the $1/N$ leading approximation,
the semiclassical treatment of the Dirac sea, the renormalization
prescriptions and the different parameter fixing schemes that we will
consider. In section~\ref{III} we discuss the vacuum instability
problem of the model and Redmond's proposal. We also study the
implications of the ghost subtraction on the low momentum effective
parameters. In section~\ref{IV} we present our numerical results for
the parameters as well as binding energies and mean quadratic charge
radii of some closed-shell nuclei. Our conclusions are presented in
section~\ref{V}. Explicit expressions for the zero momentum
renormalized meson self energies and related formulas are given in the
appendix.
\section{$\sigma$-$\omega$ model of nuclei}
\label{II}
In this section we revise the $\sigma$-$\omega$ model description of
finite nuclei disregarding throughout the instability problem; this
will be considered in the next section. The Dirac sea corrections are
included at the semiclassical level and renormalization issues as well
as the various ways of fixing the parameters of the model are also
discussed here.
\subsection{Field theoretical model}
Our starting point is the Lagrangian density of the $\sigma$-$\omega$
model~\cite{Wa74,Se86,Re89,Se92} given by
\begin{eqnarray}
{\cal L}(x) &=& \overline\Psi(x) \left[ \gamma_\mu ( i \partial^\mu - g_v
V^\mu(x)) - (M - g_s \phi(x)) \right] \Psi(x)
+ {1\over 2}\, (\partial_\mu \phi(x)
\partial^\mu \phi(x) - m_s^2 \, \phi^2(x)) \nonumber \\
& & - {1\over 4} \, F_{\mu \nu}(x) F^{\mu \nu}(x)
+ {1\over 2} \, m_v^2 V_\mu(x) V^\mu(x) + \delta{\cal L}(x)\,.
\label{lagrangian}
\end{eqnarray}
$\Psi(x)$ is the isospinor nucleon field, $\phi(x)$ the scalar field,
$V_\mu(x)$ the $\omega$-meson field and $F_{\mu\nu} =\partial_\mu
V_\nu-\partial_\nu V_\mu$. In the former expression the necessary
counterterms required by renormalization are accounted for by the
extra Lagrangian term $\delta{\cal L}(x)$ (including meson
self-couplings).
Including Dirac sea corrections requires taking care of
renormalization issues. The best way of doing this in the present
context is to use an effective action formalism. Further we have to
specify the approximation scheme. The effective action will be
computed at lowest order in the $1/N$ expansion, $N$ being the number
of nucleon species (with $g_s$ and $g_v$ of order $1/\sqrt{N}$), that
is, up to one fermion loop and tree level for bosons~\cite{TB92}. This
corresponds to the Hartree approximation for fermions including the
Dirac sea~\cite{NO88}.
In principle, the full effective action would have to be computed by
introducing bosonic and fermionic sources. However,
since we will consider only stationary situations, we do not
need to introduce fermionic sources. Instead, we will
proceed as usual by integrating out exactly the fermionic degrees of
freedom. This gives directly the bosonic effective action at leading
order in the $1/N$ expansion:
\begin{equation}
\Gamma[\phi,V]= \Gamma_B[\phi,V]+\Gamma_F[\phi,V] \,,
\end{equation}
where
\begin{equation}
\Gamma_B[\phi,V]=\int\left({1\over 2}\, (\partial_\mu \phi
\partial^\mu \phi - m_s^2 \, \phi^2) - {1\over 4} \, F_{\mu \nu} F^{\mu \nu}
+ {1\over 2} \, m_v^2 V_\mu V^\mu \right) d^4x \,,
\label{GammaB}
\end{equation}
and
\begin{equation}
\Gamma_F[\phi,V]= -i\log {\rm Det}\left[ \gamma_\mu ( i \partial^\mu - g_v
V^\mu) - (M - g_s \phi) \right] +\int\delta{\cal L}(x)d^4x \,.
\end{equation}
The fermionic determinant can be computed perturbatively, by adding up
the one-fermion loop amputated graphs with any number of bosonic legs,
using a gradient expansion or by any other technique. The ultraviolet
divergences are to be canceled with the counterterms by using any
renormalization scheme; all of them give the same result after fitting
to physical observables.
The effective action so obtained is uniquely defined and completely
finite. However, there still remains the freedom to choose different
variables to express it. We will work with fields renormalized at zero
momentum. That is, the bosonic fields $\phi(x)$ and $V_\mu(x)$ are
normalized so that their kinetic energy term is the canonical
one. This is the choice shown above in $\Gamma_B[\phi,V]$. Another usual
choice is the on-shell one, namely, to rescale the fields so that the
residue of the propagator at the meson pole is unity. Note that the
Lagrangian mass parameters $m_s$ and $m_v$ do not correspond to the
physical masses (which will be denoted $m_\sigma$ and $m_\omega$ in
what follows) since the latter are defined as the position of the
poles in the corresponding propagators. The difference comes from the
fermion loop self energy in $\Gamma_F[\phi,V]$ that contains terms
quadratic in the boson fields with higher order gradients.
Let us turn now to the fermionic contribution, $\Gamma_F[\phi,V]$. We
will consider nuclear ground states of spherical nuclei, therefore the
space-like components of the $\omega$-meson field vanish \cite{Se92}
and the remaining fields, $\phi(x)$ and $V_0(x)$, are stationary. As
it is well-known, for stationary fields the fermionic energy, i.e.,
minus the action $\Gamma_F[\phi,V]$ per unit time, can be formally written
as the sum of single particle energies of the fermion moving in the
bosonic background~\cite{NO88},
\begin{equation}
E_F[\phi,V_0] = \sum_n E_n \,,
\end{equation}
where the orbitals $\psi_n(x)$ and the single-particle energies $E_n$
obey the Dirac equation
\begin{equation}
\left[ -i {\bf \alpha} \cdot \nabla + g_vV_0(x) + \beta (
M-g_s\phi(x)) \right] \psi_n(x) = E_n \,\psi_n(x)\,.
\label{Dirac-eq}
\end{equation}
Note that what we have called the fermionic energy contains not only
the fermionic kinetic energy, but also the potential energy coming
from the interaction with the bosons.
The orbitals, and thus the fermionic energy, can be divided into
valence and sea, i.e., positive and negative energy orbitals. In
realistic cases there is a gap in the spectrum which makes such a
separation a natural one. The valence energy is therefore given by
\begin{equation}
E_F^{\rm val}[\phi,V] = \sum_n E_n^{\rm val}\,.
\end{equation}
On the other hand, the sea energy is ultraviolet divergent and
requires the renormalization mentioned above~\cite{Se86}. The (at zero
momentum) renormalized sea energy is known in a gradient or
semiclassical expansion up to fourth order and is given by~\cite{Ca96a}
\begin{eqnarray}
E^{\rm sea}_0 & = & -{\gamma\over 16\pi^2} M^4 \int d^3x \,
\Biggl\{ \Biggr.
\left({\Phi\over M}\right)^4 \log {\Phi\over M}
+ {g_s\phi\over M} - {7\over 2} \left({g_s \phi\over M}\right)^2
+ {13\over 3} \left({g_s \phi\over M}\right)^3 - {25\over 12}
\left({g_s \phi\over M}\right)^4 \Biggl. \Biggr\} \nonumber\\
E^{\rm sea}_2 & = & {\gamma \over 16 \pi^2} \int d^3 x \,
\Biggl\{ {2 \over 3} \log{\Phi\over M} (\nabla V)^2
- \log{\Phi\over M} (\nabla \Phi)^2 \Biggr\} \nonumber\\
E^{\rm sea}_4 & = & {\gamma\over 5760 \pi^2} \int d^3 x \,
\Biggl\{ \Biggr. -11\,\Phi^{-4} (\nabla \Phi)^4
- 22\,\Phi^{-4} (\nabla V)^2(\nabla \Phi)^2
+ 44 \, \Phi^{-4} \bigl( (\nabla_i \Phi) (\nabla_i V) \bigr)^2
\nonumber\\
& & \quad - 44 \, \Phi^{-3} \bigl( (\nabla_i \Phi) (\nabla_i V)
\bigr) (\nabla^2 V)
- 8 \, \Phi^{-4} (\nabla V)^4 + 22 \, \Phi^{-3} (\nabla^2 \Phi)
(\nabla \Phi)^2
\nonumber\\
& & \quad
+ 14 \, \Phi^{-3} (\nabla V)^2 (\nabla^2 \Phi)
- 18 \, \Phi^{-2} (\nabla^2 \Phi)^2 + 24 \, \Phi^{-2} (\nabla^2 V)^2
\Biggl. \Biggr\}\,.
\label{Esea}
\end{eqnarray}
Here, $V=g_vV_0$, $\Phi=M-g_s\phi$ and $\gamma$ is the spin and
isospin degeneracy of the nucleon, i.e., $2N$ if there are $N$ nucleon
species (in the real world $N=2$). The sea energy is obtained by
adding up the terms above. The fourth and higher order terms are
ultraviolet finite as follows from dimensional counting. The first two
terms, being renormalized at zero momentum, do not contain operators
with dimension four or less, such as $\phi^2$, $\phi^4$, or $(\nabla
V)^2$, since they are already accounted for in the bosonic term
$\Gamma_B[\phi,V]$. Note that the theory has been renormalized so that
there are no three- or four-point bosonic interactions in the
effective action at zero momentum~\cite{Se86}.
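To give a feeling for how these expressions are used in practice, the
following sketch (Python, not part of the original calculation)
evaluates $E^{\rm sea}_0+E^{\rm sea}_2$ for an assumed, spherically
symmetric Woods-Saxon shape of the mean fields; the depths and surface
parameters below are purely illustrative.
\begin{verbatim}
# Illustrative evaluation of E0_sea and E2_sea of eq. (Esea) for assumed
# Woods-Saxon mean-field profiles (all parameters hypothetical).
# Units: MeV and fm, hbar*c = 197.327 MeV fm; gamma = 4.
import numpy as np

HBARC, M, gamma = 197.327, 939.0, 4.0
R, a = 3.8, 0.55                    # surface radius and diffuseness (fm)
S0, W0 = 400.0, 320.0               # central g_s*phi and g_v*V_0 (MeV)

r = np.linspace(1e-3, 15.0, 3000)   # radial grid (fm)
ws = 1.0 / (1.0 + np.exp((r - R) / a))
gs_phi, V = S0 * ws, W0 * ws
Phi = M - gs_phi
x = gs_phi / M

d_dr = lambda f: np.gradient(f, r)                     # MeV / fm
integral = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

dens0 = -gamma / (16 * np.pi**2) * M**4 * (
    (Phi / M)**4 * np.log(Phi / M)
    + x - 3.5 * x**2 + 13.0 / 3.0 * x**3 - 25.0 / 12.0 * x**4)
dens2 = gamma / (16 * np.pi**2) * HBARC**2 * (          # gradients carry 1/fm
    2.0 / 3.0 * np.log(Phi / M) * d_dr(V)**2
    - np.log(Phi / M) * d_dr(Phi)**2)

# densities are in MeV^4; divide by (hbar*c)^3 to integrate d^3x in fm^3
E0 = integral(4 * np.pi * r**2 * dens0) / HBARC**3
E2 = integral(4 * np.pi * r**2 * dens2) / HBARC**3
print(f"E0_sea ~ {E0:.1f} MeV, E2_sea ~ {E2:.1f} MeV")
\end{verbatim}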
By definition, the true value of the classical fields
(i.e., the value in the absence of external
sources) is to be found by minimization of the
effective action or, in the stationary case, of the energy
\begin{equation}
E[\phi,V] = E_B[\phi,V]+E_F^{\rm val}[\phi,V] + E_F^{\rm sea}[\phi,V] \,.
\end{equation}
Such minimization yields the equations of motion for the bosonic
fields:
\begin{eqnarray}
(\nabla^2-m_s^2)\phi(x) &=& -g_s\left[\rho_s^{\rm val}(x)+
\rho_s^{\rm sea}(x)\right] \,, \nonumber \\
(\nabla^2-m_v^2)V_0(x) &=& -g_v\left[\rho^{\rm val}(x)+
\rho^{\rm sea}(x) \right] \,.
\label{Poisson-eq}
\end{eqnarray}
Here, $\rho_s(x)=\langle\overline\Psi(x)\Psi(x)\rangle$
is the scalar density and $\rho(x)=\langle\Psi^\dagger(x)
\Psi(x)\rangle$ the baryonic one:
\begin{eqnarray}
\rho_s^{\rm val~(sea)}(x) &=& -\frac{1}{g_s}\frac{\delta E_F^{\rm
val~(sea)}}{\delta \phi(x)}\,, \nonumber \\
\rho^{\rm val~(sea)}(x) &=& +\frac{1}{g_v}\frac{\delta E_F^{\rm
val~(sea)}}{\delta V_0(x)} \,.
\label{densities}
\end{eqnarray}
The set of bosonic and fermionic equations, eqs.~(\ref{Poisson-eq})
and (\ref{Dirac-eq}) respectively, are to be solved self-consistently.
Let us remark that treating the fermionic sea energy using a gradient
or semiclassical expansion is a further approximation on top of the
mean field approximation since it neglects possible shell effects in
the Dirac sea. However, a direct solution of the mean field equations
including renormalization of the sum of single-particle energies would
not give a physically acceptable solution due to the presence of
Landau ghosts. They will be considered in the next section.
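The structure of this self-consistency problem is most easily seen in
its simplest limit, infinite nuclear matter with valence nucleons only,
where the fields are constant and eqs.~(\ref{Poisson-eq}) become
algebraic. The sketch below (Python, purely illustrative and not the
calculation performed in this paper) iterates the resulting gap
equation for the effective mass $\Phi=M-g_s\phi$, using the
dimensionless combination $g_s^2M^2/m_s^2$ at its no-sea fitted value
quoted in section~\ref{IV}.
\begin{verbatim}
# Valence-only nuclear-matter limit of the self-consistency problem:
# M* = M - (C_s^2/M^2) * rho_s(M*), with rho_s the scalar density of a
# filled Fermi sea.  Illustrative sketch; units MeV, hbar*c = 197.327 MeV fm.
import numpy as np
from scipy.integrate import quad

HBARC, M, gamma = 197.327, 939.0, 4.0
kF = 1.3 * HBARC                    # Fermi momentum (MeV)
Cs2 = 357.7                         # g_s^2 M^2 / m_s^2, no-sea fit of sec. IV

def scalar_density(Mstar):
    f = lambda k: k**2 * Mstar / np.sqrt(k**2 + Mstar**2)
    val, _ = quad(f, 0.0, kF)
    return gamma / (2.0 * np.pi**2) * val               # MeV^3

Mstar = M
for it in range(200):                                    # damped fixed point
    new = M - Cs2 / M**2 * scalar_density(Mstar)
    if abs(new - Mstar) < 1e-8:
        break
    Mstar = 0.5 * (Mstar + new)
print(f"M*/M = {Mstar / M:.3f} after {it} iterations")
\end{verbatim}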
At this point it is appropriate to make some comments on
renormalization. As we have said, one can choose different
normalizations for the mesonic fields and there are also several sets
of mesonic masses, namely, on-shell and at zero momentum. If one were
to write the mesonic equations of motion directly, by similarity with
a classical treatment, there would be an ambiguity as to which set
should be used. The effective action treatment makes it clear that the
mesonic field and masses are those at zero momentum. On the other
hand, since we have not included bosonic loops, the fermionic
operators in the Lagrangian are not renormalized and there are no
proper vertex corrections. Thus the nucleon mass $M$, the nuclear
densities $\langle\overline\Psi\Psi\rangle$ and the combinations
$g_s\phi(x)$ and $g_v V_\mu(x)$ are fixed unambiguously in the
renormalized theory. The fermionic energy $E_F[\phi,V]$, the
potentials $\Phi(x)$ and $V(x)$ and the nucleon single particle
orbitals are all free from renormalization ambiguities at leading
order in $1/N$.
\subsection{Fixing of the parameters}
The $\sigma$-$\omega$ and related theories are effective models of
nuclear interaction, and hence their parameters are to be fixed to
experimental observables within the considered approximation. Several
procedures to perform the fixing can be found in the literature
\cite{Ho81,Re89,Se92}; the more sophisticated versions try to adjust,
by minimizing the appropriate $\chi^2$ function, as many experimental
values as possible through the whole nuclear table \cite{Re89}. These
methods are useful when the theory implements enough physical elements
to provide a good description of atomic nuclei. The particular model
we are dealing with can reproduce the main features of the nuclear force,
such as saturation and the correct magic numbers; however, it lacks
many of the important ingredients of nuclear interaction, namely
Coulomb interaction and $\rho$ and $\pi$ mesons. Therefore, we will
use the simple fixing scheme proposed in ref.~\cite{Ho81} for
this model.
Initially there are five free parameters: the nucleon mass ($M$), two
boson Lagrangian masses ($m_s$ and $m_v$) and the corresponding
coupling constants ($g_s$ and $g_v$). The five physical observables to
be reproduced are taken to be the physical nucleon mass, the physical
$\omega$-meson mass $m_\omega$, the saturation properties of nuclear
matter (binding energy per nucleon $B/A$ and Fermi momentum $k_F$) and
the mean quadratic charge radius of $^{40}$Ca. In our approximation,
the equation of state of nuclear matter at zero temperature, and hence
its saturation properties, depends only on the nucleon mass and on
$m_{s,v}$ and $g_{s,v}$ through the combinations~\cite{Se86}
\begin{equation}
C_s^2 = g_s^2 \frac{M^2}{m_s^2}\,, \qquad
C_v^2 = g_v^2 \frac{M^2}{m_v^2}\,.
\end{equation}
At this point, there still remain two parameters to be fixed, e.g.,
$m_v$ and $g_s$. Now we implement the physical $\omega$-meson mass
constraint. From the expression of the $\omega$ propagator at the
leading $1/N$ approximation, we can obtain the value of the physical
$\omega$ pole as a function of the Lagrangian parameters $M$, $g_v$
and $m_v$ or more conveniently as a function of $M$, $C_v$ and $m_v$
(see appendix). Identifying the $\omega$ pole and the physical
$\omega$ mass, and given that $M$ and $C_v$ have already been fixed,
we obtain the value of $m_v$. Finally, the value of $g_s$ is adjusted
to fit the mean quadratic charge radius of $^{40}$Ca. We will refer to
this fixing procedure as the {\em $\omega$-shell scheme}: the name
stresses the correct association between the pole of the
$\omega$-meson propagator and the physical $\omega$ mass. The above
fixing procedure gives different values of $m_s$ and $g_s$ depending
on the order at which the Dirac sea energy is included in the
semiclassical expansion (see section~\ref{IV}).
Throughout the literature the standard fixing procedure when the Dirac
sea is included has been to give to the Lagrangian mass $m_v$ the
value of the physical $\omega$ mass \cite{Re89,Se92} (see, however,
refs.~\cite{Ta91,Ca96a}). Of course, this yields a wrong value for the
position of the $\omega$-meson propagator pole, which is
underestimated. We will refer to this procedure as the {\em naive
scheme}. Note that when the Dirac sea is not included at all, the
right viewpoint is to consider the theory at tree level, and the
$\omega$-shell and the naive schemes coincide.
\section{Landau instability subtraction}
\label{III}
As already mentioned, the $\sigma$-$\omega$ model, and more generally
any Lagrangian which couples bosons with fermions by means of a
Yukawa-like coupling, exhibits a vacuum instability~\cite{Pe87,Co87}.
This instability prevents the actual calculation of physical
quantities beyond the mean field valence approximation in a systematic
way. Recently, however, a proposal by Redmond~\cite{Re58} that
explicitly eliminates the Landau ghost has been implemented to
describe relativistic nuclear matter in a series of
papers~\cite{Ta90,Ta91,TB92}. The main features of such a method are
already contained in the original papers, and many details have also
been discussed. For the sake of clarity, we outline here the method as
it applies to the calculation of Dirac sea effects for closed-shell
finite nuclei.
\subsection{Landau instability}
Since the Landau instability shows up already at zero nuclear density,
we will begin by considering the vacuum of the $\sigma$-$\omega$
theory. On a very general basis, namely, Poincar\'e invariance,
unitarity, causality and uniqueness of the vacuum state, one can show
that the two point Green's function (time ordered product) for a
scalar field admits the K\"all\'en-Lehmann representation~\cite{BD65}
\begin{eqnarray}
D(x'-x) = \int\,d\mu^2\rho(\mu^2)\,D_0(x'-x\, ;\mu^2)\,,
\label{KL}
\end{eqnarray}
where the full propagator in the vacuum is
\begin{eqnarray}
D(x'-x) = -i \langle
0|T\phi(x')\phi(x)|0\rangle\,,
\end{eqnarray}
and the free propagator reads
\begin{eqnarray}
D_0(x'-x\,;\mu^2) = \int\,{d^4p\over (2\pi)^4}{ e^{-ip(x'-x)}\over
p^2-\mu^2+i\epsilon }\,.
\end{eqnarray}
The spectral density $\rho(\mu^2)$ is
defined as
\begin{eqnarray}
\rho(q^2) = (2\pi)^3\sum_n\delta^4(p_n-q)|\langle
0|\phi(0)|n\rangle|^2\,.
\end{eqnarray}
It is non negative, Lorentz invariant and vanishes for space-like four
momentum $q$.
The K\"all\'en-Lehmann representability is a necessary condition for
any acceptable theory, yet it is violated by the $\sigma$-$\omega$
model when the meson propagators are approximated by their leading
$1/N$ term. It is not clear whether this failure is tied to the theory
itself or is an artifact of the approximation ---it is well-known
that approximations to the full propagator do not necessarily preserve
the K\"all\'en-Lehmann representability---. The former possibility
would suppose a serious obstacle for the theory to be a reliable one.
In the above mentioned approximation, eq.~(\ref{KL}) still holds both for the
$\sigma$ and the $\omega$ cases (in the latter case with obvious
modification to account for the Lorentz structure)
but the spectral density gets modified to be
\begin{eqnarray}
\rho(\mu^2) = \rho^{\rm KL}(\mu^2) - R_G\delta(\mu^2+M_G^2)
\end{eqnarray}
where $\rho^{\rm KL}(\mu^2)$ is a physically acceptable spectral
density, satisfying the general requirements of a quantum field
theory. On the other hand, however, the extra term spoils these
general principles. The residue $-R_G$ is negative, thus indicating
the appearance of a Landau ghost state which contradicts the usual
quantum mechanical probabilistic interpretation. Moreover, the delta
function is located at the space-like squared four momentum $-M_G^2$
indicating the occurrence of a tachyonic instability. As a
perturbative analysis shows, the dependence of $R_G$ and $M_G$ on
the fermion-meson coupling constant $g$ in the weak coupling regime is
$R_G\sim g^{-2}$ and $M_G^2 \sim 4M^2\exp(4\pi^2/g^2)$, with $M$ the
nucleon mass. Therefore the perturbative content of $\rho(\mu^2)$ and
$\rho^{\rm KL}(\mu^2)$ is the same, i.e., both quantities coincide
order by order in a power series expansion of $g$ keeping $\mu^2$
fixed. This can also be seen in the propagator form of the previous
equation
\begin{eqnarray}
D(p) = D^{\rm KL}(p) - {R_G\over p^2+M_G^2}\,.
\label{Delta}
\end{eqnarray}
For fixed four momentum, the ghost term vanishes as
$\exp(-4\pi^2/g^2)$ when the coupling constant goes to zero. As noted
by Redmond~\cite{Re58}, it is therefore possible to modify the theory
by adding a suitable counterterm to the action that exactly cancels
the ghost term in the meson propagator without changing the
perturbative content of the theory. In this way the full meson
propagator becomes $D^{\rm KL}(p)$ which is physically acceptable and
free from vacuum instability at leading order in the $1/N$ expansion.
It is not presently known whether the stability problems of the
original $\sigma$-$\omega$ theory are intrinsic or due to the
approximation used, thus Redmond's procedure can be interpreted either
as a fundamental change of the theory or as a modification of the
approximation scheme. Although both interpretations use the
perturbative expansion as a constraint, it is not possible, at the
present stage, to decide between them. It should be made quite clear
that in spite of the seeming arbitrariness of the no-ghost
prescription, the original theory itself was ambiguous regarding its
non perturbative regime. In fact, being a non asymptotically free
theory, it is not obvious how to define it beyond finite order
perturbation theory. For the same reason, it is not Borel summable and
hence additional prescriptions are required to reconstruct the Green's
functions from perturbation theory to all orders. As an example, if
the nucleon self energy is computed at leading order in a $1/N$
expansion, the existence of the Landau ghost in the meson propagator
gives rise to a pole ambiguity. This is unlike physical time-like
poles, which can be properly handled by the customary $+i\epsilon$
rule, and thus an additional ad hoc prescription is needed. This
ambiguity reflects in turn in the Borel transform of the perturbative
series; the Borel transform presents a pole, known as a renormalon in
the literature~\cite{Zi79}. In recovering the sum of the perturbative
series through inverse Borel transformation a prescription is then
needed, and Redmond's proposal provides a particularly suitable way of
fixing such ambiguity. Nevertheless, it should be noted that even if
Redmond's prescription turns out to be justified, there still remains
the problem of how to extend it to the case of three- and more point
Green's functions, since the corresponding K\"all\'en-Lehmann
representations have been less studied.
\subsection{Instability subtraction}
To implement Redmond's prescription in detail we start with the
zero-momentum renormalized propagator in terms of the proper
self-energy for the scalar field (a similar construction can be
carried out for the vector field as well),
\begin{eqnarray}
D_s(p^2) = (p^2-m_s^2 - \Pi_s(p^2))^{-1}\,,
\end{eqnarray}
where $m_s$ is the zero-momentum meson mass and the corresponding
renormalization conditions are $\Pi_s(0)= \Pi_s^\prime(0)=0$. The
explicit formulas for the scalar and vector meson self energies are
given in the appendix. Of course, $D_s(p^2)$ is just the inverse of
the quadratic part of the effective action $K_s(p^2)$. According to
the previous section, the propagator presents a tachyonic pole. Since
the ghost subtraction is performed at the level of the two-point
Green's function, it is clear that the corresponding Lagrangian
counterterm must involve a quadratic operator in the mesonic fields.
The counterterm kernel $\Delta K_s(p^2)$ must be such that it cancels the
ghost term in the propagator $D_s(p^2)$ in eq.~(\ref{Delta}). The
subtraction does not modify the position of the physical meson pole
nor its residue, but it will change the zero-momentum parameters and
also the off-shell behavior. Both features are relevant to nuclear
properties. This will be discussed further in the next section.
Straightforward calculation yields
\begin{eqnarray}
\Delta K_s(p^2) =
-{1\over D_s(p^2)}{R_G^s\over R_G^s+(p^2+{M^s_G}^2)D_s(p^2)} \,.
\label{straightforward}
\end{eqnarray}
As stated, this expression vanishes as $\exp(-4\pi^2/g_s^2)$ for small
$g_s$ at fixed momentum. Therefore it is a genuine non perturbative
counterterm. It is also non local as it depends in a non polynomial
way on the momentum. In any case, it does not introduce new
ultraviolet divergences at the one fermion loop level. However, it is
not known whether the presence of this term spoils any general
principle of quantum field theory.
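The algebra behind eq.~(\ref{straightforward}) can be checked on a toy
propagator. The sketch below (Python, with purely illustrative numbers)
builds $D_s$ from a single physical pole plus a ghost term as in
eq.~(\ref{Delta}), forms the counterterm kernel, and verifies
numerically that the corrected kernel reproduces the ghost-free
propagator.
\begin{verbatim}
# Toy check that the counterterm kernel of eq. (straightforward) removes the
# ghost pole while leaving the physical pole untouched (illustrative numbers).
import numpy as np

m2, MG2, RG = 0.25, 9.0, 0.3          # toy mass^2, ghost mass^2 and residue

D_KL = lambda p2: 1.0 / (p2 - m2)                      # ghost-free propagator
D_s  = lambda p2: D_KL(p2) - RG / (p2 + MG2)           # eq. (Delta)
def Delta_K(p2):                                       # eq. (straightforward)
    Ds = D_s(p2)
    return -(1.0 / Ds) * RG / (RG + (p2 + MG2) * Ds)

p2 = np.linspace(-20.0, -0.5, 400)                     # spacelike region
corrected = 1.0 / (1.0 / D_s(p2) + Delta_K(p2))
print(np.max(np.abs(corrected - D_KL(p2))))            # ~ round-off only
\end{verbatim}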
Proceeding in a similar way with the $\omega$-field $V_\mu(x)$, the
following change in the total original action is induced
\begin{eqnarray}
\Delta S = {1\over 2}\int{d^4p\over (2\pi)^4}\phi(-p)
\Delta K_s(p^2)\phi(p) - {1\over 2}\int{d^4p\over
(2\pi)^4}V_\mu(-p)\Delta K^{\mu\nu}_v(p^2)V_\nu(p) \,,
\label{Delta S}
\end{eqnarray}
where $\phi(p)$ and $V_\mu(p)$ are the Fourier transform of the scalar
and vector fields in coordinate space, $\phi(x)$ and $V_\mu(x)$
respectively. Note that at tree-level for bosons, as we are
considering throughout, this modification of the action is to be added
directly to the effective action ---in fact, this is the simplest way
to derive eq.~(\ref{straightforward})---.
Therefore, in the case of static fields, the total mean field energy
after ghost elimination reads
\begin{equation}
E= E_F^{\rm val} + E_F^{\rm sea} + E_B + \Delta E \,,
\end{equation}
where $E_F^{\rm val}$, $E_F^{\rm sea}$ and $E_B$ were given in
section~\ref{II} and
\begin{equation}
\Delta E[\phi,V] = {1\over 2}\int\,d^3x\phi(x)
\Delta K_s(\nabla^2)\phi(x) - {1\over 2}\int\,d^3x
V_0(x)\Delta K^{00}_v(\nabla^2)V_0(x) \,.
\end{equation}
One can proceed by minimizing the mean field total energy as a
functional of the bosonic and fermionic fields. This yields the usual
set of Dirac equations for the fermions, eqs.~(\ref{Dirac-eq}) and
modifies the left-hand side of the bosonic eqs.~(\ref{Poisson-eq}) by
adding a linear non-local term. This will be our starting point to
study the effect of eliminating the ghosts in the description of
finite nuclei. We note that the instability is removed at the
Lagrangian level, i.e., the non-local counterterms are taken to be new
terms of the starting Lagrangian which is then used to describe the
vacuum, nuclear matter and finite nuclei. Therefore no new
prescriptions are needed in addition to Redmond's to specify how the
vacuum and the medium parts of the effective action are modified by
the removal of the ghosts.
So far, the new counterterms, although induced through the Yukawa
coupling with fermions, have been treated as purely bosonic terms.
Therefore, they do not contribute directly to bilinear fermionic
operators such as baryonic and scalar densities. An alternative
viewpoint would be to take them rather as fermionic terms, i.e., as a
(non-local and non-perturbative) redefinition of the fermionic
determinant. The energy functional, and thus the mean field equations
and their solutions, coincide in the bosonic and fermionic
interpretations of the new term, but the baryonic densities and
related observables would differ, since they pick up a new
contribution given the corresponding formulas similar to
eqs.~(\ref{densities}). Ambiguities and redefinitions are ubiquitous
in quantum field theories, due to the well-known ultraviolet
divergences. However, in well-behaved theories the only freedom
allowed in the definition of the fermionic determinant comes from
adding counterterms which are local and polynomial in the
fields. Since the new counterterms induced by Redmond's method are not
of this form, we will not pursue such alternative point of view in
what follows. Nevertheless, a more compelling argument would be needed
to make a reliable choice between the two possibilities.
\subsection{Application to finite nuclei}
In this section we will take advantage of the smooth behavior of the
mesonic mean fields in coordinate space which allows us to apply a
derivative or low momentum expansion. The quality of the gradient
expansion can be tested a posteriori by a direct computation. The
practical implementation of this idea consists of treating the term
$\Delta S$ by expanding each of the kernels $\Delta K(p^2)$ in a power
series of the momentum squared around zero
\begin{equation}
\Delta K(p^2) = \sum_{n\ge 0}\Delta K_{2n}\, p^{2n}\,.
\end{equation}
The first two terms are given explicitly by
\begin{eqnarray}
\Delta K_0 & = & -\frac{m^4R_G}{M_G^2-m^2R_G} ,\nonumber\\
\Delta K_2 & = & \frac{m^2 R_G(m^2-m^2R_G+2M_G^2)}{(M_G^2-m^2R_G)^2}.
\end{eqnarray}
The explicit expressions of the tachyonic pole
parameters $M_G$ and $R_G$ for each meson can be found below.
Numerically, we have found that the fourth and higher orders in this
gradient expansion are negligible compared to the zeroth- and
second-order terms. In fact, in ref.~\cite{Ca96a} the same behavior was
found for the correction to the Dirac sea contribution to the binding
energy of a nucleus. As a result, even for light nuclei, $E_4^{\rm
sea}$ in eq.~(\ref{Esea}) can be safely neglected. Furthermore, it has
been shown~\cite{Ca96} that the fourth order term in the gradient
expansion of the valence energy, if treated semiclassically, is less
important than shell effects. So, it seems to be a general rule that,
for the purpose of describing static nuclear properties, only the two
lowest order terms of a gradient expansion need to be considered. We
warn, however, that the convergence of the gradient or semiclassical
expansion is not the same as converging to the exact mean field
result, since there could be shell effects not accounted for by this
expansion at any finite order. Such effects, certainly exist in the
valence part~\cite{Ca96}. Even in a seemingly safe case such as infinite
nuclear matter, where only the zeroth order has a non vanishing
contribution, something is left out by the gradient expansion since
the exact mean field solution does not exist due to the Landau ghost
instability (of course, the situation may change if the Landau pole is
removed). In other words, although a gradient expansion might appear
to be exact in the nuclear matter case, it hides the very existence of
the vacuum instability.
From the previous discussion it follows that the whole effect of the
ghost subtraction is represented by adding a term $\Delta S$ to the
effective action with same form as the bosonic part of the original
theory, $\Gamma_B[\phi,V]$ in eq.~(\ref{GammaB}). This amounts to a
modification of the zero-momentum parameters of the effective
action. The new zero-momentum scalar field (i.e., with canonical
kinetic energy), mass and coupling constant in terms of those of the
original theory are given by
\begin{eqnarray}
{\wh\phi}(x) &=& (1+\Delta K^s_2)^{1/2}\phi(x)\,, \nonumber \\
\wh{m}_s &=& \left(\frac{m_s^2-\Delta K^s_0}{1+\Delta
K^s_2}\right)^{1/2} \,, \\
\wh{g}_s &=& (1+\Delta K^s_2)^{-1/2}g_s \,. \nonumber
\end{eqnarray}
The new coupling constant is obtained by recalling that $g_s\phi(x)$
should be invariant. Similar formulas hold for the vector meson. With
these definitions (and keeping only $\Delta K_{s,v}(p^2)$ till second
order in $p^2$) one finds\footnote{Note that $E_{B,F}[~]$ refer to the
functionals (the same on both sides of the equations) and not to their
values, as is also usual in the physics literature.}
\begin{eqnarray}
E_B[\wh{\phi},\wh{V};\wh{m}_s,\wh{m}_v] &=&
E_B[\phi,V; m_s,m_v] + \Delta E[\phi,V; m_s,m_v] \,, \\
E_F[\wh{\phi},\wh{V};\wh{g}_s,\wh{g}_v] &=&
E_F[\phi,V;g_s,g_v] \,. \nonumber
\end{eqnarray}
The bosonic equations for the new meson fields after ghost removal are
hence identical to those of the original theory using
\begin{eqnarray}
\wh{m}^2 &=& m^2 \, M_G^2 \, \frac{M_G^2 - m^2\, R_G}{M_G^4 + m^4 \, R_G }
\,,\nonumber \\
\wh{g}^2 &=& g^2 \, \frac{(M_G^2 - m^2\, R_G)^2}{M_G^4 + m^4 \, R_G } \,,
\label{mg}
\end{eqnarray}
as zero-momentum masses and coupling constants respectively. In the
limit of large ghost masses or vanishing ghost residues, the
reparameterization becomes trivial, as it should be. Let us note that
although the zero-momentum parameters of the effective action
$\wh{m}_{s,v}$ and $\wh{g}_{s,v}$ are the relevant ones for nuclear
structure properties, the parameters $m_{s,v}$ and $g_{s,v}$ are the
(zero-momentum renormalized) Lagrangian parameters and they are also
needed, since they are those appearing in the Feynman rules in a
perturbative treatment of the model. Of course, both sets of parameters
coincide when the ghosts are not removed or if there were no ghosts in
the theory.
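A quick numerical cross-check (Python, with hypothetical numbers rather
than the fitted parameters of section~\ref{IV}) shows that
eq.~(\ref{mg}) is indeed the composition of the zeroth- and
second-order kernels $\Delta K_0$ and $\Delta K_2$ given above with the
rescaling of the fields.
\begin{verbatim}
# Check that m_hat and g_hat of eq. (mg) follow from Delta K_0 and Delta K_2
# together with the field rescaling (scalar channel; hypothetical numbers).
m, g, MG, RG = 500.0, 9.0, 2500.0, 0.05

DK0 = -m**4 * RG / (MG**2 - m**2 * RG)
DK2 = m**2 * RG * (m**2 - m**2 * RG + 2.0 * MG**2) / (MG**2 - m**2 * RG)**2

m_hat2_a = (m**2 - DK0) / (1.0 + DK2)                  # from Delta K_{0,2}
m_hat2_b = m**2 * MG**2 * (MG**2 - m**2 * RG) / (MG**4 + m**4 * RG)  # eq. (mg)
g_hat2_a = g**2 / (1.0 + DK2)
g_hat2_b = g**2 * (MG**2 - m**2 * RG)**2 / (MG**4 + m**4 * RG)

print(m_hat2_a - m_hat2_b, g_hat2_a - g_hat2_b)        # both ~ 0
\end{verbatim}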
To finish this section we give explicitly the fourth order
coefficient in the gradient expansion of $\Delta E$, taking into account
the rescaling of the mesonic fields, namely,
\begin{equation}
\frac{\Delta K_4}{1+\Delta K_2}=
-\frac{R_G (M_G^2 + m^2)^2}{
(M_G^4 + m^4 \, R_G)\, (M_G^2 - m^2 \, R_G)}
- \frac{\gamma g^2}{\alpha \, \pi^2 \, M^2} \,
\frac{m^4 \, R_G^2 - 2 \, m^2 \, M_G^2 \, R_G }
{M_G^4 + m^4 \, R_G } \,,
\end{equation}
where $\alpha$ is $160$ for the scalar meson and $120$ for the vector
meson. As already stated, for typical mesonic profiles the
contribution of these fourth order terms is found to be numerically
negligible. Simple order of magnitude estimates show that squared
gradients are suppressed by a factor $(RM_G)^{-2}$, $R$ being the
nuclear radius, and therefore higher orders can also be
neglected. That the low momentum region is the one relevant to nuclear
physics can also be seen from the kernel $K_s(p^2)$, shown in
fig.~\ref{f-real}. From eq.~(\ref{Delta S}), this kernel is to be
compared with the function $\phi(p)$ that has a width of the order of
$R^{-1}$. It is clear from the figure that at this scale all the
structure of the kernel at higher momenta is irrelevant to $\Delta E$.
\subsection{Fixing of the parameters after ghost subtraction}
As noted in section \ref{II}, the equation of state at zero
temperature for nuclear matter depends only on the dimensionless
quantities $C^2_s$ and $C_v^2$, that now become
\begin{equation}
C_s^2 = \wh{g}_s^2 \, \frac{M^2}{\wh{m}_s^2}\,,\qquad
C_v^2 = \wh{g}_v^2 \, \frac{M^2}{\wh{m}_v^2}\,.
\label{CsCv}
\end{equation}
Fixing the saturation density and binding energy to their observed
values yields, of course, the same numerical values for $C_s^2$ and
$C_v^2$ as in the original theory. After this is done, all static
properties of nuclear matter are determined and thus they are
insensitive to the ghost subtraction. Therefore, at leading order in
the $1/N$ expansion, to see any effect one should study either
dynamical nuclear matter properties as done in ref.~\cite{Ta91} or
finite nuclei as we do here.
It is remarkable that if all the parameters of the model were to be
fixed exclusively by a set of nuclear structure properties, the ghost
subtracted and the original theories would be indistinguishable
regarding any other static nuclear prediction, because bosonic and
fermionic equations of motion have the same form in both
theories. They would differ however far from the zero four momentum
region where the truncation of the ghost kernels $\Delta K(p^2)$ at
order $p^2$ is no longer justified. In practice, the predictions will
change after ghost removal because the $\omega$-meson mass is quite large
and is one of the observables to be used in the fixing of the
parameters.
To fix the parameters of the theory we choose the same observables as
in section \ref{II}. Let us consider first the vector meson
parameters $\wh{m}_v$ and $\wh{g}_v$. We proceed as follows:
1. We choose a trial value for $g_v$ (the zero-momentum coupling
constant of the original theory). This value and the known physical
values of the $\omega$-meson and nucleon masses, $m_\omega$ and $M$
respectively, determines $m_v$ (the zero-momentum mass of the original
theory), namely
\begin{equation}
m_v^2 = m_\omega^2 + \frac{\gamma \, g_v^2}{8 \, \pi^2}\, M^2\,
\left\{ \frac{4}{3} + \frac{5}{9}\, \frac{m_\omega^2}{M^2}
- \frac{2}{3} \left(2 + \frac{m_\omega^2}{M^2}\right)
\sqrt{\frac{4 \, M^2}{m_\omega^2} - 1 } \,
\arcsin\left(\frac{m_\omega}{2 \, M}\right)\right\} \,.
\end{equation}
(This, as well as the formulas given below, can be deduced from those
in the appendix.)
2. $g_v$ and $m_v$ provide the values of the tachyonic
parameters $R_G^v$ and $M_G^v$. They are given by
\begin{eqnarray}
M_G^v &=& \frac{2M}{\sqrt{\kappa_v^2-1}} \nonumber \\
\frac{1}{R_G^v} &=& -1 + \frac{\gamma \, g_v^2}{24\,\pi^2}
\left\{ \left(\frac{\kappa_v^3}{4} + \frac{3}{4\,\kappa_v}\right)
\log \frac{\kappa_v + 1}{\kappa_v - 1} - \frac{\kappa_v^2}{2} -
\frac{1}{6}\right\}\,,
\label{RGv}
\end{eqnarray}
where the quantity $\kappa_v$ is the real solution of the following
equation (there is an imaginary solution which corresponds to the
$\omega$-meson pole)
\begin{equation}
1 + \frac{m_v^2}{4 \, M^2} \, (\kappa_v^2-1)
+\frac{\gamma \, g_v^2}{24\,\pi^2}
\left\{ \left(\frac{\kappa_v^3}{2} - \frac{3\, \kappa_v}{2}\right)
\log \frac{\kappa_v + 1}{\kappa_v - 1} - \kappa_v^2 +
\frac{8}{3}\right\} =0 \,.
\label{kappav}
\end{equation}
3. Known $g_v$, $m_v$, $M_G^v$ and $R_G^v$, the values of $\wh{m}_v$ and
$\wh{g}_v$ are obtained from eqs.~(\ref{mg}). They are then inserted
in eqs.~(\ref{CsCv}) to yield $C^2_v$. If necessary, the initial trial
value of $g_v$ should be readjusted so that the value of $C^2_v$ so
obtained coincides with that determined by the saturation properties
of nuclear matter.
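The forward map implied by steps 1--3 is easily coded; the sketch below
(Python, not the authors' code) returns $C_v^2$ for a trial $g_v$, and
a standard root finder on $g_v$ then closes the loop against the
nuclear-matter value.
\begin{verbatim}
# Forward map of the omega-channel fixing procedure (steps 1-3): trial g_v ->
# m_v -> (kappa_v, M_G^v, R_G^v) -> hatted parameters of eq. (mg) -> C_v^2.
# Illustrative sketch only.
import numpy as np
from scipy.optimize import brentq

M, m_omega, gamma = 939.0, 783.0, 4.0        # MeV
CV2_TARGET = 147.5                           # nuclear-matter fit with the sea

def mv2_of_gv(gv):
    x = m_omega**2 / M**2
    brace = (4.0/3.0 + 5.0/9.0*x
             - 2.0/3.0*(2.0 + x)*np.sqrt(4.0/x - 1.0)*np.arcsin(m_omega/(2*M)))
    return m_omega**2 + gamma*gv**2/(8*np.pi**2)*M**2*brace

def kappa_v(gv, mv2):                        # real root of eq. (kappav)
    def f(k):
        log = np.log((k + 1.0)/(k - 1.0))
        return (1.0 + mv2/(4*M**2)*(k**2 - 1.0)
                + gamma*gv**2/(24*np.pi**2)
                * ((k**3/2.0 - 1.5*k)*log - k**2 + 8.0/3.0))
    return brentq(f, 1.0 + 1e-9, 50.0)       # bracket adequate for g_v ~ 5-20

def Cv2_of_gv(gv):
    mv2 = mv2_of_gv(gv)
    k = kappa_v(gv, mv2)
    MG2 = (2.0*M)**2/(k**2 - 1.0)
    log = np.log((k + 1.0)/(k - 1.0))
    RG = 1.0/(-1.0 + gamma*gv**2/(24*np.pi**2)
              * ((k**3/4.0 + 3.0/(4.0*k))*log - k**2/2.0 - 1.0/6.0))
    mv2_hat = mv2*MG2*(MG2 - mv2*RG)/(MG2**2 + mv2**2*RG)      # eq. (mg)
    gv2_hat = gv**2*(MG2 - mv2*RG)**2/(MG2**2 + mv2**2*RG)
    return gv2_hat*M**2/mv2_hat

print(Cv2_of_gv(10.0))                       # forward map at a trial g_v
# gv = brentq(lambda g: Cv2_of_gv(g) - CV2_TARGET, 5.0, 20.0)  # closes the fit
\end{verbatim}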
The procedure to fix the parameters $m_s$ and $g_s$ is similar but
slightly simpler since the physical mass of the scalar meson
$m_\sigma$ is not used in the fit. Some trial values for $m_s$ and
$g_s$ are proposed. This allows one to compute $M_G^s$ and $R_G^s$ by
means of the formulas
\begin{eqnarray}
M_G^s &=& \frac{2M}{\sqrt{\kappa_s^2-1}} \nonumber \\
\frac{1}{R_G^s} &=& -1 -
\frac{\gamma \, g_s^2}{16\,\pi^2}
\left\{ \left(\frac{\kappa_s^3}{2} \, - \frac{3\,\kappa_s}{2}\right)
\log \frac{\kappa_s + 1}{\kappa_s - 1}
- \kappa_s^2 + \frac{8}{3}\right\}\,,
\label{RGs}
\end{eqnarray}
where $\kappa_s$ is the real solution of
\begin{equation}
1 + \frac{m_s^2}{4 \, M^2} \, (\kappa_s^2-1)
- \frac{\gamma \, g_s^2}{16\,\pi^2}
\left\{ \kappa_s^3 \, \log \frac{\kappa_s + 1}{\kappa_s - 1}
- 2 \, \kappa_s^2 - \frac{2}{3}\right\} = 0\,.
\label{kappas}
\end{equation}
One can then compute $\wh{m}_s$ and $\wh{g}_s$ and thus $C_s^2$ and
the mean quadratic charge radius of $^{40}$Ca. The initial values of
$m_s$ and $g_s$ should be adjusted to reproduce these two quantities.
We will refer to the set of masses and coupling constants so obtained
as the {\em no-ghost scheme} parameters.
\section{Numerical results and discussion}
\label{IV}
As explained in section~\ref{II}, the parameters of the theory are
fitted to five observables. For the latter we take the following
numerical values: $M=939$~MeV, $m_\omega=783$~MeV, $B/A=15.75$~MeV,
$k_F=1.3$~fm$^{-1}$ and $3.82$~fm for the mean quadratic charge radius
of $^{40}$Ca.
If the Dirac sea is not included at all, the numerical values that we
find for the nuclear matter combinations $C_s^2$ and $C_v^2$ are
\begin{equation}
C_s^2 = 357.7\,, \qquad C_v^2 = 274.1
\end{equation}
The corresponding Lagrangian parameters are shown in
table~\ref{t-par-1}. There we also show $m_\sigma$ and $m_\omega$ that
correspond to the position of the poles in the propagators after
including the one-loop meson self energy. They are an output of the
calculation and are given for illustration purposes.
When the Dirac sea is included, nuclear matter properties fix the
following values
\begin{equation}
C_s^2 = 227.8\,, \qquad C_v^2 = 147.5
\end{equation}
Note that in nuclear matter only the zeroth order $E_0^{\rm sea}$ is
needed in the gradient expansion of the sea energy, since the meson
fields are constant. The (zero momentum renormalized) Lagrangian meson
masses $m_{s,v}$ and coupling constants $g_{s,v}$ are shown in
table~\ref{t-par-1} in various schemes, namely, $\omega$-shell,
no-ghost and naive schemes, previously defined. The scalar meson
parameters differ if the Dirac sea energy is included at zeroth order
or at all orders (in practice zeroth plus second order) in the
gradient expansion. For the sake of completeness, both possibilities
are shown in the table. The numbers in brackets in the no-ghost scheme
are the zero-momentum parameters of the effective action,
$\wh{m}_{s,v}$ and $\wh{g}_{s,v}$ (in the other schemes they coincide
with the Lagrangian parameters). Again $m_\sigma$ and $m_\omega$ refer
to the scalar and vector propagator-pole masses after including the
one fermion loop self energy for each set of Lagrangian parameters.
Table~\ref{t-par-2} shows the ghost masses and residues corresponding
to the zero-momentum renormalized propagators. The no-ghost scheme
parameters have been used.
The binding energies per nucleon (without center of mass corrections)
and mean quadratic charge radii (without convolution with the nucleon
form factor) of several closed-shell nuclei are shown in
tables~\ref{t-dat-1} and \ref{t-dat-2} for the $\omega$-shell and for
the naive and no-ghost schemes (these two schemes give the same
numbers), as well as for the case of not including the Dirac sea.
The experimental data are taken from refs.~\cite{Ja74,Va72,Wa88a}.
From table~\ref{t-par-1} it follows that the zero-momentum vector meson
mass $m_v$ in the $\omega$-shell scheme is considerably larger than the
physical mass. This is somewhat unexpected. Let us recall that the
naive treatment, which neglects the meson self energy, is the most
used in practice. It has been known for a long time~\cite{Pe86,FH88}
that the $\omega$-shell scheme is, as a matter of principle, the
correct procedure but on the basis of rough estimates it was assumed
that neglecting the meson self energy would be a good approximation
for the meson mass. We find here that this is not so.
Regarding the consequences of removing the ghost, we find in
table~\ref{t-par-1} that the effective parameters $\wh{m}_{s,v}$ and
$\wh{g}_{s,v}$ in the no-ghost scheme are similar, within a few per
thousand, to those of the naive scheme. This similarity reflects in
turn on the predicted nuclear properties: the results shown in
tables~\ref{t-dat-1} and \ref{t-dat-2} for the no-ghost scheme
coincide, within the indicated precision, with those of the naive
scheme (not shown in the table). It is striking that the parameters
resulting from such a sophisticated fitting procedure, namely the
no-ghost scheme, so closely resemble the parameters corresponding to the
naive treatment. We believe this result to be rather remarkable, for it
justifies a posteriori the by now traditional calculations made with
the naive scheme.
The above observation is equivalent to the fact that the zero-momentum
masses, $\wh{m}_{s,v}$, and the propagator-pole masses
$m_{\sigma,\omega}$ are very similar in the no-ghost scheme. This
implies that the effect of removing the ghosts cancels to a large
extent with that introduced by the meson self energies. Note that
separately the two effects are not small; as was noted above $m_v$ is
much larger than $m_\omega$ in the $\omega$-shell scheme. To interpret
this result, it will be convenient to recall the structure of the
meson propagators. In the leading $1/N$ approximation, there are
three kinds of states that can be created on the vacuum by the meson
fields. Correspondingly, the spectral density functions $\rho(q^2)$
have support in three clearly separated regions, namely, at the ghost
mass squared (in the Euclidean region), at the physical meson mass
squared, and above the $N\overline{N}$ pair production threshold
$(2M)^2$ (in the time-like region). The full meson propagator is
obtained by convolution of the spectral density function with the
massless propagator $(q^2+i\epsilon)^{-1}$ as follows from the
K\"alle\'en-Lehmann representation, eq.~(\ref{KL}). The large
cancelation found after removing the ghosts leads to the conclusion
that, in the zero-momentum region, most of the correction induced by
the fermion loop on the meson propagators, and thereby on the
quadratic kernels $K(p^2)$, is spurious since it is due to unphysical
ghost states rather than to virtual $N\overline{N}$ pairs. This can
also be seen from figs.~\ref{f-real} and \ref{f-imaginary}. There, we
represent the real and imaginary parts of $K_s(p^2)$ respectively, in
three cases, namely, before ghost elimination, after ghost elimination
and the free inverse propagator. In all three cases the slope of the
real part at zero momentum is equal to one and the no-ghost (sea 2nd)
set of parameters from table~\ref{t-par-1} has been used. We note the
strong resemblance of the free propagator and the ghost-free
propagator below threshold. A similar result is obtained for the
vector meson.
One may wonder how these conclusions reflect on the sea energy. Given
that we have found that most of the fermion loop is spurious in the
meson self energy, it seems necessary to revise the sea energy as well
since it has the same origin. Technically, no such problem appears in
our treatment. Indeed the ghost is found in the fermion loop attached
to two meson external legs, i.e., terms quadratic in the fields.
However, the sea energy used, namely, $E_0^{\rm sea}+E_2^{\rm sea}$,
does not contain such terms. Quadratic terms would correspond to a
mass term in $E_0^{\rm sea}$ and a kinetic energy term in $E_2^{\rm
sea}$, but they are absent from the sea energy due to the
zero-momentum renormalization prescription used. On the other hand,
terms with more than two gradients were found to be
negligible~\cite{Ca96a}. Nevertheless, there still exists the
possibility of ghost-like contributions in vertex functions
corresponding to three or more mesons, similar to the spurious
contributions existing in the two-point function. In this case the
total sea energy would have to be reconsidered. The physically
acceptable dispersion relations for three or more fields have been
much less studied in the literature hence no answer can be given to
this possibility at present.
\section{Summary and conclusions}
\label{V}
We summarize our points. In the present paper, we have studied the
consequences of eliminating the vacuum instabilities which take place
in the $\sigma$-$\omega$ model. This has been done using Redmond's
prescription which imposes the validity of the K\"all\'en-Lehmann
representation for the two-point Green's functions. We have discussed
possible interpretations to such method and have given plausibility
arguments to regard Redmond's method as a non perturbative and non
local modification of the starting Lagrangian.
Numerically we have found that, contrary to the naive expectation, the
effect of including fermionic loop corrections to the mesonic
propagators ($\omega$-shell scheme) is not small. However, it largely
cancels with that of removing the unphysical Landau poles. A priori,
this is a rather unexpected result which in fact seems to justify
previous calculations carried out in the literature using a naive
scheme. Actually, as compared to that scheme and after proper
readjustment of the parameters to selected nuclear matter and finite
nuclei properties, the numerical effect becomes rather moderate on
nuclear observables. The two schemes, naive and no-ghost, are
completely different beyond the zero four momentum region, however,
and for instance predict different values for the vector meson mass.
Therefore it seems that in this model most of the fermionic loop
contribution to the meson self energy is spurious. The inclusion of
the fermionic loop in the meson propagator can only be regarded as an
improvement if the Landau ghost problem is dealt with simultaneously.
We have seen that the presence of Landau ghosts does not reflect on
the sea energy but it is not known whether there are other
spurious ghost-like contributions coming from three or higher point
vertex functions induced by the fermionic loop.
\section*{Acknowledgments}
This work is supported in part by funds provided by the U.S.
Department of Energy (D.O.E.) under cooperative research agreement
\#DF-FC02-94ER40818, Spanish DGICYT grant no. PB92-0927 and Junta de
Andaluc\'{i}a grant no. FQM0225. One of us (J.C.) acknowledges the
Spanish Ministerio de Educaci\'on y Ciencia for a research grant.
\subsection*{1. Introduction}
The connection of positive knots with transcendental numbers, via the
counterterms of quantum field theory, proposed in~\cite{DK1} and
developed in~\cite{DK2}, has been vigorously tested against
previous~\cite{GPX,DJB} and new~\cite{BKP} calculations,
entailing knots with up to 11 crossings, related by counterterms with
up to 7 loops to numbers that are irreducible
multiple zeta values (MZVs)~\cite{DZ,LM}.
Cancellations of transcendentals in gauge
theories have been illuminated by knot theory~\cite{BDK}. All-order results,
from large-$N$ analyses~\cite{BGK} and Dyson-Schwinger methods~\cite{DKT},
have further strengthened the connection of knot theory and number theory,
via field theory. A striking feature of this connection is that the
first irreducible MZV of depth 2 occurs at weight 8~\cite{DJB,BBG}, in
accord with the appearance of the first positive 3-braid knot at crossing
number 8. Likewise the first irreducible MZV of depth 3 occurs at weight
11~\cite{BG}, matching the appearance of the first positive 4-braid
at 11 crossings, obtained from skeining link diagrams that encode momentum
flow in 7-loop counterterms~\cite{BKP}. Moreover, the
investigations in~\cite{BGK} led to a new discovery at weight 12,
where it was found that the reduction of MZVs first entails alternating
Euler sums. The elucidation of this phenomenon
resulted in an enumeration~\cite{EUL} of irreducible Euler sums
and prompted intensive searches for evaluations of sums of
arbitrary depth~\cite{BBB}. A review of all these developments is
in preparation~\cite{DK}.
This paper pursues the connection to 8 and 9 loops, entailing knots
with up to 15 crossings.
In Section~2, we enumerate irreducible MZVs by weight.
Section~3 reports calculations of Feynman diagrams that yield
transcendental knot-numbers entailing MZVs up to weight 15.
In Section~4 we enumerate positive knots, up to 15 crossings,
and give the braid words and HOMFLY polynomials~\cite{VJ} for
all knots associated with irreducible MZVs of weight $n<17$.
Section~5 gives our conclusions.
\subsection*{2. Multiple zeta values}
We define $k$-fold Euler sums~\cite{BBG,BG} as in~\cite{EUL,BBB},
allowing for alternations of signs in
\begin{equation}
\zeta(s_1,\ldots,s_k;\sigma_1,\ldots,\sigma_k)=\sum_{n_j>n_{j+1}>0}
\quad\prod_{j=1}^{k}\frac{\sigma_j^{n_j}}{n_j^{s_j}}\,,\label{form}
\end{equation}
where $\sigma_j=\pm1$, and the exponents $s_j$ are positive integers,
with $\sum_j s_j$ referred to as the weight (or level) and $k$ as the depth.
We combine the strings of exponents and signs
into a single string, with $s_j$ in the $j$th position when $\sigma_j=+1$,
and $\overline{s}_j$ in the $j$th position when $\sigma_j=-1$.
Referring to non-alternating sums as MZVs~\cite{DZ},
we denote the numbers of irreducible Euler sums
and MZVs by $E_n$ and $M_n$, at weight $n$, and find that
\begin{equation}
1-x -x^2=\prod_{n>0}(1-x^n)^{E_n}\,;\quad
1-x^2-x^3=\prod_{n>0}(1-x^n)^{M_n}\,,\label{EM}
\end{equation}
whose solutions, developed in Table~1, are given in closed form by
\begin{eqnarray}
E_n=\frac{1}{n}\sum_{d|n}\mu(n/d)L_d\,;
&&L_n=L_{n-1}+L_{n-2}\,;\quad L_1=1\,;\quad L_2=3\,,\label{Es}\\
M_n=\frac{1}{n}\sum_{d|n}\mu(n/d)P_d\,;
&&P_n=P_{n-2}+P_{n-3}\,;\quad P_1=0\,;\quad P_2=2;\quad P_3=3\,,\label{Ms}
\end{eqnarray}
where $\mu$ is the M\"obius function, $L_n$ is a Lucas number~\cite{EUL} and
$P_n$ is a Perrin number~\cite{AS}.
\noindent{\bf Table~1}:
The integer sequences\Eqqq{Es}{Ms}{Kn} for $n\leq20$.
\[\begin{array}{|r|rrrrrrrrrrrrrrrrrrrr|}\hline
n&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20\\\hline
E_n&1&1&1&1&2&2&4&5&8&11&18&25&40&58&90&135&210&316&492&750\\
M_n&0&1&1&0&1&0&1&1&1&1&2&2&3&3&4&5&7&8&11&13\\
K_n&0&0&1&0&1&1&1&2&2&3&4&5&7&9&12&16&21&28&37&49\\\hline
\end{array}\]
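For reference, the $E_n$ and $M_n$ rows of Table~1 are reproduced by
the short sketch below (Python, illustrative only; the M\"obius, Lucas
and Perrin ingredients are exactly those of the closed forms above).
\begin{verbatim}
# Reproduce the E_n and M_n rows of Table 1 from the Moebius transforms
# of the Lucas and Perrin sequences.
def mobius(n):
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            res = -res
        p += 1
    return -res if n > 1 else res

def lucas(k):                     # L_1 = 1, L_2 = 3, L_n = L_{n-1} + L_{n-2}
    a, b = 2, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def perrin(k):                    # P_1 = 0, P_2 = 2, P_3 = 3, P_n = P_{n-2}+P_{n-3}
    p = {1: 0, 2: 2, 3: 3}
    for n in range(4, k + 1):
        p[n] = p[n - 2] + p[n - 3]
    return p[k]

def moebius_transform(seq, n):
    return sum(mobius(n // d) * seq(d)
               for d in range(1, n + 1) if n % d == 0) // n

print([moebius_transform(lucas,  n) for n in range(1, 21)])  # E_n row of Table 1
print([moebius_transform(perrin, n) for n in range(1, 21)])  # M_n row of Table 1
\end{verbatim}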
In~\cite{EUL}, $E_n=\sum_k E_{n,k}$ was apportioned, according to
the minimum depth $k$ at which irreducibles of weight $n$ occur.
Similarly, we have apportioned $M_n=\sum_k M_{n,k}$.
The results are elements of Euler's triangle~\cite{EUL}
\begin{equation}
T(a,b)=\frac{1}{a+b}\sum_{d|a,b}\mu(d)\,P(a/d,b/d)\,,
\label{ET}
\end{equation}
which is a M\"obius transform of Pascal's triangle, $P(a,b)={a+b\choose a}
=P(b,a)$.
We find that
\begin{equation}
E_{n,k}=T(\df{n-k}{2},k)\,;\quad
M_{n,k}=T(\df{n-3k}{2},k)\,,
\label{EMb}
\end{equation}
for $n>2$ and $n+k$ even.
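A sketch of this apportionment (Python, for illustration only; terms
with negative first argument simply do not occur) shows that the row
sums over $k$ reproduce the $E_n$ and $M_n$ entries of Table~1.
\begin{verbatim}
# Euler's triangle T(a,b) and the apportionments E_{n,k}, M_{n,k}.
from math import comb, gcd

def mobius(n):
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def T(a, b):                      # Moebius transform of Pascal's triangle
    g = gcd(a, b)
    s = sum(mobius(d) * comb((a + b) // d, a // d)
            for d in range(1, g + 1) if g % d == 0)
    return s // (a + b)

def E_nk(n, k):                   # defined for n > 2 and n + k even
    return T((n - k) // 2, k)

def M_nk(n, k):                   # terms with n < 3k do not occur
    return T((n - 3 * k) // 2, k) if n >= 3 * k else 0

n = 12
print([E_nk(n, k) for k in range(2, n + 1, 2)])   # sums to E_12 = 25
print([M_nk(n, k) for k in range(2, n + 1, 2)])   # sums to M_12 = 2
\end{verbatim}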
There is a remarkable feature of the result for $M_{n,k}$: it gives
the number of irreducible Euler sums of weight $n$ and depth $k$ that occur
in the reduction of MZVs, which is {\em not\/} the same as the
number of irreducible MZVs of this weight and depth. It was shown
in~\cite{BGK,EUL} that alternating multiple sums occur in the reduction of
non-alternating multiple sums. For example, $\zeta(4,4,2,2)$
cannot be reduced to MZVs of lesser depth, but it can~\cite{EUL} be reduced
to the alternating Euler sum $\zeta(\overline9,\overline3)$.
Subsequently we found
that an analogous ``pushdown'' occurs at weight 15, where depth-5 MZVs,
such as $\zeta(6,3,2,2,2)$, cannot be reduced to MZVs of lesser depth,
yet can be reduced to alternating Euler sums, with
$\zeta(9,\overline3,\overline3)-\frac{3}{14}\zeta(7,\overline5,\overline3)$
serving as the corresponding depth-3 irreducible. We conjecture
that the number, $D_{n,k}$, of MZVs of weight $n$ and depth $k$
that are not reducible to MZVs of lesser depth
is generated by
\begin{equation}
1-\frac{x^3 y}{1-x^2}+\frac{x^{12}y^2(1-y^2)}{(1-x^4)(1-x^6)}
=\prod_{n\ge3} \prod_{k\ge1} (1-x^n y^k)^{D_{n,k}},\label{Pd}
\end{equation}
which agrees with~\cite{BBG,BG} for $k<4$ and all weights $n$,
and with available data on MZVs, obtained from binary
reshuffles~\cite{LM} at weights $n\leq20$ for $k=4$, and $n\leq18$
for $k>4$. Further tests of\Eq{Pd} require very large scale
computations, which are in progress, with encouraging results.
However, the work reported here does not rely on this conjecture;
the values of $\{M_n\mid n\le15\}$ in Table~1 are sufficient for present
purposes, and these are amply verified by exhaustive use of
the integer-relation search routine MPPSLQ~\cite{DHB}.
Finally, in this section, we remark on the simplicity of the prediction
of\Eq{Ms} for the dimensionality, $K_n$, of the search space
for counterterms that evaluate to MZVs of weight $n$.
Since $\pi^2$, with weight $n=2$, does
not occur in such counterterms, it follows that they
must be expressible in terms of transcendentals that are enumerated by
$\{M_n\mid n\geq3\}$, and products of such knot-numbers~\cite{DK1,BGK,EUL}.
Thus $K_n$ is given by a Padovan sequence:
\begin{equation}
\sum_n x^n K_n=\frac{x^3}{1-x^2-x^3}\,;\quad
K_n=K_{n-2}+K_{n-3}\,;\quad K_1=0\,;\quad K_2=0\,;\quad K_3=1\,,\label{Kn}
\end{equation}
which is developed in Table~1. Note that the dimension of
the search space for a general MZV of weight $n$ is $K_{n+3}$~\cite{DZ},
which exceeds $K_n$ by a factor~\cite{AS} of $2.324717957$, as $n\to\infty$.
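
The growth claim is immediate to check: the short iteration below (ours)
generates $K_n$ from\Eq{Kn} and shows that $K_{n+3}/K_n$ indeed
approaches the quoted constant, which is $\rho^3=\rho+1$ with $\rho$ the
real root of $x^3=x+1$ governing the Padovan recurrence.
\begin{verbatim}
# Padovan-type counting sequence K_n and its asymptotic growth.
K = {1: 0, 2: 0, 3: 1}
for n in range(4, 61):
    K[n] = K[n - 2] + K[n - 3]
print([K[n] for n in range(1, 21)])   # last row of Table 1
print(K[60] / K[57])                  # -> 2.3247179... = lim K_{n+3}/K_n
\end{verbatim}
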
\subsection*{3. Knot-numbers from evaluations of Feynman diagrams}
The methods at our disposal~\cite{DK1,DK2,BKP} do not yet permit us to
predict, {\em a priori\/}, the transcendental knot-number assigned to
a positive knot by field-theory counterterms; instead we need
a concrete evaluation of at least one diagram
which skeins to produce that knot.
Neither do they allow us to predict the rational coefficients with which
such knot-numbers, and their products, corresponding to factor knots,
occur in a counterterm; instead we must, at present, determine these
coefficients by analytical calculation, or by applying a lattice method,
such as MPPSLQ~\cite{DHB}, to (very) high-precision numerical data.
Nonetheless, the consequences of~\cite{DK1,DK2} are highly predictive
and have survived intensive testing with amazing fidelity. The origin
of this predictive content is clear: once a knot-number is determined
by one diagram, it is then supposed, and indeed found, to occur in the
evaluation of all other diagrams which skein to produce that knot.
Moreover, the search
space for subdivergence-free counterterms that evaluate to MZVs
is impressively smaller than that for the MZVs themselves,
due to the absence of any knot associated with $\pi^2$,
and again the prediction is borne out by a wealth of data.
We exemplify these features by considering diagrams that
evaluate to MZVs of depths up to 5, which is the greatest depth
that can occur at weights up to 17, associated with knots up to
crossing-number 17, obtained from diagrams with up to 10 loops.
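
As a toy illustration of this integer-relation step (at precision far
below the 200 significant figures used in practice), the Python lines
below run a PSLQ search, here through the {\tt mpmath} implementation
rather than MPPSLQ itself, on two elementary relations.
\begin{verbatim}
# Toy PSLQ fits: recover known rational relations from numerics.
from mpmath import mp, pslq, zeta, pi

mp.dps = 50                            # working precision in digits
print(pslq([zeta(4), pi**4]))          # relation zeta(4) = pi^4/90
print(pslq([zeta(4), zeta(2)**2]))     # relation 5*zeta(4) = 2*zeta(2)^2
\end{verbatim}
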
We follow the economical notation of~\cite{GPX,DJB,BKP}, referring
to a vacuum diagram by a so-called angular diagram~\cite{GPX}, which
results from choosing one vertex as origin, and indicating all vertices
that are connected to this origin by dots, after removing the origin
and the propagators connected to it.
{From} such an angular diagram one may uniquely reconstruct
the original Feynman diagram. The advantage of this notation is that
the Gegenbauer-polynomial $x$-space technique~\cite{GPX} ensures
that the maximum depth of sum which can result is the smallest
number of propagators in any angular diagram that characterizes
the Feynman diagram. Fig.~1 shows a naming convention for log-divergent
vacuum diagrams with angular diagrams that yield up to 5-fold sums.
To construct, for example, the 7-loop diagram $G(4,1,0)$ one places
four dots on line 1 and one dot on line 2.
Writing an origin at any point disjoint from the angular diagram,
and joining all 6 dots to that origin, one recovers the
Feynman diagram in question.
Using analytical techniques developed
in~\cite{GPX,DJB,BG,EUL}, we find that all subdivergence-free diagrams
of $G$-type, up to 13 loops (the highest number computable in
the time available), give counterterms that evaluate
to $\zeta(2n+1)$, their products, and depth-3 knot-numbers
chosen from the sets
\begin{eqnarray}
N_{2m+1,2n+1,2m+1}&=&
\zeta(2m+1,2n+1,2m+1)-\zeta(2m+1)\,\zeta(2m+1,2n+1)\nonumber\\&&{}
+\sum_{k=1}^{m-1}{2n+2k\choose2k}\zeta_P(2n+2k+1,2m-2k+1,2m+1)\nonumber\\&&{}
-\sum_{k=0}^{n-1}{2m+2k\choose2k}\zeta_P(2m+2k+1,2n-2k+1,2m+1)
\,,\label{K3o}\\
N_{2m,2n+1,2m}&=&
\zeta(2m,2n+1,2m)+\zeta(2m)\left\{\zeta(2m,2n+1)+\zeta(2m+2n+1)\right\}
\nonumber\\&&{}
+\sum_{k=1}^{m-1} {2n+2k\choose2k }\zeta_P(2n+2k+1,2m-2k,2m)\nonumber\\&&{}
+\sum_{k=0}^{n-1} {2m+2k\choose2k+1}\zeta_P(2m+2k+1,2n-2k,2m)
\,,\label{K3e}
\end{eqnarray}
where $\zeta_P(a,b,c)=\zeta(a)\left\{2\,\zeta(b,c)+\zeta(b+c)\right\}$.
The evaluation of a 9-loop non-planar example, $G(3,2,2)$,
is given in~\cite{EUL}: it evaluates in terms of MZVs of weights
ranging from 6 to 14. Choosing from\Eqq{K3o}{K3e}
one knot-number at 11 crossings and two at 13 crossings,
one arrives at an expression from which all powers of $\pi^2$ are banished,
which is a vastly more specific result than for a generic collection of MZVs
of these levels, and is in striking accord with what is required by the
knot-to-number connection entailed by field theory. Moreover, all planar
diagrams that evaluate to MZVs have been found to contain terms purely of
weight $2L-3$ at $L$ loops, matching the pattern of the zeta-reducible
crossed ladder diagrams~\cite{DK1,DK2}.
Subdivergence-free counterterms obtained from the $M$-type angular diagrams of
Fig.~1 evaluate to MZVs of weight $2L-4$, at $L$-loops, with depths up to 4.
Up to $L=8$ loops, corresponding to 12 crossings, the depth-4 MZVs can be
reduced to the depth-2 alternating sums~\cite{EUL}
$N_{a,b}\equiv\zeta(\overline{a},b)-\zeta(\overline{b},a)$. The knot-numbers
for the $(4,3)$ and $(5,3)$ torus knots may be taken as $N_{5,3}$
and $N_{7,3}$, thereby banishing $\pi^2$ from the associated 6-loop
and 7-loop counterterms. In general, $N_{2k+5,3}$ is a $(2k+8)$-crossing
knot-number at $(k+6)$ loops. Taking the second knot-number at 12 crossings
as~\cite{BGK,EUL} $N_{7,5}-\frac{\pi^{12}}{2^5 10!}$,
we express all 8-loop $M$-type counterterms in a $\pi$-free form.
At 9 loops, and hence 14 crossings, we encounter the first depth-4 MZV
that cannot be pushed down to alternating Euler sums of depth 2.
The three knot-numbers are again highly specific: to $N_{11,3}$
we adjoin
\begin{equation}
N_{9,5}+\df{5\pi^{14}}{7032946176}\,;\quad
\zeta(5,3,3,3)+\zeta(3,5,3,3)-\zeta(5,3,3)\zeta(3)
+\df{24785168\pi^{14}}{4331237155245}\,.\label{k14}
\end{equation}
We determined these knot-numbers by applying MPPSLQ to
200 significant-figure evaluations of two counterterms, in a search space
of dimension $K_{17}=21$, as required for generic MZVs of weight 14;
knot theory then requires that the remaining five $M$-type counterterms
be found in a search space of dimension merely $K_{14}=9$. This prediction is totally
successful. The rational coefficients are too cumbersome to write here;
the conclusion is clear: when counterterms evaluate to MZVs they live
in a $\pi$-free domain, much smaller than that inhabited by a generic
MZV, because of the apparently trifling circumstance that a knot with
only two crossings is necessarily the unknot.
Such wonders continue, with subdivergence-free diagrams of types
$C$ and $D$ in Fig.~1.
Up to 7 loops we have obtained {\em all\/} of them in terms of the
established knot-numbers $\{\zeta(3),\zeta(5),\zeta(7),N_{5,3},\zeta(9),
N_{7,3},\zeta(11),N_{3,5,3}\}$,
associated in~\cite{BKP,BGK} with the positive knots
$\{(3,2),(5,2),(7,2),8_{19}=(4,3),(9,2),10_{124}=(5,3),(11,2),11_{353}
=\sigma_1^{}\sigma_2^{3}\sigma_3^{2}\sigma_1^{2}\sigma_2^{2}\sigma_3^{}\}$,
and products of those knot-numbers, associated with the corresponding
factor knots.
A non-planar $L$-loop diagram may have terms of different weights,
not exceeding $2L-4$.
Invariably, a planar $L$-loop diagram evaluates purely
at weight $2L-3$.
Hence we expect the one undetermined MZV knot-number at
15 crossings to appear in, for example, the planar 9-loop diagram
$C(1,0,4,0,1)$. To find the precise combination of
$\zeta(9,\overline3,\overline3)-\frac{3}{14}\zeta(7,\overline5,\overline3)$
with other Euler sums would require a search in a space of dimension
$K_{18}=28$. Experience suggests that would require an evaluation
of the diagram to about 800 sf, which is rather ambitious,
compared with the 200 sf which yielded\Eq{k14}. Once the number is found,
the search space for further counterterms shrinks to dimension $K_{15}=12$.
\subsection*{4. Positive knots associated with irreducible MZVs}
Table~2 gives the braid words~\cite{VJ} of 5 classes of positive knot.
For each type of knot, ${\cal K}$,
we used the skein relation to compute the HOMFLY polynomial~\cite{VJ},
$X_{\cal K}(q,\lambda)$, in terms of
$p_n=(1-q^{2n})/(1-q^2)$, $r_n=(1+q^{2n-1})/(1+q)$,
$\Lambda_n=\lambda^n(1-\lambda)(1-\lambda q^2)$.
\noindent{\bf Table~2}:
Knots and HOMFLY polynomials associated with irreducible MZVs.
\[\begin{array}{|l|l|}\hline{\cal K}&X_{\cal K}(q,\lambda)\\\hline
{\cal T}_{2k+1}=\sigma_1^{2k+1}&T_{2k+1}=\lambda^k(1+q^2(1-\lambda)p_k)\\
{\cal R}_{k,m}=\sigma_1^{}\sigma_2^{2k+1}\sigma_1^{}\sigma_2^{2m+1}&
R_{k,m}= T_{2k+2m+3}+q^3p_k p_m\Lambda_{k+m+1}\\
{\cal R}_{k,m,n}=
\sigma_1^{}\sigma_2^{2k}\sigma_1^{}\sigma_2^{2m}\sigma_1^{}\sigma_2^{2n+1}&
R_{k,m,n}=R_{1,k+m+n-1}+q^6p_{k-1}p_{m-1}r_n\Lambda_{k+m+n+1}\\
{\cal S}_{k}=
\sigma_1^{}\sigma_2^{3}\sigma_3^{2}\sigma_1^{2}\sigma_2^{2k}\sigma_3^{}&
S_{k}= T_3^2T_{2k+3}+q^2p_k r_2(q^2(\lambda-2)+q-2)\Lambda_{k+3}\\
{\cal S}_{k,m,n}=
\sigma_1^{}\sigma_2^{2k+1}\sigma_3^{}\sigma_1^{2m}\sigma_2^{2n+1}\sigma_3^{}
&S_{k,m,n}=T_{2k+2m+2n+3}+q^3(p_k p_m+p_m p_n+p_n p_k\\&\phantom{S_{k,m,n}=}
\quad{}+(q^2(3-\lambda)-2q)p_k p_m p_n)\Lambda_{k+m+n+1}\\\hline
\end{array}\]
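
As a guard against transcription errors, the closed forms of Table~2 can
be checked by computer algebra. The {\tt sympy} sketch below (ours; the
helper names are not from the original computation) verifies two of the
consistency relations noted next, ${\cal S}_{1,1,1}={\cal S}_1$ and
${\cal R}_{2,2,0}={\cal R}_{2,2}$.
\begin{verbatim}
# Symbolic check of the HOMFLY closed forms of Table 2.
import sympy as sp

q, lam = sp.symbols('q lam')

p   = lambda n: (1 - q**(2*n)) / (1 - q**2)
r   = lambda n: (1 + q**(2*n - 1)) / (1 + q)
Lam = lambda n: lam**n * (1 - lam) * (1 - lam*q**2)

T  = lambda k: lam**k * (1 + q**2*(1 - lam)*p(k))             # T_{2k+1}
R2 = lambda k, m: T(k + m + 1) + q**3*p(k)*p(m)*Lam(k + m + 1)
R3 = lambda k, m, n: (R2(1, k + m + n - 1)
      + q**6*p(k - 1)*p(m - 1)*r(n)*Lam(k + m + n + 1))
S1 = lambda k: (T(1)**2*T(k + 1)
      + q**2*p(k)*r(2)*(q**2*(lam - 2) + q - 2)*Lam(k + 3))
S3 = lambda k, m, n: (T(k + m + n + 1)
      + q**3*(p(k)*p(m) + p(m)*p(n) + p(n)*p(k)
              + (q**2*(3 - lam) - 2*q)*p(k)*p(m)*p(n))*Lam(k + m + n + 1))

print(sp.simplify(S3(1, 1, 1) - S1(1)))     # 0 :  S_{1,1,1} = S_1
print(sp.simplify(R3(2, 2, 0) - R2(2, 2)))  # 0 :  R_{2,2,0} = R_{2,2}
\end{verbatim}
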
Noting that ${\cal S}_{1,1,1}={\cal S}_{1}$ and
${\cal S}_{m,n,0}={\cal R}_{m,n,0}={\cal R}_{m,n}$,
one easily enumerates the knots of Table~2. The result is given,
up to 17 crossings, in the last row of Table~3,
where it is compared with the enumeration of all prime knots, which is known
only to 13 crossings, and with the enumeration of positive knots,
which we have
achieved to 15 crossings, on the assumption that the HOMFLY polynomial
has no degeneracies among positive knots. It is apparent
that positive knots are sparse, though they exceed the irreducible
MZVs at 10 crossings and at all crossing numbers greater than 11.
The knots of Table 2 are equal in number to the irreducible MZVs up to
16 crossings; thereafter they are deficient.
Table~4 records a finding that may be new: the Alexander
polynomial~\cite{VJ}, obtained by setting $\lambda=1/q$ in the HOMFLY polynomial,
is not faithful for positive knots. The Jones polynomial~\cite{VJ}, with
$\lambda=q$, was not found to suffer from this defect.
Moreover, by using REDUCE~\cite{RED}, and assuming the fidelity of the
HOMFLY polynomial in the positive sector, we were able to prove,
by exhaustion, that none of the $4^{14}\approx2.7\times10^8$ positive
5-braid words of length 14 gives a true 5-braid 14-crossing knot.
\noindent{\bf Table~3}:
Enumerations of classes of knots by crossing number, $n$,
compared with\Eq{Ms}.
\[\begin{array}{|r|rrrrrrrrrrrrrrr|}\hline
n&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17\\\hline
\mbox{prime knots}&1&1&2&3&7&21&49&165&552&2176&9988&?&?&?&?\\
\mbox{positive knots}&1&0&1&0&1&1&1&3&2&7&9&17&47&?&?\\
M_n&1&0&1&0&1&1&1&1&2&2&3&3&4&5&7\\
\mbox{Table~2 knots}&1&0&1&0&1&1&1&1&2&2&3&3&4&5&5\\\hline
\end{array}\]
\noindent{\bf Table~4}:
Pairs of positive knots with the same Alexander polynomial,
$X_{\cal K}(q,1/q)$.
\[\begin{array}{|l|l|l|}\hline{\cal K}_1&{\cal K}_2&
X_{{\cal K}_1}(q,\lambda)-X_{{\cal K}_2}(q,\lambda)\\\hline
{\cal S}_{2,1,2}=
\sigma_1^{}
\sigma_2^{5}
\sigma_3^{}
\sigma_1^{2}
\sigma_2^{5}
\sigma_3^{}&
\sigma_1^{3}
\sigma_2^{4}
\sigma_3^{}
\sigma_1^{2}
\sigma_2^{2}
\sigma_3^{2}
\sigma_2^{}&
q^4(1-\lambda q)p_2r_2\Lambda_6\\
(\sigma_1^{}
\sigma_2^{2}
\sigma_3^{})^2
\sigma_1^{}
\sigma_2^{5}
\sigma_3^{}
&
(\sigma_1^{}
\sigma_2^{2}
\sigma_3^{})^2
\sigma_1^{3}
\sigma_2^{}
\sigma_1^{2}
\sigma_3^{}&
q^5(1-\lambda q)p_2\Lambda_6\\
\sigma_1^{5}
\sigma_2^{}
\sigma_3^{}
\sigma_1^{2}
\sigma_2^{3}
\sigma_3^{2}
\sigma_2^{}&
\sigma_1^{2}
\sigma_2^{2}
\sigma_1^{3}
\sigma_2^{7}&
q^5(1-\lambda q)p_2\Lambda_6\\\hline
\end{array}\]
The association~\cite{DK1,DK2} of the 2-braid torus knots
$(2k+1,2)={\cal T}_{2k+1}$
with the transcendental numbers $\zeta(2k+1)$ lies at the heart of our work.
In~\cite{DK2,BKP}, we associated the 3-braid torus knot
$(4,3)=8_{19}={\cal R}_{1,1}$ with the unique irreducible MZV
at weight 8, and in~\cite{BKP} we associated $(5,3)=10_{124}={\cal R}_{2,1}$
with that at weight 10. The 7-loop counterterms of $\phi^4$-theory
indicate that the knot-numbers associated with
$10_{139}=\sigma_1^{}\sigma_2^{3}\sigma_1^{3}\sigma_2^{3}$
and
$10_{152}=\sigma_1^{2}\sigma_2^{2}\sigma_1^{3}\sigma_2^{3}$
are not~\cite{BKP} MZVs.
At 11 crossings, the association of the knot-number $N_{3,5,3}$
with ${\cal S}_1={\cal S}_{1,1,1}\equiv11_{353}$ is rock solid:
we have obtained
this number analytically from 2 diagrams and numerically from another 8,
in each case finding it with different combinations of $\zeta(11)$
and the factor-knot transcendental $\zeta^2(3)\zeta(5)$.
In~\cite{BGK} we associated the family of knots ${\cal R}_{k,m}$
with the knot-numbers $N_{2k+3,2m+1}$, modulo multiples of $\pi^{2k+2m+4}$
that have now been determined up to 14 crossings.
It therefore remains to explain here how:
(a) two families of 4-braids, ${\cal S}_{k}$ and ${\cal S}_{k,m,n}$,
diverge from their common origin at 11 crossings, to give two knots
at 13 crossings, and three at 15 crossings, associated with triple Euler sums;
(b) a new family, ${\cal R}_{k,m,n}$, begins at 14 crossings, giving
the $(7,3)$ torus knot, $(\sigma_1^{}\sigma_2^{})^7
=(\sigma_1^{}\sigma_2^4)^2\sigma_1^{}\sigma_2^3
={\cal R}_{2,2,1}$, associated with a truly irreducible four-fold sum.
To relate the positive knots of Table 2 to Feynman diagrams
that evaluate to MZVs we shall
dress their braid words with chords. In each of Figs.~2--8, we
proceed in two stages: first we extract, from a braid word,
a reduced Gauss code that defines a trivalent chord diagram;
then we indicate how to shrink propagators to obtain a scalar diagram that
is free of subdivergences and has an overall divergence
that evaluates to MZVs. Our criterion for reducibility to MZVs is
that there be an angular diagram, obtained~\cite{GPX,DJB}
by choosing one vertex as an origin, such that the angular
integrations may be performed without encountering
6--j symbols, since these appeared in all the diagrams involving the
non-MZV knots $10_{139}$ and $10_{152}$ at 7 loops~\cite{BKP}, whereas
all the MZV-reducible diagrams could be cast in a form free of 6--j
symbols.
The first step -- associating a chord diagram with a knot --
allows considerable freedom: each
chord is associated with a horizontal propagator connecting vertical
strands of the braid between crossings, and there are almost
twice as many crossings as there are chords in the corresponding
diagram. Moreover, there are several braid words representing the same knot.
Thus a knot can be associated with several chord diagrams. Figs.~3b and~4b
provide an example: each diagram obtained from the $(5,2)$ torus knot
yields a counterterm involving $\zeta(5)$,
in a trivalent theory such as QED or Yukawa theory.
In Table 2 we have five families of braid words: the 2-braid torus knots,
two families of 3-braids, and two families of 4-braids. We begin with
the easiest family, ${\cal T}_{2k+1}$.
Consider Fig.~2a. We see the braid $\sigma_1^3$, dressed with two
horizontal propagators. Such propagators will be called chords,
and we shall refer to Figs.~2a--8a as chorded braid diagrams.
In Fig.~2a the two chords are labelled 1 and 2.
Following the closed braid, starting from the upper end
of the left strand, we encounter each chord twice, at vertices
which we label $\{1,{1^\prime}\}$ and $\{2,{2^\prime}\}$. These
occur in the order
$1,2,{1^\prime},{2^\prime}$ in Fig.~2a. This is the same order as they
are encountered on traversing the circle of Fig.~2b, which
is hence the same diagram as the chorded braid of Fig.~2a.
As a Feynman diagram, Fig.~2b is indeed associated with
the trefoil knot, by the methods of~\cite{DK1}.
We shall refer to the Feynman diagrams of Figs.~2b--8b as
chord diagrams\footnote{The reader familiar with
recent developments in knot theory and topological field theory might
notice that our notation is somewhat motivated by the connection between
Kontsevich integrals~\cite{LM} and chord diagrams. In~\cite{DK}
this will be discussed in detail and related to the work in~\cite{DK1}.}.
Each chord diagram is merely
a rewriting of the chorded braid diagram that precedes it,
displaying the vertices
on a hamiltonian circuit that passes through all the vertices.
The final step is trivial in this example:
the scalar tetrahedron is already log-divergent in 4 dimensions, so no
shrinking of propagators is necessary. Fig.~2c records the trivial
angular diagram, obtained~\cite{GPX} by choosing ${2}$ as an origin
and removing the propagators connected to it:
this merely represents a wheel with 3 spokes. In general~\cite{DJB}
the wheel with $n+1$ spokes delivers $\zeta(2n-1)$.
In Fig.~3a we give a chording of the braid $\sigma_1^{2n-1}$,
which is the simplest representation of the $(2n-1,2)$ torus knot,
known from previous work~\cite{DK1,DK2} to be associated with
a $(n+1)$-loop diagram, and hence with a hamiltonian circuit that has $n$
chords. Thus each addition of $\sigma_1^2$ involves adding a
single chord, yielding the chord diagram of Fig.~3b. To obtain
a logarithmically divergent scalar diagram, we shrink the propagators
connecting vertex ${2^\prime}$ to vertex ${n^\prime}$, drawn with a thick
line on the hamiltonian circuit of Fig.~3b, and hence obtain
the wheel with $n+1$ spokes, represented by the angular diagram of Fig.~3c.
To show how different braid-word representations of the
same knot give different chord diagrams, yet the same transcendental,
we consider Fig.~4.
In Fig.~4a we again have a chorded braid with $n$ chords, which this time
is obtained by combining $\sigma_1\sigma_2\sigma_1\sigma_2$
with $n-2$ powers of $\sigma_2^2$.
The resultant braid word,
$\sigma_1^{}\sigma_2^{}\sigma_1^{}\sigma_2^{2n-3}$,
is the $(2n-1,2)$ torus knot written as a 3-braid.
Labelling the pairs of vertices of Fig.~4b, one sees that it is identical
to the closure of the braid of Fig.~4a.
Shrinking together the vertices
$\{{2^\prime},{n^\prime},\ldots,{3^\prime}\}$
gives the angular diagram of Fig.~4c, which is
the same as Fig.~3c and hence delivers $\zeta(2n-1)$.
This ends our consideration of the 2-braid torus knots. We now turn
to the class ${\cal R}_{k,m}$ in Fig.~5.
The first member ${\cal R}_{1,1}=8_{19}=(4,3)$ appears at 6 loops,
with five chords.
It was obtained from Feynman diagrams in \cite{DK2}, and found
in~\cite{BKP} to be associated with an MZV of depth 2.
In Fig.~5a we add singly-chorded powers of $\sigma_2^2$
to a chorded braid word that delivers a Feynman diagram for which the
procedures of~\cite{DK1} gave $8_{19}$ as one of its skeinings.
In general, we have $k+m+3$ chords and thus $k+m+4$ loops.
The resulting chord diagram
is Fig.~5b, whose 7-loop case was the basis for associating
$10_{124}$ with the MZV $\zeta(7,3)$~\cite{BKP}.
Shrinking the propagators indicated by thickened lines in Fig.~5b,
we obtain diagram $M(k,1,1,m)$, indicated by the angular diagram of
Fig.~5c. Explicit computation of all such diagrams, to 9
loops, shows that this family is indeed MZV-reducible, to 14 crossings.
In Fig.~6 we repeat the process of Fig.~5 for the knot class
${\cal R}_{k,m,n}$. Marked boxes, in Fig.~6a,
indicate where we increase the number of chords.
Fig.~6b shows the highly non-planar chord diagram
for this knot class. This non-planarity is maintained
in the log-divergent diagram obtained by shrinking the thickened lines
in Fig.~6b. The parameters $m$ and $k$ correspond to the
series of dots in the corresponding angular diagram of Fig.~6c.
Non-planarity is guaranteed by the two remaining dots,
which are always present.
For $n>1$, we see even more propagators in the angular diagram.
The absence of 6--j symbols from angular integrations leads us to believe
that the results are reducible to MZVs; the non-planarity entails
MZVs of even weight, according to experience up to 7 loops~\cite{BKP}.
We now turn to the last two classes of knots: the 4-braids of Table~2.
In Fig.~7a we give a chorded braid diagram for knots
of class ${\cal S}_k$. Again, the marked box indicates
how we add chords to a chorded braid diagram
that corresponds to a 7-loop Feynman diagram, already known~\cite{BKP}
to skein to produce ${\cal S}_1=11_{353}$.
Shrinking the thickened lines in Fig.~7b, we obtain a log-divergent
planar diagram containing: a six-point coupling,
a $(k+3)$-point coupling, and $k+5$ trivalent vertices.
This is depicted in Fig.~7c as an angular diagram obtained by
choosing the $(k+3)$-point coupling as an origin.
Choosing the 6-point coupling as an origin for
the case $k=1$ confirms that ${\cal S}_1=11_{353}$ is associated
with $\zeta(3,5,3)$ via the 7-loop diagram $G(4,1,0)$.
However, for $k=3$ there is no way of obtaining MZVs of depth
3 from either choice of 6-point origin. Hence we expect a depth-5
MZV to be associated with the 15-crossing knot ${\cal S}_3$, with the
possibility of depth-7 MZVs appearing at higher crossings.
Finally we show that the three-parameter class
${\cal S}_{k,m,n}$, also built on $11_{353}={\cal S}_{1,1,1}$,
is associated with depth-3 MZVs.
The chorded braid of Fig.~8a indicates
the three places where we can add further chords.
Fig.~8b gives the chord diagram associated with it,
and indicates how to shrink propagators to obtain a log-divergent
diagram, represented by the angular diagram $G(m+n+2,k,0)$
of Fig.~8c, which evaluates in terms of depth-3 MZVs
up to 13 loops, and presumably beyond.
\subsection*{5. Conclusions}
In summary, we have
\begin{enumerate}
\item enumerated in\Eqq{Es}{Ms} the irreducibles entailed by Euler sums
and multiple zeta values at weight $n$; apportioned them by depth in\Eq{EMb};
conjectured the generator\Eq{Pd} for the number, $D_{n,k}$, of MZVs of weight
$n$ and depth $k$ that are irreducible to MZVs of lesser depth;
\item determined all MZV knot-numbers to 15 crossings, save one,
associated with a 9-loop diagram that evaluates to MZVs of depth 5
and weight 15;
\item enumerated positive knots to 15 crossings, notwithstanding
degenerate Alexander polynomials at 14 and 15 crossings;
\item developed
a technique of chording braids so as to generate families of knots
founded by parent knots whose relationship to Feynman diagrams
was known at lower loop numbers;
\item combined all the above to identify,
in Table~2, knots whose enumeration, to 16 crossings, matches that of MZVs.
\end{enumerate}
Much remains to be clarified in this rapidly developing area.
Positive knots, and hence the transcendentals associated with them
by field theory, are richer in structure than MZVs: there are more of them
than MZVs; yet those whose knot-numbers are MZVs evaluate in search
spaces that are significantly smaller than those for the MZVs, due to
the absence of a two-crossing knot. After 18 months of intense collaboration,
entailing large scale computations in knot theory, number theory, and field
theory, we are probably close to the boundary of what can be discovered
by semi-empirical methods. The trawl, to date, is impressive, to our minds.
We hope that colleagues will help us to understand it better.
\noindent{\bf Acknowledgements}
We are most grateful to Don Zagier for his generous comments,
which encouraged us to believe in the correctness of our discoveries\Eq{EMb},
while counselling caution as to the validity of\Eq{Pd}
in so far uncharted territory with depth $k>4$.
David Bailey's MPPSLQ~\cite{DHB}, Tony Hearn's REDUCE~\cite{RED}
and Neil Sloane's superseeker~\cite{NJAS} were instrumental in this work.
We thank Bob Delbourgo for his constant encouragement.
\newpage
\section{The \ensuremath{\mathrm{|\Delta S| \!=\!2}}-Hamiltonian}
Here we briefly report on the \ensuremath{\mathrm{|\Delta S| \!=\!2}}-hamiltonian, the calculation of
its next-to-leading order (NLO) QCD corrections and on the numerical
results. For the details we refer to \cite{hn}.
\subsection{The low-energy \ensuremath{\mathrm{|\Delta S| \!=\!2}}-Hamiltonian}
The effective low-energy hamiltonian inducing the \ensuremath{\mathrm{|\Delta S| \!=\!2}}-transition
reads:
\begin{eqnarray}
H^\ensuremath{\mathrm{|\Delta S| \!=\!2}}_{\mathrm{eff}}
&=&
\frac{G_\mathrm{\!\scriptscriptstyle F}^2}{16\pi^2}M_W^2
\bigl[
\lambda_c^2 \eta_1^\star S\!\left(x_c^\star\right)
+\lambda_t^2 \eta_2^\star S\!\left(x_t^\star\right)
\nonumber\\&&
+2 \lambda_c \lambda_t \eta_3^\star S\!\left(x_c^\star,x_t^\star\right)
\bigr]
b\!\left(\mu\right) \ensuremath{\tilde{Q}_{S2}}\!\left(\mu\right)
\nonumber\\&&
+\mathrm{h.c.}
\label{Heff}
\end{eqnarray}
Here $G_\mathrm{\!\scriptscriptstyle F}$ denotes Fermi's constant, $M_W$ is the W boson mass,
$\lambda_j = V_{jd} V_{js}^\ast, j=c,t$ comprises the CKM-factors, and
$\ensuremath{\tilde{Q}_{S2}}$ is the local dimension-six \ensuremath{\mathrm{|\Delta S| \!=\!2}}\ four-quark operator
\begin{equation}
\ensuremath{\tilde{Q}_{S2}} =
\left[\bar{s}\gamma_\mu\left(1-\gamma_5\right)d\right]
\left[\bar{s}\gamma^\mu\left(1-\gamma_5\right)d\right]
\label{DefOll}
\end{equation}
The $x_q^\star = {m_q^\star}^2/M_W^2$, $q=c,t$ encode the running
{\ensuremath{\ov{\textrm{MS}}}}-quark masses $m_q^\star = m_q\!\left(m_q\right)$. In writing
\eq{Heff} we have used the GIM mechanism
$\lambda_u+\lambda_c+\lambda_t=0$ to eliminate $\lambda_u$, further we
have set $m_u=0$. The Inami-Lim functions $S\!\left(x\right)$,
$S\!\left(x,y\right)$ contain the quark mass dependence of the
{\ensuremath{\mathrm{|\Delta S| \!=\!2}}}-transition in the absence of QCD. They are obtained by
evaluating the box-diagrams displayed in {\fig{fig:full-lo}}.
\begin{nfigure}
\begin{minipage}[t]{\minicolumn}
\includegraphics[clip,width=0.8\minicolumn]{full-lo.eps}
\end{minipage}
\hfill
\begin{minipage}[b]{\minicolumn}
\ncaption{
The lowest order box-diagram mediating a \ensuremath{\mathrm{|\Delta S| \!=\!2}}-transition. The
zig-zag lines denote W-bosons or fictitious Higgs particles.}
\label{fig:full-lo}
\end{minipage}
\end{nfigure}
In \eq{Heff} the short-distance QCD corrections are comprised in the
coefficients $\eta_1$, $\eta_2$, $\eta_3$ with their explicit
dependence on the renormalization scale $\mu$ factored out in the
function $b\!\left(\mu\right)$. The $\eta_i$ depend on the
\emph{definition} of the quark masses. In \eq{Heff} they are
multiplied with $S$ containing the arguments $m_c^\star$ and
$m_t^\star$; therefore we marked them with a star. In the absence of QCD
corrections $\eta_i b\!\left(\mu\right)=1$.
For physical applications one needs to know the matrix-element of
$\ensuremath{\tilde{Q}_{S2}}$ \eq{DefOll}. Usually it is parametrized as
\begin{equation}
\left\langle \overline{\mathrm{K^0}} \left| \ensuremath{\tilde{Q}_{S2}}\!\left(\mu\right)
\right| \mathrm{K^0} \right\rangle =
\frac{8}{3} \frac{\ensuremath{B_\mathrm{K}}}{b\!\left(\mu\right)} \ensuremath{f_\mathrm{K}}^2 \ensuremath{m_\mathrm{K}}^2.
\label{DefBK}
\end{equation}
Here $\ensuremath{f_\mathrm{K}}$ denotes the Kaon decay constant and $\ensuremath{B_\mathrm{K}}$ encodes the
deviation of the matrix-element from the vacuum-insertion result. The
latter quantity has to be calculated by non-perturbative methods. In
physical observables the $b\!\left(\mu\right)$ present in \eq{DefBK}
and \eq{Heff} cancel to make them scale invariant.
The first complete determination of the coefficients $\eta_i$,
$i=1,2,3$ in the leading order (LO) is due to Gilman and Wise
\cite{gw}. However, the LO expressions are strongly dependent on the
factorization scales at which one integrates out heavy particles.
Further the questions about the \emph{definition} of the quark masses
and the QCD scale parameter \ensuremath{\Lambda_{\scriptscriptstyle\mathrm{QCD}}}\ to be used in \eq{Heff} remain
unanswered. Finally, the higher order corrections can be sizeable and
therefore phenomenologically important.
To overcome these limitations one has to go to the NLO. This program
has been started with the calculation of $\eta_2^\star$ in \cite{bjw}.
Then Nierste and I completed it with $\eta_1^\star$ \cite{hn2}
and $\eta_3^\star$ \cite{hn,hn1}.
We have summarized the result of the three $\eta_i^\star$'s in
{\tab{tab:result}}.
\begin{ntable}
\ncaption{The numerical result for the three $\eta_i^\star$ using
$\alpha_s\!\left(M_Z\right)=0.117$, $m_c^\star=1.3\,\mathrm{GeV}$,
$m_t^\star=167\,\mathrm{GeV}$ as the input parameters. The error of the NLO
result stems from scale variations.}
\label{tab:result}
\begin{tabular}{l@{\hspace{2em}}*{3}{l}}
& $\eta_1^\star$ & $\eta_2^\star$ & $\eta_3^\star$ \\
\hline
LO&
$\approx$ 0.74 &
$\approx$ 0.59 &
$\approx$ 0.37
\\
NLO&
1.31\errorpm{0.25}{0.22} &
0.57\errorpm{0.01}{0.01} &
0.47\errorpm{0.03}{0.04}
\end{tabular}
\end{ntable}
\subsection{A short glance at the NLO calculation of $\eta_3^\star$}
Due to the presence of largely separated mass scales \eq{Heff}
develops large logarithms $\log x_c$, which spoil the applicability of
naive perturbation theory (PT). Let us now briefly review the
procedure which allows us to sum them up to all orders in PT, finally
leading to the result presented in \tab{tab:result}. The basic idea
is to construct a hierarchy of effective theories describing {\ensuremath{\mathrm{|\Delta S| \!=\!1}}}-
and {\ensuremath{\mathrm{|\Delta S| \!=\!2}}}-transitions for low-energy processes. The techniques
used for that purpose are Wilson's operator product expansion (OPE)
and the application of the renormalization group (RG).
At the factorization scale $\mu_{tW}=O\!\left(M_W,m_t\right)$ we
integrate out the W boson and the top quark from the full Standard
Model (SM) Lagrangian. Strangeness changing transitions\footnote{In
general all flavour changing transitions} are now described by an
effective Lagrangian of the generic form
\begin{equation}
{\mathcal{L}}^\ensuremath{\mathrm{|\Delta S| \!=\!2}}_{\mathrm{eff}} =
-\frac{G_\mathrm{\!\scriptscriptstyle F}}{\sqrt{2}}\ensuremath{V_\mathrm{\scriptscriptstyle CKM}}\!\sum_k C_k Q_k
-\frac{G_\mathrm{\!\scriptscriptstyle F}^2}{2}\ensuremath{V_\mathrm{\scriptscriptstyle CKM}}\!\sum_l \tilde{C}_l \tilde{Q}_l.
\label{LeffAbove}
\end{equation}
The $\ensuremath{V_\mathrm{\scriptscriptstyle CKM}}$ comprise the relevant CKM factors. The $Q_k$
($\tilde{Q}_l$) denote local operators mediating \ensuremath{\mathrm{|\Delta S| \!=\!1}}- (\ensuremath{\mathrm{|\Delta S| \!=\!2}}-)
transitions, the $C_k$ ($\tilde{C}_l$) are the corresponding Wilson
coefficient functions which may simply be regarded as the coupling
constants of their operators. The latter contain the short distance
(SD) dynamics of the transition while the long distance (LD) physics
is contained in the matrix-elements of the operators.
The \ensuremath{\mathrm{|\Delta S| \!=\!1}}-part of {\eq{LeffAbove}} contributes to \ensuremath{\mathrm{|\Delta S| \!=\!2}}-transitions
via diagrams with double operator insertions like the ones displayed
in {\fig{fig:cc-cc-lo}}.
\begin{nfigure}
\includegraphics[clip,width=0.9\minicolumn]{eff-lo.eps}
\hfill
\includegraphics[clip,width=0.9\minicolumn]{peng-lo.eps}
\ncaption{
Two diagrams contributing to \ensuremath{\mathrm{|\Delta S| \!=\!2}}-transitions in the effective
five- and four-quark theory. The crosses denote insertions of
different species of local {\ensuremath{\mathrm{|\Delta S| \!=\!1}}}-operators.}
\label{fig:cc-cc-lo}
\end{nfigure}
The comparison of the Green's functions obtained from the full SM
Lagrangian and the ones derived from \eq{LeffAbove} allows one to fix the
values of the Wilson coefficients $C_k\!\left(\mu_{tW}\right)$ and
$\tilde{C}_l\!\left(\mu_{tW}\right)$. The scale $\mu_{tW}$ being of
the order of $M_W$, $m_t$ ensures that there will be no large
logarithms in $C_k\!\left(\mu_{tW}\right)$ and
$\tilde{C}_l\!\left(\mu_{tW}\right)$, which therefore can be reliably
calculated in ordinary perturbation theory.
The next step is to evolve the Wilson coefficients
$C_k\!\left(\mu_{tW}\right)$, $\tilde{C}_l\!\left(\mu_{tW}\right)$
down to some scale $\mu_c = O\!\left(m_c\right)$, thereby summing up
the $\ln\left(\mu_c/\mu_{tW}\right)$ terms to all orders.\footnote{We
neglect the intermediate scale $\mu_b\!=\!O(m_b)$ for simplicity.} To
do so, one needs to know the corresponding RG equations. While the
scaling of the {\ensuremath{\mathrm{|\Delta S| \!=\!1}}}-coefficients is quite standard, the evolution
of the {\ensuremath{\mathrm{|\Delta S| \!=\!2}}}-coefficients is modified due to the presence of
diagrams containing two insertions of {\ensuremath{\mathrm{|\Delta S| \!=\!1}}}-operators (see
{\fig{fig:cc-cc-lo}}). From $\dmu{\mathcal{L}}_{\mathrm{eff}}^\ensuremath{\mathrm{|\Delta S| \!=\!2}}\!=\!0$ follows:
\begin{equation}
\dmu \tilde{C}_k\!\left(\mu\right) =
\tilde{\gamma}_{k'k} \tilde{C}_{k'}\!\left(\mu\right)
+\tilde{\gamma}_{ij,k} C_i\!\left(\mu\right) C_j\!\left(\mu\right).
\label{RGinhom}
\end{equation}
In addition to the usual homogeneous differential equation for
$\tilde{C}_l$ an inhomogenity has emerged. The overall divergence of
diagrams with double insertions has been translated into an
{\textsl{anomalous dimension tensor}} $\tilde{\gamma}_{ij,k}$, which
is a straightforward generalization of the usual anomalous dimension
matrices $\gamma_{ij}$ ($\tilde{\gamma}_{ij}$). The special structure
of the operator basis relevant for the calculation of $\eta_3$ allows
for a very compact solution of \eq{RGinhom} \cite{hn}.
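
To make the structure of \eq{RGinhom} concrete (and for no other
purpose), the Python toy below integrates a one-component version of the
equation; all anomalous dimensions are invented placeholder numbers, not
the matrices obtained in the actual LO/NLO calculation of \cite{hn}.
\begin{verbatim}
# Toy integration of an inhomogeneous RG equation of the form
#   d Ctilde / d ln(mu) = gamma_tilde*Ctilde + gamma_12*C1*C2 ,
# with C1, C2 running homogeneously.  All numbers are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

gamma1, gamma2 = 0.10, -0.05     # toy |Delta S|=1 anomalous dimensions
gamma_tilde    = 0.08            # toy |Delta S|=2 anomalous dimension
gamma_12       = 0.30            # toy anomalous-dimension-tensor entry

def rge(t, y):                   # t = ln(mu / mu_tW)
    C1, C2, Ct = y
    return [gamma1*C1, gamma2*C2, gamma_tilde*Ct + gamma_12*C1*C2]

# run from mu_tW (nominally M_W) down to mu_c (nominally m_c ~ 1.3 GeV)
sol = solve_ivp(rge, [0.0, np.log(1.3 / 80.0)], [1.0, 1.0, 0.0])
print(sol.y[:, -1])              # C1, C2, Ctilde at the low scale
\end{verbatim}
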
Finally, at the factorization scale $\mu_c$ one has to integrate out
the charm-quark from the theory. The effective three-flavour
Lagrangian obtained in that way already resembles the structure of
{\eq{Heff}}. The only operator left over is $\ensuremath{\tilde{Q}_{S2}}$. Double insertions
no longer contribute, they are suppressed with positive powers of
light quark masses.
We want to emphasize that throughout the calculation one has to be
very careful about the choice of the operator basis. It contains
several sets of unphysical operators. Certainly the most important
class of these operators are the so-called \emph{evanescent
operators}. Their precise definition introduces a new kind of
scheme-dependence in intermediate results, e.g.\ anomalous dimensions
and matching conditions. This scheme-dependence of course cancels in
physical observables. Evanescent operators have been studied in great
detail in {\cite{hn3}}.
\subsection{Numerical Results for $\eta_3^\star$}
The numerical analysis shows that $\eta_3^\star$ is only mildly
dependent on the physical input variables $m_c^\star$, $m_t^\star$ and
$\laMSb$, which allows us to treat $\eta_3^\star$ essentially as a
constant in phenomenological analyses.
More interesting is $\eta_3^\star$'s residual dependence on the
factorization scales $\mu_c$ and $\mu_{tW}$. In principle
$\eta_3^\star$ should be independent of these scales, all residual
dependence is due to the truncation of the perturbation series. We
may use this to determine something like a ``theoretical error''.
The situation is very nice with respect to the variation of
$\mu_{tW}$. Here the inclusion of the NLO corrections reduces the
scale-dependence drastically compared to the LO. For the interval
$M_W\leq\mu_{tW}\leq m_t$ we find a variation of less than 3\% in NLO
compared to the 12\% of the LO.
The dependence of $\eta_3^\star$ on $\mu_c$ has been reduced in NLO
compared to the LO analysis. It is displayed in \fig{fig:e3s-muc}.
This variation is the source of the error of $\eta_3^\star$ quoted in
{\tab{tab:result}}.
\begin{nfigure}
\includegraphics[clip,width=0.65\miniwidth]{muc.eps}
\hfill
\parbox[b]{0.3\miniwidth}{
\ncaption{\sloppy
The variation of $\eta_3^\star$ with respect to the factorization
scale $\mu_c$, where the charm-quark gets integrated out.}}
\label{fig:e3s-muc}
\end{nfigure}
\section{The 1996 Phenomenology of $\left|\ensuremath{\eps_\mathrm{K}}\right|$}\label{sect:pheno}
The first phenomenological analysis using the full NLO result of the
{\ensuremath{\mathrm{|\Delta S| \!=\!2}}}-hamiltonian has been done in \cite{hn1}. Here we present a
1996 update.
\subsection{Input Parameters}
Let us first recall our knowledge of the CKM matrix as reported at
this conference \cite{ichep96-gibbons}:
\begin{subequations}
\begin{eqnarray}
\left|V_{cb}\right| &=& 0.040\pm0.003, \\
\left|V_{ub}/V_{cb}\right| &=& 0.08\pm0.02.
\end{eqnarray}
\end{subequations}
Fermilab now provides us with a very precise determination of
$m_t^\mathrm{pole} = 175\pm6 \,\mathrm{GeV}${\cite{ichep96-tipton}} which translates
into the \ensuremath{\ov{\textrm{MS}}}-scheme as $m_t^\star = 167\pm6 \,\mathrm{GeV}$.
More precise results have been reported on \mmmix{B_d^0} and \mmmix{B_s^0}
{\cite{ichep96-gibbons}}:
\begin{subequations}
\begin{eqnarray}
\ensuremath{\Delta m_\mathrm{B_d}} &=& \left(0.464\pm0.012\pm0.013\right) ps^{-1},\\
\ensuremath{\Delta m_\mathrm{B_s}} &>& 9.2 ps^{-1}
\label{LimitDmBs}
\end{eqnarray}
\end{subequations}
We will further use some theoretical input:
\begin{subequations}
\begin{eqnarray}
\ensuremath{B_\mathrm{K}} &=& 0.75\pm0.10\\
\ensuremath{f_\mathrm{B_d}}\sqrt{\ensuremath{B_\mathrm{B_d}}} &=& \left(200\pm40\right) \,\mathrm{MeV},\label{fbd}\\
\frac{\ensuremath{f_\mathrm{B_s}}\sqrt{\ensuremath{B_\mathrm{B_s}}}}{\ensuremath{f_\mathrm{B_d}}\sqrt{\ensuremath{B_\mathrm{B_d}}}} &=& 1.15\pm0.05.
\label{xisd}
\end{eqnarray}
\end{subequations}
{\eq{fbd}} and {\eq{xisd}} are from quenched lattice QCD; the latter
may go up by 10\% due to unquenching {\cite{ichep96-flynn}}.
The other input parameters we take as in \cite{hn1}.
\subsection{Results}
In extracting information about the still unknown elements of the CKM
matrix we still get the strongest restrictions from unitarity and
$\ensuremath{\eps_\mathrm{K}}$:
\begin{equation}
\left|\ensuremath{\eps_\mathrm{K}}\right| = \frac{1}{\sqrt{2}} \left[
\frac{{\mathrm{Im}\,} \left\langle \mathrm{K^0} \right|
H^\ensuremath{\mathrm{|\Delta S| \!=\!2}} \left| \overline{\mathrm{K^0}} \right\rangle}{\ensuremath{\Delta m_\mathrm{K}}}
+\xi \right].
\label{epsKcond}
\end{equation}
Here $\xi$ denotes a small quantity related to direct CP violation,
contributing about 3\% to $\ensuremath{\eps_\mathrm{K}}$. The key input parameters entering
\eq{epsKcond} are $V_{cb}$, $|V_{ub}/V_{cb}|$, $m_t^\star$ and $\ensuremath{B_\mathrm{K}}$.
One may use \eq{epsKcond} to determine lower bounds on one of the four
key input parameters as functions of the other three. In
{\fig{fig:vubcb-vcb}} the currently most interesting lower-bound
curve, introduced in \cite{hn1}, is displayed.
\begin{nfigure}
\includegraphics[clip,width=\miniwidth]{vubcb-vcb.eps}
\ncaption{ The lower-bound curves for $|V_{ub}/V_{cb}|$ as a function
of $V_{cb}$ for different values of the key input parameters
$m_t^\star$ and $\ensuremath{B_\mathrm{K}}$.}
\label{fig:vubcb-vcb}
\end{nfigure}
Further, we are interested in the shape of the unitarity triangle, i.e.\
the allowed values of the top corner $\left(\bar\rho,\bar\eta\right)$
\begin{equation}
\bar\rho +i \bar\eta = - V_{ud} V_{ub}^* / V_{cd} V_{cb}^* .
\end{equation}
Here, in addition to \eq{epsKcond}, we take into account the constraint
from \mmmix{B_d^0}
\begin{equation}
\ensuremath{\Delta m_\mathrm{B_d}} = \left|V_{td}\right|^2 \left|V_{tb}\right|^2
\frac{G_\mathrm{\!\scriptscriptstyle F}^2}{6\pi^2} \eta_\mathrm{B} \ensuremath{m_\mathrm{B}} \ensuremath{B_\mathrm{B_d}} \ensuremath{f_\mathrm{B_d}}^2 M_W^2 S\!\left(x_t\right)
\label{condBBmd}
\end{equation}
and \mmmix{B_s^0}
\begin{equation}
\ensuremath{\Delta m_\mathrm{B_s}} = \ensuremath{\Delta m_\mathrm{B_d}} \cdot \frac{\left|V_{ts}\right|^2}{\left|V_{td}\right|^2}
\cdot \frac{\ensuremath{m_\mathrm{B_s}} \ensuremath{f_\mathrm{B_s}}^2 \ensuremath{B_\mathrm{B_s}}}{\ensuremath{m_\mathrm{B_d}} \ensuremath{f_\mathrm{B_d}}^2 \ensuremath{B_\mathrm{B_d}}} .
\label{condBBms}
\end{equation}
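
As an aside, \eq{condBBms} together with the limit \eq{LimitDmBs}
already yields an upper bound on $\left|V_{td}/V_{ts}\right|$. The small
Python estimate below (ours) uses only the inputs quoted above, plus
nominal B-meson masses inserted by hand.
\begin{verbatim}
# Rough upper bound on |Vtd/Vts| from the lower limit on Delta m_Bs.
dm_bd     = 0.464          # ps^-1
dm_bs_min = 9.2            # ps^-1  (lower limit)
xi_sd     = 1.15           # f_Bs sqrt(B_Bs) / f_Bd sqrt(B_Bd), quenched
m_bd, m_bs = 5.279, 5.367  # GeV; nominal masses, inserted by hand

ratio_sq = (dm_bd / dm_bs_min) * (m_bs / m_bd) * xi_sd**2
print("|Vtd/Vts| < %.3f" % ratio_sq**0.5)   # roughly 0.26
\end{verbatim}
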
The allowed region for $\left(\bar\rho,\bar\eta\right)$ depends
strongly on the treatment of the errors. We use the following
procedure: first we apply \eq{epsKcond} to find the CKM phase $\delta$
of the standard parametrization from the input parameters, which are
scanned in a 1$\sigma$ ellipsoid of their errors. Second, we check
the consistency of the obtained phases $\delta$ with \mmmix{B_d^0}
{\eq{condBBmd}}. Here we treat the errors in a fully conservative
way. Last, we apply the constraint from the lower limit on $\ensuremath{\Delta m_\mathrm{B_s}}$
{\eq{condBBms}}. This constraint is very sensitive to the value of
the flavour-SU(3) breaking term $\ensuremath{f_\mathrm{B_s}}\sqrt{\ensuremath{B_\mathrm{B_s}}}/\ensuremath{f_\mathrm{B_d}}\sqrt{\ensuremath{B_\mathrm{B_d}}}$.
Using the quenched lattice QCD value \eq{xisd} one finds the allowed
values of $\left(\bar\rho,\bar\eta\right)$ as displayed in
{\fig{fig:dmbs-ut}}.
\begin{nfigure}
\includegraphics[clip,width=\miniwidth]{dmbs-ut.eps}
\ncaption{The allowed values of $\left(\bar\rho,\bar\eta\right)$. The
outer contour is obtained solely from $\left|\ensuremath{\eps_\mathrm{K}}\right|$ and
unitarity, the medium one takes into account \mmmix{B_d^0}, the inner curve
\mmmix{B_s^0}\ \eq{LimitDmBs} using the quenched lattice value \eq{xisd} for
illustrative reasons. If one would use a 10\% higher value for the
flavour SU(3) breaking as expected for an unquenched calculation no
effect is visible for the current limit \eq{LimitDmBs}.}
\label{fig:dmbs-ut}
\end{nfigure}
If one were to increase $\ensuremath{f_\mathrm{B_s}}\sqrt{\ensuremath{B_\mathrm{B_s}}}/\ensuremath{f_\mathrm{B_d}}\sqrt{\ensuremath{B_\mathrm{B_d}}}$ by 10\%, as
expected for an unquenched calculation, no effect is visible for the
current limit \eq{LimitDmBs}. This can be read off from
{\fig{fig:dmbs-ratio}}, where we plot the fraction of area cut out
from the allowed region of $\left(\bar\rho,\bar\eta\right)$ by the
$\ensuremath{\Delta m_\mathrm{B_s}}$ constraint as a function of $\ensuremath{\Delta m_\mathrm{B_s}}$.
\begin{nfigure}
\includegraphics[clip,width=\miniwidth]{dmbs-ratio-d1-area.eps}
\ncaption{ The fraction of the allowed area for
$\left(\bar\rho,\bar\eta\right)$ which is excluded by the constraint
from \mmmix{B_s^0}\ as a function of $\ensuremath{\Delta m_\mathrm{B_s}}$. The curve labelled ``quenched''
is obtained using \eq{xisd}, the line labelled ``unquenched (est.)''
uses a 10\% larger value.}
\label{fig:dmbs-ratio}
\end{nfigure}
From \fig{fig:dmbs-ut} we read off the allowed ranges of the
parameters describing the unitarity triangle:
\begin{equation}
\begin{array}{c}
40^\circ \leq \alpha \leq 101^\circ, \quad
57^\circ \leq \gamma \leq 127^\circ, \\
0.42 \leq \sin\!\left(2\beta\right) \leq 0.79 \\
-0.20 \leq \bar\rho \leq 0.22, \quad
0.25 \leq \bar\eta \leq 0.43.
\end{array}
\end{equation}
\section*{Acknowledgements}
I would like to thank Guido Martinelli for a clarifying discussion on
$\ensuremath{B_\mathrm{K}}$ at this conference.
\enlargethispage*{2ex}
\section*{References}
\section{Introduction}
There have been enormous advances in understanding the low-energy
properties of supersymmetric
gauge theories in the last couple of years \cite{SW}. In particular, for
$N=2$ gauge theories
with $N_f$ matter hypermultiplets, the exact solution for the
low-energy properties of
the Coulomb phase of the theory is given in principle by a
hyperelliptic curve
\cite{LERCHE}--\cite{MINAHAN}. In practice, however, a great
deal of additional work is required to extract the physics embodied
in the curve that describes the
theory in question. A given theory is characterised by a number of
moduli \cite{SW}--\cite{MINAHAN} which
are related to the vacuum expectation values of the scalar fields of
the
$N=2$ vector multiplet and the bare masses of the matter
hypermultiplets. If the scalar fields of the
matter hypermultiplets have vanishing expectation values,
then one is in the Coulomb phase of the
theory; otherwise one is in a Higgs or in a mixed phase
\cite{SW,SEIBERG}. This paper will only be
concerned with the Coulomb phase of asymptotically free $N=2$
supersymmetric theories.
The Seiberg-Witten (SW) period integrals
\begin{equation}
{\vec \Pi}=\pmatrix{{\vec a}_D\cr {\vec a}\cr}
\label{eq:zia}
\end{equation}
are related to the prepotential \cite{Matone}
${\cal F}({\vec a})$ characterising
the low-energy effective
Lagrangian by
\begin{equation}
a_D^i={\partial {\cal F}\over \partial a_i}
\label{eq:zib}
\end{equation}
One can use the global monodromy properties of ${\vec \Pi}$ to
essentially fix it, and then find
the prepotential ${\cal F}({\vec a})$ by integration. In practice,
one needs to construct
the SW
periods ${\vec \Pi}$ from the known, auxiliary hyperelliptic curve;
not an easy task for groups
with rank 2 or greater. One particular way of obtaining the
necessary information is to derive a
set of Picard-Fuchs (PF) equations for the SW period integrals.
The PF equations have been
formulated for $SU(2)$ and $SU(3)$ with $N_f=0$
\cite{KLEMM,BILAL}, and
for
$N_f\neq 0$ for massless
hypermultiplets $(m=0)$ \cite{ITO,RYANG}. The solutions to these
equations have been considered for $
SU(2)$
with $N_f=0, 1, 2, 3$, for $SU(3)$ with $N_f=0, 1, \ldots, 5 $,
and for other classical
gauge groups in \cite{KLEMM,ITO,RYANG,THEISEN}. In the
particular
case of massless
$SU(2)$, the
solutions to the PF equations are
given by hypergeometric functions \cite{BILAL,ITO}, while for
$N_f=0$
$SU(3)$, they are given (in
certain regions of moduli space) by Appell functions, which
generalise the hypergeometric function
\cite{KLEMM}. In other regions of the
$SU(3)$ moduli space, only double power-series solutions are
available. Thus, even given the
hyperelliptic curve characterising the Coulomb phase of an $N=2$
supersymmetric gauge theory,
considerable analysis is required to obtain the SW periods and the
effective Lagrangian in various
regions of moduli space.
The first task in such a programme is to obtain the PF equations
for the SW period integrals. Klemm
{\it et al\/.} \cite{KLEMM} describe a particular procedure which
enables them to obtain the PF equations
for $SU(3)$ with $N_f=0$, which in principle is applicable to other
theories as well. One would like to
obtain and solve the PF equations for a wide variety of theories
in order to explore the physics
contained in particular solutions, and also to obtain an
understanding of the general features of
$N=2$ gauge theories. Therefore it is helpful to have an efficient
method for constructing PF
equations from a given hyperelliptic curve, so that one can obtain
explicit solutions for groups
with rank greater than or equal to 2.
It is the purpose of this paper to present a systematic method for
finding the PF equations for the
SW periods which is particularly convenient for symbolic computer
computations, once the
hyperelliptic curve appropriate to a given $N=2$ supersymmetric
gauge theory is known. Our method
should be considered as an alternative to that of Klemm
{\it et al\/.} \cite{KLEMM}. A key element in our
treatment is the Weyl group symmetry underlying the algebraic
curve
that
describes the vacuum
structure of the effective $N=2$ SYM theory (with or without
massless hypermultiplets). For
technical reasons, we will not treat the theories with non-zero
bare masses, but leave a discussion
of such cases to subsequent work \cite{NOS}.
This paper is organised as follows. In section 2, our method is
described in general, so that given
a hyperelliptic curve for some $N=2$ theory, one will obtain a
coupled set of partial, first-order
differential equations for the periods. The method is further
elucidated in section 3,
where a technique is developed to obtain a decoupled set of partial,
second-order differential
equations satisfied by the SW periods. A number of technical
details pertaining to the application of our method to different
gauge groups (both classical and
exceptional) are also given in section 3. Some relevant examples in
rank 2 are worked out in
detail in section 4, for illustrative purposes. Our results are
finally summarised in section 5.
Appendix A deals with a technical proof that is omitted from the
body
of the text. An extensive
catalogue of results is presented in appendix B, including
$N_f\neq 0$ theories (but always with zero bare mass).
Explicit solutions to the PF equations themselves for rank greater
than 2 can be quite
complicated, so we will
restrict this paper to the presentation of the methods and a
catalogue of PF equations. Solutions to some
interesting cases will be presented in a sequel in preparation
\cite{NOS}. The methods of this paper
will have applications to a variety of questions, and are not
limited to the SW problem.
\renewcommand{\theequation}{2.\arabic{equation}}
\setcounter{equation}{0}
\section{The Picard-Fuchs Equations: Generalities}
\subsection{Formulation of the problem}
Let us consider the complex algebraic curve
\begin{equation}
y^2=p^2(x)-x^k\Lambda ^l
\label{eq:za}
\end{equation}
where $p(x)$ is the polynomial
\begin{equation}
p(x)=\sum_{i=0}^n u_i x^i=x^n+u_{n-2}x^{n-2}+\ldots+
u_1x+u_0
\label
{eq:zai}
\end{equation}
$p(x)$ will be the characteristic polynomial corresponding to
the fundamental representation of
the Lie algebra of the effective $N=2$ theory. We can therefore
normalise the leading coefficient
to 1, {\it i.e.}, $u_n=1$. We can also take $u_{n-1}=0$, as all
semisimple Lie algebras can be
represented by traceless matrices. The integers $k$, $l$ and $n$,
as
well as the required coefficients $u_i$ corresponding to various
choices of a gauge group and
matter content, have been
determined in
\cite{LERCHE}--\cite{DANI}.
From dimensional analysis we have
$0\leq k<2n$ \cite{SW}--\cite{MINAHAN}.
$\Lambda$ is the quantum scale of the effective $N=2$ theory.
Without loss of generality, we will
set $\Lambda=1$ for simplicity in what follows. If needed, the
required powers of $\Lambda$ can be
reinstated by imposing the condition of homogeneity of the
equations
with respect to the (residual)
$R$-symmetry.
Equation \ref{eq:za} defines a family of hyperelliptic Riemann
surfaces
$\Sigma_g$ of genus
$g=n-1$ \cite{FARKAS}. The moduli space of the curves \ref{eq:za}
coincides
with the moduli
space of quantum vacua of the $N=2$ theory
under consideration. The coefficients $u_i$ are called the
{\it moduli} of the surface. On
$\Sigma_g$ there are $g$ holomorphic 1-forms which, in the canonical
representation, can be expressed as
\begin{equation}
x^j\,{{\rm d} x\over y},\qquad j=0, 1, \ldots, g-1
\label{eq:zb}
\end{equation}
and are also called {\it abelian differentials of the first
kind}\/.
The following $g$ 1-forms are meromorphic on $\Sigma_g$ and
have
vanishing residues:
\begin{equation}
x^j\,{{\rm d} x\over y},\qquad j=g+1, g+2, \ldots, 2g
\label{eq:zbi}
\end{equation}
Due to this property of having zero residues, they are also called
{\it abelian differentials of
the second kind}\/. Furthermore, the 1-form
\begin{equation}
x^g{{\rm d} x\over y}
\label{eq:zza}
\end{equation}
is also meromorphic on $\Sigma_g$, but with non-zero residues.
Due to
this property of having non-zero
residues it is also called an {\it abelian differential of the third
kind}\/. Altogether, the
abelian differentials $x^j{\rm d} x/y$ in equations \ref{eq:zb} and
\ref{eq:zbi} will be denoted collectively by $\omega_j$, where
$j=0, 1, \ldots, 2g$, $j\neq g$. We define
the {\it basic range} $R$ to be $R=\{0, 1, \ldots, \check g,\ldots
2g\}$, where a check over $g$
means the value $g$ is to be omitted.
In effective
$N=2$ supersymmetric gauge theories, there exists a preferred
differential $\lambda_{SW}$, called
the Seiberg-Witten (SW) differential, with the following property
\cite{SW}: the electric and magnetic
masses
$a_i$ and $a^D_i$ entering the BPS mass formula are given by the
periods of $\lambda_{SW}$ along some
specified closed cycles $\gamma_i, \gamma^D_i \in H_1(\Sigma_g)$,
{\it i.e.},
\begin{equation}
a_i=\oint_{\gamma_i} \lambda_{SW},\qquad a^D_i=\oint_
{\gamma^D_i}
\lambda_{SW}
\label{eq:zc}
\end{equation}
The SW differential further enjoys the property that its modular
derivatives
$\partial\lambda_{SW}/\partial u_i$ are (linear combinations of the)
holomorphic
1-forms \cite{SW}. This ensures
positivity of the K\"ahler metric on moduli space. Specifically, for
the curve given in \ref{eq:za} we have
\cite{ARG,OZ,DANI}
\begin{equation}
\lambda_{SW}=\Big[{k\over 2} p(x) - x p'(x)\Big]{{\rm d} x\over y}
\label{eq:ze}
\end{equation}
In the presence of non-zero (bare) masses for matter
hypermultiplets, the SW differential picks up a non-zero residue
\cite{SW}, thus causing it to be of the third kind. Furthermore, when
the
matter hypermultiplets are massive, the SW differential is no longer
given by equation \ref{eq:ze}. In what follows $\lambda_{SW}$
will never be
of the third kind, as we are restricting ourselves to the pure SYM
theory, or to theories with massless matter.
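
For the simplest case, pure $SU(2)$ SYM ($n=2$, $k=0$, $\Lambda=1$), the
stated property of $\lambda_{SW}$ is easily verified by computer
algebra; the {\tt sympy} sketch below (ours, purely illustrative) checks
that $\partial\lambda_{SW}/\partial u_0$ equals the holomorphic
differential ${\rm d} x/y$ up to a total derivative.
\begin{verbatim}
# Pure SU(2): check that d(lambda_SW)/du0 = dx/y - d(x/y).
import sympy as sp

x, u0 = sp.symbols('x u0')
p = x**2 + u0
W = p**2 - 1                      # curve: y^2 = W  (Lambda = 1)
y = sp.sqrt(W)

lam_sw = -x * sp.diff(p, x) / y   # coefficient of dx in lambda_SW (k=0)
deriv  = sp.diff(lam_sw, u0)      # modular derivative, coefficient of dx
exact  = sp.diff(x / y, x)        # d(x/y): a total derivative
holo   = 1 / y                    # the holomorphic differential dx/y

print(sp.simplify(deriv - (holo - exact)))    # -> 0
\end{verbatim}
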
Let us define $W=y^2$, so equation \ref{eq:za} will read
\begin{equation}
W=p^2(x)-x^k=\sum_{i=0}^n\sum_{j=0}^n u_iu_j x^{i+j} -x^k
\label{eq:zf}
\end{equation}
Given any differential $x^m{\rm d} x/y$, with $m\geq 0$ an integer, let
us define its {\it
generalised $\mu$-period} $\Omega_m^{(\mu)}(u_i;\gamma)$
along a
fixed 1-cycle $\gamma\in
H_1(\Sigma_g)$ as the line integral \cite{MUKHERJEE}
\begin{equation}
\Omega_m^{(\mu)}(u_i; \gamma):=(-1)^{\mu +1}\Gamma
(\mu + 1)
\oint_{\gamma}{x^m\over W^{\mu + 1}}\,{\rm d} x
\label{eq:zg}
\end{equation}
In equation \ref{eq:zg}, $\Gamma(\mu)$ stands for Euler's
gamma function,
while $\gamma\in H_1(\Sigma_g)$ is
any closed 1-cycle on the surface. As $\gamma$ will be arbitrary
but otherwise kept fixed,
$\gamma$ will not appear explicitly in the notation. The {\it usual}
periods of the Riemann
surface (up to an irrelevant normalisation factor) are of course
obtained upon setting $\mu =-1/2$, taking $m=0,1,\ldots, g-1$,
and
$\gamma$ to run over a canonical (symplectic) basis of
$H_1(\Sigma_g)$ \cite{FARKAS}. However,
we will find it convenient to work with an arbitrary $\mu$ which
will only be set equal to $-1/2$ at
the very end.
The objects $\Omega_m^{(\mu)}$, and the differential
equations they satisfy (called
Picard-Fuchs (PF) equations), will be our prime focus of attention.
With abuse of language, we will
continue to call the $\Omega_m^{(\mu)}$ {\it periods}, with the
added
adjectives {\it of the first,
second}, or {\it third kind}, if $m=0,
\ldots, g-1$, $m=g+1, \ldots, 2g$, or $m=g$, respectively.
\subsection{The recursion relations}
We now proceed to derive a set of recursion relations that will be
used to set up the PF equations.
{}From equation \ref{eq:zf} one easily finds
\begin{eqnarray}
{\partial W\over\partial x}&=&2 n x^{2n-1}+\sum_{j=0}^{n-1} \sum_{l=0}
^{n-1}(j+l)
u_ju_l x^{j+l-1}
+2\sum_{j=0}^{n-1}(n+j) u_j x^{n+j-1}-kx^{k-1}\nonumber\\
{\partial W\over\partial u_i}&=&2\sum_{j=0}^n u_j x^{i+j}
\label{eq:zh}
\end{eqnarray}
Solve for the highest power of $x$ in $\partial W/\partial x$, {\it i.e.}, $x^{2n-1}$,
in equation
\ref{eq:zh}, and substitute the result into
\ref{eq:zg} to find
\begin{eqnarray}
\Omega_{m+2n-1}^{(\mu +1)} & = &
(-1)^{\mu +2}\Gamma(\mu +2)\oint_{\gamma} {x^m\over W^
{\mu +2}}
\Big[{1\over 2n}{\partial W\over\partial x}\nonumber \\
& - &{1\over 2n}\sum_{j=0}^{n-1}\sum_{l=0}^{n-1}(j+l)u_ju_lx^{j+l-1}-
{1\over
n}\sum_{j=0}^{n-1}(n+j) u_j x^{n+j-1}+{k\over 2n}x^{k-1}\Big]=
\nonumber \\
& - &{m\over 2n} \Omega_{m-1}^{(\mu)}-{1\over
2n}\sum_{j=0}^{n-1}\sum_{l=0}^{n-1}(j+l)u_ju_l\Omega
_{m+j+l-1}^
{(\mu +1)}\nonumber \\
& - &{1\over n}\sum_{j=0}^{n-1}(n+j)u_j\Omega_{m+n+j-1}
^{(\mu+1)}
+{k\over 2n}\Omega_{m+k-1}^{(\mu + 1)}
\label{eq:zi}
\end{eqnarray}
To obtain the last line of equation \ref{eq:zi}, an integration
by parts has
been
performed and a total
derivative dropped. If $m\neq 0$, shift $m$ by one unit to
obtain
from equation \ref{eq:zi}
\begin{equation}
\Omega_m^{(\mu)}={1\over m+1}\Big[k\Omega_{m+k}
^{(\mu +1)}-2n \Omega_
{m+2n}^{(\mu +1)}
-\sum_{j=0}^{n-1}\sum_{l=0}^{n-1}(j+l)u_ju_l\Omega
_{m+j+l}^{(\mu +1)}
-2\sum_{j=0}^{n-1}(n+j)u_j\Omega_{m+n+j}^{(\mu +1)}
\Big]
\label{eq:zj}
\end{equation}
Next we use equation \ref{eq:zf} to compute
\begin{eqnarray}
& - & (1+\mu )\Omega_m^{(\mu)}=(-1)^{\mu +2}\Gamma
(\mu+2) \oint_{\gamma}
\frac{x^m W}{W^{\mu+2}} = \nonumber \\
& = &(-1)^{\mu +2}\Gamma(\mu+2)\oint_{\gamma}{x^m \over
W^{\mu+2}}
\Big[\sum_{l=0}^n\sum_{j=0}^n u_lu_j x^{l+j} -x^k\Big]=
\nonumber \\
& = & \sum_{l=0}^n\sum_{j=0}^n u_lu_j\Omega_{m+l+j}
^{(\mu +1)}-\Omega_
{m+k}^{(\mu +1)}
\label{eq:zk}
\end{eqnarray}
and use this to solve for the period with the highest value of the
lower index, $m+2n$, to get
\begin{equation}
\Omega_{m+2n}^{(\mu+1)}=\Omega_{m+k}^{(\mu+1)}-
\sum_{j=0}^{n-1}\sum_
{l=0}^{n-1} u_ju_l
\Omega_{m+j+l}^{(\mu+1)} -2\sum_{j=0}^{n-1} u_j
\Omega_{m+n+j}
^{(\mu+1)} -(1+\mu)\Omega_m^{(\mu)}
\label{eq:zl}
\end{equation}
Replace $\Omega_{m+2n}^{(\mu+1)}$ in equation \ref{eq:zj}
with its value from
\ref{eq:zl} to arrive at
\begin{eqnarray}
\lefteqn{
\Omega_m^{(\mu)}=
{1\over m+1-2n(1+\mu)}\Big[(k-2n)\Omega_{m+k}^{(\mu+1)}}
\nonumber \\
&&+\sum_{j=0}^{n-1}\sum_{l=0}^{n-1}(2n-j-l)u_ju_l
\Omega_{m+j+l}^
{(\mu+1)}+
2\sum_{j=0}^{n-1}(n-j)u_j\Omega_{m+n+j}^{(\mu+1)}\Big]
\label{eq:zm}
\end{eqnarray}
Finally, take $\Omega_m^{(\mu)}$ as given in equation \ref{eq:zm}
and
substitute it
into \ref{eq:zl} to obtain an equation
involving $\mu +1$ on both sides. After shifting $m\rightarrow m-2n$,
one gets
\begin{eqnarray}
\Omega_{m}^{(\mu+1)} & = & {1\over m+1-2n(\mu+2)}\Big[\big(m-2n
+1-k (1+\mu)
\big)\Omega_{m+k-2n}^{(\mu+1)}\nonumber \\
&+ & 2\sum_{j=0}^{n-1}\big((1+\mu)(n+j)-(m-2n+1)\big)u_j
\Omega_{m-n+j}^
{(\mu+1)}\nonumber \\
&+ & \sum_{j=0}^{n-1}\sum_{l=0}^{n-1}\big((j+l)(1+\mu)-
(m-2n+1)\big)u_j
u_l\Omega_{m+j+l-2n}^{(\mu+1)}
\Big]
\label{eq:zmk}
\end{eqnarray}
We now set $\mu=-1/2$ and
collect the two recursion relations
\ref{eq:zm}
and \ref{eq:zmk}
\begin{eqnarray}
\Omega_m^{(-1/2)} & = & {1\over m-(n-1)}\Big[(k-2n)
\Omega_{m+k}^
{(+1/2)}\nonumber \\
&+ & \sum_{j=0}^{n-1}\sum_{l=0}^{n-1}(2n-j-l)u_ju_l
\Omega_{m+j+l}^
{(+1/2)}+
2\sum_{j=0}^{n-1}(n-j)u_j\Omega_{m+n+j}^{(+1/2)}
\Big]
\label{eq:zn}
\end{eqnarray}
and
\begin{eqnarray}
\Omega_{m}^{(+1/2)}& = & {1\over
m+1-3n}\Big[\big(m-2n+1-{k\over 2}\big)\Omega
_{m+k-2n}^{(+1/2)}\nonumber \\
&+ & \sum_{j=0}^{n-1}\big(n+j-2(m-2n+1)\big)u_j
\Omega_{m-n+j}^{(+1/2)
}
\nonumber \\
&+ & \sum_{j=0}^{n-1}\sum_{l=0}^{n-1}\big({1\over
2}(j+l)-(m-2n+1)\big)u_ju_l\Omega_{m+j+l-2n}^{(+1/2)}
\Big]
\label{eq:zo}
\end{eqnarray}
Let us now pause to explain the significance of equations
\ref{eq:zn} and
\ref{eq:zo}. The existence of a
particular symmetry of the curve under consideration may simplify
the analysis of these equations.
For the sake of clarity, we will for the moment assume that equation
\ref{eq:zn} will not have to be
evaluated at
$m=g=n-1$, where it blows up, and that its right-hand side will not
contain occurrences of the
corresponding period $\Omega_{n-1}^{(+1/2)}$. A similar
assumption
will be made regarding equation
\ref{eq:zo}, {\it i.e.}, it will not have to be evaluated at $m=3n-1$, and its
right-hand side will not contain the
period $\Omega_{n-1}^{(+1/2)}$. In other words, we are for the
moment assuming that we can
restrict ourselves to the subspace of differentials $\omega_j$
where $j\in R$ (or likewise for the corresponding
periods $\Omega_j
^{(\pm1/2)}$). This is the subspace of
differentials of the first and second kinds, {\it i.e.}, the 1-forms with
vanishing residue. For a given curve, the particular
subspace of differentials that one
has to restrict to depends on the corresponding gauge group; this
will be explained in section 3, where these issues are dealt
with. For the sake of the present discussion, the particular subspace
of differentials that we are
restricting to only serves an illustrative purpose.
Under such assumptions, equation \ref{eq:zn}
expresses $\Omega_m^{(-1/2)}$ as a linear combination, with
$u_i$-dependent coefficients, of
periods $\Omega_l^{(+1/2)}$. As $m$ runs over $R$, the linear
combination in the
right-hand side of equation \ref{eq:zn} contains increasing values
of $l$,
which will eventually lie outside
$R$. We can bring them back into $R$ by means of equation
\ref{eq:zo}, which is a recursion relation expressing
$\Omega_l^{(+1/2)}$ as a linear combination (with $u_i$-dependent
coefficients) of
$\Omega_l^{(+ 1/2)}$ with lower values of the subindex. Repeated
application of equations \ref{eq:zn} and
\ref{eq:zo} will eventually allow one to express $\Omega_m^{(-1/2)}$,
where
$m\in R$, as a linear
combination of
$\Omega_l^{(+1/2)}$, with $l\in R$. The coefficients entering those
linear combinations will be
some polynomials in the moduli $u_i$, in principle computable using
the above recursion relations.
Let us call $M^{(-1/2)}$ the matrix of such coefficients
\cite{MUKHERJEE}. Suppressing
lower indices for
simplicity, we have
\begin{equation}
\Omega^{(-1/2)}= M^{(-1/2)}\cdot \Omega^{(+1/2)}
\label{eq:zp}
\end{equation}
where the $\Omega_l^{(\pm 1/2)}$, $l\in R$, have been arranged
as a
column vector. We will from now
on omit the superscript $(-1/2)$ from $M^{(-1/2)}$, with the
understanding that the value $\mu=-1/2$
has been fixed.
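The reduction just described is straightforward to automate in a computer algebra system. As an illustration (this is a minimal {\tt sympy} sketch of ours, not the code actually used for the results quoted below), and under the assumption that the chosen index set is stable under the recursions and avoids the blow-up values $m=n-1$ and $m=3n-1$ (as will be the case, {\it e.g.}, for the even subspaces of section 3.1), the matrix $M$ can be assembled from equations \ref{eq:zn} and \ref{eq:zo} as follows:
\begin{verbatim}
import sympy as sp

def build_M(u, k, basic):
    """u = [u_0, ..., u_n] with u[n] = 1, so that W = p(x)^2 - x^k;
    'basic' is the ordered list of indices labelling rows and columns of M."""
    n = len(u) - 1

    def plus_half(m):
        # Omega_m^{(+1/2)} as {l: coeff}, l in 'basic', by repeated use of
        # the second recursion relation above
        if m in basic:
            return {m: sp.Integer(1)}
        pre = sp.Rational(1, m + 1 - 3*n)
        terms = [(m + k - 2*n, sp.Integer(m - 2*n + 1) - sp.Rational(k, 2))]
        terms += [(m - n + j, (n + j - 2*(m - 2*n + 1))*u[j]) for j in range(n)]
        terms += [(m + j + l - 2*n,
                   (sp.Rational(j + l, 2) - (m - 2*n + 1))*u[j]*u[l])
                  for j in range(n) for l in range(n)]
        out = {}
        for idx, c in terms:
            if c == 0:
                continue
            for l2, c2 in plus_half(idx).items():
                out[l2] = sp.expand(out.get(l2, 0) + pre*c*c2)
        return out

    def minus_half_row(m):
        # Omega_m^{(-1/2)} in terms of Omega_l^{(+1/2)}, l in 'basic',
        # via the first recursion relation above
        pre = sp.Rational(1, m - (n - 1))
        terms = [(m + k, sp.Integer(k - 2*n))]
        terms += [(m + j + l, (2*n - j - l)*u[j]*u[l])
                  for j in range(n) for l in range(n)]
        terms += [(m + n + j, 2*(n - j)*u[j]) for j in range(n)]
        row = {l: sp.Integer(0) for l in basic}
        for idx, c in terms:
            if c == 0:
                continue
            for l2, c2 in plus_half(idx).items():
                row[l2] = sp.expand(row[l2] + pre*c*c2)
        return [row[l] for l in basic]

    return sp.Matrix([minus_half_row(m) for m in basic])
\end{verbatim}
The same {\tt plus\_half} reduction also produces the matrices of modular derivatives that will be introduced in the next subsection.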
\subsection{Derivation of the Picard-Fuchs equations}
Having derived the necessary recursion relations, we can now
start taking modular derivatives of the periods.
{}From equation \ref{eq:zh} and the definition of the periods
\ref{eq:zg} we have
\begin{equation}
{\partial \Omega_m^{(\mu)}\over\partial u_i} =(-1)^{\mu+2}\Gamma
(\mu+2)\oint_{\gamma}
{x^m\over W^{\mu+2}}{\partial W\over\partial u_i}=
2\sum_{j=0}^n u_j \Omega_{m+i+j}^{(\mu+1)}
\label{eq:zq}
\end{equation}
Again, the right-hand side of equation \ref{eq:zq} will eventually
contain
values of the lower index outside
the basic range $R$, but use of the recursion relations above will
reduce it to a linear combination
of periods
$\Omega_l^{(\mu+1)}$ with $l\in R$. The
coefficients will
be polynomials in the moduli $u_i$; let
$D(u_i)$ be this matrix of coefficients. Setting $\mu=-1/2$, we end
up with a system of equations
which, in matrix form, reads
\begin{equation}
{\partial\over\partial u_i}\Omega^{(-1/2)}=D(u_i)\cdot \Omega^{(+1/2)}
\label{eq:zr}
\end{equation}
As a second assumption to be justified presently, suppose for the
moment that the matrix $M$ in
equation \ref{eq:zp} can be inverted to solve for
$\Omega^{(+1/2)}$ as a function of $\Omega^{
(-1/2)}$.
Substituting
the result into equation \ref{eq:zr}, one gets
\begin{equation}
{\partial\over\partial u_i}\Omega^{(-1/2)}=D(u_i)\cdot M^{-1}\cdot
\Omega^{(-1/2)}
\label{eq:zs}
\end{equation}
Equation \ref{eq:zs} is a coupled system of first-order, partial
differential equations for the periods
$\Omega^{(-1/2)}$. The coefficients are rational functions
of the
moduli $u_i$, computable from a
knowledge of $W$ and the recursion relations derived above. In
principle, integration of this
system of equations yields the periods as functions of the moduli
$u_i$. The particular 1-cycle
$\gamma\in H_1(\Sigma_g)$ being integrated over appears
in the specific choice of
boundary conditions that one makes. In practice, however, the
fact that
the system \ref{eq:zs} is coupled makes it very difficult to solve. A
possible strategy is to concentrate
on one particular period and try to obtain a reduced system of
equations satisfied by it.
Decoupling of the equations may be achieved at the cost of
increasing the order of derivatives. Of
course, in the framework of effective $N=2$ SYM theories,
one is especially interested in
obtaining a system of equations satisfied by the periods of the SW
differential $\lambda_{SW}$.
In what follows we will therefore concentrate on solving the problem
for the SW periods within the subspace of differentials
with vanishing residue, as assumed in section 2.2. In order to do so,
the first step is to include the differential $\lambda_{SW}$ as a
basis vector by means of
a change of basis. From equations
\ref{eq:zai} and
\ref{eq:ze} we have
\begin{equation}
\lambda_{SW}=\sum_{j=0}^n\big({k\over 2}-j\big)u_jx^j\,
{{\rm d} x\over y}
\label{eq:zt}
\end{equation}
We observe that $\lambda_{SW}$ is never of the third kind,
because
$u_g=u_{n-1}=0$. As $k<2n$ and $u_n=1$, $\lambda_{SW}$
always
carries a nonzero component along $x^{g+1}{\rm d} x/y$, so we
can take the new basis of differentials of the first and second
kinds to be spanned by
\begin{equation}
x^i\, {{\rm d} x\over y}, \quad {\rm for} \;i\in\{0, 1, \ldots,
g-1\},
\quad\lambda_{SW}, \quad x^j \,{{\rm d} x\over
y},\quad {\rm for}\;j\in\{g+2,\ldots, 2g\}
\label{eq:zu}
\end{equation}
We will find it convenient to arrange the new basic differentials
in this order. Call $K$ the
matrix implementing this change of basis from the original one in
equations \ref{eq:zb} and \ref{eq:zbi} to the above in equation
\ref{eq:zu}; one easily
checks that ${\rm det}\, K\neq 0$. If $\omega$ and $\pi$ are
column
vectors representing the
old and new basic differentials, respectively, then from the matrix
expression
\begin{equation}
K\cdot\omega =\pi
\label{eq:zv}
\end{equation}
there follows a similar relation for the corresponding periods,
\begin{equation}
K\cdot\Omega =\Pi
\label{eq:zw}
\end{equation}
where $\Pi$ denotes the periods associated with the new basic
differentials, {\it i.e.}, those defined in equation \ref{eq:zu}. Converting
equation \ref{eq:zs} to the new basis is immediate:
\begin{equation}
{\partial\over\partial u_i}\Pi^{(-1/2)}=\Big[K\cdot D(u_i)\cdot M^{-1}
\cdot K^{-1}
+{\partial K\over\partial u_i}\cdot
K^{-1}
\Big]\, \Pi^{(-1/2)}
\label{eq:zx}
\end{equation}
Finally, define $U_i$ to be
\begin{equation}
U_i:=\Big[K\cdot D(u_i)\cdot M^{-1}\cdot K^{-1} +
{\partial K\over\partial u_i}
\cdot K^{-1}\Big]
\label{eq:zy}
\end{equation}
in order to reexpress equation \ref{eq:zx} as
\begin{equation}
{\partial\over\partial u_i}\Pi^{(-1/2)}=U_i\cdot \Pi^{(-1/2)}
\label{eq:zz}
\end{equation}
The matrix $U_i$ is computable from the above; its entries are
rational functions of the moduli
$u_i$.
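In practice, once $K$, $D(u_i)$, $M$ and $\partial K/\partial u_i$ are available, the assembly of $U_i$ is a one-line matter; a minimal {\tt sympy} sketch of ours (taking hypothetical symbolic matrices as input) reads:
\begin{verbatim}
import sympy as sp

def U_matrix(K, D_i, M, dK_dui):
    # U_i = K * D(u_i) * M^(-1) * K^(-1) + (dK/du_i) * K^(-1),
    # i.e. the definition of U_i given above
    Kinv = K.inv()
    Ui = K*D_i*M.inv()*Kinv + dK_dui*Kinv
    return Ui.applyfunc(sp.cancel)   # entries are rational in the moduli
\end{verbatim}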
The invertibility of $M$ remains to be addressed. Clearly, as the
definition of the $M$ matrix
requires restriction to
an appropriate subspace of differentials,
this issue will have to be dealt
with on a case-by-case basis. However, some general arguments
can be
put forward. From \cite{FULTON} we
have the following decomposition for the discriminant $\Delta(u_i)$
of the curve:
\begin{equation}
\Delta (u_i)=a(x) W(x) + b(x) {\partial W(x)\over\partial x}
\label{eq:zzi}
\end{equation}
where $a(x)$ and $b(x)$ are certain polynomials in $x$. This
property
is used in
\cite{KLEMM} as follows. Taking the modular derivative
$\partial/\partial u_i$ of the period integral causes the
power in the denominator to increase by one unit,
as in equation \ref{eq:zq}, where
$\mu+1\rightarrow \mu+2$. In \cite{KLEMM},
this exponent is made to decrease again by use of the formula
\begin{equation}
{\phi(x)\over W^{\mu/2}}={1\over \Delta(u_i)}{1\over W^{\mu/2-1}}
\Big(a\phi+{2\over\mu-2}{{\rm d}\over{\rm d} x}(b\phi)\Big)
\label{eq:zzii}
\end{equation}
where $\phi(x)$ is any polynomial in $x$. Equation \ref{eq:zzii} is
valid only under the integral sign. It ceases to hold when the curve
is singular, {\it i.e.},
at those
points of moduli space such that $\Delta(u_i) =0$. The
defining equation of the
$M$ matrix, \ref{eq:zp}, is equivalent to equation \ref{eq:zzii},
when the latter
is read from right to left, {\it i.e.},
in decreasing order of $\mu$. We therefore expect $M$ to be
invertible
except at the singularities
of moduli space, {\it i.e.}, on the zero locus of $\Delta(u_i)$. A
proof of this fact is given in
appendix A.
To further elaborate on the above argument, let us observe that the
homology cycles of
$H
_1(\Sigma_g)$ are defined so as to encircle the zeroes of $W$.
A vanishing discriminant
$\Delta(u_i)=0$ at some given point of moduli space implies the
vanishing of the homology cycle that
encircles the two collapsing roots,
{\it i.e.}, a degeneration of $\Sigma_g$. With this vanishing cycle,
some differential in the cohomology of $\Sigma_g$
disappears as well. We therefore expect the PF
equations to exhibit some type of singular behaviour when
$\Delta(u_i)=0$, as they in fact do.
Equation \ref{eq:zz} is the most general expression that one can derive
without making any specific
assumption as to the nature of the gauge group or the (massless)
matter content of the theory. From
now on, however, a case-by-case analysis is necessary, as required
by the different gauge groups.
This is natural, since the SW differential depends on the choice of
a gauge group and matter
content. However, some general features do emerge, which allow
one to observe a general pattern, as
will be explained in the following section.
\section{Decoupling the Picard-Fuchs Equations}
\subsection{The $B_r$ and $D_r$ gauge groups}
\renewcommand{\theequation}{3.\arabic{equation}}
\setcounter{equation}{0}
Let us first consider the $SO(2r+1)$ and $SO(2r)$ gauge
theories\footnote{Although there exists a well defined relation
between
the rank
$r$ and the genus $g=n-1$ of the corresponding curve, we will not
require it, and will continue to use $n$, rather than its expression
as a function of $r$. For the gauge groups in this section, we have
$g=2r-1$.}, either for the pure SYM case, or in the presence of
massless matter hypermultiplets in
the fundamental representation.
We restrict ourselves to asymptotically free theories. From
\cite{BRAND,DANI,DANII}, the polynomial $p(x)$ of equation
\ref{eq:zai} is
even, as
$u_{2j+1}=0$. Therefore, the curves
\ref{eq:za} describing moduli space are invariant under an
$x\rightarrow -x$ symmetry.
This invariance is a consequence of two facts: a
${\bf Z}_2$ factor present in the
corresponding Weyl groups, which causes the odd Casimir operators
of the group to vanish, and the
property that the Dynkin index of the fundamental representation is
even. This symmetry turns out
to be useful in decoupling the PF equations, as it determines the
right subspace of differentials
that one must restrict to.
Call a differential $\omega_m=x^m{\rm d} x /y$ {\it even} (respectively,
{\it odd}\/) if
$m$ is even (respectively, odd).\footnote{This definition excludes the
${\rm d} x$ piece of the differential;
thus, {\it e.g.}, $x\,{\rm d} x/y$ is defined to be odd. Obviously, this
is purely a matter of convention.} We will thus talk about even or
odd periods accordingly.
{}From equation \ref{eq:zt} we have that $\lambda_{SW}$ is even
for these
gauge groups. One also sees from equations \ref{eq:zn} and
\ref{eq:zo} that the
recursion relations involved in
deriving the matrices $D(u_i)$ and $M$ do not mix even with odd
periods, as they always have a step of
two units. This is a natural decoupling which strongly suggests
omitting the odd and restricting to
the even periods, something we henceforth do. In particular, the
matrix $M$ of equation \ref{eq:zp} will
also be restricted to this even subspace; we will check that
${\rm det}\, M$ then turns out to be
proportional to (some powers of) the factors of $\Delta(u_i)$.
The genus $g=n-1$ is always
odd, so the periods $\Omega_{n-1}^{(\pm1/2)}$
do not appear after such a restriction. Another
consequence is that the values $m=n-1$ and $m=3n-1$ at which
equations \ref{eq:zn} and \ref{eq:zo} blow
up are automatically jumped over by the recursions.
The even basic differentials of equation \ref{eq:zu} are
\begin{equation}
{{\rm d} x\over y}, \quad x^2{{\rm d} x\over y},\quad \ldots, x^{g-1}
{{\rm d} x\over y},
\quad \lambda_{SW},\quad x^{g+3}{{\rm d} x\over
y},\ldots, x^{2g}{{\rm d} x\over y}
\label{eq:zaai}
\end{equation}
and there are $n$ of them. We have that $n$ itself is even, {\it i.e.},
the subspace of even differentials is
even-dimensional, so $n=2s$ for some $s$.\footnote{The precise
value of
$s$ can be given as a function of the gauge group, {\it i.e.}, as a
function of $n$, but it is irrelevant to the present discussion.}
According to the notation
introduced in equation \ref{eq:zv}, let
us denote the basic differentials of equation \ref{eq:zaai} by
\begin{equation}
\{\pi_1, \ldots, \pi_{s}, \pi_{s+1}=\lambda_{SW},
\pi_{s+2},
\ldots,
\pi_{2s}\}
\label{eq:zaaii}
\end{equation}
where, for the sake of clarity, indices have been relabelled so as
to run from 1 to $2s$.
In equation \ref{eq:zaaii}, all differentials preceding
$\lambda_{SW}=\pi_
{s+1}$ are of
the first kind; from
$\lambda_{SW}$ onward, all differentials are of the second kind.
The periods corresponding to the
differentials of equation \ref{eq:zaaii} are
\begin{equation}
\{\Pi_1, \ldots, \Pi_{s}, \Pi_{s+1}=\Pi_{SW}, \Pi_{s+2},
\ldots,\Pi_
{2s}\}
\label{eq:zabi}
\end{equation}
Notice that this restriction to the even
subspace is compatible with the change of
basis implemented by
$K$,
{\it i.e.}, $K$ itself did not mix even with odd differentials.
Once restricted to the even subspace, equation \ref{eq:zz} reads
\begin{equation}
{\partial\over\partial u_i}
\left( \begin{array}{c}
\Pi_1\\
\vdots\\
\Pi_{s}\\
\Pi_{s+1}\\
\vdots\\
\Pi_{2s}
\end{array} \right)
=
\left( \begin{array}{cccccc}
U^{(i)}_{11}&\ldots&U^{(i)}_{1s}&U^{(i)}_{1 s+1}&\ldots&
U^{(i)}_{1 2s}
\\
\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
U^{(i)}_{s1}&\ldots&U^{(i)}_{ss}&U^{(i)}_{s s+1}&\ldots&
U^{(i)}_
{s
2s}\\
0&1&0&0&\ldots&0\\
\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
U^{(i)}_{2s1}&\ldots&U^{(i)}_{2ss}&U^{(i)}_{2s s+1}&
\ldots&U^{(i)}_
{2s 2s}
\end{array}
\right)
\left( \begin{array}{c}
\Pi_1\\
\vdots\\
\Pi_s\\
\Pi_{s+1}\\
\vdots\\
\Pi_{2s}
\end{array}
\right)
\label{eq:zaa}
\end{equation}
where the $(s+1)$-th row is everywhere zero, except at the $i$-th
position, $1\leq i\leq s$,
whose entry is 1, so that
\begin{equation}
{\partial\over\partial u_i}\Pi_{s+1}={\partial\over\partial u_i}\Pi_{SW}=\Pi_i,
\qquad 1\leq i\leq s
\label
{eq:zab}
\end{equation}
Equation \ref{eq:zab} follows from the property that the SW differential
$\lambda_{SW}=\pi_{s+1}$ is a {\it
potential} for the even holomorphic differentials, {\it i.e.},
\begin{equation}
{\partial\lambda_{SW}\over\partial u_i}={\partial\pi_{s+1}\over\partial u_i}=\pi_i,\qquad
1\leq i
\leq s
\label{eq:zac}
\end{equation}
since integration of equation \ref{eq:zac} along some 1-cycle produces
the corresponding
statement for the periods. An analogous property for the odd periods
does not hold, as all the odd
moduli $u_{2i+1}$ vanish by symmetry.
To proceed further, consider the $U_i$ matrix in equation \ref{eq:zaa}
and block-decompose it as
\begin{equation}
U_i=\pmatrix{A_i&B_i\cr C_i&D_i}
\label{eq:zacc}
\end{equation}
where all four blocks $A_i$, $B_i$, $C_i$ and $D_i$ are $s\times s$.
Next
take the equations for the derivatives of holomorphic periods
$\partial\Pi_j/\partial u_i$, $1\leq j\leq s$,
and solve them for the meromorphic periods $\Pi_j$, $s+1\leq j\leq 2s$,
in terms of the holomorphic
ones and their modular derivatives. That is, consider
\begin{equation}
{\partial\over\partial u_i}\pmatrix{\Pi_1\cr\vdots\cr\Pi_s\cr}- A_i\pmatrix{\Pi_1
\cr\vdots\cr\Pi_s\cr}=
B_i\pmatrix{\Pi_{s+1}\cr\vdots\cr\Pi_{2s}\cr}
\label{eq:zad}
\end{equation}
Solving equation \ref{eq:zad} for the meromorphic periods involves
inverting
the matrix $B_i$. Although we lack a
formal proof that $B_i$ is invertible, ${\rm det}\, B_i$ turns out
to vanish on the zero locus
of the discriminant of the curve in all the cases catalogued in
appendix B, so $B_i$ will be invertible
except at the singularities of moduli space. From equation \ref{eq:zad},
\begin{equation}
\pmatrix{\Pi_{s+1}\cr\vdots\cr\Pi_{2s}\cr}=
B_i^{-1}\cdot\Big({\partial\over\partial u_i}-A_i\Big)\pmatrix{\Pi_1\cr\vdots\cr
\Pi_s\cr}
\label{eq:zae}
\end{equation}
We are interested in the SW period $\Pi_{s+1}$ only, so we discard
all equations in \ref{eq:zae} but the
first one:
\begin{equation}
\Pi_{s+1}=(B_i^{-1})_1^r \,{\partial\Pi_r\over\partial u_i} - (B_i^{-1}A_i)_1^r \,
\Pi_r
\label{eq:zaf}
\end{equation}
where a sum over $r$, $1\leq r\leq s$, is implicit in equation
\ref{eq:zaf}. Finally, as the right-hand
side of equation
\ref{eq:zaf} involves nothing but holomorphic periods and modular
derivatives thereof, we can use \ref{eq:zab} to
obtain an equation involving the SW period $\Pi_{s+1}=\Pi_{SW}$
on both sides:
\begin{equation}
\Pi_{SW}=(B_i^{-1})_1^r \,{\partial^2\Pi_{SW}\over\partial u_i\partial u_r} -
(B_i^{-1}
A_i)_1^r \,{\partial\Pi_{SW}\over\partial u_r}
\label{eq:zag}
\end{equation}
Equation \ref{eq:zag} is a partial differential equation, second-order
in modular derivatives, which is
completely decoupled, {\it i.e.}, it
involves nothing but the SW period
$\Pi_{SW}$. The number of such
equations equals the number of moduli, thus giving a decoupled
system of partial, second-order
differential equations satisfied by the SW period: the desired PF
equations for the $N=2$ theory.
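This last step is also easy to automate once the $U_i$ are known; the following is a minimal {\tt sympy} sketch of ours of the block decomposition \ref{eq:zacc} and of the extraction of the coefficients entering equation \ref{eq:zag}:
\begin{verbatim}
import sympy as sp

def sw_pf_coefficients(U_i, s):
    # Block-decompose U_i = [[A_i, B_i], [C_i, D_i]] into s x s blocks and
    # return c[r] = (B_i^{-1})_{1r} and d[r] = -(B_i^{-1} A_i)_{1r}, so that
    # Pi_SW = sum_r c[r] d^2 Pi_SW/(du_i du_r) + sum_r d[r] d Pi_SW/du_r .
    A = U_i[:s, :s]
    B = U_i[:s, s:2*s]
    Binv = B.inv()            # invertible away from the zero locus of Delta(u_i)
    BinvA = Binv*A
    c = [sp.cancel(Binv[0, r]) for r in range(s)]
    d = [sp.cancel(-BinvA[0, r]) for r in range(s)]
    return c, d
\end{verbatim}
The PF operator associated with the modulus $u_i$ is then $\sum_r c_r\,\partial^2/\partial u_i\partial u_r+\sum_r d_r\,\partial/\partial u_r-1$, acting on $\Pi_{SW}$.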
\subsection{The $C_{r}$ gauge groups}
Let us consider the $N_f>0$ $Sp(2r)$ gauge theory as described by the
curves given in \cite{EGUCHI,SASAKURA}.\footnote{The pure
$Sp(2r)$ SYM
theory can be described by a curve whose polynomial $p(x)$ is even
\cite{IRAN}, so it can be studied by the methods of section
3.1.}${^,}$\footnote{For $Sp(2r)$, we have $g=2r$.}
As the Weyl group of $Sp(2r)$ contains a ${\bf
Z}_2$ factor, all the odd Casimir operators vanish, {\it i.e.}, we have
$u_{2j+1}=0$. However, a close examination reveals that the curves
given in \cite{EGUCHI,SASAKURA} contain a factor of $x^2$ in the
left-hand side. Pulling this factor to the right-hand side has the
effect of causing the resulting polynomial $p(x)$ to be {\it odd}
under $x\rightarrow -x$. The genus $g=n-1$ will
now be even because the degree $n$ of this resulting $p(x)$ will be
odd. The ${\bf Z}_2$ symmetry dictated by the Weyl group is {\it not}
violated, as the complete curve $y^2=p^2(x)-x^k\Lambda ^l$ continues
to be even, since $k=2(N_f-1)$ is also even \cite{EGUCHI}.
From this one can suspect that the right subspace of differentials
(or periods) that one must restrict to is given by the odd
differentials of equation \ref{eq:zu}. That this is so is
further confirmed
by the fact that the recursion relations \ref{eq:zn} and \ref{eq:zo} now
have a
step of 2 units, and that the SW differential $\lambda_{SW}$ will now
be odd, as revealed by equations \ref{eq:ze} and \ref{eq:zt}.
In consequence, the values $m=n-1$ and
$m=3n-1$ at which the recursions \ref{eq:zn} and \ref{eq:zo} blow
up are jumped
over, and the periods $\Omega_{n-1}^{(\pm1/2)}$ do not appear.
As was the case for the $SO(2r)$
and $SO(2r+1)$ groups, this subspace of odd differentials is
even dimensional. Furthermore, the change of basis implemented by
the matrix $K$ of equation \ref{eq:zv} respects this even-odd
partition, since $\lambda_{SW}$ is odd.
One technical point that appears for $Sp(2r)$, but not for the
orthogonal gauge groups, is the following. Let us remember that
$m=2g=2n-2$ is the highest value of $m$ such that $m\in R$. We
would
therefore expect the period $\Omega_{2n-1}^{(+1/2)}$ to be
expressible in terms
of some $\Omega_{m}^{(+1/2)}$ with lower
values of $m$,
according to
equation \ref{eq:zo}. However, we
cannot use equation \ref{eq:zo} to obtain this linear combination,
since
the derivation of the latter relation formally involved division by
zero when one takes $m=0$. \footnote{Remember that $m$ was
supposed to
be non-zero in passing from equation
\ref{eq:zi} to \ref{eq:zj}.} Instead, we must return to equation \ref{eq:zi}
and set
$m=0$ to arrive at
\begin{equation}
\Omega_{2n-1}^{(+1/2)}=-{1\over
2n}\sum_{j=0}^{n-1}\sum_{l=0}^{n-1}(j+
l)u_ju_l\Omega_{j+l-1}
^{(+1/2)}
-{1\over n}\sum_{j=0}^{n-1}(n+j)u_j\Omega_{n+j-1}^{(+1/2)} +{k\over
2n}\Omega_ {k-1}^{(+1/2)}
\label{eq:zcai}
\end{equation}
As $2n-1$ is odd, this period was omitted from the computations
of the previous section, but it will be required for the resolution
of the recursion relations for $Sp(2r)$.
With this proviso, the same arguments explained for the
$SO(2r)$
and $SO(2r+1)$ groups in section 3.1 hold throughout, with the
only difference that we will be working in the subspace
of odd periods of the first and second kinds. As a consequence, the
SW differential
$\lambda_{SW}$ will be a potential for the odd differentials only.
\subsection{The $A_{r}$ gauge groups}
The Weyl group of $SU(r+1)$ does not possess a ${\bf Z}_2$ factor for
$r>1$.\footnote{Obviously, $SU(2)$
is an exception to this discussion. The corresponding PF equations
are very easy to derive and to
decouple for the SW period. See, {\it e.g.},
\cite{BILAL,FERRARI,ITO,RYANG}.}{$^,$}\footnote{For
$SU(r+1)$ we have $r=g$.}
This implies the existence of
even as well as odd Casimir operators for the group. Correspondingly,
the characteristic polynomial
$p(x)$ of equation \ref{eq:zai} will also have non-zero odd moduli
$u_{2j+1}$. In general, the SW differential will be neither even nor
odd, as equation \ref{eq:ze} reveals. The same will be true for the
polynomial $p(x)$. The $x
\rightarrow -x$ symmetry used in the previous sections to decouple
the PF equations, be it under its even or under its odd
presentation, is spoiled.
The first technical consequence of the above is that equation
\ref{eq:zcai}
will have to be taken into consideration when solving the recursion
relations
\ref{eq:zn} and \ref{eq:zo}, because the period
$\Omega_{2n-1}^{(+1/2)}$ will be
required. Moreover, we have learned that an essential point to be
addressed is the identification of the appropriate subspace of
differentials (or periods) that we must
restrict to. It turns out
that the recursion relation \ref{eq:zo} must eventually be evaluated
at
$m=3n-1$. To prove this assertion,
consider the value $m=2n-2$ in equation \ref{eq:zn}, which is
allowed since
we still have $2g=2n-2\in R$. From
the right-hand side of this equation we find that, whenever
$j+l=n+1$, the period
$\Omega_{3n-1}^{(+1/2)}$ is required. However, equation
\ref{eq:zo}
blows up when $m=3n-1$. We have seen that this
problem did not occur for the orthogonal and symplectic gauge
groups.
The origin of this difficulty can be traced back to the fact that,
in the sequence of differentials
of the first and second kind given in equations \ref{eq:zb} and
\ref{eq:zbi},
there is a gap at $m=g$, since $x^g
{\rm d} x/y$ is always a differential of the third kind. As the recursion
relations now have a step of one
unit, we cannot jump over the value $m=g=n-1$. To clarify this
point, let us give an
alternative expression for $\Omega_{2n-2}^{(-1/2)}$ that will bear
this out.
Consider $x^{2g}W=x^{2n-2} W$ as a
polynomial in $x$, and
divide
it by $\partial W/\partial x$ to obtain a
certain quotient $q(x)$, plus a certain remainder $r(x)$:
\begin{equation}
x^{2n-2}W(x)=q(x) {\partial W(x)\over\partial x} + r(x)
\label{eq:zca}
\end{equation}
The coefficients of both $q(x)$ and $r(x)$ will be certain
polynomial functions in the moduli
$u_i$, explicitly obtainable from \ref{eq:zca}. The degree of
$r(x)$ in
$x$ will be $2n-2$, while that of
$q(x)$ will be $2n-1$, so let us put
\begin{equation}
q(x)=\sum_{j=0}^{2n-1} q_j(u_i)x^j, \qquad r(x)=
\sum_{l=0}^{2
n-2}
r_l(u_i)x^l
\label{eq:zcb}
\end{equation}
We have
\begin{equation}
W(x)=x^{2n}+\ldots, \qquad {\partial W(x)\over \partial x} =2nx^{2n-1}
+\ldots,
\qquad x^{2n-2}W(x)=x^{4n-2}+\ldots
\label{eq:zcbi}
\end{equation}
so $q(x)$ must be of the form
\begin{equation}
q(x)={1\over 2n} x^{2n-1} +\ldots
\label{eq:zcci}
\end{equation}
Furthermore, from equation \ref{eq:zca},
\begin{eqnarray}
&-&(1+\mu)\Omega_{2n-2}^{(\mu)}=(-1)^{\mu +2}
\Gamma(\mu+2)\oint_
{\gamma}{x^{2n-2} W\over W^{\mu+2}}=\nonumber \\
&=&(-1)^{\mu+2}\Gamma(\mu+2)\oint_{\gamma}
{1\over W^{\mu+2}}
\Big[\sum_{j=0}
^{2n-1}q_j(u_i)x^j{\partial W\over\partial x}+
\sum_{l=0}^{2n-2}r_l
(u_i)x^l\Big]
\label{eq:zcc}
\end{eqnarray}
Integrate by parts in the first summand of equation \ref{eq:zcc}
to obtain
\begin{equation}
-(1+\mu)\Omega_{2n-2}^{(\mu)}=-\sum_{j=0}^{2n-1}jq_j(u_i)
\Omega
_{j-1}^{(\mu)}+
\sum_{l=0}^{2n-2}r_l(u_i)\Omega_l^{(\mu+1)}
\label{eq:zcd}
\end{equation}
Now set $\mu=-1/2$ in equation \ref{eq:zcd} and solve for
$\Omega_{2n-2}^{(-1/2)}$ using \ref{eq:zcci}:
\begin{equation}
\Omega_{2n-2}^{(-1/2)}={2n\over
n-1}\Big(\sum_{l=0}^{2n-2}r_l\Omega_l^{(+1/
2)}-
\sum_{j=0}^{2n-2}jq_j
\Omega_{j-1}^{(-1/2)}\Big)
\label{eq:zce}
\end{equation}
Clearly, the right-hand side of equation \ref{eq:zce} will in
general
involve the periods
$\Omega_{n-1}^{(\pm 1/2)}$ corresponding to the gap
in the sequence
that defines the basic range
$R$. In principle, this implies that the subspace of differentials
we must
restrict to is that of the $\omega_m$ with
$m\in R\cup\{n-1\}$.
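The quotients and remainders appearing here are obtained by ordinary polynomial division, which any computer algebra system performs directly; for instance, a short {\tt sympy} check of ours, using the $SU(3)$ curve of section 4.2 as an $n=3$ illustration, reads:
\begin{verbatim}
import sympy as sp
x, u, t = sp.symbols('x u t')
W = (x**3 + u*x + t)**2 - 1                        # an n = 3 example (section 4.2)
q, r = sp.div(x**(2*3 - 2)*W, sp.diff(W, x), x)    # x^{2n-2} W = q dW/dx + r
print(sp.expand(q))   # degree 2n-1 = 5, leading term x^5/6 = x^{2n-1}/(2n)
print(sp.expand(r))   # degree at most 2n-2 = 4
\end{verbatim}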
Let us recall that $M$ is the matrix of coefficients in the
expansion
of $\Omega_m
^{(- 1/2)}$ as linear functions of the
$\Omega_m^{(+ 1/2)}$.
Inclusion of $\Omega_{n-1}^{(\pm 1/2)}$ would, in principle,
increase the number of rows and columns of
$M$ by one unit, the increase being due to the expansion of
$\Omega_
{n-1}^{(- 1/2)}$ as a
linear combination of the $\Omega_m^{(+ 1/2)}$, where
$m\in R\cup
\{n-1\}$. However, we have no such
expansion at hand. We cannot define $M$ as a $(2g+1)
\times(2g+1)$
matrix; the best we
can have is $2g$ rows (corresponding to the $2g
$ periods
$\Omega_m^
{(-1/2)}$, where $m\in R$), and
$2g+1$ columns (corresponding to the $2g+1$ periods
$\Omega
_m^{(+1/2)}$, where $m\in R\cup
\{n-1\}$). As a non-square matrix cannot be invertible,
this seems
to imply the need to restrict
ourselves to a $2g\times 2g$ submatrix with maximal rank,
and look
for an invertible
$M$ matrix on that subspace. The procedure outlined in
what follows
serves precisely that
purpose:
$\bullet$ Use equations \ref{eq:zn} and \ref{eq:zo} to express
$\Omega_m^{(-1/2)}$,
where $m\neq n-1$, as linear
combinations of the $\Omega_m^{(+1/2)}$, where $m\in
R\cup\{n-1\}$,
plus possibly also of
$\Omega_{n-1}^{(-1/2)}$. That the latter period can appear
in the
right-hand side of these
expansions has already been illustrated in equation \ref{eq:zce}.
This gives
us a $2g\times (2g+1)$ matrix.
$\bullet$ Any occurrence of $\Omega_{n-1}^{(- 1/2)}$ in the
expansions
that define the $2g$ rows
of $M$ is to be transferred to the left-hand
side of the equations.
Such occurrences
will only happen when $m>n-1$ as, for $m<n-1$, equations
\ref{eq:zn} and
\ref{eq:zo} do not involve
$\Omega_{n-1}^{(\pm 1/2)}$. Transferring
$\Omega_{n-1}^{(- 1/2)}$ to
the left will also affect the
$D(u_i)$ matrices of equation \ref{eq:zr}: whenever the left-hand side
presents occurrences of
$\Omega_{n-1}^{(-1/2)}$ (with $u_i$-dependent coefficients), the
corresponding modular derivatives
will have to be modified accordingly.
$\bullet$ As the number of columns of $M$ will exceed that of rows
by one, a
linear dependence relation between the $\Omega_m^{(+ 1/2)}$ is
needed. This will have the
consequence of effectively reducing $M$ to a {\it square} matrix.
Only so will it
have a chance of being invertible, as required by the preceding
sections.
In what follows we will derive the sought-for linear dependence
relation between the $\Omega_m^{(+
1/2)}$, where $m\in R \cup \{n-1\}$. The procedure is
completely
analogous to
that
used in equations \ref{eq:zca} to \ref{eq:zce}. Consider
$x^{n-1}W$ as a polynomial in $x$, and divide it by $\partial W/\partial x$
to
obtain a certain quotient
$\tilde q(x)$, plus a certain remainder $\tilde r(x)$, whose
degrees are $n$ and $2n-2$,
respectively:
\begin{equation}
\tilde q(x)=\sum_{j=0}^n \tilde q_j(u_i)x^j, \qquad \tilde r(x)=
\sum_{l=0}^{2n-2}\tilde r_l(u_i)x^l
\label{eq:zcf}
\end{equation}
By the same arguments as in equations \ref{eq:zcbi} and
\ref{eq:zcci}, we can write
\begin{equation}
\tilde q(x)={1\over 2n}
x^n +\ldots
\label{eq:zcgg}
\end{equation}
Furthermore, following the same reasoning as in equations
\ref{eq:zcc} and
\ref{eq:zcd}, we find
\begin{eqnarray}
&-&(1+\mu)\Omega_{n-1}^{(\mu)}=(-1)^{\mu +2}
\Gamma(\mu+2)
\oint_{\gamma}
{x^{n-1} W\over W^{\mu+2}}=\nonumber \\
&=&-\sum_{j=0}^n j\tilde q_j(u_i)\Omega_{j-1}^
{(\mu)}+
\sum_{l=0}^{2n-2}\tilde r_l(u_i)\Omega_l^{(\mu+1)}
\label{eq:zch}
\end{eqnarray}
Setting $\mu=-1/2$ and solving equation \ref{eq:zch} for
$\Omega_{n-1}^
{(-1/2)}$ produces
\begin{equation}
0=\Big(
n\tilde q_n -{1\over 2}\Big) \Omega_{n-1}^{(-1/2)}=
-\sum_{j=0}^
{n-1} j\tilde
q_j(u_i)\Omega_{j-1}^{(-1/2)}+
\sum_{l=0}^{2n-2}\tilde r_l(u_i)\Omega_l^{(+1/2)}
\label{eq:zci}
\end{equation}
where equation \ref{eq:zcgg} has been used to equate the
left-hand side to
zero. We therefore have
\begin{equation}
\sum_{j=0}^{n-1} j\tilde q_j(u_i)\Omega_{j-1}^{(-1/2)}=
\sum_{l=0}^{2n-2}\tilde r_l(u_i)\Omega_l^{(+1/2)}
\label{eq:zcj}
\end{equation}
Equation \ref{eq:zcj} is a linear dependence relation between
$\Omega^
{(-1/2)}$ and $\Omega^{(+1/2)}$ which
does not involve $\Omega_{n-1}^{(-1/2)}$. Therefore, we
are now able
to make use of equations \ref{eq:zn}
and
\ref{eq:zo}, {\it i.e.}, of the allowed rows of $M$, to recast the
left-hand side
of equation \ref{eq:zcj} as a linear
combination of the $\Omega^{(+1/2)}$:
\begin{equation}
\sum_{j=0}^{n-1}j\tilde q_j(u_i)\sum_{r\in R}
\big[M\big]_{j-1}^r
\Omega_{r}^{(+1/2)}=
\sum_{l=0}^{2n-2}\tilde r_l(u_i)\Omega_l^{(+1/2)}
\label{eq:zcg}
\end{equation}
Equation \ref{eq:zcg} is the sought-for linear
dependence
relation that
appears due to the inclusion of
$\Omega_{n-1}^{(\pm 1/2)}$. Restriction to the subspace
defined by
this relation produces a
{\it square}\/, $2g\times 2g$ matrix:
\begin{equation}
\tilde\Omega^{(-1/2)}= \tilde M \tilde\Omega
^{(+1/2)}
\label{eq:zchh}
\end{equation}
The tildes in the notation remind us that the left-hand
side will
include
occurrences of $\Omega_{n-1}^{(-1/2)}$ when $m>n-1$,
possibly
multiplied by some $u_i$-dependent
coefficients, while the right-hand side has been reduced as
dictated
by the linear dependence
relation \ref{eq:zcg}. {\it We claim that the $\tilde M$ matrix
so defined
is invertible, its
determinant vanishing on the zero locus of the discriminant
$\Delta(u_i)$ of
the curve}\/. It is on this $2g$-dimensional space of differentials
(or periods) that we will be
working.
Let us make some technical observations on the procedure just
described. In practice, restriction to
the subspace determined by equation \ref{eq:zcg} means solving
for some given $\Omega_{m}^{(+ 1/2)}$,
where $m\in R\cup\{n-1\}$, as a linear combination
(with $u_i$-dependent coefficients) of the
rest. The particular $\Omega_m^{(+1/2)}$ that can be solved
for depends on the
coefficients entering equation \ref{eq:zcg}; any period whose
coefficient
is non-zero will do. Obviously,
the particular
$\Omega_m^{(+1/2)}$ that is being solved for in equation
\ref{eq:zcg} is
irrelevant (as long as its
coefficient is non-zero), since any such $\Omega_
m^{(+1/2)}$
so
obtained is just a different, but
equivalent, expression of the same linear relation \ref{eq:zcg}.
Whatever
the choice, ${\rm det}\, \tilde
M$ will continue to vanish on the zero locus of $\Delta(u_i)$.
However, differences may arise in the actual entries of $\tilde M$,
due to the fact that
different (though equivalent) sets of basic $\Omega_m^{(+1/2)}$
are being used. Once a given set of
$2g$ independent $\Omega_m^{(+1/2)}$ has been picked, {\it i.e.},
after
imposing equation \ref{eq:zcg}, this one
set must be used throughout. In particular, the right-hand sides of
equations \ref{eq:zr} will also have to
be expressed in this basis. As the $\Omega_m^{(+1/2)}$ disappear
from the computations already at
the level of equation \ref{eq:zs}, the particular choice made is
irrelevant. For the same reason, one can easily convince oneself
that the final PF equations obtained are independent of the actual
choice made.
Let us point out two further consequences of this prescription used
to define $\tilde
M$. First, some of the equations collected in \ref{eq:zs} may
possibly involve, both in the
right and the left-hand sides, occurrences of
$\Omega_{n-1}^{(-1/2)}$
and its modular derivatives,
as dictated by the prescription. One might worry that the latter will
not disappear from the final
result for the SW differential $\lambda_{SW}$. That it will always
drop out follows from the fact
that none of the first $g$ equations in \ref{eq:zs} involves
$\Omega_{n-1}^
{(-1/2)}$, as they are untouched
by the defining prescription of $\tilde M$. The procedure followed to
decouple $\lambda_{SW}$ also respects this property, as it basically
discards all equations for the periods of
non-holomorphic differentials (with the exception of the SW
differential itself, of
course).
A second consequence of the prescription used to define $\tilde M$
is the fact that its
entries may now become {\it rational} functions of the moduli,
rather than {\it polynomial}
functions. This is different from the situation for the $SO(2r+1)$,
$Sp(2r)$ and $SO(2r)$ gauge groups, where these entries were always
polynomials in the $u_i$. The reason is that solving the linear
relation \ref{eq:zcg} for one particular $\Omega_m^{(+1/2)}$ may
involve division by a polynomial in the $u_i$.
Having taken care of the difficulty just mentioned, {\it i.e.}, the
identification of the appropriate
space of periods on which $\tilde M$ will be invertible, the rest
of
the decoupling
procedure already explained for the $SO(2r+1)$, $Sp(2r)$ and
$SO(2r)$
gauge groups holds throughout.
In particular, expressions totally analogous to those from equation
\ref{eq:zaa} to \ref{eq:zag} continue to be
valid, with $s=g$. As a compensation for this technical difficulty
of having nonzero odd Casimir
operators, one has that the SW differential truly becomes a
potential for {\it all} holomorphic
differentials on the curve, so the equivalent of equation
\ref{eq:zab}
now includes the odd holomorphic
periods as well.
\subsection{The exceptional gauge groups}
The method developed in section 2 can also be applied to obtain
the PF equations associated with
$N=2$ SYM theories (with or without massless matter) when the
gauge
group is an exceptional group, as the vacuum structure of these
theories is also described by hyperelliptic
curves \cite{ARDALAN,IRAN,DANII}. In principle, a set of
(first-order) PF
equations similar to
those given in equation
\ref{eq:zz} can also be derived. However, we have seen that the
decoupling
procedure described in section 3 depends crucially on our ability to
identify an appropriate subspace of periods to restrict to. Such an
identification makes use of the structure of the corresponding Weyl
group. With the exception of $G_2$, whose Weyl group is
$D_6$ (the dihedral group of order 12), the Weyl groups of $F_4$,
$E_6$, $E_7$ and $E_8$ are not
easily manageable, given their high orders. Therefore, we cannot
hope to
be able to develop a
systematic decoupling prescription similar to the one given for
the classical gauge groups. This
is just a reflection of the exceptionality of the groups involved.
Another difficulty, which we have illustrated in
the particular case of $G_2$ below, is the fact that the genus $g$
of the corresponding Riemann
surface $\Sigma_g$ will in general be too high compared with the
number of independent Casimir
operators. As there is one modulus $u_i$ per Casimir operator, we
cannot expect the SW differential
to be a potential for {\it all} $g$ holomorphic differentials
on $\Sigma_g$. This fact has
already been observed for the $B_r$, $C_r$ and $D_r$ gauge
groups. However, the
novelty here is that, in
general, the best one can hope for
is to equate
$\partial\lambda_{SW}/\partial u_i$ to some linear combination (with
$u_i$-dependent coefficients) of a number of
holomorphic differentials $\omega_j$,
\renewcommand{\theequation}{3.\arabic{equation}}
\setcounter{equation}{26}
\begin{equation}
{\partial\lambda_{SW}\over\partial u_i}=\sum_{j=0}^{g-1} c^j_i(u_l)
\omega_j
\label{eq:zda}
\end{equation}
Although the requirement of homogeneity with respect to the
(residual) $R$-symmetry can give us a
clue as to the possible terms that can enter the right-hand
side of equation \ref{eq:zda}, the actual linear
combinations can only be obtained by explicit computation. In
general, such linear combinations
may involve more than one non-zero coefficient $c^j_i(u_l)$. As a
consequence, the decoupling
procedure
explained in previous sections breaks down, since it
hinged on the SW differential
$\lambda_{SW}$ being a potential for (some well
defined subspace of) the holomorphic
differentials, {\it i.e.}, on all $c^j_i(u_l)$
but one being zero. In other words, even if it were possible to
identify the appropriate subspace of
periods that one must restrict to, the high value of the genus
$g$ would probably prevent a
decoupling of the PF equations.
As an illustration, we have included the details pertaining to
$G_2$ in section 4.3.
\section{Examples}
\subsection{Pure $SO(5)$ SYM theory}
The vacuum structure of the effective, pure $N=2$ SYM theory
with
gauge group $SO(5)$ is described
by the curve \cite{DANI}
\renewcommand{\theequation}{4.\arabic{equation}}
\setcounter{equation}{0}
\begin{equation}
W=y^2= p(x)^2-x^2=x^8+2ux^6+(u^2+2t)x^4+2tux^2+t^2-x^2
\label{eq:zah}
\end{equation}
where $p(x)=x^4+ux^2+t$. The quantum scale has been set to unity,
$\Lambda=1$, and the moduli
$u$ and $t$ can be identified with the second- and fourth-order
Casimir operators of
$SO(5)$, respectively. The discriminant $\Delta(u, t)$ is given by
\begin{equation}
\Delta(u, t)=256t^2(-27+256t^3+144tu-128t^2u^2-4u^3+16tu^4)^2
\label{eqn:zahi}
\end{equation}
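The factorised form \ref{eqn:zahi} is easy to cross-check symbolically; for instance, the short {\tt sympy} computation (ours)
\begin{verbatim}
import sympy as sp
x, u, t = sp.symbols('x u t')
W = (x**4 + u*x**2 + t)**2 - x**2
print(sp.factor(sp.discriminant(W, x)))
\end{verbatim}
returns a polynomial with the same irreducible factors; the overall normalisation of the discriminant is convention dependent.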
Equation \ref{eq:zah} describes a hyperelliptic Riemann surface of
genus
$g=3$, $\Sigma_3$. The holomorphic differentials on $\Sigma_3$
are ${\rm d} x/y$, $x{\rm d} x/y$ and $x^2{\rm d}
x/y$, while $x^4{\rm d} x/y$, $x^5{\rm d} x/y$ and $x^6{\rm d} x/y$ are
meromorphic
differentials of the second
kind. From
equation \ref{eq:ze}, the SW differential is given by
\begin{equation}
\lambda_{SW}= (-3x^4-ux^2+t){{\rm d} x\over y}
\label{eq:zajii}
\end{equation}
Both $p(x)$ and $\lambda_{SW}$ are even under
$x \rightarrow -x$.\footnote{Recall that our convention leaves out
the ${\rm d} x$ term in the differential.} We therefore restrict
ourselves to the subspace of
differentials on $\Sigma_3$ spanned by
$\{{\rm d} x/y, x^2{\rm d} x/y, x^4{\rm d} x/y, x^6{\rm d} x/y\}$. This is
further confirmed by the fact that the recursion relations
\ref{eq:zn} and \ref{eq:zo} now have a step of 2 units,
\begin{equation}
\Omega_n^{(-1/2)}={8\over n-3}\Big[t^2\Omega_n^{(+1/2)}+{3\over 4}
(2tu-1)\Omega_{n+2}^{(+1/2)}
+{1\over 2}(u^2+2t)\Omega_{n+4}^{(+1/2)}+{u\over 2}
\Omega_{n+6}^{(+1/2)}
\Big]
\label{eq:zaji}
\end{equation}
and
\begin{eqnarray}
\Omega_n^{(+1/2)}
& = & {1\over n-11}\Big[(10-n)2u\Omega_{n-2}^{(+1/2)}
+(9-n)(u^2+2t)
\Omega_{n-4}^{(+1/2)}\nonumber \\
&+&(8-n)(2tu-1)\Omega_{n-6}^{(+1/2)}+(7-n)t^2
\Omega_{n-8}^{(+1/2)}
\Big]
\label{eq:zaj}\end{eqnarray}
so that even and odd values don't mix. The solution of these
recursions can be given in terms of the
initial data $\{\Omega_0^{(+1/2)}, \Omega_2^{(+1/2)},
\Omega_4^{(+1/2)}, \Omega_6^{(+1/2)}\}$,
where the indices take on the values allowed by the even subspace
of differentials. From
equations \ref{eq:zaji} and \ref{eq:zaj}, the $M$ matrix of
equation \ref{eq:zp} can be
readily computed. Its determinant
is found to be a product of powers of the factors of the
discriminant
$\Delta(u,t)$:
\begin{equation}
{\rm det}\, M={16\over 9} t^2(-27+256t^3+
144tu-128t^2u^2-4u^3
+16tu^4)
\label{eq:zal}
\end{equation}
Therefore, it has the same zeroes as $\Delta(u,t)$ itself, but with
different multiplicities.
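As a concrete check of the procedure of section 2, the matrix $M$ and its determinant can be generated directly from the recursions \ref{eq:zaji} and \ref{eq:zaj}; a minimal {\tt sympy} sketch of ours reads:
\begin{verbatim}
import sympy as sp

u, t = sp.symbols('u t')
basic = [0, 2, 4, 6]          # even basic range for g = 3

def plus_half(m):
    # reduce Omega_m^{(+1/2)} to the basic range via the second recursion above
    if m in basic:
        return {m: sp.Integer(1)}
    pre = sp.Rational(1, m - 11)
    terms = {m - 2: (10 - m)*2*u, m - 4: (9 - m)*(u**2 + 2*t),
             m - 6: (8 - m)*(2*t*u - 1), m - 8: (7 - m)*t**2}
    out = {}
    for idx, c in terms.items():
        for l, cl in plus_half(idx).items():
            out[l] = sp.expand(out.get(l, 0) + pre*c*cl)
    return out

def row(m):
    # Omega_m^{(-1/2)} in terms of the basic Omega_l^{(+1/2)} (first recursion above)
    pre = sp.Rational(8, m - 3)
    terms = {m: t**2, m + 2: sp.Rational(3, 4)*(2*t*u - 1),
             m + 4: sp.Rational(1, 2)*(u**2 + 2*t), m + 6: u/2}
    out = {l: sp.Integer(0) for l in basic}
    for idx, c in terms.items():
        for l, cl in plus_half(idx).items():
            out[l] = sp.expand(out[l] + pre*c*cl)
    return [out[l] for l in basic]

M = sp.Matrix([row(m) for m in basic])
print(sp.factor(M.det()))     # should reproduce the factorised determinant above
\end{verbatim}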
Next, the change of basis in the space of differentials required by
equation \ref{eq:zv} is effected by the
matrix
\begin{equation}
K=\pmatrix{1&0&0&0\cr
0&1&0&0\cr
t&-u&-3&0\cr
0&0&0&1\cr}
\label{eq:zam}
\end{equation}
The matrices $D(u)$ and $D(t)$ defined in equations \ref{eq:zq}
and \ref{eq:zr}
can be computed using the
expressions for $W$ and the
recursion relations given in equations
\ref{eq:zaji} and \ref{eq:zaj}.\footnote{For the sake of brevity,
the explicit
expressions of these matrices are not reproduced here.} Once
$D(u)$ and $D(t)$ are reexpressed in the new basis $\{\pi_0,
\pi_2, \pi_4=\lambda_{SW}, \pi_6\}$
defined by $K$ in \ref{eq:zam}, they produce the $U_i$
matrices of
equation \ref{eq:zy}. Let us just observe that,
from the corresponding third rows of $U_t$ and $U_u$,
one finds
\begin{equation}
{\partial\Pi_4\over\partial t}=\Pi_0, \qquad
{\partial\Pi_4\over\partial u}=\Pi_2
\label{eq:zan}
\end{equation}
as expected for the SW period $\Pi_4=\Pi_{SW}$.
Let us consider the $t$-modulus and carry out the decoupling
procedure for the SW period
explicitly. Block-divide the $U_t$ matrix as required by equation
\ref{eq:zacc}:
\begin{equation}
U_t=\pmatrix{A_t&B_t\cr C_t&D_t}
\label{eq:zao}
\end{equation}
Specifically, one finds
\begin{eqnarray}
\lefteqn{A_t = {1\over {\rm det}\, M} {16t\over 27}}
\nonumber \\
& \left(
\begin{array}{cc}
t(-384t^2+27u-80tu^2+4u^4) &
(135t-240t^2u-27u^2+68tu^
3-4u^5)\\
4t^2(-18+76tu+u^3)&4t(60t^2-7tu^2-u^4)
\end{array}
\right)
\label{eq:zap}
\end{eqnarray}
\begin{equation}
B_t={1\over {\rm det}\, M}
\left(
\begin{array}{cc}
-{64\over 27}t(12t^2+27u-47tu^2+4u^4)&
{16\over 3}t(27-48tu+4u^3)\\
-{16\over 27}t^2(9+160tu+16u^3)&{64\over 3}t^2(12t+u^2)
\end{array}
\right)
\label{eq:zaq}
\end{equation}
We observe that
\begin{equation}
{\rm det}\, B_t = {256\over 9}t^3(27-256t^3-144tu+
128t^2u^2+4u^3-16tu^4)
\label{eq:zar}
\end{equation}
so $B_t$ is invertible except at the singularities of moduli space.
Carry out the
matrix
multiplications of equation \ref{eq:zaf} to get
\begin{equation}
\Pi_4=-(16t^2+{4\over 3}tu^2){\partial\Pi_0\over\partial t}+(9-16tu+
{4\over 3}u^3)
{\partial\Pi_2\over\partial t}-8t\Pi_0
\label{eq:zas}
\end{equation}
Finally, use \ref{eq:zan} to obtain a decoupled equation
for the SW
period $\Pi_4=\Pi_{SW}$:
\begin{equation}
{\cal L}_1 \Pi_{SW}=0,\qquad
{\cal L}_1=4t(u^2+12t){\partial^2\over\partial t^2}-(27-48tu+4u^3)
{\partial^2\over\partial t\partial u}
+24t{\partial\over\partial t}+3
\label{eq:zat}
\end{equation}
Analogous steps for the $u$ modulus lead to
\begin{equation}
{\cal L}_2 \Pi_{SW}=0,\qquad
{\cal L}_2=(
9-32tu){\partial^2\over\partial t\partial u}-4(12t+u^2)
{\partial^2\over\partial u^2}-
8t{\partial\over\partial t}-1
\label{eq:zau}
\end{equation}
\subsection{Pure $SU(3)$ SYM theory.}
The vacuum structure of the effective, pure $N=2$ SYM
theory with
gauge group $SU(3)$ is described
by the curve \cite{LERCHE,ARG,KLEMM}\footnote{Our moduli
$(u, t)$ correspond
to
$(-u, -v)$ in \cite{LERCHE}.}
\begin{equation}
W=y^2=p(x)^2-1=x^6+2ux^4+2tx^3+u^2x^2+2tux+t^2-1
\label{eq:zba}
\end{equation}
where $p(x)=x^3+ux+t$. The quantum scale has been set to unity,
$\Lambda=
1$, and the moduli $u$
and $t$ can be identified with the second- and third-order Casimir
operators of
$SU(3)$, respectively. The discriminant is given by
\begin{equation}
\Delta(u, t)=-64(27-54t+27t^2+4u^3)(27+54t+27t^2+4u^3)
\label{eq:zbb}
\end{equation}
Equation \ref{eq:zba} describes a Riemann surface of genus
$g=2$, $\Sigma_2$. The holomorphic differentials on
$\Sigma_2$
are ${\rm d} x/y$ and $x{\rm d} x/y$, while
$x^3{\rm d} x/y$ and $x^4{\rm d} x/y$ are meromorphic differentials of the
second kind. From equation
\ref{eq:ze}, the SW differential is given by
\begin{equation}
\lambda_{SW}= -(ux+3x^3){{\rm d} x\over y}
\label{eq:zbc}
\end{equation}
The recursion relations \ref{eq:zn} and \ref{eq:zo} can be
easily computed. They have a step of 1 unit,
\begin{eqnarray}
\lefteqn{\Omega_n^{(-1/2)} = } \nonumber \\
& {1\over n-2}\Big[4u\Omega_{n+4}^{(+1/2)}+6t
\Omega_{n+3}^{(+1/2)}+
4u^2\Omega_{n+2}^{(+1/2)}+10tu
\Omega_{n+1}^{(+1/2)}+6(t^2-1)
\Omega_n^{(+1/2)}\Big]
\label{eq:zbd}
\end{eqnarray}
and
\begin{eqnarray}
\lefteqn {\Omega_{n}^{(+1/2)}= {1\over n-8}\Big[2u(7-n)
\Omega_{n-
2}^{(+1/2)
}+2t({13\over 2}-n)\Omega_{n-3}^{(+1/2)}}\nonumber \\
& +(6-n)u^2\Omega_{n-4}^{(+1/2)}+({11\over
2}-n)2tu\Omega_{n-5}^{(+1/2)}+(n-5)(1-t^2)\Omega_{n-6}
^{(+1/2)}\Big]
\label{eq:zbe}
\end{eqnarray}
which means that the solution for $\{\Omega_0^{(-1/2)},
\Omega_1^{(-1/2)}, \Omega_3^{(-1/2)},
\Omega_4^{(-1/2)}\}$ in terms of the initial values
$\{\Omega_0^{(+1/2)}, \Omega_1^{(+1/2)},
\Omega_3^{(+1/2)}, \Omega_4^{(+1/2)}\}$ will involve both
even and
odd values of $n$.
Inspection of the recursion relations above shows that, in the
computation of $\Omega_4^{(-1/2)}$,
the value of $\Omega_8^{(+1/2)}$ is needed. This illustrates the
situation described in section 3.3.
As already explained, in order to properly identify the right
subspace of periods to work with,
it suffices to include $\Omega_2^{(-1/2)}$ and its counterpart
$\Omega_2^{(+1/2)}$. It is then
possible to use equations \ref{eq:zbd} and \ref{eq:zbe} in
order to write a set of 4 equations expressing
$\Omega_n^{(-1/2)}$, where $n=0,1,3,4$, in terms of
$\Omega_n^{(+1/2)}$ with $n=0, 1, 2, 3,4$ {\it
and} $\Omega_2^{(-1/2)}$. As prescribed in section 3.3, we pull
all occurrences of
$\Omega_2^{(-1/2)}$ to the left-hand side. In the case at hand,
such occurrences take place under
the form
\begin{equation}
\Omega_4^{(-1/2)}=-u\Omega_2^{(-1/2)}+ {\rm terms}\;
{\rm in}\;\Omega_m^{(+1/2)}
\label{eq:zbei}
\end{equation}
while the expansions for the $\Omega_m^{(-1/2)}$, $m\neq 4$,
do not involve $\Omega_2^{(-1/2)}$.
We therefore define $
\tilde M$ as the matrix of coefficients in
the expansion of
\begin{equation}
\Omega_0^{(-1/2)},\quad
\Omega_1^{(-1/2)},\quad \Omega_3^{(-1/2)}, \quad
\Omega_4^{(-1/2)}+
u\Omega_2^{(-1/2)}
\label{eq:zdaa}
\end{equation}
in terms of $\Omega_n^{(+1/2)}$ with $n=0, 1, 2, 3, 4$.
Next we use
the linear dependence relation
between the latter periods, as derived in equation \ref{eq:zcg},
which for
the particular case at hand reads
\begin{equation}
\Omega_2^{(+1/2)}=-{u\over 3} \Omega_0^{(+1/2)}
\label{eq:zdb}
\end{equation}
We use equation \ref{eq:zdb} to express $\Omega_2^{(+1/2)}$
in terms of
$\Omega_0^{(+1/2)}$. This
choice has the computational advantage that, as the coefficient of
$\Omega_2^{(+1/2)}$ in
equation \ref{eq:zdb} is a constant, no division by a polynomial is
involved, and the entries of $\tilde M$
continue to be polynomial functions in $u$ and $t$. From the above
one can immediately conclude that the periods of equation
\ref{eq:zdaa} can be
expressed as certain linear combinations, with $u$ and $t$-dependent
coefficients, of
$\{\Omega_0^{(+1/2)}, \Omega_1^{(+1/2)}, \Omega_3^{(+1/2)},
\Omega_4^{(+1/2)}\}$.
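The relation \ref{eq:zdb} itself can be cross-checked symbolically. A short {\tt sympy} computation of ours, which combines the division of $x^{n-1}W=x^2W$ by $\partial W/\partial x$ (equations \ref{eq:zcf}--\ref{eq:zcj}) with the $n=0$ component of the recursion \ref{eq:zbd}, reads:
\begin{verbatim}
import sympy as sp

x, u, t = sp.symbols('x u t')
W = (x**3 + u*x + t)**2 - 1
q, r = sp.div(x**2*W, sp.diff(W, x), x)   # x^{n-1} W = qtilde dW/dx + rtilde
q, r = sp.expand(q), sp.expand(r)

# n = 0 component of the recursion for Omega_n^{(-1/2)}: coefficients over
# O_0, ..., O_4, where O_l stands for Omega_l^{(+1/2)}.
O = sp.Matrix(5, 1, sp.symbols('O0:5'))
row0 = -sp.Rational(1, 2)*sp.Matrix([[6*(t**2 - 1), 10*t*u, 4*u**2, 6*t, 4*u]])

# Linear dependence relation: sum_j j qtilde_j Omega_{j-1}^{(-1/2)} = sum_l rtilde_l O_l.
# Only j = 1 contributes on the left (qtilde_2 = 0 here).
lhs = q.coeff(x, 1)*(row0*O)[0]
rhs = sum(r.coeff(x, l)*O[l] for l in range(5))
rel = sp.expand(lhs - rhs)      # must vanish as a relation among the periods
print(sp.factor(rel))           # proportional to 3*O2 + u*O0
\end{verbatim}
The combination that must vanish is proportional to $\Omega_2^{(+1/2)}+{u\over 3}\,\Omega_0^{(+1/2)}$, in agreement with equation \ref{eq:zdb}.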
After this change of basis in the space of periods (or
differentials) has been performed, the rest
of the construction already explained goes through. We first
confirm that $\tilde M$ so
defined is invertible except at the singularities of moduli space,
as
\begin{equation}
{\rm det}\, \tilde M={4\over 9}(27-54t+27t^2+4u^3)
(27+54t+27t^2+4u^3)
\label{eq:zbf}
\end{equation}
The SW differential is included in
the computations upon
performing
the change of basis from
$\{\omega_0,\omega_1, \omega_3, \omega_4+u\omega_2\}$
to $\{\pi_1,
\pi_2, \pi_3=\lambda_{SW}, \pi_4\}$, as given
by the matrix
\begin{equation}
K=\pmatrix{1&0&0&0\cr
0&1&0&0\cr
0&-u&-3&0\cr
0&0&0&1\cr}
\label{eq:zbg}
\end{equation}
where basis indices have also been relabelled for simplicity.
The matrices $D(u)$ and $D(t)$ defined in equations \ref{eq:zq}
and \ref{eq:zr} can be equally computed using the
expressions for $W$ and the recursion relations given above, and
recast in the new basis
$\{\pi_1, \pi_2, \pi_3=\lambda_{SW}, \pi_4\}$ defined by $K$.
This produces the $U_i$ matrices of
equation \ref{eq:zy}. Let us just observe that, from the third rows of
$U_u$ and $U_t$, one finds
\begin{equation}
{\partial\Pi_3\over\partial t}=\Pi_1, \qquad {\partial\Pi_3\over\partial u}=\Pi_2
\label{eq:zbh}
\end{equation}
as expected for the SW period $\Pi_3=\Pi_{SW}$. Further
carrying out the decoupling procedure as already prescribed yields
\begin{equation}
{\cal L}_i\Pi_{SW}=
0,\qquad i=1,2
\label{eq:zbii}
\end{equation}
where
\begin{eqnarray}
{\cal L}_1 & = &(27+4u^3-27t^2){\partial^2\over\partial u^2}+12u^2t
{\partial^2\over\partial u\partial t}+
3tu{\partial\over\partial t}+u \nonumber \\
{\cal L}_2 & = &(27+4u^3-27t^2){\partial^2\over\partial t^2}-36ut
{\partial^2\over\partial u\partial t}-
9t{\partial\over\partial t}-3
\label{eq:zbj}
\end{eqnarray}
in complete agreement with \cite{KLEMM}, once the differences in
notation have been taken into account.
\subsection{Pure $G_2$ SYM theory.}
The vacuum structure of the effective, pure $N=2$ SYM
theory with gauge group $G
_2$ is described
by the curve \cite{ARDALAN,IRAN,DANII}
\begin{equation}
W=y^2=p(x)^2-x^4=x^{12}+2ux^{10}+{3u^2\over 2} x^8+
(2t+{u^3\over 2})x^6+
({u^4\over 16}+2tu-1)x^4+{tu^2\over
2}x^2+t^2
\label{eq:zea}
\end{equation}
where $p(x)=x^6+ux^4+u^2 x^2/4+t$.\footnote{Some doubts
about this
curve have recently been expressed in \cite{GIDDINGS}. The
difficulty
encountered with equation (4.36) might perhaps be circumvented
with a
different curve.} The quantum scale has been set to
unity, {\it i.e.}, $\Lambda=1$, and
the moduli $u$ and $t$ can be identified with the second- and
sixth-order Casimir operators of
$G_2$, respectively. We observe the absence of a fourth-order
Casimir
operator; instead, its role is fulfilled by the {\it square} of the
second-order one. On the curve, this is reflected in the coefficient
of $x^2$. This is a consequence of the exceptionality of the Lie
algebra $G_2$. The discriminant $\Delta (u, t)$ is given by
\begin{eqnarray}
\Delta(u, t)& = & 65536t^6
(-16+108t^2+72tu+8u^2-2tu^3-u^4)^2\nonumber \\
&& ~~~~~~~~~~(16+108t^2-72tu+8u^2-
2tu^3+u^4)^2
\label{eq:zeb}
\end{eqnarray}
Equation \ref{eq:zea} describes a hyperelliptic Riemann
surface of genus
$g=5$, $\Sigma_5$. The
holomorphic differentials on $\Sigma_5$ are $x^j{\rm d} x/y$, where
$j\in \{0,1,2,3,4\}$, while
$x^{6+j}{\rm d} x/y$, with $j$ in the same range, are meromorphic
differentials of the second kind. From equation
\ref{eq:ze}, the SW differential is given by
\begin{equation}
\lambda_{SW}=(2t-2ux^4-4x^6){{\rm d} x\over y}
\label{eq:zec}
\end{equation}
Both $p(
x)$ and $\lambda_{SW}$ are even under
$x\rightarrow -x$. We
therefore restrict ourselves to
the subspace of differentials on $\Sigma_5$ spanned by
$\{{\rm d} x/y$,
$x^2{\rm d} x/y$, $x^4{\rm d} x/y$, $x^6{\rm d}
x/y$, $x^8{\rm d} x/y$, $x^{10}{\rm d} x/y\}$. This is further
confirmed by
the fact that the recursion
relations
\ref{eq:zn} and \ref{eq:zo} now have a step of 2 units,
\begin{eqnarray}
\Omega_n^{(-1/2)} & = & {1\over n-5}\Big[12t^2
\Omega_n^{(+1/2)}+5tu^2
\Omega_{n+2}^{(+1/2)}
+(16tu+{1\over 2}u^4-8) \Omega_{n+4}^{(+1/2)}
\nonumber \\
&+ & (12t +3u^3)\Omega_{n+6}^{(+1/2)}+
6u^2\Omega_{n+8}^{(+1/2)}+4u \Omega_{n+10}
^{(+1/2)}\Big]
\label{eq:zed}
\end{eqnarray}
and
\begin{eqnarray}
\Omega_n^{(+1/2)} & = & {1\over n-17}\Big[(11-n)t^2
\Omega_{n-12}^{(+1/2)}
+{1\over 2}(12-n)
tu^2\Omega_{n-10}^{(+1/2)}\nonumber \\
& + & (13-n)({u^4\over 16}+2tu-1)\Omega_{n-8}^
{(+1/2)}+(14-n)
(2t+{u^3\over 2})\Omega_{n-6}^{(+1/2)}\nonumber \\
&+ & (15-n){3\over 2} u^2 \Omega_{n-4}^{(+1/2)}
+(16-n)2u
\Omega_{n-2}^{(+1/2)}\Big]
\label{eq:zee}
\end{eqnarray}
so that even and odd values don't mix. The solution of these
recursions can be given in terms of the
initial data $\{\Omega_0^{(+1/2)}, \Omega_2^{(+1/2)},
\Omega_4^{(+1/2)}, \Omega_6^{(+1/2)},
\Omega_8^{(+1/2)}, \Omega_{10}^{(+1/2)}\}$, where
the indices take
on the values allowed by the even
subspace of differentials picked above. From equations
\ref{eq:zed} and
\ref{eq:zee}, the
$M$ matrix of equation \ref{eq:zp} can be
readily computed. Its determinant is found to be a
product of
powers of the factors of the discriminant
$\Delta(u,t)$:
\begin{equation}
{\rm det}\, M={256\over
225}t^4(-16+108t^2+72tu+8u^2-2tu^3-u^4)
(16+108t^2-72tu+8u^2-2tu^3+u^4)
\label{eq:zef}
\end{equation}
Therefore, it has the same zeroes as $\Delta(u,t)$ itself, but
with different multiplicities.
Next, the change of basis required by equation \ref{eq:zv}
is effected
by the matrix
\begin{equation}
K=\pmatrix{1&0&0&0&0&0\cr
0&1&0&0&0&0\cr
0&0&1&0&0&0\cr
2t&0&-2u&-4&0&0\cr
0&0&0&0&1&0\cr
0&0&0&0&0&1\cr}
\label{eq:zeg}
\end{equation}
The matrices $D(u)$ and $D(t)$ defined in equations
\ref{eq:zq} and \ref{eq:zr} can be equally computed using the
expressions for $W$ and the recursion relations given in
\ref{eq:zea},
\ref{eq:zed} and \ref{eq:zee}.\footnote{The complete system of
coupled, first-order
equations is not reproduced here for the sake of brevity.}
Once $D(u)$ and
$D(t)$ are reexpressed in the new basis $\{\pi_0, \pi_2,\pi_4,
\pi_6=\lambda_{SW}, \pi_8,
\pi_{10}\}$ defined by $K$ in equation \ref{eq:zeg}, they
produce the
$U_i$ matrices of equation \ref{eq:zy}. Let us
just observe that, from the corresponding fourth rows of $U_t$
and
$U_u$, one finds
\begin{equation}
{\partial \Pi_6\over \partial t}=\Pi_0, \qquad {\partial \Pi_6\over \partial u}=\Pi_4+
{u\over 2}\Pi_2
\label{eq:zeh}
\end{equation}
Equation \ref{eq:zeh} is the expression of \ref{eq:zda} when
the gauge group is
$G_2$. We observe
the presence of a linear combination of even holomorphic
periods in
the right-hand side, rather
than a clear-cut correspondence between holomorphic
periods
(or differentials) and moduli. The
presence of an additional term $u\Pi_2/2$ is a consequence
of the
exceptionality of $G_2$. To
understand this fact in more detail, we observe that $G_2$ has
rank 2, so the number of
independent moduli is therefore 2. Like any other algebra, it has a
quadratic Casimir operator
(associated with the $u$ modulus). In contrast to $SO(5)$, which
is also rank 2, $G_2$ possesses no
fourth-order Casimir operator; instead, the next Casimir is of
order 6. It is associated with the
$t$ modulus. There is no fourth-order Casimir operator other than
the trivial one, namely, the one
obtained upon squaring the quadratic one. The existence of
third- and fifth-order Casimir
operators is forbidden by the ${\bf Z}_2$ symmetry of the Weyl
group. This leaves us with just 2
independent moduli, $u$ and $t$, to enter the definition of the
curve as coefficients of $p(x)$.
As the latter is even and of degree 6
(as dictated by the order
of the highest Casimir operator),
we are clearly missing one modulus (associated with a would-be
quartic Casimir), whose role is
then fulfilled by the square of the quadratic one. The consequence
is a smaller number of moduli
(2) than would be required (3) for the SW differential
$\lambda_{SW}$ to serve as a potential for
the even holomorphic differentials. This causes the presence
of the linear combination in the
right-hand side of equation \ref{eq:zeh},
and the decoupling
procedure
breaks down.
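Before closing this section we remark that, whatever the gauge group, reductions such as the ones above are entirely mechanical. For instance, the recursion of equation \ref{eq:zee} is implemented in a few lines of SymPy (our own notation, for illustration only):
\begin{verbatim}
# Illustrative sketch (our notation): reduce Omega_n^{(+1/2)}, for even n >= 12,
# to the initial data Omega_0,...,Omega_10 by repeated use of the recursion
# relation above, keeping the (u,t)-dependent coefficients symbolic.
import sympy as sp

u, t = sp.symbols('u t')
Om = {n: sp.Symbol(f'Omega_{n}') for n in range(0, 12, 2)}   # initial data

def omega(n):
    if n in Om:
        return Om[n]
    return sp.expand(sp.Rational(1, n - 17)*(
        (11 - n)*t**2*omega(n - 12)
        + sp.Rational(12 - n, 2)*t*u**2*omega(n - 10)
        + (13 - n)*(u**4/16 + 2*t*u - 1)*omega(n - 8)
        + (14 - n)*(2*t + u**3/2)*omega(n - 6)
        + (15 - n)*sp.Rational(3, 2)*u**2*omega(n - 4)
        + (16 - n)*2*u*omega(n - 2)))

print(omega(12))   # first period outside the basic range
\end{verbatim}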
\section{Summary and Outlook}
In this paper an alternative derivation of the Picard-Fuchs equations
has been presented which
is systematic and well suited for symbolic computer computations.
It holds for any classical
gauge group, and aims explicitly at effective $N=2$ supersymmetric
gauge theories in 4
dimensions. However, the techniques presented here may well find
applications beyond these
specific areas. Our method makes use of the underlying group theory
in order to obtain a decoupled set of second-order partial differential
equations satisfied by the period integrals of the Seiberg-Witten
differential. The resulting computational simplicity allows one to derive the
PF equations for gauge groups of large rank with comparatively little effort.
The inclusion of
massless matter
hypermultiplets is also straightforward.
One of the strengths of the presentation in this paper is that the
techniques studied here lend themselves to a wide variety of
applications and are not limited to the SW problem.
More interesting than the derivation of the PF equations themselves
is of course the extraction of
physical information from their solutions. This topic has already
been studied in the literature in
a number of cases
\cite{KLEMM,BILAL,FERRARI,ITO,SASAKURA,THEISEN},
and provides an interesting application of our techniques, which we
are currently addressing \cite{NOS}. Another important extension of
our
work is
the consideration of massive
matter hypermultiplets; this poses some technical challenges which
are also under investigation
\cite{NOS}. We hope to be able to report on these issues in the near
future.
\noindent{\bf Acknowledgements}
It is a great pleasure to thank Manuel Isidro San Juan for
technical advice on the use of
{\sl Mathematica}. Discussions with him made the preparation of
this paper much more enjoyable.
\newpage
\begin{center}
{\bf Appendix A}
\end{center}
\setcounter{equation}{0}
Below is a proof of the statement that all zeroes of
${\rm det}\, M$ are also zeroes of the discriminant
$\Delta(u_i)$,
possibly with different
multiplicities. For the sake of simplicity, we give the details
pertaining to the gauge groups
$SO(2r+1)$, $SO(2r)$ and $Sp(2r)$ (with or without massless
matter
in the fundamental representation).
The proof for the $SU(r+1)$ gauge
groups is slightly more involved, but should be technically
analogous
to the one presented below.
\renewcommand{\theequation}{A.\arabic{equation}}
To this purpose we recall equations \ref{eq:zzi} and
\ref{eq:zzii}:
\begin{equation}
\Delta (u_i)=a(x) W(x) + b(x) {\partial W(x)\over\partial x}
\label{eq:ya}
\end{equation}
\begin{equation}
{\phi(x)\over W^{\mu/2}}={1\over \Delta(u_i)}{1\over
W^{\mu/2-1}}
\Big(a\phi+{2\over\mu-2}{{\rm d}\over{\rm d}
x}(b\phi)\Big)
\label{eq:yb}
\end{equation}
As explained in the body of the paper, equation \ref{eq:yb} is an
equivalent expression for the
inversion of $M$, once it has been integrated along some closed
1-cycle $\gamma\in H_1(\Sigma_g)$.
Let the polynomials $a(x)$ and $b(x)$ in equation \ref{eq:ya}
have the
expansions
\begin{equation}
a(x)=\sum_{i=0}^s a_ix^i, \qquad b(x)=\sum_{j=0}^{s'} b_jx^j
\label{eq:yc}
\end{equation}
The respective degrees $s$ and $s'$ of $a(x)$ and $b(x)$ are
easily found to be related by
$s'=s+1$. This follows from the fact that the left-hand side of
equation \ref{eq:ya} has degree zero in
$x$. Moreover, $W(x)$ has degree $2n=2g+2$ in $x$. Using
equation
\ref{eq:ya}, and imposing the condition
that the number of unknown coefficients $a_i$ and $b_j$ equal
the number of equations they must
satisfy, one easily finds $s=2g$ and $s'=2g+1$. It should be borne
in mind that the coefficients
$a_i$ and $b_j$ themselves will be polynomial functions in the
moduli $u_i$. For the gauge groups
$SO(2r+1)$, $SO(2r)$ and $Sp(2r)$ (with or without massless
matter),
we observe that
$a(x)$ will be even as a polynomial in $x$, while $b(x)$
will be odd.
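This counting is easy to realize explicitly in a simple case. The following SymPy sketch (ours; here $\Delta$ is taken to be the discriminant of $W$ as computed by SymPy, which may differ from the normalization used above by a constant factor) determines $a(x)$ and $b(x)$ for the $N_f=0$ $SO(5)$ curve, for which $g=3$:
\begin{verbatim}
# Illustrative sketch: for W(x) = (x^4 + u x^2 + t)^2 - x^2  (genus g = 3),
# find a(x) even of degree 2g and b(x) odd of degree 2g+1 such that
# a W + b W' equals the discriminant of W, a constant in x.
import sympy as sp

x, u, t = sp.symbols('x u t')
W    = (x**4 + u*x**2 + t)**2 - x**2
Wp   = sp.diff(W, x)
disc = sp.discriminant(W, x)

a_c = sp.symbols('a0 a2 a4 a6')      # even coefficients of a(x)
b_c = sp.symbols('b1 b3 b5 b7')      # odd  coefficients of b(x)
a = sum(c*x**k for c, k in zip(a_c, range(0, 8, 2)))
b = sum(c*x**k for c, k in zip(b_c, range(1, 9, 2)))

eqs = [c for c in sp.Poly(sp.expand(a*W + b*Wp - disc), x).all_coeffs() if c != 0]
sol = sp.solve(eqs, list(a_c) + list(b_c), dict=True)[0]
print(sp.simplify((a*W + b*Wp - disc).subs(sol)))    # -> 0
\end{verbatim}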
Let us take the polynomial $\phi(x)$ in equation \ref{eq:yb}
to be
$\phi(x)=x^m$, and set $\mu=3$ in equation \ref{eq:yb}
to obtain the
periods $\Omega^{(\pm1/2)}$ as used in the body of the
text.
Now assume the polynomial
$a(x)x^m+2(b(x)x^m)'$ has the following expansion in powers of
$x$,
\begin{equation}
a(x)x^m+2{{\rm d}\over{\rm d} x}\big(b(x)x^m\big)=\sum_{r=0}
^{m+2g}
c_r(u_i) x^r
\label{eq:yd}
\end{equation}
where the $c_r(u_i)$ are some $u_i$-dependent coefficients.
For the $SO(2r)$ and $SO(2r+1)$ gauge groups, $m$ can
be assumed to
be even, while it can be assumed odd for $Sp(2r)$.
Integration of
\ref{eq:yb} along a closed 1-cycle
$\gamma\in H_1(\Sigma_g)$ produces
\begin{equation}
\Omega_m^{(+1/2)}=-{3\over \Delta(u_i)}\sum_{r=0}
^{m+2g} c_r(u_i)
\Omega^{(-1/2)}_r
\label{eq:ye}
\end{equation}
As explained, equation \ref{eq:ye} can be taken to
define the inverse $M$
matrix, $M^{-1}$. In fact, it
just misses the correct definition by a minor technical point.
As we
let the integer $m$ run over
(the even or the odd values of) the basic range $R$, the
subscripts on
the
right-hand side of equation \ref{eq:ye} will
eventually take values outside $R$. We can correct this
by making
use of the recursion relation
\ref{eq:zmk}, in order to bring $m$ back into $R$. One
just has to apply
equation \ref{eq:zmk} for $\mu=-3/2$ and
substitute the value of $k$ appropriate to the gauge group and
matter content under
consideration.\footnote{For $N_f$ massless multiplets, one
has $k=2+2N_f$
for $SO(2r+1)$, $k=4+2N_f$ for
$SO(2r)$, and $k=2(N_f-1)$ for $Sp(2r)$.} The resulting linear
combination of periods
$
\Omega_q^{(-1/2)}$ in the right-hand side has lower values
of the
subindex $q$. One easily checks
that, as the degree of $a(x)x^m+2(b(x)x^m)'$ is $2g+m$, the
values
attained by $q$ in the right-hand side of equation \ref{eq:zmk}
never become
negative when $m$ runs over (the even or the odd subspace of)
$R$. Eventually, a linear combination (with $u_i$-dependent
coefficients) will be obtained such that
all lower indices $q$ of the periods $\Omega_q^{(-1/2)}$ will lie
within (the even or the odd subspace of)
$R$. Now this correctly defines an inverse $M$ matrix.
We have thus expressed the $(j,l)$ element of $M^{-1}$ as
\begin{equation}
\Big[M^{-1}\Big]_{jl}={1\over \Delta(u_i)} P_{jl}(u_i)
\label{eq:yg}
\end{equation}
where the $P_{jl}(u_i)$ are certain polynomial functions of the
moduli $u_i$. On the other hand,
from the definition of matrix inversion,
\begin{equation}
\Big[M^{-1}\Big]_{jl}={1\over {\rm det}\, M} C_{jl}(u_i)
\label{eq:yh}
\end{equation}
where $C_{jl}$ is the matrix of cofactors of $M$. Obviously,
not all the $C_{jl}(u_i)$ are
divisible by
${\rm det}\, M$, as otherwise $M$ would be invertible even
when
${\rm det}\, M=0$. From the
equality
\begin{equation}
{1\over {\rm det}\, M} C_{jl}(u_i) = {1\over \Delta(u_i)}
P_{jl}(u_i)
\label{eq:yi}
\end{equation}
it follows that the right-hand side of equation \ref{eq:yi}
will have
to blow up whenever the left-hand
side does, {\it i.e.}, {\it all zeroes of ${\rm det}\, M$ are also
zeroes
of $\Delta(u_i)$}, possibly with
different multiplicities. The converse need not hold, as in
principle
nothing prevents all the
$P_{jl}(u_i)$ from simultaneously having $\Delta(u_i)$ as a
common
factor.
Let us finally make an observation on the above proof for the
$SU(r+1)$ gauge groups. From section 3.3
in the paper, we know that there are several different, though
equivalent, ways of defining the
$\tilde M$ matrix. The particular choice made in solving
the linear
dependence relation \ref{eq:zcg} may
imply division by a non-constant polynomial $f(u_i)$ in the
moduli
$u_
i$, if the period that is
being solved for in equation \ref{eq:zcg} is multiplied by a
non-constant
coefficient. This has the effect
of causing a power of $f(u_i)$ to appear in the determinant
${\rm det}\, \tilde M$, besides the
required powers of the factors of the discriminant
$\Delta(u_i)$. Obviously, only the zeroes of the
latter are relevant, as they are the ones associated with the
singularities of the curve. The zeros
of ${\rm det}\, \tilde M$ due to the presence of $f(u_i)$
are a
consequence of the prescription used
to define $\tilde M$.
\vfill
\break
\bigskip
\begin{center}
{\bf Appendix B}
\end{center}
Below are listed the PF equations satisfied by the period integrals
of the SW differential of a
number of effective $N=2$ SYM theories (with and without
massless matter), as classified by their gauge groups.
Rather than an exhaustive list, we give a sample of cases in
increasing order of the rank $r$ of the
gauge group, with some examples including massless matter
hypermultiplets in the fundamental
representation. The hyperelliptic curves describing their
corresponding vacua are also quoted for
notational completeness. For computational simplicity, we
systematically set the quantum scale
$\Lambda$ of the theory to unity, {\it i.e.}, $\Lambda=1$ throughout.
The notation is as in the body of
the paper, {\it i.e.}, the PF equations read
$$
{\cal L}_i \Pi_{SW}=0, \qquad i=1,2, \ldots, r
$$
Wherever applicable, our results coincide with those
in the literature \cite{KLEMM,ITO,SASAKURA,THEISEN}.
$\bullet$ $N_f=0$ $SU(3)$
$$
y^2=(x^3+ux+t)^2-1
$$
\begin{eqnarray*}
{\cal L}_1 & = & (27+4u^3-27t^2){\partial^2\over\partial u^2}+12u^2t
{\partial^2\over
\partial u\partial t}+3tu{\partial\over\partial t}+u\\
{\cal L}_2 & = & (27+4u^3-27t^2){\partial^2\over\partial t^2}-36ut
{\partial^2\over\partial u\partial t}-9t{\partial\over
\partial t}-3
\end{eqnarray*}
$\bullet$ $N_f=1$ $SU(3)$
$$
y^2=(x^3+ux+t)^2-x
$$
\begin{eqnarray*}
{\cal L}_1 & = &\Big(-3t+{16u^3\over 45 t}\Big){\partial\over\partial t}\\
& + & \Big(-9t^2-{5u^2\over 9t} + {28u^3\over
15}\Big){\partial^2\over\partial t^2}+\Big({25\over 4}-12tu+ {16u
^4\over 45 t}
\Big){\partial^2
\over\partial u\partial t}-1\\
{\cal L}_2 & = & {1\over {(336ut - 100)}}{\Biggl\{} \Big[{300t} -
{432 t^2 u}\Big]{\partial\over\partial t}
+ \Big[{-625}+ {3300tu}
- {3456t^2u^2}\Big]{\partial^2\over\partial t\partial u}\\
& + & \Big[{6480t^3}+{400u^2}-
{1344tu^3}\Big]{\partial^2\over\partial u^2}{\Biggr\}} - 1\\
\end{eqnarray*}
\newpage
$\bullet$ $N_f=2$ $SU(3)$
$$
y^2=(x^3+ux+t)^2-x^2
$$
\begin{eqnarray*}
{\cal L}_1 & = & \Big(-3t-{8u\over 9t}+{8u^3\over 9t}\Big)
{\partial\over\partial t}+
\Big(-9t^2-{8u\over 3}+{8
u^3\over
3}\Big) {\partial^2\over\partial t^2}\\
& + &\Big({8\over 9t}-12 tu -{16u^2\over 9t}+{8u^4
\over 9t}\Big){\partial^2\over\partial t\partial u}-1\\
{\cal L}_2 & = & -\big({3t\over u}+9tu\Big){\partial^2\over\partial t\partial u}+
\Big(4+{27t^2 \over
2 u}-4u^2\Big){\partial^2\over\partial u^2}-1
\end{eqnarray*}
$\bullet$ $N_f=0$ $SO(5)$
$$
y^2=(x^4+ux^2+t)^2-x^2
$$
\begin{eqnarray*}
{\cal L}_1 & = & 4t(u^2+12t){\partial^2\over\partial t^2}-(27-48tu+4u^3)
{\partial^2
\over\partial t\partial u}+24t{\partial\over\partial t}+3\\
{\cal L}_2 & = & (9-32tu){\partial^2\over\partial t\partial u}-4(12t+u^2)
{\partial^2\over\partial u^2}-8t{\partial
\over\partial t}-1
\end{eqnarray*}
$\bullet$ $N_f=1$
$SO(5)$
$$
y^2=(x^4+ux^2+t)^2-x^4
$$
\begin{eqnarray*}
{\cal L}_1 & = & -16tu {\partial^2\over\partial t \partial u}+(4-16t-4u^2)
{\partial^2\over\partial
u^2}-1\\
{\cal L}_2 & = & -16tu
{\partial^2\over\partial t \partial u}+(4t-16t^2-4tu^2){\partial^2\over\partial t^2}+
(2-8t-2u^2){\partial\over\partial
t}-1
\end{eqnarray*}
$\bullet$ $N_f=2$ $SO(5)$
$$
y^2=(x^4+ux^2+t)^2-x^6
$$
\begin{eqnarray*}
{\cal L}_1 & = & (3t-{32\over 3}tu) {\partial^2\over\partial u\partial t}+
(u-4u^2-{16t
\over 3}){\partial^2\over\partial u^2}+{8t\over
3}{\partial\over\partial t}-1\\
{\cal L}_2 & = & (3tu -16t^2-12tu^2){\partial^2\over\partial t^2}+
(3t -16tu+u^2-4u^3){\partial^2\over\partial t\partial
u}+(2u
-8t-8u^2){\partial\over\partial t}-1
\end{eqnarray*}
\vfill
\break
$\bullet$ $N_f=1$ $Sp(6)$
$$
y^2=(x^7+ux^5+sx^3+tx)^2-1
$$
\begin{eqnarray*}
{\cal L}_1 & = & -\Big[24t-{32\over 7}su+{40\over 49}
u^3\Big]{\partial\over\partial
t}-\Big[8s-{16\over 7}u^2\Big] {\partial\over\partial s}\\
& + & \Big[-{147s\over t}
-36t^2+{16\over 7}stu-{20\over 49}tu^3\Big]
{\partial^2\over\partial t^2}\\
& + & \Big[-48st+{48\over 7}s^2u-{245u\over t}+
{4\over 7}tu^2-{60\over
49}su^3\Big]{\partial^2\over\partial s\partial t}\\
& + & \Big[-16s^2-{343\over t}-24tu+{92\over
7}su^2-{100\over 49}u^4\Big]{\partial^2\over\partial u\partial t}-1\\
{\cal L}_2 & = & \Big[{294s\over t^2}+48t\Big]
{\partial\over\partial t}+\Big[-8s+{16\over
7}u^2\Big]{\partial\over\partial s}\\
& + & \Big[{441s^2\over t^2}+60st-{245u\over t}+{4\over
7}tu^2\Big]{\partial^2\over\partial t\partial s}\\
& + & \Big[-16s^2-{343\over t}+{735\over t^2}su
+156tu+{12\over 7}su^2\Big]{\partial^2\over\partial s^2}\\
& + & \Big[{1029s\over
t^2}+252t-16su+{20\over 7}u^3\Big]{\partial^2 \over\partial u\partial s}
-1\\
{\cal L}_3 & = & \Big[{294s\over t^2}+48t\Big]{\partial\over\partial t}+
\Big[-248s-{1764\over
t^3}s^2+{980\over t^2}u\Big] {\partial\over\partial s}\\
& + & \Big[-196s^2-{1323s^3\over
t^3}-{343\over t}+{1470su\over t^2}+156tu\Big]
{\partial^
2\over \partial t\partial u}\\
& + & \Big[{1029s\over t^2}+252t-316su-
{2205s^2u\over t^3}+{1225u^2\over
t^2}\Big]{\partial^2\over\partial s\partial u}\\
& + & \Big[-420s-{3087\over t^3}s^2+{1715\over
t^2}u-4u^2\Big]{\partial^2\over\partial u^2}-1
\end{eqnarray*}
$\bullet$ $N_f=0$ $SO(7)$
$$
y^2=(x^6+ux^4+sx^2+t)^2-x^2
$$
\begin{eqnarray*}
{\cal L}_1 & = & (-180t-268 su+{75u\over t}){\partial^2\over\partial u\partial s}
+
(-100s^2+{25s\over t}-132tu){\partial^2\over\partial
u\partial t}\\
& + & (-420 s+{125\over t}-4u^2){\partial^2\over\partial u^2}-24 t{\partial\over\partial t}
+
(-176s+{50\over t}){\partial\over\partial s} -1\\
{\cal L}_2 & =
& (25-48st+{16\over 5}s^2 u-{4\over 5} tu^2-
{12\over 25}su^3){\partial^2
\over\partial t\partial s}+ (-36t^2-{16\over 5}stu
+{12\over 25}tu^3){\partial^2\over\partial t^2}\\
& + & (-16s^2-24tu+{52\over 5}su^2-{36
\over 25}u^4){\partial^2\over\partial t \partial u}-24 t
{\partial\over \partial t}+({8\over 5}u^2-8s){\partial\over\partial s}-1\\
{\cal L}_3 & = & (-16s^2-
132 tu+{4\over 5} su^2){\partial^2\over\partial
s^2}+(25-84st-{4\over 5}tu^2){\partial^2\over\partial s \partial t}\\
& + & (-180 t-16su +
{12\over 5}u^3){\partial^2\over\partial s\partial u}-24
t{\partial\over\partial t} + ({8\over 5}u^2-8s){\partial\over\partial s}-1
\end{eqnarray*}
$\bullet$ $N_f=1$ $SO(7)$
$$
y^2=(x^6+ux^4+sx^
2+t)^2-x^4
$$
\begin{eqnarray*}
{\cal L}_1 & = & -(72t+64su){\partial^2\over\partial u\partial s}+
(16-16s^2-60tu)
{\partial^2\over\partial u\partial t}\\
& - & (96s+4u^2){\partial^2\over\partial u^2}- 6t{\partial\over\partial t}-32 s
{\partial\over\partial s} -1\\
{\cal L}_2 & = & -(48st+2tu^2){\partial^2\over\partial t\partial s}+
(-36
t^2-8stu+tu^3){\partial^2\over\partial t^2}\\
& + & (16-16s^2-24tu +8su^2
-u^4){\partial^2\over\partial t\partial u} +(-24t-4su+{u^3\over 2})
{\partial\over\partial t}+(u^2-8s){\partial\over\partial
s}-1\\
{\cal L}_3 & = & (16-16s^2-60tu){\partial^2\over\partial s^2}-
(48st+2tu^2){\partial^2\over\partial s\partial t}\\
& + & (-72t-16su+2u^3){\partial^2\over\partial s
\partial u}-6t {\partial\over\partial t}+(u
^2-8s){\partial\over\partial s}-1
\end{eqnarray*}
$\bullet$ $N_f=2$ $SO(7)$
$$
y^2=(x^6+ux^4+sx^2+t)^2-x^6
$$
\begin{eqnarray*}
{\cal L}_1 & = & {2\over 9}(27-108t-48su+4u^3)
{\partial\over\partial t}-8s{\partial\over\partial
s}+(9t-36t^2-16stu+{4\over 3}tu^3){\partial^2\over\partial t^2}\\
& + & (3s -48st-{16\over
3} s^2 u-4tu^2+{4\over 9} su^3){\partial^2\over\partial t\partial s}\\
& + & (-16s^2-3u-24tu+4su^2-{4\over 9}u^4)
{\partial^2\over\partial u\partial t}-1\\
{\cal L}_2 & = & -8s {\partial\over\partial s}-(36st+4tu^2)
{\partial^2\over\partial t\partial s}\\ & - & (16s^2+36tu+{4\over
3}su^2){\partial^2\over
\partial s^2}+(9-36t-16su+{4\over 3}u^3)
{\partial^2\over\partial u\partial s}-1\\
{\cal L}_3 & = & -8s
{\partial\over\partial s}-(4s^2+36tu) {\partial^2\over\partial t\partial u}\\
& + & (9-36t-28su){\partial^2\over\partial u
\partial s}-(36s+4u^2){\partial^2\over\partial u^2}-1
\end{eqnarray*}
\vfill
\break
$\bullet$ $N_f=0$ $SU(4)$
$$
y^2=(x^4+sx^2+ux+t)^2-1
$$
\begin{eqnarray*}
{\cal L}_1 & = & (s^2-8t){\partial\over\partial t}-3u{\partial\over\partial u}
+
(16-16t^2+3su^2){\partial^2\over\partial t^2}\\
& + & (7s^2u-24tu){\partial^2\over\partial u \partial t}+(2s^3-16st-9u^2)
{\partial^2\over\partial s\partial t}
-1\\
{\cal L}_2 & = & (s^2-8t){\partial\over\partial
t}-3u{\partial\over\partial u}+\Big(-{32s\over u}+{32st^2\over u}+
s^2u-24tu\Big){\partial^2\over
\partial t\partial u}\\& + & (2s^3-16st-9u^2){\partial^2\over\partial u^2}+\Big(-{64\over u}
+{64t^2\over u} -12su
\Big){\partial^2\over\partial s \partial u}-1\\
{\cal L}_3 & = & \Big(16t +{32s\over u^2}-{32st^2\over u^2}
\Big){\partial\over\partial t}-3u {\partial\over
\partial u}+\Big(32st+{64s^2\over
u^2}-{64s^2t^2\over u^2}-9u^2\Big){\partial^2\over\partial t\partial s}\\
& + & \Big(-{64\over u}
+{64t^2\over u}-12su\Big){\partial^2\over\partial
u\partial s}+\Big(-4s^2+96t+{128s\over u^2}-{128st^2\over u^2}
\Big) {\partial^2\over\partial
s^2}-1
\end{eqnarray*}
\vfill
\break
$\bullet$ $N_f=1$ $SU(4)$
$$
y^2=(x^4+sx^2+ux+t)^2-x
$$
\begin{eqnarray*}
{\cal L}_1 & = & {1\over{({s^2} + 28t)}}{\Biggl\{} \Big[ {\left({4\over 7}{s^4} + 8{s^2}t
- 224 {t^2} + 27s{u^2} \right)}{\partial\over\partial t}
+ {\left( {9s^2u}-{84tu} \right)}
{\partial\over\partial u}\Big] \\
& + & \Big[ {-{4\over 7}s^4t}-
{32s^2t^2} - {448t^3}
-{{147\over 4}su}+{120stu^2}\Big]
{\partial^2\over\partial t^2}\\
& + & \Big[{{49\over 4}s^2} + {343t} + {{4\over 7}s^4u}
+ {184s^2tu} - {672t^2u}
+{27su^3}\Big]{\partial^2\over\partial t\partial u}\\
& + & \Big[ {{12\over 7}s^5}+
{32
s^3t} - {448st^2} + {27s^2u^2} - {252tu^2}
\Big]{\partial^2\over\partial
s\partial t}{\Biggr\}} -1\\
{\cal L}_2 & = & {1\over{({160ut - 49})}} {\Biggl\{} \Big[{-28s^2}+
{392t}+{112s^2tu}
-{704t^2u}\Big]
{\partial\over\partial t} + \Big[{{64\over 7}s^3t}+
{256st^2}+
{147u} -{480tu^2}\Big] {\partial\over\partial u}\\
& + & \Big[{-2401\over 4}+
{1024s^3t^2\over 7}+{4096st^3}-{28s^2u} + {3136tu}+{112s^2tu^2}
-{3264t^2u^2}\Big]{\partial^2\over\partial t \partial u}\\
& + & \Big[{-84s^3}+{784st}+{{2112\over 7}s^3tu} -
{1792st^2u}
+ {441u^2}- {1440tu^3}\Big]{\partial^2\over\partial u^2}\\
& + & \Big[{64s^4t\over 7}+
{512s^2t^2}+
{7168t^3}
+ {588su}-{1920stu^2}\Big]
{\partial^2\over\partial s\partial u}{\Biggr\}} - 1
\end{eqnarray*}
\begin{eqnarray*}
{\cal L}_3 & = & {1\over {(256s{t^2} - 49u + 196{u^2}t)}}{\Biggl\{}
\Big[{2401\over 4} -
{6144st^3} - {3185 tu} +
{3136t^2u^2}
\Big]{\partial\over\partial t}\\
& + &
\Big[{-196st} -
{128st^2u}
+ {147u^2}-{588tu^3}\Big]
{\partial\over\partial u}\\
& + & \Big[{{7203\over 4} s} - {16384s^2t^3} -
{9212stu} + {6272st^2u^2}+
{441u^3} - {1764tu^4}
\Big]{\partial^2\over
\partial t \partial s}\\
& + &
\Big[{-196s^2t}-
{5488t^2}-{2432s^2t^2u}
+ {17920t^3u}+
{588su^2} - {2352stu^3}\Big]
{\partial^2\over\partial u\partial s}\\
& + &
\Big[{16807\over 4}-
{1024s^3t^2} - {28672st^3}
+ {196s^2u}-{21952tu}-{784s^2tu^2} + {22848t^2u^2}\Big]
{\partial^2\over\partial s^2}{\Biggr\}} - 1
\end{eqnarray*}
\vfill
\break
$\bullet$ $N_f=2$ $SU(4)$
$$
y^2=(x^4+sx^2+ux+t)^2-x^2
$$
\begin{eqnarray*}
{\cal L}_1 & = & {1\over{({s^2} + 12t)}} {\Biggl\{} \Big[{\left( {-8s^2t}-
{96t^2}+
{27su^2}\right)}{\partial\over\partial t}
+ {\left( {9s^2u}-{36tu}\right)}
{\partial\over\partial u}\Big] \\
& + & \Big[{-{4\over 3}s^4t}-{32s^2t^2}-{192t^3} +
{72stu^2}\Big]
{\partial^2\over\partial t^2} + \Big[{-27su} + {72s^2tu}-{288t^2u}+
{27su^3}
\Big]{\partial^2\over\partial t\partial u}\\
& + & \Big[{9s^2}+{4s^5\over 3}+
{108t}-
{192st^2}+{27s^2u^2}-{108tu^2}\Big]
{\partial^2\over\partial t\partial s}{\Biggr\}} - 1\\
{\cal L}_2 & = & \Big[{s^2\over 2}-2t\Big]{\partial\over\partial
t}+\Big[{2s^3\over 9u}+{8st\over 3u}-3u\Big]{\partial\over\partial u}\\
& + & \Big[{-s^2\over 2u}-{6t\over u}+{16s^3t\over
9u}+{64st^2\over 3u}+{s^2u\over 2}-18tu\Big]
{\partial^2\over \partial t\partial u} +
(9+2s^3-8st-9u^2){\partial^2\over\partial u^2}\\
& + & \Big[{2s^4\over 9u}+{16s^2t\over 3u}+{32t^2\over u}
-12 su\Big]
{\partial^2\over\partial s\partial u
} - 1\\
{\cal L}_3 & = & {1\over{(-9 + 32st + 9{u^2})}}{\Biggl\{} \Big[
{\left ({72t}-{256st^2}+{144tu^2}
\right)} {\partial\over\partial t}
+ {\left( {27u}-{27u^3}\right)}
{\partial\over\partial u}\Big]\\
& + & \Big[{-81} + {576 st} -
{1024s^2t^2}
+ {162u^2}+{288stu^2}-{81u^4}\Big]
{\partial^2\over\partial t \partial s}\\
& + & \Big[{108su}-{288s^2tu}+
{1152t^2u}
-{108su^3}\Big]{\partial^2\over\partial u\partial s}\\
& + & \Big[{36s^2}
+{432t}-{128s^3t} - {1536st^2}-{36s^2u^2}+{1296tu^2}\Big]{\partial^2\over\partial
s^2}{\Biggr\}} - 1
\end{eqnarray*}
\newpage
\section{Quantum Statistical Mechanics: Generalities}
\label{qsm}
Before we discuss what happens in the vicinity of a QPT, let us recall some
very general
features of the statistical mechanics of quantum systems. The quantities
of interest are the partition function of the system,
\begin{equation}
Z(\beta) = {\rm Tr}\,\, e^{-\beta H}
\end{equation}
and the expectation values of various operators,
\begin{equation}
\langle O \rangle = \frac{1}{Z(\beta)}{\rm Tr}\,\, O e^{-\beta H}.
\end{equation}
In writing these formal expressions we have assumed a finite temperature,
$k_B T = 1/\beta$. To get at what happens exactly at $T=0$ we take the
$T \rightarrow 0$ limit. Upon doing so, the free energy,
$F = - \frac{1}{\beta}\ln{Z}(\beta)$, becomes the ground state energy and
the various thermal averages become ground state expectation values.
\null From $Z$ we can get all the thermodynamic quantities of interest.
Expectation values of operators
of the form $O \equiv A({\bf r} t) A({\bf r'} t')$ are related to the
results of dynamical scattering and linear response measurements. For
example, $A$ might be the local density (X-ray scattering) or current
(electrical transport).
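These formal expressions are trivial to evaluate numerically for any small system. The following toy sketch (ours, not part of the text) diagonalizes a single spin-1/2 with $H=-\sigma_x$ and shows the free energy and a thermal average approaching their ground-state values as $T\to 0$:
\begin{verbatim}
# Toy illustration: Z(beta), the free energy and a thermal expectation value for
# a single spin-1/2 with H = -sigma_x, by exact diagonalization.  As beta grows,
# F -> ground-state energy (-1) and <sigma_x> -> its ground-state value (+1).
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
H  = -sx

E, V = np.linalg.eigh(H)
for beta in (0.5, 2.0, 20.0):
    w = np.exp(-beta*E)
    Z = w.sum()
    F = -np.log(Z)/beta
    avg_sx = np.einsum('n,in,ij,jn->', w, V, sx, V) / Z
    print(f"beta={beta:5.1f}  F={F:+.4f}  <sigma_x>={avg_sx:+.4f}")
\end{verbatim}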
\subsection{Partition Functions and Path Integrals}
\label{pfpi}
Let us focus for now on the expression for $Z$. Notice that the operator
density matrix, $e^{-\beta H}$, is the same as the time evolution operator,
$e^{-iH{\cal T}/\hbar}$,
provided we assign the {\em imaginary} value ${\cal T} = - i \hbar\beta$ to the
time interval over which the system evolves. More precisely, when
the trace is written in terms of a complete set of states,
\begin{equation}
Z(\beta) = \sum_n \langle n | e^{-\beta H} | n \rangle,
\label{eq:Zimag}
\end{equation}
$Z$ takes the form
of a sum of imaginary time transition amplitudes for the system to
start in some state $|n\rangle$ and
{\em return to the same state} after an imaginary time interval $-i\hbar\beta$.
Thus we see that calculating the thermodynamics
of a quantum system is the same as calculating transition amplitudes
for its evolution in imaginary time, the total time interval being fixed
by the temperature of interest. The fact that the time interval happens to
be imaginary is not central. The key idea we hope to transmit to the
reader is that Eq.(\ref{eq:Zimag}) should evoke an image of quantum
dynamics and temporal propagation.
This way of looking at things can be given a particularly beautiful
and practical
implementation in the language of Feynman's path integral formulation
of quantum mechanics \cite{feynman}.
Feynman's prescription is that the net transition
amplitude between two states of the system can be calculated by summing
amplitudes for all possible paths between them.
The path taken by the system is defined by specifying the state of the
system at a sequence of finely spaced intermediate time steps. Formally we
write
\begin{equation}
e^{-\beta H} =
\left[e^{-\frac{1}{\hbar}\delta\tau H} \right]^N,
\end{equation}
where $\delta\tau$ is a time interval\footnote{For convenience we have
chosen $\delta\tau$ to be real, so that the small interval of imaginary
time that it represents is $-i\delta\tau$.}
which is small on the time scales
of interest ($\delta\tau = \hbar/\Gamma$ where $\Gamma$ is some ultraviolet
cutoff) and
$N$ is a large integer chosen so that $N\delta\tau = \hbar\beta$.
We then insert a sequence of sums over complete sets of
intermediate states into
the expression for $Z(\beta)$:
\begin{equation}
Z(\beta) = \sum_n \sum_{m_1,m_2,\ldots,m_N}
\langle n |
e^{-\frac{1}{\hbar}\delta\tau H}
|m_1\rangle\langle m_1|
e^{-\frac{1}{\hbar}\delta\tau H}
|m_2\rangle\langle m_2|
\dots
|m_N\rangle\langle m_N|
e^{-\frac{1}{\hbar}\delta\tau H}
| n \rangle.
\label{eq:complete_set}
\end{equation}
This rather messy expression actually has a simple physical
interpretation. Formally inclined readers will
observe that the expression for the {\em quantum} partition function in
Eq.~(\ref{eq:complete_set}) has the form of a {\em classical} partition
function, i.e. a sum over configurations expressed in terms of a transfer
matrix, if we think of imaginary time as an additional
spatial dimension. In particular, if our quantum system lives in $d$
dimensions,
the expression for its partition function looks like a classical partition
function for a system with $d+1$ dimensions, except that the extra dimension
is finite in extent---$\hbar \beta$ in units of time. As $T \rightarrow 0$
the system size in this extra ``time'' direction diverges and we get
a truly $d+1$ dimensional, effective, classical system.
Since this equivalence
between a $d$ dimensional quantum system and a $d+1$ dimensional classical
system is crucial to everything else we have to say,
and since Eq.~(\ref{eq:complete_set}) is probably not very illuminating for
readers not used to a daily regimen of transfer matrices,
it will be very
useful to consider a specific example in order to be able to visualize what
Eq.~(\ref{eq:complete_set}) means.
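Before turning to that example, it may help to see the time-slicing at work in the simplest possible setting. The sketch below (ours; $\hbar=1$, and the short-time factor is split into two non-commuting pieces, which is where an actual approximation enters) confirms that the product of $N$ slices reproduces $Z$ as $N$ grows:
\begin{verbatim}
# Toy check of the time-sliced partition function: for a single spin-1/2 with
# H = A + B, A = -sigma_x, B = sigma_z, split each slice as exp(-dtau A)exp(-dtau B).
# The resulting "transfer matrix" product converges to Z = Tr exp(-beta H) as the
# number of slices N grows (hbar = 1).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
A, B, beta = -sx, sz, 2.0

Z_exact = np.trace(expm(-beta*(A + B)))
for N in (2, 8, 32, 128):
    dtau = beta/N
    T = expm(-dtau*A) @ expm(-dtau*B)          # one imaginary-time slice
    print(N, np.trace(np.linalg.matrix_power(T, N)), Z_exact)
\end{verbatim}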
\subsection{Example: 1D Josephson Junction Arrays}
\label{e1djja}
Consider a one-dimensional array comprising a large number $L$
of identical Josephson junctions as illustrated in
Fig.~(\ref{fig:JJdiagram}). Such arrays have recently been studied
by Haviland and Delsing \cite{haviland}.
A Josephson junction is a tunnel junction
connecting two superconducting metallic grains. Cooper pairs of electrons
are able to tunnel back and forth between the grains and hence communicate
information about the quantum state on each grain. If the Cooper pairs are
able to move freely from grain to grain throughout the array, the system is
a superconductor. If the grains are very small however, it costs a large
charging energy to move an excess Cooper pair onto a grain. If this energy
is large enough, the Cooper pairs fail to propagate and become stuck on
individual grains, causing the system to be an insulator.
The essential degrees of freedom in this
system are the phases of the complex superconducting order parameter on the
metallic elements connected by the junctions\footnote{It is believed that
neglecting fluctuations in the magnitude of the order parameter is a good
approximation; see Bradley and Doniach (1984); Wallin {\it et al.} (1994)}
and their conjugate variables, the charges (excess Cooper pairs, or
equivalently the voltages) on
each grain. The intermediate state $|m_j\rangle$ at time $\tau_j \equiv
j\delta\tau$, that enters the quantum partition function
Eq.~(\ref{eq:complete_set}), can thus be defined by specifying
the set of phase angles $\{\theta(\tau_j)\} \equiv
[\theta_1(\tau_j),\theta_2(\tau_j),\ldots,\theta_L(\tau_j)]$.
Two typical paths or time histories
on the interval $[0,\hbar\beta]$ are illustrated in
Fig.~(\ref{fig:JJpath}) and Fig.~(\ref{fig:JJpath2}),
where the orientation of the arrows (`spins') indicates the local
phase angle of the order parameter.
The statistical weight of a given path, in the sum in
Eq.~(\ref{eq:complete_set}), is given by the
product of the matrix elements
\begin{equation}
\prod_j\langle \{\theta(\tau_{j+1})\} |
e^{-\frac{1}{\hbar}\delta\tau H} |\{\theta(\tau_j)\}\rangle,
\label{eq:statwgt}
\end{equation}
where
\begin{equation}
H = \sum_j \left[ \frac{C}{2} V_j^2 - E_{\rm J} \cos\left(\hat\theta_j -
\hat\theta_{j+1}\right)\right],
\label{eq:H6}
\end{equation}
is the quantum Hamiltonian of the Josephson junction
array.
Here $\hat\theta_j$ is the operator representing
the phase of the superconducting order parameter on the $j$th
grain\footnote{Our notation here is that
$\{\theta(\tau)\}$ refers to the configuration of the entire set of angle
variables at time slice $\tau$. The $\hat\theta$'s appearing in the
Hamiltonian in Eq.(\ref{eq:H6}) are angular coordinate operators and $j$ is
a site label. The state at time slice $\tau$ is an eigenfunction of these
operators:
$\cos\left(\hat\theta_j -
\hat\theta_{j+1}\right)|\{\theta(\tau)\}\rangle
= \cos\left(\theta_j(\tau) -
\theta_{j+1}(\tau)\right)|\{\theta(\tau)\}\rangle.$};
$V_j \equiv -i\frac{2e}{C}\frac{\partial}{\partial\theta_j}$
is conjugate to the phase\footnote{It is useful to think of this as a
quantum rotor model. The state with wave function $e^{im_j\theta_j}$
has $m_j$ units of angular momentum representing $m_j$ {\em excess}
Cooper pairs on grain $j$.
The Cooper-pair number operator in this representation is
$n_j = - i\frac{\partial}{\partial\theta_j}$. See \cite{wallinetalPRB}.
The cosine term in Eq.(\ref{eq:H6}) is a `torque' term which transfers units of
conserved angular momentum (Cooper pairs) from site to site. Note that
the potential energy of the bosons is represented,
somewhat paradoxically, by the kinetic energy of the quantum rotors
and vice versa.}
and is the voltage on the $j$th junction, and
$E_{\rm J}$ is the Josephson coupling energy.
The two terms in the Hamiltonian
then represent the charging energy of each grain
and the Josephson coupling of the phase across the junction between grains.
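For readers who like something concrete to diagonalize, the competition between the two terms is already visible for a single junction, i.e. one quantum rotor with $H=\frac{E_{\rm C}}{2}\hat n^2-E_{\rm J}\cos\hat\theta$. The sketch below (ours; it is not the array, just one rotor in a truncated charge basis) shows the number-phase trade-off: a large charging energy pins the Cooper-pair number, while a large Josephson coupling lets it fluctuate.
\begin{verbatim}
# One quantum rotor in the charge basis n in {-nc,...,nc} (our toy sketch):
# the charging term is diagonal and cos(theta) hops between neighboring charge
# states.  <n^2> in the ground state is small when E_C dominates (number pinned,
# insulating tendency) and large when E_J dominates (phase well defined).
import numpy as np

nc = 10
n  = np.arange(-nc, nc + 1)

def ground_state(EC, EJ):
    H = np.diag(0.5*EC*n**2) - 0.5*EJ*(np.eye(len(n), k=1) + np.eye(len(n), k=-1))
    return np.linalg.eigh(H)[1][:, 0]

for EC, EJ in [(10.0, 1.0), (1.0, 1.0), (1.0, 10.0)]:
    psi = ground_state(EC, EJ)
    print(f"E_C={EC:4.1f}  E_J={EJ:4.1f}  <n^2> = {np.sum(n**2 * psi**2):.3f}")
\end{verbatim}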
As indicated previously, we can map the quantum statistical mechanics of the
array onto classical statistical mechanics by identifying the statistical
weight of a space-time path in Eq.~(\ref{eq:statwgt}) with the Boltzmann weight
of a two-dimensional spatial configuration of a classical system.
In this case the classical system is therefore a {\em two-dimensional}
X-Y model, i.e. its degrees of freedom are planar spins, specified by angles
$\theta_i$, that live on a two-dimensional square lattice. (Recall that at temperatures
above zero, the lattice has a finite width $\hbar \beta/\delta \tau$ in the
temporal direction.) While the degrees
of freedom are easily identified, finding the classical hamiltonian for this
X-Y model is somewhat more work and requires an explicit evaluation of the
matrix elements which interested readers can find in the Appendix.
It is shown in the Appendix
that, in an approximation that preserves the universality
class of the problem\footnote{That is, the approximation is such that the
universal aspects of the critical behavior such as the exponents and scaling
functions will be given without error. However, non-universal quantities such
as the critical coupling will differ from an exact evaluation. Technically,
the neglected terms are irrelevant at the fixed point
underlying the transition.},
the product of matrix elements in Eq.~(\ref{eq:statwgt}) can be rewritten in
the form $e^{-H_{\rm XY}}$ where the Hamiltonian of the equivalent classical
X-Y model is
\begin{equation}
H_{\rm XY} =
-\frac{1}{K}\sum_{\langle ij\rangle} \cos(\theta_i - \theta_j),
\label{eq:equivXY}
\end{equation}
and the sum runs over near-neighbor points in the {\em two-dimensional}
(space-time)
lattice.\footnote{Notice this crucial change in notation from Eq.(\ref{eq:H6})
where $j$ refers to a point in 1D space, not 1+1D space-time.} The nearest
neighbor character of the couplings identifies the classical model as {\em the}
2D X-Y model, extensively studied in the context of
superfluid and superconducting
films \cite{goldenfeld,chaikin-lubensky}.
We emphasize that while the straightforward
identification of the degrees of freedom of the classical model
in this example is robust, this simplicity of the resulting classical
Hamiltonian is something of a minor miracle.
It is essential to note that the dimensionless
coupling constant $K$ in $H_{\rm XY}$, which plays the role of the
temperature in the classical model, depends on the ratio of the
capacitive charging energy $E_{\mathrm C} = \frac{(2e)^2}{C}$
to the Josephson coupling $E_{\rm J}$ in the array,
\begin{equation}
K \sim \sqrt{\frac{E_{\mathrm C}}{E_{\rm J}}},
\end{equation}
and has nothing to do with the physical temperature. (See Appendix.)
The physics here is that a large Josephson coupling produces a small value
of $K$ which
favors coherent ordering of the phases. That is, small $K$ makes it
unlikely that $\theta_i$ and $\theta_j$ will differ significantly, even
when sites $i$ and $j$ are far apart (in space and/or time).
Conversely,
a large charging energy leads to a large value of $K$ which favors
zero-point fluctuations of the phases and disorders the system.
That is, large $K$
means that the $\theta$'s are nearly independent and all values are
nearly equally likely.\footnote{Because particle number is conjugate to the
phase [$\hat n_j = -i \frac{\partial}{\partial\theta_j}$], a state of
indefinite phase on a site has definite charge on that site,
as would be expected for an insulator.}
Finally, we note that this
equivalence generalizes to d-dimensional arrays and d+1-dimensional
classical XY models.
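The equivalence is also easy to explore numerically: one simply simulates the classical model. Below is a bare-bones Metropolis sketch (ours, unoptimized and for illustration only) of the equivalent classical XY model on an $L_x\times L_\tau$ space-time lattice with weight $e^{-H_{\rm XY}}$; the phase coherence measured on the finite lattice is large at small $K$ and small at large $K$.
\begin{verbatim}
# Bare-bones Metropolis sketch for the equivalent classical XY model on an
# L_x x L_tau space-time lattice, Boltzmann weight exp(-H_XY) with
# H_XY = -(1/K) sum_<ij> cos(theta_i - theta_j).  Illustration only.
import numpy as np

rng = np.random.default_rng(0)
Lx, Ltau, sweeps = 16, 16, 300

def metropolis(K):
    th = 2*np.pi*rng.random((Lx, Ltau))
    def e_site(i, j):             # energy of site (i,j) with its four neighbors
        nb = [th[(i+1) % Lx, j], th[(i-1) % Lx, j],
              th[i, (j+1) % Ltau], th[i, (j-1) % Ltau]]
        return -sum(np.cos(th[i, j] - b) for b in nb)/K
    for _ in range(sweeps):
        for _ in range(Lx*Ltau):
            i, j = rng.integers(Lx), rng.integers(Ltau)
            old, e_old = th[i, j], e_site(i, j)
            th[i, j] = old + rng.normal()
            if rng.random() >= np.exp(min(0.0, e_old - e_site(i, j))):
                th[i, j] = old    # reject the move
    return abs(np.exp(1j*th).mean())

for K in (0.5, 1.5):
    print(f"K = {K}:  |<exp(i*theta)>| = {metropolis(K):.3f}")
\end{verbatim}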
\subsection{Quantum-Classical Analogies}
This specific example of the equivalence between a quantum
system and a classical system with an extra `temporal' dimension,
illustrates several general correspondences between quantum systems
and their effective classical analogs.
Standard lore tells us that the classical XY model has an order-disorder
phase transition
as its temperature $K$ is varied. It follows that the quantum array
has a phase transition as the ratio of its charging and Josephson
energies is varied.
One can thus see why it is said that
the superconductor-insulator quantum phase transition in a 1-dimensional
Josephson junction array is in the same universality class as the
order-disorder phase transition of the 1+1-dimensional classical XY
model. [One crucial caveat is that the XY model universality class has
strict particle-hole symmetry for the bosons (Cooper pairs) on each
site. In reality, Josephson junction arrays contain
random `offset charges' which destroy this symmetry and change the
universality class \cite{wallinetalPRB}, a fact which is all too often
overlooked.]
We emphasize again that $K$ is the temperature only in the effective
classical problem. In the quantum case,
the physical temperature is presumed to be nearly zero
and only enters as the finite size of the system in the imaginary time
direction. {\em The coupling constant $K$, the fake `temperature,' is a
measure not of thermal fluctuations, but of the strength of quantum
fluctuations, or zero point motion of the phase variables}.\footnote{Zero
point motion of the phase variables is caused by the fact that there is an
uncertainty relation between the phase and the number of Cooper pairs on a
superconducting grain. The more well-defined the phase is, the larger the
uncertainty in the charge is. This costs capacitive energy.} This
notion is quite confusing, so the reader might be well advised to pause
here and contemplate it further. It may be useful to examine
Fig.~(\ref{fig:two-sizes}), where we show a space time lattice for the XY
model corresponding to a Josephson junction array at a certain
temperature, and at a temperature half as large. The size of the lattice
constant in the time direction [$\delta\tau$ in the path integral in
Eq.~(\ref{eq:complete_set})] and $K$ are the {\em same} in both cases
even though the physical temperature is not the same. The
only difference is that one lattice is larger in the time direction than
the other.
In developing intuition about this picture, it may be helpful to see how
classical physics is recovered at very high temperatures. In that limit,
the time interval $\hbar\beta$ is very short compared to the periods
associated with the natural
frequency scales in the system and typical time histories will consist of
a single static configuration which is the same at each time slice.
The dynamics therefore drops out of the problem and a Boltzmann weight
$\exp(-\beta H_{\rm classical})$ is recovered from the path integral.
The thermodynamic phases of the array can be identified from those of the
XY model. A small value of
$K$ corresponds to low temperature in the classical system and so
the quantum system will be in the ordered ferromagnetic phase of the XY
model, as illustrated in Fig.~(\ref{fig:JJpath}). There will be long-range
correlations in both space and time of the phase variables.\footnote{In
this special 1+1D case, the correlations happen to decay algebraically
rather than being truly of infinite range.} This indicates that the
Josephson coupling dominates over the charging energy, and the order
parameter is not fluctuating wildly in space or time so that the system is
in the {\em superconducting} phase. For large $K$, the system is disordered
and the order parameter fluctuates wildly. The correlations decay
exponentially in space and time as illustrated in
Fig.~(\ref{fig:JJpath2}). This indicates that the system is in the {\em
insulating} phase, where the charging energy dominates over the Josephson
coupling energy.
Why can we assert that correlations which decay
exponentially in imaginary time indicate an excitation gap characteristic
of an insulator? This is readily seen by noting that the Heisenberg
representation of an operator in imaginary time is
\begin{equation}
A(\tau) = e^{H\tau/\hbar} A e^{-H\tau/\hbar}
\end{equation}
and so the (ground state) correlation function for any
operator can be expressed in terms of a complete set of states as
\begin{equation}
G(\tau) \equiv \langle 0|A(\tau)A(0)|0\rangle =
\sum_m e^{-(\epsilon_m - \epsilon_0)\tau/\hbar} |\langle 0|A|m\rangle|^2,
\label{eq:qmcorr}
\end{equation}
where $\epsilon_m$ is the energy of the $m$th excited state.
The existence of a finite minimum excitation gap
$\Delta_{01} \equiv \epsilon_1 - \epsilon_0$
guarantees that for long (imaginary)
times the correlation function will decay
exponentially,\footnote{At $T\ne 0$ and for very long times (comparable to
$\hbar\beta$), the finiteness in the time direction will modify this result.
Also, we implicitly assume here that $\langle 0|A|0\rangle=0$.} i.e.,
\begin{equation}
G(\tau) \sim e^{-\Delta_{01}\tau/\hbar} \ .
\end{equation}
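This decay is easy to see in a toy model. The sketch below (ours) evaluates the spectral sum above for a two-level Hamiltonian with gap $\Delta_{01}$ and $A=\sigma_x$, for which $G(\tau)=e^{-\Delta_{01}\tau/\hbar}$ exactly (here $\hbar=1$):
\begin{verbatim}
# Toy rendering of the spectral-sum expression for G(tau): a two-level system
# with gap Delta01 and A = sigma_x; the correlator decays as exp(-Delta01*tau).
import numpy as np

Delta01 = 0.8
H = np.diag([0.0, Delta01])
A = np.array([[0., 1.], [1., 0.]])

E, V = np.linalg.eigh(H)
A_eig = V.T @ A @ V
def G(tau):
    return sum(np.exp(-(E[m] - E[0])*tau) * A_eig[0, m]**2 for m in range(len(E)))

for tau in (1.0, 2.0, 4.0):
    print(f"tau={tau}:  G={G(tau):.5f}   exp(-Delta01*tau)={np.exp(-Delta01*tau):.5f}")
\end{verbatim}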
To recapitulate, we have managed to map the finite temperature 1D quantum
problem onto
a 2D classical problem with one finite dimension that diverges as
$T \rightarrow 0$. The parameter that controls the fluctuations in the
effective classical problem does {\em not} involve $T$, but instead
is a measure of the quantum fluctuations. The classical model exhibits
two phases, one ordered and one disordered. These correspond to the
superconducting and insulating phases in the quantum problem. In the
former the zero-point or quantum fluctuations of the order parameter are
small. In the latter they are large.
The set of analogies developed here between quantum and classical critical
systems is summarized in Table~\ref{tableA}.
Besides the beautiful formal properties of the analogy between
the quantum path integral and $d+1$ dimensional statistical mechanics,
there are very practical advantages to this analogy. In many cases,
particularly for systems without disorder, the universality class of
the quantum transition is one that has already been
studied extensively classically and a great deal may already be known about
it. For new universality classes, it is possible to do the quantum
mechanics by classical Monte Carlo or molecular dynamics simulations of the
appropriate $d+1$-dimensional model.
Finally, there is a special feature of
our particular example that should be noted. In this case the quantum
system, the 1D Josephson junction array (which is also the 1D quantum X-Y
model), has mapped onto a classical model in which space and time enter
in the same fashion, i.e., the isotropic
2D classical X-Y model. Consequently, the dynamical exponent $z$ (to be
defined below) is unity. This is not true in general---depending upon
the quantum kinetics,
the coupling in the time direction can have a very different
form and the effective classical system is then intrinsically anisotropic
and not so simply related to the starting quantum system.
\subsection{Dynamics and Thermodynamics}
We end this account of quantum statistical mechanics by commenting
on the relationship between dynamics and thermodynamics.
In classical statistical mechanics, dynamics and thermodynamics are
separable, i.e., the momentum and position sums in the partition
function are totally independent. For example, we do not need to know
the mass of the particles to compute their positional correlations.
In writing down simple non-dynamical models, e.g. the
Ising model, we typically take advantage of this simplicity.
This freedom is lost in the quantum problem because coordinates and momenta
do not commute.\footnote{Stated more formally,
calculating $Z$ classically only requires knowledge
of the form of the Hamiltonian function and not of the equations of
motion, while both enter the quantum calculation. Recall that
$H$ alone does not fix the equations of motion; one also needs
the Poisson brackets/commutators among the phase space variables. While
these determine the classical oscillation frequencies, they
do not enter the classical calculation of $Z$. In quantum mechanics
$\hbar$ times the classical oscillation frequencies yields the energy level
spacing. Hence the commutators are needed to find the eigenvalues of
the quantum $H$ needed to compute the quantum $Z$.}
It is for this reason that our path integral expression for $Z$
contains information on the imaginary time evolution of the system
over the interval $[0,\hbar\beta]$,
and, with a little bit of care, that information can be used to get the dynamics
in real time by the analytic continuation,
\begin{equation}
G(\tau) \longrightarrow G(+it)
\end{equation}
in Eq.~(\ref{eq:qmcorr}).
Stating it in reverse, one cannot solve for the
thermodynamics without also solving for the dynamics---a feature that
makes quantum statistical mechanics more interesting but that much
harder to do!
Heuristically, the existence of $\hbar$ implies that energy scales that
enter thermodynamics necessarily determine time scales which then enter
the dynamics and vice-versa. Consider the effect of a characteristic
energy scale, such as a gap $\Delta$, in the spectrum. By the uncertainty
principle
there will be virtual excitations across this gap on a time scale
$\hbar/\Delta$, which will appear as the characteristic time scale for
the dynamics. Close to the critical point, where $\Delta$ vanishes, and
at finite temperature this argument gets modified---the relevant uncertainty
in the energy is now $k_BT$ and the characteristic time scale is
$\hbar\beta$. In either case, the linkage between dynamics and
thermodynamics is clear.
\section{Quantum Phase Transitions} \label{qpt}
We now turn our attention to the immediate neighborhood of a
quantum critical point. In this region the mapping of the quantum
system to a d+1 dimensional classical model will allow us
to make powerful general statements about the former using
the extensive lore on critical behavior in the latter. Hence
most of the following will consist of a reinterpretation of
standard ideas in classical statistical mechanics in terms appropriate
for $d+1$ dimensions, where the extra dimension is imaginary time.
\subsection{$T=0$: Dynamic Scaling}
In the vicinity of a continuous
quantum phase transition we will find several features of
interest. First, we will find a correlation length that diverges as
the transition is approached.
That diverging correlation lengths are a generic feature
of classical critical points, immediately tells us that diverging lengths
and diverging {\em times} are automatically a
generic feature of quantum critical points, since one of the directions in
the d+1 dimensional space is time. This makes sense from the
point of view of causality. It {\em should} take a longer and longer time to
propagate information across the distance of the correlation length.
Actually, we have to be careful---as
we remarked earlier, the time direction might easily involve a different
set of interactions than the spatial directions, leading to a
distinct correlation ``length'' in the time direction. We will call the
latter $\xi_\tau$, reserving the symbol $\xi$ for the spatial correlation
length.
Generically, at $T=0$ both $\xi(K)$ and $\xi_\tau(K)$
diverge as $\delta \equiv K - K_c \longrightarrow 0$ in the
manner,\footnote{Here and in the following, we do not write the
microscopic length and time scales that are needed to make dimensional
sense of these equations. See Goldenfeld (1992).}
\begin{eqnarray}
\xi &\sim& |\delta|^{-\nu} \nonumber \\
\xi_\tau &\sim& \xi^z.
\end{eqnarray}
These asymptotic forms serve to define the correlation length exponent $\nu$,
and the {\em dynamical scaling} exponent, $z$.
The nomenclature is historical, referring to the extension of scaling
ideas from the study of static classical critical phenomena to dynamics
in the critical region associated with critical slowing down
\cite{hh1,hh2}. In the classical problem the extension was
a non-trivial step, deserving of a proper label. As remarked
before, the quantum problem involves statics and dynamics on the
same footing and so nothing less is possible. For the case of the
Josephson junction array considered previously, we found the simplest
possible result, $z=1$. As noted however this is a special isotropic
case and in general, $z \ne 1$.
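A toy numerical illustration of these diverging scales may be useful here (ours; the model is the standard transverse-field Ising chain, which is not discussed in the text but is the simplest lattice system with a quantum critical point, at $g_c=1$ with $\nu=z=1$). As $g\to g_c$ the excitation gap shrinks and with it the inverse correlation time $\hbar/\xi_\tau$, anticipating the connection between gaps and temporal correlations discussed below.
\begin{verbatim}
# Toy illustration (exact diagonalization of a small ring): in the transverse-field
# Ising chain H = -sum_i sz_i sz_{i+1} - g sum_i sx_i the gap closes at g_c = 1
# (exact chain value 2|g-1| for g > 1), so xi_tau ~ hbar/gap diverges.  On a small
# ring the gap deviates from 2|g-1| once the correlation length exceeds L.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(op, i, L):
    return reduce(np.kron, [op if j == i else I2 for j in range(L)])

L = 8
for g in (2.0, 1.5, 1.2, 1.1):
    H = sum(-site_op(sz, i, L) @ site_op(sz, (i+1) % L, L) - g*site_op(sx, i, L)
            for i in range(L))
    E = np.linalg.eigvalsh(H)
    print(f"g={g:3.1f}  finite-size gap={E[1]-E[0]:.3f}   2|g-1|={2*abs(g-1):.3f}")
\end{verbatim}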
As a consequence of the diverging $\xi$ and $\xi_\tau$, it turns out
that various physical quantities in the critical region close to the
transition have ({\em dynamic}) {\em scaling forms}, i.e. their dependence
on the independent variables involves homogeneity relations of the form:
\begin{equation}
{\cal O}(k, \omega, K) = \xi^{d_{\cal O}} O(k \xi, \omega \xi_\tau)
\label{eq:homogen}
\end{equation}
where $d_{\cal O}$ is called the scaling dimension\footnote{The
scaling dimension describes how physical quantities change under a
renormalization group transformation in which short wavelength degrees
of freedom are integrated out. As this is partly a naive change of scale,
the scaling dimension is often close to the naive (``engineering'')
dimension of the observable but (except at special, non-generic,
fixed points) most
operators develop ``anomalous'' dimensions. See Goldenfeld (1992).}
of the observable ${\cal O}$ measured at wavevector $k$ and frequency $\omega$.
The meaning of (and assumption behind) these scaling forms is
simply that, close to the critical point, there is no characteristic
length scale other than $\xi$ itself\footnote{For a more precise statement
that includes the role of cutoff scales, see Goldenfeld (1992).}
and no characteristic time scale other than $\xi_\tau$. Thus the specific
value of the coupling $K$ does not appear explicitly on the RHS of
Eq.(\ref{eq:homogen}). It is present only implicitly through the $K$
dependence of $\xi$ and $\xi_\tau$.
If we specialize to the scale invariant critical point, the scaling form
in Eq.(\ref{eq:homogen}) is no longer
applicable since the correlation length and times have diverged to infinity.
In this case the only characteristic length left is the wavelength
$2\pi/k$ at which
the measurement is being made, whence the only characteristic frequency is
$\bar\omega \sim k^z$. As a result we find the simpler scaling form:
\begin{equation}
{\cal O}(k, \omega, K_c) = k^{-d_{\cal O}} \tilde{O}(k^z/\omega),
\label{eq:homogen_crit}
\end{equation}
reflecting the presence of {\em quantum} fluctuations on all length and time
scales.\footnote{Equivalently, we could have argued that the scaling
function on the RHS of Eq.(\ref{eq:homogen}) must for large arguments
$x,y$ have the form
$O(x,y) \sim x^{-d_{\cal O}} \tilde{\cal O}(x^z y^{-1})$ in order for the
observable to have a sensible limit as the critical point is approached.}
The utility and power of these scaling forms can be illustrated by
the following example. In an ordinary classical system at a critical point
in $d$ dimensions
where the correlation length has diverged, the correlations of many
operators typically fall off as a power law
\begin{equation}
\tilde G(r) \equiv
\langle{\cal O}({\bf r}){\cal O}({\bf 0})\rangle \sim
\frac{1}{r^{(d-2+\eta_d)}},
\end{equation}
so that the Fourier transform diverges at small wavevectors like
\begin{equation}
G(k) \sim k^{-2+\eta_d}.
\end{equation}
Suppose that we are interested in a QPT for which the d+1-dimensional
classical system is effectively isotropic and the dynamical exponent $z=1$.
Then the Fourier transform of the correlation function for the
$d+1$-dimensional problem is
\begin{equation}
G(k,\omega_n) \sim \left[\sqrt{k^2 + \omega_n^2}\right]^{-2+\eta_{d+1}},
\end{equation}
where the $d+1$ component of the `wavevector' is simply the Matsubara
frequency used to Fourier transform in the time direction. Analytic
continuation to real frequencies via the usual prescription \cite{Mahan}
$i\omega_n \longrightarrow \omega + i\delta$ yields the retarded
correlation function
\begin{equation}
G_{\rm R}(k,\omega + i\delta) \sim
\
\left[{k^2 - (\omega + i\delta)^2}\right]^{(-2+\eta_{d+1})/2}.
\label{eq:branchcut}
\end{equation}
Instead of a pole at the frequency of some coherently oscillating
collective mode, we see instead
that $G_{\rm R}(k,\omega + i\delta)$ has a branch cut for frequencies above
$\omega = k$ (we have implicitly set the characteristic velocity to unity).
Thus we see that there is no characteristic frequency other than $k$
itself (in general, $k^z$ as in Eq.(\ref{eq:homogen_crit})),
as we discussed above.
This implies that collective modes have become
overdamped and the system is in an incoherent diffusive regime. The review
by Sachdev contains some specific examples which nicely illustrate these
points \cite{sachdevIUPAP}.
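The branch cut is also easy to visualize numerically. The short sketch below (ours; the value $\eta_{d+1}=0.25$ is chosen arbitrarily for visibility and $k$ is set to 1) evaluates the retarded function above and shows that its imaginary part, the spectral weight, is negligible below $\omega=k$ and forms a featureless continuum above it:
\begin{verbatim}
# Numerical look at the branch cut (illustrative values: eta = 0.25, k = 1):
# Im G_R vanishes (up to the small regulator delta) for omega < k and is a smooth
# continuum for omega > k -- no sharp collective-mode pole.
import numpy as np

eta, k, delta = 0.25, 1.0, 1e-3
for omega in (0.5, 0.9, 1.1, 1.5, 2.0):
    GR = (k**2 - (omega + 1j*delta)**2)**((-2 + eta)/2)
    print(f"omega={omega:3.1f}   Im G_R = {GR.imag: .4f}")
\end{verbatim}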
Finally, three comments are in order. First, as we saw in the example
of the Josephson junction array,
a finite temporal correlation length
means that there is a gap in the spectrum of the quantum problem.
Conversely, critical systems are gapless.
Second, the exponent $z$ is a measure of
how skewed time is, relative to space, in the {\em critical} region.
This does not, {\em a priori}, have anything to do with what happens in
either of the phases. For example, one should resist the temptation to
deduce the value of $z$ via $\omega \sim q^z$
from the dispersion of any Goldstone mode\footnote{A Goldstone mode
is a gapless excitation that is present as a result of a broken continuous
symmetry in the ordered phase of a system. Broken continuous symmetry
means that the energy is degenerate under a continuous family of uniform
{\em global} symmetry transformations, for example uniform rotation of the
magnetization in an XY magnet. This implies that non-uniform but long
wavelength rotations must cost very little energy and hence there exists a
low-energy collective mode in which the order parameter fluctuates at long
wavelengths. See Goldenfeld (1992) and Chaikin and Lubensky (1995).}
in the ordered phase. This is incorrect since the exponent $z$ is a
property of the critical point itself, not of the ordered phase.
Third, we should restate the well known wisdom that
the diverging lengths and the associated scaling of physical quantities
are particularly interesting because they represent {\em universal}
behavior, i.e., behavior
insensitive to microscopic details within certain global constraints such
as symmetry and dimensionality \cite{goldenfeld}.
\subsection{$T \ne 0$: Finite Size Scaling}
So far we have described the framework, appropriate to the system at
$T=0$, that would describe the underlying QPT in any system.
As the experimentally accessible behavior of the system necessarily
involves a non-zero temperature, we need to understand how to modify the
scaling forms of the previous section for the $T \ne 0$ problem.
The crucial observation for this purpose is, as noted earlier and
illustrated in Fig.~(\ref{fig:two-sizes}), that the {\em only} effect of
taking $T \ne 0$ in the partition function (\ref{eq:complete_set})
is to make the
temporal dimension {\em finite}; in particular, there is no change in the
coupling $K$ with physical temperature. The effective classical system now
resembles a hyper-slab with $d$ spatial dimensions (taken to be infinite
in extent) and
one temporal dimension of size $L_\tau \equiv \hbar \beta$. As phase
transitions depend sensitively upon the dimensionality of the system,
we expect the finiteness of $L_\tau$ to modify the critical
behavior, since at the longest length scales the system is now $d$ dimensional.
This modification can take two forms. First, it can destroy the transition
entirely so that the only critical point is at $T=0$. This happens in
the case of the 1D Josephson array. Its finite temperature physics
is that of an XY model on an infinite strip which, being a one dimensional
classical system with short-range forces,
is disordered at all finite values of $K$ (finite
temperatures in the classical language).
In the second form, the modification is such that
the transition persists to $T \ne 0$ but crosses over
to a different universality class. For example, the problem of a 2D Josephson
junction array maps onto a 3($=$2+1) dimensional classical XY model.
Its phase diagram is illustrated in Fig.~(\ref{fig:phase_diagram}).
At $T=0$ the
QPT for the transition from superconductor to insulator
is characterized by the exponents of the 3D XY model. That is, it looks
just like the classical lambda transition in liquid helium with $K-K_{\rm
c}$ playing the role of $T-T_{\rm c}$ in the helium.
However at $T \ne 0$ the system is effectively two dimensional and
undergoes a transition of the 2D
Kosterlitz-Thouless\footnote{The Kosterlitz-Thouless phase transition
is a special transition exhibited by two-dimensional systems having
a continuous XY symmetry. It involves the unbinding of topological vortex
defects. See Goldenfeld (1992) and Chaikin and Lubensky (1995).}
XY variety at a new,
smaller, value of $K$, much like a helium film. The Kosterlitz-Thouless
(KT) transition occurs on the solid line in Fig.~(\ref{fig:phase_diagram}).
We see that it is necessary to
reduce the quantum fluctuations (by making $K$ smaller) in order to allow
the system to order at finite temperatures. Above some critical
temperature, the system will become disordered (via the KT mechanism)
owing to thermal fluctuations. Of course, if we
make $K$ {\em larger} the quantum fluctuations are then so large that the
system does not order at any temperature, even zero. The region on the
$K$ axis (i.e., at $T=0$) to the right of the QCP in
Fig.~(\ref{fig:phase_diagram}) represents the quantum disordered
superconductor, that is, the insulator. At finite temperatures, no system
with a finite gap is ever truly insulating. However there is a crossover
regime, illustrated by the dotted line in Fig.~(\ref{fig:phase_diagram})
separating the regimes where the temperature is smaller and larger than the
gap.
At this point the reader might wonder how one learns anything at all
about the QPT if the effects of temperature are so dramatic. The
answer is that even though the finiteness of $L_\tau$ causes a {\em crossover}
away from the $T=0$ behavior, sufficiently close to the $T=0$ critical
point,
it does so in a fashion controlled by the physics at that critical point.
This is not an unfamiliar assertion. In the language of the renormalization
group, critical points are always unstable fixed points and lead to scaling
not because they decide where the system ends
up but because they control
how ``long'' it takes to get there. Here, instead of moving the system
away from the critical fixed point by tuning a parameter, we do so by
reducing its dimensionality.
Since the physics has to be continuous in temperature, the question
arises of how large the temperature has to be before the system
`knows' that its dimension has been reduced.
The answer to
this is illustrated in Fig.~(\ref{fig:finite-size}). When the coupling
$K$ is far away from the
zero-temperature critical coupling $K_{\rm c}$ the correlation length $\xi$
is not large and the corresponding correlation time $\xi_\tau \sim \xi^z$
is small. As long as the correlation time is smaller than the system
`thickness' $\hbar \beta$, the system does not realize that the temperature
is finite. That is, the characteristic fluctuation frequencies obey
$\hbar\omega \gg k_{\rm B}T$ and so are quantum mechanical in nature.
However as the critical coupling is approached, the correlation time grows
and eventually exceeds $\hbar\beta$. (More precisely, the correlation time
that the system
would have had at zero temperature exceeds $\hbar\beta$; the
actual fluctuation correlation time is thus limited by temperature.)
At this point the system `knows' that
the temperature is finite and realizes that it is now effectively
a $d$-dimensional classical system rather than a $d+1$-dimensional system.
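It is worth making this crossover criterion explicit. Writing $\xi \sim |K-K_{\rm c}|^{-\nu}$ for the correlation length and using $\xi_\tau \sim \xi^z$, the condition $\xi_\tau \sim \hbar\beta$ becomes
\[
|K-K_{\rm c}|^{-\nu z} \sim \frac{\hbar}{k_{\rm B}T}
\qquad \Longrightarrow \qquad
|K-K_{\rm c}| \sim T^{1/\nu z} ,
\]
which is precisely the combination of coupling and temperature that will reappear below as the scaling variable in the finite size scaling forms.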
The formal theory of the effect of reduced dimensionality near
critical points, which quantifies the above considerations, is called
{\em finite size scaling} \cite{fss}.
For our problem
it asserts that for $[K-K_{\rm c}]/K_{\rm c} \ll 1$ and
$T \rightarrow 0$, physical quantities have the finite size scaling form,
\begin{equation}
{\cal O}(k, \omega, K, T) = L_\tau^{d_{\cal O}/z} {\cal O}(k L_\tau^{1/z},
\omega L_\tau , L_\tau/\xi_\tau).
\label{eq:finite_scaling}
\end{equation}
The interpretation of this form is the following. The quantity
$L_\tau\equiv\hbar\beta$ defined above leads, as discussed in more detail
below,
to a characteristic length $\sim L_\tau^{1/z}$
associated with the temperature. Hence the
prefactor $L_\tau^{d_{\cal O}/z}$ is the analog of the corresponding
prefactor in Eq.~(\ref{eq:homogen}). This same characteristic length is
the only one against which to measure the wave vector $k$. The associated
time
$L_\tau$ is the time scale against which to measure the frequency in the
second term. Finally the distance to the zero temperature critical
coupling is measured via the ratio of $L_\tau$ to the zero temperature
correlation time $\xi_\tau$.
The message here is that what matters is the ratio of the finite size
in the time direction
to the $T=0$ correlation length in that direction. We will return to
the uses of this form shortly.
Our considerations in this section also show us why the phase boundary in
Fig.~(\ref{fig:phase_diagram}) (solid line) and the crossover line (dashed
line) reach zero temperature in a singular way as the quantum critical
point is approached. Simple dimensional analysis tells us that the
characteristic energy scale ($\Delta$ for the insulator, $T_{\rm KT}$ for
the superfluid) vanishes near the critical point like $\hbar/\xi_\tau$,
implying
\begin{eqnarray}
\Delta &\sim& |K-K_{\rm c}|^{\nu z}\theta(K-K_{\rm c})\nonumber\\
T_{\rm KT} &\sim& |K-K_{\rm c}|^{\nu z}\theta(K_{\rm c}-K).
\end{eqnarray}
\subsection{The Quantum-Classical Crossover and the Dephasing Length}
We now turn to a somewhat different understanding and interpretation
of the effect of temperature that is conceptually of great importance.
Recall that the $T=0$ critical points of interest to us are
gapless and scale invariant, i.e., they have quantum fluctuations at
all frequencies down to zero. Temperature introduces a new energy scale
into the problem. Modes whose frequencies are larger than $k_B T/\hbar$
are largely unaffected,
while those with frequencies lower than $k_B T/\hbar$ become
occupied by many quanta with the consequence that they behave
{\em classically}. Put differently, the temperature cuts off
coherent quantum fluctuations in the infrared.
What we want to show next is that this existence of a quantum to classical
crossover frequency ($k_B T/\hbar$) leads to an associated length
scale for the same crossover, as alluded to in the previous section.
We shall refer to this length scale
as the dephasing length, $L_\phi$, associated with the QPT. The
temperature dependence of $L_\phi$ is easy enough to calculate. From
our imaginary time formalism we recall that quantum fluctuations are
fluctuations in the temporal direction.
Evidently these cannot be longer-ranged than the size of
the system in the time direction, $L_\tau = \hbar \beta$.
Since spatial and temporal correlations are linked via $\xi_\tau \sim \xi^z$,
it follows that the spatial correlations {\em linked} with quantum
fluctuations are not longer ranged than $L_\tau^{1/z}$. Since the spatial
range of quantum fluctuations is the dephasing length, we find
$L_\phi \sim 1/T^{1/z}$.
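Written out, the argument is simply that
\[
\xi_\tau \lesssim L_\tau = \hbar\beta
\qquad \Longrightarrow \qquad
L_\phi \sim L_\tau^{1/z} = \left(\frac{\hbar}{k_{\rm B}T}\right)^{1/z} ,
\]
so that the dephasing length diverges as $T \rightarrow 0$, but only as the $1/z$ power of the inverse temperature.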
We use the term ``dephasing'' deliberately. Readers may know,
from other contexts where quantum effects are observed, that
such observation requires {\em phase coherence} for the relevant
degrees of freedom. In other words, interference terms should not be
wiped out by interactions with an ``environment'', i.e. other degrees of
freedom \cite{stern}. If dephasing takes place and the phase coherence
is lost, the resulting
behavior is classical. Thus our definition of $L_\phi$ is in line with
that notion. However, readers familiar with the notion of a dephasing length,
$\ell_\phi$, in mesoscopic condensed matter physics or the theory of
Anderson localization, might be concerned at our appropriation of this notion.
The concern arises because, in the standard lore in those fields, one is
often dealing with models of non-interacting or even
interacting electrons, whose quantum coherence is limited by degrees
of freedom, e.g. phonons, that are not being considered explicitly.
This has given rise to a tradition of thinking of $\ell_\phi$ as being
a length which is set {\em externally}.
Unfortunately this sort of separation of degrees of freedom should {\em not}
be expected to work near a QPT, since there one needs to keep track of
all {\em relevant} interactions. If a given interaction,
e.g. the Coulomb interaction, is relevant, then it already sits in the
Hamiltonian we need to solve and enters the calculation of $L_\phi$.
In contrast, if an interaction, e.g. coupling to phonons, is irrelevant
then we do not expect it to enter
the low energy physics as it should not in general affect the
quantum-classical crossover either.
Another way of formulating this is in terms of dephasing
{\em rates}. Since temperature is the only energy scale available,
a generic quantum critical point will be characterized by a
dephasing rate $\xi_\tau^{-1}$
that is linear in $T$, since we expect $\hbar/\xi_\tau \sim k_B T$. By
definition, irrelevant interactions, e.g. phonons, have an effective
coupling to the electrons which scales away to zero as one examines the
system on larger length and time scales. Hence such couplings will produce
a dephasing rate which
vanishes as $T^p$ with $p > 1$ and will therefore become negligible
compared to $T$ at low
temperatures.\footnote{Equivalently, the associated length scale will diverge
faster than $L_\phi$ as $T \rightarrow 0$ and hence will not control
the quantum-classical crossover of the relevant degrees of
freedom, since it is the shortest length that counts.
There are times when this
sort of reasoning can break down. These situations
involve operators that are irrelevant,
i.e., decrease in the infrared, but cannot be naively set to zero since that
produces extra singularities. Such operators are known in the trade as
``dangerous irrelevant operators'' and we will meet an example in the
next section, in the context of current scaling.}
Thus we conclude that $L_\phi$ is the {\em unique} dephasing length
in the vicinity of a generic quantum critical point. Further discussion
and explicit examples
of the dissipative dynamics near a critical point can be found in the
article by Sachdev (1996).
\section{Experiments: QPTs in Quantum Hall Systems}
We now turn from our somewhat abstract considerations to
examples of how one actually looks for a QPT in experiments. The
basic idea is relatively simple. We try to arrange that the system
is close to the conjectured critical point in parameter space ($K \sim
K_c$) and temperature ($T \sim 0$) and look for mutually consistent
evidence of scaling with various {\em relevant parameters}.
By these we mean either the deviation of the quantum coupling constant
from its critical value $K-K_c$,
the temperature, or the wavevector,
frequency and amplitude of a probe. We call these relevant parameters
since when they are non-zero the response of the system has no
critical singularities due to the quantum critical point---hence the
analogy to relevant operators in renormalization
group theory.\footnote{Consider for example a weak magnetic field applied
to a system undergoing ferromagnetic ordering. The magnetic field is
relevant and removes the sharp singularity in magnetization at the critical
temperature, replacing it with a rapid but smooth increase in magnetization.
Likewise, measurements
at a non-zero frequency and wavevector do not exhibit singularities across
a transition. For quantum systems,
changes in the coupling and the temperature can produce more
subtle effects: they will cut off the critical fluctuations coming from the
proximity to the quantum critical point, but either Goldstone modes coming from
a broken continuous symmetry or purely classical (thermal) fluctuations could
lead to independent singularities in the thermodynamics and response. We
saw an example of the latter in the persistence of a phase transition at finite
temperatures for the 2D Josephson junction array.}
Additionally, we can look for
universal critical amplitudes or amplitude ratios that are implicit in scaling
forms for various quantities.
To see how this works, we will consider as a specific example,
a set of recent experiments on phase transitions in quantum Hall systems.
We derive various scaling relations appropriate to the
experiments, even though we do not actually know the correct theory
describing the QPT in this disordered, interacting fermi system.
The very generality of the scaling arguments we will apply implies
that one need not have a detailed microscopic understanding of the physics
to extract useful information.
Nevertheless, we will start with some introductory
background for readers unfamiliar with the quantum Hall
effect \cite{bible1,bible2,bible3,bible4,bible5,bible6}.
The quantum Hall effect (QHE) is a property of a two dimensional electron
system placed in a strong transverse magnetic field, $B \sim 10T$. These systems
are produced, using modern semiconductor fabrication techniques,
at the interface of two semiconductors with different band gaps. The
electronic motion perpendicular to the interface is confined to a potential
well $\sim 100$\AA\ thick. Because the well is so thin, the minimum
excitation energy perpendicular to the 2D plane ($\sim 200$K)
is much larger than the temperature ($\sim 1$K) and so motion in this third
dimension is frozen out, leaving the system dynamically two-dimensional.
As the ratio of the density to the magnetic field is varied at $T=0$,
the electrons condense into a remarkable sequence of distinct
thermodynamic phases.\footnote{The exact
membership of the sequence is sample specific but obeys certain selection
rules \cite{KLZ}.} These phases are most strikingly
characterized by their unique electrical transport properties, as
illustrated in Fig~(\ref{fig:qhdata}). Within each
phase the current flow is dissipationless, in that the longitudinal
resistivity, $\rho_{\rm L}$, that gives the electric field along
the direction of current flow ($E_{\rm L} = \rho_{\rm L} j$) vanishes.
At the same time that the longitudinal resistivity vanishes, the Hall
resistivity, $\rho_{\rm H}$, that gives the electric field transverse to
the direction of current flow ($E_{\rm H} = \rho_{\rm H} j$) becomes
quantized in rational multiples of the quantum of resistance
\begin{equation}
\rho_{\rm H} = \frac{h}{\nu_B e^2}
\label{eq:qhallr}
\end{equation}
where $\nu_B$ is an integer or simple rational fraction which serves to
distinguish between the different phases.
This quantization has been verified to an accuracy of about one part
in $10^7$, and in some cases to even higher precision.
The QHE arises from a commensuration between electron density and magnetic
flux density, i.e. a sharp lowering of the energy when their ratio, the
filling factor $\nu_B$, takes on particular rational values. This
commensuration is equivalent to the existence of an energy gap in the
excitation spectrum at these ``magic'' filling factors and leads to
dissipationless flow at $T=0$ since degrading a small current requires
making arbitrarily small energy excitations which are unavailable. In the
absence of
disorder, Eqn.~(\ref{eq:qhallr}) follows straightforwardly, for example by
invoking Galilean invariance \cite{bible1}. As the magnetic field (or
density) is varied away from the magic filling factor, it is no longer
possible for the system to maintain commensuration over its entire area,
and it is forced to introduce a certain density of defects to take up
the discommensuration. In the presence of disorder, these defects, which
are the quasiparticles of the system, become localized and do not contribute
to the resistivities, which remain at their magic values. In this fashion,
we get a QH phase or a plateau.
Transitions between QH phases occur when too many quasiparticles have been
introduced into the original QH state and it becomes energetically
favorable to move to the ``basin of attraction'' of a different state
and its associated defects.
It might appear that these transitions between neighboring phases are
first order, since $\rho_{\rm H}$ jumps discontinuously
by a discrete amount between them, but they are not.
Qualitatively, they involve the quasiparticles of each phase which
are localized on a length scale, the localization length, that diverges
as the transition is approached from either side.
However, as these quasiparticles are
always localized at the longest length scale away from criticality, they
do not lead to dissipation ($\rho_{\rm L}=0$) and do not renormalize the
Hall resistivities of their respective phases.
Exactly at the transition they are delocalized and lead to a non-zero
$\rho_{\rm L}$. The shift in $\rho_{\rm H}$ on moving through the
transition can be understood in terms of either set of quasiparticles
condensing into a fluid state---there being an underlying duality in
this description.
In our description of the QH phases and phase transitions we have employed
a common language for all of them. We should note that this does not,
{\em ipso facto} imply that all quantum Hall transitions are in the same
universality class; however, experiments, as we discuss later, do seem to
suggest that conclusion. The reason for this caution is that different
QH states can arise from quite different physics at the microscopic level.
States with integer $\nu_B$ arise, to first approximation, from single
particle physics. An electron in a plane can occupy a {\em Landau level}
which comprises a set of degenerate states with energy
$(n+1/2) \hbar \omega_c$; these reflect the quantization of the classical
cyclotron motion having frequency $\omega_c = \frac{eB}{m}$
and the arbitrariness
of the location of that motion in
the plane. When an integer number of Landau levels are full, and this
corresponds to an integer filling factor, excitations involve the promotion
of an electron across the cyclotron gap and we have the commensuration/gap
nexus necessary for the observation of the (integer) QHE. In contrast,
fractional
filling factors imply fractional occupations of the Landau levels,
with attendant macroscopic degeneracies, and they exhibit a gap only when
the Coulomb interaction between the electrons is taken into account (the
fractional QHE).\footnote{This leads to the remarkable feature that while
the quasiparticles of the integer states
are essentially electrons, those of the fractional states are
fractionally charged and obey fractional statistics.}
Readers interested in the details of this magic trick are encouraged
to peruse the literature.
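As a rough orientation on the scales involved (the numbers here are merely illustrative), a field of $B = 10$T corresponds to a flux density $eB/h \approx 2.4 \times 10^{11}$ cm$^{-2}$, so that integer filling requires electron densities of the same order, while for GaAs, taking an effective mass $m^* \approx 0.067\, m_{\rm e}$, the cyclotron gap is
\[
\hbar\omega_c = \frac{\hbar e B}{m^*} \approx 17\ {\rm meV} \approx 200\ {\rm K} ,
\]
comfortably larger than the experimental temperatures of order $1$K quoted above.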
Before proceeding to the details of experiments, we need to discuss two
important points about the units of the quantities measured in
electrical transport where two spatial dimensions are rather special.
Experiments measure resistances, which are ratios of total voltages
to current and these are related to local resistivities by ratios of
cross-sectional ``areas'' to lengths. In two dimensions, a cross-sectional
area {\em is} a length and consequently no factor of length intervenes
between the global and local quantities. In the QH phases, this has
the important implication that {\em no} geometrical factor affects the
measurement of the Hall resistance, which is why the ratio of fundamental
constants $h/e^2$ and hence the fine structure constant can be measured
to high accuracy on samples whose geometry is certainly not known to
an accuracy of one part in $10^7$.
What we have said above is a statement about the engineering dimensions of
resistance and resistivity. Remarkably, this also has an analog when it
comes to their {\em scaling} dimensions at a quantum critical point,
i.e. their scaling dimensions vanish.\footnote{See \cite{fgg,chaPRB}.
This is analogous to the behavior of the superfluid density at the
classical Kosterlitz-Thouless phase transition \cite{chaikin-lubensky}
and leads to a universal jump in it.}
Consequently, the resistivities vary as the zeroth power of the diverging
correlation length on
approaching the transition, i.e. will remain constant on either side.
Precisely at criticality they will be independent of the length scale used
to define them but can take values distinct from the neighboring
phases.
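A heuristic way to see this, offered as a dimensional sketch rather than a derivation, is to note that at a scale invariant critical point the only conductance scale is $e^2/h$ and the only length is the correlation length, so that
\[
\sigma \sim \frac{e^2}{h}\, \xi^{\,2-d} ,
\]
which is independent of $\xi$ precisely in $d=2$; the same then holds for the resistivities.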
In this fashion, we have recovered from a purely scaling argument our
earlier conclusion that even though the quantum Hall transitions are
continuous, the resistivities at $T=0$ differ from the quantized values
{\em only} at critical points. Detecting the continuous transitions then
requires measurements at a non-zero temperature, frequency
or current, all of which lead to a more gradual
variation which can then be examined for scaling behavior.
Below are some examples of how that works.
\subsection{Temperature and Frequency Scaling}
Consider the caricature of a
typical set of data shown in Fig.~(\ref{fig:Tdata}).
Note that $\rho_{\rm H}$ interpolates
between its quantized values over a transition region of non-zero width,
while $\rho_{\rm L}$ is peaked in the same region, but is extremely small
outside the region. The change in the shape of these curves with temperature
can be understood on the basis of the finite size scaling form
\begin{equation}
\rho_{\rm {L/H}}(B,T,\omega)
= f_{\rm {L/H}}(\hbar\omega/ k_{\rm B}T,\delta/T^{1/\nu z}),
\label{eq:rhoscaling}
\end{equation}
where $\delta \equiv (B-B_{\rm c})/B_{\rm c}$ measures the distance to the
zero temperature critical point.
This form is equivalent to the general finite-size scaling form in
Eq.~(\ref{eq:finite_scaling}) except that we have assumed the limit $k=0$,
and used the previously cited result that the scaling dimension
of the resistivity vanishes in $d=2$ \cite{fgg,chaPRB}.
The first argument
in the scaling function here is the same as the second in
Eq.~(\ref{eq:finite_scaling}). The second argument in the scaling
function here is simply a power of the third argument in
Eq.~(\ref{eq:finite_scaling}). This change is inconsequential; it can be
simply absorbed into a redefinition of the scaling function.
First, let us consider a DC or $\omega =0$ measurement. In this case
our scaling form implies that the resistivities are not independent
functions of $\delta$ (or $B$) and $T$ but instead are functions
of the single
{\em scaling variable} $\delta/T^{1/\nu z}$. Hence the effect of lowering
$T$ is to rescale the deviation of the field from its critical
value by the factor $T^{1/\nu z}$. It follows that the
transition appears sharper and sharper as the temperature is lowered,
its width vanishing as a universal power of the temperature,
$\Delta B \sim T^{1/\nu z}$.
In Fig.~(\ref{fig:wei-data}) we show the pioneering data of Wei {\it et al.}
(1988) that indeed shows such an algebraic dependence for several
different
transitions all of which yield the value $1/\nu z \approx 0.42$. These
transitions are between integer quantum Hall states.
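To get a feel for what this exponent means in practice, note that with $1/\nu z \approx 0.42$ the transition width shrinks by
\[
\frac{\Delta B(T/10)}{\Delta B(T)} = 10^{-0.42} \approx 0.38
\]
for every decade of cooling; for example, going from $1$K to $100$mK sharpens the transition by a factor of roughly $2.6$.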
Remarkably, this temperature scaling behavior seems to be ubiquitous
at quantum Hall transitions and suggests that there is a single underlying
fixed point for all of them. It was shown by Engel {\it et al.} (1990)
that it holds at transitions between two fractional quantum Hall states.
Subsequently,
Wong {\it et al.} (1995) found the same scaling for the transition
between a Hall state and the insulator. In Fig.~(\ref{fig:shahar-data}) we
show some recent data
of Shahar (1995) near another such transition, plotted both
as a function of the magnetic
field at several values of $T$, and against the scaling variable
$\delta/T^{1/\nu z}$, exhibiting the data collapse characteristic
of the critical region.
Consider now the results of measurements at non-zero frequencies. In
their full generality these require a two variable scaling
analysis \cite{engel-two} but we focus instead on two distinct
regimes. In the regime
$\hbar \omega \ll k_B T$ we expect that the behavior of the scaling
function will be governed by its $\omega=0$ limit analyzed previously,
i.e. at small $\omega$ we expect the scaling to be dominated
by $T$. In the second regime, $\hbar \omega \gg k_B T$, we expect
the scaling to be dominated by $\omega$ and the scaling function to
be independent of $T$. In order for the temperature to drop out,
the scaling function in Eq.~(\ref{eq:rhoscaling}) must have the form
\begin{equation}
f(x,y) \sim \tilde{f} (y x^{-1/\nu z})
\end{equation}
for large $x$ so that the scaling variables
conspire to appear in the combination,
\begin{equation}
\left(\frac{\hbar \omega}{k_B T}\right)^{-1 /\nu z} {\delta \over T^{1/\nu z}}
\sim {\delta \over \omega^{1/\nu z}} \ .
\end{equation}
It follows that at high frequencies the resistivities are functions
of the scaling variable $\delta/\omega^{1/\nu z}$ and that the width
of the transition regions scales as $\omega^{1/\nu z}$.
Fig.~(\ref{fig:engel-data}) shows frequency dependent
conductivity\footnote{The conductivities scale in exactly the same fashion
as the resistivities.} data of Engel {\it et al.} (1993) which exhibits
this algebraic increase in the width of the transition region with
frequency and yields a value of $\nu z$ consistent with the temperature
scaling.
We should note an important point here. As the ratio $\hbar \omega/k_B T$
is varied in a given experiment we expect to see a crossover between
the $T$ and $\omega$ dominated scaling regimes. The criterion for
this crossover is $\hbar \omega \approx k_B T$.
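In practical units, and taking $T = 100$mK purely for illustration, this criterion reads
\[
\frac{\omega}{2\pi} \approx \frac{k_{\rm B}T}{h} \approx 2\ {\rm GHz}\times\left(\frac{T}{100\ {\rm mK}}\right) ,
\]
so the crossover falls in the microwave (GHz) range at the low temperatures of these experiments.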
The observation by Engel {\it et al.} (1990), that this is indeed the
correct crossover criterion (see Fig.~(\ref{fig:engel-data})) is
important for two reasons. First, it involves $\hbar$ and clearly implies
that {\em quantum} effects are at issue. Second, it implies that $T$ is the
relevant infrared scale. If dephasing effects coming from coupling to
some irrelevant degree of freedom
were important, one would expect the crossover to take place
when $\omega \tau \approx 1$, where $1/\tau$ is some microscopic
scattering or relaxation rate associated with this coupling.
Since the coupling is irrelevant it will, as noted earlier,
give a scattering rate that vanishes as $AT^p$ where $p$ is greater
than unity and $A$ is non-universal \cite{sls-stevek} (e.g., it depends on
the precise value of the electron-phonon coupling constant for the material).
In contrast, what is
observed is that the relaxation rate obeys $1/\tau = Ck_{\rm B}T/\hbar$
where $C$ is a {\em universal} \cite{sachdevIUPAP}
dimensionless constant of order unity.
It is important to note that frequency scaling does not give
us any new information on exponents that we did not already have from the
temperature scaling. The main import of frequency scaling is its ability
to confirm the quantum critical nature of the transition by showing that
the characteristic time scales have diverged, leaving the temperature
itself as the only frequency scale.
\subsection{Current Scaling}
A third relevant parameter that is experimentally useful is the
magnitude of the measuring current or, equivalently, of the applied
electric field. In talking about resistivities we have assumed that there
is an ohmic regime at small currents, i.e., a regime in which the voltages
are linear in the current.
In general, there is no reason to
believe that the non-linear response can be related to equilibrium
properties---i.e., there is no fluctuation-dissipation theorem beyond
the linear regime. However, in the vicinity of a critical point
we expect the dominant non-linearities to come from critical
fluctuations. At $T=0$, the electric field scale for these can be estimated
from an essentially dimensional argument. We start by defining a
characteristic length $\ell_E$ associated with the electric field. Imagine
that the system is at the critical point so that $\ell_E$ is the only
length scale available. Then the only characteristic time for
fluctuations of the system will scale like $\ell_E^{+z}$.
We can relate the length $\ell_E$ to the electric field that produces it by
\begin{equation}
eE\ell_E \sim \hbar\ell_E^{-z}.
\end{equation}
This expression simply equates the energy gained from the electric field by
an electron moving a distance $\ell_E$ to the characteristic energy
of the equilibrium system at that same scale. Thus
\begin{equation}
\ell_E \sim E^{-1/(1+z)}.
\end{equation}
If the system is not precisely at the critical point, then it is this
length $\ell_E$ that we should compare to the correlation length
\begin{equation}
\frac{\ell_E}{\xi} \sim \delta^{\nu} E^{-1/(1+z)}\sim
\left(\frac{\delta}{E^{1/\nu(z+1)}}\right)^\nu.
\end{equation}
From this we find that the
non-linear DC resistivities for a 2D system obey the scaling forms
\begin{equation} \label{eq:iscaling}
\rho_{L/H}(B,T,E) = g_{\rm L/H}(\delta/T^{1/\nu z}, \delta/E^{1/\nu(z+1)}) .
\label{current_scaling1}
\end{equation}
This is a very useful result because it tells us that electric field
scaling will give us new information not available from temperature scaling
alone. From temperature scaling we can measure the combination of
exponents $\nu z$. Because an electric field requires multiplication by
one power of the correlation
length to convert it to a temperature (energy),
electric field scaling measures the
combination of exponents $\nu(z+1)$. Thus the two measurements can be
combined to separately determine $\nu$ and $z$.
The data of Wei {\it et al.} (1994), Fig.~(\ref{fig:wei-Idata}),
confirm this
and yield the value $\nu(z+1) \approx 4.6$.
Together the $T$, $\omega$ and $I$ scaling experiments lead to the
assignment $\nu \approx 2.3$ and $z \approx 1$.
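The arithmetic behind this assignment is simply
\begin{eqnarray*}
\nu z &\approx& 1/0.42 \approx 2.4 , \\
\nu(z+1) &\approx& 4.6 \quad \Longrightarrow \quad \nu \approx 4.6 - 2.4 \approx 2.2 , \qquad z \approx 2.4/2.2 \approx 1.1 ,
\end{eqnarray*}
consistent, within the experimental uncertainties, with $\nu \approx 2.3$ and $z \approx 1$.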
Equation~(\ref{current_scaling1}) tells us that there are two scaling
regimes. At sufficiently high temperatures, $L_\phi \ll \ell_E$ and
the scaling is controlled by the
temperature. Below a crossover temperature scale
\begin{equation}
T_0(E) \sim \ell_E^{-z} \sim E^{z/(1+z)},
\label{eq:T_0(E)}
\end{equation}
$L_\phi > \ell_E$ and
the scaling is controlled by the electric field $E$ (or equivalently, the
applied current $I$). One
might be tempted to identify $T_0(E)$ as the effective temperature of the
electrons in the presence of the electric field, but this is not strictly
appropriate since the system is assumed to have been driven out of
equilibrium on length scales larger than $\ell_E$.
This quantum critical scaling picture explicitly assumes that the slow internal
time scales of the system near its critical point control the response to
the electric field and implicitly assumes that we can
ignore the time scale which determines how fast the Joule heat
can be removed by phonon radiation.
Thus this picture is quite distinct from that of a simple heating scenario
in which the electron gas itself equilibrates rapidly, but undergoes
a rise in temperature if there is a bottleneck for the energy deposited by
Joule heating to be carried away by the phonons.
This effect can give rise to an apparent
non-linear response that is, in fact, the linear response of the electron
gas at the higher temperature. The power radiated into phonons at low
electron temperatures scales as
\begin{equation}
P_{\rm ph} = A T_{\rm e}^\theta,
\end{equation}
where $\theta = 4-7$ depending on details \cite{chow}.
Equating this to the Joule
heating (assuming a scale invariant conductivity) yields an electronic
temperature
\begin{equation}
T_{\rm elec} \sim E^{2/\theta}.
\label{eq:T_e(E)}
\end{equation}
We now have a paradox. The more irrelevant phonons are at low temperatures
(i.e., the larger $\theta$ is), the smaller is the exponent $2/\theta$
and hence {\em the more singular is the temperature rise
produced by the Joule heat}. Comparing Eqs.(\ref{eq:T_0(E)})
and (\ref{eq:T_e(E)})
we see that for
\begin{equation}
\frac{2}{\theta} < \frac{z}{z+1},
\end{equation}
we have
\begin{equation}
T_{\rm elec} > T_0(E).
\end{equation}
That is, we then have that
the temperature rise needed to radiate away sufficient power
is larger than the characteristic energy (`temperature')
scale predicted by the quantum critical scaling picture.
In this case the phonons are `dangerously
irrelevant' and the simple quantum critical scaling prediction fails.
It happens that for the case of GaAs, which is piezoelectric, $2/\theta =
1/2$ which gives the same singularity exponent as the quantum critical model
$z/(z+1) = 1/2$ (since $z=1$).
Hence both quantum critical and heating effects are important.
(The phonon coupling is `marginally dangerous'.)
This result is discussed in more detail
elsewhere \cite{chow,girvin-sondhi-fisher}.
\subsection{Universal Resistivities}
The final signatures of critical behavior which we wish to discuss
are universal amplitudes, and, more generally, amplitude ratios.
These are readily illustrated in the quantum Hall problem without
considering their general setting, for which we direct the reader
to the literature
\cite{amplitudes1,amplitudes2,amplitudes3,2dafm2,sachdevIUPAP}.
Note that the scaling forms (\ref{eq:rhoscaling}) and
(\ref{eq:iscaling}) imply that the resistivities at $B=B_c$ in the
critical region are independent of $T, \omega$ and $I$. Under certain
assumptions it is possible to argue that they are, in fact, universal
\cite{KLZ,fgg}. The observation of such universality
between microscopically different samples would then be strong evidence
for an underlying QPT as well.
Recently Shahar {\it et al.} (1995) have carried out a study of the
critical resistivities at the transition from the $\nu_B =1$ and $1/3$
quantum Hall states to the insulating state. An
example of their data is shown in Fig.~(\ref{fig:shahar-data}). Notice
that there exists a critical value of $B$ field at which the resistivity
is temperature-independent. For $B<B_{\rm c}$ the resistivity scales
upward with decreasing $T$, while for $B>B_{\rm c}$, it scales downward
with decreasing $T$. Since we can think of lowering $T$ as increasing the
characteristic length scale $L_\phi$ at which we examine the system, we
see that the point where all the curves cross is the scale-invariant point
of the system and hence must be the critical point.
Shahar {\it et al.} (1995) find that at these critical points,
$\rho_{\rm L}$ is {\em independent} of the sample studied and in fact
appears to be $h/e^2$ within experimental error for both transitions.
Preliminary studies \cite{shahar_privcomm} also seem to find
sample-independent values of $\rho_{\rm H}$ at the critical points with
values of $h/e^2$ and $3h/e^2$ for the two transitions.
\subsection{Unresolved Issues}
As we have tried to indicate, the success of experimental work
in making a case for universal critical behavior at
transitions in the quantum Hall regime is impressive.
However, not everything is settled on this score. Apart from the
delicate issues surrounding the interpretation of the current
scaling data mentioned earlier, there is one significant puzzle.
This concerns the failure of $\rho_L$ at the transition between two
generic QH states to exhibit a $T$-independent value at a critical field
even as the width of the curve exhibits algebraic behavior.\footnote{Hence
our unwillingness to plot the actual traces in Fig.~(\ref{fig:Tdata}).}
This is generally believed to stem from macroscopic inhomogeneities
in the density and some recent theoretical work offers support
for this notion \cite{ruzinetc}. Nevertheless, this is an issue that
will need further work. The transitions to the insulator, studied more
recently, are believed to be much less sensitive to this problem, and
hence the consistency of the data on those is encouraging. However, in
these cases the temperature range over which there is evidence for
quantum critical scaling is quite small as in the data in
Fig.~(\ref{fig:shahar-data}) which leads us to a general caveat.
Evidence for power laws and scaling should properly consist of overlapping
data that cover several decades in the parameters. The various
power law dependences that we have exhibited span at best two decades,
most of them fewer, and the evidence for data collapse within the error bars of
the data exists only over a small range of the scaled variables. Consequently,
though the overall picture of the different types of
data is highly suggestive, it cannot really
be said that it does more than indicate consistency with the scaling expected
near a quantum critical point. Regrettably, there is at present no example
of a quantum critical phase transition as clean as
the remarkable case of the classical lambda transition in superfluid
helium, for which superb scaling can be demonstrated \cite{ahlers}.
On the theoretical front the news is mixed.
Remarkably, the experimental value of the correlation length exponent
$\nu \approx 2.3$ is consistent with numerical
calculations of the behavior of {\em non-interacting} electrons in a strong
magnetic field \cite{bodo}. Also, the critical resistivities at the
transition from
the $\nu_B=1$ state to the insulator are consistent with these
calculations \cite{bodo}. This agreement is still a puzzle at this time,
especially
as the value of the dynamic scaling exponent $z \approx 1$ strongly
suggests that Coulomb interactions are playing an important role.
The evidence for a super-universality of the transitions, however
does have some theoretical support in the form of a set of physically
appealing ``correspondence rules'' \cite{KLZ}. Unfortunately, their
{\em a priori} validity in the critical regions is still unclear \cite{leeandwang}.
In sum, theorists have their work cut out for them!
\section{Concluding Remarks, Other Systems}
Let us briefly recapitulate our main themes. Zero temperature phase transitions
in quantum systems are fundamentally different from finite temperature
transitions in classical systems in that their thermodynamics and dynamics
are inextricably mixed. Nevertheless, by means of the path integral
formulation of quantum mechanics, one can view the statistical mechanics of a
$d$-dimensional $T=0$ quantum system as the statistical mechanics of a
$(d+1)$-dimensional classical system with a fake temperature which
is some measure of zero-point
fluctuations in the quantum system. This allows one to apply ideas and
intuition developed for classical critical phenomena to quantum critical
phenomena. In particular this leads to an understanding of the $T \ne 0$
behavior of the quantum system in terms of finite size scaling
and to the identification of a $T$-dependent length scale, $L_\phi$, that
governs the crossover between quantum and classical fluctuations.
The identification of QPTs in experiments relies upon finding
scaling behavior with relevant parameters. These are the temperature
itself and the frequency, wavelength and amplitude of various probes.
Additional signatures are universal values of certain dimensionless
critical amplitudes such as the special case of
resistivities at critical points in conducting systems in $d=2$
and, more generally, amplitude ratios.
In this Colloquium we have illustrated these ideas in the context of
a single system, the two dimensional electron gas in the quantum Hall
regime. The ideas themselves are much more widely applicable. Interested
readers may wish to delve, for example, into work on the one dimensional
electron gas \cite{luther,emery}, metal insulator
transitions in zero magnetic field (``Anderson-Mott transitions'')
\cite{bk}, superconductor-insulator
transitions
\cite{wallinetalPRB,chaPRB,sit1,sit2,sit3,sit4,sit5,sit6,sit7,sit8,%
sit9,sit10,sit11,sit12,sit13,sit14},
two-dimensional antiferromagnets associated with high temperature
superconductivity
\cite{sachdevIUPAP,2dafm6,2dafm1,2dafm2,2dafm3,2dafm4,2dafm5} and
magnetic transitions in metals \cite{hertz,fm1,fm2,fm3}. This list is by
no means exhaustive and we are confident that it will continue to expand
for some time to come!
\acknowledgements
It is a pleasure to thank R. N. Bhatt, M. P. A. Fisher, E. H. Fradkin,
M. P. Gelfand, S. A. Kivelson, D. C. Tsui and H. P. Wei for numerous
helpful conversations. We are particularly grateful to K. A. Moler,
D. Belitz, S. Nagel, T. Witten, T. Rosenbaum, and U. Zuelicke
for comments on early versions of the manuscript.
DS is supported by the NSF, the work at Indiana
is supported by NSF grants DMR-9416906, DMR-9423088 and DOE grant
DE-FG02-90ER45427, and SLS is supported
by the NSF through DMR-9632690 and by the A. P. Sloan Foundation.
\newpage
\section{Introduction}
There are two schools of thought as to
the origin of the galactic magnetic field.
The first school holds that, after formation
of the galactic disk, there was a very weak seed
field of order $ 10^{ -17} $gauss or so, and that this
field was amplified to its present strength by dynamo action driven
by interstellar turbulence. The description
of how this happens is well known (Ruzmaikin, Shukorov and
Sokoloff 1988), and
is summarized by the mean field dynamo theory.
The second school of thought concerning
the origin of the galactic field is, that it
is primordial. That is, it is assumed that there
was a reasonably strong magnetic field present
in the plasma before it collapsed to form the
galaxy. It is assumed that the field was coherent in
direction and magnitude prior to galactic formation.
The actual origin of the magnetic field is not
specified. However, it could be formed by dynamo
action during the actual formation of the galaxy (Kulsrud et al 1996).
This paper discusses the evolution of the
structure and strength of such a primordial magnetic field
during the life of the galaxy. The origin of the field
prior to formation of the galaxy is not considered.
It was Fermi (1949) who first proposed
that the galactic magnetic field was of primordial
origin. Piddington (1957, 1964, 1967, 1981) in a number of papers
suggested
how this might actually
have happened. However, he supposed the magnetic
field strong enough to influence
the collapse. One of the first
problems with the primordial origin
is the wrap up problem. It was pointed
out by Hoyle and Ireland (1960, 1961) and by $ \hat{O}ke $ et al.
(1964) that the field
would wrap up into a spiral
similar to the spiral arms in only
two or three galactic rotations. It
was supposed that this is the natural
shape of the magnetic field lines. If the
winding up continued for the fifty or so
rotations of
the galaxy, the field lines would
reverse in direction every one hundred
parsecs. This seemed absurd. The various attempts
to get around this problem involved
magnetic fields which were strong enough to
control the flow, and also strong radial outflows.
Several forceful arguments against
the primordial origin were advanced by Parker.
The arguments were that the field would be expelled from the
galaxy, either by ambipolar diffusion (Parker 1968), or by
rapid turbulent diffusion (Parker 1973a, 1973b).
In this paper we reexamine the problem of a primordial field.
We proceed along lines first initiated by
Piddington. Just as in his model for the
primordial field,
we start with a cosmic field with lines
of force threading the galaxy.
We further assume
that, after the collapse to the disk, the lines remain
threaded through the galactic disk.
The lines enter the lower boundary
of the disk in a vertical direction,
extend a short horizontal distance in the disk,
and then leave through the upper boundary.
Thus, each line initially extends a finite
horizontal distance in the disk.
However, in contrast to Piddington's model, we
assume the field is too weak to affect the plasma
motions, especially the rotational motion
about the galactic center. Consequently, it
tends to be wrapped up by the differential rotation of the
galaxy. In addition, we include additional
physics of the interstellar medium in which
the magnetic field evolves.
After toroidal stretching
strengthens the
toroidal field sufficiently,
a strong vertical force is exerted
on the ionized part of the interstellar medium forcing
it through the neutral part. We contend
that this force will not expel the entire lines
of force from the galactic disk,
but the resulting ambipolar velocity
will only tend to shorten the length
which each line spends in the disk as it
threads through it.
In particular, ambipolar diffusion will
decrease the radial component of the horizontal
field and shorten its radial extent in the disk.
This will decrease the toroidal stretching
of the field line, and as a result the magnetic
field strength will approach a
slowly decreasing saturated condition.
Thus, after several gigayears, the lines will end up
extending a longer distance in the azimuthal direction
than they did initially, but a much shorter
distance in the radial direction.
At any given time after the field saturates,
the final strength will be independent
of its initial value. In addition, the field strength
will depend only on the
ambipolar properties of the interstellar medium.
[For earlier work see Howard (1995) and Kulsrud (1986, 1989, 1990).]
As a consequence of ambipolar diffusion plus
stretching, the magnetic field will be almost entirely toroidal,
and it will have the same strength everywhere
at a fixed time. However, its sign will depend on the sign
of the initial radial component of the field at its
initial position. Because of differential rotation
the toroidal field will still vary rapidly in sign over a radial
distance of about one hundred parsecs.
This model for the magnetic field evolution
would seem at first to leave us
with a wrap up problem and produce a field
at variance with the observed field. However,
the initial regions where the radial field is of one
sign are expected to be
of different area than those of the other
sign provided that the initial field is not exactly uniform.
As a consequence, at the present time in
any given region of the galaxy, the toroidal field should
have a larger extent of one sign
than of the other, even, though the sign
varies rapidly. If one now averages over larger
regions than the size of variation, one would see a mean magnetic
field of one sign. This averaging is actually performed
by the finite resolution of the observations of the magnetic field in our
galaxy or in external galaxies. From the observations
one would not be aware of this rapid variation.
If one ignored the spiral arms, the field
would be almost completely azimuthal. However,
it is known that the density compression in the
spiral density wave,
twists the magnetic field to be parallel
to the spiral arm and increases its field strength
(Manchester 1974, Roberts and Yuan 1970).
In observing radio emission from external
galaxies, one tends to see the radiation
mostly from the spiral arm, where cosmic
rays are intense and the magnetic field
is stronger. Because in these regions the
field is aligned along the arms, one would
naturally get the impression that the field
extends along the entire arm. In fact,
on the basis of our model,
one would actually
be seeing short pieces of field lines pieced
together as they cross the spiral arm and
thread through the disk. The magnetic
field would be mainly
azimuthal in between the spiral arms,
and only as it crosses the arm
would it be twisted to align along the arm.
Observationally the magnetic field of this model
would appear the same as that of a large scale
magnetic field.
Similarly, one should see Faraday
rotation which is proportional to the amount
by which the effect of the toroidal field
of one sign exceeds the other sign
in this region. The
amount of rotation produced by any one region would
be related to difference in areas occupied
by the fields of different sign. This in turn,
is given by the amount by which the
initial area at the initial position
where the radial field is of one sign exceeds
the initial area where the radial field is of opposite
sign.
In Parker's argument(Parker 1973a, 1973b) concerning
the expulsion of the primordial field he
introduces the concept of turbulent mixing.
Turbulent mixing correctly describes the rate
at which the
mean field will decrease by mixing. However,
it can not change the number of lines
of force threading the disk, since this
is fixed by their topology. It only
gives the lines a displacement. Since, near the
edge of the disk, the turbulent motions probably
decrease as the sources of turbulence do,
it is not expected that the lines will be
mixed into the halo. Also, because of
their topology the lines are not lost.
Only the length of their extent in the disk
can be altered by turbulence. Hence,
for our model, turbulent diffusion need
not destroy the primordial field as
Parker suggested. Also,
blowing bubbles in the lines by cosmic ray
pressure will always leave
the remainder of the line behind, and the total number of lines
unaltered. Of course, if the lines were entirely horizontal,
ambipolar diffusion could destroy the
field, since the lines may be lifted out of the disk
bodily(Parker 1968).
Therefore, Parker's
contention that the primordial
field has a short life, need not apply,
to our model.
\subsection{Our Model}
The model which we consider is very simple.
We start with a large ball of plasma whose mass is
equal to the galactic mass, but whose
radius is much larger than the current size of our
galaxy (figure 1a). We first assume that the
magnetic field filling this sphere is uniform,
and makes a finite angle $ \alpha $ with the rotation
axis of the galaxy.
Then we allow the ball to collapse to
a sphere the size of the galaxy (figure 1b), and finally to a
disk the thickness of the galactic disk (figure 1c). We
assume the first collapse is radial and uniform,
while the second collapse into the disk is linear
and one dimensional.
During the time when the galaxy contracts uniformly
into the disk
along the $ z $ direction, where $ {\bf \hat{z}} $ is in the
direction of $ {\bf \Omega} $, we ignore any rotation.
Then the resulting magnetic field
configuration is as in figure 1c. The horizontal
component of the magnetic field has been amplified by
the large compressional factor, while the vertical
component is unchanged, so that the resulting
field is nearly parallel to the galactic disk.
At this stage, some lines enter the disk from the top and leave
from the top, e.g. line $ {\it a }$. Some lines enter from
the bottom and leave through the top, e.g. lines
$ {\it b } $ and $ {\it c} $.
Finally, some lines enter from the bottom and leave through
the bottom, e.g. line $ {\it d} $.
It turns out that lines such as $ {\it a } $ and
$ {\it d} $ are eventually expelled from the disk
by ambipolar diffusion,
so that we ignore them. Now,
set the disk into differential rotation at time $ t = 0 $.
(If we include the rotation during collapse
the magnetic field will start to wind up
earlier. However, the result would be the same as though
we were to ignore the rotation during collapse,
and then after the collapse let the disk rotate by an additional
amount equal to the amount by which
the disk rotated during the collapse.
This is true
provided that the initial field is too weak for ambipolar diffusion
to be important.)
\subsection{Results of the Model}
We here summarize the conclusions
that we found from the detailed analysis of our model,
given in the body of this paper.
Initially, after the disk forms, all the lines
have a horizontal component that is larger than
the vertical component by a factor equal
to the radius of the disk divided by its thickness
$ \approx R/D \approx 100 $. This is the case if the initial
angle $ \alpha $ was of order 45 degrees
or at least not near 0 or 90 degrees.
Now, because of neglect of rotation
in the collapse,
the horizontal component of all the field lines is in
a single direction, the $ x $ direction say, so that
\begin{equation}
{\bf B } = B_i {\bf \hat{x}} + B_i(D \tan \alpha /R ) {\bf \hat{z} } =
B_i \cos \theta {\bf {\hat r }} - B_i \sin \theta {\bf {\hat \theta}}
+ B_i (D \tan \alpha /R ) {\bf \hat{z} }.
\label{eq:1}
\end{equation}
After $ t = 0 $ the differential rotation of the
disk stretches the radial magnetic field into the
toroidal direction so that, following a given fluid
element whose initial angle is $ \theta_1 $, one has
\begin{equation}
{\bf B} = -B_i \cos \theta_1 {\bf {\hat r }}
+\left[B_i (r \frac{d \Omega }{d r} t) \cos \theta_1 -
B_i \sin \theta_1 \right]
{\bf {\hat \theta}} + B_i (D \tan \alpha /R ) {\bf \hat{z} }.
\label{eq:2}
\end{equation}
After a few rotations the second component dominates
the first and third components.
It is seen that the total magnetic
field strength grows linearly with time. After the toroidal magnetic
field becomes strong enough,
the magnetic force on the ionized part of the
disk forces it through the neutral component, primarily in the
$ z $ direction.
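To get a feel for how strong the toroidal field could become before ambipolar effects intervene, suppose for illustration a flat rotation curve, so that $ r\, d\Omega/dr \approx -\Omega $. Then, in the absence of ambipolar diffusion, the dominant term in equation (2) gives, up to the order unity factor $ |\cos \theta_1| $,
\[
|B_{\theta}| \approx B_i\, |\Omega| t = 2 \pi B_i\, \frac{t}{T_{\rm rot}} \approx 3 \times 10^{2}\, B_i
\]
after $ t \approx 10^{10} $ years, taking a rotation period $ T_{\rm rot} \approx 2 \times 10^{8} $ years as used below.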
Consider a single line of force.
Let us turn off the differential
rotation for a moment. In this case,
the $ z $ motion steepens the line of force,
but does not change the vertical field component.
This leads to a shortening of the line, both
in the radial direction and in the azimuthal
direction (see figure 1d). Now, let the differential rotation
continue. Because the radial component of
the magnetic field is reduced, the toroidal
component increases more slowly. Eventually, there
comes a time when the radial field is
small enough that the shortening motions
in the azimuthal direction are stronger
than the stretching motion, and the azimuthal component of the magnetic
field actually decreases even in the presence
of differential rotation. From this time on, the magnetic
field strength decreases at a rate such
that the vertical ambipolar velocity is just
enough to move the plasma a distance approximately
equal to the
thickness of the galactic disk, in the time $ t $,
i.e.
\begin{equation}
v_D t = D .
\label{eq:3}
\end{equation}
Now, $ v_D $ is essentially proportional to the
average of the square of the magnetic field strength.
Therefore, for a uniform partially ionized plasma we have
\begin{equation}
v_D \approx \frac{1}{\rho_i \nu } \frac{B^2}{8 \pi D} ,
\label{eq:4}
\end{equation}
where $ D $ is the half thickness of the disk,
$ \rho_i $ is the ion mass density, and
where $ \nu $ is the ion neutral collision rate.
Thus, for asymptotically long times one finds that
\begin{equation}
B \approx D \sqrt{ \nu \rho_i /t} .
\label{eq:5}
\end{equation}
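More explicitly, combining equations (3) and (4) gives
\[
\frac{B^{2}}{8 \pi \rho_i \nu D}\, t \approx D
\qquad {\rm or} \qquad
B \approx \sqrt{8 \pi}\; D \sqrt{\frac{\rho_i \nu}{t}} ,
\]
which reproduces the equation above up to the numerical factor $ \sqrt{8 \pi} $.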
That is, the magnetic field strength
approaches a saturated time behavior
independent of its initial value. The saturated time behavior
only depends on the ambipolar diffusion properties
of the interstellar medium, and on the time $ t $.
However, the time to reach saturation does
depend on the initial value of the magnetic field strength.
The qualitative behavior is shown in figure 2,
where the dependence of $ B $ on time for different
initial values is shown.
For a very weak initial radial component of the field,
saturation is not reached during a Hubble time.
However, for fields substantially larger than the critical initial
field strength for reaching saturation, the final saturated field
is independent of
the initial radial field.
In the interstellar medium,
ambipolar diffusion is not well
modeled by diffusion through a uniform
plasma. In fact, the bulk of the mass of the
interstellar medium is in dark clouds, in which
the degree of ionization is very low. Also,
the outward magnetic force is concentrated
in the volume of the clouds.
As a result the ambipolar diffusion velocity
is more accurately given by the formula
\begin{equation}
v_D = \frac{B^2 (1 + \beta/\alpha )}
{8 \pi \rho_i \nu f D} ,
\label{eq:6}
\end{equation}
where $ f $ is the filling factor for the clouds,
$ \rho_i $ is the effective ion density in the clouds,
$ \nu $ is the effective ion-neutral collision
rate in the clouds, and $ \beta/\alpha $ is the ratio of cosmic ray
pressure to magnetic pressure.
The model for the interstellar medium, which
we employ to study the magnetic field evolution,
is sketched in figure 3.
The magnetic field is anchored in the clouds
whose gravitational mass holds the magnetic
field in the galactic disk.
The magnetic field lines bow up in between
the clouds pulled outward by magnetic pressure and by
cosmic ray pressure. The outward force
is balanced by ion--neutral friction in
the clouds themselves. A derivation of
equation 6 based on this model
is given in section 5.
Taking plausible values for the properties of
the clouds, one finds that the critical initial
value of the magnetic field, for it to reach saturation
in a Hubble time, is about $ 10^{-8} $ gauss.
Further, the saturated value of the field at
$ t = 10^{10 } $ years is estimated to be
2 microgauss. At this field strength
the time to diffuse across the disk $ D/v_d \approx 3 \times 10^{ 9} $
years, which is of order of the lifetime of the disk.
Finally, let us consider the structure of the saturated
field. The time evolution of the magnetic field
refers to the field in the rotating frame.
Thus, if we wish the toroidal field at time $ t $ and at $ \theta = 0 $
we need to know the initial radial component of the field at
$ \theta_1 = - \Omega(r) t $. Thus, assuming that the field
reaches a saturation value of $ B_S $,
one has
\begin{equation}
B_{\theta } = \pm B_S ,
\label{eq:7}
\end{equation}
where the sign is plus or minus according to the sign of
\begin{equation}
[\cos(-\Omega(r) t )] .
\label{eq:8}
\end{equation}
Since $ \Omega $ depends on $ r $, we see that $ B_{\theta} $
changes sign over a distance $ \Delta r $ such that
$ (\Delta \Omega) t = ( \Delta r d \Omega/dr) t \approx \pi $, i.e.
\begin{equation}
\frac{\Delta r}{r} = \frac{\pi }{r |d \Omega/dr| t} \approx
\frac{\pi}{|\Omega | t} .
\label{eq:9}
\end{equation}
Taking the rotation period of the galaxy to be
200 million years, one finds for $ t = 10^{ 10} $ years,
$ \Delta r \approx 100 $ parsecs.
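To spell out the arithmetic: with $ T_{\rm rot} \approx 2 \times 10^{8} $ years and $ t = 10^{10} $ years,
\[
|\Omega| t = 2 \pi\, \frac{t}{T_{\rm rot}} \approx 3 \times 10^{2} ,
\qquad
\frac{\Delta r}{r} \approx \frac{\pi}{|\Omega| t} \approx 10^{-2} ,
\]
so that at a galactocentric radius of order 10 kpc (a representative value) the reversal scale is indeed of order 100 parsecs.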
Therefore, because the field saturates
to a constant field strength, it is predicted that a
uniform initial field that has a value greater
than $ 10^{-8} $ gauss immediately
after the collapse to the disk leads to a
toroidal field that varies with $ r $, at fixed $ \theta $,
as a square wave with a reversal
in sign every $ 100 $ parsecs or so. Such a field structure
would produce no
net Faraday rotation and would contradict observations.
However, this result can be traced to
our assumption that the initial magnetic field in
figure 1 was entirely uniform. Suppose it were not uniform.
Then, after collapse, the radial field is not
purely sinusoidal (see figure 4).
In fact, one expects a behavior more like
\begin{equation}
B_r(t=0) = B_i[\cos \theta + \epsilon \cos 2 \theta ] ,
\label{eq:10}
\end{equation}
where $ \epsilon $ is an index of the nonuniformity.
As an example, for moderately small $ \epsilon $,
$ (\cos \theta + \epsilon \cos 2 \theta ) $ is positive
for
\begin{equation}
- \pi/2 + \epsilon < \theta < \pi/2 - \epsilon ,
\label{eq:11}
\end{equation}
and negative for
\begin{equation}
\pi/2 - \epsilon < \theta < 3 \pi/ 2 + \epsilon .
\label{eq:12}
\end{equation}
Therefore the radial extent in which $ B_{\theta}(t) $ is positive
is smaller than the radial extent in which it is negative
by the factor
\begin{equation}
\frac{\pi - 2 \epsilon }{\pi + 2 \epsilon } .
\label{eq:13}
\end{equation}
The resulting field is positive over a smaller range of $ r $
than the range in which it is negative. Consequently,
in such a magnetic field there would be a net Faraday
rotation of polarized radio sources.
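As a purely illustrative example (the value of $ \epsilon $ used here is
hypothetical), for $ \epsilon = 0.1 $ the two signs occupy fractions
\[
\frac{\pi - 2\epsilon}{2\pi} \approx 0.47 ,
\qquad
\frac{\pi + 2\epsilon}{2\pi} \approx 0.53
\]
of the circumference, so the fractional imbalance between the two signs,
and hence the residual mean field, is of order
$ 2\epsilon/\pi \approx 6 $ per cent of the saturated field strength.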
Note that it is easily possible that the regions in which
the radial field is initially weaker
can end up as regions that dominate in flux
over those regions that come from regions
in which the radial field was stronger! Now, if one averages this
field over regions much larger than $ 100 $ parsecs,
then the resulting mean field is smooth and
axisymmetric. This result is contrary
to the generally held belief that
a primordial field should lead to a field with bisymmetric
symmetry (Sofue et al 1986). This result alone shows that a more
careful treatment of the evolution of the primordial field,
such as that discussed above, leads to quite different
results from those commonly assumed. (However, as mentioned
above, including the effect of the
spiral density wave will produce magnetic field lines
in the spiral arms that appear to have a bisymmetric
shape.)
We emphasize that, although the actual magnetic field
derived in our model is a tightly
wound spiral, the magnetic field averaged over a
moderate size scale appears to be smooth and axisymmetric.
A second important conclusion of our model
is that because of their topology, the field lines
cannot be expelled from the disk by ambipolar diffusion.
The same line of force
diffuses downward in the lower part of the
disk, and upward in the upper part of the disk.
The line must thus continue to be threaded through the disk.
(See lines $ b $ and $ c $ in figure 1c.)
A third conclusion that can be drawn from
our model
is that any single line after saturation
has only a finite extent
in the disk. For example, if the initial field is a few microgauss,
then it turns out that the line only extends a radian
or so before leaving the disk. This finite extent of
the magnetic lines would make the escape
of cosmic rays from the galaxy possible without
the necessity of
any disconnection of the magnetic field lines in the
interstellar medium.
A final conclusion that should be noted is that
the saturated magnetic field of our model has
a mean field strength smaller than the rms field strength.
This has the consequence that different
methods for measuring the magnitude of the magnetic field
strength should lead to different results.
The measurement by nonthermal
radio emission of the cosmic ray electrons
measures the rms field strength while
Faraday rotation measures the mean
strength.
The outline of the paper is as follows:
In section 2 an analytic model
is developed to demonstrate the properties
described in this introduction.
In section 3 a more precise one dimensional
numerical simulation is carried out that confirms the evolution
of the field in the $ z $ direction given in section 2.
In section 4 it is shown that the three
dimensional equations for the evolution of the field
can be reduced to two independent variables. These are
$ z $ and an angular coordinate $ u= \theta - \Omega(r) t $
that is constant along the spirals generated by the
differential rotation of the galaxy.
A numerical simulation of the resulting
differential equations is carried out.
It is shown that after a long time the resulting
magnetic field does evolve locally
in essentially the same way as is given in sections 2 and 3.
In addition, it varies
in radius as a square wave with uneven lobes.
In section 5 the astrophysics of the
interstellar medium clouds is discussed, and
a derivation is given of expression 6 for the
effective mean ambipolar motion of the field in the disk.
In the concluding section 6, the implications of this
model for the evolution of a magnetic field of primordial origin
are given. The bearing of these implications
on the various
criticisms of the primordial field hypothesis are discussed.
\section{Local Theory: Analytic }
In this and the next section we wish to
consider the local behavior of
the magnetic field following a fluid element that
moves with the galactic rotation. If the ambipolar diffusion
were strictly in the $ z $ direction, the evolution
of the field in a given fluid element would be
independent of its behavior
in other fluid elements at different $ r $ or $ \theta $.
It would only be affected by the differential rotation
of the galaxy, and by ambipolar velocity in the $ z $ direction.
Thus, in particular,
we could imagine the magnetic field at different
values of $ \theta $, behaving in an identical manner.
In other words, we could replace the general
problem by an axisymmetric one.
Let
\begin{equation}
{\bf B } = B_r(r,z){\bf \hat{ r}} +
B_{\theta} (r,z){\bf \hat{ \theta }} +
B_z(r,z){\bf \hat{z}} .
\label{eq:14}
\end{equation}
Let us neglect the radial velocity, and
let the ambipolar velocity be only in the $ {\bf {\hat z}} $
direction and proportional to the $ z $ derivative of
the magnetic field strength squared. Also, let us neglect any turbulent
velocity. The only velocities which we consider
are the differential rotation of the galaxy, $ \Omega(r) $,
and the ambipolar diffusion velocity of the ions.
Then
\begin{equation}
{\bf v} = r \Omega(r) {\bf \hat{\theta }} + v_z {\bf \hat{z}} ,
\label{eq:15}
\end{equation}
where
\begin{equation}
v_z = -K \frac{\partial B^2/8 \pi }{\partial z} ,
\label{eq:15a}
\end{equation}
where
\begin{equation}
K= \frac{ (1+ \beta/\alpha)}{\rho_i f \nu } ,
\label{eq:15b}
\end{equation}
where as in the introduction $ \beta/\alpha $ is the ratio
of the cosmic ray pressure to the magnetic pressure,
$ \rho_i $ is the ion density,
and, $ \nu $ is the ion-neutral collision rate in the
clouds.
The equation for the evolution of the magnetic field is
\begin{equation}
\frac{\partial {\bf B }}{\partial t} = \nabla \times {\bf (v \times B }) ,
\label{eq:16}
\end{equation}
or, in components,
\begin{equation}
\frac{\partial B_r}{\partial t} = -\frac{\partial }{\partial z} (v_z B_r ) ,
\label{eq:17}
\end{equation}
\begin{equation} \frac{\partial B_{\theta}}{\partial t} =
-\frac{\partial }{\partial z}( v_z B_{\theta})
+ r\frac{d \Omega }{dr } B_r ,
\label{eq:18}
\end{equation}
\begin{equation}
\frac{\partial B_z}{\partial t} = \frac{\partial v_z }{\partial r} B_r .
\label{eq:19}
\end{equation}
We integrate these equations numerically in section 3.
They are too difficult to handle analytically. To treat
them approximately, we make the assumption that for all time $ B_r $ and
$ B_{\theta} $ vary parabolically in $ z $ i.e.
\begin{eqnarray}
B_r =& B_r^0 (1- z^2/D^2) , \label{eq:21} \\
B_{\theta} =& B_{\theta}^0 (1-z^2/D^2) ,
\label{eq:22}
\end{eqnarray}
where $ B^0_r $ and $ B^0_{\theta} $ are functions of time.
We apply equation~\ref{eq:17} and equation~\ref{eq:18}
at $ z = 0 $, and $ r = r_0 $, and solve for $ B^0_r,$ and $ B^0_{\theta} $
as functions of $ t $.
Thus, making use of equations 16, 18 and 19, we find
\begin{equation}
\frac{\partial B_r}{\partial t} = - \frac{v_D}{D} B_r ,
\label{eq:23}
\end{equation}
\begin{equation}
\frac{\partial B_{\theta} }{\partial t} = - \frac{v_D}{D} B_{\theta}
+ (r \frac{d \Omega }{d r})_{r_0} B_r ,
\label{eq:24}
\end{equation}
\begin{equation}
v_D = \frac{ K}{2 \pi D} (B_r^2+ B_{\theta}^2) ,
\label{eq:25}
\end{equation}
where everything is evaluated at $ z=0 , r= r_0$ so we drop
the superscripts on $ B_r $ and $ B_{\theta} $.
Since for galactic rotation $ r \Omega $ is essentially constant,
we have $ (r d \Omega / d r )_{r_0} = - \Omega(r_0) \equiv -\Omega_0 $.
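(This is just the flat-rotation-curve relation: since $ r\Omega $ is
essentially constant, $ \Omega \propto 1/r $, so that
\[
r\frac{d\Omega}{dr} = -\,\Omega
\]
at every radius.)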
Initially $ B_r $ and $ B_{\theta} $
are of the same order of magnitude. Also, we expect $ B_{\theta} $
to grow, by stretching, to a value much greater than its
initial value, before ambipolar diffusion becomes
important. Thus, for simplicity, we make the choice of initial
conditions $ B_r = B_1 , B_{\theta}= 0 $ where $ B_1 $ is the
initial value for the radial component of the field.
According to equation~\ref{eq:2}, $ B_1 = B_i \cos \theta $
for a fluid element starting at $ \theta $.
Now, if initially $ v_D/D \ll \Omega $, then
according to equation~\ref{eq:24}, we
expect $ B_{\theta} $ at first to grow linearly in time.
Then, by equation~\ref{eq:25}, $ v_D $ increases quadratically in time
till $ v_D/D \approx \Omega $, and the ambipolar velocity
starts to affect the evolution
of $ B_{\theta} $ and $ B_r $. If the initial field is small,
then the contribution of $ B_r $ to the ambipolar diffusion velocity
is never important. In this case we can write
\begin{equation}
v_D = v_{D1} \frac{B_{\theta}^2 }{B_1^2} ,
\label{eq:26}
\end{equation}
where $ v_{D1}=K B^2_1/(2 \pi D) $.
The solution to the differential equation~\ref{eq:23}
and equation~\ref{eq:24}
with $ v_D $ given by equation~\ref{eq:26} is
\begin{equation}
B_r = \frac{B_1}{(1 + \frac{2 v_{D1} }{3 D} \Omega^2 t^3)^{1/2}} ,
\label{eq:27}
\end{equation}
\begin{equation}
B_{\theta} = \frac{B_1 \Omega t}
{(1 + \frac{2 v_{D1} }{3 D} \Omega^2 t^3)^{1/2}} ,
\label{eq:28}
\end{equation}
and
\begin{equation}
v_D= v_{D1}\frac{\Omega^2 t^2}{1 + \frac{2 v_{D1} }
{3 D} \Omega^2 t^3} .
\label{eq:29}
\end{equation}
(The above solution, equations 28 and 29, of equations 24, 25, and 27,
is derived explicitly in Appendix A;
it can also be checked by direct substitution.)
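For the radial component the substitution check (ours, not the derivation
of Appendix A) is immediate: differentiating equation~\ref{eq:27} and using
equations~\ref{eq:26} and \ref{eq:28},
\[
\frac{dB_r}{dt}
= -\,\frac{v_{D1}\,\Omega^2 t^2}{D}\,
\frac{B_1}{\left(1+\frac{2 v_{D1}}{3D}\Omega^2 t^3\right)^{3/2}}
= -\,\frac{v_{D1}}{D}\,\frac{B_\theta^2}{B_1^2}\,B_r
= -\,\frac{v_D}{D}\,B_r ,
\]
in agreement with equation~\ref{eq:23}; the toroidal component can be
checked in the same way.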
From these equations we see that for
\begin{equation}
t \ll \left( \frac{3 D}{2 v_{D1} \Omega^2} \right)^{1/3} ,
\label{eq:30}
\end{equation}
$ B_r $ is unchanged, $ B_{\theta} $ increases as $ \Omega t $,
and $ \int v_D dt \ll D $, so that up to this time ambipolar diffusion
carries the plasma only a small fraction of the disk thickness.
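Explicitly, at early times equation~\ref{eq:29} gives
$ v_D \approx v_{D1}\Omega^2 t^2 $, so
\[
\int_0^t v_D\, dt' \approx \frac{v_{D1}\,\Omega^2 t^3}{3} \ll \frac{D}{2}
\]
whenever the inequality above holds, confirming that the ambipolar
displacement is then only a small fraction of the disk half-thickness.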
On the other hand, if
\begin{equation}
t \gg \left( \frac{ 3D}{2 v_{D1} \Omega^2} \right)^{1/3} ,
\label{eq:31}
\end{equation}
then
\begin{equation}
B_{\theta} \approx B_1 \sqrt{ \frac{3 D}{2 v_{D1} t }}
= D \sqrt{ \frac{6 \pi }{K t} }
= D \sqrt{ \frac{6 \pi f \rho_i \nu}{(1 + \beta /\alpha ) t} } ,
\label{eq:32}
\end{equation}
and $ B_{\theta} $ is independent of both the initial
value of $ B_r $ and of $ \Omega $. It depends only
on the ambipolar diffusion properties of the
interstellar medium.
For all $ t $, we have
\begin{equation}
\frac{B_r}{B_{\theta} } = \frac{1}{\Omega t} ,
\label{eq:33}
\end{equation}
so for $ t = 10^{ 10 } $ years, $ B_r = B_{\theta}/300 $,
and the field becomes strongly toroidal.
The question arises as to how strong
$ B_1 $ must be in order that the saturated solution,
equation~\ref{eq:32}, is reached. Taking $ t = t_H $, the
Hubble time, and making use of the expression for $ v_{D1} $,
one finds from equation~\ref{eq:31} that for saturation
to be reached we must have
\begin{equation}
B_1 > \frac{D}{t_H^{3/2}}\sqrt{ \frac{4 \pi f \rho_i \nu}
{3 ( 1+ \beta/\alpha ) }} .
\label{eq:34}
\end{equation}
Making use of the numbers derived in
section 5 one finds that if $ B_{\theta} $ is saturated
at $ t = t_H $, then
\begin{equation}
B_{\theta} = \frac{D}{100 \mbox{pc} }
\left( \frac{10^{ 10} \mbox{years} }{t_H} \right)^{1/2}
\left( \frac{6 \times 10^{ -4} }{n_i n_0} \right) ^{1/2}
1.9 \times 10^{ -6} \mbox{gauss} ,
\label{eq:36}
\end{equation}
where $ n_i $ is the ion density in the clouds, assumed to
be ionized carbon, and $ n_0 $ is the mean hydrogen
density in the interstellar medium. The densities
are in cgs units.
For saturation, the critical value for $ B_1 $ is
\begin{equation}
B_{1crit} = \frac{D}{100 \mbox{pc} }
\left( \frac{10^{ 10} \mbox{years} }{t_H} \right)^{3/2}
\left( \frac{6 \times 10^{ -4} }{n_i n_0} \right) ^{1/2}
4 \times 10^{ -9} \mbox{gauss}
\label{eq:37}
\end{equation}
where the densities are in cgs units.
Hence, for the above properties of the clouds
and for an initial radial field greater than the critical
value, the magnetic field at $ t = t_H $ saturates
at about the presently observed value. Such
a field arises from compression if, when the galaxy was a sphere
of radius $ 10 $ kiloparsecs, the magnetic field strength
was greater than $ 10^{ -10} $ gauss. If before this
the virialized radius of the protosphere was $ 100 $
kiloparsecs, then in this sphere the comoving initial value for the
cosmic field strength had to be greater
than $ 10^{ -12 } $ gauss. (That is, if the cosmic field filled
all space, then it had to be strong enough that the present
value of the magnetic field in intergalactic space
would now be $ 10^{ -12} $ gauss.) There are good reasons to
believe that a magnetic field stronger than this minimum
value
could have been generated
by turbulence in the protogalaxy during
its collapse to this virialized radius (Kulsrud et al. 1996).
However, this field would
be local to the galaxy and not fill all space.
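(The compression factors implied here follow from flux freezing: for a
roughly spherical contraction $ B \propto R^{-2} $, so collapse from
$ 100 $ kpc to $ 10 $ kpc amplifies the field by
\[
\left(\frac{100\ \mbox{kpc}}{10\ \mbox{kpc}}\right)^{2} = 100 ,
\]
raising $ 10^{-12} $ gauss to $ 10^{-10} $ gauss, and the further flattening
into the disk supplies a comparable factor to reach the critical value of
about $ 10^{-8} $ gauss.)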
Finally, one expects that $ B_z \approx B_1/100 $,
from the initial compression into the disk. Taking $ B_z $
as unaffected by the differential stretching
and by the ambipolar diffusion velocity, one
can derive an expression for the length of a line of force:
\begin{equation}
\frac{r d \theta }{d z } = \frac{B_{\theta} }{B_z} =
\frac{100 \Omega t}
{( 1 + \frac{2 v_{D1} }{3 D} \Omega^2 t^3 )^{1/2} }
\approx 100 \sqrt{ \frac{3 D}{2 v_{D1} t}} ,
\label{eq:38}
\end{equation}
or
\begin{equation}
r \Delta \theta =
\left( \frac{ 2.5 \times 10^{-6} \mbox{gauss} }{B_1} \right) 10
\mbox{kpc} .
\label{eq:39}
\end{equation}
Thus, for example if $ B_1 = 4.5 \times 10^{ -7} $ gauss, then
a line of force passing through the solar position
in the galaxy, would stretch
once around the galaxy. Stronger initial fields lead to
shorter lines of force. If the initial field is stronger
than 2 microgauss, then it is possible that cosmic rays
can escape along the lines of force into the halo during
their average life time.
\bigskip
\section{One Dimensional Numerical Simulation}
It is clear that any gradient of the magnetic
field strength will lead to ambipolar diffusion.
However, gradients in the angular direction
are weak, so that we may neglect ambipolar diffusion in the
angular direction. Similarly, the gradients in the radial
direction are weak at first, although as pointed out in
the introduction, the magnetic field ends up reversing rapidly
in the toroidal direction, so eventually
radial ambipolar diffusion becomes as important as vertical
ambipolar diffusion.
In this section, we restrict ourselves
to gradients only in the $ {\bf \hat{z}} $ direction.
In this case, the axisymmetric approximation is
valid, and the relevant equations are equation~\ref{eq:17}
and equation~\ref{eq:18}, where
$ v_z $ is given by equation~\ref{eq:15a},
\begin{equation}
v_z = - K \frac{\partial }{\partial z}
\frac{B^2}{8 \pi} .
\label{eq:40}
\end{equation}
In the previous section, these equations were
reduced to zero dimensions by the {\it ansatz}
that $ B_r $ and $ B_{\theta} $ were parabolic in $ z $ according to
equation~\ref{eq:21} and equation~\ref{eq:22},
and the basic equations were applied only at
$ z = 0 $. In the present section we treat these equations
numerically for all $ z $ with $ |z| < D $,
and drop the parabolic assumption.
We treat the disk as uniform so that $ \rho_i $ and $ \nu $
are taken as constants.
We need boundary conditions at $ z = \pm D $.
We expect $ B $ to be quite small outside the disk,
$ |z| > D $. (This is because we suppose that
neutrals are absent in the halo
and $ \nu = 0 $, so that the flow velocity $ v_z $ becomes very
large. Since flux is conserved, $ v B $ must be a constant
in a steady state and
the magnetic field must
be very small.) In order to match
smoothly to the outer region, we first assume that $ \rho_i $
is a constant for all $ z $ and is very small.
In addition, we assume that $ \nu $ is constant for
$ |z| < D $, and decreases rapidly to zero in
a narrow region, $ D < |z| < D + \Delta $. Then because
$ \nu \approx 0 $ in the halo region, $ |z| > D $,
$ v_z $ becomes so large that inertia is important.
The equation for $ v_z $ should read
\begin{equation}
\rho_i \left( \frac{\partial v_z }{\partial t}
+ v_z \frac{\partial v_z }{\partial z}\right) =
- \frac{\partial B^2/ 8 \pi }{\partial z}
- \rho_i \nu' v_z ,
\label{eq:41}
\end{equation}
where $ \nu' $ is an effective collision rate defined so that
$ \rho_i \nu' = 1/K = \rho_i f \nu/(1 + \beta / \alpha ) $.
Because the time evolution is over the Hubble time $ t_H $,
$ \partial v_z / \partial t \ll v_z \partial v_z / \partial z $,
i.e.
$ v_z /D$ is very large compared to $ 1/t_H $.
Thus, we drop the partial time derivative.
The equations for $ B_r $ and $ B_{\theta} $ are the same as
in the disk. However, because $ v_z $ is large
the divergence terms $ \frac{\partial }{\partial z }(v_z B_r) $
and $ \frac{\partial }{\partial z }(v_z B_\theta ) $ are much
larger than the other terms in the region
near $ |z| = D $, and in the halo. Thus, we have
\begin{equation}
\frac{\partial }{\partial z }(v_z B) = 0 ,
\label{eq:42}
\end{equation}
where $ B = \sqrt{ B^2_r + B_{\theta}^2 } $ is the magnitude
of the magnetic field. This means that
\begin{equation}
v B = \Phi = \mbox{const} ,
\label{eq:43}
\end{equation}
where $ \Phi $ is the rate of flow of magnetic flux,
and it is a constant in these regions. (We drop the
subscript on $ v $)
Dividing equation 41 by $ \rho_i $ and dropping
the partial time derivative, we get
\begin{equation}
\frac{\partial }{\partial z} \left( v^2 + \frac{B^2}{4 \pi \rho_i} \right)
= - 2 \nu' v .
\label{eq:44}
\end{equation}
Combining this equation with equation~\ref{eq:43} we have
\begin{equation}
\frac{\partial }{\partial z}
\left( v^2 + \frac{\Phi^2}{ 4 \pi \rho_i v^2} \right)
= - 2 \nu' v ,
\label{eq:45}
\end{equation}
or
\begin{equation}
\frac{d \left( v^2 + \Phi^2 / 4 \pi \rho_i v^2 \right) }{v}
= - 2 \nu' d z ,
\label{eq:46}
\end{equation}
or
\begin{equation}
- \int \left( \frac{\Phi^2}{4 \pi \rho_i v^4 } - 1 \right) d v
= - \int \nu' d z .
\label{eq:47}
\end{equation}
For $ z $ larger than $ D + \Delta $, the
right hand side becomes a constant, so that
$ v^4 \rightarrow \Phi^2/4 \pi \rho_i $.
As $ z $ decreases below $ D $, the right hand side becomes linear in
$ z $, and we find that for $ z $ far enough into the disk,
the inertial term becomes negligible.
Hence,
\begin{equation}
3 v^3 \approx \frac{\Phi^2}{4 \pi \rho_i \nu' (D - z)} ,
\label{eq:48}
\end{equation}
and $ v $ approaches a small value. This result
breaks down when $ v $ is small enough that
the other terms in equation~\ref{eq:17} and equation~\ref{eq:18}
become important.
Thus, the connection of the main part of the disk
to the halo is given by equation~\ref{eq:47}
\begin{equation}
\int^{v_c}_v \left( \frac{\Phi^2}{4 \pi \rho_i v^4} - 1 \right) d v
= - \int^z_{z_c} \nu' d z ,
\label{eq:49}
\end{equation}
where $ v_c = (\Phi^2/ 4 \pi \rho_i)^{1/4} $. Transforming
this to an equation for $ B $ we have
\begin{equation}
\int^B_{B_c} \left( \frac{B^4}{4 \pi \rho_i \Phi^2 } - 1 \right)
\frac{\Phi d B}{B^2} = - \int^z_{z_c} \nu' d z ,
\label{eq:50}
\end{equation}
where $ B_c = \Phi/v_c = (4 \pi \rho_i \Phi^2 )^{1/4} $, and where
$ z_c $ is the value of $ z $ where $ B = B_c $.
Let us write $ \int^{\infty }_D \nu' dz = \nu'_0 \Delta' $, where
on the right hand side $ \nu'_0 $ denotes the constant
value of $ \nu' $ in the disk. Then for
$ z $ far enough into the disk, equation~\ref{eq:50} reduces
to
\begin{equation}
\frac{B^3}{3 ( 4 \pi \rho_i \Phi)} \approx (D - \epsilon - z) \nu'_0 ,
\label{eq:51}
\end{equation}
where
\begin{equation}
\epsilon = \Delta' + 2 v_c/ 3 \nu'_0 .
\label{eq:52}
\end{equation}
Now, $ \Delta' $ is small by assumption. If we estimate
$ \Phi $ as $ B_0 v_{D \mbox{eff}} $, where $ v_{D \mbox{eff}} $
is the order of the ambipolar diffusion velocity at the
center of the the disk, then $ v_c \approx (B_0 v_{D \mbox{eff}} /
\sqrt{ 4 \pi \rho_i } )^{1/2} \ll B_0/\sqrt{ 4 \pi \rho_i } $,
so that $ \epsilon $ is much smaller than the distance an
alfven wave can propagate in the ion-neutral collision time.
This is clearly a microscopic distance compared to the
thickness of the disk, so that we may neglect it.
In summary, equation~\ref{eq:50}
implies that the inner solution
essentially vanishes at $ z \approx \pm D $, so that we may take our boundary
condition to be $ B_r = B_{\theta} = 0 $ at $ z = D $.
Given this solution the above analysis shows how to smoothly continue
it into the halo.
Equation~\ref{eq:17} and equation~\ref{eq:18}
can be made dimensionless by a proper transformation.
We choose a transformation that is consistent
with that employed in the next section.
Define a unit of time $ t_0 $ by
\begin{equation}
\Omega_0 t_0 = R/D ,
\end{equation}
where $ R $ is the radius of the sun's galactic orbit,
and $ \Omega_0 = \Omega(R) $ is its angular velocity.
Next, choose a unit of magnetic field $ B_0 $ to satisfy
\begin{equation}
\frac{K B_0^2}{4 \pi D} = \frac{D}{t_0} .
\label{eq:54}
\end{equation}
For such a field the ambipolar diffusion velocity is
such that the field lines cross the disk in a time $ t_0 $.
Now, let
\begin{eqnarray}
t &= & t_0 t' , \\ \nonumber
B_r &= & (D/R) B_0 B'_r ,\\ \nonumber
B_{\theta } &= & B_0 B'_{\theta} ,\\ \nonumber
z &=& D z' .
\end{eqnarray}
For a cloud ion density of $6 \times 10^{-4}/ \mbox{cm}^3 $,
and a mean interstellar medium hydrogen density of $ n_0 = 1.0/ \mbox{cm}^3 $
(so that the cloud density is $ n_c = 10/\mbox{cm}^3 $),
$ B_0 = 2.85 \times 10^{ -6} \mbox{gauss} $ (see section 5).
Also, $ D = 100 $ parsecs, and
$ t_0 = 3 \times 10^{ 9} $ years. The Hubble time in these dimensionless
units is about 3.
The details of the numerical simulation are
given in another paper (Howard, 1996). The dimensionless
equations to be solved are
\begin{equation}
\frac{\partial B'_r}{\partial t'} = \frac{1}{2} \frac{\partial }{\partial z' }
(\frac{\partial B'^2}{\partial z'} B'_r) ,
\label{eq:55}
\end{equation}
\begin{equation}
\frac{\partial B'_{\theta} }{\partial t'} = \frac{1}{2 } \frac{\partial }{\partial z' }
(\frac{\partial B'^2}{\partial z'} B'_{\theta} ) - B'_r .
\label{eq:56}
\end{equation}
Our dimensionless boundary condition is $ B'_r = B'_{\theta} = 0 $
at $ z = \pm 1 $.
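As an illustration of how such an integration can be set up (this is a
minimal sketch of our own, using a simple explicit finite-difference scheme
with illustrative grid and time-step choices; it is not the scheme of
Howard 1996), equation~\ref{eq:55} and equation~\ref{eq:56} with these
boundary conditions can be advanced as follows:
\begin{verbatim}
# Minimal illustrative sketch (not the code of Howard 1996):
# explicit finite differences for the dimensionless equations
#   dB'_r/dt'  = (1/2) d/dz'( (dB'^2/dz') B'_r )
#   dB'_th/dt' = (1/2) d/dz'( (dB'^2/dz') B'_th ) - B'_r
# with B'_r = B'_th = 0 at z' = +-1.
import numpy as np

nz = 101
dz = 2.0/(nz - 1)
dt = 5.0e-5
z  = np.linspace(-1.0, 1.0, nz)
Br = 0.01*(1.0 - z**2)          # parabolic initial profile, B_i = 0.01
Bt = np.zeros(nz)

def step(Br, Bt):
    B2    = Br**2 + Bt**2
    dB2   = np.gradient(B2, dz)        # dB'^2/dz'
    fluxr = np.gradient(dB2*Br, dz)    # d/dz'((dB'^2/dz') B'_r)
    fluxt = np.gradient(dB2*Bt, dz)
    Brn = Br + dt*0.5*fluxr
    Btn = Bt + dt*(0.5*fluxt - Br)
    Brn[0] = Brn[-1] = 0.0             # boundary condition at z' = +-1
    Btn[0] = Btn[-1] = 0.0
    return Brn, Btn

for n in range(int(3.0/dt)):           # integrate to t' = 3
    Br, Bt = step(Br, Bt)
print(Bt[nz//2])                       # midplane toroidal field at t' = 3
\end{verbatim}
A scheme of this kind reproduces the qualitative behavior described below:
growth of the toroidal field by shear, followed by saturation and slow decay.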
The results of the integration of these equations are shown
in figures 5 and 6. The initial profiles of $ B_r $ and $ B_{\theta} $
are parabolic.
In figure 5
the initial value of $ B'_r $ is $ B_i = 0.01 $, so that $ B_r =
2.85 \times 10^{ -8 } $ gauss. It is seen that the qualitative
behavior is the same as that described in sections 1 and 2. The
field at first grows and then relaxes back to a decaying solution.
Figure 6 gives the time evolution of $ B'_{\theta} $
at $ z= 0 $ for the set of initial conditions, $ B_r = B_i \cos \theta $,
where $ \theta $ runs from zero to one hundred eighty degrees
in increments of fifteen degrees.
It is seen that the curves all tend to the
same asymptotic behavior and are similar to those
in figure 2. As in figure 2, the field from weaker initial values
of $ B_r $ takes longer times to reach its
peak value and the saturation curve. These results
are similar to those which are
represented by equations 28 and 29.
It is seen in figure 5 that, except near the edge, the profile
remains similar to a parabola. Near the edge the cube root
behavior of equation~\ref{eq:51} is also evident. This cube root
behavior can be understood since the flux
$ v_z B $ is roughly constant: with $ B' \propto (1-z')^{1/3} $,
$ ( \partial B'^2 / \partial z') B' \propto
(1 - z')^{-1/3} (1-z')^{1/3} = (1-z')^0 ,$ a constant.
The most important
feature in figure 5 and 6, is the saturation of
$ B $ to the envelope curve.
This saturation occurs if the initial value of $ B'_r $ is
larger than 0.01. Thus, even
in our more precise calculations, information on
the initial B is lost, and the final value of $ B $
at fixed time depends only on the initial sign of $ B_r $.
\section{Two Dimensional Analysis}
In the last section we have treated the
evolution of the magnetic field under the
influence of differential rotation and ambipolar diffusion
in a one dimensional approximation. Only
the $ z $ component of the ambipolar diffusion was kept.
In this approximation
the magnetic field in each column of fluid at $ r, \theta $
evolves independently of any other $ r , \theta $
column of fluid.
At early times, because the
galactic disk is thin compared to its radius,
this is a good approximation, since the
horizontal gradients of $ B^2 $ are small
compared to the vertical gradients.
However, because of the differential
rotation of the galaxy, fluid elements
at different radii, that were initially very far from each other
are brought much closer together, and the
horizontal gradients are increased until
they become comparable to the vertical gradients.
For example, two fluid elements that
were initially on opposite sides of the galaxy, and
at a difference in radius of one hundred
parsecs, will be brought to a position on
the same radius after about fifty rotations.
Since the evolution of the field in these
two fluid elements is very different, we expect
the gradient of $ B^2 $ in the radial
direction to become as large as that in the
vertical direction. As a consequence, ambipolar drift
velocities in the horizontal direction can be
expected to be as large as those in the
vertical direction. On the other hand, two
fluid elements at the same radius, but initially
far apart, remain far apart. The
ambipolar motions in the $ \theta $ direction
should remain small.
Thus, we expect that, at first, only ambipolar
$ z $ motions are important for the evolution of the magnetic field,
but that eventually the radial ambipolar motions also
will become important, although not the angular ambipolar motions.
That is, we expect the problem to become
two dimensional.
In order to properly demonstrate
this evolution, we introduce a new independent variable
\begin{equation}
u \equiv \theta - \Omega(r) t .
\label{ueq}
\end{equation}
If ambipolar diffusion is neglected, this variable is just the
initial angular position of a fluid element
that is at the position $ r, \theta, z $ at time $ t $.
In the absence of ambipolar diffusion
the variables $ r, u, $ and $ z $ are constant
following a given fluid element; they
would be the Lagrangian variables
if only rotational motion were considered.
It is appropriate to describe the evolution
of the magnetic field components $ B_r, B_{\theta} $,
and $ B_z $ in terms of these variables.
Inspection of equation~\ref{ueq} shows that
when $ \Omega t $ is large, $ u $ varies rapidly
with $ r $ at fixed $ \theta $, in agreement with
the above qualitative discussion. Changing $ r $ by only
a small amount will change the initial angular
position by $ \pi $. Thus, we expect that the
components of $ {\bf B} $ will vary finitely with
$ u $, but only slowly with $ r $ for fixed $ u $. The
surfaces of constant $ u $ are tightly wrapped spirals.
Thus, the behavior of the field should be finite in
$ r, u, $ and $ z $, but only gradients with respect to
$ u $ and $ z $ should be important.
To see this, let us first write the total velocity,
$ {\bf v = w } + \Omega r {\bf \hat{\theta }} $,
where $ {\bf w} $ is the ambipolar velocity. Next, let us
derive the equations for the components
$ B_r, B_{\theta} , B_z $ and $ w_r, w_{\theta}, w_z $
in terms of the Eulerian variables $ r, \theta $ and $ z $.
\begin{equation}
\frac{\partial B_r}{\partial t} =
({\bf B \cdot \nabla }) w_r - ({\bf w \cdot \nabla }) B_r
- B_r ( \nabla \cdot {\bf w}) -
\Omega \frac{\partial B_r }{\partial \theta },
\label{eq:a2}
\end{equation}
\begin{equation}
\frac{\partial B_{\theta}}{\partial t} =
({\bf B \cdot \nabla }) w_{\theta} + \frac{B_{\theta} w_r }{r}
- ({\bf w \cdot \nabla }) B_{\theta} - \frac{B_r w_{\theta} }{r}
+ r\frac{d \Omega }{dr} B_r
- \Omega \frac{\partial B_{\theta} }{\partial \theta }
- B_{\theta} ( \nabla \cdot {\bf w}),
\label{eq:a3}
\end{equation}
\begin{equation}
\frac{\partial B_z}{\partial t} =
({\bf B \cdot \nabla }) w_z - ({\bf w \cdot \nabla }) B_z
- \Omega \frac{\partial B_z }{\partial \theta }
- B_z ( \nabla \cdot {\bf w}),
\label{eq:a4}
\end{equation}
where the ambipolar velocities are:
\begin{equation}
w_r= -K \frac{\partial B^2 /8 \pi }{\partial r} ,
w_{\theta} = -K \frac{\partial B^2/ 8 \pi }{r \partial \theta } ,
w_z = -K \frac{\partial B^2/ 8 \pi }{\partial z},
\label{eq:a5}
\end{equation}
and
\begin{equation}
\nabla \cdot {\bf w} = \frac{\partial w_r }{\partial r}
+ \frac{\partial w_{\theta} }{ r \partial \theta }
+ \frac{w_r}{r} + \frac{\partial w_z }{\partial z} .
\label{eq:a6}
\end{equation}
Finally, let us transform these equations to the new
coordinates $ r, u, z $. In doing this we assume
that the galactic rotation velocity
$ v_c = \Omega r $ is a constant.
The result of the transformation is
\begin{eqnarray}
\frac{\partial B_r }{\partial t } &= &
\frac{B_{\theta} }{r} \frac{\partial w_r }{\partial u }
+ B_z \frac{\partial w_r }{\partial z} \\ \nonumber
& & -w_r \frac{\partial B_r } {\partial r}
-\frac{w_{\theta} }{r} \frac{\partial B_r }{\partial u }
- \Omega t \frac{w_r }{r} \frac{\partial B_r }{\partial u}
- w_z \frac{\partial B_r }{\partial z}
\\ \nonumber
& &- B_r \left( \frac{w_r }{r}
+ \frac{\partial w_{\theta} }{r \partial u }
+ \frac{\partial w_z }{\partial z } \right) ,
\label{eq:a7}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial B_{\theta} }{\partial t}& =&
B_r \frac{\partial w_{\theta} }{\partial r}
+ \frac{\Omega t}{r} B_r
\frac{\partial w_{\theta} }{\partial u} \\ \nonumber
& &+ B_z \frac{\partial w_{\theta} }{\partial z}
-w_r \frac{\partial B_{\theta} }{\partial r }
- \frac{w_{\theta} }{r} \frac{\partial B_{\theta} }{\partial u}
-\frac{\Omega t}{r} w_r \frac{\partial B_{\theta} }{\partial u} \\ \nonumber
& &- w_z \frac{\partial B_{\theta} }{\partial z }
-\Omega B_r \\ \nonumber
& &- B_{\theta} \frac{\partial w_r }{\partial r}
-\frac{\Omega t }{r} B_{\theta} \frac{\partial w_r }{\partial u }
- B_{\theta} \frac{\partial w_z }{\partial z} ,
\label{eq:a8}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial B_z }{\partial t } &=&
B_r \frac{\partial w_z} {\partial r }
+ \frac{B_{\theta } }{r} \frac{\partial w_z }{\partial u}
+ \Omega t \frac{B_r }{r} \frac{\partial w_z }{\partial u}\\ \nonumber
& & - w_r \frac{\partial B_z }{\partial r }
-\frac{w_{\theta} }{r} \frac{\partial B_z }{\partial u }
- \Omega t \frac{w_r }{r} \frac{\partial B_z }{\partial u} \\ \nonumber
& & - w_z \frac{\partial B_z }{\partial z}
- B_z \frac{\partial w_r }{\partial r}\\ \nonumber
& & -\Omega t \frac{B_z }{r} \frac{\partial w_r }{\partial u}
- B_z \frac{w_r }{r}
- \frac{B_z }{r} \frac{\partial w_{\theta} }{\partial u} ,
\label{eq:a9}
\end{eqnarray}
\begin{eqnarray}
w_r = -K \frac{\partial B^2/ 8 \pi }{\partial r }
-K \frac{\Omega t}{r} \frac{\partial B^2/ 8 \pi }{\partial u},
\label{eq:a10}
\end{eqnarray}
\begin{eqnarray}
w_{\theta} = - \frac{K}{r} \frac{\partial B^2/ 8 \pi }{\partial u } ,
\label{eq:a11}
\end{eqnarray}
\begin{eqnarray}
w_{z} = - K \frac{\partial B^2/ 8 \pi }{\partial z }.
\label{eq:a12}
\end{eqnarray}
Only a few of these terms are important.
To see this let us introduce dimensionless variables for
the velocity and field components. (We will denote the
dimensionless variables by primes.)
We will choose these variables based on our
one-dimensional results, as follows:
The unit of length for the $ z $ variable is the galactic
disk thickness $ D $.
\begin{equation}
z= D z' .
\label{eq:a13}
\end{equation}
The variation of quantities with $ r $ is finite over
the distance $ R $ the radius of the sun's orbit
in the galaxy, so we set
\begin{equation}
r = R r' .
\label{eq:a14}
\end{equation}
The variable $ u $ is already dimensionless and quantities
vary finitely with it, so we
leave it unchanged.
The unit of time $ t_0 $ should be of order
of the age of the disk. During this time the number of
radians through which the galaxy rotates, $ \Omega t $
is of the same order of magnitude as the ratio $ R/D $,
so for analytic convenience we choose $ t_0 $ so that
\begin{equation}
\Omega_0 t_0 = R/D ,
\label{eq:a15}
\end{equation}
and set
\begin{equation}
t = t_0 t'.
\label{eq:a16}
\end{equation}
If we take $ R/D = 100 $, and $ \Omega_0 = 2 \pi /( 2 \times 10^{ 8}
\mbox{years}), $ then $ t_0 = 3 \times 10^{ 9 } $ years.
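(Explicitly,
\[
t_0 = \frac{R/D}{\Omega_0}
= \frac{100 \times 2\times 10^{8}\ \mbox{yr}}{2\pi}
\approx 3\times 10^{9}\ \mbox{yr} ,
\]
which is the value quoted here.)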
It is natural to choose the unit for $ B_{\theta}, B_0 $
as that field whose $ z $ gradient produces an
ambipolar $ z $ velocity of order $ D/t_0 $, that is an average velocity
near that which would be produced by the saturated field.
Thus, we choose $ B_0 $ so that
\begin{equation}
K B_0^2/(4 \pi D) = D/t_0 ,
\label{eq:a17}
\end{equation}
and set
\begin{equation}
B_{\theta} = B_0 B'_{\theta} .
\label{eq:a18}
\end{equation}
(Note that the definitions in these units
are consistent with those of section 3.)
In most cases of interest to us,
$ B_r $ and $ B_z $ are much smaller
than $ B_0 $. ( In general $ B_{\theta} $ is initially
small compared to $ B_0 $, but it grows by a factor of $ R/D $
up to the saturated value of about $ B_0 $.) Thus, we set
\begin{equation}
B_r = (D/R) B_0 B'_r .
\label{eq:a19}
\end{equation}
The vertical field $ B_z $ starts out even weaker
than this magnitude
since the initial horizontal components of the
magnetic field were amplified by the initial compression
which formed the disk. However, the
$ B_z $ field is amplified up to the size of the
$ B_r $ field by the shear of the radial ambipolar velocity
acting on $ B_r $ .
We thus transform $ B_z $ by
\begin{equation}
B_z = (D/R) B_0 B'_z .
\label{eq:a20}
\end{equation}
The ambipolar velocities $ w_r, w_{\theta} $, and $ w_z $
arise from gradients of the magnetic field in the corresponding
directions.
Thus, after $ B_{\theta} $ has been amplified by stretching
to be of order $ B_0, w_z \approx K B_0^2/D $. Similarly,
after the differential rotation has acted to
reduce the scale of the variation of $ B^2 $ in the radial
direction, $ w_r $ becomes of order $ w_z $.
However, the scale of variation in the
$ \theta $ direction remains of order $ R $ over
the age
of the galaxy so that $ w_{\theta} \approx K B_0^2/R $.
Thus, we change the $ w $ components to
$ w' $ components by
\begin{eqnarray}
w_r & = &(D/t_0) w'_r ,\\ \nonumber
w_{\theta}& = &(D^2/R t_0) w'_{\theta} , \\ \nonumber
w_z & = & (D/t_0) w'_z .
\label{eq:a21}
\end{eqnarray}
Now if we transform
equations 64 to 69 by the change of variables
equations 70 to 78, and clear the dimensional factors $ t_0, B_0 $ etc.
from the left hand side, we find that the terms on the
right hand side are either independent of dimensional
units entirely, or are proportional to powers of $ (D/R) << 1 $.
The full equations are given in appendix B.
Dropping these ``smaller'' terms proportional to
a power of $ D/R $ greater than zero, we find that the
equations for the dimensionless variables, to lowest order in
D/R are
\begin{eqnarray}
\frac{\partial B'_r} {\partial t' } &=&
- \frac{1}{r'^2} w'_r t' \frac{\partial B'_r }{\partial u }
- \frac{\partial ( w'_z B'_r ) }{\partial z'} \\ \nonumber
& & + \frac{ B'_{\theta} }{r'} \frac{\partial w'_r }{\partial u}
+ B'_z \frac{\partial w'_r }{\partial z' } ,
\label{eq:a22}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial B'_{\theta} }{\partial t'} &=&
- \frac{t'}{r'^2} \frac{\partial (w'_r B'_{\theta} ) }{\partial u}
- \frac{\partial (w'_z B'_{\theta}) }{\partial z' } \\ \nonumber
& &- \frac{B'_r }{r'} ,
\label{eq:a23}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial B'_z }{\partial t' } &=&
\frac{ B'_{\theta} }{r'} \frac{\partial w'_z }{\partial u}
+ \frac{t' B'_r }{r'^2} \frac{\partial w'_z }{\partial u}\\ \nonumber
& &- \frac{t' w'_r }{r'^2} \frac{\partial B'_z }{\partial u}
- w'_z \frac{\partial B'_z }{\partial z}
- \frac{t'}{r'^2} B'_z \frac{\partial w'_r }{\partial u} ,
\label{eq:a24}
\end{eqnarray}
\begin{equation}
w'_r = - \frac{t'}{2 r'^2} \frac{\partial B'^2_{\theta}}{\partial u} ,
\label{eq:a25}
\end{equation}
\begin{equation}
w'_{\theta} = - \frac{1}{2} \frac{\partial B'^2_{\theta}}{\partial u} ,
\label{eq:a26}
\end{equation}
\begin{equation}
w'_z = - \frac{1}{2} \frac{\partial B'^2_{\theta} }{\partial z' } .
\label{eq:a27}
\end{equation}
Note that $ w'_{\theta} $ does not occur in the equations
for the evolution of the $ B' $ components. Also,
the $ r' $ derivatives are absent from these lower order
equations. The initial conditions on $ B'(t', u, z'; r' ) $
are
\begin{eqnarray}
B'_r( 0, u, z', r')&=& B_r(0,r'R ,u, z'D) , \\ \nonumber
B'_{\theta}( 0, u, z', r') &=& B_{\theta}(0,r'R ,u, z'D) , \\ \nonumber
B'_z ( 0, u, z', r') &=& B_z(0,r' R ,u, z' D) .
\label{eq:a28}
\end{eqnarray}
These transformations are formal, but they
enable us to correctly drop the terms whose effect
is small. Once these
terms are dropped, the equations reduce
to two dimensional equations, which are more
easily handled numerically.
Although we have assumed that the dimensionless variables
are originally of order unity, it may be the case that they
differ substantially from unity. However, an examination
of the various possible relevant cases leads to the conviction
that all the important terms have been kept as
well as other terms which are, perhaps, unimportant.
For example, the initial value of $ B'_{\theta} $
is much smaller than unity.
However, because of the shearing terms in equation 80
(the last term) $ B'_{\theta} $ grows to finite order
when $ t' $ becomes of order unity, so during the
later stages of the galactic disk, $ B'_{\theta} $ is of order unity.
Many of the terms in
equations 79 to 81
have an obvious significance.
The second term on the right hand side of the $ B'_{\theta} $
equation is the vertical decompression term present
in the one dimensional simulation. The
first term is the radial decompression term, which only
becomes important when $ t' \approx 1 $, and the wrapping up
has made the radial ambipolar diffusion important.
Similarly, there is a $ z $ decompressional
term in the $ B'_r $ equation, but, of course, no
radial decompression term. There is a term representing the
effect of shear on the toroidal field in increasing the
radial component, and a similar term resulting
from the action of shear on the $ B'_z $ component.
These shear terms would be small if $ B_{\theta} $
were of order of $ B_r $, or if $ B_z $ were much smaller
than $ B_r $, which is the case initially. However, $ B_z $
is increased by shear terms over the age of the disk,
to a value considerably larger than its initial value.
Equations 79 to 84
only contain derivatives with respect to
$ t', z' $ and $ u $, and none with respect to
$ r' $. Thus, $ r' $ is only a parameter in these
differential equations. Therefore, the components
of the magnetic field evolve independently of those
at a different value of $ r' $ (to lowest order
in $ D/R $).
Further, $ u $ does not occur explicitly
in the differential equations. The initial conditions,
equations 85
do involve $ u $, and are periodic in it. Therefore,
the magnetic field components remain periodic
in $ u $ for all $ t' $.
Let us consider the behavior of such a solution,
periodic in $ u $, in the neighborhood of the sun, $ r' = 1 $,
at fixed $ z' $, say $ z' = 0 $. Transform the
solution back to $ r, \theta $ coordinates. For fixed
$ r $ and $ t $, the solution is periodic in $ \theta, $
e.g. $ B_{\theta} (r, \theta ) = B_{\theta} (r, u+ \Omega(r) t)$.
Moreover, for fixed $ \theta $ we can write
\begin{eqnarray}
u & = & \theta - \Omega(r) t = \theta - \frac{\Delta r}{r}
\Omega_0 t \\ \nonumber
&=& \theta - \frac{\Delta r }{r }t' \frac{R}{D}
= \theta - \frac{\Delta r t'}{D} ,
\label{eq:a29}
\end{eqnarray}
so
\begin{equation}
\Delta r = (\theta - u) \frac{D}{t'} .
\label{eq:a30}
\end{equation}
Thus, for fixed $ \theta, r $ changes by an amount $
2 \pi D/ t' $ when $ u $ changes by its periodic length
$ 2 \pi $. Since $ r' $ changes by a small amount
$ \approx 2 \pi D/R t' $, we may ignore the dependence of
the solution for the
components of $ {\bf B} $ on $ r' $ and the components
of $ {\bf B } $ are nearly periodic in $ r $ ( at fixed $ \theta $ ).
However, because of the actual dependence
of the solution on $ r' $, as a parameter in the equations,
the amplitude and phase ( as well as the shape)
of the periodic solution do change slowly when
one goes a distance comparable with the radius of
the galaxy.
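As an illustrative number (taking $ D = 100 $ pc and a disk age
corresponding to $ t' \approx 3 $), the radial period of the pattern is
\[
\Delta r = \frac{2\pi D}{t'} \approx \frac{2\pi \times 100\ \mbox{pc}}{3}
\approx 200\ \mbox{pc} ,
\]
so the toroidal field reverses sign roughly every hundred parsecs, in
agreement with the estimate made in the introduction.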
Equations 79 to 84 were integrated numerically.
The details of the integration are discussed
in Howard(1995, 1996), where most of the results
are presented. Initial conditions were set by
starting with a cosmic field before compression
into the disk, and then calculating the
resultant fields. In this paper we present the results
for two initial cases. Only the results for the integrations
at $ r =R $, the radius of the galactic solar orbit, are included.
The variation of $ B_{\theta} $ as a function of $ u $, at
$ z= 0 $, is plotted in figure 7 for the case that the
initial cosmic field was uniform. The antisymmetry
in $ u $ is evident, and it is clear, after transforming the field
to be a function of
$ r $ as the independent variable by equation 87,
that no Faraday rotation would be
produced by this field. The same result
for the case when
the initial cosmic field was nonuniform is shown in figure 8.
[The initial cosmic field was chosen so that after compression
into the disk the horizontal field was $ {\bf B } =
B_i [(0.5 + x ){\bf \hat{x}} + y {\bf \hat{y}} ] $.]
The resulting saturated field is not antisymmetric, and does not
average out in $ u $. It also would not average out
when transformed to be a function of $ r $, and {\em would}
produce a Faraday rotation. The variation
of $ B_{\theta} $ with $ z $ at $ u = 0 $
at a time of 9 gigayears, is shown in figure 9.
It has the parabolic shape found in section 3.
The variation of $ B_{\theta} $ with time at the point
$ u= 0, z = 0 $, is given in figure 10 for the two cases.
The results are also similar to those of figure 2
derived from the simple parabolic approximation.
\section{Ambipolar Diffusion in the Interstellar Medium}
We now consider the averaged equations for
the magnetic field, taking into account the interstellar
clouds. The bulk of interstellar matter
is in the form of diffuse clouds and
molecular clouds. Because
the properties of the molecular clouds are
not very well known, we make the simplifying assumption
that essentially
all the interstellar matter is in diffuse clouds,
with a small amount of matter in the intercloud region.
In describing the clouds, we make use of
properties given by Spitzer(1968).
We assume that all the clouds are
identical. We further
include the cosmic ray pressure, and the magnetic pressure,
in the intercloud
region, but neglect the pressure of the intercloud matter.
Then
the cosmic rays and the magnetic fields are held in
the disk against their outward pressures by the
weight of the clouds in the gravitational field of
the stars. (See figure 3.)
Now the force due to the magnetic and cosmic ray pressure
gradients
is exerted only on the ionized matter in the clouds,
while the gravitational force is exerted mainly on the
neutrals in the clouds, since the fraction of ionization
in the clouds is generally very low. Thus, these contrary forces pull
the ions through the neutrals with ambipolar diffusion
velocity $ v_D $. The frictional force between the
ions and neutrals is proportional to $ v_D $.
By equating the magnetic plus the cosmic ray force to
the frictional force, we can obtain
the mean ambipolar velocity in the clouds.
Now, we assume that the cosmic ray pressure
$ p_R $ is related to the magnetic pressure $ B^2/8 \pi $
by the factor $ \beta/\alpha $(Spitzer 1968). We take $ \beta/\alpha $
independent of time and space.
This is plausible since when the magnetic field is
strong, we expect the cosmic ray confinement
to be better and therefore the cosmic ray pressure
to be larger.
The mean vertical force per unit volume
produced by the magnetic field strength gradients
and cosmic ray pressure gradients is
\begin{equation}
F = - ( 1 + \beta/\alpha ) \frac{\partial B^2 / 8 \pi }{ \partial z} .
\end{equation}
But this force is counterbalanced by the gravitational force on
neutrals in the clouds, which
occupy a fractional volume equal to the
filling factor, $ f $, times the total volume.
Thus, the force per unit volume
on the ions in the clouds is
\begin{equation}
F_{cloud} = \frac{F}{f} =
- ( 1 + \beta/\alpha ) \frac{1}{f}
\frac{\partial B^2/ 8 \pi }{ \partial z} .
\end{equation}
This force produces an ambipolar velocity, $ v_D $,
of the ions relative to the neutrals
such that
\begin{equation}
F_{cloud} = n_i m^{*} \nu v_D ,
\end{equation}
where $ m^{*} $ is the mean ion mass, $ \nu $ is the
effective ion--neutral collision rate for momentum
transfer and $ n_i $ is the ion number density, in
the clouds. Now,
\begin{equation}
\nu \approx n_c \frac{m_H}{m^{*}+m_H } <\sigma v >
\end{equation}
where $ \sigma $ is the momentum transfer collision
cross section, and we assume that the neutrals are all
hydrogen with atomic mass $ m_H $.
Thus, we have
\begin{equation}
F_{cloud} = n_i \frac{m_H m^{*} }{m^{*}+m_H } <\sigma v > n_c v_D
\end{equation}
If $ m^{*} >> m_H $, then the mass factor is $ m_H $.
Hence, if the ions are mostly singly charged carbon,
the mass factor is $ \approx m_H $, while if they
are mostly hydrogen ions then it is $ m_H/2 $.
The ions are tied to the magnetic field
lines, since the plasma is effectively infinitely
conducting. If there are several species of ions,
they all have the same cross field
ambipolar velocity.
Thus, the ambipolar diffusion velocity in the clouds
is
\begin{equation}
v_D = - \frac{1}{f} \frac{(1 + \beta/\alpha )\nabla (B^2/8 \pi)}
{n_i m^{*}_{eff} < \sigma v > n_c } ,
\label{eq:93}
\end{equation}
where $ m^{*}_{eff} = <m^{*} m_H/(m^{*} + m_H)> $
is the effective mass averaged over ion species,
and $ \sigma $ is the ion--neutral cross section.
The velocity $ v_D $ is the mean velocity
of an ion inside a cloud. It is also the velocity of
a given line of force. Now, the ions are continually
recombining and being replaced by other ions,
so it is actually the motion of the
magnetic lines of force that has significance.
Further, the clouds themselves have a short life
compared to the age of the disk. They
collide with other clouds
every $ 10^{ 7} $ years or so, and then quickly reform.
After the collision the cloud material is dispersed,
but because of its high conductivity, it stays connected
to the same lines of force. After the cloud
reforms the lines of force are still connected to the same
mass. The lines then continue to move in the reformed
cloud, again in the opposite direction to the field gradient.
As a consequence, during each cloud lifetime,
the lines of force pass through a certain amount of mass
before the cloud is destroyed.
The amount of mass
passed during a cloud life time, by a single flux tube of diameter $ d $
is $ 2 v_D t_c a d \rho_c $ where $ t_c $ is the cloud life time,
$ 2 a $ is the cloud diameter,
and $ \rho_c $ is the cloud density. However,
because of our assumption that the bulk of the matter is
in these diffuse clouds, it must be the case that
there is only a short time in between the destruction
of one cloud, and its reformation. Thus,
during a time $ t $, the field passes through
$ \approx t/t_c $ successive clouds. If we consider
a length of a line of force $ L $, at any one time
it passes through $ f L /2 a $ different clouds.
Then, in the time $ t $, the amount of matter
passed by this length of a given tube of force
(of diameter $ d $ ) is
\begin{equation}
\Delta M = 2 a d v_D t_c \rho_c \frac{L}{2a} \frac{ t}{t_c}
=L f \rho_c d v_D t.
\end{equation}
But $ f \rho_c $, is the mean density, $ \rho $,
of interstellar matter.
Thus, if we define $ \Delta x $ as the average effective distance
that this magnetic tube
moves through the disk by
\begin{equation}
\Delta M = \rho \Delta x L d ,
\end{equation}
we get
\begin{equation}
d \rho \Delta x L = L d \rho v_D t ,
\end{equation}
or
\begin{equation}
\frac{\Delta x}{t} = v_D .
\end{equation}
This
effective {\em velocity}, averaged over many cloud lifetimes,
is the only velocity for the magnetic field lines that makes sense.
It is equal to the velocity, $ v_D $, of the ions
in the cloud material.
So far, we have made the simplifying assumption
that the clouds
are stationary, which is of course not the case.
As the clouds move through the interstellar medium they
stretch the lines, and additional tension forces
and ambipolar velocities arise. However,
these velocities are always directed toward the
mean position of the cloud material, and thus they
tend to average to zero. The ambipolar velocity which
we have calculated above, under the assumption of
stationary clouds, actually gives the rate of displacement of the
lines of force relative to the mean position
of the cloud. It is a secular velocity, and it is the only
velocity which really counts.
We assume that in the diffuse interstellar clouds
only carbon is ionized, so that $ m^{*}_{eff} = m_{H} $.
We choose $ < \sigma v > =
2 \times 10^{ -9} \mbox{cm}^3/ \mbox{sec} $ (Spitzer 1978).
If the filling factor of the clouds $ f = 0.1 $, and
if the mean interstellar density of hydrogen is
$ n_0 \approx 1.0 / \mbox{cm}^3 $, then
the density in the clouds is $ n_c \approx 10 / \mbox{cm}^3 $.
The cosmic abundance of carbon is $ 3 \times 10^{ -4} $(Allen 1963).
We assume that this abundance is depleted by $ \zeta_0 \approx 0.2 $,
(Spitzer 1978), the rest of the carbon being locked up in grains.
We further take
$ \beta/\alpha = 2 $. Then comparing equation~\ref{eq:40} with
equation~\ref{eq:93} gives
\begin{equation}
K = \frac{( 1 + \beta / \alpha) }{n_i m^{*}_{eff} < \sigma v > n_0}
= 1.5 \times 10^{ 36} ,
\end{equation}
in cgs units. Taking $ D = 100 $ pc, we get from equation~\ref{eq:54}
\begin{equation}
B_0 = 2.85 \times 10^{ -6} \mbox{gauss} .
\end{equation}
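For reference, the arithmetic behind these two numbers (with the values
assumed above: $ m^{*}_{eff} \approx m_H = 1.67\times 10^{-24} $ g,
$ n_i = 6\times 10^{-4}\ \mbox{cm}^{-3} $, $ n_0 = 1.0\ \mbox{cm}^{-3} $,
$ D = 3.1\times 10^{20} $ cm and $ t_0 \approx 9.5\times 10^{16} $ s) is
\[
K \approx \frac{3}{(6\times 10^{-4})(1.67\times 10^{-24})
(2\times 10^{-9})(1.0)} \approx 1.5\times 10^{36} ,
\qquad
B_0 = \left(\frac{4\pi D^2}{K t_0}\right)^{1/2}
\approx 2.9\times 10^{-6}\ \mbox{gauss} ,
\]
in cgs units, where the expression for $ B_0 $ follows from
equation~\ref{eq:54}.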
We take $ t_0 = 3 \times 10^{ 9} $ years. We see that if
$ t' = 3 $, corresponding to an age for the galactic disk
of $ 9 \times 10^{ 9} $ years, then
the present value of the magnetic field
from figure 8, should be $ 1.5 \times 10^{ -6} $
gauss.
This value depends on our assumptions concerning the properties
of the clouds. In particular, if the hydrogen in the
interstellar clouds is partly ionized by low energy
cosmic rays penetrating the clouds, then the ambipolar diffusion
will be slower, and $ K $ will be smaller, leading to
a larger value for $ B_0 $. The consequence of this
is that the initial dimensionless magnetic field
$ B'_r $ will be smaller, so that it would take longer
to reach saturation. However, the saturated value
will be larger.
\section{Conclusion}
We have assumed that there
was a cosmic magnetic field present before
the galaxy formed. On the basis of this
hypothesis we have constructed a simplified model
of the galactic disk in order to investigate
how the magnetic field
would evolve, and what it would look
like at present. The two essential ingredients
of this model are: the differential rotation
of the interstellar medium, and the motion
of the field lines through the interstellar medium produced
by the ambipolar diffusion of the ionized component
of the plasma driven through the neutrals by magnetic pressure and
cosmic ray pressure gradients. The effect
of turbulent motions is assumed to average out.
No large scale
mean field dynamo action on the magnetic
field was included in this model. (This model
differs from that of Piddington in that the magnetic
field is too weak to affect the galactic rotation
of the interstellar medium, and also, ambipolar diffusion is
included.)
The consequences of the model
were investigated by an approximate analytical
calculation in Section 2, and by more detailed
numerical
simulations in Section 3 and 4. These simulations
confirmed the results of the approximate analysis
of section 2.
The basic results are:
(1) To first approximation,
the magnetic field evolves
locally following a rotating fluid
element. It first grows by stretching
the radial component of the magnetic field into
the toroidal direction. When the field becomes
strong enough, the line commences to shorten
because of the vertical motions produced by
ambipolar diffusion. This reduces the radial
component, and therefore the stretching.
After a certain time, the field strength
saturates and starts to decrease as the
reciprocal square root of time.
This asymptotic behavior is determined
only by the ambipolar diffusion properties
of the clouds. Thus, the field strength everywhere
approaches the same value at a given time.
At the present time, this value is estimated to
be in the range of a few microgauss. This saturated value
is independent of the initial value which the
field had when the disk first formed, provided that the
initial value of the field strength
is greater than $ 10^{ -8} $ gauss.
The extent of each magnetic field line
in the toroidal direction also saturates,
but its length in the disk {\em does} depend on the
initial value.
(2) The direction of the toroidal field
in any given fluid element depends on the sign
of the initial radial component.
Since this sign varies with position, and since
differential rotation mixes these positions,
it turns out that the resulting toroidal field
varies rapidly with radius along a fixed radial direction.
The toroidal field changes direction on a scale of a hundred
parsecs. Because the saturated field strength
is nearly constant in magnitude, the toroidal field
strength as a function of $ r $ at fixed $ \theta $
varies as
a square wave. However, the lobes of this square
wave need not be equal since the regions of one initial
sign of $ B_r $ may be larger than those of the other
sign. In this case, the model predicts a toroidal field that would
produce a net Faraday rotation in radio sources such
as pulsars or polarized
extragalactic radio sources, in spite of its rapid variation in sign.
It is the prevailing belief that the
galactic magnetic field does not reverse in radius on small
scales. In fact, this belief is grounded in an analysis of
Faraday rotation measures of pulsars (Hamilton and Lynn 1987).
In analyzing these
rotation measures, Rand and Kulkarni (1988) employed
various simple models of the
galactic field. In every one of these models, the
magnetic field reversed only on large scales of
order one kiloparsec. These models, which only
allowed a slow variation of $ B $, led to results that were
consistent with the observations, and thus supported the
general belief in reversal only on large scales.
However, when this analysis was carried out,
there was no apparent reason
not to consider a model in which the field varied rapidly in
radius on scales of order of a hundred parsecs,
although in hindsight it could have been done.
However,
such a model
was not included in the analysis of the rotation measures,
and thus it was not tested. Therefore, at present, there is no reason
to exclude such a model.
In short, the prevailing
belief that the galactic magnetic field is of constant sign
on a large scale, actually resulted from the
assumption that the field was a large scale field, and
from the consistency of this assumed model with observations.
Therefore, this
conclusion
has not been rigorously demonstrated. A magnetic field,
such as that arrived at from our model, which has the rapid
variation of the field with radius, would lead to
fluctuations from the mean.
Indeed, such fluctuations are in the data and
are attributed to a general isotropic random
magnetic field in addition to the mean field.
We feel that the nonuniform square wave could probably
fit the observations as well as the other models,
so that our model can also be shown to be consistent with
observations.
We have not yet demonstrated quantitative
consistency with observations, but we hope
to carry out this task in the future.
(3) The magnetic
field observed in our galaxy and in other
galaxies should actually be the true detailed magnetic field
averaged over regions in space larger than those
regions over which we find our field to vary
(several hundred parsecs at least). Thus,
under averaging the field of our model would actually
appear as an
axisymmetric toroidal magnetic field. An analysis of the various
galactic magnetic fields has made the {\em ansatz}
that the origin of the field can be distinguished as
primordial if it has
bisymmetric symmetry, and as due to a dynamo if
it has
axisymmetry (Sofue et al. 1990).
(It must be borne in mind that the actual magnetic field is perturbed by
the spiral arms, so that it is parallel
to the arms in the region of the spiral arms and
toroidal in the region in between the arms as described
in the introduction.) On the basis of our
analysis, we conclude that this method of
distinguishing the origin is not valid,
and could lead to incorrect conclusions concerning the
origin of the galactic magnetic fields.
(4) Parker (1968, 1973a,b) and others (Ruzmaikin et al. 1988)
have
put forth three objections to a
primordial origin. These objections are:
(a) A primordial
field would be tightly wrapped up, contrary to observations.
(b) Such a field would be expelled by ambipolar diffusion
or turbulent diffusion.
(c) There is no known mechanism to produce a
large scale primordial field in the early universe.
On the basis of our model we can counter the first
two objections (a) and (b). Our model does
indeed lead to a tightly wrapped spiral magnetic field.
However,
when averaged over a sufficiently large scale, as is
done automatically by observations,
the resulting field should actually
appear to be axisymmetric and azimuthal.
This averaged field is, thus, not in disagreement with
observations. In addition, if the
large scale field is not uniform, then the
field after saturation
by ambipolar diffusion creates larger regions
of one sign than of the other sign
and the averaged field is not zero. Thus, objection
(a) does not defeat our model for a
field of primordial origin.
With respect to objection (b), our model predicts field
lines that thread through the disk, entering
on one edge and leaving on the other. Thus, it is
impossible for a vertical ambipolar motion or
turbulent mixing to
expel lines of force. This result counters objection (b).
The third objection is still open to debate
and we do not discuss it.
The two observations that should test our model are the following:
(i) the magnetic field in other galaxies should be observed
to be toroidal in between the arms; (ii) a reanalysis
of the pulsar rotation measures should fit our model
without very large extra fluctuations, these fluctuations
being accounted for by the rapid reversals of the
toroidal field predicted by our model.
To summarize, we have shown that a
careful analysis of the evolution of a primordial
field throws new light on the way one should
view the primordial field hypothesis.
\acknowledgments
The authors are grateful for helpful discussions
with Steve Cowley and Ellen Zweibel.
The work was supported by the National Science
Foundation Grant AST 91-21847
and by NASA's astrophysical program
under grant NAG5 2796. ACH was supported by an
NSF minority fellowship and an AT\&T Bell Labs
Cooperative Research Fellowship.
\clearpage
\section{Introduction}
The Feynman diagrammatic technique has proven quite useful in order to
perform and organize the perturbative solution of quantum many-body
theories. The main idea is the computation of the Green's or
correlation functions by splitting the action $S$ into a quadratic or
free part $S_Q$ plus a remainder or interacting part $S_I$ which is
then treated as a perturbation. From the beginning this technique has
been extended to derive exact relations, such as the
Schwinger-Dyson~\cite{Dy49,Sc51,It80} equations, or to make
resummation of diagrams as that implied in the effective action
approach~\cite{Il75,Ne87} and its generalizations~\cite{Co74}.
Consider now a generalization of the above problem, namely, to solve
(i.e., to find the Green's functions of) a theory with action given by
$S+\delta S$ perturbatively in $\delta S$ but where the
``unperturbed'' action $S$ (assumed to be solved) is not necessarily
quadratic in the fields. The usual answer to this problem is to write
the action as a quadratic part $S_Q$ plus a perturbation $S_I+\delta
S$ and then to apply the standard Feynman diagrammatic technique. This
approach is, of course, correct but it does not exploit the fact that
the unperturbed theory $S$ is solved, i.e., its Green's functions are
known. For instance, the computation of each given order in $\delta S$
requires an infinite number of diagrams to all orders in $S_I$. We
will refer to this as the {\em standard expansion}. In this paper it
is shown how to systematically obtain the Green's functions of the
full theory, $S+\delta S$, in terms of those of the unperturbed one,
$S$, plus the vertices provided by the perturbation, $\delta
S$. Unlike the standard expansion, in powers of $S_I+\delta S$, the
expansion considered here is a strict perturbation in $\delta S$ and
constitutes the natural extension of the Feynman diagrammatic
technique to unperturbed actions which are not necessarily
quadratic. We shall comment below on the applications of such an
approach.
\section{Many-body theory background}
\subsection{Feynman diagrams and standard Feynman rules}
In order to state our general result let us recall some well known
ingredients of quantum many-body theory (see e.g.~\cite{Ne87}), and in
passing, introduce some notation and give some needed definitions.
Consider an arbitrary quantum many-body system described by variables
or {\em fields} $\phi^i$, that for simplicity in the presentation will
be taken as bosonic. As will be clear below, everything can be
generalized to include fermions. Without loss of generality we can use
a single discrete index $i$ to represent all the needed labels (DeWitt
notation). For example, for a relativistic quantum field theory, $i$
would contain space-time, Lorentz and Dirac indices, flavor, kind of
particle and so on. Within a functional integral formulation of the
many-body problem, the expectation values of observables, such as
$A[\phi]$, take the following form:
\begin{equation}
\langle A[\phi] \rangle = \frac{\int\exp\left(S[\phi]\right)A[\phi]\,d\phi}
{\int\exp\left(S[\phi]\right)\,d\phi}\,.
\label{eq:1}
\end{equation}
Here the function $S[\phi]$ will be called the {\em action} of the
system and is a functional in general. Note that in some cases
$\langle A[\phi]\rangle$ represents the time ordered vacuum
expectation values, in other the canonical ensemble averages, etc, and
also the quantity $S[\phi]$ may correspond to different objects in
each particular application. In any case, all (bosonic) quantum
many-body systems can be brought to this form and only
eq.~(\ref{eq:1}) is needed to apply the Feynman diagrammatic
technique. As already noted, this technique corresponds to write the
action in the form $S[\phi]=S_Q[\phi]+S_I[\phi]$:
\begin{equation}
S_Q[\phi]=\frac{1}{2}m_{ij}\phi^i\phi^j\,,\qquad
S_I[\phi]=\sum_{n\geq 0}\frac{1}{n!}g_{i_1\dots i_n}
\phi^{i_1}\cdots\phi^{i_n} \,,
\end{equation}
where we have assumed that the action is an analytical function of the
fields at $\phi^i=0$. Also, a repeated indices convention will be used
throughout. The quantities $g_{i_1\dots i_n}$ are the {\em coupling
constants}. The matrix $m_{ij}$ is non singular and otherwise
arbitrary, whereas the combination $m_{ij}+g_{ij}$ is completely
determined by the action. The {\em free propagator}, $s^{ij}$, is
defined as the inverse matrix of $-m_{ij}$. The signs in the
definitions of $S[\phi]$ and $s^{ij}$ have been chosen so that there
are no minus signs in the Feynman rules below. The $n$-point {\em
Green's function} is defined as
\begin{equation}
G^{i_1\cdots i_n}= \langle\phi^{i_1}\cdots\phi^{i_n}\rangle\,, \quad n\geq 0\,.
\end{equation}
Let us note that under a non singular linear transformation of the
fields, and choosing the action to be a scalar, the coupling constants
transform as completely symmetric covariant tensors and the propagator
and the Green's functions transform as completely symmetric
contravariant tensors. The tensorial transformation of the Green's
functions follows from eq.~(\ref{eq:1}), since the constant Jacobian
of the transformation cancels among numerator and denominator.
Perturbation theory consists of computing the Green's functions as a
Taylor expansion in the coupling constants. We remark that the
corresponding series is often asymptotic, however, the perturbative
expansion is always well defined. By inspection, and recalling the
tensorial transformation properties noted above, it follows that the
result of the perturbative calculation of $G^{i_1\cdots i_n}$ is a sum
of monomials, each of which is a contravariant symmetric tensor
constructed with a number of coupling constants and propagators, with
all indices contracted except $(i_1\cdots i_n)$ times a purely
constant factor. For instance,
\begin{equation}
G^{ab}= \cdots + \frac{1}{3!}s^{ai}g_{ijk\ell}s^{jm}s^{kn}s^{\ell
p}g_{mnpq}s^{qb} +\cdots \,.
\label{eq:example}
\end{equation}
Each monomial can be represented by a {\em Feynman diagram} or graph:
each $k$-point coupling constant is represented by a vertex with $k$
prongs, each propagator is represented by an unoriented line with two
ends. The dummy indices correspond to ends attached to vertices and
are called {\em internal}, the free indices correspond to unattached
or external ends and are the {\em legs} of the diagram. The lines
connecting two vertices are called {\em internal}, the others are {\em
external}. By construction, all prongs of every vertex must be
saturated with lines. The diagram corresponding to the monomial in
eq.~(\ref{eq:example}) is shown in figure~\ref{f:1}.
\begin{figure}
\begin{center}
\vspace{-0.cm}
\leavevmode
\epsfysize = 2.0cm
\makebox[0cm]{\epsfbox{f.1.EPS}}
\end{center}
\caption{Feynman graph corresponding to the monomial in
eq.~(\ref{eq:example}).}
\label{f:1}
\end{figure}
A graph is {\em connected} if it is connected in the topological
sense. A graph is {\em linked} if every part of it is connected to at
least one of the legs (i.e., there are no disconnected $0$-legs
subgraphs). All connected graphs are linked. For instance, the graph
in figure~\ref{f:1} is connected, that in figure~\ref{f:2}$a$ is
disconnected but linked and that in figure~\ref{f:2}$b$ is
unlinked. To determine completely the value of the graph, it only
remains to know the weighting factor in front of the monomial. As
shown in many a textbook~\cite{Ne87}, the factor is zero if the
diagram is not linked. That is, unlinked graphs are not to be included
since they cancel due to the denominator in eq.~(\ref{eq:1}); a result
known as Goldstone theorem. For linked graphs, the factor is given by
the inverse of the {\em symmetry factor} of the diagram which is
defined as the order of the symmetry group of the graph. More
explicitly, it is the number of topologically equivalent ways of
labeling the graph. For this counting all legs are distinguishable
(due to their external labels) and recall that the lines are
unoriented. Dividing by the symmetry factor ensures that each distinct
contribution is counted once and only once. For instance, in
figure~\ref{f:1} there are three equivalent lines, hence the factor
$1/3!$ in the monomial of eq.~(\ref{eq:example}).
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.0cm\epsffile{f.2.EPS}}}
\vspace{6pt}
\caption{$(a)$ A linked disconnected graph. $(b)$ An unlinked
graph. The cross represents a 1-point vertex.}
\label{f:2}
\end{figure}
Thus, we arrive at the following {\em Feynman rules} to compute
$G^{i_1\cdots i_n}$ in perturbation theory:
\begin{enumerate}
\item Consider each $n$-point linked graph. Label the legs with
$(i_1,\dots,i_n)$, and label all internal ends as well.
\item Put a factor $g_{j_1\dots j_k}$ for each $k$-point vertex, and a
factor $s^{ij}$ for each line. Sum over all internal indices and
divide the result by the symmetry factor of the graph.
\item Add up the value of all topologically distinct such graphs.
\end{enumerate}
We shall refer to the above as the Feynman rules of the theory
``$S_Q+S_I$''. There are several relevant remarks to be made: If
$S[\phi]$ is a polynomial of degree $N$, only diagrams with at most
$N$-point vertices have to be retained. The choice $g_{ij}=0$ reduces
the number of diagrams. The 0-point vertex does not appear in any
linked graph. Such a term corresponds to an additive constant in the
action and cancels in all expectation values. On the other hand, the
only linked graph contributing to the 0-point Green's function is a
diagram with no elements, which naturally takes the value 1.
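As the simplest illustration of these rules (a check only, using the
notation above), the lowest terms in the 2-point function read
\begin{equation}
G^{ab}= s^{ab} + s^{ai}g_{ij}s^{jb} + s^{ai}g_i\,s^{bj}g_j + \cdots\,,
\end{equation}
where the first two graphs are connected, the third one is linked but
disconnected, and all three carry symmetry factor 1 since the legs are
distinguishable.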
Let us define the {\em connected Green's functions}, $G_c^{i_1\cdots
i_n}$, as those associated to connected graphs (although they can be
given a non perturbative definition as well). From the Feynman rules
above, it follows that linked disconnected diagrams factorize into its
connected components, thus the Green's functions can be expressed in
terms of the connected ones. For instance
\begin{eqnarray}
G^i &=& G_c^i \,, \nonumber \\
G^{ij} &=& G_c^{ij} + G_c^iG_c^j \,, \\
G^{ijk} &=& G_c^{ijk} +
G_c^iG_c^{jk} + G_c^jG_c^{ik} + G_c^kG_c^{ij} + G_c^iG_c^jG_c^k \,.
\nonumber
\end{eqnarray}
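Conversely (a standard cumulant inversion, quoted here only for
reference), the connected functions follow from the full ones as
\begin{eqnarray}
G_c^{ij} &=& G^{ij}-G^iG^j \,, \nonumber \\
G_c^{ijk} &=& G^{ijk} - G^iG^{jk} - G^jG^{ik} - G^kG^{ij}
+ 2\,G^iG^jG^k \,.
\end{eqnarray}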
It will also be convenient to introduce the {\em generating function}
of the Green's functions, namely,
\begin{equation}
Z[J] = \int\exp\left(S[\phi]+J\phi\right)\,d\phi \,,
\label{eq:Z}
\end{equation}
where $J\phi$ stands for $J_i\phi^i$ and $J_i$ is called the {\em
external current}. By construction,
\begin{equation}
\frac{Z[J]}{Z[0]} = \langle\exp\left(J\phi\right)\rangle
=\sum_{n\geq 0}\frac{1}{n!}G^{i_1\cdots i_n}J_{i_1}\cdots J_{i_n}\,,
\end{equation}
hence the name generating function. The quantity $Z[0]$ is known as
{\em partition function}. Using the replica method~\cite{Ne87}, it can
be shown that $W[J]=\log\left(Z[J]\right)$ is the generator of the
connected Green's functions. It is also shown that $W[0]$ can be
computed, within perturbation theory, by applying essentially the same
Feynman rules given above as the sum of connected diagrams without
legs and the proviso of assigning a value $-\frac{1}{2}{\rm
tr}\,\log(-m/2\pi)$ to the diagram consisting of a single closed
line. The partition function is obtained if non connected diagrams are
included as well. In this case, it should be noted that the
factorization property holds only up to possible symmetry factors.
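Explicitly, the statement that $W[J]$ generates the connected
functions means that
\begin{equation}
G_c^{i_1\cdots i_n}=
\left.\frac{\partial^n W[J]}
{\partial J_{i_1}\cdots\partial J_{i_n}}\right|_{J=0}\,,\qquad n\geq 1\,,
\end{equation}
in complete analogy with the expansion of $Z[J]/Z[0]$ written above.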
\subsection{The effective action}
To proceed, let us introduce the {\em effective action}, which will be
denoted $\Gamma[\phi]$. It can be defined as the Legendre transform of
the connected generating function. For definiteness we put this in the
form
\begin{equation}
\Gamma[\phi] = \min_J\left(W[J]-J\phi\right)\,,
\end{equation}
although in general $S[\phi]$, $W[J]$, as well as the fields, etc, may
be complex and only the extremal (rather than minimum) property is
relevant. For perturbation theory, the key feature of the effective
action is as follows. Recall that a connected graph has $n$ {\em
loops} if at most $n$ internal lines can be removed while leaving it
connected. For an arbitrary graph, the number of loops is
defined as the sum over its connected components. {\em Tree} graphs
are those with no loops. For instance the diagram in
figure~\ref{f:1} has two loops whereas that in figure~\ref{f:3} is
a tree graph. Then, the effective action coincides with the equivalent
action that at tree level would reproduce the Green's functions of
$S[\phi]$. To be more explicit, let us make an arbitrary splitting of
$\Gamma[\phi]$ into a (non singular) quadratic part $\Gamma_Q[\phi]$
plus a remainder, $\Gamma_I[\phi]$,
\begin{equation}
\Gamma_Q[\phi]=\frac{1}{2}\overline{m}_{ij}\phi^i\phi^j\,, \qquad
\Gamma_I[\phi]=\sum_{n\ge 0}\frac{1}{n!}
\overline{g}_{i_1\dots i_n}\phi^{i_1}\cdots\phi^{i_n}\,,
\end{equation}
then the Green's functions of $S[\phi]$ are recovered by using the
Feynman rules associated to the theory ``$\Gamma_Q+\Gamma_I$'' but
adding the further prescription of including only tree level
graphs. The building blocks of these tree graphs are the {\em
effective line}, $\overline{s}^{ij}$, defined as the inverse matrix of
$-\overline{m}_{ij}$, and the {\em effective (or proper) vertices},
$\overline{g}_{i_1\dots i_n}$. This property of the effective action
will be proven below. Let us note that $\Gamma[\phi]$ is completely
determined by $S[\phi]$, and is independent of how $m_{ij}$ and
$\overline{m}_{ij}$ are chosen. In particular, the combination
$\overline{m}_{ij}+\overline{g}_{ij}$ is free of any choice. Of
course, the connected Green's functions are likewise obtained at tree level from
the theory ``$\Gamma_Q+\Gamma_I$'', but including only connected
graphs.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=3.5cm\epsffile{f.3.EPS}}}
\caption{A tree graph.}
\label{f:3}
\end{figure}
For ulterior reference, let us define the {\em effective current} as
$\overline{g}_i$ and the {\em self-energy} as
\begin{equation}
\Sigma_{ij}= \overline{m}_{ij}+\overline{g}_{ij}-m_{ij}\,.
\end{equation}
Note that $\Sigma_{ij}$ depends not only on $S[\phi]$ but also on the
choice of $S_Q[\phi]$.
A connected graph is {\em 1-particle irreducible} if it remains
connected after removing any internal line, and otherwise it is called
{\em 1-particle reducible}. In particular, all connected tree graphs
with more than one vertex are reducible. For instance the graph in
figure~\ref{f:1} is 1-particle irreducible whereas those in
figures~\ref{f:3} and ~\ref{f:4} are reducible. To {\em amputate} a
diagram (of the theory ``$S_Q+S_I$'') is to contract each leg with a
factor $-m_{ij}$. In the Feynman rules, this corresponds to not
including the propagators of the external legs. Thus the amputated
diagrams are covariant tensors instead of contravariant. Then, it is
shown that the $n$-point effective vertices are given by the connected
1-particle irreducible amputated $n$-point diagrams of the theory
``$S_Q+S_I$''. (Unless $n=2$. In this case the sum of all such
diagrams with at least one vertex gives the self-energy.)
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=3.5cm\epsffile{f.4.EPS}}}
\caption{$(a)$ A 1-particle reducible graph. $(b)$ A graph with a
tadpole subgraph.}
\label{f:4}
\end{figure}
A graph has {\em tadpoles} if it contains a subgraph from which stems
a single line. It follows that all graphs with 1-point vertices have
tadpoles. Obviously, when the single line of the tadpole is internal,
the graph is 1-particle reducible (cf. figure~\ref{f:4}$b$). An
important particular case is that of actions for which
$\langle\phi^i\rangle$ vanishes. This ensures that the effective
current vanishes, i.e. $\overline{g}_i=0$ and thus all tree graphs of
the theory ``$\Gamma_Q+\Gamma_I$'' are free of tadpoles (since tadpole
subgraphs without 1-point vertices require at least one loop). Given
any action, $\langle\phi^i\rangle=0$ can be achieved by a redefinition
of the field $\phi^i$ by a constant shift, or else by a readjustment
of the original current $g_i$, so this is usually a convenient
choice. A further simplification can be achieved if $\Gamma_Q[\phi]$
is chosen as the full quadratic part of the effective action, so that
$\overline{g}_{ij}$ vanishes. Under these two choices, each Green's
function requires only a finite number of tree graphs of the theory
``$\Gamma_Q+\Gamma_I$''. Also, $\overline{s}^{ij}$ coincides with the
full connected propagator, $G_c^{ij}$, since a single effective line
is the only possible diagram for it. Up to 4-point functions, it is
found
\begin{eqnarray}
G_c^i &=& 0 \,, \nonumber \\
G_c^{ij} &=& \overline{s}^{ij} \,, \label{eq:connected}
\\
G_c^{ijk} &=&
\overline{s}^{ia}\overline{s}^{jb}\overline{s}^{kc}\overline{g}_{abc}
\,,\nonumber \\
G_c^{ijk\ell} &=&
\overline{s}^{ia}\overline{s}^{jb}\overline{s}^{kc}\overline{s}^{\ell
d}\overline{g}_{abcd} \nonumber \\
& & +\overline{s}^{ia}\overline{s}^{jb}\overline{g}_{abc}
\overline{s}^{cd}\overline{g}_{def}\overline{s}^{ek}\overline{s}^{f\ell}
+\overline{s}^{ia}\overline{s}^{kb}\overline{g}_{abc}
\overline{s}^{cd}\overline{g}_{def}\overline{s}^{ej}\overline{s}^{f\ell}
+\overline{s}^{ia}\overline{s}^{\ell b}\overline{g}_{abc}
\overline{s}^{cd}\overline{g}_{def}\overline{s}^{ek}\overline{s}^{fj}
\,.\nonumber
\end{eqnarray}
The corresponding diagrams are depicted in figure~\ref{f:5}. Previous
considerations imply that in the absence of tadpoles, $G_c^{ij}=
-((m+\Sigma)^{-1})^{ij}$.
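Equivalently (expanding the inverse in a geometric series, understood
order by order in perturbation theory), this is the familiar Dyson
resummation
\begin{equation}
G_c^{ij}= -\left((m+\Sigma)^{-1}\right)^{ij}
= s^{ij}+s^{ia}\Sigma_{ab}s^{bj}
+s^{ia}\Sigma_{ab}s^{bc}\Sigma_{cd}s^{dj}+\cdots\,,
\end{equation}
since $s^{ij}$ is the inverse matrix of $-m_{ij}$.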
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.5cm\epsffile{f.5.EPS}}}
\vspace{6pt}
\caption{Feynman diagrams for the 3- and 4-point connected Green's
functions in terms of the proper functions
(cf. eq.~(\ref{eq:connected})). The lighter blobs represent the
connected functions, the darker blobs represent the irreducible
functions.}
\label{f:5}
\end{figure}
\section{Perturbation theory on non quadratic actions}
\subsection{Statement of the problem and main result}
All the previous statements are well known in the literature. Consider
now the action $S[\phi]+\delta S[\phi]$, where
\begin{equation}
\delta S[\phi]=\sum_{n\ge 0}\frac{1}{n!} \delta g_{i_1\dots i_n}
\phi^{i_1}\cdots\phi^{i_n}\,,
\end{equation}
defines the {\em perturbative vertices}, $\delta g_{i_1\dots i_n}$. The
above defined standard expansion to compute the full Green's
functions corresponds to the Feynman rules associated to
the theory ``$S_Q+(S_I+\delta S)$'', i.e., with $g_{i_1\cdots i_n}+\delta
g_{i_1\cdots i_n}$ as new vertices. Equivalently, one can use an
obvious generalization of the Feynman rules, using one kind of
line, $s^{ij}$, and two kinds of vertices, $g_{i_1\dots i_n}$ and
$\delta g_{i_1\dots i_n}$, which should be considered as
distinguishable. As an alternative, we seek instead a diagrammatic
calculation in terms of $\Gamma[\phi]$ and $\delta S[\phi]$, that is,
using $\overline{s}^{ij}$ as line and $\overline{g}_{i_1\dots i_n}$
and $\delta g_{i_1\dots i_n}$ as vertices. The question of which new
Feynman rules are to be used with these building blocks is answered by
the following
{\bf Theorem.} {\em The Green's functions associated to
$S[\phi]+\delta S[\phi]$ follow from applying the Feynman rules of the
theory ``$\Gamma_Q+(\Gamma_I+\delta S)$'' plus the further
prescription of removing the graphs that contain ``unperturbed
loops'', i.e., loops constructed entirely from effective elements
without any perturbative vertex $\delta g_{i_1\dots i_n}$.}
This constitutes the basic result of this paper. The same statement
holds in the presence of fermions. The proof is given below. We remark
that the previous result does not depend on particular choices, such
as $\overline{g}_i=\overline{g}_{ij}=0$. As a consistency check of the
rules, we note that when $\delta S$ vanishes only tree level graphs of
the theory ``$\Gamma_Q+\Gamma_I$'' remain, which is indeed the correct
result. On the other hand, when $S[\phi]$ is quadratic, it coincides
with its effective action (up to an irrelevant constant) and therefore
there are no unperturbed loops to begin with. Thus, in this case our
rules reduce to the ordinary ones. In this sense, the new rules given
here are the general ones whereas the usual rules correspond only to
the particular case of perturbing an action that is quadratic.
\subsection{Illustration of the new Feynman rules}
To illustrate our rules, let us compute the corrections to the
effective current and the self-energy, $\delta\overline{g}_i$ and
$\delta\Sigma_{ij}$, induced by a perturbation at most quadratic in
the fields, that is,
\begin{equation}
\delta S[\phi]= \delta g_i\phi^i+\frac{1}{2}\delta g_{ij}\phi^i\phi^j \,,
\end{equation}
and at first order in the perturbation. To simplify the result, we
will choose a vanishing $\overline{g}_{ij}$. On the other hand,
$S_Q[\phi]$ will be kept fixed and $\delta S[\phi]$ will be included
in the interacting part of the action, so $\delta\Sigma_{ij}=
\delta\overline{m}_{ij}$.
Applying our rules, it follows that $\delta\overline{g}_i$ is given by
the sum of 1-point diagrams of the theory ``$\Gamma_Q+(\Gamma_I+\delta
S)$'' with either one $\delta g_i$ or one $\delta g_{ij}$ vertex and
which are connected, amputated, 1-particle irreducible and contain no
unperturbed loops. Likewise, $\delta\Sigma_{ij}$ is given by 2-point
such diagrams. It is immediate that $\delta g_i$ can only appear in
0-loop graphs and $\delta g_{ij}$ can only appear in 0- or 1-loop
graphs, since further loops would necessarily be unperturbed. The
following result is thus found
\begin{eqnarray}
\delta\overline{g}_i &=& \delta g_i + \frac{1}{2}\delta g_{ab}
\overline{s}^{aj}\overline{s}^{bk}\overline{g}_{jki}\,, \nonumber \\
\delta\Sigma_{ij} &=& \delta g_{ij} + \delta g_{ab}\overline{s}^{ak}
\overline{s}^{b\ell}\overline{g}_{kni} \overline{g}_{\ell rj}
\overline{s}^{nr} +\frac{1}{2}\delta g_{ab}
\overline{s}^{ak}\overline{s}^{b\ell}\overline{g}_{k\ell ij}\,.
\label{eq:2}
\end{eqnarray}
The graphs corresponding to the r.h.s. are shown in
figure~\ref{f:6}. There, the small full dots represent the
perturbative vertices, the lines with lighter blobs represent the
effective line and the vertices with darker blobs are the effective
vertices. The meaning of this equation is, as usual, that upon
expansion of the skeleton graphs in the r.h.s., every ordinary Feynman
graph (i.e. those of the theory ``$S_Q+(S_I+\delta S)$'') appears once
and only once, and with the correct weight. In other words, the new
graphs are a resummation of the old ones.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.6cm\epsffile{f.6.EPS}}}
\vspace{6pt}
\caption{Diagrammatic representation of eqs.~(\ref{eq:2}). The small
full dot represents perturbation vertices. All graphs are amputated.}
\label{f:6}
\end{figure}
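As a trivial consistency check of eqs.~(\ref{eq:2}) (no new content),
if the unperturbed action is quadratic then all proper vertices with
three or more legs vanish, the loop terms drop out, and
\begin{equation}
\delta\overline{g}_i = \delta g_i \,, \qquad
\delta\Sigma_{ij} = \delta g_{ij} \,,
\end{equation}
which is the expected first-order result of ordinary perturbation
theory around a quadratic action.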
Let us take advantage of the above example to make several
remarks. First, in order to use our rules, all $n$-point effective
vertices have to be considered, in principle. In the example of
figure~\ref{f:6}, only the 3-point proper vertex is needed for the
first order perturbation of the effective current and only the 3- and
4-point proper vertices are needed for the self-energy. Second, after
the choice $\overline{g}_{ij}=0$, the corrections to any proper vertex
require only a finite number of diagrams, for any given order in each
of the perturbation vertices $\delta g_{i_1\dots i_n}$. Finally,
skeleton graphs with unperturbed loops should not be
included. Consider, e.g. the graph in figure~\ref{f:7}$a$. This graph
contains an unperturbed loop. If its unperturbed loop is contracted to
a single blob, this graph becomes the third 2-point graph in
figure~\ref{f:6}, therefore it is intuitively clear that it is
redundant. In fact, the ordinary graphs obtained by expanding the
blobs in figure~\ref{f:7}$a$ in terms of ``$S_Q+S_I$'' are already
accounted for by the expansion of the third 2-point graph in
figure~\ref{f:6}.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=2.5cm\epsffile{f.7.EPS}}}
\vspace{6pt}
\caption{$(a)$ A redundant graph. Meaning of lines and vertices as
in figures~\ref{f:5} and ~\ref{f:6}. $(b)$ The associated
unperturbed graph to $(a)$.}
\label{f:7}
\end{figure}
For a complicated diagram of the theory ``$\Gamma_Q+(\Gamma_I+\delta
S)$'', the cleanest way to check for unperturbed loops is to construct
its {\em associated unperturbed graph}. This is the graph of the theory
``$\Gamma_Q+\Gamma_I$'' which is obtained after deleting all perturbation
vertices, so that the ends previously attached to such vertices become
external legs in the new graph. Algebraically this means to remove the
$\delta g_{i_1\dots i_n}$ factors so that the involved indices become
external (uncontracted) indices. The number of unperturbed loops of
the old (perturbed) graph coincides with the number of loops of the
associated unperturbed graph. The associated graph to that in
figure~\ref{f:7}$a$ is depicted in figure~\ref{f:7}$b$.
\section{Some applications}
Of course, the success of the standard Feynman diagrammatic technique
is based on the fact that quadratic actions, unlike non quadratic
ones, can be easily and fully solved. Nevertheless, even when the
theory $S[\phi]$ is not fully solved, our expansion can be
useful. First, it helps in organizing the calculation. Indeed, in the
standard expansion the same 1-, 2-,..., $n$-point unperturbed Green's
functions are computed over and over, as subgraphs, instead of only
once. Second, and related, because the perturbative expansion in
$S_I[\phi]$ must be truncated, in the standard expansion one is in
general using different approximations for the same Green's functions
of $S[\phi]$ in different subgraphs. As a consequence, some known
exact properties (such as symmetries, experimental values of masses or
coupling constants, etc) of the Green's functions of $S[\phi]$ can be
violated by the standard calculation. On the contrary, in the
expansion proposed here, the Green's functions of $S[\phi]$ are taken
as an input and hence one can make approximations to them (not
necessarily perturbative) to enforce their known exact properties. As
an example consider the Casimir effect. The physical effect of the
conductors is to change the photon boundary conditions. This in turn
corresponds to modify the free photon propagator~\cite{Bo85}, i.e., to
add a quadratic perturbation to the Lagrangian of quantum
electrodynamics (QED). Therefore our expansion applies. The advantage
of using it is that one can first write down rigorous relations
(perturbative in $\delta S$ but non perturbative from the point of
view of QED) and, in a second step, the required QED propagators and
vertex functions can be approximated (either perturbatively or by some
other approach) in a way that is consistent with the experimentally
known mass, charge and magnetic moment of the electron, for instance.
Another example would be chiral perturbation theory: given some
approximation to massless Quantum Chromodynamics (QCD), the
corrections induced by the finite current quark masses can be
incorporated using our scheme as a quadratic perturbation. Other
examples would be the corrections induced by a non vanishing
temperature or density, both modifying the propagator.
\subsection{Derivation of diagrammatic identities}
Another type of applications comes in the derivation of diagrammatic
identities. We can illustrate this point with some Schwinger-Dyson
equations~\cite{Dy49,Sc51,It80}. Let $\epsilon^i$ be field
independent. Then, noting that the action $S[\phi+\epsilon]$ has
$\Gamma[\phi+\epsilon]$ as its effective action, and for infinitesimal
$\epsilon^i$, it follows that the perturbation $\delta
S[\phi]=\epsilon^i\partial_i S[\phi]$ yields a corresponding
correction $\delta\Gamma[\phi]=\epsilon^i\partial_i \Gamma[\phi]$ in
the effective action. Therefore for this variation we can write:
\begin{eqnarray}
\delta\overline{g}_i &=& \delta\partial_i\Gamma[0]=
\epsilon^j\partial_i\partial_j\Gamma[0]=
\epsilon^j(m+\Sigma)_{ij}\,, \nonumber \\
\delta\Sigma_{ij} &=& \delta\partial_i\partial_j\Gamma[0]=
\epsilon^k\partial_i\partial_j\partial_k\Gamma[0]=
\epsilon^k\overline{g}_{ijk}\,.
\end{eqnarray}
Let us particularize to a theory with a 3-point bare vertex, then
$\delta S[\phi]$ is at most a quadratic perturbation with vertices
$\delta g_j =\epsilon^i(m_{ij}+g_{ij})$ and $\delta g_{jk}=\epsilon^i
g_{ijk}$. Now we can immediately apply eqs.~(\ref{eq:2}) to obtain
the well known Schwinger-Dyson equations
\begin{eqnarray}
\Sigma_{ij} &=& g_{ij}+
\frac{1}{2}g_{iab}\bar{s}^{a\ell}\bar{s}^{br}\bar{g}_{\ell rj} \,, \\
\overline{g}_{cij} &=& g_{cij}+
g_{cab}\overline{s}^{ak}\overline{s}^{b\ell}\overline{g}_{kni}
\overline{g}_{\ell rj}\overline{s}^{nr}
+\frac{1}{2}g_{cab}\overline{s}^{ak}\overline{s}^{b\ell}
\overline{g}_{k\ell ij}\,. \nonumber
\end{eqnarray}
The corresponding diagrams are depicted in figure~\ref{f:8}.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.3cm\epsffile{f.8.EPS}}}
\vspace{6pt}
\caption{Two Schwinger-Dyson equations for a cubic action.}
\label{f:8}
\end{figure}
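For completeness, the perturbative vertices used above follow from a
direct expansion (a one-line check): for an action with at most a
3-point bare vertex,
\begin{equation}
\delta S[\phi]=\epsilon^i\partial_iS[\phi]
=\epsilon^ig_i+\epsilon^i\left(m_{ij}+g_{ij}\right)\phi^j
+\frac{1}{2}\,\epsilon^ig_{ijk}\phi^j\phi^k\,,
\end{equation}
the constant term being a 0-point vertex that cancels in all
expectation values, which identifies $\delta g_j$ and $\delta g_{jk}$
as quoted.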
\subsection{Effective Lagrangians and the double-counting problem}
There are instances in which we do not have (or is not practical to
use) the underlying unperturbed action and we are provided directly,
through the experiment, with the Green's functions. In these cases it
is necessary to know which Feynman rules to use with the exact Green's
functions of $S$. Consider for instance the propagation of particles
in nuclear matter. This is usually described by means of so called
effective Lagrangians involving the nucleon field and other relevant
degrees of freedom (mesons, resonances, photons, etc). These
Lagrangians are adjusted to reproduce at tree level the experimental
masses and coupling constants. (Of course, they have to be
supplemented with form factors for the vertices, widths for the
resonances, etc, to give a realistic description, see
e.g.~\cite{Er88}.) Thus they are a phenomenological approximation to
the effective action rather than to the underlying bare action $S$. So
to say, Nature has solved the unperturbed theory (in this case the
vacuum theory) for us and one can make experimental statements on the
exact (non perturbative) Green's functions. The effect of the nuclear
medium is accounted for by means of a Pauli blocking correction to the
nucleon propagator in the vacuum, namely,
\begin{eqnarray}
G(p)&=&(p^0-\epsilon(\vec{p})+i\eta)^{-1}+
2i\pi n(\vec{p})\delta(p^0-\epsilon(\vec{p}))
= G_0(p) +\delta G(p)\,,
\end{eqnarray}
where $G_0(p)$ and $G(p)$ stand for the nucleon propagator at vacuum
and at finite density, respectively, $n(\vec{p})$ is the Fermi sea
occupation number and $\epsilon(\vec{p})$ is the nucleon kinetic
energy. In the present case, the vacuum theory is the unperturbed one
whereas the Pauli blocking correction is a 2-point perturbation to the
action and our expansion takes the form of a density expansion.
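For orientation (assuming the simplest case of a free Fermi sea with
spin--isospin degeneracy 4, as appropriate for symmetric nuclear
matter), the occupation number and the density are
\begin{equation}
n(\vec{p}\,)=\theta(p_F-|\vec{p}\,|)\,,\qquad
\rho=4\int\frac{d^3p}{(2\pi)^3}\,n(\vec{p}\,)=\frac{2p_F^3}{3\pi^2}\,,
\end{equation}
so that each insertion of $\delta G(p)$ brings along one power of the
density.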
The use of an effective Lagrangian, instead of a more fundamental one,
allows one to perform calculations in terms of physical quantities and
this makes the phenomenological interpretation more direct. However,
the use of the standard Feynman rules is not really justified since
they apply to the action and not to the effective action, to which the
effective Lagrangian is an approximation. A manifestation of this
problem comes in the form of double-counting of vacuum contributions,
which has to be carefully avoided. This is obvious already in the
simplest cases. Consider, for instance, the nucleon self-energy coming
from exchange of virtual pions, with corresponding Feynman graph
depicted in figure~\ref{f:9}$a$. This graph gives a non vanishing
contribution even at zero density. Such vacuum contribution is
spurious since it is already accounted for in the physical mass of the
nucleon. The standard procedure in this simple case is to subtract the
same graph at zero density in order to keep the true self-energy. This
is equivalent to drop $G_0(p)$ in the internal nucleon propagator and
keep only the Pauli blocking correction $\delta G(p)$. In more
complicated cases simple overall subtraction does not suffice, as it
is well known from renormalization theory; there can be similar
spurious contributions in subgraphs even if the graph vanishes at zero
density. An example is shown in the photon self-energy graph of
figure~\ref{f:9}$b$. The vertex correction subgraphs contain a purely
vacuum contribution that is already accounted for in the effective
$\gamma NN$ vertex. Although such contributions vanish if the
exchanged pion is static, they do not in general. As is clear from our
theorem, the spurious contributions are avoided by not allowing vacuum
loops in the graphs. That is, for each standard graph consider all the
graphs obtained by substituting each $G(p)$ by either $G_0(p)$ or
$\delta G(p)$ and drop all graphs with any purely vacuum loop. We
emphasize that strictly speaking the full propagator and the full
proper vertices of the vacuum theory have to be used to construct the
diagrams. In each particular application it is to be decided whether a
certain effective Lagrangian (plus form factors, widths, etc) is a
sufficiently good approximation to the effective action.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=3.0cm\epsffile{f.9.EPS}}}
\vspace{6pt}
\caption{Nucleon (a) and photon (b) self-energy diagrams.}
\label{f:9}
\end{figure}
\subsection{Derivation of low density theorems}
A related application of our rules comes from deriving low density
theorems. For instance, consider the propagation of pions in nuclear
matter and in particular the pionic self-energy at lowest order in an
expansion on the nuclear density. To this end one can use the first
order correction to the self-energy as given in eq.~(\ref{eq:2}), when
the labels $i,j$ refer to pions and the 2-point perturbation is the
Pauli blocking correction for the nucleons. Thus, the labels
$a,b,k,\ell$ (cf. second line of figure~\ref{f:6}) necessarily refer
to nucleons whereas $n,r$ can be arbitrary baryons ($B$). In this
case, the first 2-point diagram in figure~\ref{f:6} vanishes since
$i,j$ are pionic labels which do not have Pauli blocking. On the other
hand, as the nuclear density goes to zero, higher order diagrams
(i.e. with more than one full dot, not present in figure~\ref{f:6})
are suppressed and the second and third 2-point diagrams are the
leading contributions to the pion self energy. The $\pi NB$ and
$\pi\pi NN$ proper vertices in these two graphs combine to yield the
$\pi N$ $T$-matrix, as is clear by cutting the corresponding graphs by
the full dots. (Note that the Dirac delta in the Pauli blocking term
places the nucleons on mass shell.) We thus arrive at the following
low density theorem~\cite{Hu75}: at lowest order in a density
expansion in nuclear matter, the pion optical potential is given by
the nuclear density times the $\pi N$ forward scattering
amplitude. This result holds independently of the detailed
pion-nucleon interaction and regardless of the existence of other kinds
of particles as well since they are accounted for by the $T$-matrix.
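Schematically, and leaving the precise sign and normalization to the
conventions adopted for the $T$-matrix, the statement reads
\begin{equation}
\Pi_\pi(\omega,\vec{q}\,)\simeq
\rho\,\left.T_{\pi N}(\omega,\vec{q}\,)\right|_{\rm forward}\,,
\end{equation}
with corrections of higher order in the density.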
\subsection{Applications to non perturbative renormalization in
Quantum Field Theory}
Let us consider a further application, this time
to the problem of renormalization in Quantum Field Theory (QFT). To be
specific we consider the problem of ultraviolet divergences. To first
order in $\delta S$, our rules can be written as
\begin{equation}
\delta\Gamma[\phi] =\langle\delta S\rangle^\phi\,,
\label{eq:Lie}
\end{equation}
where $\langle A\rangle^\phi$ means the expectation value of $A[\phi]$
in the presence of an external current $J$ tuned to yield $\phi$ as
the expectation value of the field. This formula is most simply
derived directly from the definitions given above. (In passing, let us
note that this formula defines a group of transformations in the space
of actions, i.e., unlike standard perturbation theory, it preserves
its form at any point in that space.) We can consider a family of
actions, taking the generalized coupling constants as parameters, and
integrate the above first order evolution equation taking e.g. a
quadratic action as starting point. Perturbation theory corresponds to
a Taylor expansion solution of this equation.
To use this idea in QFT, note that our rules directly apply to any
pair of regularized bare actions $S$ and $S+\delta S$. Bare means that
$S$ and $S+\delta S$ are the true actions that yield the expectation
values in the most naive sense and regularized means that the cut off
is in place so that everything is finite and well defined. As it is
well known, a parametric family of actions is said to be
renormalizable if the parameters can be given a suitable dependence on
the cut off so that all expectation values remain finite in the limit
of large cut off (and the final action is non trivial, i.e., non
quadratic). In this case the effective action has also a finite
limit. Since there is no reason to use the same cut off for $S$ and
$\delta S$, we can effectively take the infinite cut off limit in
$\Gamma$ keeping finite that of $\delta S$. (For instance, we can
regularize the actions by introducing some non locality in the
vertices and taking the local limit at different rates for both
actions.) So when using eq.~(\ref{eq:Lie}), we will find diagrams with
renormalized effective lines and vertices from $\Gamma$ and bare
regularized vertices from $\delta S$. Because $\delta\Gamma$ is also
finite as the cut off is removed, it follows that the divergences
introduced by $\delta S$ should cancel with those introduced by the
loops. This allows one to restate the renormalizability of a family of
actions as the problem of showing that 1) assuming a given asymptotic
behaviour for $\Gamma$ at large momenta, the parameters in $\delta S$
can be given a suitable dependence on the cut off so that
$\delta\Gamma$ remains finite, 2) the assumed asymptotic behaviour is
consistent with the initial condition (e.g. a free theory) and 3) this
asymptotic behaviour is preserved by the evolution equation. This
would be an alternative to the usual forest formula analysis which
would not depend on perturbation theory. If the above program were
successfully carried out (the guessing of the correct asymptotic
behaviour being the most difficult part) it would allow one to write a
renormalized version of the evolution equation~(\ref{eq:Lie}) and no
further renormalizations would be needed. (Related ideas regarding
evolution equations exist in the context of low momenta expansion, see
e.g.~\cite{Morris} or to study finite temperature
QFT~\cite{Pietroni}.)
To give an (extremely simplified) illustration of these ideas, let us
consider the family of theories with Euclidean action
\begin{equation}
S[\phi,\psi]=\int
d^4x\left(\frac{1}{2}(\partial\phi)^2+\frac{1}{2}m^2\phi^2
+\frac{1}{2}(\partial\psi)^2+\frac{1}{2}M^2\psi^2
+\frac{1}{2}g\phi\psi^2 + h\phi + c\right).
\end{equation}
Here $\phi(x)$ and $\psi(x)$ are bosonic fields in four dimensions.
Further, we will consider only the approximation of no
$\phi$-propagators inside loops. This approximation, which treats
the field $\phi$ at a quasi-classical level, is often made in the
literature. As it turns out, the corresponding evolution equation
is consistent, that is, the right-hand side of eq.~(\ref{eq:Lie}) is
still an exact differential after truncation. In order to evolve the
theory we will consider variations in $g$, and also in $c$, $h$ and
$m^2$, since these latter parameters require a ($g$-dependent)
renormalization. (There are no field, $\psi$-mass or coupling constant
renormalization in this approximation.) That is
\begin{equation}
\delta S[\phi,\psi]= \int
d^4x\left(\frac{1}{2}\delta m^2\phi^2
+\frac{1}{2}\delta g\phi\psi^2 + \delta h\phi + \delta c\right)\,.
\end{equation}
The graphs with zero and one $\phi$-leg are divergent and clearly they
are renormalized by $\delta c$ and $\delta h$, so we concentrate on the
remaining divergent graph, namely, that with two $\phi$-legs. Noting
that in this quasi-classical approximation $g$ coincides with the full
effective coupling constant and $S_\psi(q)=(q^2+M^2)^{-1}$ coincides
with the full propagator of $\psi$, an application of the rules
gives (cf. figure~\ref{f:10})
\begin{equation}
\delta\Sigma_\phi(k)= \delta m^2 - \delta g
g\int\frac{d^4q}{(2\pi)^4}\theta(\Lambda^2-q^2) S_\psi(q)S_\psi(k-q)\,,
\label{eq:21}
\end{equation}
where $\Lambda$ is a sharp ultraviolet cut off.
\begin{figure}[htb]
\centerline{\mbox{\epsfxsize=12cm\epsffile{f.10.EPS}}}
\vspace{6pt}
\caption{Diagrammatic representation of eq.~(\ref{eq:21}).}
\label{f:10}
\end{figure}
Let us denote the cut off integral by $I(k^2,\Lambda^2)$. This
integral diverges as $\frac{1}{(4\pi)^2}\log(\Lambda^2)$ for large
$\Lambda$ and fixed $k$. Hence $\delta\Sigma_\phi$ is guaranteed to
remain finite if, for large $\Lambda$, $\delta m^2$ is taken in the
form
\begin{equation}
\delta m^2 = \delta m_R^2 + \delta g g\frac{1}{(4\pi)^2}\log(\Lambda^2/\mu^2)
\end{equation}
where $\mu$ is an arbitrary scale (cut off independent), and $\delta
m_R^2$ is an arbitrary variation. Thus, the evolution equation for
large cut off can be written in finite form, that is, as a
renormalized evolution equation, as follows
\begin{equation}
\delta\Sigma_\phi(k)= \delta m_R^2 - \delta g gI_R(k^2,\mu^2)\,,
\end{equation}
where
\begin{equation}
I_R(k^2,\mu^2)=\lim_{\Lambda\to\infty}\left(I(k^2,\Lambda^2)
-\frac{1}{(4\pi)^2}\log(\Lambda^2/\mu^2)\right)\,.
\end{equation}
Here $\delta g$ and $\delta m_R^2$ are independent and arbitrary
ultraviolet finite variations. The physics remains constant if a
different choice of $\mu$ is compensated by a corresponding change in
$\delta m_R^2$ so that $\delta m^2$, and hence the bare regularized action,
is unchanged. The essential point has been that $\delta m^2$ could be
chosen $\Lambda$ dependent but $k^2$ independent. As mentioned, this
example is too simple since it hardly differs from standard
perturbation theory. The study of the general case (beyond
quasi-classical approximations) with this or other actions seems very
interesting from the point of view of renormalization theory.
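As a check on the logarithmic behaviour quoted above (a direct
evaluation of the cut off integral at $k=0$ with the sharp cut off of
eq.~(\ref{eq:21})), one finds
\begin{equation}
I(0,\Lambda^2)=\frac{1}{8\pi^2}\int_0^{\Lambda}\frac{q^3\,dq}{(q^2+M^2)^2}
=\frac{1}{(4\pi)^2}\left[\log\frac{\Lambda^2+M^2}{M^2}
-\frac{\Lambda^2}{\Lambda^2+M^2}\right]\,,
\end{equation}
so that $I_R(0,\mu^2)=\frac{1}{(4\pi)^2}\left[\log(\mu^2/M^2)-1\right]$
with the subtraction defined above.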
\section{Proof of the theorem}
In order to prove the theorem it will be convenient to change the
notation: we will denote the unperturbed action by $S_0[\phi]$ and its
effective action by $\Gamma_0[\phi]$. The generating function of the
full perturbed system is
\begin{equation}
Z[J]= \int\exp\left(S_0[\phi]+\delta S[\phi] +J\phi\right)\,d\phi \,.
\label{eq:8}
\end{equation}
By definition of the effective
action, the connected generating function of the unperturbed theory is
\begin{equation}
W_0[J]=\max_\phi\left(\Gamma_0[\phi]+J\phi\right)\,,
\end{equation}
thus, up to a constant ($J$-independent) factor, we can write
\begin{eqnarray}
\exp\left(W_0[J]\right) &=& \lim_{\hbar\to 0}\left[\int
\exp\left(\hbar^{-1}\left(\Gamma_0[\phi]+J\phi\right)\right)
\,d\phi\right]^{\textstyle\hbar}\,.
\end{eqnarray}
$\hbar$ is merely a bookkeeping parameter here which is often used to
organize the loop expansion~\cite{Co73,Ne87}. The $\hbar$-th power above can be
produced by means of the replica method~\cite{Ne87}. To this end we
introduce a number $\hbar$ of replicas of the original field, which
will be distinguished by a new label $k$. Thus, the previous equation
can be rewritten as
\begin{equation}
\exp\left(W_0[J]\right)= \lim_{\hbar\to 0}\int
\exp\left(\hbar^{-1}\sum_k\left(\Gamma_0[\phi_k]+J\phi_k\right)\right)
\prod_kd\phi_k \,.
\label{eq:10}
\end{equation}
On the other hand, the identity (up to a constant)
$\int\exp\left(J\phi\right)\,d\phi = \delta[J]$, where $\delta[J]$
stands for a Dirac delta, allows one to write the reciprocal relation of
eq.~(\ref{eq:Z}), namely
\begin{equation}
\exp\left(S_0[\phi]\right)= \int\exp\left(W_0[J_0]-J_0\phi\right)\,dJ_0 \,.
\label{eq:11}
\end{equation}
If we now use eq.~(\ref{eq:10}) for $\exp W_0$ in eq.~(\ref{eq:11})
and the result is substituted in eq.~(\ref{eq:8}), we obtain
\begin{equation}
Z[J]= \lim_{\hbar\to 0}\int\exp\left(
\hbar^{-1}\sum_k\left(\Gamma_0[\phi_k]+J_0\phi_k\right) +\delta
S[\phi] +\left(J-J_0\right)\phi \right) \,dJ_0\,d\phi\prod_kd\phi_k
\,.
\end{equation}
The integration over $J_0$ is immediate and yields a Dirac delta for
the variable $\phi$, which allows this integration to be carried out
as well. Finally, the following formula is obtained:
\begin{equation}
Z[J]= \lim_{\hbar\to 0}\int
\exp\left(\hbar^{-1}\sum_k\left(\Gamma_0[\phi_k]+J\phi_k\right)
+\delta S\big[\hbar^{-1}\sum_k\phi_k\big]\right)\prod_kd\phi_k \,.
\label{eq:13}
\end{equation}
This expresses $Z[J]$ in terms of $\Gamma_0$ and $\delta S$. Except
for the presence of replicas and explicit $\hbar$ factors, this
formula has the same form as that in eq.~(\ref{eq:8}) and hence it
yields the same standard Feynman rules but with effective lines and
vertices.
Consider any diagram of the theory ``$\Gamma_Q+(\Gamma_I+\delta S)$'',
as described by eq.~(\ref{eq:13}) before taking the limit $\hbar\to
0$. Let us now show that such a diagram carries precisely a factor
$\hbar^{L_0}$, where $L_0$ is the number of unperturbed loops in the
graph. Let $P$ be the total number of lines (both internal and
external), $E$ the number of legs, $L$ the number of loops and $C$ the
number of connected components of the graph. Furthermore, let $V^0_n$
and $\delta V_n$ denote the number of $n$-point vertices of the types
$\Gamma_0$ and $\delta S$ respectively. After these definitions, let
us first count the number of $\hbar$ factors coming from the explicit
$\hbar^{-1}$ in eq.~(\ref{eq:13}). The arguments are
standard~\cite{Co73,It80,Ne87}: from the Feynman rules it is clear
that each $\Gamma_0$ vertex carries a factor $\hbar^{-1}$, each
effective propagator carries a factor $\hbar$ (since it is the inverse
of the quadratic part of the action), each $n$-point $\delta S$ vertex
carries a factor $\hbar^{-n}$ and each leg a $\hbar^{-1}$ factor
(since they are associated to the external current $J$). That is, this
number is
\begin{equation}
N_0 = P - \sum_{n\geq 0} V^0_n-E-\sum_{n\ge 0}n\delta V_n \,.
\end{equation}
Recall now the definition given above of the associated unperturbed
diagram, obtained after deleting all perturbation vertices, and let
$P_0$, $E_0$, $L_0$ and $C_0$ denote the corresponding quantities for
such unperturbed graph. Note that the two definitions given for the
quantity $L_0$ coincide. Due to its definition, $P_0=P$ and also
$E_0=E+\sum_{n\geq 0}n\delta V_n$, which allows $N_0$ to be rewritten as
\begin{equation}
N_0= P_0-\sum_{n\geq 0} V^0_n-E_0\,.
\end{equation}
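For completeness, we recall where the identity used below comes from:
with $I_0=P_0-E_0$ the number of internal lines of the unperturbed
graph, the standard loop-counting relation
$L_0=I_0-\sum_{n\geq 0}V^0_n+C_0$ gives
\begin{equation}
N_0=P_0-\sum_{n\geq 0}V^0_n-E_0=L_0-C_0\,.
\end{equation}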
Since all quantities now refer to the unperturbed graph, use can be
made of the well known diagrammatic identity $N_0=L_0-C_0$. Thus from
the explicit $\hbar$, the graph picks up a factor
$\hbar^{L_0-C_0}$. Let us now turn to the implicit $\hbar$ dependence
coming from the number of replicas. The replica method idea applies
here directly: because all the replicas are identical, summation over
each different free replica label in the diagram yields precisely one
$\hbar$ factor. From the Feynman rules corresponding to the theory of
eq.~(\ref{eq:13}) it is clear that all lines connected through
$\Gamma_0$ vertices are constrained to have the same replica label,
whereas the coupling through $\delta S$ vertices does not impose any
conservation law of the replica label. Thus, the number of different
replica labels in the graph coincides with $C_0$. In this argument it
is essential to note that the external current $J_i$ has not been
replicated; it couples equally to all the replicas. Combining this
result with that previously obtained, we find that the total $\hbar$
dependence of a graph goes as $\hbar^{L_0}$. As a consequence, all
graphs with unperturbed loops are removed after taking the limit
$\hbar\to 0$. This establishes the theorem.
Some remarks can be made at this point. First, it may be noted that
some of the manipulations carried out in the derivation of
eq.~(\ref{eq:13}) were merely formal (beginning with the very definition
of the effective action, since there could be more than one extremum
in the Legendre transformation), however they are completely
sufficient at the perturbative level. Indeed, order by order in
perturbation theory, the unperturbed action $S_0[\phi]$ can be
expressed in terms of its effective action $\Gamma_0[\phi]$, hence the
Green's functions of the full theory can be expressed perturbatively
within the diagrams of the theory ``$\Gamma_Q+(\Gamma_I+\delta
S)$''. It only remains to determine the weighting factor of each graph
which by construction (i.e. the order by order inversion) will be just
a rational number. Second, it is clear that the manipulations that
lead to eq.~(\ref{eq:13}) can be carried out in the presence of
fermions as well, and the same conclusion applies. Third, note that in
passing, it has been proven also the statement that the effective
action yields at tree level the same Green's functions as the bare
action at all orders in the loop expansion, since this merely
corresponds to set $\delta S[\phi]$ to zero. Finally,
eq.~(\ref{eq:13}) does not depend on any particular choice, such as
fixing $\langle\phi^i\rangle=0$ to remove tadpole subgraphs.
\section*{Acknowledgments}
L.L. S. would like to thank C. Garc\'{\i}a-Recio and J.W. Negele for
discussions on the subject of this paper. This work is supported in
part by funds provided by the U.S. Department of Energy (D.O.E.)
under cooperative research agreement \#DF-FC02-94ER40818, Spanish
DGICYT grant no. PB95-1204 and Junta de Andaluc\'{\i}a grant no.
FQM0225.
\section{Introduction}
\label{section:I}
Because of both its simplicity and its nontrivial nature,
(2+1)-dimensional Einstein gravity serves as a good
test case for pursuing quantum gravity in
the framework of general relativity. In particular,
because of the low dimensionality,
the global degrees of freedom of a space can be analyzed
quite explicitly in this case
~\cite{MART,MONC,HOS,CAR1}.
Recently, back-reaction effects from quantum matter
on the global degrees of freedom of a semiclassical universe
were analyzed explicitly~\cite{MS1}. In this analysis, the
(2+1)-dimensional homogeneous spacetime with topology
${\cal M}\simeq T^2 \times {\bf R}$ was chosen as a model.
This problem was investigated from a general interest on
the global properties of a semiclassical universe, whose
analysis has not yet been pursued
sufficiently~\cite{MS1,MS2,MS3}.
In this analysis, it was also investigated whether
the path-integral measure could give a correction
to the semiclassical dynamics of
the global degrees of freedom~\cite{MS1}.
By virtue of several techniques developed in string theory,
one can give a meaning to a partition function, formally
defined as
\begin{equation}
Z={\cal N} \int [dh_{ab}][d\pi^{ab}][dN][dN_a]\ \exp iS\ \ \ .
\label{eq:partition0}
\end{equation}
Here $h_{ab}$ and $\pi^{ab}$ are a spatial metric and its
conjugate momentum, respectively; $N$ and $N_a$ are the lapse
function and the shift vector, respectively;
$S$ is the canonical action
for Einstein gravity.
It is expected that $Z$ reduces to the form
\[
Z={\cal N}
\int [dV\ d\sigma][d\tau^A\ dp_A][dN'] \
\mu(V, \sigma, \tau^A, p_A) \
\exp i S_{ reduced}\ \ \ .
\]
Here $V$, $\sigma$, $\tau^A$, and $p_A$ ($A=1,2$) are, respectively,
the 2-volume (area) of a torus, its conjugate momentum,
the Teichm\"uller parameters, and their conjugate momenta;
$N'$ is the spatially constant part of $N$;
$S_{ reduced}$ is the reduced action written in terms of
$V$, $\sigma$, $\tau^A$ and $p_A$. The factor
$\mu(V, \sigma, \tau^A, p_A)$ is a possible nontrivial measure,
which can cause a modification of the semiclassical evolution
determined by $S_{ reduced}$. The
result of Ref.\cite{MS1} was that
$\mu(V, \sigma, \tau^A, p_A)=1$: The partition function defined
as in Eq.(\ref{eq:partition0}) is equivalent, after
a suitable gauge fixing,
to the one defined directly from the reduced system,
$S_{ reduced}$.
Though this result looks natural at first sight,
it is far from trivial.
One needs to extract a finite dimensional reduced phase space
from an infinite dimensional original phase space. Therefore,
it is meaningful to show that such a natural reduction
is really achieved by a suitable gauge fixing.
The main interest in Ref.\cite{MS1} was the explicit analysis
of the semiclassical dynamics of a tractable model,
${\cal M}\simeq T^2 \times {\bf R}$.
Therefore, the analysis of
the reduction of the partition function
was inevitably restricted to
the special model in question. Namely
it was the case of $g=1$, where $g$ is a genus of a Riemann
surface. Furthermore, the model was set to be spatially
homogeneous from the outset.
It is then desirable for completeness to generalize the
analysis in Ref.\cite{MS1} to the general case of any
$g\geq 1$.
More significantly, there is one issue remaining to be
clarified in the case of $g=1$: The relation
between the reduced system of the
type of Ref.\cite{MONC} and the one of the type of
Ref.\cite{HOS} in the context of quantum theory.
For brevity, let us call the former formulation the
$\tau$-form and the latter one the
$(\tau, V)$-form.
The $\tau$-form takes $(\tau^A, p_A)$ as fundamental
canonical pairs and the action is given by~\cite{MONC}
\begin{equation}
S[\tau^A, p_A]
=\int d\sigma \{ p_A\ d{\tau}^A / d\sigma- V(\sigma, \tau^A, p_A) \} \ \ \ .
\label{eq:tau}
\end{equation}
On the other hand the $(\tau, V)$-form
uses $(V, \sigma)$ as well
as $(\tau^A, p_A)$ and the action is given in the
form~\cite{HOS}
\begin{equation}
S[(\tau^A, p_A), (V, \sigma)]
=\int dt\ \{ p_A\ \dot{\tau}^A + \sigma \dot{V}- N\
H(\tau^A, p_A, V, \sigma) \}\ \ \ .
\label{eq:tauV}
\end{equation}
[The explicit expression for $H$ shall be presented later
(Eq.(\ref{eq:reducedaction})).]
The key procedure in deriving the
$(\tau, V)$-form (in the classical sense)
is to choose $N=$ spatially constant~\cite{HOS}.
Since the compatibility of this choice with
York's time-slicing
is shown by means of the equations of motion~\cite{HOS},
one should investigate the effect of this choice
in quantum theory.
Furthermore, the condition $N=$ spatially constant
is not in the standard form of a canonical gauge, so that
the analysis of its role at the quantum level
requires special care.
Since the model analyzed in Ref.\cite{MS1} was
chosen to be spatially homogeneous,
this issue did not arise there.
We shall clarify these issues here.
Regarding the $(\tau, V)$-form, there is
another issue which is not very clear.
In this formulation, $(V, \sigma)$ joins to
$(\tau^A, p_A)$ as one of the canonical pairs.
Therefore~\cite{MS1}, $\int[d\sigma]$
should appear in the final form of $Z$
as well as $\int[dV]$.
Since the adopted gauge-fixing condition is $\pi/\sqrt{h} -\sigma=0$
(York's gauge~\cite{YORK}), $\sigma$ plays the role of
a label parameterizing a family of allowed gauge-fixing
conditions,
so that it is not dynamical in the beginning.
Therefore, the appearance of $\int[d\sigma]$ is not obvious,
and it is worth tracing from the viewpoint of
a general gauge-fixing procedure.
We shall investigate these points.
Independently from the analysis of Ref.\cite{MS1}, Carlip also
investigated the relation between two partition
functions,
one being defined on the entire phase space, and the other
one on
the reduced phase space in the sense of the
$\tau$-form~\cite{CAR2}.
With regard to this problem,
his viewpoint was more general
than Ref.\cite{MS1}. He showed that, for the case of
$g\geq 2$,
the partition function formally defined as
in Eq.(\ref{eq:partition0})
is equivalent to the one for the reduced system in the
$\tau$-form. On the other hand, the exceptional case of
$g=1$ was not analyzed in as much detail.
Indeed, we shall see later that the case of $g=1$ can yield a
different result compared with the case of $g \geq 2$.
In this respect, his analysis and the analysis
in Ref.\cite{MS1}
complement each other. Furthermore,
his method of analysis is quite different
from the one developed in Ref.\cite{MS1}.
In particular, it looks difficult to trace
the appearance of $\int[d\sigma]$ if his analysis
is applied to the case of $g=1$ in the
$(\tau, V)$-form. It may be useful, therefore,
to investigate all the cases of $g\geq 1$ from a
different angle,
namely by a developed version of the method of
Ref.\cite{MS1}.
In view of this status of previous work, we shall
present here
the full analysis for all the cases $g\geq 1$.
In particular, a more detailed investigation for
the case of $g=1$
shall be performed.
In \S \ref{section:II}, we shall investigate for $g\geq 1$
the reduction of the
partition function of Eq.(\ref{eq:partition0}), to the one for
the reduced system in the $\tau$-form.
In \S \ref{section:III},
we shall investigate how the $(\tau, V)$-form
emerges for $g=1$ in the course of
the reduction of the partition function,
Eq.(\ref{eq:partition0}). We shall find out that
a nontrivial measure appears in the formula defining
a partition function, if the $(\tau, V)$-form is
adopted. We shall see that this factor is understood as
the Faddeev-Popov determinant associated with the
reparameterization invariance inherent in the $(\tau, V)$-form.
Furthermore we shall see that another factor can appear in the
measure for the case of $g=1$, originating from the
existence of the zero modes of a certain differential operator
$P_1$. It depends on how to define
the path-integral domain for the shift vector $N_a$ in $Z$:
If it is defined to include $\ker P_1$, the nontrivial factor
does not appear, while it appears if the integral domain is
defined to exclude $\ker P_1$.
We shall discuss that this factor
can influence the semiclassical dynamics of the
(2+1)-dimensional spacetime with $g=1$.
These observations urge us to clarify how to choose
the integral domain for $N_a$ in quantum gravity.
Section \ref{section:IV} is devoted to several discussions.
In {\it Appendix},
we shall derive useful formulas which shall become
indispensable for our analysis.
\section{The partition function for (2+1)-gravity}
\label{section:II}
Let us consider a (2+1)-dimensional spacetime,
${\cal M}\simeq \Sigma \times {\bf R}$, where
$\Sigma$ stands for a compact, closed, orientable
2-surface with genus $g$. The partition function for
(2+1)-dimensional pure Einstein gravity is formally given by
\begin{equation}
Z={\cal N} \int [dh_{ab}][d\pi^{ab}][dN][dN_a]
\exp i\int dt \int_{\Sigma} d^2x \ (\pi^{ab} \dot{h}_{ab}
-N{\cal H} -N_a {\cal H}^a), \ \ \
\label{eq:partition}
\end{equation}
where \footnote{
We have
chosen units such that $c=\hbar=1$ and such that the
Einstein-Hilbert
action becomes just $\int R \sqrt{-g}$ up to a boundary
term.
The spatial indices $a,b, \cdots$ are raised and lowered
by $h_{ab}$. The operator $D_a$ is the covariant
derivative
w.r.t. (with respect to)
$h_{ab}$, and $^{(2)} \! R$ stands for a scalar
curvature
of the 2-surface $\Sigma$. Unless otherwise stated,
the symbols $\pi$ and $h$ stand
for $h_{ab}\pi^{ab}$ and $\det h_{ab}$, respectively,
throughout this paper.}
\begin{eqnarray}
{\cal H} &=& (\pi^{ab}\pi_{ab} - \pi^2)\sqrt{h^{-1} } -
( ^{(2)} \! R - 2\lambda)\sqrt{h}
\ \ \ ,
\label{eq:hamiltonian} \\
{\cal H}^a &=& -2D_b \pi^{ab} \ \ \ .
\label{eq:momentum}
\end{eqnarray}
Here, $\lambda$ is the cosmological constant which is set to be
zero if it is not being considered.
Regarding ${\cal H}$ as a function of $\sqrt{h}$, ${\cal H} ={\cal H} (\sqrt{h})$, the canonical
pair $(\sqrt{h}, \pi / \sqrt{h})$ can be chosen as the pair to be gauge-fixed.
One natural way to fix the gauge is to impose a 1-parameter
family of gauge-fixing conditions,
\begin{equation}
\chi_{{}_1} := {\pi \over \sqrt{h}} - \sigma =0\ \ \ (\exists \sigma \in
{\bf R})\ \ ,
\label{eq:gauge1}
\end{equation}
where $\sigma$ is a spatially constant parameter
(York's gauge~\cite{YORK}).
Let us make clear the meaning of the gauge Eq.(\ref{eq:gauge1}).
We adopt the following
notations; $(P_1^\dagger w)^a := -2D_b w^{ab}$ for a symmetric
traceless tensor $w^{ab}$;
$\tilde{\pi}^{ab}:= \pi^{ab}-{1\over 2}\pi h^{ab}$ is the
traceless part of $\pi^{ab}$ and in particular
$\tilde{\pi}^{'ab}$ stands for
$\tilde{\pi}^{'ab} \notin \ker P_1^\dagger$.
Now, let $Q:= {\pi \over \sqrt{h}}$ and
$Q':= \int_{\Sigma} d^2 x\ \sqrt{h} \ Q \big/
\int_{\Sigma} d^2 x \sqrt{h} $,
which is the spatially constant component of $Q$.
Therefore,
${\cal P}' \ (\cdot)
:= \int_{\Sigma} d^2 x\ \sqrt{h} \ (\cdot)
\big/ \int_{\Sigma} d^2 x \sqrt{h} $
forms a linear map which projects $Q$ to $Q'$.
On the other hand,
${\cal P}= 1- {\cal P}'$ projects $Q$ to
its spatially varying component.
Note that $({\cal P}Q, {\cal P}' Q)=0$ w.r.t. the
natural inner product ({\it Appendix} $A$).
Then, Eq.(\ref{eq:gauge1})
can be recast as
\begin{equation}
\chi_{{}_1} := {\cal P} \left( {\pi \over \sqrt{h}} \right)=0\ \ \ .
\label{eq:gauge1'}
\end{equation}
We note that ${\cal H}^a = -2D_b \tilde{\pi}^{ab}
=: (P_1^\dagger \tilde{\pi})^{ab}$
under the condition of Eq.(\ref{eq:gauge1'}).
Similarly, regarding
${\cal H}^a$ as a function of $\tilde{\pi}^{'ab}$, ${\cal H}^a ={\cal H}^a (\tilde{\pi}^{'ab})$, the pair
$(h_{ab}/\sqrt{h}, \tilde{\pi}^{'ab}\sqrt{h})$ is to be gauge-fixed.
Thus we choose as a gauge-fixing condition,
\begin{equation}
\chi_{{}_2} := {h_{ab} \over \sqrt{h}}- \hat{h}_{ab} (\tau^A) =0\ \ \
(\exists \tau^A \in {\cal M}_g) \ \ \ ,
\label{eq:gauge2}
\end{equation}
where $\hat{h}_{ab}$ is an $m$-parameter family of reference metrics
($m=2, 6g-6$ for $g=1$, $g \geq 2$, respectively) s.t.
$\det \hat{h}_{ab} =1 $; $\tau^A$ ($A=1, \cdots, m$) denote the
Teichm\"uller parameters parameterizing the moduli space
${\cal M}_g$ of $\Sigma$~\cite{HAT}.
At this stage, we recall~\cite{HAT} that
a general variation of $h_{ab}$ can be decomposed as
$\delta h_{ab}= \delta_W h_{ab} + \delta_D h_{ab} +\delta_M h_{ab}$, where
$\delta_W h_{ab}$ is the trace part of $\delta h_{ab}$
(Weyl deformation),
$\delta_D h_{ab}=(P_1v)_{ab}:=D_a v_b + D_b v_a - D_c v^c h_{ab}$
for ${\exists v^a}$ (the traceless part of a diffeomorphism),
and
$\delta_M h_{ab}=
{\cal T}_{Aab} \delta \tau^A :=
({\partial h_{ab} \over \partial \tau^A}
-{1\over 2}h^{cd}{\partial h_{cd} \over
\partial \tau^A }h_{ab}) \delta \tau^A$
(the traceless part of
a moduli deformation).\footnote{
Needless to say, these quantities are defined for $h_{ab}$,
a spatial metric induced on $\Sigma$. Therefore,
under the condition
(\ref{eq:gauge2}), they are calculated using
$\sqrt{h} \hat{h}_{ab} (\tau^A)$, and
not just $\hat{h}_{ab} (\tau^A)$.
\label{footnote:remark}}
It is easy to show~\cite{HAT}
that the adjoint of $P_1$ w.r.t. the natural
inner product
({\it Appendix} $A$) becomes $(P_1^\dagger w)^a :=-2D_b w^{ab}$,
acting on a symmetric traceless
tensor $w^{ab}$. [Therefore the notation
``$P_1$" is compatible with the
notation ``$P_1^\dagger $" introduced just after
Eq.(\ref{eq:gauge1}).]
Now, the meaning of the gauge Eq.(\ref{eq:gauge2}) is
as follows.
The variation of $h_{ab}/\sqrt{h}$ in the neighborhood of
$\hat{h}_{ab} (\tau^A)$ is expressed as\footnote{
The symbol $\delta \left\{ \cdot \right\}$ shall
be used to represent a
variation whenever there is a possibility
of being confused
with the delta function $\delta (\cdot)$.
}
\[
\delta \left\{ h_{ab}/\sqrt{h} \right\}
= \delta_D \left\{ h_{ab}/\sqrt{h} \right\} + \delta_M \left\{ h_{ab}/\sqrt{h}
\right\}\ \ \ .
\]
Let $Riem_1 (\Sigma)$ denote the space of unimodular
Riemannian metrics on $\Sigma$.
We introduce projections defined on the tangent space of
$Riem_1 (\Sigma)$ at $\hat{h}_{ab} (\tau^A)$,\newline
$T_{\hat{h}_{ab} (\tau^A)} (Riem_1 (\Sigma))$:
\begin{eqnarray*}
{\cal P}_D \left( \delta \left\{ h_{ab}/\sqrt{h} \right\} \right)
&=& \delta_D \left\{ h_{ab}/\sqrt{h} \right\} \ \ \ , \\
{\cal P}_M \left( \delta \left\{ h_{ab}/\sqrt{h} \right\} \right)
&=& \delta_M \left\{ h_{ab}/\sqrt{h} \right\} \ \ \ , \\
{\cal P}_D + {\cal P}_M &=&1 \ \ \ .
\end{eqnarray*}
Then, the gauge Eq.(\ref{eq:gauge2}) is recast as
\begin{equation}
\chi_{{}_2} = {\cal P}_D \left( \delta \left\{ h_{ab}/\sqrt{h} \right\}
\right) =0\ \ \ .
\label{eq:gauge2'}
\end{equation}
On $Riem_1 (\Sigma)$ we can introduce a system of coordinates
in the neighborhood of each $\hat{h}_{ab} (\tau^A)$. Then
$[d h_{ab}]$ in Eq.(\ref{eq:partition}) is expressed as
$[d\sqrt{h}][d \delta\left\{ h_{ab}/\sqrt{h} \right\}]$.
[It is easy to show that
the Jacobian factor associated with this change of variables
is unity.]
Finally let us discuss the integral domain for $N_a$ in
Eq.(\ref{eq:partition}) for the case of $g=1$.\footnote{
The author thanks S. Carlip for valuable remarks
on this point.}
Let us note that, under the gauge Eq.(\ref{eq:gauge1'}), we get
\[
\int_{\Sigma} d^2 x \ N_a {\cal H}^a =
2 \int_{\Sigma} d^2 x (P_1 N)_{ab} \tilde{\pi}^{ab}\ \ \ .
\]
Thus, when $N_a \in \ker P_1$,
$N_a$ does not work as a Lagrange multiplier
enforcing the momentum constraint
Eq.(\ref{eq:momentum}).
Then there are two possible options for
the path-integral domain of
$N_a$:
\def\labelenumi{(\alph{enumi})}
\def\theenumi{\alph{enumi}}
\begin{enumerate}
\item All of the vector fields on $\Sigma$, including $\ker P_1$.
\label{item:a}
\item All of the vector fields on $\Sigma$, except for $\ker P_1$.
\label{item:b}
\end{enumerate}
If we choose the option (\ref{item:a}), the integral over
$N_a$ in Eq.(\ref{eq:partition}) yields the factors
$\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)
\delta (P_1^\dagger \tilde{\pi})$.
Here $\{ \varphi _\alpha \}_{\alpha =1,2}$ is a basis of $\ker P_1$
for the case of $g=1$.\footnote{
Let us recall that $\dim \ker P_1 = 6, 2$
and $0$ for $g=0$, $g=1$ and
$g \geq 2$, respectively. On the other hand,
$\dim \ker P_1^\dagger = 0, 2$
and $6g-6$ for $g=0$, $g=1$ and
$g \geq 2$, respectively.
There is a relation
$\dim \ker P_1 - \dim \ker P_1^\dagger =6-6g$
(Riemann-Roch Theorem)~\cite{HAT}.
[Throughout this paper, $\dim W$ indicates
the real dimension of a space $W$, regarded as
a vector space over $\bf R$.]}
The factor $\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)$
appears here since it is proportional to the volume of
$\ker P_1$ w.r.t. the natural inner product.
If we choose the option (\ref{item:b}), the integral
over $N_a$ yields just a factor
$\delta (P_1^\dagger \tilde{\pi})$.
Integrating over the Lagrange multipliers $N$ and $N_a$,
Eq.(\ref{eq:partition}) reduces to
\begin{equation}
Z={\cal N} \int [dh_{ab}][d\pi^{ab}]\ {\cal B}\
\delta ({\cal H}) \delta({\cal H}^a)
\exp i\int dt \int_{\Sigma} d^2x \ \pi^{ab} \dot{h}_{ab}
\ \ \ ,
\label{eq:partition2}
\end{equation}
where
\[
{\cal B} = \cases{
\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)&
{\rm when}\ g=1\
{\rm with\ the\ option\ (\ref{item:a})}
\cr
1 & otherwise \cr
}\ \ \ .
\]
According to the Faddeev-Popov procedure~\cite{FAD}, we insert
into the right-hand side of Eq.(\ref{eq:partition2}) the factors
\[
|\det \{{\cal H}, \chi_{{}_1} \}|
|\det \{{\cal H}^a, \chi_{{}_2} \}|
\delta(\chi_{{}_1}) \delta(\chi_{{}_2}) \ \ \ .
\]
Note that, because
$\{ \int v_a {\cal H}^a, \chi_{{}_1} \} = -v_a {\cal H}^a -v^c
D_c \chi_{{}_1}=0$
{\it mod}
${\cal H}^a=0$ and $\chi_{{}_1} =0$,
the Faddeev-Popov determinant separates into
two factors as above.\footnote{
For notational neatness, the symbol of absolute value
associated with the Faddeev-Popov
determinants shall be omitted for most
of the cases. }
The determinants turn to simpler expressions if we note the
canonical structure of our system;
\begin{eqnarray*}
\int_{\Sigma} d^2x \ \pi^{ab} \delta h_{ab}
&=& \int_{\Sigma} d^2x \
\left( \tilde{\pi}^{ab} + {1\over 2} \pi h^{ab} \right)
\left( \delta_W h_{ab} + \delta_D h_{ab} +\delta_M h_{ab} \right) \\
&=& \int_{\Sigma} d^2x \
\left( { \pi\over \sqrt{h}} \delta \sqrt{h} +
(P_1^\dagger \tilde{\pi}')^a v_a
+ \tilde{\pi}^{ab} \delta_M h_{ab} \right)\ \ \ .
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\det \{{\cal H}, \chi_{{}_1} \} &=& {\partial {\cal H} \over
\partial \sqrt{h} }\ \ \ , \\
\det \{{\cal H}^a, \chi_{{}_2} \} &=&
\left( \det{\partial {\cal H}^a \over \partial
\tilde{\pi}^{'ab}}
\right) \cdot {\partial \chi_{{}_2} \over
\partial \left( \delta_D h_{ab} \right)}
= \det{}' P_1^\dagger \ \sqrt{h}
\ \ \ .
\end{eqnarray*}
Thus we get
\begin{eqnarray}
Z &{}&={\cal N}
\int [ d \sqrt{h} \ \ d \left( \delta \left\{ h_{ab}/\sqrt{h} \right\} \right)\
d\left( \pi/\sqrt{h} \right) \ d \tilde{\pi}^{ab}]\ {\cal B}
\nonumber \\
&{}& \ {\partial {\cal H} \over \partial \sqrt{h} }\
\det{}' P_1^\dagger \ \sqrt{h} \
\delta ({\cal H})\ \delta (P_1^\dagger \tilde{\pi})\
\delta \left( {\cal P} \left( \pi / \sqrt{h} \right) \right) \
\delta \left( {\cal P}_D \left( \delta \left\{ h_{ab}/\sqrt{h} \right\}
\right)
\right) \nonumber \\
&{}& \exp i \int dt \int_{\Sigma} d^2 x \
\big( \tilde{\pi}^{ab} + {1\over 2} \pi h^{ab} \big)
\dot{h}_{ab} \ \ \ .
\label{eq:partition3}
\end{eqnarray}
We can simplify the above expression.
First of all, the path integral w.r.t.
$ \pi / \sqrt{h} $ in Eq.(\ref{eq:partition3})
is of the form
\[
I_1 = \int d \left( \pi / \sqrt{h} \right)
\delta \left( {\cal P} \left( \pi / \sqrt{h} \right) \right)
F\left( \pi / \sqrt{h} \right)\ \ \ ,
\]
so that Eq.(\ref{eq:keyformula}) in $\it Appendix$ $B$
can be applied. Note that $\ker {\cal P}$ is the
space of spatially constant
functions, which forms a 1-dimensional vector space over
$\bf R$. Now $\dim \ker {\cal P}=1$, so that $dp_A$ and
$d (p_A \vec{\Psi}^A)$ are equivalent,
following the notation in
$\it Appendix$ $B$. Furthermore
$\cal P$ is a projection. Thus no extra Jacobian factor
appears in this case.
Thus we get
\[ I_1 = \int [d\sigma] F({\cal P} \left( \pi / \sqrt{h} \right)=0, \sigma )
\ \ \ ,
\]
where $\sigma$ denotes a real parameter parameterizing
$\ker {\cal P}$.
Second, the path integral w.r.t.
$ \delta \left\{ h_{ab}/\sqrt{h} \right\} $
is of the form
\[ I_2=\int d \delta \left\{ h_{ab}/\sqrt{h} \right\}
\delta \left( {\cal P}_D \left( \delta \left\{ h_{ab}/\sqrt{h} \right\}
\right)
\right) G\left( \delta \left\{ h_{ab}/\sqrt{h} \right\} \right) \ \ \ .
\]
Note that $\ker {\cal P}_D = \delta_M \left\{ h_{ab}/\sqrt{h} \right\}
= \sqrt{h^{-1}} \delta_M h_{ab}$ and $\det{}' {\cal P}_D =1$.
Let
$\{ \xi ^A \}$ $(A=1, \cdots , \dim \ker P_1^\dagger )$
be a basis of $\ker {\cal P}_D $. Then the factor
$\det{}^{1/2} (\xi^A , \xi^B)$
(see Eq.(\ref{eq:keyformula}))
is given as
\begin{equation}
\det{}^{1/2} (\xi^A , \xi^B)=
\det ({\cal T}_A , \Psi^B) \det{}^{-1/2} (\Psi^A, \Psi^B)
\sqrt{h^{-1}}\ \ ,
\label{eq:detxi}
\end{equation}
where $\{ \Psi^A \}$ $( A=1, \cdots , \dim \ker P_1^\dagger )$
is a basis of $\ker P_1^\dagger$.
This expression is obtained as follows.
Carrying out a standard manipulation~\cite{HAT,CAR2,MS1},
\footnote{
Because $P_1^\dagger$ is a Fredholm operator on a space of
symmetric traceless tensors ${\cal W}$, ${\cal W}$ can be
decomposed as
${\cal W}= Im P_1 \oplus \ker P_1^\dagger$~\cite{NAKA}.
Therefore ${\cal T}_{Aab} \delta \tau^A \in {\cal W}$ is
uniquely decomposed in the form of
$P_1 u_0 + ({\cal T}_A , \Psi^B)
{(\Psi^{\cdot}, \Psi^{\cdot})^{-1}}_{BC}
\Psi^C \delta \tau^A$.
Then, $(P_1 \tilde{v})_{ab}:= (P_1 (v+u_0))_{ab} $.}
\begin{eqnarray}
\delta h_{ab}&=& {\delta \sqrt{h} \over \sqrt{h}} h_{ab} + (P_1 v)_{ab}
+ {\cal T}_{Aab} \delta \tau^A \nonumber \\
&=& {\delta \sqrt{h} \over \sqrt{h}} h_{ab} + (P_1 \tilde{v})_{ab}
+ ({\cal T}_A , \Psi^B)
{(\Psi^{\cdot}, \Psi^{\cdot})^{-1}}_{BC}
\Psi^C \delta \tau^A\ \ \ .
\label{eq:delh}
\end{eqnarray}
For the present purpose, the first and second terms are
set to be zero. [See the
footnote \ref{footnote:remark}.]
According to {\it Appendix} $A$, then,
it is easy to get Eq.(\ref{eq:detxi}). Then with the help of
Eq.(\ref{eq:keyformula}), we get
\[
I_2 = \int d\tau^A\
\det ({\cal T}_A , \Psi^B) \det{}^{-1/2} (\Psi^A, \Psi^B)
\sqrt{h^{-1}}\
G \left( \delta_D \left\{ h_{ab}/\sqrt{h} \right\}=0, \tau^A \right)
\ \ \ .
\]
Here we understand that the integral domain for
$\int d\tau^A$ is on the moduli space
${\cal M}_g$, and not the Teichm\"uller space, which
is the universal covering space of ${\cal M}_g$~\cite{HAT}.
This is clear
because $\tau^A$ appears in the integrand $G$ only through
$\hat{h}_{ab}$ (Eq.(\ref{eq:gauge2})).
We note that
the kinetic term in Eq.(\ref{eq:partition3}) becomes
\begin{eqnarray*}
\int_{\Sigma} d^2 x \
{ (\tilde{\pi}^{ab}+{1\over 2}\pi h^{ab})
\dot{h}_{ab} }_{|_{ \chi_{{}_1}=\chi_{{}_2}=0 } }
&=& \int_{\Sigma} d^2 x \
\left( \tilde{\pi}^{ab} + {1\over 2} \sigma \sqrt{h} h^{ab} \right)
{\dot{h}_{ab}}
|_{ \chi_{{}_2} =0 } \\
&{}& =
\left( \tilde{\pi}^{'ab} + p_A \Psi^{Aab} \sqrt{h}, {\cal T}_{Bcd}
\right) \dot{\tau}^B +\sigma \dot{V} \ \ \ .
\end{eqnarray*}
Here $V:=\int_{\Sigma} d^2 x\ \sqrt{h}$, which is interpreted as a
2-volume (area) of $\Sigma$.
[See {\it Appendix} $A$ for the inner product of densitized
quantities.]
Finally, the path integral w.r.t. $ \tilde{\pi}^{ab}$ in
Eq.(\ref{eq:partition})
is of the form
\[
I_3 = \int d \tilde{\pi}^{ab}\ \delta (P_1^\dagger \tilde{\pi})\
H \left( \tilde{\pi}^{ab} \right)\ \ \ .
\]
Using Eq.(\ref{eq:keyformula}), this is recast as
\[
I_3=\int dp_A\ \det{}^{1/2} (\Psi^A, \Psi^B)
(\det{}' P_1^\dagger)^{-1}
H \left( \tilde{\pi}^{'ab}=0, p_A \right) \ \ \ .
\]
Combining the above results for $I_1$, $I_2$ and $I_3$,
the expression in Eq.(\ref{eq:partition3}) is recast as
\begin{eqnarray}
Z=&{}& {\cal N} \int [d\sqrt{h} \ d \sigma \ d\tau^A dp_A ] \
{\partial {\cal H} \over \partial \sqrt{h} }\ \delta ({\cal H})\
{ \det ({\cal T}_A , \Psi^B) \over \det{}^{1/2}
(\varphi_\alpha , \varphi_\beta)}\ {\cal B}
\nonumber \\
&{}& \exp i \int_{\Sigma}dt\
\{ p_A (\Psi^A, {\cal T}_B)\dot{\tau}^B
+ \sigma \dot{V} \} \ \ \ .
\label{eq:partition4}
\end{eqnarray}
The reason why the factor
$\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$ appears in
Eq.(\ref{eq:partition4}) for $g=1$ shall be discussed
below. [For the case of $g \geq 2$,
the factor $\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$
should be set to unity.]
Without loss of generality, we can choose a basis of
$\ker P_1^\dagger$,
$\{ \Psi^A \}$, as to satisfy
$({\cal T}_A , \Psi^B)={\delta _A}^B $.
Under our gauge choice, the equation
${\cal H}=0$, considered as an equation for $\sqrt{h}$,
has a unique solution,
$\sqrt{h} = \sqrt{h} (\cdot\ ; \sigma, \tau^A, p_A)$, for fixed
$\sigma$, $\tau^A$, and $p_A$~\cite{MONC}.
We therefore obtain
\begin{equation}
Z= {\cal N} \int [d\sigma \ d\tau^A dp_A ] \
\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta) \ {\cal B}
\ \exp i \int\ dt\ ( p_A \dot{\tau}^A
+ \sigma \dot{V}(\sigma, \tau^A, p_A) ) \ \ \ ,
\label{eq:partition5}
\end{equation}
where
$V(\sigma, \tau^A, p_A):=\int_{\Sigma} d^2 x\ \sqrt{h}(x; \sigma, \tau^A, p_A)$,
which is regarded as a function of
$\sigma$, $\tau^A$, and $p_A$.
It is clear that there is still the invariance under the
reparameterization $t \rightarrow f(t) $ remaining in
Eq.(\ref{eq:partition5}). From the geometrical viewpoint,
this corresponds to the
freedom in the way of labeling the time-slices
defined by Eq.(\ref{eq:gauge1}).
[This point is also clear in the analysis of Ref.\cite{MONC}.
The treatment of this
point seems somewhat obscure in the analysis of
Ref.\cite{CAR2}.]
The present system illustrates that the time reparameterization
invariance still remains even after choosing the time-slices
(Eq.(\ref{eq:gauge1}) or Eq.(\ref{eq:gauge1'})).
Eq.(\ref{eq:partition5}) is equivalent to
\begin{eqnarray}
Z &=& {\cal N} \int [d\sigma \ dp_{\sigma} d\tau^A dp_A ] [dN'] \
\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta) \ {\cal B} \nonumber \\
&{}& \ \exp i \int \ dt\ \{ p_A \dot{\tau}^A + p_{\sigma} \dot{\sigma}
- N'(p_{\sigma} + V(\sigma, \tau^A, p_A)) \} \ \ \ ,
\label{eq:partition6}
\end{eqnarray}
where the integration by parts is understood.
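[As a consistency check (a sketch only, with the boundary term at $t_1$ and
$t_2$ discarded, as stated above), one can go back from
Eq.(\ref{eq:partition6}) to Eq.(\ref{eq:partition5}): the integration over
$N'$ formally yields a delta functional enforcing
$p_{\sigma}+V(\sigma, \tau^A, p_A)=0$ at each $t$, and the subsequent
integration over $p_{\sigma}$ gives
\[
\exp i \int dt\ \{ p_A \dot{\tau}^A - V \dot{\sigma} \}
=\exp i \int dt\ \{ p_A \dot{\tau}^A + \sigma \dot{V}
- {d \over dt}(\sigma V) \}\ \ \ ,
\]
which reduces to the exponent of Eq.(\ref{eq:partition5}) once the total
derivative term is dropped.]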
This system has a structure similar to that of
a relativistic
particle and that of a non-relativistic particle in
parameterized form~\cite{HART}. We shall discuss this
point in detail in the final section.
One can gauge-fix the reparameterization symmetry by choosing
$\sigma=t$, i.e. by imposing a condition $\chi = \sigma -t =0$.
The Faddeev-Popov procedure~\cite{FAD} in this case
reduces to simply inserting $\delta (\sigma-t)$ into
Eq.(\ref{eq:partition6}).
Thus we obtain
\begin{equation}
Z={\cal N}
\int [d\tau^A dp_A] \ {\cal A} \
\exp i \int d\sigma ( p_A\ d{\tau}^A / d\sigma- V(\sigma, \tau^A, p_A))
\ \ \ .
\label{eq:partition7}
\end{equation}
Here
\[
{\cal A} = \cases{
\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)&
{\rm when}\ g=1\
{\rm with\ the\ option\ (\ref{item:b})}
\cr
1 & otherwise \cr
}\ \ \ .
\]
Looking at the exponent in Eq.(\ref{eq:partition7}),
we see that
$V(\sigma, \tau^A, p_A)$ plays the role of
a time-dependent Hamiltonian in the present gauge~\cite{MONC}.
We see that the partition function formally defined by
Eq.(\ref{eq:partition}) is equivalent to the
partition function defined by taking
the reduced system as a starting point, as can be read off in
Eq.(\ref{eq:partition7}). However, there is one point to
be noted.
For the case of $g=1$ with the option (\ref{item:b}),
the factor $\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$ appears.
This factor
can cause a nontrivial effect. We shall come back to this point
in the next section.
Typically, this factor can be a function of $V(\sigma, \tau^A, p_A)$
(see below, Eq.(\ref{eq:factor1})).
On the contrary, for the case of $g = 1$ with the
option (\ref{item:a}) and for the case of $g \geq 2$, this factor
does not appear. We especially note that, for
the case of $g = 1$ with the option (\ref{item:a}),
the factor $\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)$
coming from $\int [d N_a]$ cancels with the same factor
appeared in Eq.(\ref{eq:partition4}).\footnote{
The author thanks S. Carlip for very helpful
comments on this point.}
Let us discuss the factor
$\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$
in Eq.(\ref{eq:partition7}).
In the case of $g=1$, the space $\ker P_1$, which is
equivalent to a space of
conformal Killing vectors, is nontrivial. Now a special class
of Weyl deformations represented as
$\delta_W h_{ab}= D \cdot v_0\ h_{ab}$, where $v_0 \in \ker P_1$,
is translated into a diffeomorphism: $D \cdot v_0\ h_{ab}
= (P_1 v_0)_{ab} + D \cdot v_0\ h_{ab}
= {\cal L}_{v_0} h_{ab}$.
[Here ${\cal L}_{v_0}$ denotes the Lie derivative w.r.t. $v_0$.]
Thus, $\delta_W h_{ab}= D \cdot v_0\ h_{ab}$, $v_0 \in \ker P_1$
is generated by ${\cal H}^a$ along the gauge orbit.
Therefore it should be removed
from the integral domain for $\int [d\sqrt{h}]$ in
Eq.(\ref{eq:partition3}).
One easily sees that the volume of $\ker P_1$, which should be
factorized out from the whole volume of
the Weyl transformations,
is proportional to
$\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)$. Therefore the
factor $\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$ appears in
Eq.(\ref{eq:partition7}).
There is another way of explaining the factor
$\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$~\cite{MS1}. Let us
concentrate on the diffeomorphism
invariance in Eq.(\ref{eq:partition2}) characterized by
${\cal H}^a=0$.
The Faddeev-Popov determinant associated with this
invariance can be related to the Jacobian
for the change $h_{ab} \rightarrow (\sqrt{h}, v^a, \tau^A)$.
By the same kind of argument as in Eq.(\ref{eq:delh}),
one finds the Faddeev-Popov determinant to be
\[
\Delta_{FP}
= \det ({\cal T}_A , \Psi^B)\ \det{} ^{-1/2} (\Psi^A, \Psi^B)
(\det{}' P_1^\dagger P_1)^{1/2}\ \ \ .
\]
One way of carrying out the Faddeev-Popov procedure is to
insert \\
$1=\int d\Lambda \det{ \partial \chi \over \partial \Lambda}
\delta (\chi)$
into the path-integral formula in question, where $\chi$ is
a gauge-fixing function and $\Lambda$ is a gauge parameter.
Then the path integral in Eq.(\ref{eq:partition2}) reduces
to the form
\begin{eqnarray*}
I&=&\int [dh_{ab} ] [d\sqrt{h}\ dv^a\ d\tau^A][d\ast]\
\delta (h_{ab}-\sqrt{h} \hat{h}_{ab})
\ f(h_{ab}) \\
&=& \int [d\sqrt{h}\ dv^a\ d\tau^A][d\ast]\ f(\sqrt{h} \hat{h}_{ab})\ ,
\end{eqnarray*}
where $[d\ast]$ stands for all of the remaining integral
measures including $\Delta_{FP}$.
Now, we need to factorize out $V_{Diff_0}$,
the whole volume of diffeomorphisms homotopic to the identity. This
volume is related to $\int [dv^a]$ as
$V_{Diff_0} = (\int [dv^a])\cdot V_{\ker P_1}$, where
${V_{\ker P_1}} \propto
\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)$~\cite{HAT}.
Here we note that
$\ker P_1$ is not included in the integral domain of
$\int [dv^a]$:
the diffeomorphism associated with $\forall v_0 \in \ker P_1$
is translated into a Weyl transformation, as
${{\cal L}_{v_0}h}_{ab}=(P_1 v_0)_{ab} + D \cdot v_0\ h_{ab}
=D \cdot v_0\ h_{ab} $ [it is noteworthy that this argument
is reciprocal to the previous one],
so that it is already counted in $\int [d\sqrt{h}]$.
In this manner we get
\[
I= V_{Diff_0} \int { [d\sqrt{h}\ d\tau^A] [d\ast] \over
\det{}^{1/2} (\varphi_\alpha , \varphi_\beta)}\ f(\sqrt{h} \hat{h}_{ab})\ \ \ .
\]
In effect, the volume of $\ker P_1$ has been removed
from the whole volume of the Weyl transformations, which is
the same result as the one in the previous argument.
[Again, for the case of $g \geq 2$,
the factor $\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$
should be set to unity.]
Furthermore, by factorizing the entire volume of
diffeomorphisms, $V_{Diff}$, and not just $V_{Diff_0}$,
the integral domain for
$\int [d\tau^A]$ is reduced to the moduli space,
${\cal M}_g$~\cite{HAT,MS1}. The intermediate step of
factorizing $V_{Diff_0}$ is necessary since the $v^a$'s are
labels parameterizing the tangent
space of $Riem(\Sigma)$, the space of
all Riemannian metrics on $\Sigma$.
\section{Analysis of the $g=1$ case }
\label{section:III}
We now investigate
how the reduced canonical system in the
$(\tau, V)$-form~\cite{HOS}
comes out in the partition function when $g=1$.
To begin with, let us recover $\int [d\sqrt{h}]$ and $\int [d\sigma]$
in Eq.(\ref{eq:partition7}), yielding
\begin{eqnarray}
Z &=& {\cal N}
\int [d\sqrt{h}][d\sigma][d\tau^A dp_A] \ {\cal A} \
{\partial {\cal H} \over \partial \sqrt{h} }\
\delta ({\cal H}) \ \delta (\sigma-t) \nonumber \\
&{}& \ \exp i \int dt
\{ p_A\ {d {\tau}^A \over dt} + \sigma {d \over dt}
V(\sigma, \tau^A, p_A) \}
\ \ \ .
\label{eq:partition7b}
\end{eqnarray}
Eq.(\ref{eq:partition7b}) is of the form
\begin{equation}
I=\int [d\sqrt{h}][d\ast]\
{\partial {\cal H} \over \partial \sqrt{h} }\
\delta ({\cal H})\ f(\sqrt{h})\ \ \ ,
\label{eq:I}
\end{equation}
where $[d\ast]$
stands for all of the remaining integral measures.
Now it is shown that for $g=1$ the simultaneous differential
equations Eq.(\ref{eq:hamiltonian}), Eq.(\ref{eq:momentum}),
Eq.(\ref{eq:gauge1}) (or Eq.(\ref{eq:gauge1'}))
and Eq.(\ref{eq:gauge2}) (or Eq.(\ref{eq:gauge2'}))
have a unique solution
for $\sqrt{h}$, which is spatially constant,
$\sqrt{h} _0
:=F(\tau^A, p_A, \sigma)$~\cite{MONC}.
Thus the integral
region for $\int [d\sqrt{h}]$ in Eq.(\ref{eq:I}) can be restricted
to ${\cal D}=\{\sqrt{h}| \sqrt{h} ={\rm spatially \ constant} \}$.
Let us note that
$\sqrt{h}$ is the only quantity that in principle
can depend on spatial coordinates in
Eq.(\ref{eq:partition7b}).
Accordingly, only the spatially constant components of the
arguments of the integrand contribute to the path integral.
Thus,
\begin{eqnarray*}
I&=&\int [d\ast]\ f(\sqrt{h} _0) \\
&=& \int_{\cal D} [d\sqrt{h}][d\ast]\
\left\{ \int_{\Sigma} d^2x\ \sqrt{h} \
{\partial {\cal H} \over \partial \sqrt{h} }\
{\Bigg/} \int_{\Sigma}d^2 x\ \sqrt{h} \right\}
\times \\
&{}&\times
\delta \left(\int_{\Sigma} d^2x\ {\cal H}\ {\Bigg/}
\int_{\Sigma} d^2x \right) \ f(\sqrt{h}) \\
&=& \int _{\cal D} ([d\sqrt{h}] \int_{\Sigma} d^2x)[d\ast]\
{\partial H \over \partial V }\
\delta (H) \ \tilde{f}(V) \\
&=& \int [dV][d\ast]\
{\partial H \over \partial V }\
\delta (H)\ \tilde{f}(V) \\
&=& \int [dV][d\ast][dN']\ {\partial H \over \partial V }\
\tilde{f}(V)\ \exp-i\int dt\ N'(t) H(t)\ \ \ ,
\end{eqnarray*}
where $H:=\int_{\Sigma} d^2x\ {\cal H}$,
$V:=\int_{\Sigma} d^2x\ \sqrt{h}$ and $\tilde{f}(V):= f(\sqrt{h})$.
The prime symbol in $N'(t)$ is to emphasize that
it is spatially constant.
Thus we see that Eq.(\ref{eq:partition7b}) is equivalent to
\begin{eqnarray}
Z&=&{\cal N}
\int [dV\ d\sigma][d\tau^A\ dp_A] [dN'] \ {\cal A} \
{\partial H \over \partial V } \ \delta (\sigma-t) \nonumber \\
&{}& \exp i \int dt\ ( p_A\ \dot{\tau}^A + \sigma \dot{V}- N'H)
\ \ \ ,
\label{eq:partition8}
\end{eqnarray}
where $V$ and $N'$ are spatially constant, and
$H$ is the reduced Hamiltonian in the $(\tau, V)$-form.
[See below, Eq.(\ref{eq:reducedaction}).]
We choose as a gauge condition
(see Eq.(\ref{eq:gauge2}))~\cite{MS1},
\[
h_{ab}=V \hat{h}_{ab} \ , \ \ \
\hat{h}_{ab} ={ 1\over \tau^2 }
\pmatrix{ 1 & \tau^1 \cr
\tau^1 & |\tau|^2 \cr}\ \ \ ,
\]
where $\tau:= \tau^1 + i \tau^2$
and $\tau^2 >0$.\footnote{
Throughout this section, $\tau^2$ always indicates the second
component of $(\tau^1, \tau^2)$, and never the square of
$\tau:= \tau^1 + i \tau^2$.
}
Here we have already replaced $\sqrt{h}$ with $V=\int_{\Sigma} \sqrt{h}$,
noting that $\sqrt{h}$ is spatially constant for the case of $g=1$.
Then, it is straightforward to get
\[
{\cal T}_{1ab}={V \over \tau^2}
\pmatrix { 0 & 1 \cr
1 & 2\tau^1 \cr},\ \
{\cal T}_{2ab}={V \over (\tau^2)^2}
\pmatrix { -1 & -\tau^1 \cr
-\tau^1 & (\tau^2)^2-(\tau^1)^2 \cr}\ \ .
\]
[See the paragraph next to the one including
Eq.(\ref{eq:gauge2}) for the definition of $\{ {\cal T}_A \}$.]
In constructing a basis $\{\Psi^A \}_{A =1,2}$ of $\ker P^\dagger_1$, the
fact that
$P^\dagger_1({\cal T}_A)_a:=-2D_b{{\cal T}_{Aa}}^b
=-2\partial_b {{\cal T}_{Aa}}^b=0$ simplifies the situation.
We can choose as $\{\Psi^A \}_{A =1,2}$
\[
\Psi^1_{ab}={1 \over 2 }
\pmatrix { 0 & \tau^2 \cr
\tau^2 & 2\tau^1 \tau^2 \cr},\ \
\Psi^2_{ab}={ 1 \over 2 }
\pmatrix { -1 & -\tau^1 \cr
-\tau^1 & (\tau^2)^2-(\tau^1)^2 \cr}\ \ ,
\]
which satisfy $( \Psi ^A, {\cal T}_B) ={\delta^A}_B$.
Now let us consider in detail the case of the option
(\ref{item:b}) (\S \ref{section:II}).
In this case the factor $\cal A$
becomes ${\cal A} = \det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$.
As a basis of $\ker P_1$, $\{\varphi_\alpha \}_{\alpha =1,2}$,
we can take spatially constant vectors because
$D_a=\partial_a$ for the metric in question, and
because constant vectors are compatible with the condition for
the allowed vector fields on $T^2$.
[Note the fact that the Euler characteristic of $T^2$
vanishes, along with the Poincar\'e-Hopf
theorem~\cite{DUB}.]
Therefore, let us take
\[
{\varphi_1}^a = \lambda_1 \pmatrix{1 \cr
0 \cr}\ , \ \
{\varphi_2}^a = \lambda_2 \pmatrix{0 \cr
1 \cr}\ , \ \
\]
where $\lambda_1$ and $\lambda_2$ are spatially constant
factors. Then, we get
\[
(\varphi_\alpha, \varphi_\beta)=
\pmatrix{ {\lambda_1}^2 V^2 /\tau^2 &
\lambda_1 \lambda_2 V^2 \tau^1 /\tau^2 \cr
\lambda_1 \lambda_2 V^2 \tau^1 /\tau^2 &
{\lambda_2}^2 V^2 |\tau|^2/\tau^2 \cr
}\ \ \ .
\]
Thus, we obtain
\begin{equation}
\det{}^{1/2} (\varphi_\alpha, \varphi_\beta) = |\lambda_1 \lambda_2|
V^2\ \ \ .
\label{eq:factor1}
\end{equation}
On account of the requirement that $Z$ should be modular
invariant, $|\lambda_1 \lambda_2|$ can be a function of
$V$ and $\sigma$ at most.
There seems to be no further principle for fixing $|\lambda_1
\lambda_2|$.
Only if we choose $|\lambda_1 \lambda_2|=V^{-2}$ does
the factor $\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$
in Eq.(\ref{eq:partition5}) or Eq.(\ref{eq:partition7}) have no
influence.
No such subtlety occurs in string theory, since $\sigma$ does
not appear and since $V$ is not important on account of
conformal invariance
[except, of course, for the conformal anomaly].
It is easy to see that, in our representation,
the reduced action in the $(\tau, V)$-form
becomes
\begin{eqnarray}
S &=&\int_{t_1}^{t_2} dt\
(p_A \dot{\tau}^A + \sigma \dot{V} - N'(t) H ) \ \ \ ,
\nonumber \\
H &=& { (\tau^2)^2 \over {2V} }
(p_1^2 + p_2^2)-{1\over 2} \sigma^2 V - \Lambda V\ \ \ .
\label{eq:reducedaction}
\end{eqnarray}
Here $\lambda=-\Lambda$ ($\Lambda>0$) corresponds to
the negative cosmological constant, which is set to zero when
it is not considered.
[The introduction of $\lambda$ ($<0$) may be preferable to
sidestep a subtlety of the existence of a special solution
$p_1=p_2=\sigma=0$ for $\lambda=0$. This special solution forms a
conical singularity in the reduced phase space, which has been
already discussed in Ref.\cite{MONC} and in Ref.\cite{CAR2}.]
Therefore, we get
\begin{equation}
- {\partial H \over \partial V } =
{ (\tau^2)^2 \over {2V^2} }(p_1^2 + p_2^2)+{1\over 2} \sigma^2+
\Lambda \ \ \ .
\label{eq:factor2}
\end{equation}
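[For orientation, we note (an explicit sketch; the positive root is taken
since $V$ is an area, and $\sigma^2 + 2\Lambda \neq 0$ is assumed) that the
constraint $H=0$ of Eq.(\ref{eq:reducedaction}) can be solved for $V$ in
closed form,
\[
V(\sigma, \tau^A, p_A) = \tau^2 \sqrt{ p_1^2 + p_2^2 \over \sigma^2 + 2\Lambda }\ \ \ ,
\]
which gives the explicit form of the function $V(\sigma, \tau^A, p_A)$
appearing in Eq.(\ref{eq:partition5}) for $g=1$, and illustrates how the
positivity of $V$ selects one branch of the solution.]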
As discussed in \S \ref{section:I}, the choice of
$N=$ spatially constant, which is consistent with
the equations of motion, is essential in the $(\tau,V)$-form.
This procedure can, however, be influential quantum
mechanically, so that its quantum-theoretical effects should
be investigated.
In particular we need to understand the origin of
the factor $ {\partial H \over \partial V }$ in
Eq.(\ref{eq:partition8}).
Let us start from the action in Eq.(\ref{eq:reducedaction}).
It possesses a time reparameterization invariance:
\begin{eqnarray}
\delta \tau^A = \epsilon (t)\{ \tau^A, H \}\ ,
&{}& \delta p_A = \epsilon (t)\{ p_A , H \}\ ,
\nonumber \\
\delta V = \epsilon (t)\{ V, H \}\ , &{}& \delta \sigma
= \epsilon (t)\{ \sigma , H \}\ ,
\nonumber \\
\delta N' = \dot{\epsilon}(t)\ &{}& {\rm with}\ \ \epsilon (t_1)=\epsilon (t_2)=0
\ \ \ .
\label{eq:reparameterization}
\end{eqnarray}
In order to quantize this system, one needs to fix a time
variable. One possible gauge-fixing condition is
$\chi :=\sigma-t=0$. Then according
to the Faddeev-Popov procedure, the factor
$\{ \chi, H \}\ \delta (\chi)
= -{\partial H \over \partial V }\ \delta (\sigma-t)$ is inserted
into the formal expression for $Z$. The result is equivalent to
Eq.(\ref{eq:partition7b}) up to the factor
$\cal A$.
Now we understand the origin of the nontrivial factor
${\partial H \over \partial V }$ in Eq.(\ref{eq:partition7b}).
In order to shift from the $(\tau, V)$-form to the
$\tau$-form, it is necessary to demote the virtual dynamical
variables $V$ and $\sigma$ to the Hamiltonian and
the time parameter,
respectively. Then, the factor ${\partial H \over \partial V }$
appears as the Faddeev-Popov determinant associated with a
particular time gauge $\sigma=t$.
In this manner, we found that the $(\tau, V)$-form is
equivalent to the $\tau$-form even in the quantum theory,
provided that the time-reparameterization symmetry remnant in
the $(\tau, V)$-form is gauge-fixed by a particular
condition $\chi:=\sigma-t=0$. In particular the key procedure of
imposing $N=$spatially constant~\cite{HOS} turned out to be
independent of the equations of motion themselves
and valid in the quantum theory.
[Of course the fact that
it does not contradict the equations of motion is
important.]
Finally it is appropriate to mention the relation of the
present result with the previous one obtained in
Ref.\cite{MS1}. In Ref.\cite{MS1} also, the case
of $g=1$ was analyzed although the
model was restricted to be spatially homogeneous
in the beginning. The result
there was that the factor ${\partial H \over \partial V }$ did
not appear in the measure although the $(\tau,V)$-form was
adopted. This result is reasonable because in Ref.\cite{MS1}
only the spatial diffeomorphism symmetry associated
with ${\cal H}^a$ was gauge-fixed
explicitly. As for the symmetry associated with ${\cal H}$,
the Dirac-Wheeler-DeWitt procedure was applied instead of the
explicit gauge-fixing.
[Alternatively, one can regard the symmetry
associated with ${\cal H}$ as having been fixed by the non-canonical gauge
$\dot{N}=0$~\cite{HALL}.] Therefore it is reasonable that
${\partial H \over \partial V }$ did not appear in the
analysis of Ref.\cite{MS1}. Thus the result of Ref.\cite{MS1}
is compatible with the present result.
\section{Discussions}
\label{section:IV}
We have investigated how a partition function for
(2+1)-dimensional pure Einstein gravity, formally defined in
Eq.(\ref{eq:partition}), yields a partition function defined on
a reduced phase space by gauge fixing.
We have shown that Eq.(\ref{eq:partition}) reduces to
Eq.(\ref{eq:partition7}), which is interpreted as
a partition function for a reduced system in the $\tau$-form.
For the case of $g \geq 2$, this result is compatible
with Carlip's analysis~\cite{CAR2}.
For the case of $g=1$ with the option (\ref{item:b}), a factor
$\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$ arises as a
consequence of the fact that $\dim \ker P_1 \neq 0$.
This factor can be influential
except when the choice
$\det{}^{1/2}(\varphi_\alpha , \varphi_\beta) =1$ is
justified. The requirement of modular invariance is not
enough to fix this factor.
Furthermore Eq.(\ref{eq:partition}) has turned out to reduce to
Eq.(\ref{eq:partition8}), which is interpreted as a partition
function
for a reduced system in the $(\tau, V)$-form with a
nontrivial measure factor
$ {\partial H \over \partial V }$ as well as the possible
factor
$\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$. The former factor
was interpreted as the Faddeev-Popov determinant
associated with the time gauge $\sigma=t$, which was necessary
to convert from the $(\tau, V)$-form to the $\tau$-form.
The choice of $N=$ spatially constant was
the essential element to derive the $(\tau, V)$-form in the
classical theory.
In particular the equations of motion were used to show
its compatibility with York's gauge~\cite{HOS}. Therefore the
relation of the $(\tau, V)$-form with the $\tau$-form in the
quantum level was required to be clarified. Moreover,
since the condition $N=$ spatially constant is not in the
form of the canonical gauge, the analysis of
its role in the quantum level was intriguing.
Our analysis based on
the path-integral formalism turned out to be powerful
for studying these issues.
Our result shows that
the $(\tau, V)$-form is
equivalent to the $\tau$-form even in the quantum theory,
as far as the time-reparameterization symmetry in
the $(\tau, V)$-form is gauge-fixed by $\chi:=\sigma-t=0$.
The postulation of $N=$spatially constant in deriving the
$(\tau, V)$-form turned out to be
independent of the equations of motion
and harmless even in the quantum theory.
These results are quite suggestive to quantum gravity and
quantum cosmology.
First of all, a measure factor similar to
$\det{}^{-1/2} (\varphi_\alpha , \varphi_\beta)$ is likely to appear
whenever the class of spatial geometries in question admits
conformal Killing vectors ($\ker P_1 \neq \{ 0 \}$).
This factor can be influential on the semiclassical
behavior of the Universe.
The issue of the two options
(\ref{item:a}) and (\ref{item:b}) regarding the path-integral
domain of the shift vector (\S \ref{section:II})
is interesting from
a general viewpoint of gravitational systems. If one imposes
that there should be no extra factor in the path-integral
measure for the reduced system, then the option
(\ref{item:a}) is preferred. There may be other arguments which
favor one of the two options.
[For instance, general covariance of $Z$.] It would be interesting
if a situation similar to $\ker P_1 \neq \{ 0 \}$ occurred in an
asymptotically flat spacetime. In that case, the choice between the options
may influence the gravitational momentum.
As another issue, the variety of representations of the same
system at the classical level and the variety of
gauge-fixing conditions result in
different quantum theories in general, and the relation
between them should be clarified further.
The model analyzed here should be a
good test case for the study of this issue.
To summarize what we have learned and to recognize what still
needs to be clarified,
it is helpful to place our system beside a simpler system with a
similar structure.
The system of a relativistic particle~\cite{HART} is
an appropriate model for illustrating the relation between
the $\tau$-form and the $(\tau, V)$-form.
Let $x^\alpha:=(x^0, \ \vec{x})$ and $p_\alpha:=(p_0, \ \vec{p})$ be
the world point and the four momentum, respectively,
of a relativistic particle. Taking $x^0$ as the time parameter,
the action for the (positive energy) relativistic particle with
rest mass $m$ is
given by
\begin{equation}
S=\int dx^0\ (\vec{p} \cdot {d\vec{x} \over dx^0}
- \sqrt{\vec{p}^2 + m^2 })\ \ \ .
\label{eq:particle1}
\end{equation}
Eq.(\ref{eq:particle1}) corresponds to the $\tau$-form
(Eq.(\ref{eq:tau})). Now one can promote $x^0$ to a
dynamical variable:
\begin{equation}
S=\int dt\ \{ p_\alpha \dot{x}^\alpha
- N(p_0 + \sqrt{\vec{p}^2 + m^2 }) \} \ \ \ .
\label{eq:particle2}
\end{equation}
Here $t$ is an arbitrary parameter s.t. $x^0(t)$ becomes a
monotonic function of $t$; $N$
is the Lagrange multiplier enforcing
a constraint $p_0+ \sqrt{\vec{p}^2 + m^2 }=0$.
The action Eq.(\ref{eq:particle2}) corresponds
to the action appearing in Eq.(\ref{eq:partition6}).
It is possible to take $p^2+m^2=0$ with $p_0 < 0$
as a constraint instead of
$p_0+ \sqrt{\vec{p}^2 + m^2 }=0$. Then an alternative action
for the same system is given by
\begin{eqnarray}
S&=&\int_{t_1}^{t_2} dt\ \{ p_\alpha \dot{x}^\alpha - N\ H \} \ \ \ ,
\nonumber \\
&{}& H=p^2+m^2 \ \ \ .
\label{eq:particle3}
\end{eqnarray}
Eq.(\ref{eq:particle3}) corresponds to the $(\tau, V)$-form
(Eq.(\ref{eq:tauV}) or Eq.(\ref{eq:reducedaction})).
The system defined by Eq.(\ref{eq:particle3}) possesses the
time reparameterization invariance similar to
Eq.(\ref{eq:reparameterization}):
\begin{eqnarray}
\delta x^\alpha = \epsilon (t)\{ x^\alpha , H \}\ ,
&{}& \delta p_\alpha = \epsilon (t)\{ p_\alpha , H \}\ ,
\nonumber \\
\delta N = \dot{\epsilon}(t)\ &{}& {\rm with}\ \ \epsilon (t_1)=\epsilon (t_2)=0
\ \ \ .
\label{eq:reparameterization2}
\end{eqnarray}
Thus gauge fixing is needed in order to quantize this
system. Here let us concentrate on two kinds of
gauge-fixing conditions:
\begin{enumerate}
\item $\chi_{{}_I} := x^0 -t =0\ \ $ (canonical gauge),
\label{item:I}
\item $\chi_{{}_{II}} := \dot{N} =0\ \ $ (non-canonical gauge).
\label{item:II}
\end{enumerate}
Choosing the gauge condition (\ref{item:I}),
one inserts the factors
$\{ \chi_{{}_I}, H \}$$ \delta (\chi_{{}_I}) =-2p_0 $
$ \delta (x^0 -t) $ into
the path-integral formula according to the
Faddeev-Popov procedure~\cite{FAD}. More rigorously, the factors
$\theta (-p_0) $$\{ \chi_{{}_I}, H \}\ $$ \delta (\chi_{{}_I})$,
or alternatively,
$\theta (N) $$\{ \chi_{{}_I}, H \}\ $$ \delta (\chi_{{}_I})$
should be inserted in order to obtain the equivalent
quantum theory to the one obtained by
Eq.(\ref{eq:particle1})~\cite{HART}.
The factor $\theta (-p_0)$ selects the positive energy solution
$-p_0 = \sqrt{\vec{p}^2 + m^2 }$ among the two solutions of
$H=p^2 +m^2=0$ w.r.t. $p_0$.
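[To see this explicitly (a sketch, using the signature implicit in
$H=p^2+m^2$, i.e. $p^2=-p_0^2+\vec{p}^2$), note that
\[
\theta (-p_0)\ |2p_0|\ \delta (p^2+m^2)
= \delta \left( p_0 + \sqrt{\vec{p}^2 + m^2 } \right)\ \ \ ,
\]
so that the $p_0$ integration simply puts the particle on the
positive-energy mass shell and reproduces the dynamics generated by the
Hamiltonian $\sqrt{\vec{p}^2+m^2}$ of Eq.(\ref{eq:particle1}).]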
This gauge (\ref{item:I}) corresponds to the gauge $\chi = \sigma-t=0$
in the previous section.
We observe that a pair $(x^0,\ p_0)$ corresponds to
the pair $(\sigma,\ -V)$
which is obtained from an original pair $(V,\ \sigma)$ by a simple
canonical transformation.
[The relation $-p_0=\sqrt{\vec{p}^2 + m^2 }$
corresponds to the relation $V=V(\sigma, \tau^A, p_A)$.]
Thus the additional restriction factor
$\theta (-p_0)$ should correspond to $\theta (V)$, which is
identically unity because of the positivity of $V$.
It is quite suggestive that one solution among the two
solutions of $H=0$ (Eq.(\ref{eq:reducedaction})) w.r.t. $V$
is automatically
selected because $V$ is the 2-volume of $\Sigma$.
As for the other gauge (\ref{item:II})
$\chi_{{}_{II}} := \dot{N} =0$,
it is
quite different in nature compared with (\ref{item:I})
$\chi_{{}_I} := x^0 -t =0$. Apparently the path-integral
measure becomes different. This point becomes clear if
the transition amplitude $(\ x^\alpha_2\ |\ x^\alpha_1\ )$ for the
system Eq.(\ref{eq:particle3}) is calculated by
imposing (\ref{item:I}) and
by imposing (\ref{item:II}).
By the canonical gauge (\ref{item:I}), one obtains
\[
(\ x^\alpha_2\ |\ x^\alpha_1\ )_{I} = \int d^4p \
\exp \{ i p_\alpha (x^\alpha_2 - x^\alpha_1) \}\ |-2p^0|\
\delta (p^2 +m^2)\ \ \ ,
\]
if the simplest skeletonization scheme is adopted as in
Ref.\cite{HART}. (Here we set aside the question about the
equivalence with the system described by Eq.(\ref{eq:particle1})
so that the factor $\theta (-p^0)$ is not inserted.)
The gauge (\ref{item:II}) can be handled~\cite{HALL} by
the Batalin-Fradkin-Vilkovisky method~\cite{BAT}
instead of the Faddeev-Popov
method, and the result is
\[
(\ x^\alpha_2\ |\ x^\alpha_1\ )_{II} = \int d^4p \
\exp \{ i p_\alpha (x^\alpha_2 - x^\alpha_1)\}\ \ \delta (p^2 +m^2)
\ \ \ .
\]
Both $(\ x^\alpha_2\ |\ x^\alpha_1\ )_{I}$ and
$(\ x^\alpha_2\ |\ x^\alpha_1\ )_{II}$
satisfy the Wheeler-DeWitt equation but they are clearly
different. One finds that if another gauge (\ref{item:I}')
$\chi_{{}_{I'}} := -{x^0 \over 2 p_0} -t =0$ is adopted
instead of
(\ref{item:I}), the resultant $(\ x^\alpha_2\ |\ x^\alpha_1\ )_{I'}$ is
equivalent to
$(\ x^\alpha_2\ |\ x^\alpha_1\ )_{II}$. One sees that
$-{x^0 \over 2 p_0} \propto x^0 \sqrt{1- ({v \over c})^2}$
under the condition $H=0$, so that it is interpreted as
the proper time.
Even in the present simple model, it is already clear that
solving the Wheeler-DeWitt equation alone is not enough to
reveal the quantum nature of the spacetime.
It is then intriguing to ask what relation
there is between the gauge conditions and the
boundary conditions for the Wheeler-DeWitt equation.
Apparently more investigation is needed regarding
the gauge-fixing conditions,
especially the relation between the canonical gauges
and the non-canonical gauges.
The system of (2+1)-dimensional Einstein gravity should serve as
a good testing ground for investigating these points
in the context of quantum cosmology.
\begin{center}
\bf Acknowledgments
\end{center}
The author would like to thank H. Kodama and S. Mukohyama for
valuable discussions.
He also thanks S. Carlip for very helpful comments regarding
the path integral over the shift vector.
Part of this work was done during the author's stay at
the Tufts Institute of Cosmology. He heartily thanks
the colleagues there for providing him with a nice research
environment.
This work was supported by the Yukawa Memorial Foundation,
the Japan Association for Mathematical
Sciences, and the Japan Society for the
Promotion of Science.
\vskip 2cm
\makeatletter
\@addtoreset{equation}{section}
\def\theequation{\thesection\arabic{equation}}
\makeatother
\subsubsection*{Figure captions}
\newcounter{bean}
\begin{list}%
{Fig. \arabic{bean}}{\usecounter{bean}
\setlength{\rightmargin}{\leftmargin}}
\item (a) $\Delta E_0(\Phi_0/4) \equiv E_0(\Phi_0/4) - E_0(\Phi_0/2)$
versus $1/L$ for several values of $W/t$. For $W/t < 0.3 $ the solid
lines correspond to a least-squares fit of the data to the SDW form:
$L \exp(-L/\xi)$. For $W/t > 0.3 $ the QMC data is compatible
with a $1/L$ scaling. The solid lines are least-squares fits to this form.
(b) Extrapolated value of $\Delta E_0(\Phi_0/4)$ versus $W/t$.
The solid line is a guide to the eye.
\item $d_{x^2 - y^2}$ (triangles) and $s$-wave (circles)
pair-field correlations versus $1/L$.
\item (a) $n(\vec{k})$ at $W/t = 0.6$, $U/t=4$ and $\langle n \rangle = 1 $.
Lattices from $ L=8 $ to $ L=16 $ were considered.
(b) same as (a) but for $W/t = 0$. The calculations in this figure
were carried out at $\Phi = 0$ (see Eq. \ref{Bound}).
\item (a) $ S(L/2,L/2)$ versus $1/L$ for several values of $W/t$. The
solid lines correspond to least-squares fits of the QMC data to the form
$1/L$. Inset: $ S(L/2,L/2)$ versus $1/L$ at $W/t = 0.6$. The solid line
is a least-squares fit to the form $L^{-\alpha}$.
(b) Staggered moment as obtained from (a) versus $W/t$.
The data point at $W/t = 0$ is taken from reference \cite{White}.
At $W/t = 0.3$, we were unable to distinguish $m$ from zero within our
statistical uncertainty. The solid line is a guide to the eye.
The calculations in this figure
were carried out at $\Phi = 0$ (see Eq. \ref{Bound}).
\end{list}
\end{document}
\section{Introduction}
\label{sec:I}
Phase transitions that occur in a quantum mechanical system at zero
temperature ($T=0$) as a function of some
non-thermal control parameter are called quantum phase transitions. In
contrast to their finite-temperature counterparts, which are often
referred to as thermal or classical phase transitions, the
critical fluctuations
one has to deal with at zero temperature are quantum fluctuations
rather than thermal ones, and the need for a quantum mechanical treatment
of the relevant statistical mechanics makes the theoretical description
of quantum phase transitions somewhat different from that of classical
ones. However, as Hertz has shown in a seminal paper,\cite{Hertz}
the basic theoretical concepts that have led to successfully
describe and understand thermal transitions work in the quantum case as well.
Experimentally, the zero-temperature behavior of any material cannot, of course,
be studied directly, and furthermore the
most obvious control parameter that
drives a system through a quantum transition is often some microscopic
coupling strength that is hard to change experimentally. As a result, the
dimensionless distance from the critical point, $t$, which for classical
transitions with a transition temperature $T_c$ is given by $t=T/T_c - 1$ and
is easy to tune with high accuracy, is much harder to control in the
quantum case. However, $t$
is usually dependent on some quantity that can be
experimentally controlled, such as the composition of the material.
Also, the zero temperature critical behavior manifests itself already at
low but finite temperatures. Indeed, in a system with a very low thermal
transition temperature all but the final asymptotic behavior in the
critical region is dominated by quantum effects. The study of quantum
phase transitions is therefore far from being of theoretical interest only.
Perhaps the most obvious example of a quantum phase transition is the
paramagnet-to-ferromagnet transition of itinerant electrons at
$T=0$ as a function of the exchange interaction between the electronic
spins. Early theoretical work\cite{Hertz} on this transition suggested that the
critical behavior in the physical dimensions $d=2$ and $d=3$ was not
dominated by fluctuations and was mean-field like, as is the thermal
ferromagnetic transition in dimensions $d>4$. The reason for this is
a fundamental feature of quantum statistical mechanics, namely the fact
that statics and dynamics are coupled. As a result, a quantum mechanical
system in $d$ dimensions is very similar to the corresponding classical
system in $d+z$ dimensions, where the so-called dynamical critical exponent
$z$ can be thought of as an extra dimensionality that is provided to the
system by time or temperature. The $d+z$-dimensional space relevant for
the statistical mechanics of the quantum system bears some resemblance to
$d+1$-dimensional Minkowski space, but $z$ does {\em not} need to be equal to
$1$ in nonrelativistic systems. For clean and disordered itinerant quantum
ferromagnets, one finds $z=3$ and $z=4$, respectively, in mean-field theory.
This appears to
reduce the upper critical dimension $d_c^+$, above which fluctuations
are unimportant and simple mean-field theory yields the correct critical
behavior, from $d_c^+ = 4$ in the classical case to $d_c^+ = 1$ and
$d_c^+ = 0$, respectively, in the clean and disordered quantum cases.
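This counting can be made explicit (a standard sketch): at criticality the
quantum system behaves like a classical one in the effective dimension
$d_{\rm eff}=d+z$, so the classical criterion $d_{\rm eff} > 4$ for the
irrelevance of fluctuations translates into
\[
d_c^+ = 4 - z\quad,
\]
which gives $d_c^+ = 1$ for $z=3$ (clean) and $d_c^+ = 0$ for $z=4$
(disordered).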
If this were true, then this quantum phase transition would be rather
uninteresting from a critical phenomena point of view.
It has been known for some time that, for the case of disordered
systems, this conclusion cannot be correct.\cite{Millis} It is known that
in any system with quenched disorder that undergoes a phase transition, the
critical exponent $\nu$ that describes the divergence of the correlation
length, $\xi \sim t^{-\nu}$ for $t\rightarrow 0$, must satisfy the inequality
$\nu\geq 2/d$.\cite{Harris}
However, mean-field theory yields $\nu = 1/2$, which is incompatible with
this inequality for $d<4$. Technically, this implies that the disorder
must be a relevant perturbation with respect to the mean-field fixed point.
The mean-field fixed point must therefore be unstable, and the phase
transition must be governed by some other fixed point that has a
correlation length exponent $\nu\geq 2/d$.
Recently such a non-mean field like fixed point has been
discovered, and the critical behavior has been determined exactly for
all dimensions $d>2$.\cite{fm} It was found that both the value $d_c^+ = 0$
for the upper critical dimension, and the prediction of mean-field critical
behavior for $d>d_c^+$ were incorrect. Instead, $d_c^+ = 2$, and while both the
quantum fluctuations and the disorder fluctuations are irrelevant with
respect to the new fixed point for all $d > d_c^+$, there are two
other ``upper critical dimensionalities'', $d_c^{++} = 4$ and
$d_c^{+++} = 6$. The critical behavior for $d_c^+ < d < d_c^{+++}$
is governed by a non-standard Gaussian
fixed point with non-mean field like exponents, and only for $d > d_c^{+++}$
does one obtain mean-field exponents. In addition, the clarification of the
physics behind this surprising behavior has led to the conclusion that
very similar effects occur in clean systems.\cite{clean} In that case,
$d_c^+ = 1$ in agreement with the early result, but again the critical
behavior is nontrivial in a range of dimensionalities
$d_c^+ < d \leq d_c^{++} = 3$, and only for $d > d_c^{++}$ does one obtain
mean-field critical behavior. In addition, we have found that Hertz's
$1-\epsilon$ expansion for the clean case is invalid. This explains an
inconsistency between this expansion and an exact exponent relation that
was noted earlier by Sachdev.\cite{Sachdev}
In order to keep our discussion focused, in
what follows we will restrict ourselves to the disordered case, where the
effects are more pronounced, and will only quote results for the clean
case where appropriate.
The basic physical reason behind the complicated behavior above the upper
critical dimensionality $d_c^+$, i.e. in a regime in parameter space
where the critical behavior is not dominated by fluctuations, is simple.
According to our general understanding of continuous phase transitions
or critical points, in order to understand the critical singularities at
any such transition, one must identify all of the slow or soft modes near
the critical point, and one must make sure that all of these soft modes are
properly included in the effective theory for the phase transition. This
is obvious, since critical phenomena are effects
that occur on very large length
and time scales, and hence soft modes, whose excitation energies vanish in
the limit of long wavelengths and small frequencies,
will in general influence the critical behavior.
In previous work on the ferromagnetic transition it was implicitly assumed
that the only relevant soft modes are the fluctuations of the order
parameter, i.e. the magnetization. For finite temperatures this is correct.
However, at $T=0$ there are additional soft modes in a disordered electron
system, namely diffusive particle-hole excitations that are distinct from
the spin density excitations that determine the magnetization. In many-body
perturbation theory these modes manifest themselves as products of retarded
and advanced Green's functions, and in field theory they can be interpreted as
the Goldstone modes that result from the spontaneous breaking of the
symmetry between retarded and advanced correlation functions, or between
positive and negative imaginary frequencies.
In a different context, namely the
transport theory for disordered electron systems, these diffusive
excitations are sometimes referred to as `diffusons' and `Cooperons',
respectively, and they are responsible for what is known as
`weak localization effects' in disordered electron systems.\cite{WeakLoc}
For our purposes,
their most important feature is their spatial long-range nature in the
zero frequency limit. This long-range nature follows immediately from the
diffusion equation
\begin{mathletters}
\label{eqs:1.1}
\begin{equation}
\left(\partial_t - D\,\partial_{\bf x}^2\right)\,f({\bf x},t) = 0\quad,
\label{eq:1.1a}
\end{equation}
for some diffusive quantity $f$, with $D$ the diffusion constant. Solving
this equation by means of a Fourier-Laplace transform to wavevectors ${\bf q}$
and complex frequencies $z$, one obtains in the
limit of zero frequency,
\begin{equation}
f({\bf q},z=0) = {1\over D{\bf q}^2}\,f({\bf q},t=0)\quad.
\label{eq:1.1b}
\end{equation}
\end{mathletters}%
Long-range static correlations are thus an immediate consequence of the
diffusive nature of the density dynamics in disordered systems.
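For completeness, the intermediate step connecting Eqs.~(\ref{eq:1.1a}) and
(\ref{eq:1.1b}) is elementary. With the transform convention
$f({\bf q},z)=\int_0^{\infty}dt\,e^{izt}f({\bf q},t)$ (any other convention
changes only inessential factors) one finds
\begin{displaymath}
f({\bf q},z) = \frac{f({\bf q},t=0)}{D{\bf q}^2 - iz}\quad,
\end{displaymath}
which reduces to Eq.~(\ref{eq:1.1b}) at $z=0$. In real space the
$1/D{\bf q}^2$ behavior corresponds, for $d>2$, to static correlations that
fall off only like $r^{2-d}$.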
The fact
that we are concerned with the zero frequency or long-time limit is due
to the order parameter, i.e. the magnetization, being a conserved quantity.
Since the only way to locally change the order parameter density is to
transport this conserved quantity from one region in space to another,
in order to develop long-range order over arbitrarily large
distances the system needs an infinitely long time. This in turn means
that criticality can be reached only if the frequency is taken to zero
before the wavenumber. This feature would be lost if there were some type
of spin-flip scattering mechanism present, and our results hold only in the
absence of such processes. For the same reason, they do not apply to
quantum antiferromagnets, which show a quite different behavior.\cite{afm}
It is important that the long-range static correlations mentioned above are
distinct from the order parameter fluctuations. For instance, the latter
are soft only at the critical point and in the ordered phase, while the
former are soft even in the paramagnetic phase, and they do not change
their nature at the critical point. However, since they couple to the
conserved order parameter, they influence the critical behavior. If one
integrates out these diffusive modes in order to obtain an effective
theory or Landau-Ginzburg-Wilson (LGW) functional in terms of the order
parameter only, then their long-range
nature leads to infrared singular integrals,
which in turn results in singular vertices in the LGW functional, or
diverging coupling constants for couplings between the order parameter
fluctuations. The usual LGW philosophy of deriving an effective local
field theory entirely in terms of the order parameter field therefore does
not lead to a well behaved field theory in this case.
The situation is analogous to a well known phenomenon
in high energy physics: Suppose some interaction between, say, fermions,
is mediated by the exchange of some other particles, e.g. gauge bosons of
mass $M$. If the bosons are integrated out, then the resulting theory will
be nonrenormalizable, i.e. it will be ill-behaved
on momentum scales larger than the
mass $M$. The nonrenormalizable theory corresponds to the order parameter
LGW theory, except that in statistical mechanics one runs into infrared
problems rather than ultraviolet ones. Nevertheless, it turns out that in
our case the critical behavior can still be determined
exactly even after having
integrated out the additional soft modes. The point is that the diffusive
modes lead to an effective long-range interaction between the order parameter
fluctuations that falls off in real space like $r^{2-2d}$. It is known that
in general long-range interactions suppress fluctuation effects.\cite{FMN}
In our case
they are strong enough to not only suppress quantum fluctuations, but also
any remaining disorder fluctuations. The critical behavior is thus neither
dominated by quantum fluctuations (since we work above the upper critical
dimension $d_c^+$), nor by the disorder fluctuations, but rather is given
by a simple, though non-standard (because of the long-range interactions)
Gaussian theory. The resulting Gaussian fixed point allows for a correlation
length exponent that satisfies $\nu\geq 2/d$ as required, and the exponents
are dimensionality dependent for all $d<6$. In $d=3$ they are substantially
different from either the mean-field exponents, or from those for a classical
Heisenberg ferromagnet. This has striking observable consequences, as we
will discuss.
The outline of this paper is as follows. In Sec. \ref{sec:II} we first
discuss some general aspects of itinerant ferromagnets, and then we give
our results for the critical exponents and for the equation of state near
the critical point. Since the purpose of this paper is to give an exposition
and discussion of these results that is as nontechnical as possible,
they will be presented without any derivations. In Sec.\ \ref{sec:III}
we discuss these results as well
as several possible experiments that could be performed to test our
predictions. Finally, in Sec.\ \ref{sec:IV} we sketch the derivation of our
theoretical results.
\section{Results}
\label{sec:II}
In order to put the phase transition we are going to consider in perspective,
let us first discuss the qualitative phase diagram that one expects for a
disordered itinerant electron system in $d=3$. Let $F_0^a<0$ be the
Fermi liquid parameter that characterizes the strength of the system's
tendency towards ferromagnetism: For $\vert F_0^a\vert < 1$ the system is
paramagnetic with a spin susceptibility $\chi_s \sim 1/(1+F_0^a)$, while
for $\vert F_0^a\vert > 1$ the clean Fermi liquid has a ferromagnetic ground
state. In Fig.\ \ref{fig:1} we show the qualitative phase diagram one
expects for a disordered system at $T=0$ in the $F_0^a$-$\lambda$ plane,
where $\lambda$ is some dimensionless measure of the disorder. For $\lambda=0$,
we have the transition from a paramagnetic metal (PM) to a ferromagnetic
metal (FM) at $F_0^a = -1$. At small but nonzero $\lambda$ this transition
will occur at somewhat smaller values of $\vert F_0^a\vert$, since the
disorder effectively increases the spin triplet electron-electron interaction
amplitude, and hence $\vert F_0^a\vert$. This accounts for the downward
curvature of the PM-FM transition line. At $\vert F_0^a\vert = 0$, a
metal-insulator transition of Anderson type is known to occur at a critical
disorder value $\lambda_c$.\cite{LeeRama}
At nonzero $\vert F_0^a\vert$ such a transition
from a paramagnetic metal to a paramagnetic insulator (PI) still occurs,
albeit it now is what is called an Anderson-Mott transition that takes place
at a somewhat larger value of the disorder.\cite{R}
The two transition lines will
meet at a multicritical point M, and for
large values of $\lambda$ and $\vert F_0^a\vert$ one expects a ferromagnetic
insulator (FI). The transitions from the FM and PI phases, respectively,
to the FI phase have not been studied theoretically, which is why we denote
them by dashed lines in the figure.
We will be mostly interested in the phase transition that occurs across
the PM-FM transition line at finite disorder, but far away from the
metal-insulator transition. However, in Sec.\ \ref{sec:III} below
we will come back to the remaining regions in this phase diagram.
\begin{figure}
\epsfxsize=8.25cm
\epsfysize=6.6cm
\epsffile{proceedings_fig1.eps}
\vskip 0.5cm
\caption{Schematic phase diagram for a $3$-$d$ disordered itinerant electron
system in the plane spanned by the Landau parameter $F_0^a$ and the
disorder $\lambda$ at $T=0$. See the text for further explanations.}
\label{fig:1}
\end{figure}
In Fig.\ \ref{fig:2} we show the same phase diagram in the
$F_0^a$-$T$ plane for some value of the disorder $0 < \lambda \ll \lambda_c$.
With increasing temperature $T$, the critical value of $\vert F_0^a\vert$
increases, since in order to achieve long-range order, a larger
$\vert F_0^a\vert$ is needed to compensate for the disordering effect of
the thermal fluctuations. The inset shows schematically the boundary of the
critical region (dashed line) and the crossover line between classical and
quantum critical behavior (dotted line). At any nonzero $T$, the asymptotic
critical behavior is that of a classical Heisenberg magnet, but at sufficiently
low $T$ there is a sizeable region where quantum critical behavior can be
observed.
\begin{figure}
\epsfxsize=8.25cm
\epsfysize=6.6cm
\epsffile{proceedings_fig2.eps}
\vskip 0.5cm
\caption{Schematic phase diagram for a disordered itinerant electron
system in the plane spanned by the Landau parameter $F_0^a$ and the
temperature $T$. The inset shows the boundary of the critical region
(dashed line) and the crossover line (dotted line) that separates
classical critical behavior (cc) from quantum critical behavior (qc).}
\label{fig:2}
\end{figure}
Our theoretical results for the zero temperature paramagnet-to-ferromagnet
transition can be summarized as follows. Let $t$ be the dimensionless
distance from the line separating the regions PM and FM in Fig.\ \ref{fig:1}.
Then the equation of state, which determines the magnetization $m$ as a
function of $t$ and the magnetic field $h$, can be written
\begin{equation}
tm + m^{d/2} + m^3 = h\quad,
\label{eq:2.1}
\end{equation}
where we have left out all prefactors of the various terms.
Equation (\ref{eq:2.1}) is valid for all dimensions $d>2$. Notice the
term $m^{d/2}$, which occurs in addition to what otherwise is an ordinary
mean-field equation of state. It is a manifestation of the soft particle-hole
excitations mentioned in the Introduction. For $d<6$ it dominates the
$m^3$-term, and
hence we have for the exponent $\beta$, which determines the vanishing of
the zero-field magnetization via $m(t,h=0) \sim t^{\beta}$,
\begin{mathletters}
\label{eqs:2.2}
\begin{equation}
\beta = \cases{2/(d-2)& for $2<d<6$\cr%
1/2& for $d>6$\cr}%
\quad.
\label{eq:2.2a}
\end{equation}
Similarly, the exponent $\delta$, defined by $m(t=0,h) \sim h^{1/\delta}$,
is obtained as
\begin{equation}
\delta = \cases{d/2& for $2<d<6$\cr%
3& for $d>6$\cr}%
\quad.
\label{eq:2.2b}
\end{equation}
\end{mathletters}%
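The exponents in Eqs.~(\ref{eqs:2.2}) follow directly from
Eq.~(\ref{eq:2.1}). With the prefactors suppressed as above, balancing the
$tm$-term against the dominant nonanalytic term for $2<d<6$ gives,
schematically,
\begin{displaymath}
\vert t\vert\,m \sim m^{d/2} \;\Rightarrow\; m\sim\vert t\vert^{2/(d-2)}
\quad (h=0)\ ,\qquad
m^{d/2} \sim h \;\Rightarrow\; m\sim h^{2/d}\quad (t=0)\ ,
\end{displaymath}
while for $d>6$ the $m^3$-term dominates and the mean-field values result.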
Now let us consider the order parameter field $M({\bf x},t)$ as a function
of space and time, i.e. the field whose average yields the magnetization,
$\langle M({\bf x},t)\rangle = m$. Here the angular brackets
$\langle\ldots\rangle$ denote a trace with the full statistical operator,
i.e. they include a quantum mechanical expectation value, a disorder
average, and at nonzero temperature also a thermal average. We first
consider the case of $T=0$, and Fourier
transform to wave vectors ${\bf q}$ (with modulus $q=\vert{\bf q}\vert$)
and frequencies $\omega$. For the order parameter correlation function
$G(q,\omega) = \langle M({\bf q},\omega)\,M(-{\bf q},-\omega)\rangle$
we find in the limit of small $q$ and $\omega$,
\begin{equation}
G(q,\omega) = {1\over t + q^{d-2} + q^2 - i\omega/q^2}\quad.
\label{eq:2.3}
\end{equation}
Here we have again omitted all prefactors of the terms in the denominator,
since they are of no relevance for our discussion.
The most interesting feature in Eq.\ (\ref{eq:2.3}) is the term $q^{d-2}$.
It is again an immediate consequence of the additional soft modes discussed
in the first section, and Eq.\ (\ref{eq:2.3}), like Eq.\ (\ref{eq:2.1}),
is valid for $d>2$. For $q=\omega=0$, the correlation function $G$ determines
the magnetic susceptibility $\chi_m \sim G(q=0,\omega =0)$
in zero magnetic field.
Hence we have $\chi_m (t) \sim t^{-1} \sim t^{-\gamma}$, where the last
relation defines the critical exponent $\gamma$. This yields
\begin{equation}
\gamma = 1\quad,
\label{eq:2.4}
\end{equation}
which is valid for all $d>2$. $\gamma$ thus has its usual mean-field value.
However, for nonzero $q$ the anomalous $q^{d-2}$ term dominates the usual
$q^2$ dependence for all $d<4$. The correlation function at zero frequency
can then be written
\begin{mathletters}
\label{eqs:2.5}
\begin{equation}
G(q,\omega=0) \sim {1\over 1 + (q\xi)^{d-2}}\quad,
\label{eq:2.5a}
\end{equation}
with the correlation length $\xi \sim t^{-1/(d-2)} \sim t^{-\nu}$. For
$d>4$ the $q^2$ term is dominant, and we have for the correlation length
exponent $\nu$,
\begin{equation}
\nu=\cases{1/(d-2)& for $2<d<4$\cr%
1/2& for $d>4$\cr}%
\quad.
\label{eq:2.5b}
\end{equation}
\end{mathletters}%
Note that $\nu \geq 2/d$, as it must be according to the discussion in the
Introduction. The wavenumber dependence of $G$ at criticality,
i.e. at $t=0$, is
characterized by the exponent $\eta$: $G(q,\omega=0) \sim q^{-2+\eta}$.
From Eq.\ (\ref{eq:2.3}) we obtain,
\begin{equation}
\eta = \cases{4-d& for $2<d<4$\cr%
0& for $d>4$\cr}%
\quad.
\label{eq:2.6}
\end{equation}
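As a quick consistency check, these values obey the usual scaling relation
$\gamma = \nu(2-\eta)$ that can be read off Eq.~(\ref{eq:2.3}): for $2<d<4$
one has $\nu(2-\eta) = [2-(4-d)]/(d-2) = 1$, and for $d>4$ one has
$\nu(2-\eta) = (1/2)(2-0) = 1$, in agreement with Eq.~(\ref{eq:2.4}).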
Finally, consider the correlation function at a wavenumber such that
$q\xi = 1$. Then it can be written
\begin{mathletters}
\label{eqs:2.7}
\begin{equation}
G(q=\xi^{-1},\omega) \sim {1\over 1 - i\omega\tau}\quad,
\label{eq:2.7a}
\end{equation}
with the relaxation or correlation time $\tau \sim \xi^2/t \sim \xi^{2+1/\nu}
\sim \xi^z$, where the last relation defines the dynamical critical exponent
$z$. From Eq.\ (\ref{eq:2.5b}) we thus obtain,
\begin{equation}
z = \cases{d& for $2<d<4$\cr%
4& for $d>4$\cr}%
\quad.
\label{eq:2.7b}
\end{equation}
\end{mathletters}%
Notice that with increasing dimensionality $d$, the exponents $\nu$, $\eta$,
and $z$ `lock into' their mean-field values at $d=d_c^{++}=4$, while $\beta$
and $\delta$ do so only at $d=d_c^{+++}=6$.
In the special dimensions $d=4$ and $d=6$ the power law scaling behavior
quoted above holds only up to additional multiplicative logarithmic
dependences on the variables $t$, $h$, and $T$. Since these corrections to
scaling occur only in unphysical dimensions they are of academic interest
only, and we refer the interested reader to Refs.\ \onlinecite{fm} for
details.
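Purely for orientation, evaluating the disordered-case expressions above in
the physical dimension $d=3$ gives $\beta=2$, $\delta=3/2$, $\gamma=1$,
$\nu=1$, $\eta=1$, and $z=3$; these are the values referred to in the
experimental discussion of Sec.\ \ref{sec:III}.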
The results for the clean case are qualitatively similar, but the anomalous
term in the equation of state, Eq.\ (\ref{eq:2.1}), is $m^d$ instead of
$m^{d/2}$. This is because the additional soft modes in that case are
ballistic instead of diffusive, so their frequency scales with wavenumber
like $\omega \sim q$ rather than $\omega \sim q^2$. As a result, the two
special dimensions $d_c^{++}$ and $d_c^{+++}$ coincide,
and are now $d_c^{++}=3$,
while the upper critical dimension proper, above which fluctuations are
irrelevant, is $d_c^+=1$. For $1<d<3$, the exponent values are
$\beta = \nu = 1/(d-1)$, $\delta = z = d$, $\eta = 3-d$, and $\gamma = 1$.
For $d>3$, all exponents take on their mean-field values as they do in the
disordered case for $d>6$, and in $d=3$ there are logarithmic corrections
to power-law scaling.
We now turn to the behavior at nonzero temperatures. Then the equation of
state acquires temperature corrections, and it is helpful to distinguish
between the cases $m \gg T$ and $m \ll T$, with $m$ and $T$ measured in suitable
units. Taking into account the leading corrections in either limit,
the equation of state reads
\begin{eqnarray}
tm + m^{d/2}\left(1 + T/m\right) = h\qquad ({\rm for}\quad m\gg T)\quad,
\nonumber\\
\left(t + T^{(d-2)/2}\right)m + m^3 = h\qquad ({\rm for}\quad T\gg m)\quad.
\label{eq:2.8}
\end{eqnarray}
Equation (\ref{eq:2.8}) shows that for any nonzero temperature the asymptotic
critical behavior is not given by the quantum critical exponents. Since
Eq.\ (\ref{eq:2.8}) takes
temperature into account only perturbatively, it correctly
describes only the initial deviation from the quantum critical behavior, and
approximates the classical critical behavior by the mean-field result.
A full crossover calculation would yield instead the classical Heisenberg
critical behavior in the asymptotic limit. Also, we are considering only the
saddle point contribution to the magnetization. For models with no additional
soft modes it has been shown that fluctuations that act as dangerous
irrelevant variables introduce another temperature
scale that dominates the one obtained from the saddle
point.\cite{Millis,Sachdev96} In the present case, however, fluctuations
are suppressed by the long-range nature of the effective field theory,
and the fluctuation temperature scale is subdominant. The behavior described by
Eq.\ (\ref{eq:2.8}) can be summarized by means of a generalized homogeneity
law,
\begin{mathletters}
\label{eqs:2.9}
\begin{equation}
m(t,T,H) = b^{-\beta/\nu}\,m(tb^{1/\nu}, Tb^{\phi/\nu},
Hb^{\delta\beta/\nu})\quad.
\label{eq:2.9a}
\end{equation}
Here $\beta$, $\nu$, and $\delta$ have the values given above, $b$ is an
arbitrary scale factor, and
\begin{equation}
\phi = 2\nu\quad,
\label{eq:2.9b}
\end{equation}
\end{mathletters}%
is the crossover exponent that describes the deviation from the quantum
critical behavior due to the relevant perturbation provided by the nonzero
temperature. The entry $Tb^{\phi/\nu} = Tb^2$ in the scaling function in
Eq.\ (\ref{eq:2.9a}) reflects the fact that the temperature dependence of the
saddle point solution is determined by that of the diffusive modes, i.e.
frequency or temperature scales like $T \sim q^2 \sim b^{-2}$. The critical
temperature scale, $T \sim b^{-z}$, would be dominant if it were present, but
since the leading behavior of the magnetization is not determined by
critical fluctuations, it is suppressed.
By differentiating Eq.\ (\ref{eq:2.9a}) with respect to the
magnetic field $h$, one obtains an analogous homogeneity law for the
magnetic susceptibility, $\chi_m$,
\begin{mathletters}
\label{eqs:2.10}
\begin{equation}
\chi_m(t,T,H) = b^{\gamma/\nu}\,\chi_m(tb^{1/\nu}, Tb^{\phi/\nu},
Hb^{\delta\beta/\nu})\quad,
\label{eq:2.10a}
\end{equation}
with
\begin{equation}
\gamma = \beta (\delta -1) = 1\quad,
\label{eq:2.10b}
\end{equation}
in agreement with Eq.\ (\ref{eq:2.4}). This result is in agreement with
a more direct calculation of $\chi_m$: The same temperature corrections
that modify the equation of state, Eq.\ (\ref{eq:2.8}), lead to a
replacement of the term $q^{d-2}$ in the denominator of Eq.\ (\ref{eq:2.3})
by $(q^2 + T)^{(d-2)/2}$. Since the homogeneous order parameter correlation
function determines the spin or order parameter susceptibility, this yields
\begin{equation}
\chi_m(t,T) = {1\over t + T^{1/2\nu}}\quad,
\label{eq:2.10c}
\end{equation}
\end{mathletters}%
in agreement with Eqs.\ (\ref{eq:2.10a},\ \ref{eq:2.10b}).
Finally, the critical behavior of the specific heat $c_V$ has been calculated.
It is most convenient to discuss the specific heat coefficient,
$\gamma_V = \lim_{T\rightarrow 0} c_V/T$, which in a Fermi liquid
would simply be a constant. Its behavior at criticality,
$t=0$, is adequately represented by the integral
\begin{mathletters}
\label{eqs:2.11}
\begin{equation}
\gamma_V = \int_0^{\Lambda} dq\ {q^{d-1}\over T+q^d+q^4+h^{1-1/\delta}q^2}\quad.
\label{eq:2.11a}
\end{equation}
Remarkably, in zero magnetic field, $\gamma_V$ diverges logarithmically as
$T\rightarrow 0$ for all dimensions $2<d<4$. This can be shown to be a
consequence of the dynamical exponent $z$ being exactly equal to the spatial
dimensionality $d$ in that range of dimensionalities. If one restores the
dependence of $\gamma_V$ on $t$,
then one obtains a generalized homogeneity law
with a logarithmic correction for the leading scaling behavior of $\gamma_V$,
\begin{eqnarray}
\gamma_V(t,T,H) = \Theta(4-d)\,\ln b \qquad\qquad\qquad\qquad
\nonumber\\
+ F_{\gamma}(t\,b^{1/\nu},T\,b^z,
H\,b^{\delta\beta/\nu})\quad.
\label{eq:2.11b}
\end{eqnarray}
\end{mathletters}%
Here $\Theta(x)$ denotes the step function, and $F_{\gamma}$ is an unknown
scaling function. Note that $\gamma_V$ is determined by Gaussian fluctuations
and depends on the critical temperature scale, i.e. $T$ scales like
$t^{\nu z}$ in Eq.\ (\ref{eq:2.11b}). This is the leading temperature
scale, and whenever it is present it dominates the diffusive temperature scale
that shows in Eqs.\ (\ref{eqs:2.9}) and (\ref{eqs:2.10}).
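The logarithmic divergence of $\gamma_V$ can also be checked by evaluating
the integral in Eq.~(\ref{eq:2.11a}) numerically. The following minimal
sketch (not part of the derivation) assumes unit prefactors, a cutoff
$\Lambda=1$, $h=0$, and $d=3$; these choices are made only for illustration.
\begin{verbatim}
from scipy.integrate import quad

d, Lam = 3.0, 1.0        # dimension and an assumed momentum cutoff

def gamma_V(T, h=0.0, delta=d/2):
    # integrand of Eq. (2.11a) with all prefactors set to one
    f = lambda q: q**(d - 1)/(T + q**d + q**4 + h**(1.0 - 1.0/delta)*q**2)
    return quad(f, 0.0, Lam)[0]

for T in [1e-3, 1e-5, 1e-7, 1e-9]:
    print(T, gamma_V(T))  # grows like (1/d)*ln(1/T) for 2 < d < 4 at h = 0
\end{verbatim}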
In the clean case, Eqs.\ (\ref{eq:2.9a}) and (\ref{eqs:2.10}) still hold,
if one uses the appropriate exponent values and replaces Eq.\ (\ref{eq:2.9b})
by $\phi=\nu$. In Eq.\ (\ref{eq:2.11a}), the term $q^4$ in the denominator
of the integrand gets replaced by $q^3$, and consequently the argument of
the $\Theta$-function in Eq.\ (\ref{eq:2.11b}) is $3-d$ rather than $4-d$.
\section{Experimental Implications, and Discussion}
\label{sec:III}
\subsection{Experimental Implications}
\label{subsec:III.A}
Let us now discuss the experimental implications of the results presented
in the preceding section. Obviously, one needs a material that shows a
transition from a paramagnetic state to a ferromagnetic one at zero
temperature as a function of some experimentally tunable parameter $x$.
Obvious candidates are magnetic alloys of the stoichiometry
${\rm P}_x{\rm F}_{1-x}$, with P a paramagnetic metal and F a ferromagnetic
one. Such materials show the desired transition as a function of the
composition parameter $x$; examples include Ni for the ferromagnetic
component, and Al or Ga for the paramagnetic one.\cite{Mott} At the critical
concentration $x_c$ they also are substantially disordered, but due to the fact
that both constituents are metals they are far from any metal-insulator
transition. Our theory should therefore be applicable to these systems.
The schematic phase diagram in the $T$-$x$ plane
is shown in Fig.\ \ref{fig:3}. Notice that this is a realistic
phase diagram, as opposed to
the `theoretical' ones in Figs.\ \ref{fig:1} and \ref{fig:2}.
A change of the composition parameter $x$ leads, besides a change of $F_0^a$,
to many other changes in the microscopic parameters of the system. As $x$ is
varied, the system will therefore move on a complicated path in the diagram
shown in, say, Fig.\ \ref{fig:1}. However,
since the critical behavior near the transition is universal, it is independent
of the exact path traveled.
\begin{figure}
\epsfxsize=8.25cm
\epsfysize=6.6cm
\epsffile{proceedings_fig3.eps}
\vskip 0.5cm
\caption{Schematic phase diagram for an alloy of the form
${\rm P}_x\,{\rm F}_{1-x}$. $T_c$ is the Curie temperature for the pure
ferromagnet F, and $x_c$ is the critical concentration.}
\label{fig:3}
\end{figure}
One possible experiment would consist in driving the system at a low, fixed
temperature through the transition by changing the composition $x$.
While this involves the preparation of many samples, this way of probing a
quantum phase transition has been used to observe the metal-insulator
transition in P doped Si.\cite{vL} It might also be possible to use the
stress tuning technique that has been used for the same purpose.\cite{stress}
Either way one will cross the transition line along a more or less vertical
path in Fig.\ \ref{fig:2}, and for a sufficiently low temperature this path
will go through both the classical and the quantum critical region indicated
in the inset in Fig.\ \ref{fig:2}. Due to the large difference between the
quantum critical exponents quoted in Sec.\ \ref{sec:II} and the corresponding
exponents for classical Heisenberg magnets, the resulting crossover should
be very pronounced and easily observable. For instance, for $3$-$d$ systems
our Eq.\ (\ref{eq:2.2a}) predicts $\beta = 2$, while the value for the
thermal transition is $\beta_{\rm class}\approx 0.37$. The resulting
crossover in the critical behavior of the magnetization is schematically
shown in Fig.\ \ref{fig:4}. Alternatively,
one could prepare a sample with a
value of $x$ that is as close as possible to $x_c$, and measure the
magnetic field dependence of the magnetization, extrapolated to $T=0$, to
obtain the exponent $\delta$. Again, there is a large difference between
our prediction of $\delta=1.5$ in $d=3$, and the classical value
$\delta_{\rm class}\approx 4.86$.
\begin{figure}
\epsfxsize=8.25cm
\epsfysize=6.6cm
\epsffile{proceedings_fig4.eps}
\vskip 0.5cm
\caption{Schematic critical behavior of the magnetization $m$ at nonzero
temperature, showing the crossover from the quantum critical behavior
($\beta=2$, dashed line) to the classical critical behavior
($\beta\approx 0.37$, dotted line).
Notice that the actual transition is classical in nature.}
\label{fig:4}
\end{figure}
Yet another possibility
is to measure the zero-field magnetic susceptibility as a function of both
$t=\vert x-x_c\vert$ and $T$. Equation (\ref{eq:2.10a}) predicts
\begin{equation}
\chi_m(t,T) = T^{-1/2}\,f_{\chi}(T/t^2)\quad.
\label{eq:3.1}
\end{equation}
Here $f_{\chi}$ is a scaling function that has two branches, $f_{\chi}^+$
for $x>x_c$, and $f_{\chi}^-$ for $x<x_c$. Both branches approach a constant
for large values of their argument,
$f_{\chi}^{\pm}(y\rightarrow\infty)={\rm const.}$ For small arguments, we
have $f_{\chi}^+(y\rightarrow 0)\sim \sqrt{y}$, while $f_{\chi}^-$ diverges
at a nonzero value $y^*$ of its argument that signals the classical
transition, $f_{\chi}^-(y\rightarrow y^*)\sim (y-y^*)^{-\gamma_{\rm class}}$,
with $\gamma_{\rm class}\approx 1.39$ the susceptibility exponent for
the classical transition. Our prediction is then that a plot of
$\chi_m\ T^{1/2}$ versus $T/t^2$ will yield a universal function the
shape of which is schematically shown in Fig.\ \ref{fig:5}.
\begin{figure}
\epsfxsize=8.25cm
\epsfysize=6.6cm
\epsffile{proceedings_fig5.eps}
\vskip 0.5cm
\caption{Schematic prediction for a scaling plot of the magnetic
susceptibility.}
\label{fig:5}
\end{figure}
Notice that the exponents are known {\it exactly}, so the only adjustable
parameter for plotting experimental data will be the position of the
critical point. This is in sharp contrast to some other quantum phase
transitions, especially metal-insulator transitions, where the exponent
values are not even approximately known, which makes scaling plots almost
meaningless.\cite{glass}
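To illustrate the expected collapse, the $+$ branch of the scaling function
can be generated from the model form of Eq.~(\ref{eq:2.10c}) with unit
prefactors and $\nu=1$ (the $d=3$ value); the following sketch is purely
schematic, and the values of $t$ are invented.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

nu = 1.0                          # d = 3 value, nu = 1/(d-2)
T  = np.logspace(-8, -2, 200)
for t in [1e-3, 3e-3, 1e-2]:      # invented distances from criticality, x > x_c
    chi = 1.0/(t + T**(1.0/(2.0*nu)))     # model form of Eq. (2.10c)
    plt.loglog(T/t**2, chi*np.sqrt(T), label="t = %g" % t)
plt.xlabel("T / t^2"); plt.ylabel("chi_m T^(1/2)"); plt.legend(); plt.show()
\end{verbatim}
The three curves fall on a single function of $T/t^2$ that approaches a
constant for large argument and behaves like $(T/t^2)^{1/2}$ for small
argument, as described above.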
Finally, one can consider the low-temperature behavior of the specific
heat. According to Eq.\ (\ref{eq:2.11b}), as the temperature is lowered
for $x\agt x_c$ the leading temperature dependence of
the specific heat will be
\begin{mathletters}
\label{eqs:3.2}
\begin{equation}
c_V(T) \sim T\ln T\quad.
\label{eq:3.2a}
\end{equation}
At criticality this behavior will continue to $T=0$, while for $x>x_c$
it will cross over to
\begin{equation}
c_V(T) \sim (\ln t)\ T\quad.
\label{eq:3.2b}
\end{equation}
\end{mathletters}%
For $x\alt x_c$ one will encounter the classical Heisenberg transition
where the specific heat shows a finite cusp (i.e., the exponent $\alpha$,
defined by $c_V \sim (T-T_c)^{-\alpha}$, is negative).
\subsection{Theoretical Discussion}
\label{subsec:III.B}
There are also various theoretical implications of the results presented
in Sec.\ \ref{sec:II}. One aspect is the general message that the usual
LGW philosophy must not be applied uncritically to quantum phase transitions,
because of the large number of soft modes that exist at zero temperature
in a generic system. If any of these couple to the order parameter, then
an effective theory entirely in terms of the order parameter will not be
well behaved. In the present case we have actually been able to use this
to our advantage, since the long-ranged interaction that the additional
soft modes induce in the order parameter theory suppress the disorder
fluctuations, which is the reason for the remarkable fact that we are
able to exactly determine the critical behavior of a three-dimensional,
disordered system. In general, however, the presence of soft modes in
addition to the order parameter fluctuations will call for the derivation
of a more complete low-energy effective theory that keeps {\em all} of the
soft modes explicitly.
Another very interesting aspect is a connection between our results on
the ferromagnetic transition, and a substantial body of literature on a
problem that appears in the theory of the metal-insulator
transition in interacting disordered electron systems, i.e. the transition
from PM to PI in Fig.\ \ref{fig:1}. This problem has been known ever since
the metal-insulator transition of interacting disordered electrons was
first considered, and it has led to substantial confusion in that field.
Early work on the metal-insulator transition
showed that in two-dimensional systems without impurity
spin-flip scattering, the spin-triplet interaction amplitude
scaled to large values
under renormalization group iterations.\cite{RunawayFlow} This is still
true in $d=2+\epsilon$, and since the runaway flow occurs before the
metal-insulator transition is reached, this precluded the theoretical
description of the latter in such systems. This problem was
interpreted, incorrectly as it turned out later, as a signature of local
moment formation in all dimensions.\cite{LM}
Subsequently, the present authors
studied this problem in some detail.\cite{IFS} We were able to explicitly
resum the perturbation theory and show that at a critical value of the
interaction strength, or of the disorder, there is a bulk, thermodynamic
phase transition in $d>2$ that is {\em not} the metal-insulator transition.
While this ruled out local moments (which would not lead to a phase
transition), the physical meaning of this transition was obscure
at the time since no order parameter had been identified, and its description
was entirely in terms of soft diffusion modes. However, the critical exponents
obtained are identical to those given in Sec.\ \ref{sec:II}
for the quantum ferromagnetic
phase transition, and in both cases logarithmic corrections to scaling
are found.\cite{logfootnote}
Because the exponents in the two cases are identical, we
conclude that the transition found earlier by us, whose physical nature
was unclear, is actually the ferromagnetic transition.
One also concludes that our speculations in Ref.\ \onlinecite{IFS}
about the nature of the ordered phase as an
`incompletely frozen spin phase' with no long-range magnetic order, were
not correct; this phase is actually the metallic ferromagnetic
phase. On the other hand, the techniques used in that reference
allowed for a determination of the qualitative phase diagram as a function
of dimensionality, which our present analysis is not capable of.
This analysis showed the existence of yet another interesting dimensionality
above $d=2$, which we denote by $d^*$. With
the appropriate reinterpretation of the `incompletely frozen spin phase'
as the ferromagnetic phase, the qualitative phase diagram for $2<d<d^*$ is
shown in Fig.\ \ref{fig:6}. Compared to Fig.\ \ref{fig:1}, the following
happens as $d$ is lowered from $d=3$: The multicritical point M moves
downward, and at $d=d^*$ it reaches the $\lambda$-axis. $d^*$ was
estimated in Ref.\ \onlinecite{IFS} to be approximately $d^*=2.03$. For $d<d^*$,
the insulator phase can therefore not be reached directly from the
paramagnetic metal. This explains why in the perturbative renormalization
group calculations in $d=2+\epsilon$ one necessarily encounters the
ferromagnetic transition first, and it should finally put to rest the long
discussion about the physical meaning of the runaway flow that is encountered
in these theories. It also shows that
none of these theories are suitable
for studying the metal-insulator transition in the absence of spin-flip
mechanisms, as they start out in the wrong phase.
\begin{figure}
\epsfxsize=8.25cm
\epsfysize=6.6cm
\epsffile{proceedings_fig6.eps}
\vskip 0.5cm
\caption{Schematic phase diagram for a disordered itinerant electron
system at $T=0$ close to $d=2$. The phases shown are the paramagnetic
metal (PM), the ferromagnetic metal (FM), and the insulator (I) phase.
Whether within I there is another phase transition from a ferromagnetic
to a paramagnetic insulator is not known.}
\label{fig:6}
\end{figure}
It should also be pointed out that our earlier theory depended crucially
on there being electronic spin conservation. This feature would be lost
if there were some type of impurity spin-flip scattering process. In
that case, the soft modes that lead to the long-range order parameter
interactions acquire a mass or energy gap, and at sufficiently large
scales the interactions are effectively of short range. The asymptotic
critical phenomena in this case are described by a short-range, local
order parameter field theory with a random mass, or temperature, term.
Such a term is present in the case of a conserved order parameter also,
but due to the long ranged interaction it turns out to be irrelevant
with respect to the nontrivial
Gaussian fixed point. In the absence of the conservation law, however,
the random mass term is relevant with respect to
the Gaussian fixed point analogous to the one discussed here. This
underscores the important role that is played by the order parameter
being conserved in our model. The quantum phase transition in a model
where it is not has been discussed in Ref.\ \onlinecite{afm}.
We finally discuss why some of our results are in disagreement with
Sachdev's\cite{Sachdev} general scaling analysis of quantum phase
transitions with conserved order parameters. For instance, it follows from
our Eqs.\ (\ref{eqs:2.10},\ \ref{eq:2.11b}) that the Wilson ratio, defined as
$W = (m/H)/(C_V/T)$, diverges at criticality rather than being a universal
number as predicted in Ref.\ \onlinecite{Sachdev}. Also, for $2<d<4$ the
function $F_{\gamma}$ in Eq.\ (\ref{eq:2.11b}), for $t=0$ and neglecting
corrections to scaling, is a function of $T/H$, in agreement with
Ref.\ \onlinecite{Sachdev}, but for $d>4$ this is not the case.
The reason for this breakdown of general scaling is that we work
above an upper critical dimensionality, and hence dangerous irrelevant
variables\cite{MEF}
appear that prevent a straightforward application of the results
of Ref.\ \onlinecite{Sachdev} to the present problem. These dangerous
irrelevant variables have to be considered very carefully,
and on a case by case
basis. This caveat is particularly relevant for quantum phase transitions
since they tend to have a low upper critical dimension.
It is well known that a given irrelevant variable can be
dangerous with respect to some observables but not with respect to others.
Specifically, in our case there is a dangerous irrelevant variable
that affects the leading scaling behavior of the magnetization,
but not that of the specific heat coefficient, and this leads to the divergence
of the Wilson ratio. This dangerous irrelevant variable is also the reason
why the exponents $\beta$ and
$\delta$, which describe the critical behavior of the magnetization, remain
dimensionality dependent up to $d=6$, while all other exponents `lock into'
their mean-field values already at $d=4$.
\section{Theoretical Outline}
\label{sec:IV}
Here we sketch the derivation of the results that were presented in Sec.\
\ref{sec:II}. We do so for completeness only, and will be very brief. A
detailed account of the derivation can be found in Ref.\ \onlinecite{fm}
for the disordered case, and in Ref.\ \onlinecite{clean} for the clean case.
Hertz\cite{Hertz} has shown how to derive an LGW functional for a quantum
ferromagnet. One starts by separating the spin-triplet part
of the electron-electron interaction, i.e. the interaction between spin
density fluctuations, from the rest of the action, writing
\begin{mathletters}
\label{eqs:4.1}
\begin{equation}
S = S_0 + S_{int}^{\,(t)}\quad,
\label{eq:4.1a}
\end{equation}
with
\begin{equation}
S_{int}^{\,(t)} = {\Gamma_t\over 2} \int dx\,{\bf n}_s(x)
\cdot {\bf n}_s(x)\quad.
\label{eq:4.1b}
\end{equation}
\end{mathletters}%
Here $S_{int}^{\,(t)}$ is the spin-triplet interaction part of the action,
and $S_0$ contains all other parts, in particular the electron-electron
interaction in all other channels. $\Gamma_t$ is the spin triplet interaction
amplitude, which is related to the Landau parameter $F_0^a$ used above by
$\Gamma_t = -F_0^a/(1 + F_0^a)$, ${\bf n}_s(x)$ is the electron spin density
vector, $x = ({\bf x},\tau)$ denotes space and imaginary time, and
$\int dx = \int d{\bf x}\int_0^{1/T} d\tau$. In the critical region near a
quantum phase transition, imaginary time scales like a length to the power
$z$, and the space-time nature of the integrals in the action accounts for
the system's effective dimension $d+z$ that was mentioned in the Introduction.
Now $S_{int}^{\,(t)}$ is
decoupled by means of a Hubbard-Stratonovich transformation.\cite{Hertz}
The partition function, apart from a noncritical multiplicative constant,
can then be written
\begin{mathletters}
\label{eqs:4.2}
\begin{equation}
Z = \int D[{\bf M}]\ \exp\left(-\Phi[{\bf M}]\right)\quad,
\label{eq:4.2a}
\end{equation}
with the LGW functional
\begin{eqnarray}
\Phi[&&{\bf M}] = {\Gamma_t\over 2} \int dx\ {\bf M}(x)\cdot {\bf M}(x)
\quad\quad\quad\quad\quad\quad
\nonumber\\
&&- \ln \left<\exp\left[-\Gamma_t \int dx\ {\bf M}(x)\cdot
{\bf n}_s(x)\right]\right>_{S_0}\quad.
\label{eq:4.2b}
\end{eqnarray}
\end{mathletters}%
Here $\langle\ldots\rangle_{S_0}$ denotes an average taken with the action
$S_0$. If the LGW functional $\Phi$ is formally expanded in powers of
${\bf M}$, then the term of order ${\bf M}^n$ obviously has a coefficient
that is given by a connected $n$-point spin density correlation function
of the `reference system' defined by the action $S_0$.
At this point we need to remember that our reference system $S_0$ contains
quenched disorder, which has not been averaged over yet.
The $n$-point correlation functions that form the coefficients of the
LGW functional therefore still
depend explicitly on the particular realization of the randomness in the
system. The average over the quenched disorder, which we denote by
$\{\ldots \}_{\rm dis}$, requires averaging the
free energy, i.e. we are interested in $\{\ln Z\}_{\rm dis}$.
This is most easily done by means of the replica trick,\cite{Grinstein}
i.e. one writes
\begin{eqnarray}
\{\ln Z\}_{\rm dis} = \lim_{n\rightarrow 0}\,{1\over n}\,
\left[\{Z^n\}_{\rm dis} - 1\right]
\nonumber\\
= \lim_{n\rightarrow 0}\,{1\over n}\,
\left[\int\prod_{\alpha} D[{\bf M}^{\alpha}]\left\{
e^{-\sum_{\alpha = 1}^n \Phi^{\alpha}[{\bf M}^{\alpha}]}\right\}_{\rm dis}
- 1\right]\ ,
\label{eq:4.3}
\end{eqnarray}
where the index $\alpha$ labels $n$ identical replicas of the system.
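The identity used in the first line of Eq.~(\ref{eq:4.3}) is simply the
expansion $Z^n = e^{\,n\ln Z} = 1 + n\ln Z + O(n^2)$, applied inside the
disorder average before the limit $n\rightarrow 0$ is taken.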
The disorder average is now easily performed by expanding the exponential
in Eq.\ (\ref{eq:4.3}). Upon reexponentiation, the coefficients in the
replicated LGW functional are disorder averaged correlation functions
of the reference system that are cumulants with respect to the disorder
average. The Gaussian part of $\Phi^{\alpha}$ is simply
\begin{eqnarray}
\Phi_{(2)}^{\alpha}[{\bf M}^{\alpha}] = {1\over 2} \int dx_1\,dx_2\,
{\bf M}^{\alpha}(x_1)\biggl[\delta(x_1 - x_2)
\nonumber\\
- \Gamma_t \chi(x_1 - x_2)\biggr]\cdot {\bf M}^{\alpha}(x_2)\quad.
\label{eq:4.4}
\end{eqnarray}
Here $\chi(x)$ is the disorder averaged spin susceptibility or 2-point
spin density correlation function of the
reference system. The cubic term, $\Phi_{(3)}^{\alpha}$, has a coefficient
given by the averaged 3-point spin density correlation function. For the
quartic term, the cumulant nature of these correlation functions leads
to two terms with different replica structures, and higher order terms
have correspondingly more complicated structures.
The next step is to calculate the spin density correlation functions for
the reference system. It now becomes important that we have
kept in our action $S_0$ the electron-electron interaction in all channels
except for the spin-triplet one that has been decoupled in deriving the
LGW functional. At this point our treatment deviates from that of Hertz,
who took the reference ensemble to describe noninteracting electrons. This was
generally considered an innocent approximation that should not have any
qualitative effects. However, this belief was mistaken, since the spin
density correlations of interacting
electrons are qualitatively different from those of noninteracting ones.
The spin susceptibility can be easily calculated in perturbation theory.
The result shows that the static spin susceptibility as a function of
the wavenumber $q$ is nonanalytic at $q=0$. For small $q$ it has the
form
\begin{equation}
\chi(q) = {\rm const} - q^{d-2} - q^2 \quad.
\label{eq:4.5}
\end{equation}
The nonanalyticity is a
consequence of the presence of soft particle-hole excitations in the
spin-triplet channel, and it occurs only in an interacting electron
system. That is, the prefactor of the $q^{d-2}$ term, which we have
suppressed in Eq.\ (\ref{eq:4.5}),
vanishes for vanishing interaction amplitudes.
Renormalization group arguments can then be used to ascertain that
this perturbative result indeed represents the exact behavior of $\chi$ in
the long-wavelength limit.
If one also considers the frequency dependence of $\chi$, one
obtains the Gaussian part of the LGW functional in the form
\begin{mathletters}
\label{eqs:4.6}
\begin{eqnarray}
\Phi^{\alpha}_{(2)}[{\bf M}] &&= {1\over 2} \sum_{\bf q}\sum_{\omega_n}
{\bf M}^{\alpha}({\bf q},\omega_n)\,\left[t_0 + q^{d-2} \right.
\nonumber\\
&&\left. +q^2 + \vert\omega_n\vert/q^2\right]\,
\cdot {\bf M}^{\alpha}(-{\bf q},-\omega_n)\ ,
\label{eq:4.6a}
\end{eqnarray}
where
\begin{equation}
t_0 = 1 - \Gamma_t\,\chi_s({\bf q}\rightarrow 0,\omega_n=0)\quad,
\label{eq:4.6b}
\end{equation}
\end{mathletters}%
is the bare distance from the critical point, and the $\omega_n = 2\pi Tn$
are bosonic Matsubara frequencies.
The Gaussian theory, Eqs.\ (\ref{eqs:4.6}), can be analyzed using standard
renormalization group techniques.\cite{Ma} Such an analysis reveals
the existence of a Gaussian fixed point whose critical properties are
the ones given in Sec.\ \ref{sec:II}. The remaining question is whether
this fixed point is stable with respect to the higher, non-Gaussian terms
in the action. These higher terms also need to be considered in order to
obtain the critical behavior of the magnetization.
A calculation of the higher correlation functions that determine the
non-Gaussian vertices of the field theory shows that the nonanalyticity
that is analogous to the one in the spin susceptibility, Eq.\ (\ref{eq:4.5}),
is stronger and results in a divergence of these correlation functions in
the zero frequency, long-wavelength limit. Specifically, the leading
behavior of
the $n$-point spin density correlation that determines the coefficient of
the term of order ${\bf M}^n$ in the LGW functional, considered at vanishing
frequencies as a function of a representative wavenumber $q$, is
\begin{equation}
\chi^{(n)}(q\rightarrow 0) \sim q^{d+2-2n}\quad.
\label{eq:4.7}
\end{equation}
As a result, the coefficients cannot, as usual, be expanded about zero
wavenumber, and the theory is nonlocal. Despite this unpleasant behavior
of the field theory, it is easy to see by power counting that all of
these terms except for one are irrelevant with respect to the Gaussian
fixed point in all dimensions $d>2$. The one exception is the quartic
cumulant contribution that is the disorder average of the square of the
spin susceptibility, which is marginal in $d=4$, but irrelevant in all
other dimensions. This term is physically of interest, since it represents
the random mass or random temperature contribution that one would expect
in a theory of disordered magnets, and that was mentioned in
Sec.\ \ref{subsec:III.B} above.
The conclusion from these considerations is that apart from logarithmic
corrections to scaling in certain special dimensions,
the Gaussian theory yields
the exact critical behavior, and the only remaining question pertains to
the form of the equation of state. Since the quartic coefficient
$\chi^{(4)}$ is a dangerous irrelevant variable for the magnetization,
this requires a scaling interpretation of the infrared divergence of
$\chi^{(4)}$. In Ref.\ \onlinecite{fm} it has been shown that for scaling
purposes the wavenumber $q$ in Eq.\ (\ref{eq:4.7}) can be identified
with the magnetization $m^{1/2}$. This is physically plausible, since
the divergence stems from an integration over soft modes that are
rendered massive by an external magnetic field. Since a nonzero
magnetization acts physically like a magnetic field, it cuts off the
singularity in Eq.\ (\ref{eq:4.7}). With this interpretation of the
singular coefficients, the term of order $m^n$ in the saddle point solution
of the LGW theory has the structure $m^{n-1}\,(m^{1/2})^{d+2-2n} = m^{d/2}$,
which leads to the equation of state as given in Eq.\ (\ref{eq:2.1}).
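Explicitly, the exponent arithmetic behind this identification reads
\begin{displaymath}
(n-1) + \frac{d+2-2n}{2} = \frac{d}{2}\quad,
\end{displaymath}
independent of $n$, so that all terms of the Landau expansion contribute at
the same order $m^{d/2}$.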
One might wonder why the magnetic fluctuations in the paramagnetic
phase do not also cut off the singularity in Eq.\ (\ref{eq:4.7}),
and thus weaken
or even destroy the effects discussed above. While such a cutoff mechanism
does exist, it enters the theory only via the fluctuations, which are
RG irrelevant with respect to the Gaussian fixed point. It therefore shows
only in the corrections to scaling, but not in the leading critical
behavior.
Again, all of these arguments can be repeated for the case without disorder.
The only changes one encounters pertain to the values of various exponents
due to the different character of the soft modes. This leads to the results
quoted in Sec.\ \ref{sec:II}.
\acknowledgments
This work was supported by the NSF under grant numbers
DMR-96-32978 and DMR-95-10185. Part of the work reviewed here was performed
during the 1995 Workshop on Quantum Phase Transitions at the TSRC in
Telluride, CO. We thank the participants of that workshop for stimulating
discussions. We also would like to acknowledge our collaborators on the
clean case, Thomas Vojta and Rajesh Narayanan.
\section{Introduction}
At HERA energies interactions between almost real photons (of
virtuality $P^2 \approx 0$) and protons can produce jets of high
transverse energy ($E_T^{jet}$). A significant fraction of these jets are
expected to arise from charmed quarks. The presence of a `hard' energy
scale means that perturbative QCD calculations of event properties can
be confronted with experiment, and hence the data have the potential
to test QCD and to constrain the structures of the colliding
particles.
At leading order (LO) two processes are responsible for jet
production. The photon may interact directly with a parton in the
proton or it may first resolve into a hadronic state. In the first
case all of the photon's energy participates in the interaction with a
parton in the proton. In the second case the photon acts as a source
of partons which then scatter off partons in the proton. Examples of
charm production in these processes are shown in Fig.~\ref{f:diagrams}.
The possibility of experimentally separating samples of direct and
resolved photon events was demonstrated in~\cite{Z2}, and in~\cite{Z4}
a definition of resolved and direct photoproduction was introduced
which is both calculable to all orders and measurable. This
definition is based upon the variable
\begin{equation}
x_\g^{OBS} = \frac{ \sum_{jets}E_T^{jet} e^{-\eta^{jet}}}{2yE_e},
\label{xgoeq}
\end{equation}
\noindent where the sum runs over the two jets of highest $E_T^{jet}$.
$x_\g^{OBS}$ is thus the fraction of the photon's energy participating in
the production of the two highest $E_T^{jet}$ jets. This variable is used
to define cross sections in both data and theoretical calculations.
High $x_\g^{OBS}$ events are identified as direct, and low $x_\g^{OBS}$ events as
resolved photoproduction.
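For illustration, Eq.~(\ref{xgoeq}) is straightforward to evaluate from the
jet transverse energies and pseudorapidities. The following minimal sketch
assumes an electron beam energy of $E_e=27.5$~GeV and uses invented jet
values; the function name and the numbers are ours and serve only as an
example.
\begin{verbatim}
import math

def x_gamma_obs(jets, y, E_e=27.5):
    # jets: list of (E_T [GeV], eta); y: inelasticity;
    # E_e: electron beam energy in GeV (27.5 GeV assumed here)
    top2 = sorted(jets, key=lambda j: j[0], reverse=True)[:2]
    return sum(et*math.exp(-eta) for et, eta in top2)/(2.0*y*E_e)

# example: two jets with E_T = 8 GeV at eta = 0.5 and 1.5, y = 0.4
print(x_gamma_obs([(8.0, 0.5), (8.0, 1.5)], y=0.4))   # roughly 0.3
\end{verbatim}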
\begin{figure}
\psfig{figure=diag_3charm.eps,height=10cm,angle=270}
\caption{\label{f:diagrams} \it Examples of charm photoproduction at HERA: a)
Direct; photon-gluon fusion, b) Resolved; single excitation of charm in
the photon, c) Resolved; gluon-gluon fusion. The
charm and anticharm quarks are indicated by the bold lines.}
\end{figure}
Charm-tagged jet cross sections have several advantages over untagged
jet cross sections. Knowledge of the nature of the outgoing parton
reduces the number of contributing subprocesses and thus simplifies
calculations and the possible extraction of parton densities,
including the charm content of the photon and the gluon content of the
proton. Detailed studies of the dynamics of charm production should
provide a stringent test of the QCD calculations. In addition, in the
case that charm decays are fully reconstructed, the outgoing momenta
provide an alternative to calculating the event kinematics from jet
momenta, which could provide a useful {\it model independent}
examination of the uncertainties coming from non-perturbative
fragmentation effects.
Here we briefly examine the event rates and distributions obtainable
with high luminosities using the three charm tagging methods described
below. We use the HERWIG~5.8~\cite{HERWIG} Monte Carlo
simulation, including multiparton interactions~\cite{MI}, along
with simple cuts and smearing to mimic the expected gross features of
detector effects. We define our starting sample by running the $k_T$
jet algorithm~\cite{kt} on the final state particles of HERWIG (after
the decay of charmed particles) and demanding at least two jets with
transverse energy $E_T^{jet} > 6$GeV and absolute pseudorapidity $|\eta^{jet}| <
2$. In addition we demand $P^2 < 4$GeV$^2$ and 135~GeV $< W_{\gamma p}
< 270$~GeV. This is a kinematic region in which dijet cross sections
have already been measured at HERA~\cite{Z4}.
According to HERWIG, the total cross section for heavy flavour ($b$ or $c$)
jets in this kinematic region is 1900~pb (1000~pb) for direct
(resolved) photoproduction, using the GRV94 LO proton parton distribution
set~\cite{GRV94} and the GRV LO photon parton distribution set~\cite{GRVph}.
There is some evidence~\cite{Z4,RG} that these LO calculations may
underestimate the cross section, particularly the resolved cross
section, by a factor of around two. On the other hand, the dominant
LO subprocess for resolved charm production is predicted to be
excitation of charm from the photon. This expectation is not
fully reliable: the charm content in the photon is presently
overestimated in the available parton distribution sets as they assume
only massless quarks. If quark masses are included one would expect
the resolved charm cross section to be considerably {\it smaller} than
the number we are quoting here. Its measurement will be an important
topic in its own right.
\section{Charm tagging methods}
\subsection{$D^*$ tagging method}
Currently, the reconstruction of $D^*$ is the only method used to tag
open charm by the HERA experiments in published data~\cite{dstar}.
$D^*$ are tagged by
reconstructing the $D^0$ produced in the decay
$D^{*\pm} \rightarrow D^0 + \pi^\pm$ and using the mass difference $\Delta(M)$ between the
$D^*$ and the $D^0$.
The overall tagging efficiency for the $D^*$ method is given in
table~1, along with the expected number of events after an integrated
luminosity of 250~pb$^{-1}$. For this study we have demanded
a $D^*$ with $p_T > 1.5$~GeV and $|\eta| < 2$, and assumed that for
these $D^*$ the efficiency of
reconstruction is 50\%. The decay
channels used are $D^* \rightarrow D^0 + \pi \rightarrow ( K + \pi ) +
\pi$ and $D^* \rightarrow D^0 + \pi \rightarrow ( K + \pi \pi \pi ) +
\pi$.
A signal/background ratio of around 2 is estimated, although this (as well
as the $D^*$ reconstruction efficiency) will depend upon
the understanding of the detectors and cuts eventually achieved
in the real analysis, which cannot be simulated here.
\subsection{$\mu$ tagging method}
The capability of the $\mu$ tagging method has been evaluated using a
complete simulation of the ZEUS detector~\cite{ZEUS} based on the
GEANT package \cite{GEANT}. The method itself develops previous work
\cite{GA} in which a measurement of the total charm photoproduction
cross section was obtained in the range $60 < W < 275$~GeV. Muons are
tagged requiring a match between a track in the ZEUS central tracking
detector pointing to the interaction region and a reconstructed
segment in the inner muon detectors (which lie about one metre away, outside
the uranium calorimeter).
The position and the direction of the reconstructed segment are used
to determine the displacements and deflection angles of its
projections on two orthogonal planes with respect to the extrapolated
track. These quantities are distributed according primarily to the
multiple Coulomb scattering within the calorimeter. In comparison the
measurement errors are negligible and have not been taken into
account. With this approximation and a simple model accounting for
the ionization energy loss of the muon through the calorimeter, a
$\chi^2$ has been defined from the four variables.
The cut on the $\chi^2$ has been chosen to keep 90\% of
the events with a reconstructed true muon in large Monte Carlo charm
samples and checked in selected data samples. The results are
contained in table~1.
\subsection{Tagging using secondary vertices}
If a high resolution microvertex detector is installed close to the interaction
region, the tagging of charm by looking for secondary vertices inside jets
becomes practical. For this study we have simulated three example methods
(`A', very tight cuts; `B', looser cuts; `C', very loose cuts) as follows
(a schematic implementation of the selection is sketched after the list):
\begin{itemize}
\item Look at all stable charged tracks
which have transverse momentum
$p_T({\rm track}) > 500$~MeV and $|\eta({\rm track})| < 2$ and
which lie within $\delta R = \sqrt{(\delta\phi)^2+(\delta\eta)^2} < 1.0$
of the centre of either of the two jets, and
\item Assume a (Gaussian) impact parameter
resolution for these tracks of $180~\mu$m in $XY$ and $Z$ independent
of momentum and angle. This corresponds to the design value
of the H1 vertex detector~\cite{H1} for tracks with momentum
500~MeV at $90^o$.
\item Demand at least two tracks which have impact parameters
displaced by 3$\sigma$ (condition A)
or one track with an impact parameter displaced by 3$\sigma$
(condition B) or 2$\sigma$ (condition C) from the primary vertex.
\end{itemize}
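The following minimal sketch illustrates the logic of this selection; the
impact parameter values, the assumed $180~\mu$m resolution, and the toy jets
are purely illustrative and are not taken from the simulation described
above.
\begin{verbatim}
SIGMA = 0.0180    # assumed impact parameter resolution: 180 micron, in cm

def vertex_tag(d0_values, n_req=2, n_sig=3.0):
    # condition A: n_req=2, n_sig=3;  B: n_req=1, n_sig=3;  C: n_req=1, n_sig=2
    n_displaced = sum(abs(d0)/SIGMA > n_sig for d0 in d0_values)
    return n_displaced >= n_req

light_jet = [0.004, -0.011, 0.020, -0.007]  # consistent with resolution only
charm_jet = [0.003, -0.015, 0.080, 0.095]   # two clearly displaced tracks (invented)
print(vertex_tag(light_jet), vertex_tag(charm_jet))   # -> False True
\end{verbatim}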
The results are given in table~1. We note that an enriched sample
of $b$ quarks could be obtained by using very tight tagging conditions
in a microvertex detector.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Tagging & \multicolumn{3}{c|}{ Direct }& \multicolumn{3}{c|}{ Resolved } \\ \cline{2-7}
Method & Efficiency & N(events) & Sig./Bkgd & Efficiency & N(events) & Sig./Bkgd \\ \hline
$D^*$ & 1.4\% & 6500 ( 9\% b) & $\approx 2$& 0.7\% & 1700 ( 4\% b)& $\approx 1$ \\
$\mu$ & 7.3\% & 34000 (20\% b) & 2.0 & 3.4\% & 8400 (10\% b)& 0.3 \\
Vertex A & 2.3\% & 11000 (63\% b) & 76 & 1.0\% & 2500 (34\% b)& 8 \\
Vertex B & 10\% & 47000 (33\% b) & 3.4 & 6.0\% & 15000 (17\% b)& 0.5 \\
Vertex C & 37\% &170000 (17\% b) & 0.9 & 32\% & 79000 (6\% b) & 0.2 \\ \hline
\end{tabular}
\caption{\it Estimated tagging efficiencies,
signal to background ratio and total numbers of expected
signal events for various tagging methods after an integrated
luminosity of 250~pb$^{-1}$. The efficiencies given are the ratios of
good events which are tagged to all good events. `Good events' are
$ep \rightarrow 2$ or more jets with $E_T^{jet} \ge 6$~GeV, $|\eta^{jet}| < 2$,
for virtualities of the exchanged photon less than 4 GeV$^2$ in the
range 135~GeV $< W_{\gamma p} < 270$~GeV and where
one or more of the outgoing partons from the hard subprocess was a
charm or beauty quark. The fraction of the signal events which
are from $b$ quarks is also given.}
\end{table}
\section{Physics Potential and Conclusions}
High luminosity running at HERA will provide large samples of jets
containing heavy quarks. These jets can be identified using muons or
$D^*$ with efficiencies of a few percent and signal-to-background
ratios of around 2. In addition there is the possibility of
identifying the electron channel for semi-leptonic decays, which we
have not considered here but which could be very effective at these
high transverse energies. The presence of a high resolution vertex
detector would enormously enrich the tagging possibilities, allowing
improved signal-to-noise ratios and/or improved efficiencies (up to
around 35\%) depending upon the details of the cuts and
reconstruction. Combining the tagging methods we have studied here can
be expected to give still more flexibility in the experimental
selection and cross section measurement.
With the samples of several tens of thousands of charm-tagged jets
thus obtainable, jet cross sections can be measured over a wide
kinematic range. For the signal events selected by the vertex method
B, various distributions are shown in Fig.2. From the $x_\g^{OBS}$
distribution (Fig.2a) we see that the resolved photon component,
whilst suppressed relative to the direct component compared to the
untagged case~\cite{Z4}, is significant. This component is largely due
to the charm content in the GRV photon parton distribution set.
Measurement of this cross section can be expected to constrain the
charm content of the resolved photon and the implementation of the
$\gamma \rightarrow c\bar{c}$ splitting in the perturbative evolution.
The boson gluon fusion diagram dominates for the high-$x_\g^{OBS}$ range and
this cross section is sensitive to the gluon content of the proton in
the range $0.003 < x_p^{OBS} < 0.1$, where $x_p^{OBS} = \frac{ \sum_{jets}E_T^{jet}
e^{\eta^{jet}}}{2E_p}$ is the fraction of the proton's energy manifest
in the two highest $E_T^{jet}$ jets (Fig.2b). The $M_{JJ}$ distribution is
shown in Fig.2c, where $M_{JJ} = \sqrt{ 2 E_T^{jet1}
E_T^{jet2}[\cosh(\eta^{jet1} - \eta^{jet2}) - \cos(\phi^{jet1}
-\phi^{jet2})]}$ is the dijet invariant mass. For $M_{JJ} > 23$~GeV the
dijet angular distribution~\cite{Z5} $|\cos\theta^{\ast}| =
|\tanh(\frac{\eta^{jet1} - \eta^{jet2}}{2})|$ is unbiased by the
$E_T^{jet}$ cut. As shown in Fig.2d the angular distributions of high and
low $x_\g^{OBS}$ should differ strongly, due to the underlying bosonic
(gluon) or fermionic (quark) exchange processes~\cite{Z5}. The
measurement of such a distribution should confirm that the dominant
charm production process in direct photoproduction was photon-gluon
fusion. In addition it will determine whether excitation of charm from
the incoming particles or gluon-gluon fusion is the dominant
production mechanism in resolved photoproduction.
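As an illustration of the observables just defined, the following sketch
computes $x_p^{OBS}$, $M_{JJ}$ and $|\cos\theta^{\ast}|$ from the two
highest-$E_T^{jet}$ jets; the proton beam energy of 820~GeV and the simple jet
dictionaries are assumptions of the example, not additional selection cuts.
\begin{verbatim}
import math

def dijet_observables(jets, E_p=820.0):
    """Compute x_p^OBS, M_JJ and |cos(theta*)| from the two leading jets.

    Each jet is a dict with E_T (GeV), eta and phi; E_p is the proton
    beam energy in GeV.  The formulas follow the definitions in the text."""
    j1, j2 = sorted(jets, key=lambda j: j["E_T"], reverse=True)[:2]
    x_p = (j1["E_T"] * math.exp(j1["eta"])
           + j2["E_T"] * math.exp(j2["eta"])) / (2.0 * E_p)
    m_jj = math.sqrt(2.0 * j1["E_T"] * j2["E_T"]
                     * (math.cosh(j1["eta"] - j2["eta"])
                        - math.cos(j1["phi"] - j2["phi"])))
    cos_theta_star = abs(math.tanh(0.5 * (j1["eta"] - j2["eta"])))
    return x_p, m_jj, cos_theta_star
\end{verbatim}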
\begin{figure}
\hspace{2cm}
\psfig{figure=chfig2.eps,height=14cm}
\caption{\label{f:dist} \it a) $x_\g^{OBS}$, b) $x_p^{OBS}$, c) $M_{JJ}$, and d) $\cos\theta^{\ast}$.
In a), b) and c), clear circles are the LO direct only, solid dots are the
full sample. The normalisation is to 250~pb$^{-1}$.
In d) the solid squares are the $x_\g^{OBS} < 0.75$ sample and
the clear squares are the $x_\g^{OBS} > 0.75$ sample. Both samples are normalised to
one at $|\cos\theta^{\ast}| = 0$ and the error bars have been scaled to
correspond to the statistical uncertainty expected after 250~pb$^{-1}$.}
\end{figure}
\section*{Acknowledgements}
It is a pleasure to thank U. Karshon, D. Pitzl, A. Prinias,
S. Limentani and the members and conveners of the working group for
discussions and encouragement.
\section{Introduction}
The existence of the top quark, which is required in the Standard Model
as the weak isospin partner of the bottom quark, was firmly established
in 1995 by the CDF\cite{cdf_obs} and D0\cite{d0_obs} experiments at the
Fermilab Tevatron, confirming earlier evidence presented by CDF\cite{
top_evidence_prd,top_kin_prd}. Each experiment reported a roughly 5$\sigma$
excess of $t\bar{t}$ candidate events over background, together with a
peak in the mass distribution for fully reconstructed events. The datasets
used in these analyses were about 60\% of the eventual Run I totals.
With the top quark well in hand and over 100~pb$^{-1}$ of data collected
per experiment, the emphasis has now shifted to a more precise study of the
top quark's properties.
In $p\bar{p}$ collisions at $\sqrt{s}=1.8$~TeV, the dominant top quark
production mechanism is pair production through $q\bar{q}$ annihilation. In
the Standard Model, each top quark decays immediately to a $W$ boson and
a $b$ quark. The observed
event topology is then determined by the decay mode of the two $W$'s. Events
are classified by the number of $W$'s that decay leptonically. In about 5\%
of events, both $W$'s decay to $e\nu$ or $\mu\nu$ (the ``dilepton channel''),
yielding a final state with two isolated, high-$P_T$ charged leptons,
substantial missing transverse energy (${\not}{E_T}$) from the undetected
energetic neutrinos, and two $b$ quark jets. This final state is extremely
clean but suffers from a low rate. The ``lepton + jets'' final state
occurs in the 30\% of $t\bar{t}$ decays where one $W$ decays to leptons
and the other decays into quarks. These events contain a single high-$P_T$
lepton, large ${\not}{E_T}$, and (nominally) four jets, two of which are from
$b$'s. Backgrounds in this channel can be reduced to an acceptable level
through $b$-tagging and/or kinematic cuts, and the large branching ratio
to this final state makes it the preferred channel for studying the top
quark at the Tevatron.
The ``all-hadronic'' final state occurs when both $W$'s decay to
$q\bar{q}^{\prime}$, which happens 44\% of the time. This final state contains
no leptons, low ${\not}{E_T}$, and six jets, including two $b$ jets. Although
the QCD backgrounds in this channel are formidable, extraction of the signal
is possible through a combination of $b$-tagging and kinematic cuts. Finally,
approximately 21\% of $t\bar{t}$ decays are to final states containing
$\tau$'s. Backgrounds to hadronic $\tau$ decays are large, and while signals
have been identified, I will not discuss these analyses here.
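These fractions follow, to a good approximation, from the leading-order $W$
branching ratios of roughly $1/9$ per lepton flavour and $6/9$ to quark pairs:
\begin{displaymath}
\left(\frac{2}{9}\right)^2 \simeq 5\% \;\;\mbox{(dilepton)}, \quad
2\cdot\frac{2}{9}\cdot\frac{6}{9} \simeq 30\% \;\;\mbox{(lepton + jets)}, \quad
\left(\frac{6}{9}\right)^2 \simeq 44\% \;\;\mbox{(all-hadronic)},
\end{displaymath}
with the remaining $\simeq 21\%$ involving at least one $\tau$.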
This paper is organized as follows. Section~\ref{sec-xsection} discusses
the measurement of the $t\bar{t}$ production cross section. The measurement
of the top quark mass is described in Section~\ref{sec-mass}. Kinematic
properties of $t\bar{t}$ production are described in Section~\ref{sec-kine}.
The measurement of the top quark branching ratio to $Wb$ and the CKM matrix
element $V_{tb}$ is described in Section~\ref{sec-Vtb}. Section~\ref{sec-rare}
discusses searches for rare or forbidden decays of the top.
Section~\ref{sec-Wpol} discusses a measurement of the $W$ polarization in
top decays. Section~\ref{sec-concl} concludes.
\section{Production Cross Section}
\label{sec-xsection}
The measurement of the top quark production cross section $\sigma_{t\bar{t}}$
is of interest for a number of
reasons. First, it checks QCD calculations of top production, which have
been performed by several groups\cite{xsec-berger,xsec-catani,xsec-laenen}.
Second, it provides an important benchmark for estimating
top yields in future high-statistics experiments at the Tevatron and LHC.
Finally, a value of the cross section significantly different from the QCD
prediction
could indicate nonstandard production or decay mechanisms, for example
production through the decay of an intermediate high-mass state or decays
to final states other than $Wb$.
\subsection{CDF Measurements of $\sigma_{t\bar{t}}$}
The CDF collaboration has measured the $t\bar{t}$ production cross section
in the dilepton and lepton + jets modes, and in addition has recently
performed a measurement in the all-hadronic channel. The dilepton and
lepton + jets analyses begin with a common inclusive lepton sample,
which requires an isolated electron or muon with $P_T > 20$~GeV and
$|\eta|<1$. The integrated luminosity of this sample is 110~pb$^{-1}$.
For the dilepton analysis, a second lepton is required with $P_T>20$~GeV.
The second lepton must have an opposite electric charge to the primary lepton
and may satisfy a looser set of identification cuts. In addition, two
jets with $E_T>10$~GeV are required, and the ${\not}{E_T}$ must be greater
than 25~GeV. For the case $25 < {\not}{E_T} < 50$~GeV, the ${\not}{E_T}$ vector must
be separated from the nearest lepton or jet by at least 20 degrees. This
cut rejects backgrounds from $Z\rightarrow \tau\tau$ decays followed
by $\tau\rightarrow (e~{\rm or}~\mu)$ (where the
${\not}{E_T}$ tends to lie along the lepton direction) and from events containing
poorly measured jets (where the ${\not}{E_T}$ tends to lie along a jet axis).
Events where the dilepton invariant mass lies between 75 and 105 GeV are
removed from the $ee$ and $\mu\mu$ channels as $Z$ candidates. In addition,
events containing a photon with $E_T > 10$ GeV are removed if the $ll\gamma$
invariant mass falls within the $Z$ mass window. This ``radiative $Z$'' cut
removes one event from the $\mu\mu$ channel and has a negligible effect
on the $t\bar{t}$ acceptance and backgrounds. Nine dilepton candidates
are observed: one $ee$, one $\mu\mu$, and seven $e\mu$ events. Including
a simulation of the trigger acceptance, the expected division of dilepton
signal events is 58\% $e\mu$, 27\% $\mu\mu$, and 15\% $ee$, consistent with
the data. It is also interesting to note that four of the nine
events are $b$-tagged, including two double-tagged events. Although no
explicit $b$-tag requirement is made in the dilepton analysis, the fact that
a large fraction of the events are tagged is powerful additional evidence
of $t\bar{t}$ production.
Backgrounds in the dilepton channel arise from Drell-Yan production of
lepton pairs, diboson production, $Z\rightarrow\tau\tau$, $b\bar{b}$, and
fakes. These backgrounds are estimated through a combination of data and
Monte Carlo. The total background in the $ee + \mu\mu$ channels is
$1.21\pm 0.36$ events, and is $0.76\pm 0.21$ events in the $e\mu$ channel.
Event yields, backgrounds, and estimated $t\bar{t}$ contributions are
summarized in Table~\ref{tab-cdf_dil}.
When these numbers are combined with the $t\bar{t}$ acceptance in the
dilepton mode of $0.78\pm0.08$\% (including branching ratios), and using
CDF's measured top mass of 175~GeV (described below), the resulting
cross section is $\sigma_{t\bar{t}} = 8.3^{+4.3}_{-3.3}$~pb.
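The central value can be checked with the simple counting estimate, ignoring
the likelihood treatment of the systematic uncertainties:
\begin{displaymath}
\sigma_{t\bar{t}} \simeq \frac{N_{obs}-N_{bkgd}}{A\cdot\int{\cal L}\,dt}
 = \frac{9 - 2.0}{0.0078\times 110\;{\rm pb}^{-1}} \approx 8.2\;{\rm pb},
\end{displaymath}
consistent with the quoted result.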
\begin{table}[h]
\begin{center}
\caption{Summary of event yields and backgrounds in the CDF dilepton
analysis. Expected $t\bar{t}$ contributions are also shown.}
\label{tab-cdf_dil}
\begin{tabular}{ccc}
\hline
\hline
Background & $ee, \mu\mu$ & $e\mu$ \\
\hline
Drell-Yan & $0.60\pm0.30$ & --- \\
$WW$ & $0.16\pm0.07$ & $0.20\pm0.09$ \\
fakes & $0.21\pm0.17$ & $0.16\pm0.16$ \\
$b\bar{b}$ & $0.03\pm0.02$ & $0.02\pm0.02$ \\
$Z\rightarrow\tau\tau$ & $0.21\pm0.08$ & $0.38\pm0.11$ \\ \hline
Total bkgd. & $1.21\pm0.36$ & $0.76\pm0.21$ \\ \hline
Expected $t\bar{t}$, & 2.6, 1.6, 1.0 & 3.9, 2.4, 1.5 \\
$M_{top}=160,175,190$ & & \\ \hline
Data (110 pb$^{-1}$) & 2 & 7 \\
\hline
\end{tabular}
\end{center}
\end{table}
The lepton + jets cross section analysis begins with the common inclusive
lepton sample described above. An inclusive $W$ sample is selected from
this sample by requiring ${\not}{E_T} > 20$~GeV. Jets are clustered in a cone
of $\Delta R \equiv \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2} = 0.4$, and at least
three jets with $E_T > 15$~GeV and $|\eta|<2$ are required in the $t\bar{t}$
signal region. (These jet energies are not corrected for detector effects,
out-of-cone energy, the underlying event, etc. Such corrections
are applied later, in the mass analysis. The average correction factor is
about 1.4.) $Z$ candidates are removed as before, and the lepton is
required to pass an appropriate trigger. Finally, the event is required
not to have been accepted by the dilepton analysis above. The dilepton
and lepton + jets samples are therefore nonoverlapping by construction.
There are 324 $W + \ge 3$-jet events in this sample.
Signal to background in this sample is approximately 1:4. CDF employs
two $b$-tagging techniques to reduce background.
The first technique identifies
$b$ jets by searching for a lepton from the decay $b\rightarrow l\nu X$ or
$b\rightarrow c\rightarrow l\nu X$. Since this lepton typically has
a lower momentum than the lepton from the primary $W$ decay,
this technique is known as the ``soft lepton tag'' or SLT. In addition
to tagging soft muons, as in the D0 analysis, CDF also identifies soft
electrons. The second, more powerful,
technique exploits the finite lifetime of the $b$ quark by searching for
a secondary decay vertex. Identification of these vertices
is possible because of the excellent impact parameter resolution
of CDF's silicon microstrip vertex detector, the SVX\cite{cdf_svx,cdf_svxp}.
This technique is known as the ``SVX tag.''
The SLT algorithm identifies electrons and muons from semileptonic $b$
decays by matching central tracks with electromagnetic energy clusters
or track segments in the muon chambers. To maintain acceptance for leptons
coming from both direct and sequential decays, the $P_T$ threshold
is kept low (2~GeV). The fiducial region for SLT-tagged leptons is
$|\eta|<1$. The efficiency for SLT-tagging a $t\bar{t}$ event is $20\pm2$\%,
and the typical fake rate per jet is about 2\%. The details of the SLT
algorithm are discussed in Ref.~\cite{top_evidence_prd}.
The SVX algorithm begins by searching for displaced vertices containing
three or more tracks which satisfy a ``loose'' set of track quality
requirements. Loose track requirements are possible because the probability
for three tracks to accidentally intersect at the same displaced space
point is extremely low. If no such vertices are found, two-track vertices
that satisfy more stringent quality cuts are accepted. A jet is defined to
be tagged if it contains a secondary vertex whose transverse displacement
(from the primary vertex) divided by its uncertainty is greater than three.
The efficiency for SVX-tagging a $t\bar{t}$ event is $41\pm 4$\%, nearly
twice the efficiency of the SLT algorithm, while the fake rate is
only $\simeq 0.5$\% per jet. The single largest source of inefficiency
comes from the fact that the SVX covers only about 65\% of the Tevatron's
luminous region. SVX-tagging is CDF's primary $b$-tagging technique.
Table~\ref{tab:cdf_ljets_btag} summarizes the results of tagging in
the lepton + jets sample. The signal region is $W +\ge3$ jets, where there
are 42 SVX tags in 34 events and 44 SLT tags in 40 events, on backgrounds
of $9.5\pm1.5$ and $23.9\pm2.8$ events respectively. SVX backgrounds are
dominated by real heavy flavor production ($Wb\bar{b}$, $Wc\bar{c}$, $Wc$),
while SLT backgrounds are dominated by fakes. Monte Carlo calculations are
used to determine the fraction of observed $W$+jets events that contain
a heavy quark, and then the observed tagging efficiency is used to derive
the expected number of tags from these sources. Fake rates are measured
in inclusive jet data. Backgrounds are corrected iteratively for the assumed
$t\bar{t}$ content of the sample.
When combined with the overall $t\bar{t}$ acceptance in the lepton + jets
mode, $\sigma_{t\bar{t}}$ is measured to be $6.4^{+2.2}_{-1.8}$~pb using SVX
tags, and $8.9^{+4.7}_{-3.8}$~pb using SLT tags.
\begin{table}[htbp]
\begin{center}
\caption{Summary of results from the CDF lepton + jets $b$-tag analysis.
The expected $t\bar{t}$ contributions are calculated using CDF's measured
combined cross section.}
\begin{tabular}{lccc}
\hline
\hline
& $W$ + 1 jet & $W$+2 jets & $W+\ge$3 jets \\ \hline
Before tagging & 10,716 & 1,663 & 324 \\ \hline
SVX tagged evts & 70 & 45 & 34 \\
SVX bkgd & $70\pm11$& $32\pm4$ & $9.5\pm1.5$ \\
Expected $t\bar{t}$ & $0.94\pm0.4$ & $6.4\pm2.4$ & $29.8\pm8.9$
\\ \hline
SLT tagged evts & 245 & 82 & 40 \\
SLT bkgd & $273\pm24$& $80\pm6.9$ & $23.9\pm2.8$ \\
Expected $t\bar{t}$ & $1.1\pm0.4$ & $4.7\pm1.6$ & $15.5\pm5.3$ \\
\hline
\end{tabular}
\label{tab:cdf_ljets_btag}
\end{center}
\end{table}
CDF has also performed a measurement of $\sigma_{t\bar{t}}$ in the
all-hadronic channel, which nominally contains six jets, no leptons,
and low ${\not}{E_T}$.
Unlike in the case of lepton + jets, $b$-tagging
alone is not sufficient to overcome the huge backgrounds from QCD multijet
production. A combination of kinematic cuts and SVX $b$-tagging is therefore
used.
The initial dataset is a sample of about 230,000 events containing
at least four jets with $E_T>15$~GeV and $|\eta|<2$. Signal to background
in this sample is a forbidding 1:1000, so a set of kinematic cuts is
applied. The jet multiplicity is required to be $5\le N_{jets}\le 8$,
and the jets are required to be separated by $\Delta R\ge 0.5$. Additionally,
the summed transverse energy of the jets is required to be greater than
300 GeV and to be ``centrally'' deposited: $\sum E_T(jets)/\sqrt{\hat{s}} >
0.75$, where $\sqrt{\hat{s}}$ is the invariant mass of the multijet system.
Finally, the $N_{jet}-2$ subleading jets are required to pass an aplanarity
cut. The resulting sample of 1630 events has a signal to background of about
1:15. After the requirement of an SVX tag, 192 events remain.
The tagging background is determined by applying the SVX tagging probabilities
to the jets in the 1630 events selected by the analysis prior to tagging.
The probabilities are measured from multijet events and are parametrized
as a function of jet $E_T$, $\eta$, and SVX track multiplicity. The
probability represents the fraction of jets which are tagged in the absence
of a $t\bar{t}$ component, and includes real heavy flavor as well as mistags.
Applying the tagging probabilities to the jets in the 1630 events remaining
after kinematic cuts, a predicted background of $137\pm 11$ events is obtained,
compared to the 192 tagged events observed.
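Schematically, this background estimate amounts to summing the per-jet tagging
probabilities over the pre-tag sample. The sketch below assumes a user-supplied
parametrization \texttt{tag\_prob(jet)}; it is not the CDF implementation, only
an illustration of the counting.
\begin{verbatim}
def expected_tagged_events(events, tag_prob):
    """Estimate the number of events with at least one tagged jet,
    in the absence of a ttbar component.

    `events` is a list of jet lists; `tag_prob(jet)` returns the measured
    probability to tag a jet of given E_T, eta and SVX track multiplicity."""
    n_bkgd = 0.0
    for jets in events:
        p_untagged = 1.0
        for jet in jets:
            p_untagged *= 1.0 - tag_prob(jet)
        n_bkgd += 1.0 - p_untagged   # probability of >= 1 tag in the event
    return n_bkgd
\end{verbatim}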
The efficiency to SVX-tag a $t\bar{t}$ event in the all-hadronic mode
is $47\pm5$\%. This value is slightly larger than the lepton + jets case
due to the presence of additional charm tags from $W\rightarrow c\bar{s}$.
Combining this value with the acceptance for the all-hadronic mode, including
the efficiency of the multijet trigger and the various kinematic cuts,
CDF obtains a $t\bar{t}$ cross section in this channel of
$10.7^{+7.6}_{-4.0}$~pb.
The large background in the all-hadronic channel makes it desirable to
have some independent cross check that the observed excess of events is
really due to $t\bar{t}$ production. The events in this sample with exactly
six jets can be matched to partons from the process $t\bar{t}\rightarrow
WbW\bar{b}\rightarrow jjbjj\bar{b}$, and can be fully reconstructed.
A plot of the reconstructed top mass for these events is shown in
Fig.~\ref{fig:all_had_mass}. The events clearly display a peak at the
value of the top mass measured in other channels. This analysis impressively
illustrates the power of SVX-tagging to extract signals from
very difficult environments.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{all_had_mass_plot.ps}
\caption{Reconstructed top mass obtained from a constrained fit to SVX-tagged
events in the CDF all-hadronic analysis.}
\label{fig:all_had_mass}
\end{center}
\end{figure}
The combined $t\bar{t}$ cross section is obtained using the number of events,
backgrounds, and acceptances for each of the channels. The calculation is
done using the likelihood technique described in Ref.~\cite{top_evidence_prd}.
Acceptances are calculated using $M_{top}=175$~GeV. The likelihood method
takes account of correlated uncertainties such as the luminosity uncertainty,
acceptance uncertainty from initial state radiation, etc. The combined
$t\bar{t}$ production cross section for $M_{top}=175$~GeV is
\begin{equation}
\sigma_{t\bar{t}} = 7.7^{+1.9}_{-1.6}~{\rm pb}~{\rm (CDF~Prelim.)}
\end{equation}
where the quoted uncertainty includes both statistical and systematic
effects. Fig.~\ref{fig:xsec-lepstyle} shows the individual and combined
CDF measurements together with the theoretical central value and spread.
All measurements are in good agreement with theory, though all fall on
the high side of the prediction. It is perhaps noteworthy that the single
best measurement, from SVX-tagging in the lepton+jets mode, is the one
closest to theory.
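The combination can be illustrated with a simplified version of the likelihood,
treating the channels as independent Poisson counting experiments and omitting
the correlated systematic terms included in the full CDF fit; the structure
below is a sketch, not the collaboration's code.
\begin{verbatim}
import math

def combined_nll(sigma, channels):
    """Negative log-likelihood for a common cross section sigma (pb).

    Each channel is a dict with the observed count `n`, the expected
    background `b`, the acceptance times branching ratio `acc`, and the
    integrated luminosity `lumi` in pb^-1."""
    nll = 0.0
    for ch in channels:
        mu = sigma * ch["acc"] * ch["lumi"] + ch["b"]  # expected events
        nll += mu - ch["n"] * math.log(mu)             # Poisson, up to const.
    return nll

# The best-fit cross section minimizes combined_nll over sigma,
# e.g. by scanning sigma on a fine grid.
\end{verbatim}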
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{xsec_lepstyle_plot.ps}
\caption{CDF values of $\sigma_{t\bar{t}}$ for individual channels and for the
combined measurement. The band represents the central value and spread of
the theoretical value from three recent calculations for $M_{top}=175$~GeV.}
\label{fig:xsec-lepstyle}
\end{center}
\end{figure}
\subsection{D0 Measurements of $\sigma_{t\bar{t}}$}
The D0 collaboration has measured $\sigma_{t\bar{t}}$ in both the dilepton
($ee$, $e\mu$, and $\mu\mu$) and lepton + jets channels. The dilepton analysis
is a straightforward counting experiment. Two high-$P_T$ leptons are
required, as well as two jets. Cosmic ray and $Z$ candidates are removed.
In the $ee$ and $e\mu$ channels, a cut is
also placed on the missing transverse energy. Finally, a cut on $H_T$, the
transverse energy of the jets plus the leading electron (or the jets only,
in the case of dimuon events) is applied to reduce backgrounds from $W$ pairs,
Drell-Yan, etc.
The largest acceptance is in the $e\mu$ channel,
which also has the lowest backgrounds. Three candidate events are observed
in this channel on a background of $0.36\pm 0.09$ events. For $M_{top}$ = 180
GeV, $1.69\pm 0.27$ signal events are expected in this channel.
One event is
observed in each of the $ee$ and $\mu\mu$ channels on backgrounds of
$0.66\pm 0.17$ and $0.55\pm 0.28$ events respectively. For $M_{top}$ = 180
GeV, one expects $0.92\pm0.11$ and $0.53\pm 0.11$ $t\bar{t}$ events in these two
channels.
The D0 measurement of $\sigma_{t\bar{t}}$ in the lepton + jets channel
makes use of two different approaches to reducing the background from
$W$+jets and other sources: topological/kinematic cuts, and $b$-tagging.
The first approach exploits the fact that the large top quark mass
gives rise to kinematically distinctive events:
the jets tend to be more energetic and more central than jets in
typical background events, and the events as a whole are more spherical.
Top-enriched samples can therefore be selected
with a set of topological and kinematic cuts. (For some earlier work on
this subject, see Refs.~\cite{top_kin_prd} and \cite{cdf_Ht}.) In particular,
the total hadronic activity in the event, $H_T \equiv \sum E_T(jets)$,
can be combined with the aplanarity of the $W$ + jets
system to reduce
backgrounds substantially. Cuts on both of these variables were used in
the original D0 top discovery analysis\cite{d0_obs}, and these cuts have
now been reoptimized on Monte Carlo samples for use in the cross section
measurement. A third kinematic variable with discriminating power, the
total leptonic transverse energy ($E_T^L\equiv E_T^{lep} + {\not}{E_T}$) is
also used. Events are required to have four jets with $E_T>15$~GeV and
$|\eta| < 2.$
In 105.9~pb$^{-1}$ of $e,\mu$ + jet data, a total of 21 candidate
events are observed, on a background of 9.23$\pm$ 2.83 events that is dominated
by QCD production of $W$ + jets. For comparison, 19$\pm$ 3 (13 $\pm$ 2)
events are expected
for $M_{top} = 160$~(180) GeV, again using the theoretical cross section from
Ref.~\cite{xsec-laenen}.
A second D0 approach to the lepton + jets cross section measurement makes
use of $b$-tagging via soft muon tags. Soft muons are expected to be produced
in $t\bar{t}$ events through the decays $b\rightarrow\mu X$ and
$b\rightarrow c\rightarrow\mu X$. Each $t\bar{t}$ event contains two $b$'s,
and ``tagging muons'' from their semileptonic decays are detectable in
about 20\% of $t\bar{t}$ events. Background events, by contrast, contain a low
fraction of $b$~quarks and thus produce soft muon tags at only the $\sim 2$\%
level. Events selected
for the lepton + jets + $\mu$-tag analysis are required to contain an $e$
or $\mu$ with $E_T$ ($P_T$ for muons) $>~20$ GeV, and to have $|\eta|<2.0~
(1.7)$ respectively. At least three jets are required with $E_T> 20$~GeV
and $|\eta|<2.$ The ${\not}{E_T}$ is required to be at least 20~GeV (35 GeV
if the ${\not}{E_T}$ vector is near the tagging muon in an $e$+jets event), and
in $\mu$ + jets events is required to satisfy certain topological cuts
aimed at rejecting backgrounds from fake muons. Loose cuts on the
aplanarity and $H_T$ are also applied. Finally, the tagging muon is
required to have $P_T > 4$~GeV and to be near one of the jets, as would
be expected in semileptonic $b$ decay. In 95.7 pb$^{-1}$ of $e, \mu$ +
jet data with a muon tag, 11 events are observed on a background ($W$ + jets,
fakes, and residual $Z$'s) of 2.58$\pm$0.57 events. Theory\cite{xsec-laenen}
predicts 9.0$\pm$2.2 and 5.2$\pm$1.2 events for $M_{top}$ = 160 and 180 GeV
respectively. Figure~\ref{fig-d0_njets_mutag} shows the clear excess of
events in the signal region compared to the top-poor regions of one and
two jets.
Table~\ref{tab-d0_xsec_summary} summarizes event yields and backgrounds
in the D0 cross
section analysis. A total of 37 events is observed in the various
dilepton and lepton + jets channels on a total background of 13.4$\pm$3.0
events. The expected contribution from $t\bar{t}$ ($M_{top}$ = 180~GeV) is
21.2$\pm$ 3.8 events.
\begin{figure}[ht]
\begin{center}
\epsfysize=3.5in
\leavevmode\epsffile{d0_njets_mutag.ps}
\end{center}
\caption{Number of observed ($e,\mu$) + jets events with a soft muon tag
compared to background predictions, as a function of jet multiplicity.
Note the excess in the $t\bar{t}$ signal region with $W + \ge 3$ jets.}
\label{fig-d0_njets_mutag}
\end{figure}
\begin{table}[h]
\begin{center}
\caption{Summary of event yields and backgrounds in the D0 cross section
analysis. Expected $t\bar{t}$ contributions are calculated for
$M_{top}=180$~GeV.}
\label{tab-d0_xsec_summary}
\begin{tabular}{ccccc}
\hline
\hline
Channel & $\int{\cal L}\, dt$ & Bkgd. & Expected $t\bar{t}$ & Data \\
\hline
$e\mu$ & 90.5 & $0.36\pm0.09$ & $1.69\pm0.27$ & 3 \\
$ee$ & 105.9 & $0.66\pm0.17$ & $0.92\pm0.11$ & 1 \\
$\mu\mu$& 86.7 & $0.55\pm0.28$ & $0.53\pm0.11$ & 1 \\ \hline
$e$+jets& 105.9 & $3.81\pm1.41$ & $6.46\pm1.38$ & 10 \\
$\mu$+jets& 95.7 & $5.42\pm2.05$ & $6.40\pm1.51$ & 11 \\
$e$+jets/$\mu$& 90.5 & $1.45\pm0.42$ & $2.43\pm0.42$ & 5 \\
$\mu$+jets/$\mu$& 95.7 & $1.13\pm0.23$ & $2.78\pm0.92$ & 6 \\ \hline
Total & & $13.4\pm3.0$ & $21.2\pm3.8$ & 37 \\
\hline
\end{tabular}
\end{center}
\end{table}
When combined with a Monte Carlo calculation of the $t\bar{t}$ acceptance, these
numbers can be converted into a measurement of the cross section.
Figure~\ref{fig-d0_xsec} shows the cross section derived from D0 data as
a function of $M_{top}$. For D0's measured top mass of 170 GeV, described
below, the measured $t\bar{t}$ cross section is
\begin{equation}
\sigma_{t\bar{t}} = 5.2\pm 1.8~{\rm pb}~{\rm (D0~Prelim.)},
\end{equation}
in good agreement with theory.
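Schematically, the conversion from counts to a cross section is
\begin{displaymath}
\sigma_{t\bar{t}} = \frac{N_{obs}-N_{bkgd}}
{\sum_i \epsilon_i\,{\cal B}_i \int{\cal L}_i\,dt},
\end{displaymath}
where the sum runs over the channels of Table~\ref{tab-d0_xsec_summary},
$\epsilon_i{\cal B}_i$ is the acceptance times branching ratio for channel $i$
at the assumed top mass, and $\int{\cal L}_i\,dt$ is the corresponding
integrated luminosity; the mass dependence of the acceptances produces the
slope seen in Fig.~\ref{fig-d0_xsec}.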
\begin{figure}[ht]
\begin{center}
\epsfysize=3.5in
\leavevmode\epsffile{d0_xsec_vs_mtop.ps}
\end{center}
\caption{D0 measurement of the $t\bar{t}$ production cross section as a
function of $M_{top}$.}
\label{fig-d0_xsec}
\end{figure}
\section{Top Quark Mass Measurement}
\label{sec-mass}
The top quark mass is a fundamental parameter in the Standard Model.
It plays an important role in radiative corrections that relate electroweak
parameters, and when combined with other precision electroweak data can
be used to probe for new physics. In particular, the relationship between
$M_W$ and $M_{top}$ displays a well-known dependence on the mass
of the Higgs. A precise measurement of the top mass is therefore a high
priority of both experiments.
The primary method for measuring the top mass at the Tevatron is a constrained
fit to lepton + 4-jet events arising from the process $t\bar{t}\rightarrow
WbW\bar{b}\rightarrow l\nu jjb\bar{b}$.
In these events, the observed
particles and ${\not}{E_T}$ can be mapped one-to-one to partons from the
$t\bar{t}$ decay. However, there are 12 possible
jet--parton assignments. The number of jet combinations is reduced to six
if one $b$-tag is present, and to two if two $b$'s are tagged.
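For reference, the combination counting quoted above is simple permutation
arithmetic: assigning four jets to the partons $(b_{lep},\,b_{had},\,q,\,\bar{q})$
gives
\begin{displaymath}
\frac{4!}{2} = 12, \qquad
2\times\frac{3!}{2} = 6 \;\;\mbox{(one $b$ tag)}, \qquad
2! = 2 \;\;\mbox{(two $b$ tags)},
\end{displaymath}
where the factor of $1/2$ reflects the interchange symmetry of the two jets
from the hadronic $W$, and a tagged jet may be assigned to either $b$.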
To select the best combination, both experiments use a likelihood
method that exploits the many constraints in the system. Each event is
fitted individually to the hypothesis that three of the jets come from
one $t$ or $\bar{t}$ through its decay to $Wb$, and that the lepton,
${\not}{E_T}$, and the remaining jet come from the other $t$ or $\bar{t}$
decay. The fit is performed for each jet combination, with the requirement
that any tagged jets must be assigned as $b$ quarks in the fit. Each
combination has a two-fold ambiguity in the longitudinal momentum of the
neutrino. CDF chooses the solution with the best $\chi^2$, while D0
takes a weighted average of the three best solutions. In both cases,
solutions are required to satisfy a $\chi^2$ cut. The result is
a distribution of the best-fit top mass for each of the candidate events.
The final value for the top mass is extracted by fitting this distribution
to a set of Monte Carlo templates for $t\bar{t}$ and background. A likelihood
fit is again used to determine which set of $t\bar{t}$ templates best fits
the data. Because this measurement involves precision jet spectroscopy,
both experiments have developed sophisticated jet energy corrections,
described below, that relate measured jet energies to parton
four-vectors. Uncertainties associated with these corrections are the
largest source of systematic error.
Measurements of the top mass in other channels (dilepton, all-hadronic, \dots)
have larger uncertainties, and give results consistent with the lepton + jets
measurements. These channels will not be discussed here. I now describe the
CDF and D0 measurements in more detail.
\subsection{D0 Measurement of $M_{top}$}
The D0 top mass measurement begins with event selection cuts similar to
those used in the lepton + jets cross section analysis, with two important
differences. First, all events are required to have at least four jets with
$E_T > 15$~GeV and $|\eta|<2$. (Recall that in the cross section analysis,
soft-muon tagged events were allowed with only three jets.) Second and
more importantly, the cut on the total hadronic $E_T$ ($\equiv H_T$), which
proved extremely useful for selecting a high-purity sample in the cross
section analysis, is replaced by a new ``top likelihood'' cut that combines
several kinematic variables. A straightforward $H_T$ cut would inject
significant bias into the analysis by pushing both background and signal
distributions toward higher values of $M_t$ and making background look like
signal. The top likelihood variable combines the ${\not}{E_T}$, the aplanarity
of the $W$ + jets, the fraction of the $E_T$ of the $W$ + jets system that
is carried by the $W$, and the $E_T$-weighted rms $\eta$ of the $W$ and jets.
The distributions for each of these variables are determined from $t\bar{t}$
Monte Carlo events, and the probabilities are combined such that the bias
of the fitted mass distributions is minimized. The top likelihood distributions
for signal and background Monte Carlo events are shown in
Fig.~\ref{fig:D0_top_like}. The advantages of this variable are demonstrated
in Fig.~\ref{fig:D0_templates}, which compares fitted mass distributions
for signal and background Monte Carlo events after the likelihood cut and
after the cross section ($H_T$) cuts. The top likelihood cut gives a
significantly smaller shift in the fitted distributions. This is particularly
true in the case of background events, where the cross section cuts ``sculpt''
the background distribution into a shape that looks rather top-like.
The reduction of this source of bias is particularly important since
the D0 top mass sample is nearly 60\% background. A total
of 34 events pass the selection cuts, of which 30 have a good fit to the
$t\bar{t}$ hypothesis.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{D0_top_like.ps}
\caption{Top likelihood distributions for $e$+jets signal and
background Monte Carlo events.
The D0 top mass analysis uses events with top likelihood $> 0.55$.}
\label{fig:D0_top_like}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.2in
\epsffile{D0_templates.ps}
\caption{Fitted mass distributions for background events and $t\bar{t}$ events
of various masses in the D0 analysis. Histogram: parent sample. Dot-dash:
after top likelihood cut. Dots: after cross-section cuts. Note the smaller
bias introduced by the likelihood cut.}
\label{fig:D0_templates}
\end{center}
\end{figure}
For reconstructing the top mass, one desires to know the four-momenta of
the underlying partons as accurately as possible. In practice one observes
jets, usually reconstructed with a fixed-cone algorithm, and several effects
can complicate the connection between these jets and their parent partons.
Calorimeter nonlinearities, added energy from multiple interactions and the
underlying event, uranium noise in the calorimeter,
and energy that falls outside of the jet clustering cone all must be accounted
for. The D0 jet corrections are derived from an examination of events in
which a jet recoils against a highly electromagnetic object (a ``$\gamma$'').
The energy of the ``$\gamma$'' is well-measured in the electromagnetic
calorimeter, whose energy scale is determined from $Z\rightarrow ee$ events.
It is then assumed that the component of the ${\not}{E_T}$ along the jet axis
(${\not}{E_T}_{,\parallel}$) is
due entirely to mismeasurement of the jet energy, and a correction factor for
the recoil jet energy is obtained by requiring ${\not}{E_T}_{,\parallel}$ to
vanish. The correction factors are derived as a function of jet $E_T$ and
$\eta$.
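In this picture the correction amounts, event by event, to rescaling the recoil
jet so that the projected missing transverse energy vanishes; schematically,
\begin{displaymath}
E_T^{corr} \simeq E_T^{meas} + {\not}{E_T}_{,\parallel}
 = \left(1 + \frac{{\not}{E_T}_{,\parallel}}{E_T^{meas}}\right)E_T^{meas},
\end{displaymath}
and it is the average of the factor in parentheses, in bins of jet $E_T$ and
$\eta$, that provides the correction.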
These jet corrections are ``generic'' and are used in many D0 analyses,
including the $t\bar{t}$ cross section analysis. Additional corrections are
applied for the top mass analysis. These corrections account for the fact
that light quark jets (from hadronic $W$ decays) and $b$ quark jets have
different fragmentation properties. Furthermore, $b$ jets tagged with the
soft muon tag must have the energy of the minimum-ionizing muon added back
in, and a correction must be applied for the neutrino. These flavor-dependent
corrections are determined from $t\bar{t}$ Monte Carlo events. The flavor
assignment of the jets is established by the constrained fit.
Backgrounds in the 30-event final sample come from the QCD production of
$W$ + multijets, and from fakes. These backgrounds are calculated for each
channel before the top likelihood cut. The effects of the top likelihood cut
and the fitter $\chi^2$ cut are determined from Monte Carlo. The result
is an estimated background of 17.4$\pm$2.2 events. The background is
constrained to this value (within its Gaussian uncertainties) in the
overall fit to $t\bar{t}$ plus background templates that determines the
most likely top mass.
Figure~\ref{fig:D0_mtfit} shows the reconstructed mass distribution for the
30 events, together with the results of the fit. The result is
$M_{top} = 170\pm 15$(stat)~GeV. The statistical error is determined
by performing a large number of Monte Carlo ``pseudo-experiments'' with
$N=30$ events and $\bar{N}_{bkgd}=17.4$. The standard deviation of the
mean in this ensemble of pseudo-experiments is taken to be the statistical
error.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.2in
\epsffile{D0_mtfit.ps}
\caption{Reconstructed top mass distribution from D0 data, together with results of the best fit.}
\label{fig:D0_mtfit}
\end{center}
\end{figure}
Systematic uncertainties come from the determination of the jet energy scale
from $Z\rightarrow ee$ events ($\pm$7 GeV), variations among Monte Carlo
generators ({\sc Isajet} vs. the default {\sc Herwig}) and jet
definitions ($\pm$6 GeV), uncertainties in the background
shape ($\pm$ 3 GeV), variations in the likelihood fitting method ($\pm$ 3
GeV), and Monte Carlo statistics ($\pm$ 1 GeV). The final result is therefore
\begin{equation}
M_{top}= 170\pm 15 ({\rm stat})\pm 10 ({\rm syst})~{\rm GeV}~({\rm
D0~prelim.})
\end{equation}
\subsection{CDF Measurement of $M_{top}$}
At the winter `96 conferences and at Snowmass, CDF reported a top mass
value of $M_{top} = 175.6\pm 5.7({\rm stat})\pm 7.1({\rm syst})$~GeV.
This value was obtained using a technique very similar to that reported
in Refs.~\cite{cdf_obs} and~\cite{top_evidence_prd}, with the main
improvements being a larger dataset (110~pb$^{-1}$) and a better determination
of the systematic uncertainties. This measurement used a sample of events
with a lepton, ${\not}{E_T}$, at least three jets with $E_T > 15$~GeV and
$|\eta|<2$, and a fourth jet
with $E_T > 8$~GeV and $|\eta|<2.4$. Events were further required to
contain an SVX- or SLT-tagged jet. Thirty-four such events had an acceptable
$\chi^2$ when fit to the $t\bar{t}$ hypothesis, with a calculated background
of $6.4^{+2.1}_{-1.4}$ events.
This technique, while powerful, does not take account of all the available
information. It does not exploit the difference in signal to background
between SVX tags and SLT tags, nor does it use any information from
untagged events that satisfy the kinematic requirements for top.
CDF has recently completed an optimized mass analysis that takes full
advantage of this information.
To determine the optimal technique for measuring the mass,
Monte Carlo samples of signal and background events are generated and
the selection cuts for the mass analysis are applied. This sample is then
divided into several
nonoverlapping subsamples, in order of decreasing signal to background:
SVX double tags, SVX single tags, SLT tags (no SVX tag), and untagged
events.
The mass resolution for each subsample is obtained by performing many
Monte Carlo ``pseudo-experiments.''
Each pseudo-experiment for a given subsample contains the number of events
observed in the data, with the number of background events thrown according
its predicted mean value and uncertainty. For example, 15 SVX single-tagged
events are observed in the data, so the pseudo-experiments for the ``single
SVX-tag'' channel each contain 15 events, with the number of background
events determined by Poisson-fluctuating the estimated background in this
channel of $1.5\pm 0.6$ events. The standard likelihood fit to
top plus background templates is then performed for each pseudo-experiment.
The mass resolutions are slightly different for each subsample
because single-, double-, and untagged events have different combinatorics,
tagger biases, etc. Top mass templates are therefore generated for each
subsample. By performing many pseudo-experiments, CDF obtains the expected
statistical error for each subsample.
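The pseudo-experiment machinery can be sketched as follows; \texttt{fit\_mass}
stands for the template likelihood fit and is assumed to be supplied
separately, so the code only illustrates how the background is fluctuated and
the expected error extracted.
\begin{verbatim}
import math, random

def poisson(mean):
    """Draw a Poisson random variate (Knuth's method, fine for small means)."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def expected_stat_error(n_obs, bkgd_mean, bkgd_err, fit_mass, n_trials=1000):
    """Expected statistical error on the fitted top mass for one subsample."""
    masses = []
    for _ in range(n_trials):
        mean = max(0.0, random.gauss(bkgd_mean, bkgd_err))
        n_bkg = min(poisson(mean), n_obs)   # fluctuate the background
        n_sig = n_obs - n_bkg               # remainder is treated as signal
        masses.append(fit_mass(n_sig, n_bkg))
    avg = sum(masses) / len(masses)
    return math.sqrt(sum((m - avg) ** 2 for m in masses) / (len(masses) - 1))
\end{verbatim}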
Because the subsamples are nonoverlapping by construction, the likelihood
functions for each subsample can be multiplied together to yield a combined
likelihood. Monte Carlo studies have been performed to determine which
combination of subsamples produces the smallest statistical error. One
might expect that the samples with SVX tags alone would yield the best
measurement, because of their high signal to background. However it turns
out that the number of events lost by imposing this tight tagging requirement
more than compensates for the lower background, and actually gives a
slightly larger statistical uncertainty than the previous CDF technique
of using SVX or SLT tags. Instead, the optimization studies show that
the best measurement is obtained by combining double SVX tags, single SVX tags,
SLT tags, and untagged events.
For the untagged events, these Monte Carlo studies show that a smaller
statistical error results from requiring the fourth jet to satisfy the
same cuts as the first three jets, namely $E_T>15$~GeV and $|\eta|<2$. For
the various tagged samples, the fourth jet can satisfy the looser requirements
$E_T>8$~GeV, $|\eta|<2.4$. The median statistical error expected from
combining these four samples is 5.4~GeV, compared to 6.4~GeV expected from the
previously-used method. This reduction in statistical uncertainty is
equivalent to increasing the size of the current SVX or SLT tagged data
sample by approximately 40\%.
The optimized procedure is then applied to the lepton plus jets
data. Table~\ref{tab:mass_subsamples} shows the number of observed events
in each subsample, together with the expected number of signal and
background events, the fitted mass, and the statistical uncertainties.
The result is $M_{top} = 176.8\pm 4.4$(stat)~GeV. The statistical uncertainty
is somewhat better than the 5.4~GeV expected from the pseudo-experiments.
Approximately 8\% of the pseudo-experiments have a statistical uncertainty of
4.4~GeV or less, so the data are within expectations.
Figure~\ref{fig:mt_opt_like} shows the reconstructed mass distribution
for the various subsamples, together with the results of the fit.
\begin{table}[htbp]
\begin{center}
\caption{Mass-fit subsamples for the CDF top mass measurement. The first
row gives the results from the method of Refs.~\protect\cite{cdf_obs}
and \protect\cite{top_evidence_prd}. The next four rows show
the results from the subsamples used in the optimized method. The last
row shows the results of combining the four subsamples.}
\begin{tabular}{lccc}
\hline\hline
Subsample & $N_{obs}$ & $N_{bkgd}$ & Fit Mass \\
& & & (GeV) \\ \hline
SVX or SLT tag & 34 & $6.4^{+2.1}_{-1.4}$ & $175.6 \pm 5.7$ \\
(Prev. Method) & & & \\ \hline
SVX double tag & 5 & $0.14\pm 0.04$ & $174.3 \pm 7.9$ \\
SVX single tag & 15 & $1.5\pm 0.6$ & $176.3 \pm 8.2$ \\
SLT tag (no SVX)& 14 & $4.8\pm 1.5$ & $140.0 \pm 24.1$ \\
Untagged ($E_T^4>15$)& 48 & $29.3\pm 3.2$ & $180.9 \pm 6.4$ \\ \hline
Optimized Method & & & $176.8 \pm 4.4$ \\ \hline
\end{tabular}
\label{tab:mass_subsamples}
\end{center}
\end{table}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{mt_opt_like.ps}
\caption{Top mass distribution for all four of the CDF subsamples combined.}
\label{fig:mt_opt_like}
\end{center}
\end{figure}
Systematic uncertainties in the CDF measurement are summarized in
Table~\ref{tab:cdf_mt_systs}. The largest systematic is the combined
uncertainty in the jet $E_T$ scale and the effects of soft gluons ({\em i.e.}
fragmentation effects). Such effects include calorimeter nonlinearities
and cracks, the effect of the underlying event, and Monte Carlo modeling
of the jet energy flow outside the clustering cone. The ``hard gluon''
systematic comes from the uncertainty in the fraction of $t\bar{t}$ events
where one of the four highest-$E_T$ jets is a gluon jet from initial- or
final-state radiation. The \textsc{Herwig} Monte Carlo program predicts that
55\% of the time a gluon jet is among the four leading jets.
This systematic is evaluated by varying the fraction
of such events by $\pm 30\%$ in the Monte Carlo
and determining the resulting
mass shift. Systematics from the kinematic and likelihood fit are determined
by using slightly different but equally reasonable methods of performing
the constrained fit and the final likelihood fit for the top mass. Such
variations include
allowing the background to float, or varying the range over which the
parabolic fit that determines the minimum and width of the likelihood
function is performed. The ``different MC generators'' systematic is assigned
by generating the $t\bar{t}$ templates with {\sc Isajet} instead of the default
{\sc Herwig}. Systematics in the background shape are evaluated by varying the
$Q^2$ scale in the Vecbos Monte Carlo program that models the
$W$ + jets background. Studies have shown that the relatively small non-$W$
background is kinematically similar to $W$ + jets. The systematic from
$b$-tagging bias includes uncertainties in the jet $E_T$-dependence of
the $b$-tag efficiency and fake rate, and in the rate of tagging non-$b$
jets in top events. Monte Carlo statistics account for the remainder of
the systematic uncertainties. The final result is:
\begin{equation}
M_{top}= 176.8\pm 4.4 ({\rm stat})\pm 4.8 ({\rm syst})~{\rm GeV}~({\rm
CDF~prelim.})
\end{equation}
\begin{table}[htbp]
\begin{center}
\caption{Systematic uncertainties in the CDF top mass measurement.}
\begin{tabular}{cc}
\hline\hline
Systematic & Uncertainty (GeV) \\ \hline
Soft gluon + Jet $E_T$ scale & 3.6 \\
Hard gluon effects & 2.2 \\
Kinematic \& likelihood fit & 1.5 \\
Different MC generators & 1.4 \\
Monte Carlo statistics & 0.8 \\
Background shape & 0.7 \\
$b$-tagging bias & 0.4 \\ \hline
Total & 4.8 \\ \hline
\end{tabular}
\label{tab:cdf_mt_systs}
\end{center}
\end{table}
\section{Kinematic Properties}
\label{sec-kine}
The constrained fits described above return the complete four-vectors
for all the partons in the event, and allow a range of other kinematic
variables to be studied. As examples, Fig.~\ref{fig:kin_Pt_ttbar}
shows the $P_T$ of the $t\bar{t}$ system as reconstructed from CDF data,
and Fig.~\ref{fig:kin_D0_Mtt_Pttop} shows the $t\bar{t}$ invariant mass
and the average $t$ and $\bar{t}$ $P_T$ from D0. The distributions have
not been corrected for event selection biases or combinatoric misassignments.
In these and in similar plots, the agreement with the Standard Model
is good.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{kin_Pt_ttbar.ps}
\caption{Reconstructed $P_T$ of the $t\bar{t}$ system.}
\label{fig:kin_Pt_ttbar}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{D0_Mtt_Pttop.ps}
\caption{Reconstructed $t\bar{t}$ invariant mass (top) and average $t$ or
$\bar{t}$~$P_T$ (bottom) from D0 data.}
\label{fig:kin_D0_Mtt_Pttop}
\end{center}
\end{figure}
A very important cross-check that the experiments are really observing
$t\bar{t}$ pair production is to search for the hadronically decaying
$W$ in lepton + jets events. CDF has performed such an analysis by selecting
lepton + 4-jet events with two $b$-tags. To maximize the $b$-tag efficiency,
the second $b$ in the event is allowed to satisfy a looser tag requirement.
The two untagged jets should then correspond to the hadronic $W$ decay.
Fig.~\ref{fig:kin_mjj_dbl_tag} shows the dijet invariant mass for the
two untagged jets. The clear peak at the $W$ mass, together with the
lepton, the ${\not}{E_T}$, and the two tagged jets, provides additional
compelling evidence that we are observing $t\bar{t}$ decay to two $W$'s
and two $b$'s.
This measurement is also interesting because it suggests
that in future high-statistics experiments the jet energy scale can be
determined directly from the data by reconstructing this resonance.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{kin_mjj_dbl_tag.ps}
\caption{Reconstructed hadronic $W$ peak in double-tagged top
candidate events.}
\label{fig:kin_mjj_dbl_tag}
\end{center}
\end{figure}
\section{Branching ratios, ${V_{\lowercase{tb}}}$}
\label{sec-Vtb}
In the Standard Model, the top quark decays essentially 100\% of the time
to $Wb$. Therefore the ratio of branching ratios
\begin{equation}
B = \frac{BR(t\rightarrow Wb)}{BR(t\rightarrow Wq)},
\end{equation}
where $q$ is any quark, is predicted to be one. CDF has measured $B$
using two techniques. The first technique compares the ratio of
double- to single-tagged lepton + jets events that pass the mass analysis
cuts, and the relative numbers of double-, single-, and
untagged dilepton events. Since the efficiency to tag a single $b$-jet
is well known from control samples, the observed tag ratios can be
converted into a measurement of $B$. CDF finds:
\begin{equation}
B = 0.94 \pm 0.27~(\mathrm{stat})~\pm 0.13~(\mathrm{syst}),
\end{equation}
or
\begin{equation}
B > 0.34 ~\mathrm{(95\%~C.L.)}
\end{equation}
Untagged lepton + jets events are not used in this analysis because of the
large backgrounds admitted by the standard cuts. (Of course, the cuts were
designed to be loose to avoid kinematic bias; the background rejection is
normally provided by $b$-tagging.) The second CDF technique uses the
``event structure'' cuts of Ref.~\cite{top_kin_prd} to increase the purity
of the untagged lepton + jets sample, allowing it to be included in this
measurement. The result is:
\begin{equation}
B = 1.23^{+0.37}_{-0.31},
\end{equation}
or
\begin{equation}
B > 0.61 ~\mathrm{(95\%~C.L.)}
\end{equation}
It should be noted that these analyses make the implicit assumption that the
branching ratio to non-$W$ final states is negligible. The fact
that the cross sections measured in the dilepton, lepton + jets, and
all-hadronic channels are in good agreement is evidence that this assumption
is correct. Alternatively, if one believes the theoretical cross section,
it is clear from the SVX and SLT $b$-tag measurements that this cross section
is saturated by decays to $Wb$. However, these ``indications'' have not
yet been turned into firm limits on non-$W$ decays.
The measurement of $B$ above can be interpreted as a measurement of
the CKM matrix element $V_{tb}$. However, it is not necessarily the case
that $B=1$ implies $V_{tb}=1$. This inference follows only in the absence of a
fourth generation, where the value of $V_{tb}$ is constrained by unitarity
and the known values of the other CKM matrix elements. In this case,
$V_{tb}$ is determined much more accurately from these constraints
than from the direct measurement. (In fact, under the assumption of
3-generation unitarity, $V_{tb}$ is actually the \textit{best known}
CKM matrix element.) A more general relationship, which is true for
three or more generations provided that there is no fourth generation
$b^{\prime}$ quark lighter than top, is
\begin{equation}
B=\frac{BR(t\rightarrow Wb)}{BR(t\rightarrow Wq)} =
\frac{|V_{tb}|^2}{|V_{td}|^2 + |V_{ts}|^2 + |V_{tb}|^2}.
\label{eqn:Vtb}
\end{equation}
Since $B$ depends on three CKM matrix elements and not just one, a single
measurement cannot determine $V_{tb}$, and we must make additional assumptions
about $V_{ts}$ and $V_{td}$.
In general, a fourth generation would allow $V_{td}$ and $V_{ts}$ to take on
any value up to their values assuming 3-generation unitarity.
One simplifying assumption is that the upper 3$\times 3$ portion of the CKM
matrix is unitary.
In that case, $|V_{td}|^2 + |V_{ts}|^2 + |V_{tb}|^2 = 1$, and
$B$ gives $V_{tb}$ directly. However, as noted above, under this assumption
$V_{tb}$ is very well determined anyway and this direct measurement adds no
improved information. Assuming 3$\times 3$ unitarity, the two analyses
described above give $V_{tb} = 0.97\pm0.15~\mathrm{(stat)}\pm
0.07~\mathrm{(syst)}$ and $V_{tb} = 1.12\pm0.16$ respectively. A more
interesting assumption is that 3$\times 3$ unitarity is relaxed
\textit{only} for $V_{tb}$. Then we can insert the PDG values of $V_{ts}$
and $V_{td}$ and obtain:
\begin{equation}
V_{tb} > 0.022~\mathrm{(95\%~C.L.)}
\end{equation}
for the first method, or
\begin{equation}
V_{tb} > 0.050~\mathrm{(95\%~C.L.)}
\end{equation}
for the second.
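These limits follow from inverting Eq.~(\ref{eqn:Vtb}) for a measured lower
bound $B_{min}$,
\begin{displaymath}
|V_{tb}| > \sqrt{\frac{B_{min}\left(|V_{td}|^2 + |V_{ts}|^2\right)}{1-B_{min}}},
\end{displaymath}
with $|V_{td}|$ and $|V_{ts}|$ fixed at their PDG values; because these
elements are small, even a substantial bound on $B$ translates into only a
weak direct constraint on $V_{tb}$.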
To see that a small value of $V_{tb}$ would not violate anything we know
about top, consider the situation with $b$ decays. The $b$ quark decays
$\approx 100\%$ of the time to $Wc$, even though $V_{cb}\approx 0.04$.
This is because the channel with a large CKM coupling, $Wt$, is
kinematically inaccessible. The same situation could occur for top in
the presence of a heavy fourth generation. However in this case the top
width would be narrower than the Standard Model expectation.
A more definitive measurement\cite{willenbrock}
of $V_{tb}$ will be performed in future Tevatron runs by measuring
$\Gamma_{t\rightarrow Wb}$ directly through the single top production
channel $p\bar{p}\rightarrow W^{*} \rightarrow t\bar{b}$.
\section{Rare Decays}
\label{sec-rare}
CDF has performed searches for the flavor-changing neutral current decays
$t\rightarrow qZ$ and $t\rightarrow q\gamma$. The decay to $qZ$ can have
a branching ratio as high as $\sim 0.1\%$ in some theoretical
models\cite{fritzsch}.
The search for this decay includes the possibility that one or both
top quarks in an event can decay to $qZ$. In either case the signature
is one $Z\rightarrow ll$ candidate and four jets.
Backgrounds in the $qZ$ channel come from $WZ$ and $ZZ$ plus jets
production, and are estimated to be $0.60\pm 0.14 \pm 0.12$ events.
In addition, 0.5 events are expected from Standard Model $t\bar{t}$ decay.
One event is observed.
Under the conservative assumption that this event is signal, the resulting
limit is:
\begin{equation}
BR(t\rightarrow qZ) < 0.41~\mathrm{(90\%~C.L.)}
\end{equation}
The branching ratio of $t\rightarrow q\gamma$ is predicted to be roughly
$10^{-10}$\cite{parke}, so any observation of this decay would probably
indicate new physics. CDF searches for final states in which one top
decays to $Wb$ and the other decays to $q\gamma$. The signature is
then $l\nu\gamma + 2$ or more jets (if $W\rightarrow l\nu$), or
$\gamma + 4$ or more jets (if $W\rightarrow jj$). In the hadronic channel,
the background is 0.5 events, and no events are seen. In the leptonic
channel, the background is 0.06 events, and one event is seen. (It is
a curious event, containing a 72~GeV muon, an 88~GeV $\gamma$ candidate,
24~GeV of ${\not}{E_T}$, and three jets.) Conservatively assuming this event
to be signal for purposes of establishing a limit, CDF finds:
\begin{equation}
BR(t\rightarrow q\gamma) < 0.029~\mathrm{(95\%~C.L.)}
\end{equation}
This limit is stronger than the $qZ$ limit because of the $Z$ branching
fraction to $ee + \mu\mu$ of about 6.7\%, compared to the $\gamma$
reconstruction efficiency of about 80\%.
\section{W Polarization}
\label{sec-Wpol}
The large mass of the top quark implies that the top quark decays before
hadronization, so its decay products preserve the helicity structure
of the underlying Lagrangian. Top decays, therefore, are a unique
laboratory for studying the weak interactions of a bare quark. In particular,
the Standard Model predicts that top can only decay into left-handed or
longitudinal $W$ bosons, and the ratio is fixed by the relationship
\begin{equation}
\frac{W_{long}}{W_{left}} = \frac{1}{2}\frac{M_t^2}{M_W^2}.
\end{equation}
For $M_t = 175$~GeV, the Standard Model predicts that about 70\% of top
quarks decay into longitudinal $W$ bosons.
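As a numerical check, for $M_t = 175$~GeV and $M_W \simeq 80$~GeV the
longitudinal fraction is
\begin{displaymath}
\frac{W_{long}}{W_{long}+W_{left}} = \frac{M_t^2/2M_W^2}{1 + M_t^2/2M_W^2}
\simeq \frac{2.4}{3.4} \simeq 0.70 .
\end{displaymath}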
This is an exact prediction resulting from Lorentz invariance and the $V-A$
character of the electroweak Lagrangian. If new physics modifies the
$t$-$W$-$b$ vertex---for example, through the introduction of a right-handed
coupling---it may reveal itself in departures of the $W$ polarization from
the Standard Model prediction. The $W$ polarization has recently been
measured, albeit with low statistics, by CDF. I describe this measurement
here to illustrate the type of measurement that will be done with high
precision in future runs with the Main Injector.
The $W$ polarization is determined from $\cos\theta^{*}_l$, where $\theta^{*}_l$ is the
angle between the charged lepton and the $W$ flight direction, evaluated in the rest frame of the $W$.
This quantity can be expressed in the lab frame using the approximate
relationship\cite{kane_wpol}
\begin{equation}
\cos\theta^{*}_l \approx \frac{2m_{lb}^2}{m_{l\nu b}^2 - M_W^2} - 1,
\end{equation}
where $m_{lb}$ is the invariant mass of the charged lepton and the
$b$ jet from the same top decay, and $m_{l\nu b}$ is the three-body invariant
mass of the charged lepton, the neutrino, and the corresponding $b$ jet.
This last quantity is nominally equal to $M_t$, though in the analysis the
measured jet and lepton energies are used, and the possibility of combinatoric
misassignment is included.
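As an illustration of the kinematic range, for $m_{l\nu b}\simeq M_t = 175$~GeV
and $M_W \simeq 80$~GeV the denominator is
\begin{displaymath}
m_{l\nu b}^2 - M_W^2 \simeq (175\;{\rm GeV})^2 - (80\;{\rm GeV})^2
\simeq (156\;{\rm GeV})^2,
\end{displaymath}
so $\cos\theta^{*}_l$ runs from $-1$ when the lepton and the $b$ jet are nearly
collinear ($m_{lb}\simeq 0$) to $+1$ when $m_{lb}$ approaches its kinematic
endpoint near 156~GeV (neglecting the $b$ mass).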
Monte Carlo templates for $\cos\theta^{*}_l$ are generated using the
{\sc Herwig}
$t\bar{t}$ event generator followed by a simulation of the CDF detector. The
simulated events are then passed through the same constrained fitting
procedure used in the top mass analysis. The fit is used here to determine
the most likely jet--parton assignment (i.e. which of the two $b$ jets to
assign to the leptonic $W$ decay), and to adjust the measured jet and
lepton energies within their uncertainties in order to
obtain the best resolution on $\cos\theta^{*}_l$. The same procedure
is applied to $W$+jets events generated by the Vecbos Monte Carlo
program to obtain the $\cos\theta^{*}_l$ distribution of the background.
The $\cos\theta^{*}_l$ distribution from the data is then fit to a
superposition of Monte Carlo templates to determine the fraction of
longitudinal $W$ decays. The dataset is the same as in the CDF top mass
analysis (lepton + ${\not}{E_T}$ + three or more jets with $E_T>15$~GeV and
$|\eta|<2$, and a fourth jet with $E_T>8$ GeV and $|\eta|<2.4$). To increase
the purity, only events with SVX tags are used. The $\cos\theta^{*}_l$
distribution in this sample is shown in Fig.~\ref{fig:wpol} together with
the results of the fit. The fit returns a longitudinal $W$ fraction of
$0.55^{+0.48}_{-0.53}$ (statistical uncertainties only). The statistics are
clearly too poor at present to permit any conclusions about the structure of
the $t$-$W$-$b$ vertex. However, with the large increase in statistics
that the Main Injector and various planned detector improvements will
provide, precision measurements of this vertex will become possible.
Studies indicate, for example, that with a 10~fb$^{-1}$ sample one can measure
BR($t\rightarrow W_{long}$) with a statistical uncertainty of about
2\%, and have sensitivity to decays to right-handed $W$'s with a
statistical precision of about 1\%\cite{TEV2000}.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=3.5in
\epsffile{wpol_fit.ps}
\caption{Results of fit to the $\cos\theta^{*}_l$ distribution, used to determine the $W$ polarization in top decays. The dataset is the CDF top mass sample with only SVX tags allowed.}
\label{fig:wpol}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec-concl}
The Tevatron experiments have progressed quickly from the top search
to a comprehensive program of top physics. Highlights of the recently
completed run include measurements of the top cross section and mass,
studies of kinematic features of top production, and a first look at the
properties of top decays. Many of these analyses are still in progress,
and improved results can be expected.
With a mass of approximately 175~GeV, the top quark is a unique
object, the only known fermion with a mass at the natural electroweak
scale. It would be surprising if the top quark did not play a role in
understanding electroweak symmetry breaking. Current measurements are all
consistent with the Standard Model but in many cases are limited by
poor statistics: the world $t\bar{t}$ sample numbers only about a hundred
events at present. Both CDF and D0 are undertaking major detector upgrades
designed to take full advantage of high-luminosity running with the Main
Injector starting in 1999. This should increase the top sample by a
factor of $\sim$50. Beyond that, Fermilab is considering plans to
increase the luminosity still further, the LHC is on the horizon, and
an $e^+e^-$ linear collider could perform precision studies at the $t\bar{t}$
threshold. The first decade of top physics has begun, and the future
looks bright.
\section{Acknowledgements}
I would like to thank the members of the CDF and D0 collaborations whose
efforts produced the results described here, and the members of the
technical and support staff for making this work possible.
I would also like to thank the organizers of Snowmass~'96 for inviting
me to present these results and for sponsoring a stimulating and enjoyable
workshop. This work is supported in part by NSF grant PHY-9515527 and by a DOE
Outstanding Junior Investigator award.
\section{Introduction}
The proximity effect between a normal metal (N) and a superconductor (S) is a unique tool
for studying the interplay between dissipation and quantum coherence on one hand,
resistive transport
and superconductivity on the other hand. Induced superconductivity in a normal metal
manifests itself by the enhancement of the local conductivity compared to the intrinsic
metallic regime, the Josephson effect in a S-N-S junction and the appearance of a
(pseudo-) gap in the density of states. Although all these effects were observed
and understood in the early 1960s, recent experimental work has triggered a large interest in
S-N structures \cite{Karlsruhe}. This revival is mostly due to the mesoscopic size and
the well-controlled design of the recently investigated samples.
On the microscopic scale, superconductivity diffuses in the normal metal thanks to
the Andreev reflection of electrons at the S-N interface \cite{Andreev}. In this
process, an electron coming from N is reflected as a hole of opposite momentum,
while a Cooper pair is transmitted in S. In consequence, this process correlates
electron states of the same energy $\epsilon$ compared to the Fermi level $E_F$, but
with a slight change in wave-vector $\delta k$ due to the branch crossing: $\delta k / k_F
= \epsilon/E_F$, $k_F$ being the Fermi wavevector. After elastic diffusion up to a
distance L from the interface, the wave-vector mismatch induces a phase shift $\delta
\varphi = (L/L_{\epsilon})^2 =
\epsilon/\epsilon_{c}$. In consequence, a pair maintains coherence in N up to the
energy-dependent length $L_\epsilon = \sqrt{\hbar D/\epsilon}$. Taking
$2 \pi k_B T$ as a characteristic energy of the incident electron, one finds again the
well-known thermal length $L_T = \sqrt{\hbar D/2 \pi k_B T}$ in the ``dirty'' limit $l_p <
L_T$. This length is indeed described in the textbooks \cite{deGennes} as the
characteristic decay length of the proximity effect.
If one measures the resistance of a
long S-N junction, a length $L_T$ of the normal metal appears to be missing, as if it were superconducting. Does
this mean that superconductivity in N is restricted to a region of length $L_T$ from the
interface? The answer is clearly no. In fact, it is easy to see that at the Fermi level
($\epsilon = 0$), the value of the diffusion length $L_\epsilon$ diverges. The coherence
of low-energy electron pairs is then only bound by the phase-coherence of one electron,
which persists on the length $L_\varphi$. In a pure metal at low temperature, the
phase-coherence length $L_{\varphi}$ can reach several microns
\cite{Pannetier-Rammal}, so that it is much larger than the thermal length $L_T$. In this
picture, the thermal length $L_T$ and the phase-coherence length $L_{\varphi}$ are the
smallest and the largest of the coherence lengths $L_\epsilon$ ($\epsilon < k_B T$) of the
electron pairs present at the temperature $T$. The strong coherence at the Fermi level
justifies the survival up to $L_{\varphi}$ of a local proximity effect on the resistive
transport that can be tuned by an Aharonov-Bohm flux
\cite{Petrashov,Dimoulas,CourtoisPrl}.
In the quasiclassical theory \cite{Volkov,Zaitsev,Zhou}, the phase conjugation
between the electron and the hole is expressed as a finite pair amplitude involving states
of wave-vectors $(k_F+ \epsilon / \hbar v_F,-k_F+ \epsilon / \hbar v_F)$, $v_F$ being
the Fermi velocity. Looking back at the expression of the wave-vector mismatch $\delta
k$, one can see that at a distance $L$ from the interface, electron-hole coherence is
restricted to an energy window whose width is the Thouless energy $\epsilon_{c}=\hbar
D/L^2$. The pair amplitude $F(\epsilon,x)$ as a function of both energy and distance
from the S interface therefore varies on the scales of $\hbar D/x^2$ and $L_\epsilon$
respectively. As a result, the latter energy enters as a characteristic energy in the local
density of states of the N conductor \cite{Gueron} at large distance $x > \xi_s$. The
presence of a finite pair amplitude $F(\epsilon)$ also induces a local and
energy-dependent enhancement of the effective conductivity of N. A surprising feature
appears at low temperature and zero voltage, when $L_T$ is larger than the sample size $L$.
In this regime, electron pairs are coherent over the whole sample, but diffusion
of individual electrons is predicted to be insensitive to the presence of electron pairs
\cite{Stoof,Volkov-Lambert}.
In this paper, we review the different contributions to the phase-coherent transport in a
mesoscopic N metal with S electrodes. We show under which conditions the
Josephson coupling, conductance fluctuations, and proximity effect prevail. Various
sample geometries have been used in order to precisely measure the different
quantities. We will demonstrate that the relevant energy for the Josephson coupling in
a {\it mesoscopic} S-N-S junction is not the superconducting gap $\Delta$, but the Thouless energy $\epsilon_c$. In the resistive state, we find a phase-sensitive contribution to transport exhibiting a $1/T$ power-law decay with
temperature. This proximity effect is much larger than the weak localization
correction \cite{Spivak} and results from the persistence of electron-hole coherence far
away from the S-N interface. Finally, we report new experimental results showing for
the first time the low-temperature re-entrance of the metallic resistance in a mesoscopic
proximity superconductor. This re-entrant regime is destroyed by increasing the
temperature \cite{Stoof} or the voltage \cite{Volkov-Lambert}. An Aharonov-Bohm flux
can modify the sample effective length and therefore shift the energy crossover. All these results are shown to be in qualitative agreement with the quasiclassical theory.
\section{A theoretical approach to S-N junctions}
Superconductor--Normal-metal structures can be well described by the
quasiclassical theory for inhomogenous superconductors. We present here a simplified
version that however keeps the essential physical features. Let us consider the
mesoscopic regime where the inelastic scattering length is larger than the sample length.
The flow of electrons at some energy $\epsilon$ is then uniform over the sample, so that
one has to consider parallel and independent channels
transporting electrons at a particular energy $\epsilon$. In a perfect reservoir, electrons
follow the Fermi equilibrium distribution at the temperature $T$ and chemical potential
$\mu$. Charges are injected in the system from one reservoir at $\mu = eV$ and
transferred to the other, so that the current is carried by electrons within an energy
window $[0,\mu = eV]$ with a thermal broadening $k_B T$.
In N, $F(\epsilon,x)$ follows the Usadel equation, which for small $F$ can be linearized as:
\begin{equation}
\hbar D \partial^{2}_{x} F + (2 i\epsilon - \frac{\hbar D}{L_{\varphi}^2}) F = 0
\label{diffusion}
\end{equation}
The linearization used above is a very crude approximation, since near the interface,
which is believed to be clean, the pair amplitude should be large (in this
approximation, we have $F(\epsilon \ll \Delta,0) =-i\pi /2$ for a perfectly transparent
interface \cite{Charlat}). At the contact with an N reservoir, $F$ is assumed to be zero.
However, this simple formulation enables a straightforward understanding of the
physical root of the proximity effect in an S-N junction. Eq.~1 is simply a
diffusion equation for the pair amplitude $F$ at the energy $\epsilon$ with a decay
length $L_\epsilon$ and a cut-off at $L_\varphi$. At a particular energy $\epsilon$,
the imaginary part is maximum at the S-N interface and decays in an oscillating way
on the length scale of $L_\epsilon$ if $L_\epsilon \ll L$. The real part of the pair
amplitude is zero at the S interface, maximum at a distance $L_\epsilon$ if $L_\epsilon \ll
L$ and then decays also in an oscillating way.
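For a finite wire of length $L$ between the S interface and a normal reservoir, the
linearized Eq. 1 can be solved in closed form. The short sketch below (an illustration
only, with the boundary conditions $F(0)=-i\pi/2$ and $F(L)=0$ and placeholder
parameters) makes this oscillating decay on the scale $L_\epsilon$ explicit:
\begin{verbatim}
# Hedged sketch: solution of the linearized Usadel equation (Eq. 1),
# F'' = (1/L_phi^2 - 2i eps/(hbar D)) F  on 0 < x < L,
# with F(0) = -i*pi/2 (transparent S interface) and F(L) = 0 (N reservoir).
import numpy as np

hbar = 1.0546e-34                 # J s

def F(eps, x, D, L, L_phi=np.inf):
    """Pair amplitude at energy eps (J) and distance x (m) from the S interface."""
    k = np.sqrt(1.0 / L_phi**2 - 2j * eps / (hbar * D))   # complex decay wavevector
    return (-1j * np.pi / 2.0) * np.sinh(k * (L - x)) / np.sinh(k * L)

# Example: Re[F] along a 1-micron wire at eps = 5*eps_c (placeholder parameters)
D, L = 100e-4, 1e-6               # m^2/s, m
eps_c = hbar * D / L**2
x = np.linspace(0.0, L, 200)
print(np.real(F(5 * eps_c, x, D, L)).max())
\end{verbatim}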
From the theory, one can calculate everything, including the Josephson effect in a
S-N-S geometry \cite{Zaikin}, the change in the density of states \cite{Gueron} and
the conductivity enhancement \cite{Volkov}. Here we will concentrate on the last point.
The pair amplitude $F$ is responsible for a local enhancement of the conductivity
$\delta \sigma(\epsilon,x) = \sigma_N (\mathrm{Re} [F(\epsilon,x)])^{2}$ for small $F$,
$\sigma_N$ being the normal-state conductivity \cite{Volkov,Zhou,Stoof}. Part of this
contribution is similar to the Maki-Thompson contribution to the fluctuation conductivity in a
conventional superconductor \cite{MT}. The other part is related to a modification of the
density of states and compensates the first one at zero energy \cite{Volkov}.
From the behaviour of $\delta \sigma(\epsilon,x)$ it is then
straightforward to calculate the excess conductance $\delta g(\epsilon )$ for the precise
geometry of the sample. The measured excess conductance $\delta G(V,T)$ at voltage
$V$ and temperature $T$ reads:
\begin {equation}
\delta G(V,T) = \int_{-\infty}^{\infty}\delta g(\epsilon ) P(V-\epsilon)d \epsilon
\label {total}
\end {equation}
where $P(\epsilon)=[4k_B T \cosh^2(\epsilon/2k_B T)]^{-1}$ is a thermal kernel
which reduces to the Dirac function at $T = 0$. Hence, the low-temperature
differential conductance $dI/dV = G_{N}+\delta G$ probes the proximity-induced excess
conductance $\delta g(\epsilon)$ at energy $\epsilon = eV$ with a thermal broadening
$k_{B}T$.
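In practice, Eq. 2 is a simple convolution and is easy to evaluate numerically. The sketch
below (a toy illustration; the spectral shape $\delta g(\epsilon)$ used here is a placeholder,
not the actual geometry-dependent result) shows the thermal smearing explicitly:
\begin{verbatim}
# Hedged sketch: thermal smearing of the excess conductance, Eq. 2.
# delta_g(eps) below is a placeholder spectral shape, not the real one.
import numpy as np

k_B = 1.381e-23    # J/K
e   = 1.602e-19    # C

def P(eps, T):
    """Thermal kernel [4 k_B T cosh^2(eps/2 k_B T)]^(-1); tends to delta(eps) as T -> 0."""
    return 1.0 / (4.0 * k_B * T * np.cosh(eps / (2.0 * k_B * T)) ** 2)

def delta_G(V, T, delta_g, eps_max, n=4001):
    """Convolve a spectral excess conductance delta_g(eps) with the kernel P(eV - eps)."""
    eps = np.linspace(-eps_max, eps_max, n)
    return np.trapz(delta_g(eps) * P(e * V - eps, T), eps)

# Toy example: a bump peaked at a few eps_c (placeholder shape and scale)
eps_c = 2e-24      # J, placeholder Thouless energy
toy_dg = lambda eps: np.abs(eps / eps_c) * np.exp(-np.abs(eps) / (5 * eps_c))
print(delta_G(V=20e-6, T=0.1, delta_g=toy_dg, eps_max=100 * eps_c))
\end{verbatim}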
As an example, let us consider a long N wire connected to a S reservoir at one end. Along
the wire, the excess conductivity $\delta \sigma(\epsilon,x)$ at a given energy $\epsilon $
increases from zero at the S-N interface to a maximum of about 0.3 $\sigma_N $ at a
distance $L_\epsilon$ from the interface (if $L \gg L_{\epsilon}$) and then decays
exponentially with $x$. The integrated excess conductance $\delta g(\epsilon)$ of the
whole sample rises from zero with an ${\epsilon}^2$ law at low energy, reaches a
maximum of 0.15 $G_N$ at about 5 $\epsilon_c$ and goes back to zero at higher energy
with a $1/\sqrt{\epsilon}$ law. In the regime $L > L_T$, this contribution is responsible
for the subtraction of a length $L_T$ in the resistance of the sample. The effect is
maximum when temperature or voltage is near the Thouless energy: $k_B T$ or $eV
\simeq \epsilon_c$, i.e. when $L \simeq L_T$. At zero voltage and temperature ($eV,
k_B T \ll \epsilon_c$), the conductance enhancement is predicted to be zero, since $Re
[F(0,x)] = 0$ everywhere.
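These features follow directly from the linearized solution. The sketch below (an
illustration under the same assumptions as the previous snippet, with the small-$F$
result $\delta\sigma=\sigma_N(\mathrm{Re}\,F)^2$ taken literally and placeholder
parameters) integrates the local correction along the wire to obtain $\delta g(\epsilon)/G_N$:
\begin{verbatim}
# Hedged sketch: spectral excess conductance of an N wire attached to S,
# using delta_sigma(eps,x) = sigma_N * (Re F(eps,x))^2 with F from Eq. 1.
import numpy as np

hbar = 1.0546e-34                                  # J s

def F(eps, x, D, L):
    """Linearized Usadel solution, F(0) = -i*pi/2, F(L) = 0, L_phi -> infinity."""
    k = np.sqrt(-2j * eps / (hbar * D))
    return (-1j * np.pi / 2.0) * np.sinh(k * (L - x)) / np.sinh(k * L)

def delta_g_over_G(eps, D, L, n=2000):
    """delta g(eps)/G_N of the whole wire, to first order in the local correction."""
    x = np.linspace(0.0, L, n)
    corr = np.real(F(eps, x, D, L)) ** 2
    return np.trapz(corr, x) / L

D, L = 100e-4, 1e-6                                # placeholder parameters
eps_c = hbar * D / L**2
for m in (0.5, 1, 2, 5, 10, 30):
    print(m, delta_g_over_G(m * eps_c, D, L))      # maximum expected near ~5 eps_c
\end{verbatim}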
\section{Sample fabrication and characterization}
We fabricated S-N samples with a clean S-N interface and various geometries for the
N conductor. Two different techniques were used, depending on the desired geometry. In
the case of long N wires, oblique metallic evaporations were performed in an
Ultra-High Vacuum (UHV) chamber through a PMMA resist mask suspended 500 nm
above a Si substrate. During the same vacuum cycle, N = Cu or Ag
and S = Al structures are evaporated with the same angle of 45$^\circ$ but along perpendicular axes of the substrate plane \cite{Courtois}. The whole process lasts about 5
minutes with a pressure never exceeding $2\times10^{-8}$ mbar, which ensures a
high-quality interface between the two metals.
A different technique has been used for nanofabricating single Cu loops with one or two Al
electrodes. The Cu and the Al structures are patterned by conventional lift-off e-beam
lithography in two successive steps with a repositioning accuracy better than
100 nm. In-situ cleaning of the Cu surface by 500 eV $Ar^+$ ions prior to Al
deposition ensures a transparent and reproducible interface. In this case, the base
pressure is about $10^{-6}$ mbar. The typical interface resistance of a S-N junction
of area 0.01 $\mu m^2$ was measured to be less than 1 $\Omega$.
We performed transport measurements on a variety of samples in a $\mu$-metal--shielded
dilution refrigerator down to 20 mK. Miniature low temperature high-frequency filters
were integrated in the sample holder \cite{Filtres}. Sample 1 is made of a continuous Ag
wire in contact with a series of Al islands of period 800 nm, see Fig. 1. The
width and thickness of the Ag wire are 210 nm and 150 nm respectively, of the Al wire
120 nm and 100 nm. The normal-state resistance of one junction $r_n =$ 0.66 $\Omega$
gives the elastic mean free path in Ag: $l_p =$ 33 nm, the diffusion coefficient $D = v_F
l_p /3$ = 153 $cm^2/s$ and the thermal length $L_T =$ 136 nm at 1 K.
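These sample parameters can be cross-checked with a few lines of arithmetic (a sketch,
not part of the measurement chain; only the quoted diffusion coefficient is used):
\begin{verbatim}
# Hedged sketch: dirty-limit thermal length L_T = sqrt(hbar D / 2 pi k_B T)
# for sample 1, with D = 153 cm^2/s as quoted in the text.
import math
hbar, k_B = 1.0546e-34, 1.381e-23   # SI units
D, T = 153e-4, 1.0                  # m^2/s, K
L_T = math.sqrt(hbar * D / (2 * math.pi * k_B * T))
print(L_T * 1e9)                    # ~136 nm, as quoted
\end{verbatim}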
Due to the strong proximity effect in Ag below each Al island, the whole sample can be
considered as a 1D array of about one hundred S-N-S junctions. Below the
superconducting transition of Al at $T_{c} \simeq 1.4$ K, the resistance of sample 1
indeed decreases by an amount $(10 \%)$ compatible with the coverage ratio of the Ag
wire by Al islands $(\simeq 15 \%)$. This behaviour is in agreement with a low S-N
interface resistance and shows that superconductivity in Al is little depressed by the
contact with Ag. Let us point out that we do not observe any resistance anomaly or peak
at $T_c$ provided that the current-feed contacts are in line with the sample \cite{Moriond}.
At low temperature ($T <$ 800 mK), the resistance drops to zero due to the Josephson
effect. The appearance of a pair current here, in contrast with previous experiments on
similar samples \cite{Petrashov}, should be attributed to a much cleaner interface thanks
to the UHV environment.
\section{Josephson coupling}
In all our samples, the length $L$ of the N conductor between two neighbouring S
islands is larger than the superconducting coherence length $\xi_s$ and smaller than
the phase-breaking length $L_\varphi$: $\xi_s < L < L_\varphi$. In terms of energy, the
first relation also means that the Thouless energy $\epsilon_c$ is smaller than the
superconducting gap $\Delta$. We call this regime mesoscopic, as opposed to the
classical regime $L < \xi_s$, in which the two S electrodes are strongly coupled. In this
regime, one could expect (since $L > \xi_s$) a depletion of superconducting properties
but (since $L < L_\varphi$) still a strong quantum coherence.
\vspace*{0.5 cm}
\epsfxsize=8 cm
\begin{center}
\small
{\bf Fig. 1}: Temperature dependence of the critical current of sample 1, together
with the fit derived from the de Gennes theory in the dirty limit $l_p < L_T =
\sqrt{\hbar D /k_B T}$, which is relevant for the sample. Inset: oblique view of a similar
sample made of a continuous Ag wire of width 210 nm in contact with a series of Al
islands. The distance between two neighbouring Al islands is 800 nm.
\normalsize
\end{center}
We studied extensively the behaviour of the critical current as a function of
temperature in a variety of samples \cite{Courtois}. Fig. 1 shows
the critical current of sample 1, which is representative of many other samples
consisting of one or many S-N-S junctions in series. The first remarkable point is that the
low temperature saturation value of the $r_N I_c$ product of about 1.7 $\mu V$ is much
smaller than the energy gap ($\Delta \simeq$ 300 $\mu V$) of Al. This is in clear
contrast with what is observed in S-I-S tunnel junctions and short S-N-S junctions
\cite{Jct}. Moreover, the experimental data as a function of temperature does not fit
with the classical de Gennes theory \cite{deGennes} derived from the Ginzburg-Landau
equations, see Fig. 1. In order to get a good fit, one has to take the clean limit ($l_p >
L_T = \hbar v_F / k_B T$) expression for $L_T$ \cite{Courtois}, but even in this
non-relevant regime the fit parameters ($v_F$ and $\Delta$) are not correct.
These results motivated a recent theoretical study \cite{Zaikin} that met most of the
experimental facts. The dirty limit is considered. In the mesoscopic regime where $L >
\xi_s$, it is found that the relevant energy is no
longer the gap $\Delta$ but the Thouless energy $\epsilon_c$. This is indeed consistent
with our physical argument that only electron pairs within the Thouless window around
the Fermi level remain coherent over the N metal length $L$. In this case, the $r_N I_c$
product at zero temperature is equal to $\epsilon_c / 4$. This result was already
known in the ballistic regime \cite{Bagwell}. Finally, the
calculated temperature dependence differs strongly from the naively expected
$\exp(-L/L_T)$ law as soon as all Matsubara frequencies are taken into account. The
characteristic energy that divides the temperature in the exponential is indeed proportional to the Thouless
energy \cite{Zaikin}. A good agreement of the theory with the Fig. 1 experimental points has
been obtained by taking $L =$ 1.2 $\mu m$ \cite{Zaikin}. The discrepancy with the
actual size of 0.8 $\mu m$ may at least partly be attributed to the crude modelling of
the sample geometry.
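The zero-temperature prediction $r_N I_c = \epsilon_c/4$ can also be checked against the
measured value (a sketch using only the parameters quoted above; $L=1.2$ $\mu m$ is the
fitted effective length):
\begin{verbatim}
# Hedged sketch: compare r_N * I_c = eps_c/4 with the measured 1.7 microvolts,
# using D = 153 cm^2/s (sample 1) and the fitted effective length L = 1.2 microns.
hbar, e = 1.0546e-34, 1.602e-19
D, L = 153e-4, 1.2e-6
eps_c = hbar * D / L**2            # Thouless energy, in joules
print(eps_c / 4 / e * 1e6)         # ~1.75 microvolts, close to the measured 1.7
\end{verbatim}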
Sample 2, see Fig. 3 inset, is made of a single Cu loop in contact with two S islands.
The loop diameter is 500 nm; the width and thickness of the Cu wire are 50 nm and 25
nm. The centre-to-centre distance between the 150 nm wide Al islands is 1 $\mu m$. The
normal-state resistance $R_N =$ 51 $\Omega$ gives a mean free path $l_{p}
=$ 16 nm, a diffusion constant $D =$ 81 $cm^2/s$ and a thermal length $L_{T}=$
99 nm/$\sqrt{T}$. The amplitude of the conductance fluctuations measured in the purely
normal state (not shown) gives the value of the phase-coherence length: $L_{\varphi} =$
1.9 $\mu m$, so that the whole structure is coherent.
Near $T_c$, the resistance of sample 2 behaves very much like
that of sample 1. At low temperature ($T < 250$ mK), the sample 2 resistance drops to
a constant value of 16 $\Omega$ due to the Josephson coupling between the two S
electrodes. The residual resistance can be related to the resistance of the normal metal
between each voltage contact and the neighbouring S island, with an extra contribution
due to the current conversion. In this truly mesoscopic sample, the critical current
behaves very much as in the long N wires with a series of S islands. In addition, the critical
current is modulated by the magnetic flux $\phi$ with a period $\phi_{0}$
\cite{CourtoisPrl}. This behaviour is reminiscent of a superconducting quantum
interference device (SQUID), although our mesoscopic geometry differs strongly from
the classical design.
\section{The conductivity enhancement}
In the temperature regime $L_T \ll L$, which corresponds in sample 2 to $T >$ 500
mK, the pair current is exponentially small but still sensitive to the magnetic flux.
Thermal fluctuations should therefore induce exponentially-small magnetoresistance
oscillations. From the classical RSJ model \cite{RSJ} one can extrapolate a relative
amplitude of $10^{-8}$ at 1 K. In contrast, the experiment (see Fig. 2) exhibits
pronounced magnetoresistance oscillations with a relative amplitude of 0.8 $\%$ at 1 K.
The only flux-periodicity is ${\it h}/2e$, as no structure of half periodicity
was observed, even when the measuring current amplitude was changed. We used small
currents, so that the electron chemical potential difference $eV$ between the two N
reservoirs was less than the Thouless energy. In this ``adiabatic'' regime, the phase
difference between the two S electrodes can be considered as constant despite the finite
voltage drop $V$ between them.
\vspace*{0.5 cm}
\epsfxsize=8 cm \epsfbox{Fig2hc.eps}
\begin{center}
\small
{\bf Fig. 2}: Sample 2 low-field magnetoresistance for $T =$ 0.7; 0.8; 0.9; 1; 1.1; 1.2
and 1.3 K with a measurement current $I_{mes} =$ 60 nA. $T =$ 0.7 and 0.8 K curves
have been shifted down by 1 and 0.5 $\Omega$ respectively for clarity. Oscillations of
flux-periodicity ${\it h}/2e$ and relative amplitude 0.8 $\%$ or equivalently 4.2 $e^2/{\it h}$ at $T =$ 1 K are visible.
\normalsize
\end{center}
The observed magnetoresistance oscillations are very robust, since they remain clearly
visible very near $T_c$ at 1.3 K, when the thermal length $L_T$ is much smaller than
the distance between one Al island and the loop. Fig. 3 shows the temperature
dependence of the two main contributions to transport. The Josephson current vanishes
rapidly above 250 mK, revealing the exponential decay with temperature. The amplitude
of the observed h/2e magnetoresistance oscillations is plotted on the same graph. It shows
a striking agreement with a fit using a plain $1/T$ power-law. The slight deviation of the
data from the $1/T$ fit near $T_{c}$ is clearly related to the depletion of
superconductivity in S. This power-law dependence is a new result, in clear contrast with
the exponential damping of the Josephson current with temperature. The observation
of the same effect in samples with only one S island, see below, further illustrates the
difference with the Josephson effect. Finally, the large amplitude of the effect
compared to the quantum of conductance $e^2/{\it h}$ rules out an interpretation in terms
of weak localization or conductance fluctuations.
\vspace*{0.5 cm}
\epsfxsize=8 cm \epsfbox{Fig3hc.eps}
\begin{center}
\small
{\bf Fig. 3}, Left scale: Temperature dependence of the critical current of sample 2 with a
25 $\Omega$ differential resistance criterion. The dashed line is a guide to the eye.
Right scale: Temperature dependence of the amplitude of the
magnetoresistance oscillations, $I_{mes} =$ 60 nA. The dashed line is a $1/T$ fit.
Inset: micrograph of a similar sample made of a single Cu loop in contact with two (here
vertical) Al islands. The whole sample, including the measurement contacts and except
the two Al islands, is made of Cu. The loop diameter is 500 nm.
\normalsize
\end{center}
In our geometry, the loop is a unique tool for selecting the long-range component of the
proximity effect by making the pair amplitudes of the two arms of the loop interfere. The
observation of a power-law temperature dependence instead of the naively expected
exponential damping over $L_T$ definitely shows that the natural length scale of the
proximity effect is not the thermal length $L_T$ but the phase-memory length
$L_\varphi$. The role of temperature is only to distribute the energy of the
electrons that will reflect on the superconducting interface and contribute to the proximity
effect. At the Fermi level, electrons in a pair are perfectly matched, so that only
phase-breaking can break the pair. It is quite remarkable that the relative
amplitude of the oscillations is in agreement with the ratio $\epsilon_c / k_B T =$ 1.5
$\%$ at 1 K if one takes the characteristic length $L =$ 2 $\mu m$ for the sample. This
ratio can be seen as the fraction of electron pairs remaining coherent after diffusion up to a
distance $L$ from the S-N interface.
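This last estimate is elementary arithmetic (a sketch using the quoted sample 2
parameters; $L=2$ $\mu m$ is the characteristic length suggested in the text):
\begin{verbatim}
# Hedged sketch: fraction of coherent pairs, eps_c / (k_B T), for sample 2
# with D = 81 cm^2/s and a characteristic length L = 2 microns, at T = 1 K.
hbar, k_B = 1.0546e-34, 1.381e-23
D, L, T = 81e-4, 2e-6, 1.0
eps_c = hbar * D / L**2
print(100 * eps_c / (k_B * T))     # ~1.5 %, matching the quoted ratio
\end{verbatim}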
\vspace*{0.5 cm}
\epsfxsize=8 cm \epsfbox{Fig4hc.eps}
\begin{center}
\small
{\bf Fig. 4}: Temperature dependence of sample 3 resistance. Sample 3 is made
of a single short N wire in lateral contact with a S island. Critical temperature of S = Al is
1.25 K. Measurement current is 400 nA. Inset: micrograph of a similar sample made of a
400-nm--long Cu wire between two massive N = Cu reservoirs and in lateral contact with
a massive Al island. Due to the lack of contrast, a dotted line was drawn following the
contour of the Al reservoir.
\normalsize
\end{center}
\section{Re-entrance regime}
With two S islands, one cannot investigate the dissipative transport in the strong
coherence regime $L_{T} \simeq L$, since the Josephson effect shorts the low-current
resistance between the two S islands. Thus we studied transport in samples with
only one S island. Sample 3 consists of a 400-nm-long N wire in lateral contact with a
single S island located 200 nm away from the N conductor centre, see Fig. 4
inset. The width and thickness of the Cu wire are 80 and 50 nm respectively. The
normal-state resistance is $R_N =$ 10.1 $\Omega$, the mean free path is $l_p =$ 6 nm,
the diffusion coefficient $D$ is 30 $cm^2/s$ and the thermal length $L_T$ is 60 nm at 1
K. The Thouless energy is then equal to 12 $\mu eV$ and the Thouless temperature is
$\epsilon_c / k_B =$ 142 mK if one takes a characteristic length $L =$ 400 nm for the
sample.
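These numbers follow directly from $\epsilon_c=\hbar D/L^2$ (a sketch with the quoted
parameters):
\begin{verbatim}
# Hedged sketch: Thouless energy and temperature for sample 3
# (D = 30 cm^2/s, characteristic length L = 400 nm, as quoted in the text).
hbar, k_B, e = 1.0546e-34, 1.381e-23, 1.602e-19
D, L = 30e-4, 400e-9
eps_c = hbar * D / L**2
print(eps_c / e * 1e6)     # ~12 micro-eV
print(eps_c / k_B * 1e3)   # ~0.14 K, i.e. the quoted 142 mK
\end{verbatim}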
\vspace*{0.5 cm}
\epsfxsize=8 cm \epsfbox{Fig5hc.eps}
\begin{center}
\small
{\bf Fig. 5}: Bias current dependence of the differential resistance of sample 3 at $T =$ 50
mK. Measurement current is 400 nA. The differential-resistance minimum corresponds to
an electron energy of about 34 $\mu eV$.
\normalsize
\end{center}
At low temperature, we do not observe a sharp drop of the resistance, since obviously no
Josephson current can establish itself. In contrast, we observe an {\it increase} of the
zero--magnetic-field and low-current resistance below 500 mK, see Fig. 4.
The current bias dependence of the measured resistance shows the most striking
behaviour with a {\it decrease} of the resistance when the polarisation current is
increased, see Fig. 5. This non-linear behaviour rules out
an interpretation in terms of weak localization, which is known to be insensitive to
voltage. The location of the resistance minimum gives a threshold voltage of
about 34 $\mu V$. This value is of the order of the calculated value based on the
Thouless energy $5 \epsilon_c =$ 61 $\mu V$. Moreover, the 500 mK crossover
temperature for the temperature dependence is also compatible with the related Thouless
temperature $5 \epsilon_{c} / k_B =$ 710 mK. In summary, we observed for the first
time the re-entrance of the metallic resistance in a mesoscopic proximity superconductor.
In agreement with the predictions \cite{Stoof,Volkov-Lambert}, this new regime occurs when all the energies involved are of order and below the Thouless energy of the sample.
Finally, we studied electron transport in sample 4, which has exactly the same loop
geometry as sample 2, except for the absence of one S island. The width of the
wire is 150 nm, its thickness is about 40 nm, and the distance between the loop and
the S island is about 100 nm. The normal state resistance $R_{N}$ = 10.6 $\Omega$
gives a mean free path $l_p =$ 18 nm, a diffusion coefficient $D = $ 70 $cm^2/s$ and a
thermal length $L_T =$ 92 nm at 1 K. The behaviour of the resistance above
500 mK is very similar to the two-island case: (i) At $T_{c}$ we observe a drop of the
resistance of about 3 $\%$; (ii) Below $T_{c}$, the resistance oscillates with the
magnetic field with an amplitude that follows again a $1/T$ temperature dependence
down to 200 mK.
Fig. 6 (left) shows the temperature dependence of the resistance for various values of the
magnetic flux in the loop. At zero field, the re-entrance of the resistance is present but
hardly distinguishable. At $\phi = \phi_0 / 2$, the re-entrance occurs at a higher
temperature $(T <$ 500 mK) and has a much larger amplitude. At $\phi = \phi_0$, the
curve is close to the zero-field case, and at $\phi = 3 \phi_0 /2$, close to the $\phi_0 /
2$ case. On further increasing the magnetic field (Fig. 6, left), the re-entrant
regime is entered at an increasing temperature, independently of the $\phi_0$-periodic
modulation.
\section{Comparison with linearized theory}
Let us compare experimental results from sample 4 with calculations derived from the
theory. Sample 4 can be modelled as two independent S-N circuits in series, as
shown in the inset of Fig. 6. This strong approximation describes the main
physics of our particular geometry and illustrates more general situations. Both circuits
consist of a N-wire between a superconductor S and a normal reservoir N. Because of the
loop geometry, a magnetic field induces an Aharonov-Bohm flux, which changes the
boundary conditions on the pair amplitude $F(\epsilon )$. At zero magnetic flux,
$F(\epsilon )$ is zero only at the contact with the normal reservoirs. At half magnetic
flux, destructive interference of the pair functions in the two branches enforces a zero in
$F(\epsilon )$ at the node K (see Fig. 6 inset). Consequently,
the pair amplitude is also zero between the loop and the N reservoir \cite{Charlat}.
Half a flux-quantum then reduces the effective sample size to the length $L'$ between the
S interface and the point K. As a result, the crossover temperature of the re-entrance of
the resistance is shifted to higher temperature, see Fig. 6. Compared to the zero-field
case, this subtracts the contribution of the pairs remaining coherent
after the end of the loop, i.e. with an energy below the Thouless energy $\epsilon_{c}'
= \hbar D / L'^2$. In the intermediate temperature regime $k_{B}T > \epsilon_{c}'$,
this contribution is of the order of $\epsilon_{c}'/k_{B}T$. This is in qualitative agreement
with the observed magnetoresistance oscillations, both in amplitude and
in temperature dependence \cite{Charlat}. With two S islands as in sample 2, the same
analysis holds.
\vspace*{0.5 cm}
\epsfxsize=8 cm \epsfbox{Fig6hc.eps}
\begin{center}
\small
{\bf Fig. 6}, Left: Measured temperature dependence of the resistance of sample 4 at
different values of the magnetic flux in units of the flux-quantum: $\phi / \phi_0 =$ 0; 1/2;
1; 3/2; 2; 3.8; 5.6; 7.4. $I_{mes} =$ 200 nA. Right: Calculated
temperature dependences of the resistance for the same flux values. The sample model is shown in
the inset. The arrow shows the location (K) where half a flux-quantum enforces a zero
pair amplitude. The physical parameters (D, L...) are the measured ones except for the
width of the N wire $w =$ 50 nm. The zero--magnetic-field phase-breaking length
$L_\varphi$ is taken to be infinite.
\normalsize
\end{center}
As an additional effect of the magnetic field $H$, the
phase-memory length $L_{\varphi}$ is renormalized due to the finite width $w$ of the
Cu wire \cite{Pannetier-Rammal}:
\begin {equation}
L_{\varphi}^{-2}(H)=L_{\varphi}^{-2}(0)+\frac{\pi^{2}}{3}
\frac{H^{2}w^{2}}{\Phi_{0}^{2}}
\end {equation}
When smaller than the sample length $L$, the phase-memory length $L_{\varphi}(H)$
plays the role of an effective length for the sample. As a result, the resistance minimum
is shifted to higher temperatures when the magnetic field is increased, see Fig. 6.
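Equation 3 is straightforward to evaluate; the sketch below (an illustration with the
fitted wire width, placeholder field values, and a very large zero-field $L_\varphi$, as
in the calculation) shows how the field shrinks the effective phase-memory length:
\begin{verbatim}
# Hedged sketch: field renormalization of the phase-memory length, Eq. 3.
# w is the fitted wire width (65 nm); L_phi(0) is taken very large, as in the text.
import math
Phi_0 = 2.07e-15                   # flux quantum h/2e, in Wb
w = 65e-9                          # m
L_phi_0 = 10e-6                    # m, stand-in for an "infinite" zero-field value

def L_phi(H):
    """Effective phase-memory length at field H (tesla)."""
    inv_sq = 1.0 / L_phi_0**2 + (math.pi**2 / 3.0) * (H * w / Phi_0) ** 2
    return 1.0 / math.sqrt(inv_sq)

for H in (0.0, 5e-3, 2e-2, 5e-2):  # placeholder field values, in tesla
    print(H, L_phi(H) * 1e6, "microns")
\end{verbatim}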
In the right part of Fig. 6, we show the calculated
resistance using Eqs. 1-3 in the model geometry of the Fig. 6 inset, in the
case of a fully transparent interface. The only free parameter is the width of the wires
which has been adjusted so that the high-field damping of the amplitude of the
magnetoresistance oscillations is well described by the calculation. The discrepancy
between the fitted value $w=$ 65 nm and the measured value is attributed to sample
non-ideality. Our calculation accounts for both the global shape and amplitude of the
curves and for their behaviour as a function of the magnetic flux. This is particularly
remarkable given the strong assumptions of the model. One should note that
the qualitative shape and amplitude of the curves are preserved if non-linearized Usadel
equations or slightly different geometrical parameters are used.
\section{Conclusion}
In conclusion, we have clearly identified the different components of the proximity
effect in a mesoscopic metal near a superconducting interface. We demonstrated the
cross-over between the low-temperature Josephson coupling and an energy-sensitive
conductance enhancement at high temperature. We proved the persistence of the
proximity effect over the phase-coherence length $L_\varphi$ by observing the
power-law ($1/T$) temperature dependence of the magneto-resistance oscillations in a
loop. This effect is much larger than the weak localization correction. We
observed for the first time the re-entrance of the metallic resistance when all
energies involved are below the Thouless energy of the
sample, as predicted in recent theories. Our experimental results are well described by the
linearized Usadel equations from the quasiclassical theory. We notice that in addition to
the study of electron-electron interaction in metals \cite{Stoof}, these results are very
promising for studying the actual efficiency of electron reservoirs \cite{Charlat}.
We thank B. Spivak, F. Wilhem, A. Zaikin and F. Zhou for stimulating
discussions. The financial support of the Direction de la Recherche et des Technologies
and of the R\'egion Rh\^one-Alpes is gratefully acknowledged.
\section{The Picard group and the classical Picard group.}
{\bf{\underline{Notation}.}} Let $A$ be a \mbox{$C^*$}-algebra. If $M$ is a Hilbert \cstar-bimodule over
$A$ (in the sense of \cite[1.8]{bms}) we denote by $\langle$ , $\rangle_{{\scriptscriptstyle{L}}} ^M$,
and $\langle$ , $\rangle_{{\scriptscriptstyle{R}}}^M$, respectively, the left and right $A$-valued
inner products, and drop the superscript whenever the context is clear
enough. If $M$ is a left (resp. right) Hilbert \mbox{$C^*$}-module over $A$, we
denote by $K(_{A}\!M)$ (resp. $K(M_{A})$) the \mbox{$C^*$}-algebra of compact
operators on $M$. When $M$ is a Hilbert \cstar-bimodule over $A$ we will view the elements of
$\langle M, M\rangle_{{\scriptscriptstyle{R}}}$ (resp. $\langle M, M\rangle_{{\scriptscriptstyle{L}}}$) as compact operators on
the left (resp. right) module $M$, as well as elements of $A$, via the
well-known identity:
\[\langle m,n\rangle_{{\scriptscriptstyle{L}}} p= m \langle n,p \rangle_{{\scriptscriptstyle{R}}},\] for $m,n,p\in M$.
The bimodule denoted by $\tilde{M}$ is the dual bimodule of $M$, as defined in
\cite[6.17]{irep}.
By an isomorphism of left (resp. right) Hilbert \mbox{$C^*$}-modules we mean an
isomorphism of left (resp. right) modules that preserves the left
(resp. right) inner product. An isomorphism of Hilbert \cstar-bimodules is an isomorphism of both
left and right Hilbert \mbox{$C^*$}-modules. We recall from \cite[3]{bgr} that
$\mbox{Pic}(A)$, the Picard group of $A$, consists of isomorphism classes of full
Hilbert \cstar-bimodules over $A$ (that is, Hilbert \cstar-bimodules $M$ such that $\langle M,M \rangle_{{\scriptscriptstyle{L}}} =\langle
M,M \rangle_{{\scriptscriptstyle{R}}} = A$), equipped with the tensor product, as defined in
\cite[5.9]{irep}.
It was shown in \cite[3.1]{bgr} that there is an anti-homomorphism from
$\mbox{Aut}(A)$ to $\mbox{Pic}(A)$ such that the sequence
\[ 1\longrightarrow \mbox{Gin}(A) \longrightarrow \mbox{Aut}(A) \longrightarrow
\mbox{Pic}(A)\] is exact, where $\mbox{Gin}(A)$ is the group of generalized inner
automorphisms of $A$. By this correspondence, an automorphism $\alpha$ is
mapped to a bimodule that corresponds to the one we denote by
$A_{\alpha^{-1}}$ (see below), so that $\alpha\mapsto A_{\alpha}$ is a group
homomorphism having $\mbox{Gin}(A)$ as its kernel.
Given a partial automorphism $(I,J,\theta)$ of a \mbox{$C^*$}-algebra $A$, we denote
by $J_{\theta}$ the corresponding (\cite[3.2]{aee}) Hilbert \cstar-bimodule over $A$. That is,
$J_{\theta}$ consists of the vector space $J$ endowed with the $A$-actions:
\[a\cdot x= ax \mbox{,\hspace{.2in}} x\cdot a = \theta[\theta^{-1}(x)a],\] and the inner
products
\[\langle x,y \rangle_{{\scriptscriptstyle{L}}}=xy^*,\] and
\[\langle x,y \rangle_{{\scriptscriptstyle{R}}}=\theta^{-1}(x^*y),\] for $x,y\in J$, and $a\in A$.
If $M$ is a Hilbert \cstar-bimodule over $A$, we denote by $M_{\theta}$ the Hilbert \cstar-bimodule obtained by
taking the tensor product $M\otimes_A J_{\theta}$.
The map $m\otimes j \mapsto mj$, for $m\in M,$ $j\in J$, identifies
$M_{\theta}$ with the vector space $MJ$ equipped with the $A$-actions:
\[a\cdot mj= amj \mbox{,\hspace{.2in}} mj\cdot a =m \theta[\theta^{-1}(j) a],\] and the
inner products
\[ \langle x,y\rangle_{{\scriptscriptstyle{L}}}^{M_{\theta}}=\langle x,y\rangle^{M}_L,\] and
\[\langle x,y\rangle_{{\scriptscriptstyle{R}}}^{M_{\theta}} =\theta^{-1}(\langle x,y \rangle^M_R),\]
where $m \in M$, $j\in J$, $x,y\in MJ$, and $a\in A$.
As mentioned above, when $M$ is a \mbox{$C^*$}-algebra $A$, equipped with its usual
structure of Hilbert \cstar-bimodule over $A$, and $\theta\in \mbox{Aut}(A)$ the bimodule $A_{\theta}$
corresponds to the element of $\mbox{Pic}(A)$ denoted by $X_{\theta^{-1}}$ in
\cite[3]{bgr}, so we have $A_{\theta}\otimes A_{\sigma}\cong A_{\theta\sigma}$
and $\widetilde{A_{\theta}}\cong A_{\theta^{-1}}$ for all $\theta,\sigma\in
\mbox{Aut}(A)$.
In this section we discuss the interdependence between the left and the right
structure of a Hilbert \mbox{$C^*$}-bimodule. Proposition \ref{leftiso} shows that
the right structure is determined, up to a partial isomorphism, by the left
one. By specializing this result to the case of full Hilbert \cstar-bimodules over a commutative
\mbox{$C^*$}-algebra, we are able to describe $\mbox{Pic}(A)$ as the semidirect product of
the classical Picard group of $A$ by the group of automorphisms of $A$.
\begin{prop}
\label{leftiso}
Let $M$ and $N$ be Hilbert
\mbox{$C^*$}-bimodules over a \mbox{$C^*$}-algebra $A$. If $\Phi:M\longrightarrow N$ is
an isomorphism of left $A$-Hilbert \mbox{$C^*$}-modules, then there is a partial
automorphism $(I,J,\theta)$ of $A$ such that $\Phi:M_{\theta}\longrightarrow
N$ is an isomorphism of $A-A$ Hilbert \mbox{$C^*$}-bimodules. Namely, $I=\langle
N,N\rangle_{{\scriptscriptstyle{R}}}$, $J=\langle M,M\rangle_{{\scriptscriptstyle{R}}}$ and $\theta(\langle \Phi(m_0),
\Phi(m_1)\rangle_{{\scriptscriptstyle{R}}})=\langle m_0,m_1\rangle_{{\scriptscriptstyle{R}}}$.
\end{prop}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } Let $\Phi: M\longrightarrow N$ be a left $A$-Hilbert \mbox{$C^*$}-module
isomorphism. Notice that, if $m\in M$, and $\|m\|=1$, then, for all $m_i,m'_i
\in M$, and $i=1,...,n$:
\[\begin{array}[t]{ll}
\|\sum m\langle m_i,m'_i\rangle_{{\scriptscriptstyle{R}}}\| & =\|\sum\langle m,m_i\rangle_{{\scriptscriptstyle{L}}} m'_i\|\\ &
=\|\Phi(\sum\langle m,m_i\rangle_{{\scriptscriptstyle{L}}} m'_i)\|\\ & =\|\sum\langle m,m_i\rangle_{{\scriptscriptstyle{L}}}
\Phi(m'_i)\|\\ &=\|\sum \langle \Phi(m),\Phi(m_i)\rangle_{{\scriptscriptstyle{L}}} \Phi(m'_i)\|\\
&=\|\sum \Phi(m) \langle \Phi(m_i),\Phi(m'_i)\rangle_{{\scriptscriptstyle{R}}} \|.
\end{array}
\] Therefore:
\[\begin{array}[t]{ll}
\|\sum \langle m_i,m'_i \rangle_{{\scriptscriptstyle{R}}}\|&=\sup_{\{m:\|m\|=1\}}\|\sum m\langle
m_i,m'_i\rangle_{{\scriptscriptstyle{R}}}\|\\ &=\sup_{\{m:\|m\|=1\}}\|\sum \Phi(m) \langle
\Phi(m_i),\Phi(m'_i)\rangle_{{\scriptscriptstyle{R}}}\|\\ &=\|\sum \langle \Phi(m_i),\Phi(m'_i)
\rangle_{{\scriptscriptstyle{R}}}\|,
\end{array}\]
Set $I=\langle N,N\rangle_{{\scriptscriptstyle{R}}}$, and $J=\langle M,M\rangle_{{\scriptscriptstyle{R}}}$, and let
$\theta:I\longrightarrow J$ be the isometry defined by
\[ \theta (\langle \Phi(m_1), \Phi(m_2)\rangle_{{\scriptscriptstyle{R}}})= \langle m_1,m_2
\rangle_{{\scriptscriptstyle{R}}},\] for $m_1,m_2\in M$. Then,
\[\begin{array}[t]{ll}
\theta(\langle \Phi(m_1), \Phi(m_2) \rangle_{{\scriptscriptstyle{R}}}^*)&=\theta(\langle \Phi(m_2),
\Phi(m_1) \rangle_{{\scriptscriptstyle{R}}})\\ &=\langle m_2, m_1\rangle_{{\scriptscriptstyle{R}}}\\ &=\langle
m_1,m_2\rangle_{{\scriptscriptstyle{R}}}^*\\ &=\theta(\langle \Phi(m_1), \Phi(m_2) \rangle_{{\scriptscriptstyle{R}}})^*,
\end{array}\] and
\[\begin{array}[t]{ll}
\theta(\langle \Phi(m_1), \Phi(m_2) \rangle_{{\scriptscriptstyle{R}}} \langle \Phi(m'_1), \Phi(m'_2)
\rangle_{{\scriptscriptstyle{R}}})&=\theta(\langle \Phi(m_1), \Phi(m_2)\langle \Phi(m'_1), \Phi(m'_2)
\rangle_{{\scriptscriptstyle{R}}} \rangle_{{\scriptscriptstyle{R}}})\\ &=\theta(\langle \Phi(m_1), \langle \Phi(m_2),
\Phi(m'_1)\rangle_{{\scriptscriptstyle{L}}} \Phi(m'_2) \rangle_{{\scriptscriptstyle{R}}})\\ &=\langle
m_1,\langle\Phi(m_2),\Phi(m'_1)\rangle_{{\scriptscriptstyle{L}}} m'_2\rangle_{{\scriptscriptstyle{R}}}\\ &=\langle m_1,\langle
m_2,m'_1\rangle_{{\scriptscriptstyle{L}}} m'_2\rangle_{{\scriptscriptstyle{R}}}\\ &=\langle m_1,m_2\langle m'_1,m'_2\rangle_{{\scriptscriptstyle{R}}}
\rangle_{{\scriptscriptstyle{R}}}\\ &=\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}} \langle m'_1,m'_2\rangle_{{\scriptscriptstyle{R}}}\\
&=\theta(\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}})\theta(\langle m'_1,m'_2\rangle_{{\scriptscriptstyle{R}}}) ,
\end{array}\] which shows that $\theta$ is an isomorphism.
Besides, $\Phi:M_{\theta}\longrightarrow N$ is a Hilbert \cstar-bimodule isomorphism:
\[\begin{array}[t]{ll}
\Phi(m\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}}\cdot a)&=\Phi(m\theta[\theta^{-1}(\langle
m_1,m_2\rangle_{{\scriptscriptstyle{R}}})a]\\ &=\Phi(m\theta(\langle \Phi(m_1),\Phi(m_2)a\rangle_{{\scriptscriptstyle{R}}}))\\
&=\Phi(m\langle m_1,\Phi^{-1}(\Phi(m_2)a)\rangle_{{\scriptscriptstyle{R}}})\\ &=\Phi(\langle
m,m_1\rangle_{{\scriptscriptstyle{L}}}\Phi^{-1}(\Phi(m_2)a))\\ &=\langle m,m_1\rangle_{{\scriptscriptstyle{L}}}\Phi(m_2)a\\
&=\Phi(\langle m,m_1 \rangle_{{\scriptscriptstyle{L}}} m_2)a\\ &=\Phi(m\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}})a,
\end{array}\] and
\[\langle \Phi(m_1),\Phi(m_2)\rangle_{{\scriptscriptstyle{R}}}=\theta^{-1}(\langle
m_1,m_2\rangle_{{\scriptscriptstyle{R}}}^M)=\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}}^{M_{\theta}}.\]
Finally, $\Phi$ is onto because
\[\Phi(M_{\theta})=\Phi(M\langle M,M\rangle_{{\scriptscriptstyle{R}}})=\Phi(M)=N.\]
\hfill {\bf Q.E.D.} \vspace*{.1in}
\vspace{.2in}
\begin{clly}
\label{unrs} Let $M$ and $N$ be Hilbert \cstar-bimodules over a \mbox{$C^*$}-algebra $A$, and let
$\Phi:M\longrightarrow N$ be an isomorphism of left Hilbert
\mbox{$C^*$}-modules. Then $\Phi$ is an isomorphism of Hilbert \cstar-bimodules if and only if
$\Phi$ preserves either the right inner product or the right $A$-action.
\end{clly}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } Let $\theta$ be as in Proposition \ref{leftiso}, so that
$\Phi:M_{\theta}\longrightarrow N$ is a Hilbert \cstar-bimodule isomorphism. If $\Phi$ preserves
the right inner product, then $\theta$ is the identity map on $\langle
M,M\rangle_{{\scriptscriptstyle{R}}}$ and $M_{\theta}=M$.
If $\Phi$ preserves the right action of $A$, then, for $m_0,m_1,m_2\in M$ we
have:
\[\begin{array}[t]{ll}
\Phi(m_0)\langle \Phi(m_1),\Phi(m_2)\rangle_{{\scriptscriptstyle{R}}} & = \langle
\Phi(m_0),\Phi(m_1)\rangle_{{\scriptscriptstyle{L}}} \Phi(m_2)\\ &=\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}} \Phi(m_2)\\
&=\Phi(m_0\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}})\\ &=\Phi(m_0)\langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}},
\end{array}\] so $\Phi$ preserves the right inner product as well.
\hfill {\bf Q.E.D.} \vspace*{.1in}
\begin{prop}
\label{lmod} Let $M$ and $N$ be left Hilbert \mbox{$C^*$}-modules over a
\mbox{$C^*$}-algebra $A$. If $M$ and $N$ are isomorphic as left $A$-modules, and $
K(_A\!M)$ is unital, then $M$ and $N$ are isomorphic as left Hilbert
\mbox{$C^*$}-modules.
\end{prop}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } First recall that any $A$-linear map $T:M\longrightarrow N$ is
adjointable. For if $m_i,m'_i\in M$, $i=1,...,n$ are such that $\sum\langle
m_i,m'_i\rangle_{{\scriptscriptstyle{R}}}=1_{K(_A\!M)}$, then for any $m\in M$:
\[T(m)=T(\sum\langle m,m_i\rangle_{{\scriptscriptstyle{L}}} m'_i)=\sum\langle m,m_i\rangle_{{\scriptscriptstyle{L}}}
T(m'_i)=(\sum\xi_{m_i,Tm'_i})(m),\] where $\xi_{m,n}:M\longrightarrow N$ is the
compact operator (see, for instance, \cite[1]{la}) defined by
$\xi_{m,n}(m_0)=\langle m_0,m\rangle_{{\scriptscriptstyle{L}}} n$, for $m\in M$, and $n\in N$, which is
adjointable. Let $T:M\longrightarrow N$ be an isomorphism of left modules,
and set $S:M\longrightarrow N$, $S=T(T^*T)^{-1/2}$. Then $S$ is an $A$-linear
map, therefore adjointable. Furthermore, $S$ is a left Hilbert \mbox{$C^*$}-module
isomorphism: if $m_0,m_1\in M$, then
\[\begin{array}[t]{ll}
\langle S(m_0),S(m_1)\rangle_{{\scriptscriptstyle{L}}}&=\langle
T(T^*T)^{-1/2}m_0,T(T^*T)^{-1/2}m_1\rangle_{{\scriptscriptstyle{L}}}\\ &=\langle
m_0,(T^*T)^{-1/2}T^*T(T^*T)^{-1/2}m_1\rangle_{{\scriptscriptstyle{L}}}\\ &=\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}}.
\end{array}\]
\hfill {\bf Q.E.D.} \vspace*{.1in}
We next discuss the Picard group of a \mbox{$C^*$}-algebra $A$. Proposition
\ref{leftiso} shows that the right structure of a full Hilbert \cstar-bimodule over $A$ is
determined, up to an automorphism of $A$, by its left structure.
This suggests describing $\mbox{Pic}(A)$ in terms of the subgroup $\mbox{Aut}(A)$ together
with a cross-section of the equivalence classes under left Hilbert
\mbox{$C^*$}-modules isomorphisms. When $A$ is commutative there is a natural choice
for this cross-section: the family of symmetric Hilbert \cstar-bimodules (see Definition
\ref{sym}). That is the reason why we now concentrate on commutative
\mbox{$C^*$}-algebras and their symmetric Hilbert \mbox{$C^*$}-bimodules.
\begin{prop}
\label{lrip} Let $A$ be a commutative \mbox{$C^*$} algebra and $M$ a Hilbert \cstar-bimodule over
$A$. Then $\langle m,n\rangle_{{\scriptscriptstyle{L}}} p=\langle p,n\rangle_{{\scriptscriptstyle{L}}} m$ for all $m,n,p\in M$.
\end{prop}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } We first prove the proposition for $m=n$; the statement will then follow
from the polarization identities.
Let $m,p\in M$, then:
\[\begin{array}[t]{l}
\langle \langle m,m\rangle_{{\scriptscriptstyle{L}}} p-\langle p, m\rangle_{{\scriptscriptstyle{L}}} m,\ \langle m,m\rangle_{{\scriptscriptstyle{L}}}
p-\langle p, m\rangle_{{\scriptscriptstyle{L}}} m \rangle_{{\scriptscriptstyle{L}}}\\
\\ =\langle \langle m,m\rangle_{{\scriptscriptstyle{L}}} p,\langle m,m\rangle_{{\scriptscriptstyle{L}}} p\rangle_{{\scriptscriptstyle{L}}} -\langle
\langle m,m\rangle_{{\scriptscriptstyle{L}}} p, \langle p,m\rangle_{{\scriptscriptstyle{L}}} m\rangle_{{\scriptscriptstyle{L}}}\\ -\langle \langle
p,m\rangle_{{\scriptscriptstyle{L}}} m, \langle m,m\rangle_{{\scriptscriptstyle{L}}} p\rangle_{{\scriptscriptstyle{L}}} +\langle \langle p,m\rangle_{{\scriptscriptstyle{L}}} m,
\langle p,m\rangle_{{\scriptscriptstyle{L}}} m\rangle_{{\scriptscriptstyle{L}}}\\
\\ = \langle m\langle m,p\rangle_{{\scriptscriptstyle{R}}}\langle p ,m\rangle_{{\scriptscriptstyle{R}}},m\rangle_{{\scriptscriptstyle{L}}} -\langle
m,m\rangle_{{\scriptscriptstyle{L}}} \langle p,m\rangle_{{\scriptscriptstyle{L}}} \langle m,p\rangle_{{\scriptscriptstyle{L}}}\\ -\langle p,m\rangle_{{\scriptscriptstyle{L}}}
\langle m,p\rangle_{{\scriptscriptstyle{L}}}\langle m,m\rangle_{{\scriptscriptstyle{L}}} + \langle p,m\rangle_{{\scriptscriptstyle{L}}} \langle
m,m\rangle_{{\scriptscriptstyle{L}}}\langle m,p\rangle_{{\scriptscriptstyle{L}}}\\
\\ = \langle m\langle p,m\rangle_{{\scriptscriptstyle{R}}}\langle m,p\rangle_{{\scriptscriptstyle{R}}},m\rangle_{{\scriptscriptstyle{L}}} -\langle
m,m\rangle_{{\scriptscriptstyle{L}}} \langle p,m\rangle_{{\scriptscriptstyle{L}}} \langle m,p\rangle_{{\scriptscriptstyle{L}}}\\
\\ =\langle m\langle p,m\rangle_{{\scriptscriptstyle{R}}}, m\langle p,m\rangle_{{\scriptscriptstyle{R}}}\rangle_{{\scriptscriptstyle{L}}} -\langle
m,m\rangle_{{\scriptscriptstyle{L}}} \langle p,m\rangle_{{\scriptscriptstyle{L}}} \langle m,p\rangle_{{\scriptscriptstyle{L}}}\\
\\ =\langle\langle m,p \rangle_{{\scriptscriptstyle{L}}} m,\langle m,p \rangle_{{\scriptscriptstyle{L}}} m\rangle_{{\scriptscriptstyle{L}}} -\langle
m,m\rangle_{{\scriptscriptstyle{L}}} \langle p,m\rangle_{{\scriptscriptstyle{L}}} \langle m,p\rangle_{{\scriptscriptstyle{L}}}\\
\\ =0.
\end{array}\] Now, for $m,n,p \in M$, we have:
\[\begin{array}[t]{ll}
\langle m,n\rangle_{{\scriptscriptstyle{L}}} p&=\frac{1}{4}\sum_{k=0}^3i^k \langle m+i^kn,
m+i^kn\rangle_{{\scriptscriptstyle{L}}} p\\
& \\ &=\frac{1}{4}\sum_{k=0}^3 i^k\langle p,m+i^kn\rangle_{{\scriptscriptstyle{L}}} (m+i^kn)\\ & \\
&=\frac{1}{4}\sum_{k=0}^3 i^kp\langle m+i^kn, m+i^kn\rangle_{{\scriptscriptstyle{R}}}\\ & \\ &=p\langle
n,m\rangle_{{\scriptscriptstyle{R}}}\\ & \\ &=\langle p,n \rangle_{{\scriptscriptstyle{L}}} m.
\end{array}\]
\vspace{.2in}
\begin{defi}
\label{sym}Let $A$ be a commutative \mbox{$C^*$}-algebra. A Hilbert \cstar-bimodule $M$ over $A$ is
said to be {\em{symmetric}} if $am=ma$ for all $m\in M$, and \mbox{$a\in A$.}
If $M$ is a Hilbert \cstar-bimodule over $A$, the {\em{symmetrization}} of $M$ is the symmetric
Hilbert \cstar-bimodule $M^s$, whose underlying vector space is $M$ with its given left
Hilbert-module structure, and right structure defined by:
\[m\cdot a =am \mbox{,\hspace{.2in}} \langle m_0,m_1\rangle_{{\scriptscriptstyle{R}}}^{M^s}=\langle
m_1,m_0\rangle_{{\scriptscriptstyle{L}}}^M,\] for $a\in A$, $m,m_0,m_1\in M^s$. The commutativity of
$A$ guarantees the compatibility of the left and right actions. As for the
inner products, we have, in view of Proposition \ref{lrip}:
\[\begin{array}[t]{ll}
\langle m_0, m_1\rangle_{{\scriptscriptstyle{L}}}^{M^s}\cdot m_2&=\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}} ^M m_2\\
&=\langle m_2,m_1\rangle_{{\scriptscriptstyle{L}}}^M m_0\\ &=m_0\cdot \langle m_2,m_1 \rangle_{{\scriptscriptstyle{L}}}^M\\
&=m_0\cdot \langle m_1,m_2\rangle_{{\scriptscriptstyle{R}}} ^{M^s},
\end{array}\] for all $m_0, m_1,m_2\in M^s$.
\end{defi}
\vspace{.2in}
\begin{rk}
\label{unsym} By Corollary \ref{unrs} the bimodule $M^s$ is, up to
isomorphism, the only symmetric
Hilbert \cstar-bimodule that is isomorphic to $M$ as a left Hilbert module.
\end{rk}
\begin{rk}
\label{spm} Let $M$ be a symmetric Hilbert \cstar-bimodule over a commutative \mbox{$C^*$}-algebra
$A$ such that $K(_{A}\!M)$ is unital. By Remark \ref{unsym} and Proposition
\ref{lmod}, a symmetric Hilbert \cstar-bimodule over $A$ is isomorphic to $M$ if and only if it
is isomorphic to $M$ as a left module.
\end{rk}
\begin{exam}
\label{proj} Let $A=C(X)$ be a commutative unital \mbox{$C^*$}-algebra, and let
$M$ be a Hilbert \cstar-bimodule over $A$ that is, as a left Hilbert \mbox{$C^*$}-module, isomorphic to
$A^np$, for some $p\in\mbox{Proj}(M_n(A))$. This implies that $pM_n(A)p\cong
K(_A\!M)$ is isomorphic to a \mbox{$C^*$}-subalgebra of $A$ and is, in particular,
commutative. By viewing $M_n(A)$ as $C(X,M_n(\mbox{$I\!\!\!C$}))$ one gets that
$p(x)M_n(\mbox{$I\!\!\!C$})p(x)$ is a commutative \mbox{$C^*$}-algebra, hence rank $p(x)\leq 1$
for all $x\in X$.
Conversely, let $A=C(X)$ be as above, and let $p:X\longrightarrow
\mbox{Proj}(M_n(\mbox{$I\!\!\!C$}))$ be a continuous map, such that rank $p(x)\leq 1$ for all
$x\in X$. Then $A^np$ is a Hilbert \cstar-bimodule over $A$ for its usual left structure, the
right action of $A$ by pointwise multiplication, and right inner product given
by:
\[ \langle m,r\rangle_{{\scriptscriptstyle{R}}} = \tau(m^*r),\] for $m,r\in A^np$, and
where $\tau$ is the usual $A$-valued trace on $M_n(A)$ (that is,
$\tau[(a_{ij})]=\sum a_{ii}$).
To show the compatibility of the inner products, notice that for any
\mbox{$T\in M_n(A)$}, and $x\in X$ we have:
\[ (pTp)(x)=p(x)T(x)p(x)=[\mbox{trace}(p(x)T(x)p(x))]p(x),\] which implies
that $pTp=\tau(pTp)p.$ Then, for $m,r,s \in M$:
\[\langle m,r\rangle_{{\scriptscriptstyle{L}}} s=mpr^*sp=m\tau(pr^*sp)p=m\tau(r^*s)=m\cdot \langle
r,s \rangle_{{\scriptscriptstyle{R}}}.\]
Besides, $A^np$ is symmetric:
\[\langle m,r\rangle_{{\scriptscriptstyle{R}}}=\tau(m^*r)=\sum_{i=1}^{n}m_i^*r_i=\langle
r,m\rangle_{{\scriptscriptstyle{L}}},\] for $m=(m_1,m_2,...,m_n)$, $r=(r_1,r_2,...r_n)\in M$.
Therefore, by Remark \ref{spm}, if $p,q\in \mbox{Proj}(M_n(A))$, the Hilbert \cstar-bimodules $A^np$
and $A^nq$ described above are isomorphic if and only if $p$ and $q$ are
Murray-von Neumann equivalent. Notice that the identity of $K(_{A}\!A^np)$ is
$\tau(p)$, that is, the characteristic function of the set $\{x\in X:
\mbox{rank }p(x)=1\}$. Therefore $A^np$ is full as a right module if and only
if rank $p(x)=1$ for all $x\in X$, which happens in particular when $X$ is
connected, and $p\not =0$.
\end{exam}
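A concrete instance of this construction (given here only as an illustration, and not used
in what follows) is obtained by taking $A=C(S^2)$ and the Bott projection
\[ p(x,y,z)=\frac{1}{2}\left(\begin{array}{cc} 1+z & x-iy\\ x+iy & 1-z\end{array}\right)
\in M_2(C(S^2)), \qquad x^2+y^2+z^2=1,\]
which is self-adjoint, idempotent, and of rank one at every point of $S^2$. The
corresponding symmetric Hilbert \cstar-bimodule $A^2p$ is then full, and since the underlying line
bundle is non-trivial, $A^2p$ is not isomorphic to $A$, so $[A^2p]$ gives a non-trivial
element of the classical Picard group.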
\begin{prop}
\label{ppic}
Let $A$ be a commutative \mbox{$C^*$}-algebra. For any Hilbert \cstar-bimodule $M$ over $A$ there is a
partial automorphism $(\langle M,M \rangle_{{\scriptscriptstyle{R}}}, \langle M,M\rangle_{{\scriptscriptstyle{L}}} ,\theta)$ of
$A$ such that the map $i: (M^s)_{\theta}\longrightarrow M$ defined by $i(m)=m$
is an isomorphism of Hilbert \cstar-bimodules.
\end{prop}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } The map $i:M^s\longrightarrow M$ is a left Hilbert \mbox{$C^*$}-modules
isomorphism. The existence of $\theta$, with $I=\langle M,M \rangle_{{\scriptscriptstyle{R}}}$ and
$J=\langle M^s,M^s\rangle_{{\scriptscriptstyle{R}}}=\langle M,M\rangle_{{\scriptscriptstyle{L}}}$, follows from Proposition
\ref{leftiso}.
\hfill {\bf Q.E.D.} \vspace*{.1in}
We now turn to the discussion of the group $\mbox{Pic}(A)$ for a commutative
\mbox{$C^*$}-algebra $A$. For a full Hilbert \cstar-bimodule $M$ over $A$, we denote by $[M]$ its
equivalence class in $\mbox{Pic}(A)$. For a commutative \mbox{$C^*$}-algebra $A$, the
group $\mbox{Gin}(A)$ is trivial, so the map $\alpha\mapsto A_{\alpha}$ is
one-to-one. In what follows we identify, via that map, $\mbox{Aut}(A)$ with a
subgroup of $\mbox{Pic}(A)$.
Symmetric full Hilbert \cstar-bimodules over a commutative \mbox{$C^*$}-algebra $A=C(X)$ are known to
correspond to line bundles over $X$. The subgroup of $\mbox{Pic}(A)$ consisting of
isomorphism classes of symmetric Hilbert \cstar-bimodules is usually called the classical Picard
group of $A$, and will be denoted by \mbox{CPic}($A$). We next specialize the
result above to the case of full bimodules.
\begin{nota} For $\alpha\in \mbox{Aut}(A)$, and $M$ a Hilbert \cstar-bimodule over $A$, we denote by
$\alpha(M)$ the Hilbert \cstar-bimodule $\alpha(M)=A_{\alpha}\otimes M \otimes A_{\alpha^{-1}}$.
\end{nota}
\begin{rk}
\label{alfam} The map $a\otimes m\otimes b \mapsto amb$ identifies
$A_{\alpha}\otimes M\otimes A_{\alpha^{-1}}$ with $M$ equipped with the
actions:
\[a\cdot m =\alpha^{-1}(a)m \mbox{,\hspace{.2in}} m\cdot a =m\alpha^{-1}(a),\] and inner
products
\[\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}}=\alpha(\langle m_0,m_1 \rangle_{{\scriptscriptstyle{L}}}^M),\] and
\[\langle m_0,m_1\rangle_{{\scriptscriptstyle{R}}}=\alpha(\langle m_0,m_1 \rangle_{{\scriptscriptstyle{R}}}^M),\] for $a\in
A$, and $m, m_0,m_1\in M.$
\end{rk}
\begin{thm}
\label{sdpr} Let $A$ be a commutative \mbox{$C^*$}-algebra. Then \mbox{CPic}$(A)$ is a
normal subgroup of $\mbox{Pic}(A)$ and
\[ \mbox{Pic}(A)=\mbox{CPic}(A) \cp \mbox{Aut}(A),\] where the action of $\mbox{Aut}(A)$ is given by
conjugation, that is $\alpha\cdot M=\alpha(M)$.
\end{thm}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } Given $[M]\in \mbox{Pic}(A)$ write, as in Proposition \ref{ppic}, $M\cong
M^s_{\theta}$, $\theta$ being an isomorphism from $\langle M,M \rangle_{{\scriptscriptstyle{R}}} =A$
onto $\langle M,M \rangle_{{\scriptscriptstyle{L}}} =A$.
Therefore $M\cong M^s\otimes A_{\theta}$, where $[M^s]\in \mbox{CPic}(A)$ and
$\theta \in \mbox{Aut}(A)$. If $[S]\in \mbox{CPic}(A)$ and $\alpha\in \mbox{Aut}(A)$ are such
that $M\cong S\otimes A_{\alpha}$, then $S$ and $M^s$ are symmetric bimodules,
and they are both isomorphic to $M$ as left Hilbert \mbox{$C^*$}-modules. This
implies, by Remark \ref{unsym}, that they are isomorphic. Thus we have:
\[M^s\otimes A_{\theta}\cong M^s\otimes A_{\alpha} \Rightarrow
A_{\theta}\cong \widetilde{M^s}\otimes M^s \otimes A_{\theta}\cong
\widetilde{M^s}\otimes M^s \otimes A_{\alpha}\cong A_{\alpha},\] which implies
(\cite[3.1]{bgr}) that $\theta\alpha^{-1}\in \mbox{Gin}(A)=\{id\}$, so
$\alpha=\theta$, and the decomposition above is unique.
It only remains to show that $\mbox{CPic}(A)$ is normal in $\mbox{Pic}(A)$, and it
suffices to prove that $[A_{\alpha}\otimes S\otimes A_{\alpha^{-1}}]\in
\mbox{CPic}(A)$ for all $[S]\in \mbox{CPic}(A)$, and $\alpha\in \mbox{Aut}(A)$, which follows
{}from Remark \ref{alfam}.
\hfill {\bf Q.E.D.} \vspace*{.1in}
\begin{nota} If $\alpha\in\mbox{Aut}(A)$, then for any positive integers $k,l$, we
still denote by $\alpha$ the automorphism of $M_{k\times l}(A)$ defined by
$\alpha[(a_{i j})]=(\alpha(a_{ij}))$.
\end{nota}
\begin{lemma}
\label{alfap}
Let $A$ be a commutative unital \mbox{$C^*$}-algebra, and $ p\in \mbox{Proj}(M_n(A))$ be
such that $A^np$ is a symmetric Hilbert \cstar-bimodule over $A$, for the structure described in
Example \ref{proj}. If $\alpha\in\mbox{Aut}(A)$, then $\alpha(A^np)\cong
A^n\alpha(p).$
\end{lemma}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } Set $ J:\alpha(A^np)\longrightarrow A^n\alpha(p)\mbox{,\hspace{.2in}} J(m\otimes
x\otimes r)=m\alpha(xr)$, for $m\in A_{\alpha}$, $r\in A_{\alpha^{-1}}$, and
$x\in A^np$. Notice that
\[m\alpha(xr)=m\alpha(xpr)=m\alpha(xr)\alpha(p)\in A^n\alpha(p).\] Besides,
if $a\in A$
\[\begin{array}[t]{ll} J(m\cdot a\otimes x\otimes r)&=J(m\alpha(a)\otimes
x\otimes r)\\ &=m\alpha(axr)\\ &=J(m\otimes a\cdot x\otimes r),
\end{array}\] and
\[\begin{array}[t]{ll} J(m\otimes x\cdot a \otimes r)&=m\alpha(xar)\\
&=J(m\otimes x\otimes a\cdot r),
\end{array}\] so the definition above makes sense. We now show that $J$ is a
Hilbert \cstar-bimodule isomorphism. For $m\in A_{\alpha}$, $r\in A_{\alpha^{-1}}$, $x\in A^np$,
and $a\in A$, we have:
\[\begin{array}[t]{ll} J(a\cdot(m\otimes x\otimes r))&=J(am\otimes x\otimes
r)\\ &=am\alpha(xr)\\ &=a\cdot J(m\otimes x\otimes r),
\end{array}\] and
\[\begin{array}[t]{ll} J(m\otimes x\otimes r\cdot
a)&=m\alpha(x(r\alpha^{-1}(a))\\ &=m\alpha(xr)a\\ &=J((m\otimes x\otimes
r)\cdot a)
\end{array}\] Finally,
\[\begin{array}[t]{ll}
\langle J(m\otimes x\otimes r),J(m'\otimes x'\otimes r')\rangle_{{\scriptscriptstyle{L}}}&=\langle
m\alpha(xr),m'\alpha(x'r')\rangle_{{\scriptscriptstyle{L}}}\\ &=\langle m\cdot
[(xr)(x'r')^*],m'\rangle_{{\scriptscriptstyle{L}}}\\ &=\langle m\cdot \langle x\cdot\langle
r,r'\rangle_{{\scriptscriptstyle{L}}}^A,x'\rangle_{{\scriptscriptstyle{L}}} ^{A^np},m'\rangle_{{\scriptscriptstyle{L}}}\\ &=\langle m\cdot \langle
x\otimes r,x'\otimes r'\rangle_{{\scriptscriptstyle{L}}}^{A^np\otimes A_{\alpha^{-1}}},m'\rangle_{{\scriptscriptstyle{L}}}\\
&=\langle m\otimes x\otimes r,m'\otimes x'\otimes r'\rangle_{{\scriptscriptstyle{L}}},
\end{array}\] which shows, by Corollary \ref{unrs}, that $J$ is a Hilbert \cstar-bimodule
isomorphism.
\hfill {\bf Q.E.D.} \vspace*{.1in}
\begin{prop}
\label{alfa} Let $A$ be a commutative unital \mbox{$C^*$}-algebra and $M$ a Hilbert \cstar-bimodule
over $A$. If $\alpha\in \mbox{Aut}(A)$ is homotopic to the identity, then
\[A_{\alpha}\otimes M\cong M \otimes A_{\gamma^{-1}\alpha\gamma},\] where
$\gamma\in\mbox{Aut}(A)$ is such that $M\cong (M^s)_{\gamma}$.
\end{prop}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } We have that $K(_A\!M)$ is unital so, in view of Proposition
\ref{lmod} we can assume that $M^s=A^np$ with the Hilbert \mbox{$C^*$}-bimodule
structure described in Example \ref{proj}, for some positive integer $n$, and
$p\in \mbox{Proj}(M_n(A))$. Since $p$ and $\alpha(p)$ are homotopic, they are
Murray-von Neumann equivalent (\cite[4]{bl}). Then, by Lemma \ref{alfap} and
Example \ref{proj}, we have
\[A_{\alpha}\otimes M\cong A_{\alpha}\otimes M^s\otimes A_{\gamma}\cong
M^s\otimes A_{\alpha\gamma}\cong M\otimes A_{\gamma^{-1}\alpha\gamma}.\]
\hfill {\bf Q.E.D.} \vspace*{.1in}
We turn now to the discussion of crossed products by Hilbert \mbox{$C^*$}-bimodules,
as defined in \cite{aee}. For a Hilbert \cstar-bimodule $M$ over a \mbox{$C^*$}-algebra $A$, we denote
by $A\cp_{M} \mbox{$Z\!\!\!Z$}$ the crossed product \mbox{$C^*$}-algebra. We next establish some
general results that will be used later.
\begin{nota} In what follows, for $A-A$ Hilbert \mbox{$C^*$}-bimodules $M$ and $N$
we write $M\stackrel{cp}{\cong} N$ to denote $A\cp_{M} \mbox{$Z\!\!\!Z$} \cong A\cp_{N} \mbox{$Z\!\!\!Z$} $.
\end{nota}
\begin{prop}
\label{cpiso} Let $A$ be a \mbox{$C^*$}-algebra, $M$ an $A-A$ Hilbert
\mbox{$C^*$}-bimodule and $\alpha\in\mbox{Aut} (A)$. Then
i) $M\stackrel{cp}{\cong} \tilde{M}.$
ii) $M\stackrel{cp}{\cong} \alpha(M)$.
\end{prop}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } Let $i_A$ and $i_M$ denote the standard embeddings of $A$ and $M$ in
$A\cp_{M} \mbox{$Z\!\!\!Z$}$, respectively.
{\em{i)}} Set
\[i_{\tilde{M}}:\tilde{M}\longrightarrow A\cp_{M} \mbox{$Z\!\!\!Z$} \mbox{,\hspace{.2in}}
i_{\tilde{M}}(\tilde{m})=i_M(m)^*.\]
Then $(i_A,i_{\tilde{M}})$ is covariant for $(A,\tilde{M})$:
\[i_{\tilde{M}}(a\cdot \tilde{m})=i_{\tilde
{M}}(\widetilde{ma^*})=[i_M(ma^*)]^*=i_A(a)i_M(m)^*=i_A(a)i_{\tilde{M}}(\tilde{m}),\]
\[i_{\tilde{M}}(\tilde{m_0})i_{\tilde{M}}(\tilde{m_1})^*=i_M(m_0)^*i_M(m_1)=i_A(\langle
m_0,m_1\rangle_{{\scriptscriptstyle{R}}}^M)=i_A(\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}}^{\tilde{M}}),\] for $a\in A$
and $m,m_0,m_1\in M$. Analogous computations prove covariance on the right.
By the universal property of the crossed products there is a homomorphism $J$ from
$A\cp_{{\tilde{M}}} \mbox{$Z\!\!\!Z$}$ onto $A\cp_{M} \mbox{$Z\!\!\!Z$}$. Since $\tilde{\tilde{M}}=M$, by
reversing the construction above one gets the inverse of $J$.
{\em{ii)}} Set
\[j_A: A\longrightarrow A\cp_{M} \mbox{$Z\!\!\!Z$} \mbox{,\hspace{.2in}} j_{\alpha(M)}: M\longrightarrow
A\cp_{M} \mbox{$Z\!\!\!Z$} ,\] defined by $j_A=i_A{\scriptstyle \circ}\alpha^{-1}$ ,
$j_{\alpha(M)}(m)=i_M(m)$, where the sets $M$ and $\alpha(M)$ are identified
as in Remark \ref{alfam}. Then $(j_A,j_{\alpha(M)})$ is covariant for $(A,
\alpha(M))$:
\[j_{\alpha(M)}(a\cdot
m)=j_{\alpha(M)}(\alpha^{-1}(a)m)=i_A(\alpha^{-1}(a))i_{M}(m)=j_A(a)i_{\alpha(M)}(m),\]
\[j_{\alpha(M)}(m_0)j_{\alpha(M)}(m_1)^*=i_M(m_0)i_M(m_1)^*=i_A(\langle
m_0,m_1\rangle_{{\scriptscriptstyle{L}}} ^M)=\]
\[=j_A(\alpha\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}}^M)=j_A(\langle m_0,m_1\rangle_{{\scriptscriptstyle{L}}}
^{\alpha(M)}),\] for $a\in A$, $m, m_0,m_1 \in M$, and analogously on the
right. Therefore there is a homomorphism
\[J:A\cp_{\alpha(M)} \mbox{$Z\!\!\!Z$}\longrightarrow A\cp_{M} \mbox{$Z\!\!\!Z$} ,\] whose inverse is
obtained by applying the construction above to $\alpha^{-1}$.
\hfill {\bf Q.E.D.} \vspace*{.1in}
\section{An application: isomorphism classes for quantum Heisenberg
manifolds.}
For $\mu,\nu\in\mbox{$I\!\!R$}$ and a positive integer $c$, the quantum Heisenberg
manifold $D_{\mu\nu}^{c}$ (\cite{rfhm}) is isomorphic (\cite[Ex.3.3]{aee}) to the crossed
product $C({\bf{T}} ^2)\cp_{(X^c_{\nu})_{\alpha_{\mu\nu}}} \mbox{$Z\!\!\!Z$},$ where $X^c_{\nu}$ is the
vector space of continuous functions on $\mbox{$I\!\!R$}\times {\bf{T}}$ satisfying
$f(x+1,y)=e(-c(y-\nu))f(x,y)$. The left and right actions of $\mbox{$C({\bf{T}}^2)$}$ are defined
by pointwise multiplication, the inner products by $ \langle
f,g\rangle_{{\scriptscriptstyle{L}}}=f\overline{g}$, and $\langle f,g\rangle_{{\scriptscriptstyle{R}}}=\overline{f}g$, and
$\alpha_{\mu\nu}\in \mbox{Aut} (C({\bf{T}} ^2))$ is given by $\alpha_{\mu\nu}(x,y)=(x+2\mu, y+2\nu)$, and, for
$t\in\mbox{$I\!\!R$}$, $e(t)=\exp(2\pi it)$.
Our purpose is to find isomorphisms in the family
\mbox{$\{D_{\mu\nu}^{c}: \mu,\nu\in \mbox{$I\!\!R$}, c\in \mbox{$Z\!\!\!Z$}, c > 0\}$}. We concentrate on fixed
values of $c$, because $K_0(D_{\mu\nu}^{c})\cong \mbox{$Z\!\!\!Z$}^3 \oplus \mbox{$Z\!\!\!Z$}_c$ (\cite{fpa}).
Besides, since $\alpha_{\mu\nu} = \alpha_{\mu +m , \nu + n}$ for all $m,n\in \mbox{$Z\!\!\!Z$}$, we view
{}from now on the parameters $\mu$ and $\nu$ as running in ${\bf{T}}$.
Let $M^{c}$ denote the set of continuous functions on $\mbox{$I\!\!R$}\times{\bf{T}}$ satisfying
\newline $f(x+1,y)=e(-cy)f(x,y).$ Then $M^{c}$ is a Hilbert \cstar-bimodule over $\mbox{$C({\bf{T}}^2)$}$, for
pointwise action and inner products given by the same formulas as in $X^c$.
The map $f\mapsto \tilde{f}$, where $\tilde{f}(x,y)= f(x,y+\nu)$, is a Hilbert
\mbox{$C^*$}-bimodule isomorphism between $(X^c_{\nu})_{\alpha_{\mu\nu}}$ and
${\scriptsize{C({\bf{T}}^2)_{\sigma}\otimes M^{c}\otimes C({\bf{T}}^2)_{\rho}}}$, where
$\sigma(x,y)=(x,y+\nu)$, and $\rho(x,y)=(x+2\mu,y+\nu)$. In view of
Proposition \ref{cpiso} we have:
\[D_{\mu\nu}^{c}\cong C({\bf{T}} ^2)\cp_{C({\bf{T}}^2)_{\sigma}\otimes M^c\otimes
C({\bf{T}}^2)_{\rho}} \mbox{$Z\!\!\!Z$} \cong \]
\[\cong C({\bf{T}} ^2)\cp_{(M^c)_{\rho\sigma}}\mbox{$Z\!\!\!Z$} \cong C({\bf{T}} ^2)\cp_{M^c_{\alpha_{\mu\nu}}}\mbox{$Z\!\!\!Z$}.\]
As a left module over $\scriptstyle{\mbox{$C({\bf{T}}^2)$}}$, $M^c$ corresponds to the module
denoted by $X(1,c)$ in \cite[3.7]{rfcan}. It is shown there that $M^c$
represents the element $(1,c)$ of $K_0(\mbox{$C({\bf{T}}^2)$})\cong \mbox{$Z\!\!\!Z$}^2$, where the last
correspondence is given by $[X]\mapsto (a,b)$, $a$ being the dimension of the
vector bundle corresponding to $X$ and $-b$ its twist. It is also proven in
\cite{rfcan} that any line bundle over $\mbox{$C({\bf{T}}^2)$}$ corresponds to the left module
$M^c$, for exactly one value of the integer $c$, and that $M^c\otimes M^d$ and
$M^{c+d}$ are isomorphic as left modules. It follows now, by putting these
results together, that the map $c\mapsto [M^c]$ is a group isomorphism from
$\mbox{$Z\!\!\!Z$}$ to $\mbox{CPic}(\mbox{$C({\bf{T}}^2)$})$.
\begin{lemma}
\label{cpct}
\[\mbox{Pic}(C({\bf{T}}^2))\cong \mbox{$Z\!\!\!Z$}\cp_{\delta} \mbox{Aut}(C({\bf{T}}^2)),\] where
$\delta_{\alpha}(c)=\mbox{{\em{det}}}\alpha_*\cdot c,$ for
$\alpha\in\mbox{Aut}(C({\bf{T}}^2))$, and $c\in \mbox{$Z\!\!\!Z$}$; $\alpha_*$ being the usual
automorphism of $K_0(C({\bf{T}}^2))\cong \mbox{$Z\!\!\!Z$}^2$, viewed as an element of
$GL_2(\mbox{$Z\!\!\!Z$})$.
\end{lemma}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } By Theorem \ref{sdpr} we have:
\[\mbox{Pic}(\mbox{$C({\bf{T}}^2)$})\cong \mbox{CPic}(\mbox{$C({\bf{T}}^2)$})\cp_{\delta}\mbox{Aut}(\mbox{$C({\bf{T}}^2)$}).\] If we identify
$\mbox{CPic}(\mbox{$C({\bf{T}}^2)$})$ with $\mbox{$Z\!\!\!Z$}$ as above, it only remains to show that
$\alpha(M^c)\cong M^{\mbox{det}\alpha_*\cdot c}$. Let us view $\alpha_*\in
GL_2(\mbox{$Z\!\!\!Z$})$ as above. Since $\alpha_*$ preserves the dimension of a bundle,
and takes $\mbox{$C({\bf{T}}^2)$}$ (that is, the element $(1,0)\in \mbox{$Z\!\!\!Z$}^2$) to itself, we have
\[\alpha_*=\left( \begin{array}{cc}
1 & 0 \\
0 & \mbox{det} \alpha_*
\end{array}
\right)\]
Now,
\[\alpha_*(M^c)=\alpha_*(1,c)=(1,\mbox{det}\alpha_*\cdot
c)=M^{\mbox{det}\alpha_*\cdot c}.\] Since there is cancellation in the
positive semigroup of finitely generated projective modules over $\mbox{$C({\bf{T}}^2)$}$
(\cite{rfcan}), the result above implies that $\alpha_*(M^c)$ and
$M^{\mbox{det}\alpha_*\cdot c}$ are isomorphic as left modules. Therefore, by
Remark \ref{spm}, they are isomorphic as Hilbert \cstar-bimodules .
\hfill {\bf Q.E.D.} \vspace*{.1in}
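Purely as an illustration of the semidirect product law in Lemma \ref{cpct}
(and only for the automorphisms induced by $GL_2(\mbox{$Z\!\!\!Z$})$ matrices), the
following Python sketch implements the multiplication
$(c,\alpha)(c',\alpha')=(c+\delta_{\alpha}(c'),\alpha\alpha')$ with
$\delta_{\alpha}(c)=\mbox{det}\,\alpha_*\cdot c$ and checks associativity on a few
sample elements; the matrices and integers below are arbitrary choices of this
sketch:
\begin{verbatim}
import numpy as np
from itertools import product

def mult(x, y):
    # (c, A)(d, B) = (c + det(A) d, A B)
    c, A = x
    d, B = y
    return (c + int(round(np.linalg.det(A))) * d, A @ B)

A = np.array([[0, 1], [1, 0]])   # det = -1
B = np.array([[1, 1], [0, 1]])   # det = +1
elems = [(c, g) for c, g in product([-2, 0, 3], [A, B, A @ B])]

ok = all(
    mult(mult(x, y), z)[0] == mult(x, mult(y, z))[0]
    and np.array_equal(mult(mult(x, y), z)[1], mult(x, mult(y, z))[1])
    for x, y, z in product(elems, repeat=3)
)
print(ok)    # True
\end{verbatim}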
\vspace{.2in}
\begin{thm}
\label{qhmiso}
If $(\mu,\nu)$ and $(\mu',\nu')$ belong to the same orbit under the usual
action of $GL(2,\mbox{$Z\!\!\!Z$})$ on ${\bf{T}}^2$, then the quantum Heisenberg manifolds $D_{\mu\nu}^{c}$
and $ D^{c}_{\mu'\nu'}$ are isomorphic.
\end{thm}
\vspace{.2in}\hspace{.6cm}{\bf Proof$\,$:\ } If $(\mu,\nu)$ and $(\mu',\nu')$ belong to the same orbit under the
action of $GL(2,\mbox{$Z\!\!\!Z$})$, then $\alpha_{\mu'\nu'}=\sigma\alpha_{\mu\nu}\sigma^{-1}$, for some
$\sigma\in GL(2,\mbox{$Z\!\!\!Z$})$. Therefore, by Lemma \ref{cpct} and Proposition
\ref{cpiso}:
\[M^{c}_{\alpha_{\mu'\nu'}}\cong M^{c}_{\sigma\alpha_{\mu\nu}\sigma^{-1}}\cong M^c\otimes
\mbox{$C({\bf{T}}^2)$}_{\sigma\alpha_{\mu\nu}\sigma^{-1}}\cong \]
\[\cong\mbox{$C({\bf{T}}^2)$}_{\sigma}\otimes M^{\mbox{det}\sigma_*^{-1}\cdot c}\otimes
\mbox{$C({\bf{T}}^2)$}_{\alpha_{\mu\nu}\sigma^{-1}}\cong\sigma(M_{\alpha_{\mu\nu}}^{\mbox{det}\sigma_*\cdot
c})\stackrel{cp}{\cong} M^{\mbox{det}\sigma_*\cdot c}_{\alpha_{\mu\nu}}.\] In case $\mbox{det}
\sigma_*=-1$ we have
\[M^{\mbox{det}\sigma_*\cdot c}_{\alpha_{\mu\nu}}\cong M^{-c}_{\alpha_{\mu\nu}}\stackrel{cp}{\cong}
\widetilde{M^{-c}_{\alpha_{\mu\nu}}}\cong \mbox{$C({\bf{T}}^2)$}_{\alpha^{-1}_{\mu\nu}}\otimes M^c\cong
(M^c)_{\alpha^{-1}_{\mu\nu}},\] since $\mbox{det}\alpha_*=1$, because $\alpha_{\mu\nu}$ is
homotopic to the identity.
On the other hand, it was shown in \cite[0.3]{gpots} that
$M^c_{\alpha^{-1}_{\mu\nu}}\stackrel{cp}{\cong} M^c_{\alpha_{\mu\nu}}$.
Thus, in any case, $M^c_{\alpha_{\mu'\nu'}}\stackrel{cp}{\cong}
M^c_{\alpha_{\mu\nu}}$. Therefore
\[D^c_{\mu'\nu'}\cong \mbox{$C({\bf{T}}^2)$}\cp_{M^c_{\alpha_{\mu'\nu'}}}\mbox{$Z\!\!\!Z$}\cong \mbox{$C({\bf{T}}^2)$}\cp
_{M^c_{\alpha_{\mu\nu}}}\mbox{$Z\!\!\!Z$}\cong D^c_{\mu\nu}.\]
\hfill {\bf Q.E.D.} \vspace*{.1in}
\section{Introduction}
At the mean-field level the linear $\sigma-\omega$ (or Walecka)
model~\cite{walecka} satisfactorily explains many properties of
nuclear matter and finite nuclei with two free parameters. The
resulting nuclear-matter compression modulus at saturation density
exceeds, however, the experimental bound~\cite{brown-osnes,co}. A way
out of this difficulty is to introduce non-linear scalar
self-couplings~\cite{boguta,waldhauser}. The resulting non-linear model
has been shown to reproduce ground-state nuclear properties well
(though with some instabilities at high densities and for
low values of the compressibility, $\kappa\lesssim 200$
MeV~\cite{waldhauser}). This model is renormalizable and has four
free parameters to fit.
One alternative approach which renders a satisfactory compression
modulus without increasing the number of free parameters is that
advanced by Zimanyi and Moszkowski~\cite{zm} and by Heide and
Rudaz~\cite{heide}. This model employs the same degrees of freedom
and the same number of independent couplings that are present in the
Walecka model. The difference lies in the coupling among the fields.
In the work of Zimanyi and Moszkowski (ZM), for instance, the authors
use a non-renormalizable derivative coupling between the scalar-meson
and the baryon fields which they later adjust to reproduce the
experimental conditions at saturation. Their results for the
compression modulus, $\kappa=224$ MeV, and effective mass, $M^*=797$
MeV, compare very well with Skyrme-type calculations~\cite{zimanyi}.
Since it is desirable to have models that, at the mean-field level,
provide reasonable nuclear matter results, the original ZM model and
its two variations (to which we shall refer hereafter as ZM2 and ZM3)
described in the appendix of the original paper of Zimanyi and
Moszkowski~\cite{zm}, have deserved some recent attention.
Particularly, Koepf, Sharma and Ring~\cite{koepf} (KSR) have performed
nuclear-matter and finite-nuclei calculations and compared the ZM
model against i) the linear $\sigma -\omega$ model and, ii) models
with non-linear scalar couplings of phenomenological origin. One of
their findings was that the ZM model, when used to calculate the
energy spectrum of $^{208}$Pb, gives a spin-orbit splitting that
compares poorly with the data. It is well known that the splitting of
the energy levels due to the spin-orbit interaction is very sensitive
to the difference $\lambda{=}V-S$ between the vector, $V$, and the
scalar, $S$, potentials. For the ZM model one gets $\lambda{=}223$
MeV, which is short of the 785 MeV of the successful (in this regard)
Walecka model. A simplified summary of KSR's results also indicates
that: 1) The linear $\sigma -\omega$ model predicts a reasonable
spin-orbit splitting, a stiff equation of state and a small effective
nucleon mass, whereas the non-linear model gives a good splitting, a
soft equation of state and a good effective mass; 2) The ZM model predicts a
soft equation of state, a reasonable effective nucleon mass and a poor
spin-orbit splitting for finite nuclei. To this list we may
add~\cite{delfino,delfino2} that the relativistic content of the
various models, as given by the ratio of scalar to baryon
densities, differs as well, the linear and non-linear
$\sigma-\omega$ models being more relativistic than the usual ZM.
At this point it is clear that fixing the spin-orbit problem in the ZM
model would make it similar in results to the non-linear
$\sigma-\omega$ model but with the additional bonus of having to fit
two parameters instead of four. The modified ZM models, ZM2 and ZM3,
aim at this. All three models come about in the following way. In
the standard ZM model the non-linear effective scalar factor
$1/m^*{=}1+g_\sigma \sigma/M$, whose meaning will become clear below,
multiplies the nucleon derivative and the nucleon-vector coupling
terms. When its two free parameters are fitted to the nuclear-matter
baryon density, $\rho_0{=}0.148$ fm$^{-3}$, and energy density per
nucleon at saturation, $E_b{=}-15.75$ MeV, one obtains the results
shown in the first row of Table~\ref{table1}. If we now let $1/m^*$
act upon the nucleon derivative and all terms involving the vector
field we end up with the ZM2 version of the ZM model, whose
predictions are shown in the second row of the table. The
compressibility is slightly lower and the difference between the
vector and scalar potentials has increased to 278 MeV~\cite{delfino2}.
Finally, the ZM3 version of the model is obtained by allowing $1/m^*$
to act just on the nucleon derivative term. With this change one
obtains the results displayed in the third row of the table; the
compressibility diminishes again and $\lambda$ reaches 471
MeV~\cite{delfino2}. It is worth calling attention to the fact
that the particular non-linear couplings used in these models change
the traditional conception whereby to a lower value of $m^*=M^*/M$
should correspond a higher compressibility $\kappa$. Particularly,
the ZM3 field equations couple the non-linear scalar field to the
vector field and this new source is responsible for the $m^*{=}0.72$
and $\kappa=156$ MeV of the table. In the phenomenological
non-linear scalar-coupling model of Feldmeier and
Lindner~\cite{feldmeier} (hyperbolic ansatz), for example, the price
to pay for the reduced effective mass, $m^*=0.71$, is an enhanced
compressibility, $\kappa=410$ MeV.
If we assume that the difference $\lambda$ between the scalar and the
vector potentials is a good indicator of the strength of the
spin-orbit splitting in finite nuclei, it is clear from
Table~\ref{table1} that ZM3 offers the closest approach of the three
to the non-linear $\sigma -\omega$ model. In this situation it is
relevant to study the model predictions for a number of properties of
finite nuclei for which there exist experimental data. In this work
we apply the ZM3 model to the study of the energy spectrum and
ground-state properties of $^{16}$O, $^{40}$Ca, $^{48}$Ca,
$^{90}$Zr and $^{208}$Pb. We calculate energy levels, charge {\em
r.m.s.} radius and nucleon density distributions and compare the
results obtained with the predictions of the standard ZM model and
with the experimental data in order to extract conclusions.
In the next section we introduce the ZM and ZM3 models and give the
necessary detail to understand the origin of the results we obtain.
These are presented in the last section where we also draw some
conclusions.
\section{The Model}
The ZM and ZM3 models of interest for this work can be derived
from the Lagrangean
\begin{eqnarray}
{\cal L} &=& \sum_{a=1}^A \bar{\psi}_a \left\{ \gamma_{\mu}
\left[\left(i\partial^{\mu}-e\:\frac{(1+\tau_3)}{2}
A^{\mu}\right)-g^*_{\omega}
\omega^{\mu}-\frac{g^*_{\rho}}{2}\tau_3 \rho^{\mu}\right]-
(M-g^*_\sigma\:\sigma)\right\}\psi_a
\label{L3}\\
&&+\frac{1}{2}\left(\partial_\mu\sigma\partial^\mu\sigma -
m^2_\sigma\sigma^2\right)
-\frac{1}{4}\left(\partial_\mu\omega_\nu-\partial_\nu\omega_\mu
\right)^2+
\frac{1}{2}m^2_\omega\omega_\mu\omega^\mu \nonumber\\
&&-\frac{1}{4}\left(\partial_\mu\rho_\nu-
\partial_\nu\rho_\mu\right)^2+
\frac{1}{2}m^2_\rho \rho_\mu \rho^\mu -\frac{1}{4}\left(\partial_\mu
A_\nu-
\partial_\nu A_\mu\right)^2\;, \nonumber
\end{eqnarray}
where the effective coupling constants are given in each model
according to
\begin{center}
\begin{tabular}{ccc}\hline\hline
{\rm Model}& $g^*_\sigma$ & $g^*_{\omega,\rho}$ \\ \hline
{\rm ZM} & $m^*g_\sigma$ & $g_{\omega,\rho}$ \\
{\rm ZM3}& $m^*g_\sigma$ & $m^*g_{\omega,\rho}$ \\ \hline\hline
\end{tabular}
\end{center}
where we define
\[
m^*=\left(1+ \frac{g_\sigma \sigma }{M}\right)^{-1}.
\]
In equation~(\ref{L3}) the nucleon field, $\psi$, couples to the
scalar-isoscalar meson field, $\sigma$, and the vector-isoscalar
meson
field, $\omega_\mu$. A third isovector meson field, $\rho_\mu$,
(neutral component) is included to account for the asymmetry between
protons and neutrons. The $\rho_\mu$ and the electromagnetic field
$A_\mu$ both couple to the baryon field.
The Euler-Lagrange equations of motion using the Lagrangean
(\ref{L3}) give the following equations for the fields,
\begin{eqnarray}
\left\{ \gamma_{\mu}
\left[\left(i\partial^{\mu}-e\:\frac{(1+\tau_3)}{2}
A^{\mu}\right)-g^*_{\omega}
\omega^{\mu}-\frac{g^*_{\rho}}{2}\tau_3 \rho^{\mu}\right]-
(M-g^*_\sigma\:\sigma)\right\}\psi_a &=&0 \;,
\end{eqnarray}
\begin{eqnarray}
\partial_\mu \omega^{\mu\nu} + m_\omega^2 \omega^\nu &=&
g^*_\omega \sum_{a=1}^A \bar{\psi}_a \gamma^\nu \psi_a \;, \\
\partial_\mu \rho^{\mu\nu} + m_\rho^2\rho^\nu &=&
\frac{g^*_\rho}{2} \sum_{a=1}^A \bar{\psi}_a
\gamma^\nu\tau_3 \psi_a \;, \\
\partial_\mu A^{\mu\nu} &=&
\frac{e}{2}\sum_{a=1}^A \bar{\psi}_a \gamma^\nu(1+\tau_3) \psi_a \;,
\\
\partial_\mu \partial^\mu \sigma + m_\sigma^2 \sigma &=&
{m^*}^2 g_\sigma\sum_{a=1}^A \bar{\psi}_a\psi_a
-\sum_{a=1}^A \left(\frac{\partial g^*_\omega}{\partial \sigma}
\bar{\psi}_a\gamma_\mu\psi_a\omega^\mu
+\frac{1}{2}\frac{\partial g^*_\rho}{\partial \sigma}
\bar{\psi}_a\gamma_\mu\tau_3\psi_a\rho^\mu \right) \;.
\end{eqnarray}
In the mean-field approximation all baryon currents are replaced by
their ground-state expectation values. In a system with spherical
symmetry the mean value of the spatial components of the vector-meson
fields vanish, resulting in the following mean-field equations,
\begin{eqnarray}
\left\{ \gamma_{\mu}
i\partial^{\mu}-\gamma_0\left[e\:\frac{(1+\tau_3)}{2}A^0
-g^*_{\omega}\omega^0 -\frac{g^*_{\rho}}{2}\tau_3 \rho^0\right]-
(M-g^*_\sigma\:\sigma)\right\}\psi_a &=&0 \;,\label{dirac3}
\end{eqnarray}
\begin{eqnarray}
-\nabla^2 \omega^0 + m_\omega^2
\omega^0 &=& g^*_\omega \sum_{a=1}^A \psi^\dagger_a\psi_a \equiv
g^*_\omega \rho_b = g^*_\omega (\rho_p+\rho_n) \;,\label{omega3} \\
-\nabla^2 \rho^0 + m_\rho^2\rho^0 &=& \frac{g^*_\rho}{2} \sum_{a=1}^A
\psi^\dagger_a \tau_3 \psi_a \equiv g^*_\rho \rho_3 = \frac{g^*_\rho}{2}
(\rho_p-\rho_n) \;,\label{rho3}\\
-\nabla^2 A^0 &=&
\frac{e}{2}\sum_{a=1}^A \psi^\dagger_a(1+\tau_3) \psi_a \equiv e
\rho_p \label{coulomb} \;,\\ -\nabla^2 \sigma + m_\sigma^2 \sigma &=&
{m^*}^2 g_\sigma\sum_{a=1}^A \bar{\psi}_a\psi_a -\sum_{a=1}^A
\left(\frac{\partial g^*_\omega}{\partial \sigma}
\psi^\dagger_a\psi_a\omega^0 +\frac{1}{2}\frac{\partial
g^*_\rho}{\partial \sigma} \psi^\dagger_a\tau_3\psi_a\rho^0 \right)
\;, \label{sigma3}\\
&\equiv& {m^*}^2 g_\sigma \rho_s - \frac{\partial
g^*_\omega}{\partial \sigma} \rho_b \omega^0 - \frac{\partial
g^*_\rho}{\partial \sigma} \rho_3 \rho^0 \;.\nonumber
\end{eqnarray}
Equations~(\ref{dirac3})~to~(\ref{sigma3}) are a set of coupled
non-linear differential equations which may be solved by iteration.
For a given set of initial meson potentials, the Dirac equation
(\ref{dirac3}) is solved. Once the baryon wave functions are
determined the source terms for Eqs.~(\ref{omega3})-(\ref{sigma3})
are evaluated and new meson fields are obtained. The procedure is
iterated until self-consistency is achieved.
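To make the structure of this iteration concrete, the following purely
illustrative Python sketch solves the analogous, much simpler self-consistency
problem in uniform symmetric nuclear matter for the ZM case (where
$g^*_{\omega,\rho}$ do not depend on $\sigma$), so that Eq.~(\ref{sigma3})
reduces to $m^2_\sigma\sigma={m^*}^2g_\sigma\rho_s$ and
$m^*=(1+g_\sigma\sigma/M)^{-1}$ is iterated to convergence. The coupling
$C_s^2\equiv g^2_\sigma M^2/m^2_\sigma$ is given an arbitrary illustrative value
and is {\em not} the fitted value of Table~\ref{table2}:
\begin{verbatim}
import numpy as np

M    = 939.0   # nucleon mass (MeV)
kF   = 256.5   # Fermi momentum (MeV), corresponding to rho_0 ~ 0.148 fm^-3
C_s2 = 200.0   # illustrative g_sigma^2 M^2 / m_sigma^2 (not the Table 2 fit)

def rho_s(mstar):
    # scalar density (MeV^3) in uniform matter for M* = mstar * M
    k = np.linspace(0.0, kF, 2000)
    Ms = mstar * M
    return (2.0 / np.pi**2) * np.trapz(k**2 * Ms / np.sqrt(k**2 + Ms**2), k)

mstar = 1.0
for it in range(200):
    # ZM scalar field equation: m_sigma^2 sigma = m*^2 g_sigma rho_s
    g_sig_sig_over_M = (C_s2 / M**3) * mstar**2 * rho_s(mstar)
    new = 1.0 / (1.0 + g_sig_sig_over_M)   # m* = (1 + g_sigma sigma / M)^-1
    if abs(new - mstar) < 1e-10:
        break
    mstar = 0.5 * (mstar + new)            # damped update for stability
print(round(mstar, 3))   # ~ 0.84 for this illustrative coupling
\end{verbatim}
The finite-nucleus calculation follows the same fixed-point logic, with the
Dirac equation and the radial meson field equations replacing the two algebraic
relations above.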
The coupling constants $g_{\sigma}$ and $g_{\omega}$ are chosen to
reproduce the saturation baryon density in symmetric nuclear matter,
$\rho_0{=}0.148$ fm$^{-3}$, and the energy density per nucleon at
saturation, $E/A{=}-15.75$ MeV. The third coupling constant,
$g_\rho$, is obtained by fitting the bulk symmetry energy
in nuclear matter given by the expression
\begin{equation}
a_4=\frac{g_\rho^2}{8m_\rho^2}\left(\frac{g^*_\rho}{g_\rho}\right)\rho_0 +
\frac{k_F^2}{6E^*_F}
\end{equation}
to $a_4{=}32.5$ MeV at $\rho{=}\rho_0$. We have used
\[ \rho_0=\frac{2}{3\pi^2}k_F^3 \]
and
\[ E_F^*=\sqrt{k_F^2+{m^*}^2M^2} \;.\]
The coupling constants and masses used in the calculations are given
in Table~{\ref{table2}}.
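As an illustration of this last fitting step (not a substitute for the actual
fit), the following Python sketch solves the expression above for $g_\rho$ in
the ZM3 case, where $g^*_\rho/g_\rho=m^*$; the inputs $M=939$ MeV and
$m_\rho=763$ MeV are assumptions of this sketch and need not coincide with
Table~\ref{table2}, while $m^*=0.72$ is the ZM3 value quoted in the
Introduction:
\begin{verbatim}
import numpy as np

hc    = 197.33                 # hbar c (MeV fm)
M     = 939.0                  # nucleon mass (MeV), assumed
mstar = 0.72                   # ZM3 effective mass ratio at saturation
m_rho = 763.0                  # rho-meson mass (MeV), assumed
a4    = 32.5                   # bulk symmetry energy (MeV)

rho0 = 0.148 * hc**3                          # saturation density in MeV^3
kF   = (1.5 * np.pi**2 * rho0) ** (1.0/3.0)   # from rho_0 = 2 k_F^3 / (3 pi^2)
EFs  = np.sqrt(kF**2 + (mstar * M)**2)        # E_F^*
kin  = kF**2 / (6.0 * EFs)                    # Fermi-motion part of a_4

# remaining part of a_4 fixes g_rho (g*_rho / g_rho = m* in ZM3)
g_rho2 = (a4 - kin) * 8.0 * m_rho**2 / (mstar * rho0)
print(round(kF, 1), round(kin, 1), round(np.sqrt(g_rho2), 2))
\end{verbatim}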
\section{Results}
Results of the present calculation for the single particle energy
levels in $^{16}$O, $^{40}$Ca, $^{48}$Ca, $^{90}$Zr, and $^{208}$Pb
are shown in Tables~\ref{table3} to~\ref{table5}. Through the tables
a good overall agreement is obtained, i) between the two models and,
ii) with the experimental data. Nonetheless, the ZM3 model fares
better than ZM in reproducing the observed spin-orbit splittings.
This is due to the difference in the way that the scalar and vector
mesonic fields couple to each other ---via non-linear effective
coupling constants--- in each model. This difference is better
illustrated in figures~\ref{fig1} to~\ref{fig4} where the ``central
potential", $V_0$, and the ``reduced spin-orbit term", $V_{ls}$, are
depicted as functions of the radial distance, $r$, for oxygen and
lead. We have defined the central potential to be
\begin{equation}
V_0= V+S \label{V0}
\end{equation}
with
\[ V = g_\omega^* \omega_0 \] and
\[ S = -g_\sigma^* \sigma. \] The reduced spin-orbit term, on the
other hand, is given by the expression,
\begin{equation}
V_{ls} =\frac{V' - S'}{2m_sr} \;, \label{vls}
\end{equation}
with $m_s = M - \frac{1}{2}(V - S)$ and with the primes denoting partial
derivatives with respect to $r$. Though the choice of mass in the
denominator of Eq.~(\ref{vls}) could be open to some debate, the
figures are intended to compare strengths in both models and not to
draw absolute conclusions.
Figures~\ref{fig1} and~\ref{fig3} illustrate the origin of the
behaviour of both models. ZM is shallower in the center and raises
more steeply on the surface than ZM3; this holds true both in oxygen
and in lead. The resulting spin-orbit potential (figures~\ref{fig2}
and~\ref{fig4}) is, thus, deeper at the surface in ZM3 than in ZM.
To estimate the magnitude of this difference we calculated the
ratio expected for the spin-orbit splittings in ZM3 and ZM, using the
parameters from the fit to nuclear matter of Table~\ref{table1}. The
result \[ \frac{V^{ZM3}_{ls}}{V^{ZM}_{ls}} \propto \frac{(V-S)^{ZM3}
m^*_{ZM}} {(V-S)^{ZM} m^*_{ZM3}} \approx 2.5 \] agrees nicely with the
ratios extracted from the levels in tables~\ref{table3}-\ref{table5}.
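This arithmetic can be reproduced directly; in the short Python sketch below,
$\lambda=V-S$ is taken from Table~\ref{table1} (223 MeV for ZM, 471 MeV for
ZM3), $m^*_{ZM3}=0.72$, and $m^*_{ZM}$ is obtained from $M^*=797$ MeV assuming
$M=939$ MeV (the nucleon mass value is an input of this sketch):
\begin{verbatim}
lam_ZM, lam_ZM3 = 223.0, 471.0        # V - S (MeV)
mstar_ZM = 797.0 / 939.0              # from M* = 797 MeV, M = 939 MeV (assumed)
mstar_ZM3 = 0.72
print(round((lam_ZM3 * mstar_ZM) / (lam_ZM * mstar_ZM3), 2))   # ~ 2.5
\end{verbatim}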
It is worth noticing that our calculated spectra for the ZM model,
differ from those presented by KSR in Ref.~\cite{koepf} for the same
model. We have traced the origin of this discrepancy to the symmetry
energy $a_4$ used to fit the $\rho$-meson coupling constant.
Figure~\ref{fig5} illustrates the effect that a decreasing $a_4$ has
on the energy of a few selected single particle orbits in $^{208}$Pb.
The shift in energy due to the presence of the $\rho$-meson may be as
large as 5 MeV depending on the choice of $a_4$ (the arrow indicates
the value used in the calculations shown here, $a_4=32.5$ MeV). The
spectrum of $^{208}$Pb calculated with a vanishing $g_{\rho}$ agrees
with the results of KSR. Notice also that the splittings amongst
spin-orbit partners are insensitive to $g_{\rho}$ since changes in
$a_4$ shift the levels globally.
At this point one of our conclusions is that, as demonstrated
in~\cite{koepf}, the ZM model does not do well in reproducing the
energy splittings due to the spin-orbit interaction. However, the
overall spectrum turns out to be satisfactory. In this sense the
isovector meson plays in ZM the same important role, for asymmetric
nuclei, that has been shown to play in the non-linear Walecka
model~\cite{sharma}.
Table~\ref{table6} shows the results obtained for some of the static
ground-state properties of the same nuclei as above. One feature to
point out is that, systematically, ZM3 predicts a {\em r.m.s.} value
for the charge radius that is larger than the one calculated with the
ZM model. This is due to the slightly smaller binding of the protons
in this model that produces a longer tail in the charge distribution,
as shown in Figs.~\ref{fig6} and~\ref{fig7}. The baryon density
extends also farther in ZM3 as can be appreciated from
Figs.~\ref{fig8} and~\ref{fig9}. The baryon distribution at the
surface indicates that edge effects are more important in ZM3 than in
the ZM model, something that agrees with our discussion of the
spin-orbit splittings in the previous paragraphs.
Summarizing, we have applied the ZM3 model to the study of the energy
spectrum and ground-state properties of $^{16}$O, $^{40}$Ca,
$^{48}$Ca, $^{90}$Zr and $^{208}$Pb. The interest in this model, and
the standard ZM, resides in their ability to describe nuclear matter
at saturation with two free parameters. For finite nuclei, we have
shown that ZM3 gives a reasonable description of nuclear spectra and
improves upon ZM regarding the energy splitting of levels due to the
spin-orbit interaction. The results of the calculations described
here were compared with those obtained using the standard ZM model and
with the experimental data. It is our conclusion that models of this
kind, with derivative couplings involving the scalar and the vector
fields, offer a valid framework to pursue calculations where the
requirement of simultaneous reasonable values for the nuclear
compressibility and the spin-orbit splitting cannot be side-stepped.
\vspace{1cm}
{\bf Acknowledgements}
One of the authors (MC) would like to express his thanks to the
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico
(CNPq) and to the Departamento de Fisica Nuclear e Altas Energias
(DNE-CBPF) Brazil, for their financial support. AG is a fellow of the
CONICET, Argentina.
\newpage
\section{Introduction}
The magnetization dynamics of single-domain nanometric particles at low temperature is presently a subject of intense interest, in the hope of finding evidence for quantum tunneling of the magnetic moment through the anisotropy barrier associated with the particle \cite{chud}. Apart from some pioneering attempts at a study of a unique particle \cite{werns}, most efforts are concentrated on macroscopic samples, in which an accurate knowledge of the effective distribution of barriers is difficult, hence hindering a clear interpretation of the results \cite{barba1},\cite{T.Lnt}. Moreover, except in a few cases \cite{paulsen}, the low-temperature range of the published data is often limited to pumped-He cryogenic techniques ($\sim2K$), which still makes an unambiguous characterization of quantum effects more difficult.
In this paper, we present magnetic measurements which have been performed using a dilution refrigerator \cite{PP}, that allow data to be taken down to $\sim 50mK$. We have studied a sample of $\gamma-Fe_2O_3$ ~particles, dispersed in a silica matrix, with a typical diameter of $\sim 6 \ nm$. The relaxation dynamics of $\gamma-Fe_2O_3$ ~particles has already been shown to exhibit some anomalies \cite{Zhang}, that appear at the very end of the accessible temperature range (1.8 K). Our present data show that the relaxation rates in our sample do indeed fail to go down to zero when the temperature is lowered to $60 \ mK$.
\section{Sample characterization}
The small particles of $\gamma-Fe_2O_3$ ~(maghemite) are embedded in a silica matrix, obtained
by a polymerization process at room temperature. They are diluted to a volume fraction of $4.10^{-4}$, in order to have them as independent as possible. The diameter distribution obtained by transmission electron microscopy is shown in the inset of Fig. 1; it can be fitted to a log-normal shape with peak value $d_0=6.3\ nm$ and standard deviation $\sigma=0.25$.
\begin{figure}[h]
\centerline{\epsfysize=7cm \epsfbox{fig1.eps} }
\caption{Total magnetic moment of the sample, measured in ZFC and FC procedures. The inset shows the size distribution of the particles deduced from transmission electron microscopy.}
\end{figure}
Fig.1 presents magnetic characterization measurements performed with a commercial SQUID magnetometer (Cryogenic Ltd). Here and all throughout the paper, we have plotted the measured magnetic moment in cgs units (corresponding to a total maghemite volume $\sim 4.10^{-5} \ cm^3$). The ``ZFC" curve is measured in the usual way by cooling the sample (down to 10 K) in zero field, applying a field and then raising the temperature;
the field-cooled one (``FC") is obtained while cooling in the field $H$.
The ZFC curve shows a broad maximum around $T^{peak}\simeq 73 \ K$. It represents the progressive deblocking of larger and larger particles as the temperature T is raised. Let us consider that a particle of volume $V$ involves an anisotropy barrier $U=K_a.V$, where $K_a$ is a density of anisotropy energy. If the time spent at a given T is $t$ ($\sim100\, s$), then for thermally activated dynamics most particles which are being deblocked at T have a typical volume V obeying an Arrhenius law
\begin{equation} \label{U=}
K_a.V=k_B.T.\ln{t\over \tau_0} \qquad ,
\end{equation}
where $\tau_0\sim 10^{-10}s$ is a microscopic attempt time \cite{dormann}. By assuming in addition that the saturated moment of a particle is proportional to its volume, and that the moments follow a Langevin function when they are deblocked (super\-para\-magnetism), we have calculated the ZFC curve corresponding to the measured size distribution. The peak is obtained at the measured temperature for $K_a=7.5\, 10^5\,erg/cm^3$. This value is in agreement \cite{romain} with high-field measurements where the integral of the work needed for saturating the sample has been evaluated and also with M\"ossbauer spectroscopy results. It is one order of magnitude larger than the bulk maghemite value, as commonly observed in small particles where shape and surface contributions have increased the magnetic anisotropy \cite{dormann}.
Note that, due to the distribution width and to the 1/T variation of super\-para\-magnetism, the ZFC-peak is found at a temperature which is three times larger than that corresponding to the peak value $d_0$ of the size distribution ($T_b(d_0)=25\, K$) \cite{romain}.
\section{Magnetic behavior towards very low temperatures}
The setup used for the low-T experiments is a home made combination of an r.f. SQUID magnetometer \cite{ocio} and a dilution refrigerator \cite{PP}. The sample is coupled to the mixing chamber through a thermal impedance which allows a temperature range of 35 mK to 7 K. For relaxation measurements at the lowest temperatures, some spurious heating has been found when the field is varied, due to eddy currents in the thermalization link; we have therefore carefully adjusted the field amplitude, and chosen a ``slow'' cut-off procedure (5 s), in such a way that the results become independent of both these parameters. We also have limited our lower range to 60 mK.
The sample is first cooled in zero field from room temperature to the dilution regime. From that point, the temperature can no longer be easily raised above 7 K. The procedure for the relaxation measurements at $T_0\le 5\, K$ starts with heating the sample to a high enough temperature for deblocking of all particles which may participate in the dynamics at $T_0$, e.g. 7 K. Then the sample is field-cooled from 7 K to $T_0$, the field is decreased to zero and the SQUID signal variation corresponding to the slow relaxation processes is measured. This procedure of not heating up to room temperature makes sense because our sample is highly diluted; in a first approximation the particles can be considered independent of each other. We have checked that our choice of the reinitialization temperature had no influence on the resulting dynamics.
\begin{figure}[h]
\centerline{\epsfysize=7cm \epsfbox{fig2.eps} }
\caption{ Typical relaxation curves at low temperatures, as a function of the decimal logarithm of the time in seconds. The curves have been vertically shifted by arbitrary values. }
\end{figure}
\begin{figure}[h]
\centerline{\epsfysize=7cm \epsfbox{fig3.eps} }
\caption{ Magnetic viscosity as a function of temperature.}
\end{figure}
Figure 2 presents examples of relaxation curves. They are roughly logarithmic in time, apart from some uncertainty in the first seconds, which should be related to the 5 s field cut-off duration. In this paper, we only consider the average logarithmic slope of the curves (``magnetic viscosity"), which we determine between $10^2$ and $10^3\ s$.
Figure 3 shows our set of results. For decreasing temperatures, the measured viscosity first decreases, then flattens out, and surprisingly increases back below 150 mK. We present a simple model for the T-dependence of the viscosity before discussing this result in more detail.
\section{A simple picture of thermal relaxation}
By thinking of the sample relaxation at T as a sum of independent processes, one may write the total relaxing moment $M_T(t)$ as
\begin{equation} \label{M=sum}
M_T(t)=\int_0^{+\infty }\ m(U)\ P(U) \exp-{t\over \tau(U)}\ dU
\end{equation}
where the summation runs over the barrier distribution P(U) associated with the size distribution of the particles. $m(U)$ stands for the ``field-cooled moment" of the particles with anisotropy barrier U, which is the thermal average of the
moments at their blocking temperature; as a first approximation, one may assume $U=K_a.V$ and $m(U)\propto V$, hence $m(U) \propto U$. At any temperature $T$ and after a time $t$ following the field cut-off, one may consider that the only relaxing objects are those for which $\tau(U)=t$. The logarithmic derivative $S$ of the magnetization (magnetic viscosity) can then be easily derived as
\begin{equation} \label{S1}
S\equiv {\partial M_T \over \partial \ln t}\propto T.P(U_c).m(U_c) \quad\hbox{where} \quad U_c=k_B.T.\ln{t\over\tau_0} \quad .
\end{equation}
The magnetic viscosity is commonly expected to be proportional to T \cite{street}, a controversial point
since in our cases of interest the energy barrier distribution $P(U)$ may vary significantly \cite{barba1},\cite{T.Lnt}. Indeed, from Eq. \ref{S1}, one sees that the distribution of interest is $P(U).m(U)$ rather than $P(U)$ itself; with $m(U_c) \propto U_c$, Eq. \ref{S1} then becomes
\begin{equation} \label{S2}
S\propto T^2. \ln({t\over\tau_0}).\ P\Big( U_c=k_B.T.\ln{t\over \tau_0}\Big) \quad .
\end{equation}
We believe that these $t$ and $T^2$-dependences of the viscosity are probably hidden in most experimental results, due to the combination of the distributions $P(U)$ and $m(U)$ which are not accurately known (the $\ln^2 (t/\tau_0)$-variation of the magnetization is very close to $\ln t$, due to the microscopic value of $\tau_0$). However, it seems to us that the first approximation of the viscosity in the case of non-interacting particles with a flat distribution of barriers should be a quadratic rather than a linear function of temperature.
\section{Discussion}
As expected from thermally activated dynamics and a regular distribution of barriers, the 0.5-5 K viscosity is seen to decrease for decreasing temperatures. It shows a slight upwards curvature which is compatible with a $T^2$-dependence and a flat distribution; actually,
this T-range corresponds to the blocking of 2-3 nm objects, which are not well characterized from the distribution in Fig.1. However, it is clear from Fig.3 that a normal extrapolation will not yield a zero viscosity at zero temperature; below 150 mK, the viscosity data even show a systematic tendency to increase as T is lowered. A similar behavior has been noted in an array of cobalt particles \cite{wegrowe}, and also in a Permalloy sample \cite{Vitale}. With respect to maghemite, a viscosity anomaly (plateau from 2.2 to 1.8 K) has been observed in a system of particles dispersed in a glassy matrix \cite{Zhang}; no anomaly was visible for the same particles in water, suggesting the influence of the matrix via magneto\-striction phenomena \cite{Zhang}.
\begin{figure}[h]
\centerline{\epsfysize=7cm \epsfbox{fig4.eps} }
\caption{ ZFC and FC curves in the low temperature region. The ``TRM'' curve has been measured when, after field-cooling to 60 mK, the field is cut and the temperature is raised. This measured TRM is equal to the difference between FC and ZFC, as usual when linear response theory applies.}
\end{figure}
We consider that our present results may give rise to two possible conclusions (a combination of both is also possible).
First, one may assume that the dynamics is thermally activated. The implication of our results is that the distribution of energy barriers $P(U)$ increases abruptly towards smaller values, more rapidly than $1/U^2$. This is a surprising result, very different from the framework in which viscosity measurements are commonly interpreted in the literature (approximately a flat distribution). We have in addition performed ZFC/FC measurements in this low-T range, which are displayed in Fig.4. They show an increase in the magnetization for decreasing T, which is 1/T-like and of the same amount in both ZFC and FC cases (see TRM in Fig. 4). If this behavior is ascribed to clusters of e.g. 10 spins, the Curie constant would correspond to 0.5\% of the total $\gamma-Fe_2O_3$ ~amount. Thus, there are indeed some very small magnetic entities which are not frozen, even at 60 mK. Fig. 4 also shows a significant difference between the ZFC and FC curves, which corresponds to the slow dynamics observed in this low-T range. All data are therefore compatible with the existence of a significant low-energy tail of the barrier distribution, increasing further for the lowest values. One may think of very small particles; it would be of interest to check other systems of small particles for this possibility. It has also been proposed that such small barriers arise from decompensation effects at the surface of the ferrimagnetic particles \cite{berko}; surface defects might be an intrinsic component of the dynamics of nanometric particles at very low temperatures.
A second possible conclusion concerns the quantum tunneling of the particle magnetization (QTM) through its anisotropy barrier.
In a first approximation, the contribution of such processes could be independent of temperature; from \cite{chud}, quantum processes should be of the same order of magnitude as thermal processes below a crossover temperature $T_c$, which can be here estimated as $T_c\simeq 100\,mK$ ($T_c$ does not depend on the barrier height, which only influences the relaxation rates). It is therefore possible that such processes contribute significantly in our T-range (one may even wonder why they should not be visible). The increase of the viscosity towards lower T can be understood in two ways. On the one hand, it has been argued in \cite{Vitale} that the viscosity should be T-independent if the two energy levels between which quantum tunneling occurs are sufficiently separated with respect to $k_BT$, whereas it should go like 1/T for quasi-degenerate levels, which could be our situation of low-field relaxations. A low-T increase of the viscosity in Permalloy has thus been described as quantum jumps of a Bloch wall between pinning sites of comparable energies \cite{Vitale}. In more general terms, on the other hand, one may think that lower temperatures decrease the coupling to phonons, therefore reducing the dissipation and enhancing quantum tunneling processes \cite{chud2}.
A ``T.Lnt" plot has been proposed to help distinguish between thermal and quantum processes in size-distributed particles \cite{T.Lnt}, but this is not possible with the present relaxation data, obtained by measuring only SQUID signal variation (and not the full value of the magnetization). Actually, the question of a satisfactory evidence of QTM processes in such systems remains controversial; however, we believe that the numerous observations of anomalies in the low-T dynamics of small particles lead us to the minimal conclusion that things are not as simple as we had thought.
\section{Introduction}
The electroweak equivalence theorem (ET)~\cite{et0}-\cite{mike}
quantitatively connects the high energy scattering amplitudes of
longitudinally polarized weak gauge bosons ($V^a_L=W^\pm_L,Z_L$) to
the corresponding amplitudes of would-be Goldstone bosons
($\phi^a=\phi^\pm ,\phi^0$).
The ET has been widely used and has proven to be a powerful tool in
studying the electroweak symmetry breaking (EWSB) mechanism, which
remains a mystery and awaits experimental exploration at
the CERN Large Hadron Collider (LHC) and the future linear colliders.
After some initial proposals~\cite{et0}, Chanowitz and
Gaillard~\cite{mike-mary} gave the first general formulation of the ET
for an arbitrary number of external longitudinal vector bosons and
pointed out the non-trivial cancellation of terms growing like powers
of the large energy which arise from external longitudinal
polarization vectors.
The existence of radiative modification factors to the ET
was revealed by Yao and Yuan and further discussed by
Bagger and Schmidt~\cite{YY-BS}.
In recent systematic investigations,
the precise formulation for the ET has been given for both
the standard model (SM)~\cite{et1,et2,bill}
and chiral Lagrangian formulated electroweak theories (CLEWT)~\cite{et3},
in which convenient renormalization schemes for exactly simplifying
these modification factors have been proposed
for a class of $R_\xi$-gauges.
A further general study of both multiplicative and additive modification
factors [cf. eq.~(1.1)] has been performed
in Ref.~\cite{et4,et5} for both the SM and CLEWT,
by analyzing the longitudinal-transverse ambiguity
and the physical content of the ET as a criterion for
probing the EWSB sector. According to these studies,
the ET can be precisely formulated as~\cite{mike-mary}-\cite{et5}
$$
T[V^{a_1}_L,\cdots ,V^{a_n}_L;\Phi_{\alpha}]
= C\cdot T[i\phi^{a_1},\cdots ,i\phi^{a_n};\Phi_{\alpha}]+ B ~~,
\eqno(1.1)
$$
$$
\begin{array}{ll}
C & \equiv C^{a_1}_{\rm mod}\cdots C^{a_n}_{\rm mod}
= 1 + O({\rm loop}) ~~,\\[0.25cm]
B & \equiv\sum_{l=1}^n (~C^{a_{l+1}}_{\rm mod}\cdots C^{a_n}_{\rm mod}
T[v^{a_1},\cdots ,v^{a_l},i\phi^{a_{l+1}},\cdots ,
i\phi^{a_n};\Phi_{\alpha}]
+ ~{\rm permutations ~of}~v'{\rm s ~and}~\phi '{\rm s}~) \\[0.25cm]
& = O(M_W/E_j){\rm -suppressed}\\[0.25cm]
& v^a\equiv v^{\mu}V^a_{\mu} ~,~~~
v^{\mu}\equiv \epsilon^{\mu}_L-k^\mu /M_V = O(M_V/E)~,~~(M_V=M_W,M_Z)~~,
\end{array}
\eqno(1.1a,b,c)
$$
with the conditions
$$
E_j \sim k_j \gg M_W ~, ~~~~~(~ j=1,2,\cdots ,n ~)~~,
\eqno(1.2a)
$$
$$
C\cdot T[i\phi^{a_1},\cdots ,i\phi^{a_n};\Phi_{\alpha}] \gg B~~,
\eqno(1.2b)
$$
where $~\phi^a$~'s are the Goldstone boson fields and
$\Phi_{\alpha}$ denotes other possible physical in/out states.
~$C_{\rm mod}^a=1+O({\rm loop})$~ is a
renormalization-scheme and gauge dependent
constant called the modification factor, and
$E_j$ is the energy of the $j$-th external line. For $~E_j \gg M_W~$,
the $B$-term is only $~O(M_W/E_j)$-suppressed relative to
the leading term~\cite{et4},
$$
B = O\left(\frac{M_W^2}{E_j^2}\right)
T[ i\phi^{a_1},\cdots , i\phi^{a_n}; \Phi_{\alpha}] +
O\left(\frac{M_W}{E_j}\right)T[ V_{T_j} ^{a_{r_1}}, i\phi^{a_{r_2}},
\cdots , i\phi^{a_{r_n}}; \Phi_{\alpha}]~~.
\eqno(1.3)
$$
Therefore it can be either larger or smaller than
$~O(M_W/E_j)$, depending on the magnitudes of the $\phi^a$-amplitudes
on the RHS of (1.3)~\cite{et4,et5}.
For example, in the CLEWT, it was found that
$~B=O(g^2)~$ \cite{et4,et5,mike}, which is a constant
depending only on the SM gauge coupling constant and
does not vanish with increasing energy.
Thus, the condition (1.2a) is necessary but not sufficient for
ignoring the whole $B$-term. For sufficiency, the condition (1.2b)
must also be imposed~\cite{et4}.
In section~3.3, we shall discuss minimizing the approximation from
ignoring the $B$-term when going beyond lowest order calculations.
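The size of the $v^{\mu}$-factor in (1.1c) is easily visualized numerically.
As a purely kinematic illustration (not part of the derivation), the following
Python sketch takes a $W$ boson moving along the $z$ axis, with
$k^{\mu}=(E,0,0,|\vec{k}|)$ and
$\epsilon^{\mu}_L=(|\vec{k}|,0,0,E)/M_W$, and compares the largest component of
$v^{\mu}=\epsilon^{\mu}_L-k^{\mu}/M_W$ with $M_W/2E$; the value
$M_W=80.4$~GeV is used only for definiteness:
\begin{verbatim}
import numpy as np

Mw = 80.4                                    # GeV
for E in (200.0, 500.0, 2000.0):             # energies well above M_W
    k = np.sqrt(E**2 - Mw**2)
    eps_L = np.array([k, 0.0, 0.0, E]) / Mw  # longitudinal polarization
    kmu   = np.array([E, 0.0, 0.0, k])       # four-momentum
    v     = eps_L - kmu / Mw                 # v^mu of Eq. (1.1c)
    print(E, round(np.max(np.abs(v)), 4), round(Mw / (2.0 * E), 4))
\end{verbatim}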
In the present work, we shall primarily study the simplification of
the radiative modification factors, $C^a_{\rm mod}$~, to unity.
As shown in (1.1), the modification factors differ from unity
at loop levels for all external would-be
Goldstone bosons, and are not suppressed by the $~M_W/E_j$-factor.
Furthermore, these modification factors
may depend on both the gauge and scalar coupling
constants~\cite{et1,et2}.
Although $~~C^a_{\rm mod}-1 = O({\rm loop})~~$,
this does {\it not} mean that $~C^a_{\rm mod}$-factors cannot appear
at the leading order of a perturbative expansion. An example
is the $~1/{\cal N}$-expansion~\cite{1/N}
in which the leading order contributions
include an infinite number of Goldstone boson loops so that
the $~C^a_{\rm mod}$~'s will survive the large-${\cal N}$ limit
if the renormalization scheme is not properly chosen.
In general, the appearance of $C^a_{\rm mod}$~'s
at loop levels alters the
high energy equivalence between $V_L$ and Goldstone boson amplitudes
and potentially invalidates the na\"{\i}ve intuition gained from tree
level calculations. For practical applications of the ET at loop
levels, the modification factors complicate the calculations and
reduce the utility of the equivalence theorem. Thus, the
simplification of $C_{\rm mod}^a$ to unity is very useful.
The factor $C_{\rm mod}^a$ has been derived in the general
$R_\xi$-gauges for both the SM~\cite{et1,et2} and CLEWT~\cite{et3},
and been simplified to unity in a renormalization scheme,
called {\it Scheme-II}~ in those references, for a
broad class of $R_\xi$-gauges. {\it Scheme-II}~ is particularly
convenient for 't~Hooft-Feynman gauge, but cannot be applied to
Landau gauge. In the present work, we make a natural generalization
of our formalism and construct a new scheme, which we call {\it
Scheme-IV}~, for {\it all} ~$R_\xi$-gauges including both
't~Hooft-Feynman and Landau gauges.
In the Landau gauge, the exact simplification of $C_{\rm mod}^a$ is
straightforward for the $U(1)$ Higgs theory~\cite{et2,bill};
but, for the realistic non-Abelian
theories (such as the SM and CLEWT) the situation is much
more complicated. Earlier Landau gauge formulations of the non-Abelian
case relied on explicit calculation of new loop level quantities,
$~\Delta^a_i$, involving the Faddeev-Popov ghosts~\cite{et2,PHD}.
This new {\it Scheme-IV}~ proves particularly convenient for
Landau gauge. This is very useful since Landau gauge has been widely used
in the literature and proves particularly convenient for studying
dynamical EWSB. For instance, in the CLEWT, the complicated non-linear
Goldstone boson-ghost interaction vertices from the Faddeev-Popov term
(and the corresponding higher dimensional counter terms) vanish in
Landau gauge, while the Goldstone boson and ghost fields remain
exactly massless~\cite{app}.
In the following analysis, we shall adopt the notation of
references~\cite{et1,et2} unless otherwise specified. This paper
is organized as follows: In section~2 we derive the necessary
Ward-Takahashi (WT) identities and construct our new renormalization
scheme.
In section~3, we derive the precise formulation of
{\it Scheme-IV}~ such that the ET is
free from radiative modifications (i.e., $C^a_{\rm mod}=1$) in all
$R_\xi$-gauges
including both Landau and 't~Hooft-Feynman gauges.
This is done for a variety of models including the $SU(2)_L$ Higgs
theory, the full SM, and both the linearly and non-linearly realized
CLEWT. We further propose a convenient new
prescription, called the ``Divided Equivalence Theorem'' (DET), for
minimizing the error caused by ignoring the $B$-term.
Finally, we discuss the relation of {\it Scheme-IV}~ to our previous
schemes.
In section~4, we perform explicit one-loop calculations to demonstrate
our results. Conclusions are given in section~5.
\section{The Radiative Modification Factor $C_{\rm mod}^a$
and Renormalization {\it Scheme-IV}}
In the first part of this section,
we shall define our model and briefly explain how
the radiative modification factor to the ET ($C_{\rm mod}^a$) originates
from the quantization and renormalization procedures.
Then, we analyze the properties of the $C_{\rm mod}^a$ in different
gauges and at different loop levels.
This will provide the necessary preliminaries
for our main analyses and make this paper self-contained.
In the second part of this section, using WT identities,
we construct the new renormalization {\it Scheme-IV}~
for the exact simplification of the $C_{\rm mod}^a$-factor
in {\it all} $R_\xi$-gauges including both
't~Hooft-Feynman and Landau gauges. Our prescription for obtaining
$C_{\rm mod}^a=1$~ {\it does not require any explicit calculations
beyond those needed for the usual on-shell renormalization program.}
\subsection{The Radiative Modification Factor $C_{\rm mod}^a$}
For simplicity, we shall first derive our results in the
$SU(2)_L$ Higgs theory by taking $~g^{\prime}=0~$ in the electroweak
$SU(2)_L\otimes U(1)_Y$ standard model (SM).
The generalizations to the full SM and to
the effective Lagrangian formulations
are straightforward (though there are some further complications)
and will be given in later sections.
The field content for the $SU(2)_L$ Higgs theory
consists of the physical fields, $H$, $W^a_\mu$, and $f$($\bar{f}$)
representing the Higgs, the weak gauge bosons and the fermions,
respectively, and the unphysical
fields $\phi^a$, $c^a$, and $\bar{c}^a$, representing the would-be
Goldstone bosons, the Faddeev-Popov ghosts, and the anti-ghosts
respectively.
We quantize the theory using the following general
$R_\xi$-gauge fixing condition
$$
\begin{array}{l}
\displaystyle
{\cal L}_{\rm GF} ~=~ -{1\over2}(F^a_0)^2 ~~,\\[0.4cm]
F^a_0\ ~=~ (\xi_0^a)^{-{1\over2}}\partial_\mu W^{a\mu}_0
-(\xi_0^a)^{1\over2}\kappa_0^a\phi^a_0
~=~ (\underline{\bf K}_0^a)^T \underline{\bf W}_0^a ~~,\\[0.3cm]
\underline{\bf K}_0^a \equiv
\displaystyle\left( (\xi_0^a)^{-\frac{1}{2}}\partial_{\mu},
-(\xi_0^a)^{\frac{1}{2}}\kappa_0^a \right)^T ~~,~~~
\underline{\bf W}_0^a \equiv (W_0^{a\mu}, \phi_0^a)^T ~~,
\end{array}
\eqno(2.1)
$$
where the subscript ``$_0$'' denotes bare quantities.
For the case of the $SU(2)_L$ theory, we can
take $~\xi_0^a=\xi_0~$, $~\kappa_0^a =\kappa_0~$, for $a=1,2,3$.
The quantized bare Lagrangian for the $SU(2)_L$ model is
$$
{\cal L}_{SU(2)_L} = -{1\over4}W^{a\mu\nu}_{0}W^{a}_{0\mu\nu}
+ \left|D_0^\mu\Phi_0\right|^2 - U_0\left(\left|\Phi_0
\right|^2\right) - {1\over2}(F^a_0)^2 + ({\xi_0^a})^{1\over2}
\bar{c}^a_0\hat{s}F^a_0 + ~{\cal L}_{\rm fermion}
\eqno(2.2)
$$
where $~\hat{s}~$ is the Becchi-Rouet-Stora-Tyutin (BRST)~\cite{BRST}
transformation operator. Since our analysis and formulation
of the ET do not rely on any details of the Higgs potential
or the fermionic part, we do not list their explicit forms here.
The Ward-Takahashi (WT) and Slavnov-Taylor (ST) identities of
a non-Abelian gauge theory are most conveniently
derived from the BRST symmetry of the quantized action.
The transformations of the bare fields are
$$
\begin{array}{ll}
\hat{s} W^{a\mu}_0 = D_0^{a\mu}{c}_0^{a} = \partial_{\vphantom\mu}^{\mu}{c}_0^{a}
+ g_0\varepsilon^{abc}\!\Dbrack{W^{\mu b}_0{c}_0^{c}}~, &
\hat{s} H_0 = -\displaystyle {g_0\over2}\!\Dbrack{\phi_0^a{c}_0^{a}
\vphantom{W^{b}}} ~,\\[0.4cm]
\hat{s}\phi_0^a = D_0^{\phi}c_0^a
= M_{W0}{c}_0^a + \displaystyle{g_0\over2}
\Dbrack{H^{\phantom b}_0{c}_0^a} + \displaystyle{g_0\over2}
\varepsilon^{abc}\!\Dbrack{\phi_{0}^{b}{c}_0^c} ~,~~ &
\hat{s}{c}_0^a = -\displaystyle{g_0\over2}\varepsilon^{abc}\!\Dbrack{
{c}_{0}^{b}{c}_{0}^{c}} ~, \\[0.4cm]
\hat{s}F_0^a = \xi_0^{-{1\over2}}\cdot\partial_{\mu}\hat{s}W_0^{a\mu} -
\xi_0^{1\over2}\kappa_0 \cdot \hat{s}\phi_0^a ~, &
\hat{s}\bar{c}_0^{a} = -\xi_0^{-{1\over2}}F^{a}_0 ~,
\end{array}
\eqno(2.3)
$$
where expressions such as $\Dbrack{W_{0}^{\mu b}c_0^c}\!(x)$ indicate
the local composite operator fields formed from $W_{0}^{\mu b}(x)$ and
$c_0^c(x)$.
The appearance of the modification factor $C_{\rm mod}^a$ to the ET is
due to the amputation and the renormalization of external
massive gauge bosons and their corresponding Goldstone boson fields.
For the amputation, we need a general ST identity for the propagators
of the gauge boson, Goldstone boson
and their mixing~\cite{YY-BS}-\cite{et2}.
By introducing the external source term
$~~\int d^4x\, [J_i\chi_0^i + \bar{I}^ac_0^a +\bar{c}^a_0I^a]~~$
(where $~\chi_0^i~$ denotes any possible fields except the (anti-)ghost
fields) to the generating functional, we get
the following generating equation for connected Green functions:
$$
0=J_i(x)<0|T\hat{s}\chi^i_0(x)|0> -\bar{I}^a(x)<0|T\hat{s}c^a_0(x)|0>
+<0|T\hat{s}\bar{c}^a_0(x)|0> I^a(x)
\eqno(2.4)
$$
from which we can derive the ST identity for the matrix propagator of
$~\underline{\bf W}_0^a~$,
$$
\underline{\bf K}_0^T \underline{\bf D}_0^{ab} (k)
~=~ - \displaystyle
\left[\underline{\bf X}^{ab}\right]^T (k)
\eqno(2.5)
$$
with
$$
\underline{\bf D}_0^{ab}(k)
~=~ <0|T\underline{\bf W}_0^a (\underline{\bf W}_0^b)^T|0>(k) ~~,~~~~
{\cal S}_0(k)\delta^{ab} ~=~ <0|Tc_0^b\bar{c}_0^a|0>(k)~~,
\eqno(2.5a)
$$
$$
\underline{\bf X}^{ab}(k)
~\equiv ~\hat{\underline{\bf X}}^{ab}(k){\cal S}_0(k)
~\equiv ~\left(
\begin{array}{l}
\xi_0^{\frac{1}{2}}<0|T\hat{s}W_0^{b\mu}|0> \\
\xi_0^{\frac{1}{2}}<0|T\hat{s}\phi_0^b|0>
\end{array} \right)_{(k)}\cdot {\cal S}_0(k)~~.
\eqno(2.5b)
$$
To explain how the modification factor $C_{\rm mod}^a$ to the ET arises,
we start from the well-known ST identity~\cite{mike-mary}-\cite{et2}
$~~<0|F_0^{a_1}(k_1)\cdots F_0^{a_n}(k_n)\Phi_{\alpha}|0> =0~~$ and
set $~n=1~$, i.e.,
$$
0=G[F_0^a(k);\Phi_{\alpha}]
=\underline{\bf K}_0^T G[\underline{\bf W}_0^a(k);\Phi_{\alpha}]
=- [\underline{\bf X}^{ab}]^T T[\underline{\bf W}_0^a(k);\Phi_{\alpha}] ~~.
\eqno(2.6)
$$
Here $~G[\cdots ]~$ and $~T[\cdots ]~$ denote the Green function and
the $S$-matrix element, respectively. The identity (2.6) leads directly
to
$$
\frac{k_{\mu}}{M_{W0}}T[W_0^{a\mu}(k);\Phi_{\alpha}]
= \widehat{C}_0^a(k^2)T[i\phi_0^a;\Phi_{\alpha}]
\eqno(2.7)
$$
with $~\widehat{C}^a_0(k^2)~$ defined as
$$
\widehat{C}^a_0(k^2)\equiv{{1+\Delta^a_1(k^2)+\Delta^a_2(k^2)}\over
{1+\Delta^a_3(k^2)}} ~~,
\eqno(2.8)
$$
in which the quantities $\Delta^a_i$ are the proper vertices of the
composite operators
$$
\begin{array}{rcl}
\Delta^a_1(k^2)\delta^{ab}
& = & \displaystyle{g_0\over2M_{W0}}<0|T
\Dbrack{H^{\phantom b}_0{c}_0^b}|\bar{c}_0^a>(k) ~~,\\[0.4cm]
\Delta^a_2(k^2)\delta^{ab}
& = & -\displaystyle{g_0\over2M_{W0}}\varepsilon^{bcd}<0|T
\Dbrack{\phi_{0}^{c}{c}_0^d}|\bar{c}_0^{a}>(k) ~~,\\[0.4cm]
ik^\mu\Delta^a_3(k^2)\delta^{ab}
& = & -\displaystyle{g_0\over2}\varepsilon^{bcd}<0|T
\Dbrack{W^{\mu b}_0{c}_0^c}|\bar{c}_0^{a}>(k) ~~,
\end{array}
\eqno(2.9)
$$
which are shown diagrammatically in figure 1.
\vbox{
\par\epsfxsize=\hsize\epsfbox{hjhwk1.ps}
\sfig{}\figa{Composite operator diagrams contributing to
the radiative modification factor of the equivalence
theorem in non-Abelian Higgs theories. (a). $~\Delta^a_1$~;
(b). $~\Delta^a_2~$; (c). $~\Delta^a_3~$.}}
\noindent
After renormalization, (2.8) becomes
$$
\frac{k_{\mu}}{M_{W}}T[W^{a\mu}(k);\Phi_{\alpha}]
= \widehat{C}^a(k^2)T[i\phi^a;\Phi_{\alpha}]
\eqno(2.10)
$$
with the finite renormalized coefficient
$$
\widehat{C}^a(k^2) = Z_{M_W}\left(\frac{Z_W}{Z_\phi}\right)^{\frac{1}{2}}
\widehat{C}^a_0(k^2) ~~.
\eqno(2.11)
$$
The renormalization constants are defined as
$~W^{a\mu}_0 =Z_W^{1\over 2} W^{a\mu}~$,~
$\phi_0^a = Z_\phi^{1\over 2}\phi^a~$,
and $~M_{W0}=Z_{M_W}M_{W}~$.
The modification factor to the ET is precisely
the value of this finite renormalized coefficient $~\widehat{C}^a(k^2)~$
on the gauge boson mass-shell:
$$
C^a_{\rm mod} = \left.\widehat{C}^a(k^2)\right|_{k^2=M_W^2}~~,
\eqno(2.12)
$$
provided that the usual on-shell subtraction for $M_W$ is adopted.
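For orientation only (none of the constructions below relies on it), writing
$Z_i = 1+\delta Z_i$ and expanding (2.8) and (2.11) to one loop gives
$$
\widehat{C}^a(k^2) ~=~ 1 + \delta Z_{M_W}
+ \frac{1}{2}\left(\delta Z_W - \delta Z_\phi\right)
+ \Delta_1^a(k^2) + \Delta_2^a(k^2) - \Delta_3^a(k^2) + O(2~{\rm loop}) ~~,
$$
which shows explicitly how the wavefunction and mass renormalization constants
compete with the ghost-related vertices $\Delta^a_i$ in determining
$C^a_{\rm mod}$.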
In Sec.~3, we shall transform the identity (2.10) into
the final form of the ET (which connects the $~W_L^a$-amplitude to
that of the corresponding $\phi^a$-amplitude)
for an arbitrary number of external longitudinal gauge bosons
and obtain a {\it modification-free} formulation of the ET
with $~C^a_{\rm mod}=1~$ to all loop orders.
As shown above, the appearance of the $C^a_{\rm mod}$ factor to the
ET is due to the amputation and renormalization of external
$W^{a\mu}$ and $\phi^a$ lines by using the ST identity (2.5).
Thus it is natural that the $C^a_{\rm mod}$ factor contains
$W^{a\mu}$-ghost, $\phi^a$-ghost and Higgs-ghost interactions expressed
in terms of these $\Delta^a_i$-quantities. Further
simplification can be made by re-expressing $C^a_{\rm mod}$ in terms
of known $W^{a\mu}$ and $\phi^a$ proper self-energies using
WT identities as first proposed in Refs.~\cite{et1}-\cite{et3}.
This step is the basis of our simplification of
$~C_{\rm mod}^a =1~$ and will also be adopted
for constructing our new {\it Scheme-IV}
in Sec.~2.2. We must emphasize that, {\it our simplification
of $~C_{\rm mod}^a =1~$ does not need any explicit calculation of
the new loop-level $\Delta^a_i$-quantities} which
involve ghost interactions and are quite complicated. This is precisely
why our simplification procedure is useful.
Finally, we analyze the properties of the $~\Delta_i^a$-quantities
in different gauges and at different loop-levels.
The loop-level $\Delta_i^a$-quantities are
non-vanishing in general
and make $~\widehat{C}_0(k^2)\neq 1~$ and $~C^a_{\rm mod}\neq 1~$
order by order. In Landau gauge, these $\Delta_i^a$-quantities can be
partially simplified,
especially at the one-loop order, because the tree-level
Higgs-ghost and $\phi^a$-ghost vertices vanish. This
makes $~\Delta_{1,2}^a=0~$ at one loop.\footnote{
We note that, in the non-Abelian case,
the statement that $~\Delta^a_{1,2}=0~$ for Landau gauge
in Refs.~\cite{YY-BS,et2} is only valid at the one-loop order.}~
In general,
$$
\Delta_1^a=\Delta_2^a = 0+O(2~{\rm loop})~,
~~~\Delta_3^a =O(1~{\rm loop})~,~~~~~
(~{\rm in~Landau~gauge}~)~.
\eqno(2.13)
$$
Beyond the one-loop order, $~\Delta^a_{1,2}\neq 0~$ since the
Higgs and Goldstone boson fields can still indirectly couple to the ghosts
via loop diagrams containing internal gauge fields, as shown in
Figure~2.
\vbox{
\par\epsfxsize=\hsize\epsfbox{hjhwk2.ps}
\sfig{}\figb{The lowest order diagrams contributing to
$\Delta^a_1(k^2)$ and $\Delta^a_2(k^2)$ in Landau gauge.}}
\noindent
We note that the 2-loop diagram Figure~2b is non-vanishing in the
full SM due to the tri-linear $~A_{\mu}$-$W_{\nu}^{\pm}$-$\phi^{\mp}$
and $~Z_{\mu}$-$W_{\nu}^{\pm}$-$\phi^{\mp}$ vertices, while it
vanishes in the $SU(2)_L$ theory since the couplings of
these tri-linear vertices are proportional to $~\sin^2\theta_W$~.
\subsection{Construction of Renormalization {\it Scheme-IV} }
From the generating equation for WT identities~\cite{et2,BWLee},
we obtain a set of identities for bare inverse propagators which
contain the bare modification factor $~\widehat{C}_0^a(k^2)~$ [derived
in (2.8), (2.9) and (2.12)]~\cite{et1,et2}
$$
\begin{array}{rcl}
ik^\mu[i{\cal D}^{-1}_{0,\mu\nu,ab}(k)+ \xi^{-1}_0 k_\mu k_\nu]\ &=& -
M_{W0}\widehat{C}^a_0(k^2)[i{\cal D}^{-1}_{0,\phi\nu,ab}(k)- i
\kappa_0 k_\nu] \\[0.3cm]
ik^\mu[-i{\cal D}^{-1}_{0,\phi\mu,ab}(k)+i \kappa_0 k_\mu]\ &=& -
M_{W0}\widehat{C}^a_0(k^2)[i{\cal D}^{-1}_{0,\phi\phi,ab}(k)
+\xi_0 \kappa^2_0]\\[0.3cm]
i{\cal S}^{-1}_{0, ab}(k)\ &=& [1+\Delta_3^a(k^2)][k^2-\xi_0\kappa_0
M_{W0}\widehat{C}^a_0(k^2)]\delta_{ab}
\end{array}
\eqno(2.14)
$$
where ${\cal D}_{0,\mu\nu}$, ${\cal D}_{0,\phi\nu}$, ${\cal
D}_{0,\phi\phi}$, ${\cal S}_{0, ab}$ are the unrenormalized full
propagators for the gauge boson, the gauge-Goldstone-boson mixing, the
Goldstone boson and the ghost, respectively.
The renormalization program is chosen to match the on-shell
scheme~\cite{Hollik} for the physical degrees of freedom, since this
is very convenient and popular for computing the electroweak radiative
corrections (especially for high energy processes).
Among other things, this choice means that the proper
self-energies of physical particles are renormalized so as to vanish
on their mass-shells, and that the vacuum expectation value of the
Higgs field is renormalized
such that the tadpole graphs are exactly cancelled. If the vacuum
expectation value were not renormalized in this way, there would be
tadpole contributions to figure~1a.
The renormalization constants of the unphysical degrees of freedom are
defined as
$$
\phi^a_0 = Z_{\phi}^{1\over2}\phi^a ~,\ \ c^a_0=Z_cc^a ~,
\ \ \bar{c}^a_0=\bar{c}^a ~,\ \ \xi_0^a =Z_\xi \xi^a ~,
\ \ \kappa_0^a =Z_\kappa\kappa^a ~.
\eqno(2.15)
$$
Some of these renormalization constants will be chosen such that
the ET is free from radiative modifications, while the others are left
to be determined as usual~\cite{Hollik}
so that our scheme is most convenient for the
practical application of the ET.
Using (2.15) and the relations
$~{\cal D}_{0,\mu\nu}=Z_W{\cal D}_{\mu\nu},~~
{\cal D}_{0,\phi\nu}=Z^{\frac{1}{2}}_\phi Z^\frac{1}{2}_W
{\cal D}_{\phi \nu}$~, and $~{\cal S}_0 = Z_c{\cal S}~$, we obtain the
renormalized identities
$$
\begin{array}{l}
ik^\mu[i{\cal D}^{-1}_{\mu\nu,ab}(k) +
{Z_W\over Z_\xi}\xi^{-1}k_\mu
k_\nu]\ ~=~ - \widehat{C}^a(k^2)M_W[i{\cal D}^{-1}_{\phi\nu,ab}(k)
- Z_\kappa Z^{1\over2}_W Z^{1\over2}_\phi ik_\nu\kappa]
\\[0.35cm]
ik^\mu[-i{\cal D}^{-1}_{\phi\mu,ab}(k) + Z_\kappa Z^{1\over2}_W
Z^{1\over2}_\phi ik_\mu\kappa]\ ~=~ -\widehat{C}^a(k^2)M_W[i{\cal
D}^{-1}_{\phi\phi,ab}(k) + Z^2_\kappa Z_\xi Z_\phi\xi\kappa^2]
\\[0.35cm]
i{\cal S}^{-1}_{ab}(k) ~=~ Z_c[1+\Delta_3(k^2)][k^2-\xi\kappa M_W Z_\xi
Z_\kappa ({Z_\phi\over Z_W})^{1\over2}
\widehat{C}^a(k^2)]\delta_{ab}
\end{array}
\eqno(2.16)
$$
Note that the renormalized coefficient $~\widehat{C}^a(k^2)~$
appearing in (2.16) is precisely the same as that in (2.12).
Constraints on $Z_\xi$, $Z_\kappa$, $Z_\phi$ and $Z_c$
can be drawn from the fact that the coefficients in the
renormalized identities of (2.16) are finite. This implies that
$$
\begin{array}{ll}
Z_\xi = \Omega_\xi Z_W~, & Z_\kappa=\Omega_\kappa Z^{\frac{1}{2}}_W
Z^{-\frac{1}{2}}_{\phi}Z_\xi^{-1}~,\\[0.3cm]
Z_\phi =\Omega_\phi Z_W Z^2_{M_W} \hat{C}_0(sub.~ point)~,~~~ &
Z_c=\Omega_c[1+\Delta_3(sub.~ point)]^{-1}~,
\end{array}
\eqno(2.17)
$$
with
$$
\Omega_{\xi ,\kappa , \phi , c} ~=~ 1 + O({\rm loop}) ~=~
{\rm finite}~~,
\eqno(2.17a)
$$
where $~\Omega_\xi~$, $~\Omega_\kappa~$,
$~\Omega_\phi~$ and $~\Omega_c~$ are unphysical and
arbitrary finite constants to be determined by the subtraction conditions.
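As a quick check, substituting (2.17) back into (2.16) one finds that every
coefficient indeed reduces to a finite combination of the $\Omega$'s:
$$
\frac{Z_W}{Z_\xi} ~=~ \Omega_\xi^{-1} ~~,~~~~
Z_\kappa Z^{1\over2}_W Z^{1\over2}_\phi ~=~ \Omega_\kappa\Omega_\xi^{-1} ~~,~~~~
Z^2_\kappa Z_\xi Z_\phi ~=~ \Omega^2_\kappa\Omega_\xi^{-1} ~~,~~~~
Z_\xi Z_\kappa\left(\frac{Z_\phi}{Z_W}\right)^{1\over2} ~=~ \Omega_\kappa ~~.
$$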
The propagators are expressed in terms of the proper self-energies as
$$
\begin{array}{rcl}
i{\cal D}^{-1}_{\mu\nu,ab}(k) &=& \left[\left(g_{\mu\nu} -
{k_\mu k_\nu\over k^2}\right)\left\lgroup -k^2 + M_W^2 -
\Pi^a_{WW}(k^2)\right\rgroup\right. \\[0.4cm]
& & \left.\kern30pt + {k_\mu k_\nu\over k^2}\left\lgroup
-\xi^{-1}k^2 + M_W^2 - {\widetilde\Pi}^a_{WW}(k^2)\right\rgroup
\right]\delta_{ab} ~~,\\[0.4cm]
i{\cal D}^{-1}_{\phi\mu,ab}(k) &=& -ik_\mu\left\lgroup M_W -
\kappa + {\widetilde\Pi}^a_{W\phi}(k^2)\right\rgroup\delta_{ab}~,
\\[0.3cm]
i{\cal D}^{-1}_{\phi\phi,ab}(k) &=& \left\lgroup k^2 - \xi\kappa^2
- {\widetilde\Pi}^a_{\phi\phi}(k^2)\right\rgroup\delta_{ab}~~,
\\[0.3cm]
i{\cal S}^{-1}_{ab}(k) &=& \left\lgroup k^2 - \xi\kappa M_W -
{\widetilde\Pi}^a_{c\bar{c}}(k^2)\right\rgroup\delta_{ab}~~,
\\[0.3cm]
\end{array}
\eqno(2.18)
$$
where $\Pi^a_{WW}$ is the proper self-energy of the physical part of the
gauge boson, and the $\widetilde\Pi^a_{ij}$~'s are the unphysical proper
self-energies.
Expanding the propagators in (2.16) in terms of proper self-energies
yields the following identities containing
$~\widehat{C}(k^2)~$:
$$
\begin{array}{lll}
\widehat{C}^a(k^2) & = &
\displaystyle{\xi^{-1}k^2\left(\Omega_\xi^{-1} - 1\right) +
M_W^2 - {\widetilde\Pi}^a_{WW}(k^2)\over
M_W\kappa\left(\Omega_\kappa\Omega_\xi^{-1}-1\right) +
M_W^2 + M_W{\widetilde\Pi}^a_{W\phi}(k^2)} ~~,\\[0.5cm]
\widehat{C}^a(k^2) & = & \displaystyle{k^2\over M_W}{
\kappa\left(\Omega_\kappa\Omega_\xi^{-1}-1\right)
+ M_W + {\widetilde\Pi}^a_{W\phi}(k^2) \over
\xi\kappa^2\left(\Omega_\kappa^2\Omega_\xi^{-1}-1\right)+
k^2 - {\widetilde\Pi}^a_{\phi\phi}(k^2) } ~~,\\[0.5cm]
{\widetilde\Pi}^a_{c\bar{c}}(k^2) & = &
k^2 - \xi\kappa M_W -
Z_c\left[1+\Delta_3(k^2)\right]\left[k^2-\xi\kappa
\Omega_\kappa M_W \widehat{C}^a(k^2)\right]~~. \\[0.3cm]
\end{array}
\eqno(2.19)
$$
We are now ready to construct our new renormalization
scheme, {\it Scheme-IV}~, which will insure
$~\widehat{C}^a(M_W^2)=1~$ for all $R_\xi$-gauges,
including Landau gauge. The $R_\xi$-gauges are a continuous
one parameter family of gauge-fixing conditions [cf. (2.1)]
in which the parameter ~$\xi$~ takes values from ~$0$~ to
$~\infty$~. In practice, however, there are only three important
special cases: the Landau gauge ($\xi=0$), the 't~Hooft-Feynman
gauge ($\xi=1$) and unitary gauge ($\xi\rightarrow\infty$). In the
unitary gauge, the unphysical degrees of freedom freeze out and
one cannot discuss the amplitude for the would-be
Goldstone bosons. In addition, the loop renormalization
becomes inconvenient in this gauge
due to the bad high energy behavior of massive gauge-boson propagators
and the resulting complication of the divergence structure.
The 't~Hooft-Feynman gauge offers great calculational advantages,
since the gauge boson propagator
takes a very simple form and the tree-level mass poles of each weak
gauge boson and its corresponding Goldstone boson and ghost are
all the same. The Landau gauge proves very convenient
in the electroweak chiral Lagrangian formalism~\cite{app}
by fully removing the complicated tree-level
non-linear Goldstone boson-ghost
interactions [cf. Sec.~3.2] and in this
gauge unphysical would-be Goldstones are exactly massless
like true Goldstone bosons.
To construct the new {\it Scheme-IV}~, we note that
{\it a priori}, we have six free parameters to be specified:
~$\xi$~, ~$\kappa$~, ~$Z_\phi$~, ~$Z_c$~, ~$\Omega_\xi$~,
and ~$\Omega_\kappa$~ in a general $R_\xi$-gauge.
For Landau gauge ($~\xi =0~$),
the gauge-fixing term $~{\cal L}_{\rm GF}~$ [cf. (2.1)]
gives vanishing Goldstone-boson masses without any $\kappa$-dependence,
and the bi-linear gauge-boson vertex
$~-\frac{1}{2\xi_0}(\partial_{\mu}W_0^{\mu})^2~$ diverges, implying
that the $W$-propagator is transverse and independent of ~$\Omega_\xi$~.
The only finite term
left in $~{\cal L}_{\rm GF}~$ for Landau gauge is the gauge-Goldstone
mixing vertex
$~~\kappa_0\phi_0\partial_{\mu}W_0^{\mu}
= ~\Omega_\xi^{-1}\Omega_\kappa\kappa\phi\partial_{\mu}W^{\mu}~~$
[cf. (2.17)],
which will cancel the tree-level $~W$-$\phi$ mixing
from the Higgs kinetic term $~\left| D_0^{\mu}\Phi_0\right|^2~$ in (2.2)
provided that we choose $~\kappa = M_W~$. Hence, for the purpose
of including Landau gauge into our {\it Scheme-IV}~, we shall not
make use of the degrees of freedom from $\Omega_\xi$ and $\Omega_\kappa$~,
and in order to remove the tree-level $~W$-$\phi~$ mixing, we shall set
$~\kappa = M_W~$. Thus, we fix the free parameters
~$\Omega_\xi$~, ~$\Omega_\kappa$ ~and $\kappa$~ as follows
$$
\Omega_\xi ~=~\Omega_\kappa ~=~ 1~~,~~~~ \kappa ~=~ M_W~~,~~~~~
(~{\rm in}~ Scheme-IV~)~~.
\eqno(2.20)
$$
From (2.17), the choice $~~\Omega_\xi =\Omega_\kappa = 1~~$ implies
$$
F^a_0 = F^a ~~,
\eqno(2.21)
$$
i.e., the gauge-fixing function $~F^a_0~$ is unchanged after the
renormalization. For the remaining three unphysical parameters
$~\xi$~, ~$Z_\phi$ ~and ~$Z_c$~, we shall leave ~$\xi$~ free to cover all
$R_\xi$-gauges and leave ~$Z_c$~ determined by the usual on-shell
normalization condition
$$
{\left. \displaystyle\frac{d}{dk^2}\widetilde{\Pi}^a_{c\bar{c}}
(k^2) \right| }_{k^2=\xi M_W^2} ~=~0 ~~ .
\eqno(2.22)
$$
Therefore, in our {\it Scheme-IV}~, {\it the only free parameter,}
which we shall specify for insuring
$~\widehat{C}(M_W^2)=1~$, is the wavefunction renormalization constant
$~Z_\phi~$ for the unphysical Goldstone boson.
Under the above choice (2.20),
the first two equations of (2.19) become
$$
\begin{array}{ll}
\widehat{C}^a(M_W^2) &
~=~ \displaystyle{M_W^2 - {\widetilde\Pi}^a_{WW}(M_W^2)\over M_W^2 +
M_W{\widetilde\Pi}^a_{W\phi}(M_W^2)}
~=~ \displaystyle{M_W^2 + M_W{\widetilde\Pi}^a_{W\phi}(M_W^2)\over
M_W^2 - {\widetilde\Pi}^a_{\phi\phi}(M_W^2)} \\[0.5cm]
& ~=~ \displaystyle
\left[{M_W^2 - {\widetilde\Pi}^a_{WW}(M_W^2)\over M_W^2 -
{\widetilde\Pi}^a_{\phi\phi}(M_W^2)}\right]^{1\over2} ~~,\\[0.3cm]
\end{array}
\eqno(2.23)
$$
at $~k^2 = M_W^2~$.
Note that (2.23) re-expresses the factor $~\widehat{C}^a(M_W^2)~$ in terms
of only two renormalized proper self-energies:
$\tilde{\Pi}_{\phi\phi}$ and $\tilde{\Pi}_{WW}$
(or $\tilde{\Pi}_{W\phi}$ ). We emphasize that, unlike the most general
relations (2.19) adopted in Refs.~\cite{et1,et2},
the identity (2.23) compactly takes the {\it same symbolic form
for any $R_\xi$-gauge including both 't~Hooft-Feynman and Landau gauges}
under the choice (2.20).\footnote{In fact, (2.23) holds for
arbitrary $~\kappa$~.}
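At the one-loop order, (2.23) reduces to the transparent form
$$
\widehat{C}^a(M_W^2) ~=~ 1 + \frac{1}{2M_W^2}
\left[{\widetilde\Pi}^a_{\phi\phi}(M_W^2)-{\widetilde\Pi}^a_{WW}(M_W^2)\right]
+ O(2~{\rm loop}) ~~,
$$
obtained by simply expanding the square root; this makes the choice imposed
below evident.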
From the new identity (2.23), we deduce that the modification factor
$~\widehat{C}^a(M_W^2)~$ can be made equal to unity provided the condition
$$
{\widetilde\Pi}^a_{\phi\phi}(M_W^2)={\widetilde\Pi}^a_{WW}(M_W^2)
\eqno(2.24)
$$
is imposed.
This is readily done by adjusting $~Z_\phi ~$ in correspondence to the
unphysical arbitrary finite quantity
$~\Omega_{\phi} = 1+\delta\Omega_{\phi}~$ in (2.17).
The precise form of the
needed adjustment can be determined by expressing
the renormalized proper self-energies
in terms of the bare proper self-energies plus the corresponding counter
terms~\cite{et2},
$$
\begin{array}{lll}
\widetilde{\Pi}^a_{WW}(k^2) & =~ \tilde{\Pi}_{WW ,0}(k^2)
+\delta\tilde{\Pi}_{WW} & =~ \displaystyle
Z_W\widetilde{\Pi}^a_{WW,0}(k^2)+
(1-Z_WZ_{M_W}^2)M_W^2 ~~, \\[0.35cm]
\widetilde{\Pi}^a_{W\phi}(k^2) & =~ \tilde{\Pi}_{W\phi ,0}(k^2)
+\delta\tilde{\Pi}_{W\phi}
& =~ \displaystyle (Z_WZ_\phi)^{1\over2}
\widetilde{\Pi}^a_{W\phi,0}(k^2) + [(Z_WZ_\phi
Z_{M_W}^2)^{1\over2}-1]M_W ~~, \\[0.35cm]
\widetilde{\Pi}^a_{\phi\phi}(k^2) & =~\tilde{\Pi}_{\phi\phi ,0}(k^2)
+\delta\tilde{\Pi}_{\phi\phi} & =~ \displaystyle
Z_\phi\widetilde{\Pi}^a_{
\phi\phi,0}(k^2) + (1-Z_\phi )k^2 ~~, \\[0.2cm]
\end{array}
\eqno(2.25)
$$
which, at the one-loop order, reduces to
$$
\begin{array}{ll}
\tilde{\Pi}_{WW}(k^2) &
=~\tilde{\Pi}_{WW ,0}(k^2)-[\delta Z_W +2\delta Z_{M_W}]M_W^2~~,\\[0.2cm]
\tilde{\Pi}_{W\phi}(k^2) &
=~\tilde{\Pi}_{W\phi ,0}(k^2)
-[\frac{1}{2}(\delta Z_W +\delta Z_\phi )+ \delta Z_{M_W}]M_W~~,\\[0.2cm]
\tilde{\Pi}_{\phi\phi}(k^2) &
=~\tilde{\Pi}_{\phi\phi ,0}(k^2)-\delta Z_\phi k^2 ~~.\\[0.2cm]
\end{array}
\eqno(2.26)
$$
Note that, in the above expressions for the $R_\xi$-gauge counter terms
under the choice $~~\Omega_\xi =\Omega_\kappa = 1~~$ [cf. (2.20)],
there is no explicit dependence on the gauge
parameters $~\xi~$ and $~\kappa~$ so that (2.25) and (2.26) take
the {\it same} forms for all $R_\xi$-gauges.
From either (2.25) or (2.26), we see that
in the counter terms to the self-energies there are three independent
renormalization constants $~Z_W,~Z_{M_W}$, and ~$Z_\phi~$. Among them,
$~Z_W~$ and $~Z_{M_W}~$ have been determined by the renormalization
of the physical sector, such as in the on-shell scheme (which
we shall adopt in this paper)~\cite{Hollik},
$$
\begin{array}{ll}
\left. \displaystyle\frac{d}{dk^2}\Pi_{WW}^a(k^2)\right|_{k^2=M_W^2}=0~,~~ &
(~{\rm for}~ Z_W~)~;\\[0.58cm]
\Pi_{WW}^a(k^2)|_{k^2=M_W^2}=0~,~~ & (~{\rm for}~ Z_{M_W}~)~.\\[0.3cm]
\end{array}
\eqno(2.27)
$$
We are just left with $~Z_\phi~$ from the unphysical sector
which can be adjusted, as shown in eq.~(2.17).
Since the ghost self-energy $~\widetilde{\Pi}^a_{c\bar{c}}~$ is irrelevant
to the above identity (2.23), we do not list, in (2.25) and (2.26),
the corresponding counter term $~\delta\widetilde{\Pi}^a_{c\bar{c}}~$
which contains one more renormalization constant $~Z_c~$
and will be determined as usual [cf. (2.22)].
Finally, note that we have already
included the Higgs-tadpole counter term $~-i\delta T~$
in the bare Goldstone boson and Higgs boson self-energies, through the
well-known ~tadpole~$=0~$ condition~\cite{et2,Hollik,MW}.
Now, equating ${\widetilde\Pi}^a_{\phi\phi}(M_W^2)$
and ${\widetilde\Pi}^a_{WW}(M_W^2)$ according to (2.24),
we solve for ~$Z_\phi$:
$$
Z_\phi ~=~ \displaystyle
Z_W{Z_{M_W}^2M_W^2-\widetilde{\Pi}^a_{WW,0}(M_W^2)\over
M_W^2-\widetilde{\Pi}^a_{\phi\phi,0}(M_W^2)} ~~,~~~~
(~{\rm in}~ Scheme-IV~)~.
\eqno(2.28)
$$
~$Z_\phi~$ is thus expressed in terms of known quantities, that is,
in terms of the renormalization constants of the physical sector and
the bare unphysical proper self-energies of the gauge fields and the
Goldstone boson fields, which must be computed in
any practical renormalization program.
We thus obtain $~\widehat{C}^a(M_W^2)=1~$ without the extra work of
explicitly evaluating the complicated $\Delta_i^a$~'s.
At the one-loop level, the solution for $~Z_\phi
= 1 + \delta Z_\phi ~$ in (2.28) reduces to
$$
\delta Z_\phi = \delta Z_W + 2\delta Z_{M_W}
+ M_W^{-2}\left[\widetilde{\Pi}^a_{\phi\phi,0}(M_W^2)-
\widetilde{\Pi}^a_{WW,0}(M_W^2)\right] ~~.
\eqno(2.28a)
$$
If we specialize to Landau gauge, (2.28a) can be alternatively
expressed in terms of the bare ghost self-energy
$~\widetilde{\Pi}_{c\bar{c},0}~$ plus the gauge boson
renormalization constants:
$$
\delta Z_\phi = \delta Z_W + 2\delta Z_{M_W}
+ 2M_W^{-2}\widetilde{\Pi}^a_{c\bar{c},0}(M_W^2)~~,
~~~~~(~\xi =0~)~~,
\eqno(2.28a')
$$
due to the Landau gauge WT identity (valid up to one loop)
$$
\widetilde{\Pi}_{\phi\phi,0}(M_W^2)-\widetilde{\Pi}_{WW,0}(M_W^2)
~=~2\widetilde{\Pi}_{c\bar{c},0}(M_W^2) + ~O(2~{\rm loop}) ~~.
\eqno(2.29)
$$
The validity of (2.29) can be proven directly. From the first two identities
of our (2.14) we derive
$$
\begin{array}{ll}
\displaystyle
\widehat{C}_0^a(M_{W0}^2) & ~=~ \displaystyle
\left[{M_{W0}^2 - {\widetilde\Pi}^a_{WW,0}(M_{W0}^2)\over M_{W0}^2 -
{\widetilde\Pi}^a_{\phi\phi ,0}(M_{W0}^2)}\right]^{1\over2}
~~=~\widehat{C}_0^a(M_{W}^2) + ~O(2~{\rm loop}) \\[0.6cm]
& ~=~ \displaystyle
1+\frac{1}{2}M_W^{-2}
\left[\widetilde{\Pi}_{\phi\phi,0}(M_W^2)-\widetilde{\Pi}_{WW,0}(M_W^2)\right]
+~O(2~{\rm loop})~~,
\end{array}
\eqno(2.30)
$$
and from the third identity of (2.14) plus (2.8) and (2.13) we have
$$
\begin{array}{ll}
\widehat{C}_0^a(M_W^2) \displaystyle & =~
1 - \Delta^a_3(M_W^2) +~O(2~{\rm loop})~~,~~~~~~(~\xi = 0~)~~\\[0.25cm]
& =~1+ M_W^{-2}\widetilde{\Pi}_{c\bar{c},0}^a(M_W^2)
+~O(2~{\rm loop})~~,~~~~~~(~\xi = 0~)~~.
\end{array}
\eqno(2.31)
$$
Thus, comparison of (2.30) with (2.31) gives
our one-loop order Landau-gauge WT identity (2.29)
so that (2.28a$'$) can be simply deduced from (2.28a).
As a consistency check, we note that the same one-loop result
(2.28a$'$) can also be directly derived from (2.8), (2.11) and (2.13)
for Landau gauge by using (2.31) and requiring $~C^a_{\rm mod}=1~$.
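Explicitly, combining (2.11) with (2.31) at one loop in Landau gauge gives
$$
C^a_{\rm mod} ~=~ 1 + \delta Z_{M_W} + \frac{1}{2}\delta Z_W
- \frac{1}{2}\delta Z_\phi
+ M_W^{-2}\widetilde{\Pi}^a_{c\bar{c},0}(M_W^2) + O(2~{\rm loop}) ~~,
$$
and setting $~C^a_{\rm mod}=1~$ indeed reproduces (2.28a$'$).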
In summary, the complete definition of the {\it Scheme-IV}~ for the
$SU(2)_L$ Higgs theory is as follows: The physical sector is
renormalized in
the conventional on-shell scheme~\cite{et2,Hollik}.
This means that the vacuum expectation value is renormalized
so that tadpoles are exactly cancelled,
the proper self-energies of physical states vanish on their
mass-shells, and the residues of the propagator poles are normalized
to unity. For the gauge sector, this means that $~Z_W$ ~and ~$Z_{M_W}$~
are determined by (2.27).
In the unphysical sector, the parameters ~$\kappa$~, ~$\Omega_\kappa$
~and ~$\Omega_\xi$~ are chosen as in (2.20). The ghost wavefunction
renormalization constant $~Z_c~$ is determined as usual [cf. (2.22)].
The Goldstone wavefunction renormalization constant
~$Z_\phi$~ is chosen as in (2.28) [or (2.28a)]
so that $~~\widehat{C}(M_W^2)=1~~$ is ensured.
From (2.12), we see that this will automatically
render the ET {\it modification-free}.
\subsection{{\it Scheme-IV\/} in the Standard Model}
For the full SM, the renormalization is
greatly complicated due to the various mixings in the neutral
sector~\cite{et2,Hollik}. However, the first two WT identities in
(2.19) take the {\it same} symbolic
forms for both the charged and neutral sectors
as shown in Ref.~\cite{et2}.
This makes the generalization of our {\it Scheme-IV}~
to the SM straightforward. Even so, we still
have a further complication in our final result
for determining the wavefunction renormalization constant
$~Z_{\phi^Z}~$ of the neutral Goldstone field $~\phi^Z~$,
due to the mixings in the counter term
to the bare $Z$ boson self-energy.
The SM gauge-fixing term can be compactly written as follows~\cite{et2}
$$
{\cal L}_{\rm GF} = -\displaystyle{1\over 2}(F_0^+F_0^- + F_0^-F_0^+)
-{1\over 2}({\bf F}_0^N)^T{\bf F}_0^N ~~,
\eqno(2.32)
$$
where
$$
\begin{array}{ll}
F^\pm_0 & =~ (\xi_0^\pm)^{-{1\over 2}}\partial_\mu W^{\pm\mu}_0
-(\xi_0^\pm)^{1\over 2}\kappa_0^\pm\phi^\pm_0 ~~,\\[0.25cm]
{\bf F}_0^N & =~ (F_0^Z, F_0^A)^T
= (\xi^N_0)^{-{1\over2}}\partial_\mu {\bf N}_0^\mu
-\bar{\kappa}_0\phi_0^Z~~,
\end{array}
\eqno(2.33)
$$
and
$$
{\bf N}_0^\mu = (Z_0^\mu ,A_0^\mu )^T ~~,~~~~
{\bf N}_0^\mu = {\bf Z}_N^{1\over2}{\bf N}^\mu~~;~~~~
(\xi^N_0)^{-{1\over2}} = (\xi^N)^{-{1\over2}}{\bf Z}_{\xi_N}^{-{1\over2}}~~,
\\[0.5cm]
\eqno(2.34a)
$$
$$
\displaystyle(\xi^N_0)^{-{1\over2}} =\left[
\begin{array}{ll}
(\xi^Z_0)^{-{1\over2}}~ & ~ (\xi^{ZA}_0)^{-{1\over2}}\\[0.2cm]
(\xi^{AZ}_0)^{-{1\over2}}~ & ~ (\xi^A_0)^{-{1\over2}}\end{array}\right]~~,
~~~~~~
(\xi^N)^{-{1\over2}} =\left[
\begin{array}{ll}
(\xi^Z)^{-{1\over2}}~ & ~ 0 \\[0.2cm]
0 ~ & ~ (\xi^A)^{-{1\over2}}\end{array}\right]~~;\\[0.5cm]
\eqno(2.34b)
$$
$$
\bar{\kappa}_0 = \left( (\xi^Z_0)^{-{1\over2}}\kappa_0^Z,~
(\xi^A_0)^{-{1\over2}}\kappa_0^A\right)^T~,~~~
\bar{\kappa} =
\displaystyle\left( (\xi^Z)^{-{1\over2}}\kappa^Z,~0 \right)^T~~,~~~
\bar{\kappa}_0 = {\bf Z}_{\bar\kappa} \bar{\kappa} ~~.\\[0.5cm]
\eqno(2.34c)
$$
The construction of {\it Scheme-IV}~ for
the charged sector is essentially the same as the $SU(2)_L$ theory and will
be summarized below in (2.41). So, we only need to take
care of the neutral sector. We can derive a set of WT identities
parallel to (2.14) and (2.16) as in Ref.~\cite{et2} and obtain the
following constraints on the renormalization constants for
$~\xi_N~$ and $~\bar{\kappa}~$
$$
\displaystyle
{\bf Z}_{\xi_N}^{-{1\over2}}
~=~ {\bf \Omega}_{\xi_N}^{-{1\over2}}{\bf Z}_N^{-{1\over2}}~~,~~~~
{\bf Z}_{\bar{\kappa}}= \left(\xi_N^{1\over2}\right)^T
\left[ {\bf \Omega}_{\xi_N}^{-{1\over2}}\right]^T
\left(\xi_N^{-{1\over2}}\right)^T
{\bf \Omega}_{\bar{\kappa}}Z_{\phi^Z}^{-{1\over2}} ~~,
\eqno(2.35)
$$
with
$$
\begin{array}{ll}
{\bf \Omega}_{\xi_N}^{-{1\over2}} & ~\equiv~\displaystyle\left[
\begin{array}{ll}
(\Omega_\xi^{ZZ})^{-{1\over2}}~ & (\Omega_\xi^{ZA})^{-{1\over2}}\\[0.3cm]
(\Omega_\xi^{AZ})^{-{1\over2}}~ & (\Omega_\xi^{AA})^{-{1\over2}}
\end{array}\right]~\equiv~\left[
\begin{array}{ll}
(1+\delta\Omega_\xi^{ZZ})^{-{1\over2}}~ &
-{1\over2}\delta\Omega_\xi^{ZA}\\[0.3cm]
-{1\over2}\delta\Omega_\xi^{AZ}~ &
(1+\delta\Omega_\xi^{AA})^{-{1\over2}} \end{array}\right] ~~,\\[0.9cm]
{\bf \Omega}_{\bar\kappa} & ~\equiv~ \displaystyle \left[
\begin{array}{ll}
\Omega^{ZZ}_\kappa~ & 0\\[0.3cm]
\Omega^{AZ}_\kappa~ & 0\end{array}\right]~\equiv~\left[
\begin{array}{ll}
1+\delta\Omega_\kappa^{ZZ}~ & 0\\[0.3cm]
\delta\Omega^{AZ}_{\kappa}~ & 0 \end{array} \right] ~~.\\[0.4cm]
\end{array}
\eqno(2.35a,b)
$$
As in (2.20), we choose
$$
{\bf \Omega}_{\xi_N} ~=~ \displaystyle\left[
\begin{array}{ll}
1~ & 0 \\[0.2cm]
0~ & 1
\end{array} \right] ~~,~~~~
{\bf \Omega}_{\bar{\kappa}} ~=~ \displaystyle\left[
\begin{array}{ll}
1~ & 0 \\[0.2cm]
0~ & 0 \end{array} \right] ~~,~~~~
\kappa_Z = M_Z ~~,~~~~(~{\rm in}~ Scheme-IV~)~~.
\eqno(2.36)
$$
As mentioned above, in the full SM, the corresponding identities for
$~\widehat{C}^W(M_W^2)~$ and $~\widehat{C}^Z(M_Z^2)~$
take the same symbolic forms as (2.23)
$$
\displaystyle
\widehat{C}^W(M_W^2)
~=~ \displaystyle
\left[{M_W^2 - {\widetilde\Pi}_{W^+W^-}(M_W^2)\over M_W^2 -
{\widetilde\Pi}_{\phi^+\phi^-}(M_W^2)}\right]^{1\over2} ~~,~~~~
~\widehat{C}^Z(M_Z^2)
~=~ \displaystyle
\left[{M_Z^2 - {\widetilde\Pi}_{ZZ}(M_Z^2)\over M_Z^2 -
{\widetilde\Pi}_{\phi^Z\phi^Z}(M_Z^2)}\right]^{1\over2} ~~,
\eqno(2.37)
$$
which can be simplified to unity provided that
$$
{\widetilde\Pi}_{\phi^+\phi^-}(M_W^2)~=~{\widetilde\Pi}_{W^+W^-}(M_W^2)
~~,~~~~
{\widetilde\Pi}_{\phi^Z\phi^Z}(M_Z^2)~=~{\widetilde\Pi}_{ZZ}(M_Z^2)~~.
\eqno(2.38)
$$
The solution for $~Z_{\phi^\pm}~$ from the first condition of (2.38)
is the same as in (2.28) or (2.28a), but the solution
for $~Z_{\phi^Z}~$ from the second condition of (2.38) is complicated
due to the mixings in the $~\widetilde{\Pi}_{ZZ,0}~$ counter term:
$$
\begin{array}{l}
\widetilde{\Pi}_{ZZ}(k^2) ~
=~\widetilde{\Pi}_{ZZ,0}(k^2)+\delta\widetilde{\Pi}_{ZZ}~=~
\displaystyle
Z_{ZZ}\widehat{\widetilde{\Pi}}_{ZZ,0}(k^2)+
(1-Z_{ZZ}Z_{M_Z}^2)M_Z^2 ~~, \\[0.2cm]
\displaystyle
~~~~~~\widehat{\widetilde{\Pi}}_{ZZ,0}(k^2)\equiv
\widetilde{\Pi}_{ZZ,0}(k^2) +
Z_{ZZ}^{-{1\over2}}Z_{AZ}^{1\over2}
[\widetilde{\Pi}_{ZA,0}(k^2)+\widetilde{\Pi}_{AZ,0}(k^2)]
+ Z_{ZZ}^{-1}Z_{AZ}\widetilde{\Pi}_{AA,0}(k^2) ~~;\\[0.35cm]
\widetilde{\Pi}_{\phi^Z\phi^Z}(k^2) =~ \displaystyle
Z_{\phi^Z}\widetilde{\Pi}_{
\phi^Z\phi^Z,0}(k^2) + (1-Z_{\phi^Z} )k^2 ~~. \\[0.2cm]
\end{array}
\eqno(2.39)
$$
Substituting (2.39) into the second condition of (2.38), we find
$$
\begin{array}{ll}
Z_{\phi^Z} & =~ \displaystyle
Z_{ZZ}{Z_{M_Z}^2M_Z^2-
\widehat{\widetilde{\Pi}}_{ZZ,0}(M_Z^2)\over
M_Z^2-\widetilde{\Pi}_{\phi^Z\phi^Z,0}(M_Z^2)} ~~,~~~~
(~{\rm in}~ Scheme-IV~)~\\[0.5cm]
& =~1 + \delta Z_{ZZ} + 2\delta Z_{M_Z}
+ M_Z^{-2}\left[\widetilde{\Pi}_{\phi^Z\phi^Z,0}(M_Z^2)-
\widetilde{\Pi}_{ZZ,0}(M_Z^2)\right] ~~,~~~~
(~{\rm at~}1~{\rm loop}~)~,
\end{array}
\eqno(2.40)
$$
where the quantity $~\widehat{\widetilde{\Pi}}_{ZZ,0}~$ is defined in
the second equation of (2.39).
The added complication to the solution
of $~Z_{\phi^Z}~$ due to the mixing effects in the
neutral sector vanishes at one loop.
Finally, we summarize {\it Scheme-IV}~ for the full SM.
For both the physical and unphysical parts, the
renormalization conditions will be imposed separately for the charged
and neutral sectors.
The conditions for the charged sector are identical to
those for the $SU(2)_L$ theory. In the neutral sector,
for the physical part, the photon and electric charge are renormalized
as in {\it QED}~\cite{Hollik}, while for the unphysical part,
we choose (2.36) and (2.40).
The constraints on the whole unphysical sector in the {\it Scheme-IV}~
are as follows:
$$
\begin{array}{l}
\displaystyle\hfil\kappa^\pm = M_W~, \hfil
\Omega_{\xi^\pm} = 1~, \hfil \Omega_{\kappa^\pm} = 1~,\hfil\\[0.5cm]
\displaystyle
\widetilde\Pi_{\phi^+\phi^-}(M_W^2) ~=~ \widetilde\Pi_{W^+W^-}
(M_W^2) ~~\Longrightarrow ~~ Z_{\phi^\pm} = Z_{W^\pm}{Z_{M_W}^2
M_W^2-\widetilde{\Pi}_{W^+W^-,0}(M_W^2)\over
M_W^2-\widetilde{\Pi}_{\phi^+\phi^-,0}(M_W^2)} ~~,\\[0.5cm]
\displaystyle \delta Z_{\phi^\pm} = \delta
Z_{W^\pm} + 2\delta Z_{M_W} + M_W^{-2}\left[\widetilde{\Pi}_{
\phi^+\phi^-,0}(M_W^2) - \widetilde{\Pi}_{W^+W^-,0}(M_W^2)
\right]~,~~~~(~{\rm at}~1~{\rm loop}~)~;\\[0.3cm]
\end{array}
\eqno(2.41)
$$
and
$$
\begin{array}{l}
\displaystyle\hfil\kappa_Z = M_Z~~,~~~~
{\bf \Omega}_{\xi_N} ~=~ \displaystyle\left[
\begin{array}{ll}
1~ & 0 \\[0.2cm]
0~ & 1
\end{array} \right] ~~,~~~~
{\bf \Omega}_{\bar{\kappa}} ~=~ \displaystyle\left[
\begin{array}{ll}
1~ & 0 \\[0.2cm]
0~ & 0 \end{array} \right] ~~,\\[0.98cm]
\displaystyle\widetilde\Pi_{\phi^Z\phi^Z}(M_Z^2)~=~ \widetilde\Pi_{ZZ}
(M_Z^2)~~\Longrightarrow~~ Z_{\phi^Z} = Z_{ZZ}{Z_{M_Z}^2
M_Z^2-\widehat{\widetilde{\Pi}}_{ZZ,0}(M_Z^2)\over
M_Z^2-\widetilde{\Pi}_{\phi^Z\phi^Z,0}(M_Z^2)} ~~,\\[0.5cm]
\displaystyle \delta Z_{\phi^Z} = \delta
Z_{ZZ} + 2\delta Z_{M_Z} + M_Z^{-2}\left[\widetilde{\Pi}_{
\phi^Z\phi^Z,0}(M_Z^2) - \widetilde{\Pi}_{ZZ,0}(M_Z^2)
\right]~,~~~~(~{\rm at}~1~{\rm loop}~)~;
\end{array}
\eqno(2.42)
$$
\noindent
which insure
$$
\widehat{C}^W (M_W^2) ~=~ 1~~,
\hskip 1.5cm
\widehat{C}^Z(M_Z^2) ~=~ 1~~,~~~~
(~{\rm in}~ Scheme-IV~)~~.
\eqno(2.43)
$$
Note that in (2.42) the quantity $~\widehat{\widetilde{\Pi}}_{ZZ,0}~$
is defined in terms of the bare self-energies of the neutral gauge bosons
by the second equation of (2.39) and reduces to
$~{\widetilde{\Pi}}_{ZZ,0}~$ at one loop.
\section{
Precise Modification-Free Formulation of the ET for All $R_\xi$-Gauges}
In this section, we first give the modification-free formulation
of the ET within our new {\it Scheme-IV}~
for both $~SU(2)_L~$ Higgs theory and the full SM. In Sec.~3.2,
we further generalize our result to the
electroweak chiral Lagrangian (EWCL) formalism~\cite{app,review}
which provides the most economical description of
the strongly coupled EWSB sector below the scale of new physics
denoted by the effective cut-off $\Lambda$($\leq 4\pi v\approx
3.1$~TeV).
Numerous applications of the ET in this formalism have appeared
in recent years~\cite{LHC}. The generalization to linearly
realized effective Lagrangians~\cite{linear}
is much simpler and will be briefly discussed
at the end of Sec.~3.2. Also, based upon our modification-free
formulation of the ET, we propose in Sec.~3.3
a new prescription, called the ``Divided Equivalence Theorem'' (DET),
for minimizing the approximation due to ignoring the additive $B$-term
in the ET. Finally, in Sec.~3.4, we analyze
the relation of {\it Scheme-IV}~ to
our previous schemes for the precise formulation of the ET.
\subsection{The Precise Formulation in the $SU(2)_L$ theory and the SM}
From our general formulation in Sec.~2.1, we see that the radiative
modification factor $~C^a_{\rm mod}~$ to the ET is precisely
equal to the factor $~\widehat{C}^a(k^2)~$ evaluated at the physical
mass pole of the longitudinal gauge boson in the usual on-shell scheme.
This is explicitly shown in (2.12) for $SU(2)_L$ theory and
the generalization to the full SM is
straightforward~\cite{et1,et2}
$$
\begin{array}{ll}
C^W_{\rm mod}~=~\widehat{C}^W (M_W^2)~~,~~~~ &
C^Z_{\rm mod}~=~\widehat{C}^Z(M_Z^2)~~,
\end{array}
\eqno(3.1)
$$
for the on-shell subtraction of the gauge boson masses $~M_W~$ and $~M_Z~$.
We then directly apply our renormalization
{\it Scheme-IV} to give a new modification-free formulation of the ET
{\it for all $~R_\xi$-gauges.}
For $SU(2)_L$ Higgs theory, we have
$$
\begin{array}{l}
C^a_{\rm mod}~=~1~~,~~~~~
(~Scheme-IV~{\rm for}~SU(2)_L~{\rm Higgs~theory}~)
\end{array}
\eqno(3.2)
$$
where the {\it Scheme-IV}~ is defined in (2.20), (2.28) and (2.28a).
For the SM, we have
$$
C^W_{\rm mod}~=~1~~,~~~~
C^Z_{\rm mod}~=~1~~,~~~~~(~Scheme-IV~{\rm for}~{\rm SM}~)
\eqno(3.3)
$$
where the {\it Scheme-IV}~ is summarized in (2.41) and (2.42).
We emphasize that {\it the only special step to exactly ensure
$~C^a_{\rm mod}=1~$ and $~C^{W,Z}_{\rm mod}=1~$ is to choose the unphysical
Goldstone boson wavefunction renormalization constants $~Z_\phi~$\/}
as in (2.28) for the $SU(2)_L$ theory and
$~Z_{\phi^\pm}$ and $~Z_{\phi^Z}~$ as in (2.41)-(2.42) for the SM.
Therefore, we can re-formulate the ET (1.1)-(1.2) in {\it Scheme-IV}~
with the radiative modifications removed:
$$
T[V^{a_1}_L,\cdots ,V^{a_n}_L;\Phi_{\alpha}]
= T[i\phi^{a_1},\cdots ,i\phi^{a_n};\Phi_{\alpha}]+ B ~~,
\eqno(3.4)
$$
$$
\begin{array}{ll}
B & \equiv\sum_{l=1}^n
(~T[v^{a_1},\cdots ,v^{a_l},i\phi^{a_{l+1}},\cdots ,
i\phi^{a_n};\Phi_{\alpha}]
+ ~{\rm permutations ~of}~v'{\rm s ~and}~\phi '{\rm s}~) \\[0.25cm]
& = O(M_W/E_j){\rm -suppressed}\\[0.25cm]
& v^a\equiv v^{\mu}V^a_{\mu} ~,~~~
v^{\mu}\equiv \epsilon^{\mu}_L-k^\mu /M_V = O(M_V/E)~,~~(M_V=M_W,M_Z)~,
\end{array}
\eqno(3.4a,b)
$$
with the conditions
$$
E_j \sim k_j \gg M_W ~, ~~~~~(~ j=1,2,\cdots ,n ~)~,
\eqno(3.5a)
$$
$$
T[i\phi^{a_1},\cdots ,i\phi^{a_n};\Phi_{\alpha}] \gg B ~.
\eqno(3.5b)
$$
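For instance, for two external longitudinal lines ($n=2$) the additive term
in (3.4a) reads explicitly
$$
B ~=~ T[v^{a_1},i\phi^{a_2};\Phi_{\alpha}]
+ T[i\phi^{a_1},v^{a_2};\Phi_{\alpha}]
+ T[v^{a_1},v^{a_2};\Phi_{\alpha}] ~~,
$$
each term carrying at least one power of the $O(M_V/E)$-suppressed factor
$v^\mu$.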
Once {\it Scheme-IV}~ is chosen, we need not worry about
the $C^a_{\rm mod}$-factors
in (1.1)-(1.2) in {\it any} $~R_\xi$-gauges and to any loop order.
We remark that {\it Scheme-IV}~ is also valid for the
$1/{\cal N}$-expansion~\cite{1/N}
since the above formulation is based upon the
WT identities (for two-point self-energies)
which take the {\it same} form in any perturbative expansion.
For the sake of many phenomenological applications,
the explicit generalization to the important
effective Lagrangian formalisms will be summarized in the following section.
\subsection{Generalization to the Electroweak Chiral Lagrangian Formalism}
The radiative modification-free formulation of the ET for the
electroweak chiral Lagrangian (EWCL) formalism was given
in Ref.~\cite{et3} for {\it Scheme-II}~ which cannot be used in
Landau gauge. However, since
Landau gauge is widely used for the EWCL in the literature
due to its special convenience for this non-linear
formalism~\cite{app},
it is important and useful to generalize our {\it Scheme-IV}
to the EWCL.
As to be shown below, this generalization is straightforward.
We shall summarize our main results for the full
$~SU(2)_L\otimes U(1)_Y~$ EWCL.
For the convenience of practical applications of the ET
within this formalism, some useful technical details will be provided
in Appendices-A and -B.
In the following analyses,
we shall not distinguish the notations between
bare and renormalized quantities unless it is necessary.
We start from the quantized $~SU(2)_L\otimes U(1)_Y~$ bare EWCL
$$
\begin{array}{ll}
{\cal L}^{\rm [q]} &
=~ {\cal L}_{\rm eff} + {\cal L}_{\rm GF} +{\cal L}_{\rm FP}
\\[0.5cm]
{\cal L}_{\rm eff} & =~\displaystyle
\left[{\cal L}_{\rm G}+{\cal L}^{(2)}+{\cal L}_{\rm F}\right] +
{\cal L}_{\rm eff}^{\prime} \\[0.4cm]
{\cal L}_{\rm G} & =~-\displaystyle\frac{1}{2}{\rm Tr}
({\bf W}_{\mu\nu}{\bf W}^{\mu\nu})-\frac{1}{4}B_{\mu\nu}B^{\mu\nu}~~,\\[0.3cm]
{\cal L}^{(2)} & =~\displaystyle\frac{v^2}{4}{\rm Tr}[(D_\mu U)^\dagger
(D^\mu U)] ~~, \\[0.3cm]
{\cal L}_{\rm F} & =~
\overline{F_{Lj}} i\gamma^\mu D_\mu F_{Lj}
+\overline{F_{Rj}} i\gamma^\mu D_\mu F_{Rj}
-(\overline{F_{Lj}}UM_{j} F_{Rj} +\overline{F_{Rj}}M_{j}
U^{\dagger} F_{Lj} )~~,
\end{array}
\eqno(3.6)
$$
with
$$
\begin{array}{l}
{\bf W}_{\mu\nu}\equiv\partial_\mu {\bf W}_\nu -\partial_\nu{\bf W}_\mu
+ ig[{\bf W}_\mu ,{\bf W}_\nu ] ~,~~~~
B_{\mu\nu} \equiv\partial_\mu B_\nu -\partial_\nu B_\mu ~,\\[0.3cm]
U=\exp [i\tau^a\pi^a/v]~~,~~~~
D_\mu U =\partial_\mu U +ig{\bf W}_\mu U -ig^{\prime}U{\bf B}_\mu~~,\\[0.3cm]
\displaystyle
{\bf W}_\mu \equiv W^a_\mu\frac{\tau^a}{2}~~,~~~
{\bf B}_\mu \equiv B_\mu \frac{\tau^3}{2}~~,~~~\\[0.3cm]
\displaystyle
D_\mu F_{Lj} = \left[\partial_\mu -ig\frac{\tau^a}{2}W^a_\mu-ig^\prime
\frac{Y}{2}B_\mu \right] F_{Lj}~~,~~~~
D_\mu F_{Rj} = \left[ \partial_\mu
-ig^\prime \left(\frac{\tau^3}{2}+\frac{Y}{2}\right)B_\mu
\right] F_{Rj}~~,\\[0.3cm]
F_{Lj}\equiv \left(f_{1j},~f_{2j}\right)_L^T~~,~~~
F_{Rj}\equiv \left(f_{1j},~f_{2j}\right)_R^T~~,
\end{array}
\eqno(3.7)
$$
where $~\pi^a~$'s are the would-be Goldstone fields in the non-linear
realization; $f_{1j}$ and $f_{2j}$ are the up- and down-type fermions
of the $j$-th family (either quarks or leptons), respectively,
and all right-handed fermions are $SU(2)_L$ singlets.
In (3.6), the leading order Lagrangian
$\left[{\cal L}_{\rm G}+{\cal L}^{(2)}+{\cal L}_{\rm F}\right]$
denotes the model-independent contributions;
the model-dependent next-to-leading
order effective Lagrangian $~{\cal L}_{\rm eff}^{\prime}~$ is given
in Appendix-A.
Many effective operators contained in $~{\cal L}_{\rm eff}^{\prime}~$
(cf. Appendix-A), as reflections of the new physics, can be
tested at the LHC and possible future electron (and photon) Linear
Colliders (LC) through longitudinal gauge boson scattering
processes~\cite{et5,LHC,LC}. Nonetheless, the analysis of the ET and
the modification factors $C^a_{\rm mod}~$ do not depend on the details
of $~{\cal L}_{\rm eff}^{\prime}~$.
The $~SU(2)_L\otimes U(1)_Y~$
gauge-fixing term, $~{\cal L}_{\rm GF}$, in (3.6)
is the same as that defined in (2.32) for the SM except that
the linearly realized Goldstone boson fields ($\phi^{\pm ,Z}$)
are replaced by the non-linearly realized fields ($\pi^{\pm ,Z}$).
The BRST transformations
for the bare gauge and Goldstone boson fields are
$$
\begin{array}{ll}
\hat{s}W^\pm_\mu & =~-\partial_\mu c^\pm \mp i
\left[e(A_\mu c^\pm - W^\pm_\mu c^A)
+ g {\rm c}_{\rm w}(Z_\mu c^\pm - W^\pm_\mu c^Z)\right]\\[0.3cm]
\hat{s}Z_\mu & =~ -\partial_\mu c^Z -ig{\rm c}_{\rm w}
\left[W^+_\mu c^- - W^-_\mu c^+ \right]\\[0.3cm]
\hat{s}A_\mu & =~ -\partial_\mu c^A -ie\left[W^+_\mu c^--W^-_\mu c^+\right]
\\[0.3cm]
\hat{s}\pi^\pm & =~\displaystyle M_W
\left[\pm i(\underline{\pi}^Z c^\pm + \underline{\pi}^\pm\tilde{c}^3)
-\eta\underline{\pi}^\pm
(\underline{\pi}^+c^-+\underline{\pi}^-c^+)
-\frac{\eta}{{\rm c}_{\rm w}}\underline{\pi}^\pm
\underline{\pi}^Zc^Z
+\zeta~c^\pm\right] ~~,\\[0.45cm]
\hat{s}\pi^Z & =~ \displaystyle M_Z
\left[i(\underline{\pi}^-c^+-\underline{\pi}^+c^-)-{\rm c}_{\rm w}
\eta\underline{\pi}^Z(\underline{\pi}^+c^-+\underline{\pi}^-c^+)
-\eta\underline{\pi}^Z\underline{\pi}^Zc^Z + \zeta ~c^Z\right] ~~,
\end{array}
\eqno(3.8)
$$
where $~{\rm c}_{\rm w} \equiv ~ \cos\theta_{\rm W}~$ and
$$
\begin{array}{ll}
\tilde{c}^3 & \equiv ~[\cos 2\theta_{\rm W}]c^Z +
[\sin 2\theta_{\rm W}]c^A ~~, \\[0.4cm]
\eta & \equiv ~\displaystyle\frac{\underline{\pi}\cot\underline{\pi}-1}
{\underline{\pi}^2}
= -\frac{1}{3}+O(\pi^2) ~~,\\[0.4cm]
\zeta & \equiv ~\displaystyle{\underline{\pi}}\cot\underline{\pi}
= 1 - \frac{1}{3v^2}\vec{\pi}\cdot\vec{\pi}+O(\pi^4) ~~, \\[0.4cm]
& \underline{\pi}~\equiv~\displaystyle\frac{\pi}{v}~~,~~~~
\pi ~\equiv~ \left(\vec{\pi}\cdot\vec{\pi}\right)^{\frac{1}{2}}
~=~ \left( 2\pi^+\pi^- +\pi^Z\pi^Z \right)^{\frac{1}{2}} ~~.
\end{array}
\eqno(3.9)
$$
The derivations for $~\hat{s}\pi^\pm~$ and $~\hat{s}\pi^Z~$ are
given in Appendix-B.
Note that the non-linear realization greatly complicates
the BRST transformations for the Goldstone boson fields.
This makes the $~\Delta^a_i$-quantities which appear in the
modification factors much more complex.
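For orientation, expanding (3.8) with (3.9) to lowest order in the Goldstone
fields gives
$$
\hat{s}\pi^\pm ~=~ M_W c^\pm ~\pm~ i\frac{M_W}{v}
\left(\pi^Z c^\pm + \pi^\pm\tilde{c}^3\right) + O(\pi^2) ~~,~~~~
\hat{s}\pi^Z ~=~ M_Z c^Z ~+~ i\frac{M_Z}{v}
\left(\pi^- c^+ - \pi^+ c^-\right) + O(\pi^2) ~~,
$$
so the linear pieces reproduce the structure of the linear realization in
(2.3), while the infinite tower of higher powers of $~\pi~$ is what
complicates the $~\Delta^a_i$-quantities.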
With the BRST transformations (3.8), we can write down
the $R_\xi$-gauge Faddeev-Popov ghost Lagrangian in this non-linear
formalism as:
$$
{\cal L}_{\rm FP} \displaystyle
~=~\xi_W^{1\over2}\left[ \bar{c}^+\hat{s}F^- + \bar{c}^-\hat{s}F^+ \right]
+\xi_Z^{1\over2} \bar{c}^Z\hat{s}F^Z
+\xi_A^{1\over2} \bar{c}^A\hat{s}F^A ~~.
\eqno(3.10)
$$
The full expression is very lengthy due to the complicated
non-linear BRST transformations for $~\pi^a~$'s.
In the Landau gauge, $~{\cal L}_{\rm FP}~$ is greatly simplified
and has the {\it same} form as that in the linearly realized SM,
due to the decoupling of ghosts from the Goldstone bosons at
tree-level. This is clear from (3.10)
after substituting (2.33) and setting $~~\xi_W=\xi_Z=\xi_A=0~~$
[cf. $(B6)$ in Appendix-B].
This is why the inclusion of Landau gauge into the modification-free
formulation of the ET is particularly useful.
With these preliminaries, we can now generalize our
precise formulation of the ET to the EWCL formalism.
In Sec.~2 and 3, our derivation of the factor-$C^a_{\rm mod}~$ and
construction of the renormalization {\it Scheme-IV}~ for simplifying it
to unity are based upon the general ST and WT identities. The validity
of these general identities does {\it not}
rely on any explicit expression of the $\Delta^a_i$-quantities
and the proper self-energies, and this makes our generalization
straightforward. Our results are summarized as follows.
First we consider the derivation of the modification factor-$C^a_{\rm
mod}~$'s from the amputation and renormalization of external $V_L$ and
$\pi$ lines. Symbolically,
the expressions for $~C^a_{\rm mod}~$'s still have the {\it same}
dependences on the renormalization constants and the $\Delta^a_i$-quantities
but their explicit expressions are changed in the EWCL
formalism~\cite{et3}. We consider the charged sector as an example of
the changes.
$$
C^W_{\rm mod}~=~\widehat{C}^W(M_W^2) ~=~\left.
\displaystyle\left(\frac{Z_W}{Z_{\pi^\pm}}\right)^{1\over2}Z_{M_W}
\frac{1+\Delta_1^W (k^2)+\Delta_2^W (k^2)}
{1+\Delta_3^W (k^2)}\right|_{k^2=M_W^2}
~~,
\eqno(3.11)
$$
which has the same symbolic form as the linear SM case~\cite{et2}
[see also (2.8), (2.11) and (2.12) for the $SU(2)_L$ Higgs theory
in the present paper],
but the expressions for these $~\Delta_i~$'s are given by
$$
\begin{array}{rll}
1+\Delta_1^W (k^2)+\Delta_2^W (k^2) & \equiv & \displaystyle
{1\over M_W}<0|T(\hat{s}\pi^\mp )|\bar{c}^\pm >(k) ~~,\\[0.4cm]
\displaystyle
ik_\mu \left[1+\Delta_3^W (k^2)\right] & \equiv & -<0|T(\hat{s}W^\mp_\mu )
|\bar{c}^\pm >(k) ~~,\\[0.2cm]
\end{array}
\eqno(3.12)
$$
where all fields and parameters are bare, and the BRST transformations for
$~\pi^a~$ and $~W^\pm_\mu~$ are given by (3.8).
From (3.8)-(3.10) we see that
the expression for $~\Delta_1^W (k^2)+\Delta_2^W (k^2)~$ has been
greatly complicated due to the non-linear transformation of the
Goldstone bosons, while the $~\Delta_3^W~$ takes the same symbolic
form as in the linear SM. For Landau gauge,
these $~\Delta_i~$'s still satisfy the relation (2.13)
and the two-loop graph of the type of Fig.~2b
also appears in the $~\Delta_1^W (k^2)+\Delta_2^W (k^2)~$ of (3.12).
We do not give any further detailed expressions for these
$~\Delta_i~$'s in either charged or neutral sector, because the
following formulation of the ET
within the {\it Scheme-IV}~ does {\it not} rely on any of
these complicated quantities for $~C^a_{\rm mod}~$.
Second, consider the WT identities derived in Sec.~2, which
enable us to re-express $~C^a_{\rm mod}$ in terms of
the proper self-energies of the gauge and Goldstone fields
and make our construction of the {\it Scheme-IV}~ possible.
For the non-linear EWCL, the main difference is that we now
have higher order effective operators in $~{\cal L}_{\rm eff}^{\prime}~$
(cf. Appendix-A) which
parameterize the new physics effects below
the effective cutoff scale $~\Lambda~$.
However, they do not
affect the WT identities for self-energies derived in
Sec.~2 because their contributions, by definition, can always be included
into the bare self-energies, as was done in Ref.~\cite{app}.
Thus, our construction of the {\it Scheme-IV}~ in Sec.~2
holds for the EWCL formalism. Hence, our final conclusion
on the modification-free formulation of the ET in this formalism
is the same as that given in (3.1)-(3.5) of Sec.~3.1, after simply
replacing the linearly realized Goldstone boson fields ($\phi^a$~'s)
by the non-linearly realized fields ($\pi^a$~'s).
Before concluding this section, we remark upon another popular
effective Lagrangian formalism~\cite{linear} for the weakly coupled
EWSB sector (also called the decoupling scenario).
In this formalism, the lowest order Lagrangian is just the
linear SM with a relatively light Higgs boson
and all higher order effective
operators must have dimensionalities larger than four and are
suppressed by the cutoff scale ~$\Lambda$~:
$$
{\cal L}_{\rm eff}^{\rm linear} ~=~\displaystyle
{\cal L}_{\rm SM}+\sum_{n} \frac{1}{\Lambda^{d_n-4}}{\cal L}_n
\eqno(3.13)
$$
where $~d_n~(\geq 5)~$ is the dimension of the effective
operator $~{\cal L}_n~$ and the effective cut-off $~\Lambda~$ has, in
principle, no upper bound.
The generalization of our modification-free
formulation of the ET to this formalism is extremely simple.
All our discussions in Sec.~2 and 3.1
hold, and the only new step is to absorb the new physics
contributions to the self-energies into the bare self-energies
so that the general relations between the bare and
renormalized self-energies [cf. (2.25) and (2.39)] remain the same.
This is similar to the case of the non-linear EWCL
(the non-decoupling scenario).
\subsection{Divided Equivalence Theorem: a New Improvement}
In this section, for the purpose of minimizing the approximation
from ignoring the additive $B$-term in the ET (3.4) or (1.1),
we propose a convenient new prescription,
called the ``Divided Equivalence Theorem'' (DET), based upon our
modification-free formulation (3.4).
We first note that the rigorous {\it Scheme-IV}~ and the previous
{\it Scheme-II}~\cite{et1,et2} (cf. Sec.~3.4)
{\it do not rely on the size of the $B$-term}. Furthermore, the result
$~C^a_{\rm mod}=1~$ greatly simplifies the expression for the $B$-term
[cf. (1.1) and (3.4)].
This makes any further exploration and application of either
the physical or technical content of the ET very convenient.
In the following, we show how the error caused by ignoring $B$-term
in the ET can be minimized through the new prescription DET.
For any given perturbative expansion up to a finite order $N$,
the $S$-matrix $T$ (involving $V_L^a$
or $\phi^a$) and the $B$-term can be generally written as
$~~~T=\sum_{\ell =0}^{N}T_{\ell}~~~$ and
$~~~B=\sum_{\ell =0}^{N}B_{\ell}~~~$. Within our
modification-free formulation (3.4) of the ET, we have no complication
due to the expansion of each $~C^a_{\rm mod}$-factor on the RHS of (1.1)
[i.e., $~~C^a_{\rm mod}=\sum_{\ell =0}^{N}
\left(C^a_{\rm mod}\right)_{\ell}~~$].
Therefore, at $\ell$-th order and with $~C^a_{\rm mod}=1~$ insured,
the exact ET identity in (3.4) can be expanded as
$$
T_{\ell}[V^{a_1}_L,\cdots ,V^{a_n}_L;\Phi_{\alpha}]
~~=~~ T_{\ell}[i\phi^{a_1},\cdots ,i\phi^{a_n};\Phi_{\alpha}]
~+~ B_{\ell} ~~,
\eqno(3.14)
$$
and the conditions (3.5ab) become, at the $\ell$-th order,
$$
E_j \sim k_j \gg M_W ~, ~~~~~(~ j=1,2,\cdots ,n ~)~,
\eqno(3.15a)
$$
$$
T_{\ell}[i\phi^{a_1},\cdots ,i\phi^{a_n};\Phi_{\alpha}] \gg B_{\ell} ~,
~~~~~(~ \ell =0,1,2,\cdots ~)~.
\eqno(3.15b)
$$
We can estimate the $\ell$-th order $B$-term as [cf. (1.3)]
$$
B_{\ell} = O\left(\frac{M_W^2}{E_j^2}\right)
T_{\ell}[ i\phi^{a_1},\cdots , i\phi^{a_n}; \Phi_{\alpha}] +
O\left(\frac{M_W}{E_j}\right)
T_{\ell}[ V_{T_j} ^{a_{r_1}}, i\phi^{a_{r_2}},
\cdots , i\phi^{a_{r_n}}; \Phi_{\alpha}]~~,
\eqno(3.16)
$$
which is $~O(M_W/E_j)$-suppressed for $~~E_j\gg M_W~$.
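Numerically, taking $~M_W\simeq 80$~GeV and $~E_j = 1$~TeV, the two
suppression factors in (3.16) are $~M_W^2/E_j^2\simeq 0.6\%~$ and
$~M_W/E_j\simeq 8\%~$.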
When the next-to-leading order (NLO: $\ell = 1$~) contributions
(containing possible new physics effects, cf. Appendix-A for instance)
are included, the main limitation\footnote{We must clarify that,
for the discussion of the {\it physical content} of the ET as a criterion
for probing the EWSB, as done in Refs.~\cite{et4,et5}, the issue of
including/ignoring the $B$-term is essentially {\it irrelevant} because
both the Goldstone boson amplitude and the $B$-term are explicitly
estimated order by order and are compared to each other~\cite{et4,et5}.}~
on the prediction of the ET for the $V_L$-amplitude via computing
the Goldstone boson amplitude
is due to ignoring the leading order $B_0$-term. This leading $B_0$-term
is of $O(g^2)$~\cite{et4,et5} in the heavy Higgs SM and the CLEWT and
cannot always be ignored in comparison with the NLO $\phi^a$-amplitude
$~T_1~$ though we usually have $~T_0\gg B_0~$ and
$~T_1\gg B_1~$ respectively~\cite{et4,et5} because of
(3.16). It has been shown~\cite{et5} that,
except in some special kinematic regions,
$~~T_0\gg B_0~~$ and $~~T_1\gg B_1~~$ for all effective operators
containing pure Goldstone boson interactions (cf. Appendix-A),
as long as $~E_j\gg M_W~$~.
Based upon the above new equations (3.14)-(3.16), we can precisely
formulate the ET at {\it each given order-$\ell$}
of the perturbative expansion
where only $~B_{\ell}~$, {\it but not the whole $B$-term}, will be
ignored to build the longitudinal/Goldstone boson equivalence.
Hence, {\it the equivalence is divided order by order} in the perturbative
expansion, and the condition for this divided equivalence is
$~~T_{\ell} \gg B_{\ell}~~$ (at the $\ell$-th order)
which is much weaker than $~~T_{\ell}\gg B_0~~$ [deduced from (3.5b)]
for $~\ell\geq 1~$. For convenience, we call this formulation the
``Divided Equivalence Theorem'' (DET).
Therefore, to improve the prediction of $V_L$-amplitude for
the most interesting NLO contributions (in $~T_1~$)
by using the ET, we propose the following simple prescription:
\begin{description}
\item[{\bf (i).}] Perform a direct and precise unitary gauge
calculation for the tree-level $V_L$-amplitude $~T_0[V_L]~$
which is quite simple.
\item[{\bf (ii).}]
Make use of the DET (3.14) and deduce $~T_1[V_L]~$ from the
Goldstone boson amplitude $~T_1[GB]~$, by ignoring $~B_1~$ only.
\end{description}
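In other words, up to the NLO the prescription amounts to the simple
combination (our shorthand for steps {\bf (i)}-{\bf (ii)})
$$
T[V_L] ~\simeq~ T_0[V_L] ~+~ T_1[GB] ~~,
$$
in which the only neglected piece is $~B_1~$ [cf. (3.14) at $~\ell =1~$].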
To see how simple the direct unitary gauge calculation of the tree-level
$V_L$-amplitude is, we calculate the $~W^+_LW^-_L\rightarrow W^+_LW^-_L~$
scattering amplitude in the EWCL formalism as a typical example.
The exact tree-level amplitude $~T_0[W_L]~$ only takes three lines:
$$
\begin{array}{l}
T_0[W_L]=
ig^2\left[ -(1+x)^2 \sin^2\theta
+ 2x(1+x)(3\cos\theta -1)
-{\rm c}_{\rm w}^2
\displaystyle\frac{4x(2x+3)^2\cos\theta}{4x+3
-{\rm s}_{\rm w}^2{\rm c}_{\rm w}^{-2}}\right.\\[0.4cm]
\left. +{\rm c}_{\rm w}^2
\displaystyle\frac{8x(1+x)(1-\cos\theta )(1+3\cos\theta )
+2[(3+\cos\theta )x+2][(1-\cos\theta )x-\cos\theta ]^2}
{2x(1-\cos\theta )+{\rm c}_{\rm w}^{-2}} \right]\\[0.4cm]
+ie^2 \left[ -\displaystyle\frac{x(2x+3)^2\cos\theta}
{x+1}+4(1+x)(1+3\cos\theta )+
\displaystyle\frac{[(3+\cos\theta )x+2][(1-\cos\theta )x
-\cos\theta ]^2}{x(1-\cos\theta )} \right]~,
\end{array}
\eqno(3.17)
$$
where $~\theta~$ is the scattering angle,
$~x\equiv p^2/M_W^2~$ with $~p~$ denoting the C.M. momentum, and
$~{\rm s}_{\rm w}\equiv \sin\theta_W~,
~{\rm c}_{\rm w}\equiv \cos\theta_W~$.
(3.17) contains five diagrams:
one contact diagram, two $s$-channel diagrams
by $Z$ and photon exchange, and two similar $t$-channel diagrams.
The corresponding Goldstone boson amplitude also contains five similar
diagrams, except that all external lines are scalars. However, even to
include just the leading $B_0$-term, which contains only one
external $v^a$-line [cf. (3.4a) or (1.1b)], one has to compute an
extra $~5\times 4=20~$ tree-level graphs due to all possible permutations
of the external $v^a$-line.
It is easy to figure out how the number of these
extra graphs will be greatly increased
if one explicitly calculates the whole
$B$-term. Therefore, we point out that,
as the lowest order tree-level $V_L$-amplitude is
concerned, it is {\it much simpler} to directly calculate the
precise tree-level $V_L$-amplitude in the unitary gauge
than to indirectly calculate
the $R_\xi$-gauge Goldstone boson amplitude {\it plus} the complicated
$B_0$ or the whole $B$ term [which contains much more diagrams
due to the permutations of $~v_\mu$-factors
in (1.1b) or (3.4a)] as proposed in some literature~\cite{other}.
To minimize the numerical error related to the $B$-term,
our new prescription DET is the best and the most convenient.
Then, let us further exemplify, up to the NLO of the EWCL formalism,
how the precision of the ET is improved by the above new prescription
DET. Consider the lowest order contributions from
$~{\cal L}_{\rm G}+{\cal L}^{(2)}+{\cal L}_{\rm F}~$
[cf. (3.6)] and the NLO contributions
from the important operators $~{\cal L}_{4,5}~$ (cf. Appendix A).
For the typical process $~W_LW_L\rightarrow W_LW_L~$ up to the NLO,
both explicit calculations and the power counting analysis~\cite{et4,et5}
give
$$
\begin{array}{ll}
\displaystyle
T_0 ~=~ O\left(\frac{E^2}{v^2}\right) ~~,~~~~~ &
B_0 ~=~ O(g^2)~~; \\[0.4cm]
\displaystyle
T_1 ~=~ O\left(\frac{E^2}{v^2}\frac{E^2}{\Lambda^2}\right)~~,~~~~~ &
\displaystyle
B_1 ~=~ O\left(g^2\frac{E^2}{\Lambda^2}\right)~~;
\end{array}
\eqno(3.18)
$$
where $~v=246$~GeV and $~\Lambda\simeq 4\pi v \simeq 3.1$~TeV.
From the condition (3.5b) and (3.15b) and up to the NLO, we have
$$
\begin{array}{lll}
(3.5b): & T_1 \gg B_0 ~~\Longrightarrow~~ 1\gg 24.6\%~~,
& ({\rm for}~ E=1~{\rm TeV})~~;\\[0.3cm]
(3.15b): & T_0 \gg B_0 ~~\Longrightarrow~~ 1\gg 2.56\%~~,
& ({\rm for}~ E=1~{\rm TeV})~~;\\[0.3cm]
& T_1 \gg B_1 ~~\Longrightarrow~~ 1\gg 2.56\%~~,
& ({\rm for}~ E=1~{\rm TeV})~~.
\end{array}
\eqno(3.19)
$$
Here we see that, up to the NLO and for $~E=1$~TeV,
the precision of the DET (3.14)-(3.16) is increased
by about a factor of $10$
in comparison with the usual prescription of the ET [cf. (3.5a,b)]
as far as ignoring the $B$-term is concerned.
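As a cross-check of the numbers quoted in (3.19), the estimates (3.18) give,
for $~E=1$~TeV, $~v=246$~GeV, $~\Lambda\simeq 4\pi v~$ and taking the standard
weak coupling value $~g^2\simeq 0.42~$,
$$
\frac{B_0}{T_0} ~\sim~ \frac{B_1}{T_1} ~\sim~ g^2\frac{v^2}{E^2}
~\simeq~ 2.6\% ~~,~~~~
\frac{B_0}{T_1} ~\sim~ g^2\frac{v^2}{E^2}\frac{\Lambda^2}{E^2}
~\simeq~ 25\% ~~.
$$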
It is clear that {\it the DET (3.14)-(3.16)
can be applied to a much wider high energy
region than the usual ET due to the much weaker condition (3.15b).}
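The percentages quoted in (3.19) follow from elementary arithmetic with the
orders of magnitude in (3.18). A minimal numerical check (assuming
$g^2\simeq 0.42$ for the $SU(2)_L$ coupling, which is our input here rather
than a number taken from the text):
\begin{verbatim}
import math

g2  = 0.42              # assumed SU(2)_L coupling squared
v   = 0.246             # TeV
Lam = 4.0*math.pi*v     # ~ 3.1 TeV
E   = 1.0               # TeV

T0 = E**2/v**2          # O(E^2/v^2)
T1 = T0 * E**2/Lam**2   # O(E^2/v^2 * E^2/Lambda^2)
B0 = g2                 # O(g^2)
B1 = g2 * E**2/Lam**2   # O(g^2 * E^2/Lambda^2)

print(B0/T1)   # ~ 0.24, cf. the 24.6% in (3.19) for condition (3.5b)
print(B0/T0)   # ~ 0.025, cf. the 2.56% for condition (3.15b)
print(B1/T1)   # ~ 0.025
\end{verbatim}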
In general, to perform a calculation up to any order $~\ell \geq 1~$,
we can apply the DET to minimize the approximation
due to the $B$-term in the following way:
compute the full $V_L$-amplitude up to
the $(\ell -1)$-th order and apply
the DET (3.14) at the $\ell$-th order with $~B_{\ell}~$ ignored.
The practical application of the DET up to the NLO
($\ell =1$) turns out to be the most convenient.
It is obvious that the above formulation for
DET generally holds for both the SM and
the effective Lagrangian formalisms.
\subsection{Comparison of Scheme-IV with Other Schemes}
The name of the new renormalization scheme, {\it Scheme-IV}~,
reflects the fact that three renormalization
schemes for the ET have been defined previously: {\it Schemes-I~} and {\it -II}~
in references~\cite{et1,et2}, and {\it
Scheme-III}~ in reference~\cite{et4}.
{\it Scheme-I}~\cite{et1,et2} is a generalization of the usual one-loop
level on-shell scheme~\cite{Hollik} to all orders.
In this scheme, the unphysical sector is renormalized such that, for
example, in the pure $SU(2)_L$ Higgs theory
$$
\begin{array}{c}
\widetilde{\Pi}^a_{WW}(\xi\kappa M_W)
=\widetilde{\Pi}^a_{W\phi}(\xi\kappa M_W)
=\widetilde{\Pi}^a_{\phi\phi}(\xi\kappa M_W)
=\widetilde{\Pi}^a_{c\bar{c}}(\xi\kappa M_W)=0~~,
\end{array}
\eqno(3.20a)
$$
$$
\begin{array}{c}
\displaystyle
\left.{d\over dk^2}\widetilde{\Pi}^a_{\phi\phi}(k^2)\right|_{k^2=
\xi\kappa M_W} ~=~0~~,~~~~~
\left.{d\over dk^2}\widetilde{\Pi}^a_{c
\overline{c}}(k^2)\right|_{k^2=\xi\kappa M_W} ~=~ 0~~,
\end{array}
\eqno(3.20b)
$$
where $~k^2=\xi\kappa M_W~$ is the tree level mass pole of the
unphysical sector. In this scheme,
the modification factor is not unity, but does take a very simple form
in terms of a single parameter determined by the renormalization
scheme~\cite{et1,et2},
$$
C^a_{\rm mod} ~=~ \Omega_\kappa^{-1} ~~, ~~~~~
(~Scheme-I~{\rm with}~\kappa =M_W~{\rm and}~\xi =1~)~~.
\eqno(3.21)
$$
{\it Scheme-II}~\cite{et1,et2} is a variation of the
usual on-shell scheme, in which the unphysical sector
is renormalized such that $C^a_{mod}$ is set equal to unity.
The choice here is to impose all of
the conditions in (3.20) except that the Goldstone boson wavefunction
renormalization constant $Z_\phi$ is not determined by (3.20b) but
specially chosen. To accomplish this, $\Omega_\xi$ is adjusted so
that ~$\widetilde{\Pi}^a_{WW}(\xi\kappa M_W)=0~$, and $Z_\phi$ is adjusted
so that $\widetilde{\Pi}^a_{\phi\phi}(\xi\kappa M_W)=0$. $\Omega_\kappa$
is set to unity, which ensures that $~\widetilde{\Pi}^a_{W\phi}(\xi\kappa
M_W)=\widetilde{\Pi}^a_{c\bar{c}}(\xi\kappa M_W)=0~$, and $Z_c$ is
adjusted to ensure that the residue of the ghost propagator is
unity. The above conditions guarantee that $\widehat{C}^a(\xi\kappa
M_W)=1$. The final choice is to set $~\kappa = \xi^{-1}M_W~$ so
that $~C^a_{\rm mod}=1~$. This scheme is particularly convenient for
the 't~Hooft-Feynman gauge, where $\kappa = M_W$.
For $~\xi\neq 1~$, there is a complication
due to the tree level gauge-Goldstone-boson mixing term
proportional to $~\kappa -M_W = (\xi^{-1}-1)M_W~$. But this is
not a big problem since the mixing term corresponds to a
tree level gauge-Goldstone-boson propagator similar to that found in
the Lorentz gauge ($~\kappa =0~$)~\cite{lorentz}. The main shortcoming of
this scheme is that it does not include the Landau gauge
since, for $~\xi =0~$,
the choice $~\kappa = \xi^{-1}M_W~$ is singular
and the quantities $~\Omega_{\xi ,\kappa}~$ are ill-defined.
In contrast to {\it Scheme-II}~, {\it Scheme-IV}~ is
valid for all $R_\xi$-gauges including both Landau and 't~Hooft-Feynman
gauges. The primary inconvenience of {\it Scheme-IV}~ is
that for non-Landau gauges all unphysical mass poles
deviate from their tree-level values~\cite{RT,Hollik,et2},
thereby invalidating condition (3.20a).\footnote{
The violation of (3.20a) in non-Landau gauges
is not special to {\it Scheme-IV}~,
but is a general feature of all schemes~\cite{RT}-\cite{BV}
which choose the renormalization prescription (2.21)
for the gauge-fixing condition~\cite{RT,Hollik,et2}.}~
This is not really a problem since
these poles have no physical effect.
{\it Scheme-III}~\cite{et4} is specially designed for
the pure $V_L$-scatterings in the strongly coupled EWSB sector.
For a $~2\rightarrow n-2~$
( $n\geq 4$ ) strong pure $V_L$-scattering process,
the $B$-term defined in (1.1) is of order$~~O(g^2)v^{n-4}~~$,
where $~v=246$~GeV. By the electroweak power counting
analysis~\cite{et5,et4}, it has been shown~\cite{et4} that all
$g$-dependent contributions from either vertices or the mass poles of
gauge-boson, Goldstone boson and ghost fields are at most
of $~O(g^2)~$ and the contributions of fermion Yukawa couplings ($y_f$)
coming from fermion-loops are of $~O(\frac{y_f^2}{16\pi^2})\leq
O(\frac{g^2}{16\pi^2})~$ since $~y_f\leq y_t \simeq O(g)~$.
Also, in the factor $C^a_{\rm mod}$ all loop-level $\Delta^a_i$-quantities
[cf. eq.~(2.9) and Fig.1] are of $~~O(\frac{g^2}{16\pi^2})~~$
since they contain at least two ghost-gauge-boson or ghost-scalar
vertices. Hence, if the
$~B$-term (of $~O(g^2)f_\pi^{n-4}~$) is ignored in the strong pure
$V_L$-scattering amplitude, all other $g$- and $y_f$-dependent terms
should also be ignored. Consequently we can simplify
the modification factor such that $~C^a_{\rm mod} \simeq 1+O(g^2)~$
by choosing~\cite{et4}
$$
\begin{array}{l}
\displaystyle
\left.
Z_{\phi^a}=\left[\left(\displaystyle\frac{M_V}{M_V^{\rm phys}}\right)^2
Z_{V^a}Z_{M_V}^2 \right]\right|_{g,e,y_f=0} ~~,~~~~(~{\it Scheme-III}~)~.
\end{array}
\eqno(3.22)
$$
All other renormalization conditions
can be freely chosen as in any standard renormalization scheme.
(Here $~M_V^{\rm phys}~$ is the physical mass pole of the
gauge boson $V^a$. Note that we have set $~M_V^{\rm phys}=M_V~$ in
{\it Scheme-IV}~ for simplicity.) In this scheme,
because of the neglect of all gauge and Yukawa couplings,
all gauge-boson, Goldstone-boson and ghost mass poles are
approximately zero. Thus, all $R_\xi$-gauges (including both
't~Hooft-Feynman and Landau gauges) become {\it equivalent}, for
the case of strong pure $V_L$-scatterings in both the
heavy Higgs SM and the EWCL formalism. But, for
processes involving fields other than longitudinal gauge bosons,
only {\it Scheme-II}~ and {\it Scheme-IV}~
are suitable.\footnote{Some interesting examples are
$~W_LW_L,Z_LZ_L\rightarrow t\bar{t}~$, $~V_LH\rightarrow V_LH~$, and
$~AA\rightarrow W_LW_L, WWV_LV_L~$, etc. ($A=$photon).}~~
Even in the case of pure $V_L$-scattering, we
note that in the kinematic regions around
the $t$ and $u$ channel singularities, photon exchange
becomes important and must be retained~\cite{et5}.
In this case, {\it Scheme-IV}~ or {\it Scheme-II}~ is required to remove
the $~C^a_{\rm mod}$-factors.
In summary, renormalization {\it Scheme-IV} ensures the
modification-free formulation of the ET [cf. (3.1)-(3.5)]. It is
valid for all $R_\xi$-gauges, but is particularly convenient for the
Landau gauge where all unphysical Goldstone boson and
ghost fields are exactly massless~\cite{et2,MW}.
{\it Scheme-II}~\cite{et1,et2}, on the other hand, is
particularly convenient for 't~Hooft-Feynman gauge.
For all other $R_\xi$-gauges,
both schemes are valid, but the {\it Scheme-IV}~ may be more
convenient due to the absence of the tree-level gauge-Goldstone boson
mixing.
\section{Explicit One-Loop Calculations}
\subsection{One-loop Calculations for the Heavy Higgs Standard Model}
To demonstrate the effectiveness of {\it Scheme-IV}~, we first
consider the
heavy Higgs limit of the standard model. The complete one-loop
calculations for proper self-energies and renormalization constants in
the heavy Higgs limit have been given for general $R_\xi$-gauges in
reference~\cite{et2} for renormalization {\it Scheme-I}~. Since, in
this scheme,
$~\Omega^{W,ZZ}_{\xi,\kappa}=1+\delta\Omega^{W,ZZ}_{\xi,\kappa}
\approx 1$~ at one-loop in the heavy Higgs limit, {\it Scheme-I}~
coincides with {\it Scheme-IV}~ to this order. Hence, we can
directly use those results to demonstrate that $C^{W,Z}_{mod}$ is
equal to unity in {\it Scheme-IV}~.
The results for the charged and neutral sectors are~\cite{et2}:
$$
\begin{array}{rcl}
\displaystyle
\widetilde{\Pi}_{WW,0}(k^2) &
= & \displaystyle -{g^2\over16\pi^2}\left[{
1\over8}m_H^2+{3\over4}M_W^2\ln{m_H^2\over M_W^2}\right]~~,\\[0.5cm]
\displaystyle
\widetilde{\Pi}_{ZZ,0}(k^2) &
= & \displaystyle -{g^2\over{16\pi^2c_{\rm w}^2}}\left[{
1\over8}m_H^2+{3\over4}M_Z^2\ln{m_H^2\over M_Z^2}\right]~~,\\[0.5cm]
\displaystyle
\widetilde{\Pi}_{\phi^\pm\phi^\pm,0}(k^2) &
= & \displaystyle -{g^2\over16\pi^2}
k^2\left[{1\over8}{m_H^2\over M_W^2} + \left({3\over4} -
{\xi_W\over2}\right)\ln{m_H^2\over M_W^2}\right]~~,\\[0.5cm]
\displaystyle
\widetilde{\Pi}_{\phi^Z\phi^Z,0}(k^2) &
= & \displaystyle -{g^2\over{16\pi^2c_{\rm w}^2}}
k^2\left[{1\over8}{m_H^2\over M_Z^2} + \left({3\over4} -
{\xi_Z\over2}\right)\ln{m_H^2\over M_Z^2}\right]~~,\\[0.5cm]
\displaystyle
\delta Z_{M_W} &
= & \displaystyle -{g^2\over16\pi^2}\left[{1\over16}
{m_H^2\over M_W^2}+{5\over12}\ln{m_H^2\over M_W^2}\right]~~,\\[0.5cm]
\displaystyle
\delta Z_{M_Z} &
= & \displaystyle -{g^2\over16\pi^2c_{\rm w}^2}\left[{1\over16}
{m_H^2\over M_Z^2}+{5\over12}\ln{m_H^2\over M_Z^2}\right]~~,\\[0.5cm]
\displaystyle
\delta Z_{W} &
= & \displaystyle -{g^2\over16\pi^2}{1\over12}
\ln{m_H^2\over M_W^2}~~,\\[0.5cm]
\displaystyle
\delta Z_{ZZ} &
= & \displaystyle {-}{g^2\over16\pi^2}{1\over12c_{\rm w}^2}
\ln{m_H^2\over M_Z^2}~~,\\[0.5cm]
\displaystyle
\delta Z_{\phi^\pm} & = &
\displaystyle -{g^2\over16\pi^2}\left[
{1\over8}{m_H^2\over M_W^2} + \left({3\over4}-{\xi_W\over2}
\right)\ln{m_H^2\over M_W^2}\right]~~,\\[0.5cm]
\displaystyle
\delta Z_{\phi^Z} & = &
\displaystyle -{g^2\over16\pi^2c_{\rm w}^2}\left[
{1\over8}{m_H^2\over M_W^2} + \left({3\over4}-{\xi_Z\over2}
\right)\ln{m_H^2\over M_Z^2}\right]~~,\\[0.5cm]
\displaystyle
\Delta^{W}_1(k^2) & = &
\displaystyle {-}{g^2\over16\pi^2}{\xi_W\over4}
\ln{m_H^2\over M_W^2}~~,\\[0.5cm]
\displaystyle
\Delta^{ZZ}_1(k^2) & = &
\displaystyle {-}{g^2\over16\pi^2}{\xi_Z\over4c_{\rm w}^2}
\ln{m_H^2\over M_Z^2}~~.\\[0.5cm]
\end{array}
\eqno(4.1)
$$
Note that $\Delta^W_{2,3}$ and the corresponding neutral sector terms
(cf. Fig.~1) are not enhanced by powers or logarithms of the large
Higgs mass, and thus are ignored in this approximation. Substituting
$~\delta Z_W$, $\delta Z_{M_W}$, $\widetilde{\Pi}_{\phi^\pm\phi^\pm
,0}$, $\widetilde{\Pi}_{WW ,0}~$ and
$~\delta Z_{ZZ}$, $\delta Z_{M_Z}$, $\widetilde{\Pi}_{\phi^Z\phi^Z
,0}$, $\widetilde{\Pi}_{ZZ ,0}~$
into the right hand sides of
eqs.~(2.41) and (2.42) respectively, we obtain
$$
\begin{array}{ll}
\delta Z_{\phi^\pm} &
\displaystyle =~\displaystyle{g^2\over16\pi^2}\left[-{1\over8}
{m_H^2\over M_W^2}+\left(-{3\over4}+{\xi_W\over2}\right)
\ln{m_H^2\over M_W^2}\right]~~,\\[0.5cm]
\delta Z_{\phi^Z} & =~\displaystyle{g^2\over16\pi^2c^2_{\rm w}}
\left[-{1\over8}
{m_H^2\over M_W^2}+\left(-{3\over4}+{\xi_Z\over2}\right)
\ln{m_H^2\over M_Z^2}\right]~~,
\end{array}
\eqno(4.2)
$$
verifying the equivalence of {\it Schemes-I}~ and {\it -IV}~ in
this limit. This means that the one-loop value of ~$C_{\rm
mod}^{W,Z}$~ should be equal to unity.
Using (2.8), (2.11) and the
renormalization constants given in (4.1), we directly compute the
~$C_{\rm mod}^{W,Z}$~ up to one-loop
in the $R_\xi$-gauges for the heavy Higgs case as
$$
\begin{array}{ll}
\displaystyle
C_{\rm mod}^W &
\displaystyle = ~1+{1\over2}\left(\delta Z_W -
\delta Z_{\phi^\pm} +2\delta Z_{M_W}\right)
+ \Delta^{W}_{1}(M_W^2)\\[0.5cm]
\displaystyle
&
\displaystyle =~1 + {g^2\over16\pi^2}\left\{\left({1\over16} - {1\over16}
\right){m_H^2\over M_W^2} + \left({1\over24} + {3\over8} -
{5\over12} - {\xi_W\over4}+ {\xi_W\over4}\right)
\ln{m_H^2\over M_W^2}\right\}\\[0.5cm]
& =~ 1 + O(2~{\rm loop})~~,\\[0.5cm]
\displaystyle
C_{\rm mod}^Z &
\displaystyle = ~1+{1\over2}\left(\delta Z_{ZZ} -
\delta Z_{\phi^Z} +2\delta Z_{M_Z}\right)
+ \Delta^{Z}_{1}(M_Z^2)\\[0.5cm]
\displaystyle
& \displaystyle
=~1 + {g^2\over16\pi^2c_{\rm w}^2}\left\{\left({1\over16} - {1\over16}
\right){m_H^2\over M_W^2} + \left({1\over24} + {3\over8} -
{5\over12} - {\xi_Z\over4}+ {\xi_Z\over4}\right)
\ln{m_H^2\over M_Z^2}\right\}\\[0.5cm]
& =~ 1 + O(2~{\rm loop})~~. \\[0.5cm]
\end{array}
\eqno(4.3)
$$
Equation (4.3) is an explicit one-loop proof that $~C^{W,Z}_{mod}=1~$ in
{\it Scheme-IV}~. The agreement of {\it Schemes-I}~ and {\it -IV}~
only occurs in the heavy Higgs limit up to one-loop order.
When the Higgs is not very heavy,
the full one-loop corrections from all scalar and gauge couplings must
be included, so that {\it Scheme-IV}~ and {\it Scheme-I}~ are
no longer equivalent.
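As a quick arithmetic cross-check of the cancellation displayed in (4.3),
the coefficients of the logarithms indeed sum to zero,
$$
\frac{1}{24}+\frac{3}{8}-\frac{5}{12}=\frac{1+9-10}{24}=0~,
$$
while the $m_H^2$-terms and the $\xi$-dependent terms cancel pairwise, so
that only two-loop contributions remain.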
\subsection{Complete One-Loop Calculations for the $U(1)$ Higgs theory}
The simplest case to explicitly demonstrate {\it Scheme-IV}~ for
arbitrary Higgs mass is the $U(1)$ Higgs theory. In this section, we
use complete one-loop calculations in the $U(1)$ Higgs theory (for any
value of $m_H$) to explicitly verify that $C_{mod}=1$ in {\it
Scheme-IV}~ for both Landau and 't~Hooft-Feynman gauges.
The $U(1)$ Higgs theory contains minimal field content:
the physical Abelian gauge field $A_\mu$ (with mass $M$), the Higgs
field $H$ (with mass $m_H$),
as well as the unphysical Goldstone boson field
$\phi$ and the Faddeev-Popov ghost fields $c, \overline{c}$
(with mass poles at $\xi\kappa M$). Because the symmetry group is
Abelian, $\Delta_2$ and $\Delta_3$ do not occur and the modification
factor is given by
$$
C_{\rm mod} = \left({Z_A\over Z_\phi}\right)^{1\over2}
Z_M[1+\Delta_1(M^2)]~~,
\eqno(4.4)
$$
with
$$
\Delta_1(k^2)=\displaystyle\frac{Z_gZ_H^{\frac{1}{2}}}{Z_M}
\frac{g\mu^{\epsilon}}{M}
\int_q <0|H(-k-q)c(q)|\bar{c}(k)>
\eqno(4.5)
$$
where $~~\int_q \equiv \displaystyle\int\frac{d^Dq}{(2\pi)^D}~~$
and $~D=4-2\epsilon~$. $~\Delta_1~$ vanishes identically in Landau
gauge ($~\xi =0~$), because in the $U(1)$ theory the ghosts
couple only to the Higgs boson and that coupling is proportional to
$~\xi~$. In {\it Scheme-IV}~, the wavefunction renormalization
constant $Z_\phi$ of the Goldstone boson field is defined to be
$$
Z_\phi \displaystyle
= Z_A{Z_{M}^2M^2-\widetilde{\Pi}_{AA,0}(M^2)\over
M^2-\widetilde{\Pi}_{\phi\phi,0}(M^2)}~~.
\eqno(4.6)
$$
Substituting (4.6) into (4.4), we obtain the following one-loop
expression for $C_{\rm mod}$
$$
\displaystyle
C_{\rm mod} \displaystyle
= 1 + {1\over2}M^{-2}\left[\widetilde{\Pi}_{AA,0}(M^2)-
\widetilde{\Pi}_{\phi\phi,0}(M^2)\right]+\Delta_1(M^2)~~.
\eqno(4.7)
$$
We shall now explicitly verify that $C_{mod}$ is equal to unity in
both Landau and 't~Hooft-Feynman gauges.
In Landau gauge:\\[0.35cm]
$$
\begin{array}{rl}
\displaystyle
\left.\widetilde{\Pi}_{AA,0}(k^2)\right|_{\xi=0}
= ig^2 &
\displaystyle \left\{-I_1(m_H^2)
\vphantom{\left\lgroup I_{41}(k^2;M^2,m_H^2)\right\rgroup}
-4M^2I_2(k^2;M^2,m_H^2) + k^2I_2(k^2;0,m_H^2)\right. \\[0.5cm]
\displaystyle
&\left. +4k^2I_3(k^2;0,m_H^2) + 4\left\lgroup
I_{41}(k^2;M^2,m_H^2) + k^2I_{42}(k^2;a^2,b^2)
\right\rgroup\right\} ~~,\\[0.5cm]
\displaystyle
\left.\widetilde{\Pi}_{\phi\phi,0}(k^2)\right|_{\xi=0}
= ig^2
&\displaystyle \left\{
-{m_H^2\over M^2}I_1(m_H^2) + {m_H^4\over M^2}
I_2(k^2;0,m_H^2) - 4k^2I_2(k^2;M^2,m_H^2)\right. \\[0.5cm]
\displaystyle
&\displaystyle \left.\kern60pt + 4{k^2\over M^2}\left\lgroup I_{41}
(k^2;M^2,m_H^2) - I_{41}(k^2;0,m_H^2)\right\rgroup\right.
\\[0.5cm]
&
\displaystyle
\left.\kern60pt + 4{m_H^4\over M^2}\left\lgroup I_{42}(k^2;M^2,
m_H^2) - I_{42}(k^2,0,m_H^2)\right\rgroup\right\}~~,\\[0.5cm]
\displaystyle
\left.\Delta_1(k^2)\vphantom{\widetilde{\Pi}_{\phi\phi,0}}
\right|_{\xi=0}\kern-5pt = 0 ~~, & \\[0.35cm]
\end{array}
\eqno(4.8)
$$
where the quantities $I_j$ denote the one-loop integrals:
$$
\begin{array}{rl}
\displaystyle
I_1(a^2) &
\displaystyle =~\mu^{2\epsilon}\int{d^{4-2\epsilon}p
\over(2\pi)^{4-2\epsilon}}{1\over p^2-a}
\equiv\mu^{2\epsilon}\int_p{1\over p^2-a} ~~,\\[0.5cm]
\displaystyle
I_2(k^2;a^2,b^2) &
\displaystyle =~ \mu^{2\epsilon}\int_p
{1\over(p^2-a)[(p+k)^2-b]} ~~,\\[0.5cm]
\displaystyle
I_3^{\mu}(k;a^2,b^2) &
\displaystyle =~ \mu^{2\epsilon}\int_p{p^\mu\over
(p^2-a)[(p+k)^2-b]}= k^\mu I_3(k^2;a^2,b^2) ~~,\\[0.5cm]
\displaystyle
I_4^{\mu\nu}(k;a^2,b^2) &
\displaystyle =~ \mu^{2\epsilon}\int_p
{p^\mu p^\nu\over(p^2-a)[(p+k)^2-b]}
= g^{\mu\nu} I_{41}(k^2;a^2,b^2) +
k^\mu k^\nu I_{42}(k^2;a^2,b^2) ~~,\\[0.35cm]
\end{array}
\eqno(4.9)
$$
which are evaluated in Appendix-C. Substituting (4.8) into the
right-hand side of (4.7), we obtain
$$
C_{\rm mod}=1+~O(2~{\rm loop})~~,~~~~~(~{\rm in~Landau~gauge}~)~~,
\eqno(4.10)
$$
as expected.
We next consider the 't~Hooft-Feynman gauge in which
$~\Delta_1(M^2)~$ is non-vanishing:
$$
\begin{array}{rl}
\displaystyle
\left.\widetilde{\Pi}_{AA,0}(k^2)\right|_{\xi=1}
&
\displaystyle =~ig^2\left\{-I_1(m_H^2)-I_1(M^2)
+(k^2-4M^2)I_2 +4k^2I_3+4(I_{41}+k^2I_{42})\right\}~,\\[0.5cm]
\displaystyle
\left.\widetilde{\Pi}_{\phi\phi,0}(k^2)\right|_{\xi=1}
&
\displaystyle =~ig^2\left\{\left(1+{m_H^2\over M^2}\right)\left\lgroup
I_1(M^2)-I_1(m_H^2)\right\rgroup + \left({m_H^4\over M^2}
-M^2-4k^2\right)I_2 -4k^2I_3 \right\} ~,\\[0.5cm]
\displaystyle
\left.\Delta_1(k^2)\vphantom{\widetilde{\Pi}_{\phi\phi,0}}
\right|_{\xi=1}
&
\displaystyle =~ig^2I_2(M^2;M^2,m_H^2) ~,\\[0.3cm]
\end{array}
\eqno(4.11)
$$
where $~I_j = I_j(k^2;M^2,m_H^2)~$ for $~j\geq 2~$.
Again, we substitute (4.11) into equation~(4.7) and find that
$$
C_{\rm mod}=1+~O(2~{\rm loop})~~,~~~~~(~{\rm in~'t~Hooft-Feynman~gauge}~)~~.
\eqno(4.12)
$$
\section{Conclusions}
In this paper, we have constructed a convenient new renormalization
scheme, called {\it Scheme-IV}, which rigorously reduces all radiative
modification factors to the equivalence theorem ($~C^a_{\rm mod}$~'s)
to unity in all $R_\xi$-gauges including both 't~Hooft-Feynman and
Landau gauges.
This new {\it Scheme-IV}~ proves particularly convenient for
Landau gauge which cannot be included in the previously described {\it
Scheme-II}~\cite{et1,et2}. Our formulation is explicitly constructed
for both the $SU(2)_L$ and $SU(2)_L\otimes U(1)_Y$ theories
[cf. sections~2 and 3]. Furthermore, we have generalized our formulation
to the important effective Lagrangian formalisms for both
the non-linear~\cite{app} and linear~\cite{linear} realizations of the
electroweak symmetry breaking (EWSB) sector, where the new physics
(due to either strongly or weakly coupled EWSB mechanisms)
has been parameterized by effective operators (cf. section~3.2 and
Appendix-A).
In the construction of the {\it Scheme-IV}~
(cf. section~2.2), we first re-express
the $~C^a_{\rm mod}$-factors in terms of proper self-energies of the
unphysical sector by means of the $R_\xi$-gauge WT identities. Then,
we simplify the $~C^a_{\rm mod}$-factors to unity by
specifying the subtraction condition for the Goldstone boson wavefunction
renormalization constant $Z_{\phi^a}$ [cf. (2.28) and (2.41-42)].
This choice for $Z_{\phi^a}$
is determined by the known gauge and Goldstone boson self-energies
(plus the gauge boson wavefunction and mass renormalization constants)
which must be computed in any practical renormalization scheme.
We emphasize that the implementation of the {\it Scheme-IV}~
requires no additional calculation (of $\Delta^a_i$-quantities, for
instance) beyond what is required for the {\it standard}
radiative correction computations~\cite{Hollik}.
Based upon this radiative modification-free formulation for the
equivalence theorem [cf. (3.4)], we have further proposed
a new prescription, which we call the ``Divided Equivalence Theorem''
(DET) [cf. (3.14)-(3.15) and the discussion that follows],
for minimizing the approximation due to ignoring the additive $B$-term
in the equivalence theorem (3.4) or (1.1).
Finally, we have explicitly verified that,
in {\it Scheme-IV}, the $~C^a_{\rm mod}$-factor is
reduced to unity in the heavy Higgs limit of the standard model
(cf. section~4.1) and for arbitrary Higgs mass in the $U(1)$ Higgs theory
(cf. section~4.2).
\Ack
We thank Michael Chanowitz, Yu-Ping Kuang, C.--P. Yuan and Peter Zerwas for
carefully reading the manuscript, and for their useful suggestions and
support. H.J.H is supported by the AvH of Germany. \DOE
\section{\@startsection {section}{1}{\z@}{+3.0ex plus +1ex minus
\expandafter\ifx\csname mathrm\endcsname\relax\def\mathrm#1{{\rm #1}}\fi
\makeatother
\unitlength1cm
\renewcommand{\topfraction}{0.3}
\renewcommand{\bottomfraction}{1.0}
\renewcommand{\textfraction}{0.1}
\renewcommand{\floatpagefraction}{0.5}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\barr#1{\begin{array}{#1}}
\def\end{array}{\end{array}}
\def\begin{figure}{\begin{figure}}
\def\end{figure}{\end{figure}}
\def\begin{table}{\begin{table}}
\def\end{table}{\end{table}}
\def\begin{center}{\begin{center}}
\def\end{center}{\end{center}}
\def\nonumber{\nonumber}
\def\displaystyle{\displaystyle}
\def\textstyle{\textstyle}
\def1.4{1.4}
\def\alpha{\alpha}
\def\beta{\beta}
\def\gamma{\gamma}
\def\delta{\delta}
\def\varepsilon{\varepsilon}
\def\lambda{\lambda}
\def\sigma{\sigma}
\defi\epsilon{i\epsilon}
\def\Gamma{\Gamma}
\def\Delta{\Delta}
\defg_{\PWpm}^2(0){g_{\mathswitchr {W^\pm}}^2(0)}
\defg_{\PWO}^2(\MZ^2){g_{\mathswitchr {W^0}}^2(\mathswitch {M_\PZ}^2)}
\def\refeq#1{\mbox{(\ref{#1})}}
\def\reffi#1{\mbox{Fig.~\ref{#1}}}
\def\reffis#1{\mbox{Figs.~\ref{#1}}}
\def\refta#1{\mbox{Tab.~\ref{#1}}}
\def\reftas#1{\mbox{Tabs.~\ref{#1}}}
\def\refse#1{\mbox{Sect.~\ref{#1}}}
\def\refses#1{\mbox{Sects.~\ref{#1}}}
\def\refapp#1{\mbox{App.~\ref{#1}}}
\def\citere#1{\mbox{Ref.~\cite{#1}}}
\def\citeres#1{\mbox{Refs.~\cite{#1}}}
\def\raise.9mm\hbox{\protect\rule{1.1cm}{.2mm}}{\raise.9mm\hbox{\protect\rule{1.1cm}{.2mm}}}
\def\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}{\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}}
\def\rlap{$\cdot$}\hspace*{2mm}{\rlap{$\cdot$}\hspace*{2mm}}
\def\dash\dash\dash\dash{\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\dash\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\dash}
\def\dash\hspace*{-.5mm}\dash\hspace*{1mm}\dash\hspace*{-.5mm}\dash{\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\hspace*{-.5mm}\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\hspace*{1mm}\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\hspace*{-.5mm}\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}}
\def\dot\dot\dot\dot\dot\dot{\rlap{$\cdot$}\hspace*{2mm}\dot\rlap{$\cdot$}\hspace*{2mm}\dot\rlap{$\cdot$}\hspace*{2mm}\dot}
\def\dot\dash\dot\dash\dot{\rlap{$\cdot$}\hspace*{2mm}\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\rlap{$\cdot$}\hspace*{2mm}\raise.9mm\hbox{\protect\rule{2mm}{.2mm}}\hspace*{1mm}\rlap{$\cdot$}\hspace*{2mm}}
\newcommand{\unskip\,\mathrm{GeV}}{\unskip\,\mathrm{GeV}}
\newcommand{\unskip\,\mathrm{MeV}}{\unskip\,\mathrm{MeV}}
\newcommand{\unskip\,\mathrm{TeV}}{\unskip\,\mathrm{TeV}}
\newcommand{\unskip\,\mathrm{fb}}{\unskip\,\mathrm{fb}}
\newcommand{\unskip\,\mathrm{pb}}{\unskip\,\mathrm{pb}}
\newcommand{\unskip\,\mathrm{nb}}{\unskip\,\mathrm{nb}}
\newcommand{{\cal O}}{{\cal O}}
\renewcommand{\L}{{\cal L}}
\newcommand{\mathswitch{{\cal{O}}(\alpha)}}{\mathswitch{{\cal{O}}(\alpha)}}
\def\mathswitchr#1{\relax\ifmmode{\mathrm{#1}}\else$\mathrm{#1}$\fi}
\newcommand{\mathswitchr B}{\mathswitchr B}
\newcommand{\mathswitchr W}{\mathswitchr W}
\newcommand{\mathswitchr Z}{\mathswitchr Z}
\newcommand{\mathswitchr A}{\mathswitchr A}
\newcommand{\mathswitchr g}{\mathswitchr g}
\newcommand{\mathswitchr H}{\mathswitchr H}
\newcommand{\mathswitchr e}{\mathswitchr e}
\newcommand{\mathswitch \nu_{\mathrm{e}}}{\mathswitch \nu_{\mathrm{e}}}
\newcommand{\mathswitch \bar\nu_{\mathrm{e}}}{\mathswitch \bar\nu_{\mathrm{e}}}
\newcommand{\mathswitch \nu_\mu}{\mathswitch \nu_\mu}
\newcommand{\mathswitchr d}{\mathswitchr d}
\newcommand{\mathswitchr f}{\mathswitchr f}
\newcommand{\mathswitchr h}{\mathswitchr h}
\newcommand{\mathswitchr l}{\mathswitchr l}
\newcommand{\mathswitchr u}{\mathswitchr u}
\newcommand{\mathswitchr s}{\mathswitchr s}
\newcommand{\mathswitchr b}{\mathswitchr b}
\newcommand{\mathswitchr c}{\mathswitchr c}
\newcommand{\mathswitchr t}{\mathswitchr t}
\newcommand{\mathswitchr q}{\mathswitchr q}
\newcommand{\mathswitchr {e^+}}{\mathswitchr {e^+}}
\newcommand{\mathswitchr {e^-}}{\mathswitchr {e^-}}
\newcommand{\mathswitchr {\mu^-}}{\mathswitchr {\mu^-}}
\newcommand{\mathswitchr {W^+}}{\mathswitchr {W^+}}
\newcommand{\mathswitchr {W^-}}{\mathswitchr {W^-}}
\newcommand{\mathswitchr {W^\pm}}{\mathswitchr {W^\pm}}
\newcommand{\mathswitchr {W^0}}{\mathswitchr {W^0}}
\newcommand{\mathswitchr {Z^0}}{\mathswitchr {Z^0}}
\newcommand{$\PZ\to\Pb\bar\Pb$}{$\mathswitchr Z\to\mathswitchr b\bar\mathswitchr b$}
\newcommand{$\PZ\to\Pc\bar\Pc$}{$\mathswitchr Z\to\mathswitchr c\bar\mathswitchr c$}
\newcommand{$\PZ\to\Pf\bar\Pf$}{$\mathswitchr Z\to\mathswitchr f\bar\mathswitchr f$}
\newcommand{$\PZ\to\Pq\bar\Pq$}{$\mathswitchr Z\to\mathswitchr q\bar\mathswitchr q$}
\newcommand{$\PZ\to\Pu\bar\Pu,\Pd\bar\Pd$}{$\mathswitchr Z\to\mathswitchr u\bar\mathswitchr u,\mathswitchr d\bar\mathswitchr d$}
\newcommand{$\PZ\to\nu\bar\nu$}{$\mathswitchr Z\to\nu\bar\nu$}
\newcommand{R_\Pb}{R_\mathswitchr b}
\newcommand{R_\Pc}{R_\mathswitchr c}
\newcommand{\Gamma_\Pb}{\Gamma_\mathswitchr b}
\newcommand{\Gamma_\Pc}{\Gamma_\mathswitchr c}
\newcommand{\Gamma_\Pd}{\Gamma_\mathswitchr d}
\newcommand{\Gamma_\Pu}{\Gamma_\mathswitchr u}
\newcommand{\Gamma_\Pq}{\Gamma_\mathswitchr q}
\newcommand{\Gamma_{\mathrm h}}{\Gamma_{\mathrm h}}
\newcommand{\Gamma_{\mathrm T}}{\Gamma_{\mathrm T}}
\newcommand{\Gamma_{\mathrm l}}{\Gamma_{\mathrm l}}
\def\mathswitch#1{\relax\ifmmode#1\else$#1$\fi}
\newcommand{\mathswitch {M_\PB}}{\mathswitch {M_\mathswitchr B}}
\newcommand{\mathswitch {m_\Pf}}{\mathswitch {m_\mathswitchr f}}
\newcommand{\mathswitch {m_\Pl}}{\mathswitch {m_\mathswitchr l}}
\newcommand{\mathswitch {m_\Pq}}{\mathswitch {m_\mathswitchr q}}
\newcommand{\mathswitch {M_\PV}}{\mathswitch {M_\PV}}
\newcommand{\mathswitch {M_\PW}}{\mathswitch {M_\mathswitchr W}}
\newcommand{\mathswitch {M_\PWpm}}{\mathswitch {M_\mathswitchr {W^\pm}}}
\newcommand{\mathswitch {M_\PWO}}{\mathswitch {M_\mathswitchr {W^0}}}
\newcommand{\mathswitch {\lambda}}{\mathswitch {\lambda}}
\newcommand{\mathswitch {M_\PZ}}{\mathswitch {M_\mathswitchr Z}}
\newcommand{\mathswitch {M_\PH}}{\mathswitch {M_\mathswitchr H}}
\newcommand{\mathswitch {m_\Pe}}{\mathswitch {m_\mathswitchr e}}
\newcommand{\mathswitch {m_\mu}}{\mathswitch {m_\mu}}
\newcommand{\mathswitch {m_\tau}}{\mathswitch {m_\tau}}
\newcommand{\mathswitch {m_\Pd}}{\mathswitch {m_\mathswitchr d}}
\newcommand{\mathswitch {m_\Pu}}{\mathswitch {m_\mathswitchr u}}
\newcommand{\mathswitch {m_\Ps}}{\mathswitch {m_\mathswitchr s}}
\newcommand{\mathswitch {m_\Pc}}{\mathswitch {m_\mathswitchr c}}
\newcommand{\mathswitch {m_\Pb}}{\mathswitch {m_\mathswitchr b}}
\newcommand{\mathswitch {m_\Pt}}{\mathswitch {m_\mathswitchr t}}
\newcommand{\scriptscriptstyle}{\scriptscriptstyle}
\newcommand{\mathswitch {s_\PW}}{\mathswitch {s_{\scriptscriptstyle\mathswitchr W}}}
\newcommand{\mathswitch {c_\PW}}{\mathswitch {c_{\scriptscriptstyle\mathswitchr W}}}
\newcommand{\mathswitch {\bar s_{\scrs\PW}}}{\mathswitch {\bar s_{\scriptscriptstyle\mathswitchr W}}}
\newcommand{\mathswitch {\bar s_{\PW,\Pf}}}{\mathswitch {\bar s_{\mathswitchr W,\mathswitchr f}}}
\newcommand{\mathswitch {\bar s_{\PW,\Pq}}}{\mathswitch {\bar s_{\mathswitchr W,\mathswitchr q}}}
\newcommand{\mathswitch {Q_\Pf}}{\mathswitch {Q_\mathswitchr f}}
\newcommand{\mathswitch {Q_\Pl}}{\mathswitch {Q_\mathswitchr l}}
\newcommand{\mathswitch {Q_\Pq}}{\mathswitch {Q_\mathswitchr q}}
\newcommand{\mathswitch {v_\Pf}}{\mathswitch {v_\mathswitchr f}}
\newcommand{\mathswitch {a_\Pf}}{\mathswitch {a_\mathswitchr f}}
\newcommand{\mathswitch {g_\Pe}^{\sigma}}{\mathswitch {g_\mathswitchr e}^{\sigma}}
\newcommand{\mathswitch {g_\Pe}^-}{\mathswitch {g_\mathswitchr e}^-}
\newcommand{\mathswitch {g_\Pe}^+}{\mathswitch {g_\mathswitchr e}^+}
\newcommand{\mathswitch {G_\mu}}{\mathswitch {G_\mu}}
\hyphenation{brems-strah-lung}
\newcommand{a\hspace{-0.5em}/\hspace{0.1em}}{a\hspace{-0.5em}/\hspace{0.1em}}
\newcommand{\varepsilon \hspace{-0.5em}/\hspace{0.1em}}{\varepsilon \hspace{-0.5em}/\hspace{0.1em}}
\newcommand{k\hspace{-0.52em}/\hspace{0.1em}}{k\hspace{-0.52em}/\hspace{0.1em}}
\newcommand{p\hspace{-0.42em}/\hspace{0.1em}}{p\hspace{-0.42em}/\hspace{0.1em}}
\newcommand{q\hspace{-0.5em}/\hspace{0.1em}}{q\hspace{-0.5em}/\hspace{0.1em}}
\newcommand{h\hspace{-0.5em}\vspace{-0.3em}-\hspace{0.1em}}{h\hspace{-0.5em}\vspace{-0.3em}-\hspace{0.1em}}
\marginparwidth 1.2cm
\newcommand{\chi^2_{\mathrm{min}}/_{\mathrm{d.o.f.}}}{\chi^2_{\mathrm{min}}/_{\mathrm{d.o.f.}}}
\newcommand{{\tiny\gsim}1000}{{\tiny\gsim}1000}
\newcommand{y_\Pb}{y_\mathswitchr b}
\newcommand{y_\Pq}{y_\mathswitchr q}
\newcommand{y_\Pu}{y_\mathswitchr u}
\newcommand{y_\Pd}{y_\mathswitchr d}
\newcommand{y_\Pc}{y_\mathswitchr c}
\newcommand{f_1}{f_1}
\newcommand{f_2}{f_2}
\newcommand{\alpha(\MZ^2)}{\alpha(\mathswitch {M_\PZ}^2)}
\newcommand{\alpha_{\mathrm s}}{\alpha_{\mathrm s}}
\newcommand{\alpha_{\mathrm s}(\MZ^2)}{\alpha_{\mathrm s}(\mathswitch {M_\PZ}^2)}
\newcommand{{\mathrm{1-loop}}}{{\mathrm{1-loop}}}
\newcommand{{\mathrm{bos}}}{{\mathrm{bos}}}
\newcommand{{\mathrm{ferm}}}{{\mathrm{ferm}}}
\newcommand{{\mathrm{univ}}}{{\mathrm{univ}}}
\newcommand{{\mathrm{SC}}}{{\mathrm{SC}}}
\newcommand{{\mathrm{IB}}}{{\mathrm{IB}}}
\newcommand{\BW} {{\mathrm{WPD}}}
\newcommand{\BZ} {{\mathrm{ZPD}}}
\newcommand{{\cal {M}}}{{\cal {M}}}
\newcommand{\kappa}{\kappa}
\newcommand{\Delta\alpha}{\Delta\alpha}
\newcommand{\Delta\rho}{\Delta\rho}
\newcommand{\Delta_{\mathrm{LL}}}{\Delta_{\mathrm{LL}}}
\newcommand{\frac{\cw^2}{\sw^2}\dr}{\frac{\mathswitch {c_\PW}^2}{\mathswitch {s_\PW}^2}\Delta\rho}
\newcommand{y_{\mathrm h}}{y_{\mathrm h}}
\newcommand{{\mathrm{mass}}}{{\mathrm{mass}}}
\newcommand{{\mathrm{QCD}}}{{\mathrm{QCD}}}
\newcommand{{\mathrm{QED}}}{{\mathrm{QED}}}
\newcommand{{\mathrm{LEP}}}{{\mathrm{LEP}}}
\newcommand{{\mathrm{SLD}}}{{\mathrm{SLD}}}
\newcommand{{\mathrm{SM}}}{{\mathrm{SM}}}
\def\mathop{\mathrm{arctan}}\nolimits{\mathop{\mathrm{arctan}}\nolimits}
\def\mathop{\mathrm{Li}_2}\nolimits{\mathop{\mathrm{Li}_2}\nolimits}
\def\mathop{\mathrm{Re}}\nolimits{\mathop{\mathrm{Re}}\nolimits}
\def\mathop{\mathrm{Im}}\nolimits{\mathop{\mathrm{Im}}\nolimits}
\def\mathop{\mathrm{sgn}}\nolimits{\mathop{\mathrm{sgn}}\nolimits}
\newcommand{\mpar}[1]{{\marginpar{\hbadness10000%
\sloppy\hfuzz10pt\boldmath\bf#1}}%
\typeout{marginpar: #1}\ignorespaces}
\marginparwidth 1.2cm
\marginparsep 0.2cm
\def\today{\relax}
\def\relax{\relax}
\def\relax{\relax}
\def\relax{\relax}
\def\draft{
\def******************************{******************************}
\def\thtystars\thtystars{******************************\thtystars}
\typeout{}
\typeout{\thtystars\thtystars**}
\typeout{* Draft mode!
For final version remove \protect\draft\space in source file *}
\typeout{\thtystars\thtystars**}
\typeout{}
\def\today{\today}
\def\relax{\marginpar[\boldmath\hfil$\uparrow$]%
{\boldmath$\uparrow$\hfil}%
\typeout{marginpar: $\uparrow$}\ignorespaces}
\def\relax{\marginpar[\boldmath\hfil$\downarrow$]%
{\boldmath$\downarrow$\hfil}%
\typeout{marginpar: $\downarrow$}\ignorespaces}
\def\relax{\marginpar[\boldmath\hfil$\rightarrow$]%
{\boldmath$\leftarrow $\hfil}%
\typeout{marginpar: $\leftrightarrow$}\ignorespaces}
\def\Mua{\marginpar[\boldmath\hfil$\Uparrow$]%
{\boldmath$\Uparrow$\hfil}%
\typeout{marginpar: $\Uparrow$}\ignorespaces}
\def\Mda{\marginpar[\boldmath\hfil$\Downarrow$]%
{\boldmath$\Downarrow$\hfil}%
\typeout{marginpar: $\Downarrow$}\ignorespaces}
\def\Mla{\marginpar[\boldmath\hfil$\Rightarrow$]%
{\boldmath$\Leftarrow $\hfil}%
\typeout{marginpar: $\Leftrightarrow$}\ignorespaces}
\overfullrule 5pt
\oddsidemargin -15mm
\marginparwidth 29mm
}
\makeatletter
\def\eqnarray{\stepcounter{equation}\let\@currentlabel=\theequation
\global\@eqnswtrue
\global\@eqcnt\z@\tabskip\@centering\let\\=\@eqncr
$$\halign to \displaywidth\bgroup\hskip\@centering
$\displaystyle\tabskip\z@{##}$\@eqnsel&\global\@eqcnt\@ne
\hskip 2\arraycolsep \hfil${##}$\hfil
&\global\@eqcnt\tw@ \hskip 2\arraycolsep $\displaystyle\tabskip\z@{##}$\hfil
\tabskip\@centering&\llap{##}\tabskip\z@\cr}
\section{Introduction}
Absorption lines in quasar spectra,
especially the ``forest'' of Ly$\alpha$\ lines produced by concentrations
of neutral hydrogen, are uniquely suited
for probing structure formation in the high-redshift universe.
The absorbers trace relatively pristine baryonic material over
a wide range of redshifts, densities, temperatures,
and ionization states. Recent advances in
computer technology have expanded our ability to predict the
conditions of the absorbing gas,
while high-precision observations made using the HIRES spectrograph
(\cite{vog94}) on the 10m Keck telescope
have quantified the statistics of the low column density absorbers to
unprecedented accuracy (e.g., Hu et al.\ 1995, hereafter HKCSR \cite{hu95}).
These data provide stringent
constraints on theories of structure formation and
the state of the intergalactic medium (IGM) at high redshifts.
However, a detailed confrontation between theory and observations
requires that simulations and observed quasar spectra be analyzed
using similar techniques. In this paper we apply an automated
Voigt-profile fitting algorithm to Ly$\alpha$\ spectra from a simulation
of the cold dark matter (CDM) scenario (\cite{Pee82}; \cite{Blu84}).
We compare the statistics of the resulting line population to
those derived by \cite{hu95} from HIRES spectra.
Recent cosmological simulations that incorporate gas dynamics, radiative
cooling, and photoionization reproduce many of the observed
features of quasar absorption spectra, suggesting that the Ly$\alpha$\ forest
arises as a natural consequence of hierarchical structure formation
in a universe with a photoionizing UV background
(\cite{cen94}; \cite{zha95}; Hernquist et al.\ 1996, hereafter
HKWM \cite{her96}; \cite{kat96b}; \cite{mir96}).
Most of the low column density lines are produced by structures
of moderate overdensity that are far from dynamical or thermal
equilibrium, blurring the traditional distinction between
the Ly$\alpha$\ forest and Gunn-Peterson (1965) absorption from a
smooth IGM.
\cite{her96} used a simple flux threshold algorithm to identify
lines in their simulated spectra, defining any region with
transmitted flux continuously below a specified threshold as a
single line. They showed that their simulation of a CDM
universe reproduced the observed abundance of absorption systems
as determined by Petitjean et al.\ (1993, hereafter \cite{pet93})
quite well over most of the column density range
$10^{14}\cdunits < N_{\rm HI} < 10^{22}\cdunits$, with a significant
discrepancy for $N_{\rm HI} \sim 10^{17}\cdunits$.
The traditional technique for identifying and characterizing quasar
absorption lines is to fit spectra by a superposition of Voigt profiles.
The HIRES spectra have very high signal-to-noise ratio and resolution,
and most of the lines found in this way are weak absorbers with
column densities $N_{\rm HI} < 10^{14}\cdunits$. The threshold and Voigt-profile
procedures behave very differently in this regime, since a single feature
identified by the threshold method will often be decomposed into a
blend of weaker lines when it is modeled as a superposition of Voigt profiles.
In order to compare to published line population statistics from
HIRES data, therefore, it is essential to analyze the simulated
spectra by Voigt-profile decomposition.
The physical model implicit in the decomposition technique
is that of a collection of discrete, compact clouds,
each characterized by a single temperature (or at least by a single
velocity dispersion, which could include contributions from thermal
motion and from Gaussian-distributed ``turbulent'' velocities).
The simulations undermine this physical picture because
the absorbing systems merge continuously
into a smoothly fluctuating background, often contain gas at a range
of temperatures, and are usually broadened in frequency space by
coherent velocity flows that do not resemble Gaussian turbulence.
Nonetheless, any spectrum can be described phenomenologically by
a superposition of Voigt-profile lines, with the number of components
increasing as the signal-to-noise ratio improves and more subtle
features must be matched. The distributions of fitted column densities
and $b$-parameters provide a useful statistical basis for
comparing simulations and observations, and this is the approach that
we adopt in this paper. We will discuss the correspondence
between the parameters of the Voigt-profile components and the
physical state of the absorbing gas elsewhere (Dav\'e et al., in preparation).
\section{Simulation and Artificial Spectra}
The simulation analyzed here is the same as that of HKWM:
a CDM universe with $\Omega=1$,
$H_0 = 50$~km~s$^{-1}$Mpc$^{-1}$, baryon fraction $\Omega_b=0.05$,
and a periodic simulation cube 22.222 comoving Mpc on a side
containing $64^3$ gas particles and $64^3$ dark matter particles,
with individual particle masses of $1.45\times 10^8 M_\odot$ and
$2.8\times 10^9 M_\odot$, respectively.
The power spectrum is normalized to $\sigma_8=0.7$, roughly the value
required to reproduce observed galaxy cluster masses (\cite{bah92}; \cite{whi93});
we will consider COBE-normalized CDM models with $\Omega=1$ and $\Omega<1$
in future work.
We use the N-body + smoothed-particle hydrodynamics
code TreeSPH (\cite{her89}) adapted for
cosmological simulations (Katz, Weinberg, \& Hernquist 1996,
hereafter KWH \cite{kat96}) to evolve the model from $z=49$ to $z=2$.
Instead of the $\nu^{-1}$ UV background spectrum adopted by HKWM,
we use the spectrum of Haardt \& Madau (1996; hereafter HM \cite{haa96}),
which is computed as a function of redshift based on the UV output
of observed quasars and reprocessing by the observed Ly$\alpha$\ forest.
The spectral shape is significantly different from $\nu^{-1}$, but
the UV background influences our Ly$\alpha$\ forest results primarily
through the HI photoionization rate $\Gamma$,
a cross-section weighted integral of the spectrum
(\cite{kat96}, equation 29).
The new simulation also includes star formation and feedback
(see KWH), but this has no noticeable effect on the Ly$\alpha$\ forest results.
A comparison between the galaxy populations of this
simulation and the HKWM simulation appears in
Weinberg, Hernquist, \& Katz (1996).
The mean opacity of the Ly$\alpha$\ forest depends on the parameter combination
$\Omega_b^2/\Gamma$.
Since observational determinations of $\Omega_b$ and $\Gamma$
remain quite uncertain, we treat the overall intensity of the UV
background as a free parameter and scale it to match the observed
mean Ly$\alpha$\ optical depth $\bar\tau_\alpha$.
When evolving the simulation, we divide HM's intensities by a
factor of two, retaining their redshift history and spectral shape.
We find that we must reduce the intensities by further factors of
1.28 and 1.38 at $z=2$ and $z=3$, respectively, in order to match
the estimate $\bar\tau_\alpha = 0.0037(1+z)^{3.46}$ of
Press, Rybicki, \& Schneider (1993; hereafter PRS \cite{pre93}).
Although we apply this final reduction only at the analysis stage,
to compute neutral fractions when generating spectra,
the result is virtually identical to that of changing the
intensity during dynamical evolution (Miralda-Escud\'e et al.\ 1997,
hereafter \cite{mir97}). In order to match the PRS mean optical
depth at $z=3$ with the original HM background intensity we would
need $\Omega_b \approx 0.08$, closer to the value advocated by
Tytler, Fan, \& Burles (1996; but see \cite{rug96} and
references therein).
The value of $\bar\tau_\alpha$ plays the role of
a normalizing constraint, used to fix the important combination of free
parameters in our IGM model. Once $\Omega_b^2/\Gamma$ is set,
there is no further freedom to adjust the simulation predictions,
and the remaining properties of the Ly$\alpha$\ forest provide tests of
the cosmological scenario itself.
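As an illustrative consistency check (not part of the original analysis),
the PRS fitting formula quoted above gives the target mean optical depths
at the two output redshifts directly:
\begin{verbatim}
def tau_prs(z):
    """Mean Ly-alpha optical depth, Press, Rybicki, & Schneider fit."""
    return 0.0037*(1.0 + z)**3.46

print(tau_prs(2.0))   # ~ 0.17
print(tau_prs(3.0))   # ~ 0.45, matching the value quoted for z=3 below
\end{verbatim}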
We generate artificial spectra at $z=2$ and $z=3$ along 300 random
lines of sight through the simulation cube, using the
methods described in HKWM and Cen et al.\ (1994).
We do not consider higher redshifts here because the strong absorption
leads to severe blending of lines.
The wavelength spread across the box is 23.4~\AA\ at $z=2$ and
35.9~\AA\ at $z=3$.
Each artificial spectrum contains 1000 pixels; an individual
pixel has a velocity width $\sim 2\;{\rm km}\,{\rm s}^{-1}$ and a spatial extent
$\sim 20$ comoving kpc, twice the gravitational softening length.
In the Ly$\alpha$\ forest regime, the gas distribution is smooth on these scales.
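The quoted wavelength spreads and pixel widths follow from the box size and
the adopted cosmology; a minimal check (our own, assuming a pure Hubble-flow
velocity across the box for $\Omega=1$):
\begin{verbatim}
H0, L_com, c = 50.0, 22.222, 2.998e5   # km/s/Mpc, comoving Mpc, km/s
lam_lya = 1215.67                      # Angstrom

def dlambda(z):
    # H(z) = H0 (1+z)^1.5 for Omega=1; proper box size = L_com/(1+z)
    dv = H0*(1.0 + z)**1.5 * L_com/(1.0 + z)   # km/s across the box
    return lam_lya*(1.0 + z)*dv/c              # observed-frame Angstrom

print(dlambda(2.0))                      # ~ 23.4
print(dlambda(3.0))                      # ~ 36, cf. 35.9 quoted above
print(H0*L_com*(1.0 + 3.0)**0.5/1000.0)  # ~ 2.2 km/s per pixel (1000 pixels)
\end{verbatim}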
\section{Fitting Voigt Profiles to Artificial Spectra}
We want the analysis of our simulated spectra to closely match
that used in typical observational studies, HKCSR in particular.
To this end, we have developed an automated Voigt-profile fitting routine,
AUTOVP, which allows us to efficiently handle large quantities of
simulated data and which provides an objective algorithm that
can be applied to observational data.
We add noise to our simulated spectra employing a combination of
Gaussian photon noise with signal-to-noise ratio $S/N=50$
in the continuum (corresponding roughly to the \cite{hu95} data)
and a fixed readout noise chosen to match the
characteristics of the Keck HIRES spectrograph.
Varying $S/N$ changes our results only at the lowest column densities.
While we know the true continuum level in the
simulated spectra, this is not the case for the observational data.
We therefore estimate the continuum in the simulated spectra by
the iterative procedure commonly used for Echelle data:
fitting a third-order polynomial to the
data set, excluding any points lying $\ga 2\sigma$
below the fit, refitting the non-excluded points, and repeating
until convergence is achieved.
This technique is based upon the implicit assumption that the regions of
lowest absorption in a high-resolution spectrum lie close to the
true continuum level.
Because the simulated spectra show
fluctuating Gunn-Peterson absorption that increases with $z$,
continuum fitting has a systematic tendency to remove flux,
an average of 1.2\% at $z=2$ and 5.7\% at $z=3$.
The effect would probably be somewhat smaller in observational data
because a typical HIRES Echelle order ($\sim 45$\AA) is longer than
one of our simulated spectra, giving a higher probability that
the spectrum contains a region of genuinely low absorption.
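A minimal sketch of this iterative continuum estimate is given below (an
illustrative NumPy implementation of the procedure described above; the code
actually applied to Echelle data may differ in detail):
\begin{verbatim}
import numpy as np

def fit_continuum(wave, flux, order=3, nsigma=2.0, maxiter=20):
    """Iterative polynomial continuum fit, rejecting pixels that lie
    more than nsigma below the current fit (i.e. absorption)."""
    keep = np.ones(len(flux), dtype=bool)
    for _ in range(maxiter):
        coeffs = np.polyfit(wave[keep], flux[keep], order)
        model = np.polyval(coeffs, wave)
        resid = flux - model
        new_keep = resid > -nsigma*np.std(resid[keep])
        if np.array_equal(new_keep, keep):
            break                      # converged
        keep = new_keep
    return model
\end{verbatim}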
Given a normalized, continuum-fitted spectrum and its noise vector,
we apply AUTOVP to detect lines and fit Voigt profiles.
In its first phase, AUTOVP identifies lines and makes an initial
estimate of their column densities and $b$-parameters.
Detection regions are identified above an 8$\sigma$ confidence
level, following the method of \cite{lan87}.
For line identification purposes, the data are convolved with a two-pixel-width
Gaussian and $1\sigma$ of noise is subtracted to yield a
``minimum flux''.
For non-saturated regions, a single Voigt profile is placed at the
lowest flux value in the detection region,
and $N_{\rm HI}$ and $b$ are
reduced by small increments from large initial values until the model is
everywhere above the minimum flux.
For saturated regions a line is placed in the middle of the trough, and
$N_{\rm HI}$ and $b$ are adjusted to fit the ``cusp'' regions, {\it i.e.\hskip 3pt}
regions about five pixels wide on either side of the trough.
The resulting first-guess line
is then subtracted from the data to obtain a residual flux.
Detection regions are identified in
the residual flux, and the procedure is repeated until there are no more
$8\sigma$ detections.
The line identification
procedure is very robust, never failing for non-saturated
regions and only occasionally producing a bad fit even in complex,
blended, saturated regions, where $N_{\rm HI}$ and $b$ are largely degenerate
and the cusp regions are difficult to identify.
In its second phase, AUTOVP takes the initial guess
and performs a simultaneous $\chi^2$-minimization on the parameters
($v_{\rm central}, N_{\rm HI}, b$) of all lines within each detection region.
Three independent minimization techniques are employed in conjunction
in order to reliably identify the global $\chi^2$ minimum.
AUTOVP then tries to remove any components
with a formal error in $N_{\rm HI}$ or
$b$ comparable to the parameter value, refitting
the detection region with one less line. If the resulting
$\chi^2$ is lower than the original value the rejection is accepted,
otherwise the fit returns to the original set of lines.
AUTOVP thus attempts to fit the spectrum with as few lines as possible
while still minimizing $\chi^2$. If the fit after these line rejections
is ``good" (characterized empirically by $\chi^2 \mathrel{\copy\simgreatbox} 2$ per pixel),
the program ends,
otherwise it tries to add a line where the local contribution to
$\chi^2$ is greatest.
The rare cases where AUTOVP fails to find a good fit
are flagged for possible manual intervention.
AUTOVP is designed to interface with the PROFIT
interactive Voigt-profile fitting package (\cite{chu96}).
This graphical interactive fitter can be used to
manually adjust poor fits, although this was required in so few cases
that we base our simulation statistics entirely on the automated fits.
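The following heavily simplified Python sketch illustrates the logic of the
first phase of such a fitter: locating significant absorption regions and
fitting a single Doppler-broadened component to each. It is our own
illustration, not the AUTOVP code; in particular it ignores damping wings,
blends, and the line-rejection/addition logic described above.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

LYA_LAM, LYA_F = 1215.67, 0.4164   # Angstrom, oscillator strength

def flux_model(v, v0, logN, b):
    """Transmitted flux of one purely Doppler-broadened Ly-alpha line.
    v, v0, b in km/s; logN = log10 N_HI in cm^-2 (no damping wings)."""
    tau0 = 1.497e-15 * 10.0**logN * LYA_F * LYA_LAM / b
    return np.exp(-tau0*np.exp(-((v - v0)/b)**2))

def detect_and_fit(v, flux, noise, nsigma=8.0):
    """Find contiguous regions nsigma below the continuum and fit one
    component per region (AUTOVP instead fits blends of many lines).
    noise is the per-pixel 1-sigma noise array."""
    lines = []
    sig = np.flatnonzero((1.0 - flux) > nsigma*noise)
    if sig.size == 0:
        return lines
    regions = np.split(sig, np.flatnonzero(np.diff(sig) > 1) + 1)
    for idx in regions:
        sel = slice(idx[0], idx[-1] + 1)
        v0_guess = v[sel][np.argmin(flux[sel])]
        popt, _ = curve_fit(flux_model, v[sel], flux[sel],
                            p0=[v0_guess, 13.0, 25.0],
                            sigma=noise[sel], maxfev=5000)
        lines.append(dict(v0=popt[0], logN=popt[1], b=popt[2]))
    return lines
\end{verbatim}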
\placefigure{fig: autovp_ex}
In Figure~\ref{fig: autovp_ex} we show the results of AUTOVP applied
to a $S/N=50$, continuum-fitted spectrum at $z=3$.
The bottom panel shows the
first-guess fit superimposed on the simulated spectrum.
The top panel shows the final fit after the
$\chi^2$-minimization has been performed.
Generally AUTOVP has greatest difficulty in
obtaining first-guess fits in blended saturated regions like the one
illustrated here. Nevertheless, the minimization
produced an adequate fit with no interactive adjustment.
Lines with $N_{\rm HI} \geq 10^{13}$ are indicated by the
long tick marks above the spectrum, while lines with
$N_{\rm HI} < 10^{13}$ have short tick marks.
The number of these small lines identified by AUTOVP is somewhat
sensitive to the adopted $S/N$ and the detection threshold.
To keep our results fairly robust against details of our fitting procedure,
we exclude these lines from our analysis and only focus
on lines with $N_{\rm HI} \geq 10^{13}$.
AUTOVP has also been applied to observational data on H~I~$\lambda 1216$ and Mg~II~$\lambda 2796$ absorption lines,
with results quite similar to those obtained from manual fitting.
\section{Results}
\placefigure{fig: col}
Figure~\ref{fig: col} shows the column density distribution
$f(N_{\rm HI})$, the number of lines per unit
redshift per linear interval of $N_{\rm HI}$.
Solid and dashed lines show the simulation results from AUTOVP
at $z=3$ and $z=2$, respectively. The dotted line shows $f(N_{\rm HI})$
obtained using the HKWM threshold algorithm at $z=3$, with a
flux threshold of 0.7. As expected, the two methods yield
similar results at high column densities, $N_{\rm HI} \ga 10^{14.5}\;\cdunits$,
but at lower $N_{\rm HI}$ AUTOVP deblends much more and finds many more lines.
We find a similar trend at $z=2$, though because of the reduced line
crowding at lower redshift the agreement between the two methods
extends down to $N_{\rm HI} \sim 10^{14}\;\cdunits$.
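For reference, the distribution plotted here can be built from a fitted line
list with a few lines of code (an illustrative sketch; the binning and the
total redshift path are inputs chosen by the user):
\begin{verbatim}
import numpy as np

def f_NHI(logN_lines, dz_total, logN_edges):
    """Number of lines per unit redshift per linear interval of N_HI."""
    counts, edges = np.histogram(logN_lines, bins=logN_edges)
    dN_lin = 10.0**edges[1:] - 10.0**edges[:-1]   # linear bin widths
    return counts/(dz_total*dN_lin)
\end{verbatim}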
Filled and open circles in Figure~\ref{fig: col} show the observational
results of PWRCL and HKCSR, respectively.
The two determinations of $f(N_{\rm HI})$ agree well in their regime of overlap,
$10^{13.6}\;\cdunits \la N_{\rm HI} \la 10^{14.3}\;\cdunits$.
The high $S/N$ and resolution of the HIRES data allow HKCSR to
detect much weaker absorption features, and their $f(N_{\rm HI})$ continues
to rise down to the lowest bin, $N_{\rm HI} \sim 10^{12.5}\;\cdunits$.
Clearly a comparison between the simulations and HKCSR's published
line statistics must be based on Voigt-profile fitting, since
the vast majority of their lines lie in the region where line blending
causes large differences between this method and the threshold algorithm.
We compute the HKCSR $f(N_{\rm HI})$ directly from their published line list,
with no corrections for ``incompleteness.'' HKCSR estimate such
corrections from artificial spectra {\it assuming} an underlying model
of randomly distributed, Voigt-profile lines with a power-law $f(N_{\rm HI})$
and a specified distribution of $b$-parameters, and they conclude
that their results are consistent with $f(N_{\rm HI}) \propto N_{\rm HI}^{-1.46}$ down
to $N_{\rm HI} \approx 10^{12.5}\;\cdunits$, where the correction for
incompleteness is a factor of four. If we applied the same correction
factors to the simulation results, the derived column density
distributions would also rise in a nearly power-law fashion instead
of turning over at low column densities. But the simulations provide
no a priori reason to expect Voigt-profile lines or a power-law $f(N_{\rm HI})$,
so we prefer to compare them directly to the data without trying to
correct either for lines ``lost'' to blending.
The mean redshift of the HKCSR lines is $\bar z=2.9$, so the
closest comparison is to the $z=3$ simulation results.
To make this comparison more exact, we convolved the $z=3$ artificial
spectra to a resolution
$\Delta\lambda = 0.06$\AA\ ($\Delta v=3.7\;{\rm km}\,{\rm s}^{-1}$) before analysis,
which has the minor effect of removing some lines with
$N_{\rm HI}\la 10^{13}\;\cdunits$.
When analyzed by Voigt-profile decomposition, the simulation reproduces
the large number of weak lines found in the HIRES spectra. In fact,
the simulation overproduces the number of lines by a factor of $1.5-2$
in the column density range
$10^{13}\;\cdunits\la N_{\rm HI}\la 10^{14}\;\cdunits$,
a discrepancy that we will return to in \S 5.
The rolloff at low column densities is also somewhat different,
but the results for the weakest lines are the most sensitive
to the details of the fitting procedure and to the modeling
of noise and spectral resolution, so we regard this difference
as less significant.
In the regime where line blending is unimportant, $f(N_{\rm HI})$ of the
simulations drops by a factor of $\sim 1.5-2$ between $z=3$ and $z=2$.
At low column densities $f(N_{\rm HI})$ actually increases because of the
reduced effects of line blending.
As discussed in HKWM, Miralda-Escud\'e et al.\ (1996),
and \cite{mir97}, the evolution
of the line population over this redshift range is driven primarily
by the expansion of the universe, which lowers the physical gas
densities in the absorbing systems and thereby lowers their neutral
fractions and corresponding HI column densities.
It is therefore more physically appropriate to think of $f(N_{\rm HI})$
as evolving to the left rather than evolving downwards, though the
quantitative effect is the same to within the accuracy of this
simplified account.
\placefigure{fig: bpar}
Figure~\ref{fig: bpar} shows the distribution of $b$-parameters
for lines with $N_{\rm HI} \geq 10^{13}\;\cdunits$ from HKCSR
(solid histogram) and from the AUTOVP analyses of the simulation at $z=3$
and $z=2$ (solid and dashed curves, respectively).
The $10^{13}\;\cdunits$ cutoff eliminates lines whose identification
and derived properties are sensitive to the value of $S/N$ or to
details of the fitting procedure, though the results
do not change qualitatively if we lower this cutoff to
$10^{12.5}\;\cdunits$.
The threshold method (dotted curve)
yields much larger $b$-parameters than AUTOVP at $z=3$
because many of its identified ``lines'' are
extended absorption regions, which AUTOVP separates into narrower components.
Table~\ref{table: bdist} lists the median, mean, and $1\sigma$
dispersion of the $b$-parameter histograms. The $z=3$ simulation
values for all three numbers are slightly larger than the HKCSR values,
but the agreement is quite good given that the analysis procedures
are not identical in all their details. The most significant difference
in the distributions is the presence of many more narrow ($b<20\;{\rm km}\,{\rm s}^{-1}$)
lines in the simulation than in the data, a discrepancy that we
discuss further below.
\section{Discussion}
Our most important result is that the CDM simulation reproduces
the large number of weak lines found in HIRES spectra when it
is analyzed by Voigt-profile decomposition.
However, in the column density range
$10^{13}\;\cdunits \la N_{\rm HI} \la 10^{14}\;\cdunits$,
the density of lines in the simulation at $z=3$ is a factor of 1.5--2
higher than found by HKCSR at $\bar z=2.9$.
Our simulation suffers from the inevitable limitation of finite
numerical resolution, but the Ly$\alpha$\ absorbers are usually large, smooth,
low-overdensity structures, and we would in any case expect higher
numerical resolution to increase the number of lines rather than
decrease it. This excess of lines may therefore indicate a failure
of the $\Omega=1$, $\sigma_8=0.7$ CDM model, a tendency to produce
too much small scale clumping at $z=3$.
An alternative possibility, quite plausible at present, is that
we have set the intensity of the UV background too low given our
adopted value of $\Omega_b$. As discussed in \S 2, we choose the
background intensity in order to match the PRS determination
of the mean Ly$\alpha$\ optical depth, $\bar\tau_\alpha = 0.45$
at $z=3$. The statistical uncertainties in this determination are
fairly small, but there are systematic uncertainties in the required
extrapolation of the quasar continuum into the Ly$\alpha$\ forest region.
We can match the HKCSR results if we
increase the UV background intensity by a factor $\sim 1.7$,
thus lowering HI column densities by a similar factor and shifting
the simulation result for $f(N_{\rm HI})$ to the left.
This increase lowers the mean optical depth to $\bar\tau_\alpha=0.32$,
which is well outside the $1\sigma$ range of PRS (figure~4) at
the HKCSR mean redshift $\bar z=2.9$ but is consistent with the PRS
value at $z=2.65$.
It is somewhat {\it above} the value
$\bar\tau_\alpha(z=3) \sim 0.25$ found by \cite{zuo93},
who use a different data set and a different
method of determining the quasar continuum.
The uncertainty of our conclusions highlights the need for better
observational determinations of $\bar\tau_\alpha(z)$,
which plays a crucial role in normalizing the predictions of
cosmological simulations.
The mean optical depth depends on the distribution of $b$-parameters
and on the small scale clustering of absorbers in addition to the
column density distribution itself, so if $\bar\tau_\alpha$ is
well known then the amplitude of $f(N_{\rm HI})$ becomes an important
independent test of the high-redshift structure predicted by
a cosmological model.
A second important result of our comparison is that the
$\Omega=1$, $\sigma_8=0.7$ CDM model produces Ly$\alpha$\ forest lines
with typical $b$-parameters close to observed values.
However, the simulation yields many more
lines with $b<20\;{\rm km}\,{\rm s}^{-1}$ than are found by HKCSR.
A thermally broadened, single-temperature gas cloud produces
a Ly$\alpha$\ absorption line of width $b=20(T/24,000\;{\rm K})^{1/2}\;{\rm km}\,{\rm s}^{-1}$.
Low-$b$ lines arise in the simulated spectra because much
of the absorbing gas is at temperatures of $10^4\;$K or less,
with its temperature determined by the balance between
photoionization heating and adiabatic cooling (\cite{mir97}).
We will need tests at higher resolution to check that these
temperatures are not artificially low because the simulation
misses entropy production in unresolved shocks, but because such
shocks would have to be quite weak, we expect that the effect
of missing them would be small.
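The temperature--width relation quoted above is simply the thermal Doppler
parameter of hydrogen, $b=(2k_BT/m_{\rm H})^{1/2}$; a quick check of the
quoted scaling:
\begin{verbatim}
import math

k_B, m_H = 1.3807e-23, 1.6726e-27      # SI units

def b_thermal(T):
    """Thermal Doppler parameter of hydrogen in km/s."""
    return math.sqrt(2.0*k_B*T/m_H)/1.0e3

print(b_thermal(24000.0))   # ~ 20 km/s, as in the scaling above
print(b_thermal(10000.0))   # ~ 13 km/s: gas near 10^4 K gives narrow lines
\end{verbatim}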
Miralda-Escud\'e \& Rees (1994)
pointed out that the process of reionization can
heat the IGM significantly if it occurs fast enough to prevent
radiative cooling losses.
Our equilibrium treatment of photoionization (KWH) implicitly
suppresses this effect, and since
circumstantial evidence suggests that HeII reionization
may have occurred at $z \approx 3$ (\cite{son96}), we could be
underestimating the gas temperatures at this redshift.
The dot-dash line of Figure~\ref{fig: bpar} shows the $b$-parameter
distribution obtained at $z=3$ after adding 15,000 K to the temperatures
of the SPH particles (a thermal energy equivalent to 4 Rydbergs per
HeII photoelectron), reducing the UV background intensity by a factor of 2.46
to restore $\bar\tau_\alpha=0.45$, then reextracting and reanalyzing spectra.
Heating the gas eliminates the excess of low $b$-parameter lines,
though it worsens the agreement with \cite{hu95} at $b>40\;{\rm km}\,{\rm s}^{-1}$.
We will investigate other treatments of reionization heating
in future work, though the effects will be difficult to pin down
because they depend on uncertain details of reionization (\cite{mir94}).
Model predictions for $b$-parameters should be more robust at $z=2$,
since by this time much of the energy absorbed during HeII
reionization at $z\ga 3$ will have been lost to adiabatic cooling.
Sharper tests of cosmological models against the statistics of the Ly$\alpha$\
forest can be obtained by expanding the redshift range of comparisons,
by improving the determination of $\bar\tau_\alpha(z)$, and by
applying AUTOVP to observational data, so that the analyses of simulated
and observed spectra are identical in detail.
We will also test models of the Ly$\alpha$\ forest using alternative statistical
measures to characterize spectra, for if the physical scenario that
emerges from cosmological simulations is correct, then
Voigt-profile decomposition provides at best a rough guide
to the density and temperature profiles of the absorbing gas.
At the high $S/N$ and resolution of the HIRES data, an
absorption feature with a single flux minimum
often shows asymmetries or broad wings, requiring two or more Voigt-profile
lines to provide an adequate fit (\cite{hu95}).
Pairs of lines with small velocity separations have strongly
anti-correlated $b$-parameters, suggesting that many of these decompositions
are not genuine physical blends (\cite{rau96}).
While results such as these can always be accommodated within a
discrete ``cloud'' model by postulating just the right
clustering properties, they more likely signify the breakdown of
the Voigt-profile paradigm itself, revealing the origin of the Ly$\alpha$\
forest in the diffuse, undulating gas distribution
of the high-redshift universe.
\acknowledgments
We acknowledge the invaluable assistance of Chris Churchill and
numerous stimulating discussions with Jordi Miralda-Escud\'e. We thank
the authors of \cite{hu95} for making their line list available.
We also thank Renyue Cen for his timely refereeing and helpful comments.
This work was supported in part by the PSC, NCSA, and SDSC supercomputing
centers, by NASA theory grants NAGW-2422, NAGW-2523, NAG5-2882, and NAG5-3111,
by NASA HPCC/ESS grant NAG 5-2213,
and by the NSF under grants ATS90-18256, ASC 93-18185 and the Presidential
Faculty Fellows Program.
\clearpage
\section{Introduction}
The problem of transmission and storage of quantum states has received
a considerable amount of attention recently, owing to the flurry of
activity in the field of quantum computation~\cite{bib_reviews}
sparked by Shor's discovery of a quantum algorithm for
factoring~\cite{bib_shor}. In anticipation of physical realizations of
such computers (which still face major conceptual challenges), it is
necessary to extend to the quantum regime the main results of
Shannon's information theory~\cite{bib_shannon}, which provides limits
on how well information can be compressed, transmitted, and preserved.
In this spirit, the quantum analogue of the noiseless coding theorem
was obtained recently by Schumacher~\cite{bib_schum}. However, noisy
quantum channels are less well understood, mainly because quantum
noise is of a very different nature than classical noise, and the
notion of ``quantum information'' is still under discussion. Yet,
important results have been obtained concerning the correction of
errors induced by the decoherence of quantum bits via suitable quantum
codes. These error-correcting
codes~\cite{bib_shor1,bib_calder,bib_steane,bib_laflamme,bib_ekert,bib_bdsw,bib_knill,bib_calder1}
work on the principle that quantum information can be encoded in
blocks of qubits (codewords) such that the decoherence of any qubit
can be corrected by an appropriate code, much like the classical
error-correcting codes. Therefore, it is expected that a
generalization of Shannon's fundamental theorem to the quantum regime
should exist, and efforts towards such a proof have appeared
recently~\cite{bib_schum1,bib_schum2,bib_lloyd}. The capacity for the
transmission of {\em classical} information through quantum channels
was recently obtained by Hausladen {\it et al.}~\cite{bib_hausladen}
for the transmission of pure states, and by
Kholevo~\cite{bib_kholevo97} for the general case of mixed states.
When discussing quantum channels, it is important to keep in mind that
they can be used in two very different modes. On the one hand, one
may be interested in the capacity of a channel to transmit, or else
store, an {\em unknown} quantum state in the presence of quantum
noise. This mode is unlike any use of a channel we are accustomed to
in classical theory, as strictly speaking classical information is not
transmitted in such a use (no measurement is involved). Rather, such
a capacity appears to be a measure of how much {\em entanglement} can
be transmitted (or maintained) in the presence of noise induced by the
interaction of the quantum state with a ``depolarizing'' environment.
On the other hand, a quantum channel can be used for the transmission
of {\em known} quantum states (classical information), and the
resulting capacity (i.e., the classical information transmission
capacity of the quantum channel) represents the usual bound on the
rate of arbitrarily accurate information transmission. In this paper,
we propose a definition for the {\em von Neumann} capacity of a
quantum channel, which encompasses the capacity for processing quantum
as well as classical information. This definition is based on a
quantum mechanical extension of the usual Shannon mutual entropy to a
von Neumann mutual entropy, which measures quantum as well as
classical correlations. Still, a natural separation of the von
Neumann capacity into classical and purely quantum pieces does not appear to
be straightforward. This reflects the difficulty in separating
classical correlation from quantum entanglement (the ``quantum
separability'' problem, see, e.g., \cite{bib_horo} and references
therein). It may be that there is no unambiguous way to separate
classical from purely quantum capacity for all channels and all noise
models. The von Neumann capacity we propose, as it does not involve
such a separation, conforms to a number of ``axioms'' for such a
measure among which are positivity, subadditivity, concavity
(convexity) in the input (output), as well as the data processing
inequalities. We also show that the von Neumann capacity naturally
reverts to the capacity for classical information transmission through
noisy quantum channels of Kholevo~\cite{bib_kholevo97} (the Kholevo
capacity) if the unknown states are measured just before transmission,
or, equivalently, if the quantum states are {\em prepared}. In such a
use, thus, the ``purely quantum piece'' of the von Neumann capacity
vanishes. We stop short of proving that the von Neumann capacity can
be achieved by quantum coding, i.e., we do not prove the quantum
equivalent of Shannon's noisy coding theorem for the total capacity.
We do, however, provide an example where the von Neumann capacity
appears achievable: the case of noisy superdense coding.
In the next section we recapitulate the treatment of the {\em
classical} communication channel in a somewhat novel manner, by
insisting on the deterministic nature of classical physics with
respect to the treatment of information. This treatment paves the way
for the formal discussion of quantum channels along the lines of
Schumacher~\cite{bib_schum1} in Section III, which results in a
proposal for the definition of a von Neumann capacity for transmission
of entanglement/correlation that parallels the classical construction.
We also prove a number of properties of such a measure, such as
subadditivity, concavity/convexity, forward/backward quantum
data-processing inequalities, and derive a quantum Fano inequality
relating the loss of entanglement in the channel to the fidelity of
the code used to protect the quantum state. This proof uses an
inequality of the Fano-type obtained recently by
Schumacher~\cite{bib_schum1}. In Section IV we demonstrate that the
von Neumann capacity reduces to the recently obtained Kholevo
capacity~\cite{bib_kholevo97} if the quantum states are {\em known},
i.e., measured and ``kept in memory'', before sending them on. In
Section V then we apply these results directly to a specific example,
the quantum depolarizing channel~\cite{bib_channel}. This generic
example allows a direct calculation of all quantities involved.
Specifically, we calculate the entanglement/correlation processed by
the channel as a function of the entropy of the input and the
probability of error of the channel. We also show that this capacity
reverts to the well-known capacity for classical information
transmission in a depolarizing channel if {\em known} quantum states
are transmitted through the channel. In Section VI finally, we
interpret the von Neumann capacity in the context of superdense coding
and derive a quantum Hamming bound consistent with it.
\section{Classical channels}
The information theory of classical channels is well known since
Shannon's seminal work on the matter~\cite{bib_shannon}. In this
section, rather than deriving any new results, we expose
the information theory of classical channels in the light of the {\em
physics} of information, in preparation of the quantum treatment of
channels that follows. Physicists are used to classical laws of
physics that are {\em deterministic}, and therefore do not consider
noise to be an intrinsic property of channels. In other words,
randomness, or a stochastic component, does not exist {\em per se},
but is a result of incomplete measurement. Thus, for a physicist there
are no noisy channels, only incompletely monitored ones. As an
example, consider an information transmission channel where the
sender's information is the face of a coin before it is flipped, and
the receiver's symbol is the face of the coin after it is flipped.
Information theory would classify this as a useless channel, but for a
physicist it is just a question of knowing the initial conditions of
the channel {\em and the environment} well enough. From this, he can
calculate the trajectory of the coin, and by examining the face at the
received side infer the information sent by the sender. Classical
physics, therefore, demands that all {\em conditional} probability
distributions can be made to be {\em peaked}, if the environment,
enlarged enough to cover all interacting systems, is monitored. In
other words, $p_{i|j}=1$ or $0$ for all $i$, $j$: if the
outcome $j$ is known, $i$ can be inferred with certainty.
As a consequence, {\em all} conditional entropies can be made to
vanish for a closed system.
According to this principle, let us then construct the
classical channel. Along with the ensemble of source symbols $X$
(symbols $x_1,\cdots,x_N$ appearing with probabilities
$p_1,\cdots,p_N$), imagine an ensemble of received symbols $Y$. The
usual noisy channel is represented by the diagram on the left in
Fig.~\ref{fig_class}: the conditional entropy $H(X|Y)$ represents the
loss $L$ in the channel, i.e., the uncertainty of inferring $X$ from
$Y$, whereas $H(Y|X)$ stands for noise $N$ in the output,
which is unrelated to the error-rate of the channel.
\begin{figure}
\caption{Entropy Venn diagram for the classical channel $XY$ (left), and
its ``physical'' extension including the environment (right).}
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_class.ps,width=3.0in,angle=-90}}
\label{fig_class}
\vskip -0.25cm
\end{figure}
A channel for which $L=0$ is called a ``lossless'' channel (no
transmission errors occur), whereas $N=0$ characterizes a
``deterministic'' channel (the input unambiguously determines the
output). On the right-hand side in Fig.~\ref{fig_class}, we have
extended the channel to include the environment. All conditional
entropies are zero, and the noise and loss are simply due to
correlations of the source or received ensembles with an
environment, i.e., $L=H(X{\rm:}E|Y)$ and $N=H(Y{\rm:}E|X)$.
The capacity of the classical channel is obtained by
maximizing the mutual entropy between source and received symbols [the
information $I=H(X{\rm:}Y)$ processed by the channel] over all input
distributions:
\begin{eqnarray}
C = \max_{p(x)}\; I\;.
\end{eqnarray}
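For the binary symmetric channel with bit-flip probability $p$, for instance,
the maximum is attained for equiprobable inputs and yields the familiar result
$C=1+p\log p+(1-p)\log(1-p)$, i.e., one bit minus the entropy of the noise;
we will encounter this form again when known quantum states are sent through
the depolarizing channel (Section V.B).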
If the output of the channel $Y$ is
subjected to {\em another} channel (resulting in the output $Z$, say),
it can be shown that the information processed by the combined channel,
$H(X{\rm:}Z)$, cannot possibly be larger than the information processed
in the {\em first} leg, $H(X{\rm:}Y)$. In other words, any subsequent
processing of the output
cannot possibly increase the transmitted information. This is
expressed in the
data-processing inequality (see, e.g.,~\cite{bib_ash}):
\begin{eqnarray}
H(X{\rm:}Z)\leq H(X{\rm:}Y)\leq H(X) \label{dataproc}\;.
\end{eqnarray}
By the same token, a ``reverse'' data-processing inequality can be
proven, which implies that the information processed in the {\em second}
leg of the channel, $H(Y{\rm:}Z)$, must exceed the information
processed by the total channel, $H(X{\rm:}Z)$:
\begin{eqnarray}
H(X{\rm:}Z)\leq H(Y{\rm:}Z)\leq H(Z) \label{dataproc1}\;.
\end{eqnarray}
This inequality reflects microscopic time-reversal invariance: any
channel used in a forward manner can be used in a backward manner.
As far as
coding is concerned, the troublesome quantity is the loss $L$, while
the noise $N$ is unimportant.
Indeed, for a message of length $n$, the typical number of input
sequences for every output sequence is $2^{nL}$, making decoding
impossible. The principle of error-correction is to embed the messages
into {\em codewords}, that are chosen in such a way that the
conditional entropy of the ensemble of codewords {\em vanishes}, i.e.,
on the level of message transmission the channel is lossless. Not
surprisingly, there is then a relationship between the channel loss
$L$ and the probability of error $p_c$ of a {\em code} $c$
that is composed of $s$ codewords:
\begin{eqnarray}
L\leq H_2[p_c]+p_c\log(s-1)\;, \label{fanocl}
\end{eqnarray}
where $H_2[p]$ is the dyadic Shannon entropy
\begin{eqnarray}
H_2[p] = H_2[1-p] = -p\log p\,-\,(1-p)\log(1-p)\;.
\end{eqnarray}
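As a numerical illustration (the values are chosen purely for definiteness),
a code with $s=4$ codewords and error probability $p_c=0.1$ must satisfy
$L\leq H_2[0.1]+0.1\log3\approx0.63$ bits.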
Eq.~(\ref{fanocl}) is the Fano inequality (see, e.g.,~\cite{bib_ash}),
which implies, for example, that the loss vanishes
if the error of the code vanishes. Note that the noise of the
channel itself in general is not zero in this situation. Let us now turn to
quantum channels.
\section{Quantum channels}
\subsection{Information theory of entanglement}
Quantum channels have properties fundamentally different from the
classical channel just described owing to the superposition principle
of quantum mechanics and the non-cloning theorem that
ensues~\cite{bib_nocloning}. First and foremost, the ``input'' quantum
state, after interaction with an environment, is ``lost'', having
become the output state. Any attempt at copying the quantum state
before decoherence will result in a classical channel, as we
will see later. Thus, a joint probability for input and output symbols
does not exist for quantum channels. However, this is not essential as
the quantity of interest in quantum communication is {\em not} the
state of an isolated quantum system (a ``product state''), but the
degree of entanglement between one quantum system and another,
parameterized by their mutual entropy as shown below. A
single non-entangled quantum system (such as an isolated spin-1/2
state) carries no entropy and is of no interest for quantum
communication as it can be arbitrarily recreated at any time. Entangled
composite systems (such as Bell states) on the other hand are
interesting because the entanglement can be used for communication.
Let us very briefly recapitulate the quantum
information theory of
entanglement~\cite{bib_neginfo,bib_entang,bib_meas,bib_reality}.
For a composite quantum system $AB$, we can write relations between
von Neumann entropies that precisely parallel those written by Shannon for
classical entropies. Specifically, we can define the conditional entropy
of $A$ (conditional on the knowledge of $B$)
\begin{eqnarray}
S(A|B) = S(AB)-S(B)
\end{eqnarray}
via a suitable definition of a ``conditional'' density matrix
$\rho_{A|B}$. The latter matrix can have eigenvalues larger than
unity, revealing its non-classical nature and allowing conditional quantum
entropies to be {\em negative}~\cite{bib_neginfo}. Similarly, we can define
a ``mutual'' density matrix $\rho_{A{\rm:}B}$ giving rise to a mutual von
Neumann entropy
\begin{eqnarray}
S(A{\rm:}B) = S(A) + S(B) - S(AB)
\end{eqnarray}
which exceeds the usual bound obtained for mutual Shannon entropies
by a factor of two:
\begin{eqnarray}
S(A{\rm:}B) \le 2\,{\rm min}[S(A),S(B)]\;.
\end{eqnarray}
The latter equation demonstrates that quantum systems can be more
strongly correlated than classical ones: they can be {\em
supercorrelated}. These relations can be conveniently summarized by
entropy Venn diagrams (Fig.~\ref{fig_venn}a) as is usual in classical
information
theory. The extension to the quantum regime implies that negative
numbers can appear which are classically forbidden\footnote{
In classical entropy Venn diagrams, negative numbers can only
appear in the mutual entropy of three or more systems.}.
As an example, we show in Fig.~\ref{fig_venn}b the quantum entropies of Bell
states (which are fully entangled states of two qubits). These notions
can be extended to multipartite systems, and will be used throughout
the paper.
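As a concrete check of these relations, consider the Bell state
$|\Phi^+\rangle=(|00\rangle+|11\rangle)/\sqrt2$: tracing over either qubit
leaves a maximally mixed marginal, so $S(A)=S(B)=1$ and $S(AB)=0$, whence
$S(A|B)=S(B|A)=-1$ and $S(A{\rm:}B)=2$, the entries shown in
Fig.~\ref{fig_venn}b.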
\begin{figure}
\caption{(a) Entropy Venn diagram for a bipartite entangled quantum
system $AB$, depicting $S(AB)$ (total area), marginal entropies
[$S(A)$ viz. $S(B)$], conditional [$S(A|B)$ viz. $S(B|A)$] and mutual
[$S(A{\rm:}B)$] entropies. (b) Entropy diagram for a fully entangled
Bell-state.}
\vskip 0.3cm
\centerline{\psfig{figure=channel-fig1.ps,width=3.4in,angle=0}}
\label{fig_venn}
\vskip 0.25cm
\end{figure}
The degree of entanglement of a bipartite pure quantum state is
customarily indicated by the marginal entropy of one of its parts,
i.e., the von Neumann entropy of the density matrix obtained by
tracing the joint density matrix over the degrees of freedom of the
other part (the entropy of entanglement, see~\cite{bib_bdsw}).
However, since the parts of an entangled system
do not possess a state on their own, it takes up to twice the
marginal entropy of one of the parts to specify (in bits) the state of
entanglement. For example, it takes up to two bits to specify the
entanglement between two qubits (there are four Bell-basis
states). Thus, we propose to measure the entanglement of pure states
by the mutual entropy between the two parts, which takes values between 0 (for
non-entangled systems) and $2S$ (for entangled systems of
marginal entropy $S$ each). In order to avoid confusion with the previously
defined entropy of entanglement, we propose to call this quantity
the {\em mutual entanglement} (or simply von Neumann mutual entropy),
and denote it by the symbol $I_Q$:
\begin{eqnarray}
I_Q = S(A{\rm:}B)\;.
\end{eqnarray}
For pure entangled states, the mutual
entanglement $I_Q$ is just twice the entropy of entanglement,
demonstrating
that either is a good measure for the {\em degree} of entanglement,
but not necessarily for the absolute amount. Estimating the
entanglement of {\em mixed} states, on the other hand, is more
complicated, and no satisfying definition is available
(see~\cite{bib_bdsw} for the
most established ones). The quantum mutual entropy for mixed states
does {\em not} represent pure quantum entanglement, but rather
classical {\em and} quantum entanglement that is difficult to
separate consistently. For reasons that become more clear in the
following, we believe that the mutual
entanglement $I_Q$ between two systems is the most
straightforward generalization
of the mutual information $I$ of classical information theory, and
will serve as the vehicle to define a quantum/classical {\em von
Neumann} capacity for quantum channels.
\subsection{Explicit model}
In constructing a general quantum channel formally, we follow
Schumacher~\cite{bib_schum1}. A quantum mixed state $Q$ suffers
entanglement with an environment $E$ so as to lead to a new mixed
state $Q'$ with possibly increased or decreased entropy. In order to
monitor the
entanglement transmission, the initial mixed state $Q$ is ``purified''
by considering its entanglement with a ``reference'' system
$R$:
\begin{eqnarray}
|RQ\rangle=\sum_i\sqrt{p_i}\,|r_i,i\rangle \label{eq9}
\end{eqnarray}
where $|r_i\rangle$ are the $R$ eigenstates.
Indeed, this can always be achieved via a Schmidt decomposition.
Then, the mixed state $Q$ is simply obtained as a partial trace of the
pure state $QR$:
\begin{eqnarray}
\rho_Q = \tra_R[\rho_{QR}]=\sum_i p_i\,|i\rangle\langle i|\;.\label{eq10}
\end{eqnarray}
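For a single qubit with $\rho_Q=q\,|0\rangle\langle0|+(1-q)\,|1\rangle\langle1|$,
for instance, one such purification is
$|RQ\rangle=\sqrt q\,|r_0\,0\rangle+\sqrt{1-q}\,|r_1\,1\rangle$:
the joint state $QR$ is pure, while tracing over $R$ recovers $\rho_Q$.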
Also, the interaction with the environment
\begin{eqnarray}
QRE\stackrel {U_{QE}\otimes 1_R}\longrightarrow Q'R'E' \label{eq11}
\end{eqnarray}
now can be viewed as a channel to transmit the entanglement between $QR$ to the
system $Q'R'$. Here, $U_{QE}$ is the unitary operation entangling $QR$
with the environment $E$, which is initially in a pure state. This
construction is summarized in Fig.~\ref{fig_channel}.
\begin{figure}
\caption{Quantum network representation of a noisy quantum
channel. $R$ purifies the mixed state $Q$; the corresponding
entanglement is indicated by a dashed line.
\label{fig_channel}}
\vskip 0.25cm
\centerline{\psfig{figure=fig-channel.ps,width=2.5in,angle=-90}}
\vskip -0.25cm
\end{figure}
The
evolution of entropies in such a channel is depicted in
Fig.~\ref{fig_unitary}, where the entropy of the
reference state [which is the same as the entropy of $Q$ {\em before}
entanglement, $S(Q)=S(R)$] is denoted by $S$,
\begin{eqnarray}
S= -\sum_i p_i\log p_i\;, \label{eq12}
\end{eqnarray}
while the entropy of the quantum state $Q'$
after entanglement $S(Q')=S'$, and the entropy of the environment $S(E)=S_e$.
The latter was termed ``exchange entropy'' by Schumacher~\cite{bib_schum1}.
\begin{figure}
\caption{Unitary transformation entangling the pure environment
$|E\rangle$ with the pure system $|QR\rangle$. The reference system $R$ is not
touched by this transformation, which implies that no entropy can be exchanged
across the double solid lines in the diagram on the left.}
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig2.ps,width=3.2in,angle=-90}}
\label{fig_unitary}
\vskip -0.25cm
\end{figure}
Note that, as for any tripartite pure state, the entropy dia\-gram of the
entangled
state $Q'R'E'$ is uniquely fixed by three parameters, the marginal entropies
of $Q'$, $R'$, and $E'$ respectively, i.e., the numbers $S'$, $S$, and $S_e$.
Also, in any pure entangled diagram involving three systems, the
ternary mutual entropy [the center of the ternary diagram,
$S(Q'{\rm:}R'{\rm:}E')$],
is always zero~\cite{bib_entang,bib_meas,bib_reality}.
To make contact with the classical channel of the previous section,
let us define the {\em quantum loss} $L_Q$\footnote{We follow here
the nomenclature that ``quantum'' always means ``quantum including
classical'', rather than ``purely quantum'', in the same sense as
the von Neumann entropy is not just a purely quantum entropy. This
nomenclature is motivated by the difficulty to separate classical
from quantum entanglement.}:
\begin{eqnarray}
L_Q= S(R'{\rm:}E'|Q')=S_e+S-S'\;.
\end{eqnarray}
It represents the difference between the entropy acquired by the environment,
$S_e$, and the entropy change of $Q$, ($S'-S$), and thus stands for the
loss of entanglement in the quantum transmission. It
plays a central role in error correction as shown below and in Section III.D.
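Two limiting cases may help fix ideas. For a noiseless (identity) channel the
environment is left untouched, so that $S_e=0$ and $S'=S$, giving $L_Q=0$: no
entanglement is lost. Conversely, if the channel replaces a qubit $Q$ by a
maximally mixed state uncorrelated with $R$ (the 100\% depolarizing channel of
Section V), then $S'=1$ while $E'$ purifies $Q'R'$, so that
$S_e=S(Q'R')=S+1$ and hence $L_Q=2S$: the entire initial mutual entanglement
is lost to the environment.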
The entropy diagram in terms of
$S$, $S_e$, and $L_Q$ is depicted in Fig.~\ref{fig_loss}. From this diagram
we can immediately read off inequalities relating the loss $L_Q$ and the
entropies $S$ and $S_e$ by considering triangle
inequalities for quantum entropies~\cite{bib_araki}, namely
\begin{eqnarray}
0&\le&L_Q\le 2S \;\label{lossbound},\label{ineq1}\\
0&\le&L_Q\le 2S_e\label{ineq2}\;,
\end{eqnarray}
which can be combined to
\begin{eqnarray}
0\le L_Q\le 2 \min\,(S,S_e)\;.
\end{eqnarray}
We find therefore that the initial mutual entanglement $2S$ is split,
through the action of the environment, into a piece shared with $Q'$
[i.e., $S(Q'{\rm:}R')=2S-L_Q$],
and a piece shared with the environment (the remaining loss $L_Q$)
according to the relation
\begin{eqnarray}
S(R'{\rm:}Q')+S(R'{\rm:}E'|Q')=S(R'{\rm:}E'Q')=S(R{\rm:}Q)\;,
\end{eqnarray}
or equivalently
\begin{eqnarray}
I_Q +L_Q = 2S\;.
\end{eqnarray}
\begin{figure}
\caption{Entropy diagram summarizing the entropy relations between the
entangled systems $Q'$, $R'$, and $E'$. }
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_loss.ps,width=1.75in,angle=-90}}
\label{fig_loss}
\vskip -0.25cm
\end{figure}
Finally, we are ready to propose a definition for the
von Neumann capacity. Again, in analogy
with the classical construction, the von Neumann capacity $C_Q$ would
be the mutual entanglement processed by the channel (mutual von
Neumann entropy), maximized over
the density matrix of the input channel, i.e.,
\begin{eqnarray}
C_Q = \max_{\rho_Q} I_Q\;,\label{quantcap}
\end{eqnarray}
where $I_Q=S(R'{\rm:}Q')=S(R{\rm:}Q')$ is the entanglement processed
by the channel:
\begin{eqnarray}
I_Q=2S-L_Q\;.
\end{eqnarray}
From the bound (\ref{lossbound}) we find that
the entanglement processed by the channel is non-negative, and bounded
from above by the initial entanglement $2S$. An interesting situation
arises when the entanglement processed by the channel saturates
this upper bound. This is the case of the {\em lossless} quantum
channel, where $L_Q=0$.
It was shown recently by Schumacher and Nielsen~\cite{bib_schum2} that
an error-correction procedure meant to restore the initial quantum
state (and thus the initial entanglement $2S$) can only be successful
when $L_Q=0$. From Fig.~\ref{fig_loss} we can see that when $L_Q=0$,
$Q'$ is entangled {\em separately} with the reference state and the
environment, leading to the diagram represented in
Fig.~\ref{fig_lossless}. For this reason alone it is possible to
recover the initial entanglement between $Q$ and $R$ via interaction
with an ancilla $A$ (that can be viewed as a {\em second} environment
in a ``chained'' channel). The latter effects a transfer of the
entanglement between $Q'$ and $E'$ to entanglement between $E'$ and
$A$. This operation can be viewed as an ``incomplete'' measurement of
$Q'$ by $A$ which only measures the environment $E'$ while keeping
intact the entanglement of $Q'$ with $R$. It was shown
in~\cite{bib_schum2} that $L_Q=0$ is in fact a necessary {\em and}
sufficient condition for this to be feasible. Such a transfer of
entanglement corresponds to the quantum equivalent of error
correction, and will be discussed with reference to the quantum Fano
inequality in Section III.D.
\begin{figure}
\caption{Entanglement between $Q'$, $R'$, and $E'$ in the lossless quantum
channel}
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_lossless.ps,width=1.75in,angle=-90}}
\label{fig_lossless}
\vskip -0.25cm
\end{figure}
\subsection{Axioms for quantum information}
In the following, we present a number of reasonable ``axioms'' for a
quantum mutual information, and show that $I_Q$ defined above has the required
properties. These are:
\begin{itemize}
\item[(i)] non-negativity
\item[(ii)] concavity in $\rho_Q$ (for a fixed channel)
\item[(iii)] convexity in $\rho_Q'$ (for fixed $\rho_Q$)
\item[(iv)] subadditivity
\end{itemize}
These requirements for a quantum mutual entropy (``entanglement
processed by the channel'') are very natural and reflect the kind of
requirements that are put on classical channels.
The non-negativity of $I_Q$ is simply a consequence of the
subadditivity of quantum entropies. (Just like the mutual Shannon entropy,
the mutual quantum entropy is a non-negative quantity).
Concavity of quantum information in $\rho_Q$
[axiom (ii)] reflects that the information processed by a
channel with a mixture of quantum states $\rho_Q=\sum_i w_i\rho_Q^i$
(with $\sum_i w_i=1$) as
input should be larger than the average information processed by
channels that each have a mixture $\rho_Q^i$ as input, i.e.,
\begin{eqnarray}
I_Q(\rho_Q)\geq\sum_i w_i I_Q(\rho_Q^i)\;.
\end{eqnarray}
This is the quantum analogue of the concavity of the Shannon mutual
information $H(X{\rm:}Y)$ in the input probability distribution $p(x)$
for a fixed channel, i.e., fixed $p(y|x)$. The proof uses the fact
that, if the quantum operation achieved by the channel
is fixed, we have
\begin{eqnarray}
\rho'_{QE} &=& U_{QE} \left( \sum_i w_i \rho^i \otimes
|0\rangle\langle 0|\right)
U_{QE}^{\dagger} \nonumber\\
&=& \sum_i w_i U_{QE}(\rho^i \otimes |0\rangle\langle 0|)
U_{QE}^{\dagger} \nonumber\\
&=& \sum_i w_i \rho'^i_{QE}\;.
\end{eqnarray}
Therefore, using
\begin{eqnarray}
I_Q(\rho_Q) &=& S(R{\rm:}Q') \nonumber\\
&=& S(R)+S(Q')-S(RQ') \nonumber\\
&=& S(Q'E')+S(Q')-S(E') \nonumber\\
&=& S(Q'|E')+S(Q')
\label{eq-24}
\end{eqnarray}
the concavity of the quantum information in the input
results from the concavity of $S(Q'|E')$ in $\rho'_{QE}$
and from the concavity of $S(Q')$ in $\rho'_Q$~\cite{bib_wehrl}.
\par
Convexity of the processed information in $\rho_Q'$ [axiom
(iii)] states that, if the superoperator that takes
a fixed $\rho_Q$ into $\rho_Q'$ is such that
\begin{eqnarray}
\rho_Q'=\sum_j w_j \rho'^j_Q\;,
\end{eqnarray}
then
\begin{eqnarray}
I_Q(\rho_Q\to \rho_Q') \leq \sum_j w_j I_Q(\rho_Q\to \rho'^j_Q)\;.
\end{eqnarray}
Thus, the processed information of a channel that is a
``superposition'' of
channels (each used with probability $w_j$)
that result in $\rho_Q'$ cannot exceed the average of the
information for each channel. One has a similar property
for classical channels: the mutual information $H(X{\rm:}Y)$
is a convex function of $p(y|x)$ for a fixed input distribution $p(x)$.
The proof follows from noting that, if the input is fixed,
we have
\begin{eqnarray}
\rho'_{RQ}=\sum_j w_j \rho'^j_{RQ}\;.
\end{eqnarray}
Then, expressing the quantum information as
\begin{eqnarray}
I_Q(\rho_Q\to \rho_Q') = S(R{\rm:}Q') = S(R)-S(R|Q')\;,
\end{eqnarray}
and noting that $S(R)$ is constant, the concavity of
$S(R|Q')$ in $\rho'_{RQ}$ implies the convexity of the quantum
information in the output.
\par
Finally, the subadditivity of quantum information [axiom (iv)] is a
condition which ensures that the information processed by a joint
channel with input $\rho_{Q_1Q_2}$ is smaller or equal to the
information processed ``in parallel'' by two channels with input
$\rho_{Q_1}=\tra_{Q_2}(\rho_{Q_1Q_2})$ and
$\rho_{Q_2}=\tra_{Q_1}(\rho_{Q_1Q_2})$ respectively.
Thus, if $R$ is the reference system purifying the joint input
$Q_1Q_2$, $Q_1$ is purified by $RQ_2$ while $Q_2$ is purified by
$RQ_1$ (see Fig.~\ref{channel-figsub}).
\begin{figure}
\caption{Parallel channels as quantum network, in the derivation of
the subadditivity of mutual von Neumann entropies. The
entanglement between $Q_1$, $Q_2$, and the reference is indicated by a
dashed line.}
\vskip 0.25cm
\centerline{\psfig{figure=figsub-new.ps,width=2.0in,angle=-90}}
\label{channel-figsub}
\vskip -0.25cm
\end{figure}
The subadditivity of von Neumann mutual entropies for such a
channel can be written as
\begin{eqnarray} \label{eq_subadditiv}
S(R{\rm:} Q_1' Q_2') \leq S(RQ_2{\rm:}Q_1')+S(RQ_1{\rm:}Q_2')\;,
\end{eqnarray}
which can be read as
\begin{eqnarray}
I_{12}\leq I_1 + I_2
\end{eqnarray}
with the corresponding identifications, and mirrors the classical
inequality
\begin{eqnarray}
H(X_1X_2{\rm:}Y_1Y_2)\leq H(X_1{\rm:}Y_1)+H(X_2{\rm:}Y_2)
\end{eqnarray}
for two independent channels taking $X_1\to Y_1$ and $X_2\to Y_2$.
To prove inequality~(\ref{eq_subadditiv}), we
rewrite the quantum information of each channel
using Eq.~(\ref{eq-24}) and the fact that $E_1$ and $E_2$ are initially
in a {\em product} state.
Eq.~(\ref{eq_subadditiv}) then becomes
\begin{eqnarray}
&&S(Q_1'Q_2'|E_1'E_2')+S(Q_1'Q_2')\leq\nonumber \\
&&S(Q_1'|E_1')+S(Q_1')+S(Q_2'|E_2')+S(Q_2')\;.
\end{eqnarray}
Subadditivity of {\em conditional} entropies, i.e.,
\begin{eqnarray}
&&\hspace{-0.3cm}S(Q_1'Q_2'|E_1'E_2')\nonumber\\
&=&S(Q_1'|E_1'E_2')+S(Q_2'|E_1'E_2')-
\underbrace{S(Q_1'{\rm:}Q_2'|E_1'E_2')}_{\geq0}\nonumber\\
&\leq&S(Q_1'|E_1'E_2')+S(Q_2'|E_1'E_2')\nonumber\\
&\leq&S(Q_1'|E_1')-\underbrace{S(Q_1'{\rm:}E_2'|E_1')}_{\geq0}
+S(Q_2'|E_2')-\underbrace{S(Q_2'{\rm:}E_1'|E_2')}_{\geq0}\nonumber\\
&\leq& S(Q_1'|E_1')+ S(Q_2'|E_2')\;,
\end{eqnarray}
together with the subadditivity property of ordinary (marginal)
von Neumann entropies, proves Eq.~(\ref{eq_subadditiv}). The terms that
are ignored in the above inequality are positive due to strong
subadditivity. This property of subadditivity of the information
processed by quantum channels can be straightforwardly extended
to $n$ channels.
An alternative definition for the quantum information processed by a
channel, called ``coherent information'', has been proposed by
Schumacher and Nielsen~\cite{bib_schum2}, and by
Lloyd~\cite{bib_lloyd}.
This quantity $I_e=S(R'|E')=S-L_Q$ is not necessarily positive [axiom (i)], and
violates axioms (ii) and (iv), which leads to a {\em violation} of the
reverse data-processing inequality, while the
``forward'' one is respected~\cite{bib_schum2} (as opposed to the von
Neumann mutual entropy which observes both, see below).
The coherent information attempts to capture the ``purely'' quantum
piece of the processed information while separating out any classical
components. This separation appears to be at the origin of the
shortcomings mentioned above.
\subsection{Inequalities for quantum channels}
From the properties of the ``mutual entanglement'' $I_Q$
derived above, we can prove data-processing inequalities for
$I_Q$ which reflect probability conservation, as well as the Fano
inequality which relates the loss of a channel to the fidelity of a code.
\vskip 0.25cm
\noindent{\it (i) Data-processing}
\vskip 0.25cm
Assume that starting with
the entangled state $QR$, entanglement with environment $E_1$ produces
the mixed state $Q_1$. This output is used again as an input to
another channel, this time
entangling $Q_1$ with $E_2$ to obtain $Q_2$ (see Fig.~\ref{fig_dpi}).
\begin{figure}
\caption{Chaining of channels in the derivation of the data-processing
inequality. The output $Q_1$ is subjected to a second channel by entangling
with an environment $E_2$ independent from $E_1$, to give output $Q_2$.}
\vskip 0.25cm
\centerline{\psfig{figure=chain-new.ps,width=3in,angle=-90}}
\label{fig_dpi}
\vskip -0.25cm
\end{figure}
The quantum analogue of the (forward) data-processing inequality
(\ref{dataproc}) that holds
for mutual informations in classical channels involves the mutual
entanglements $S(R{\rm:}Q_1)$ and $S(R{\rm:}Q_2)$, and asserts that
the mutual entanglement between reference and output cannot be increased by
any further ``processing'':
\begin{eqnarray}
S(R{\rm:}Q_2)\leq S(R{\rm:}Q_1)\leq 2S\;.\label{qdatapr}
\end{eqnarray}
That such an inequality should hold is almost obvious from the
definition of the mutual entanglement, but a short proof is given below.
This proof essentially follows Ref.~\cite{bib_schum2},
and is based on the property of strong subadditivity applied to
the system $RE_1E_2$:
\begin{eqnarray}
S(R{\rm:}E_2|E_1)=S(R{\rm:}E_1E_2)-S(R{\rm:}E_1)\geq 0\;.\label{strongsub}
\end{eqnarray}
For the channel $Q\rightarrow Q_1$,
we see easily (see Fig.~\ref{fig_loss}) that
\begin{eqnarray}
S(R{\rm:}E_1) &=& S(R{\rm:}Q_1E_1)-S(R{\rm:}Q_1|E_1) \nonumber\\
& = & 2S-S(R{\rm:}Q_1)\;. \label{app1}
\end{eqnarray}
Similarly, considering $E_1E_2$ as the environment for the ``overall''
channel $Q\rightarrow Q_2$, we find
\begin{eqnarray}
S(R{\rm:}E_1E_2)=2S-S(R{\rm:}Q_2)\;. \label{app2}
\end{eqnarray}
Plugging Eqs.~(\ref{app1}) and (\ref{app2}) into the positivity condition
(\ref{strongsub}), we obtain the quantum data processing inequality,
Eq.~(\ref{qdatapr}), as claimed.
\par
The {\em reverse} quantum data-processing inequality implies
that the entanglement processed by the second leg of the
channel, $S(RE_1{\rm:}Q_2)$, must be larger than the entanglement
processed by the entire channel:
\begin{eqnarray}\label{eq36}
S(R{\rm:}Q_2) \leq S(R E_1{\rm:}Q_2)\leq S(R E_1 E_2{\rm:}Q_2)= 2 S(Q_2)\;.
\end{eqnarray}
The proof relies on strong subadditivity applied to $Q_2 E_1 E_2$:
\begin{eqnarray}
S(Q_2{\rm:}E_1|E_2)=S(Q_2{\rm:}E_1E_2)-S(Q_2{\rm:}E_2)\geq 0\;.
\label{strongsub2}
\end{eqnarray}
For treating the channel $Q_1\rightarrow Q_2$ (i.e., the ``second
leg''), we have to purify
the input state of $Q_1$, that is, consider $RE_1$ as the ``reference''.
Thus, we have
\begin{eqnarray}
S(Q_2{\rm:}RE_1) = 2 S(Q_2) - S(Q_2{\rm:}E_2)\;.
\end{eqnarray}
For the ``overall'' channel $Q\rightarrow Q_2$, we have
\begin{eqnarray}
S(Q_2{\rm:}R)=2 S(Q_2) - S(Q_2{\rm:}E_1 E_2)\;.
\end{eqnarray}
These two last equations together with Eq.~(\ref{strongsub2}),
result in the reverse quantum data-processing inequality, Eq.~(\ref{eq36}).
\par
From Eq.~(\ref{qdatapr}) we obtain immediately an inequality relating
the loss of entanglement after the first stage $L_1$ (we drop the
index $Q$ that indicated the quantum nature of the loss in this
discussion), with the overall loss,
$L_{12}$:
\begin{eqnarray}
0\leq L_1\leq L_{12}\;. \label{lossineq}
\end{eqnarray}
Physically, this implies that the loss $L_{12}$ cannot decrease from
simply chaining channels, just as in the classical case. As
emphasized earlier, the loss $L_1$ corresponds to the share of initial
entanglement that is irretrievably lost to the environment. Indeed, if
the environment cannot be accessed (which is implicit by calling it an
environment) the decoherence induced by the channel cannot be
reversed. Only if $L_1=0$ can this be achieved~\cite{bib_schum2}.
In view of this fact, it is natural to seek a quantum
equivalent to the classical Fano inequality~(\ref{fanocl}).
\vskip 0.25cm
\noindent{\it (ii) Fano inequality}
\vskip 0.25cm
To investigate this issue, let us consider the chained channel
above, where error correction has taken place via transfer of entanglement
with a second environment. Let us also recall the definition of
``entanglement fidelity'' of Schumacher~\cite{bib_schum1}, which is
a measure of how faithfully the dynamics of the channel has preserved
the initial entangled quantum state $QR$:
\begin{eqnarray}
F_e(QR,Q'R) = \langle QR|\,\rho_{Q'R}\,|QR\rangle\equiv F_e^{QQ'}\;.\label{fid}
\end{eqnarray}
Since this entanglement fidelity does not depend on the reference
system~\cite{bib_schum1}, we drop $R$ from $F_e$ from here on, as indicated
in Eq.~(\ref{fid}).
Naturally, the entanglement fidelity can be related to the
probability of error of the channel. The quantum analogue of the
classical Fano inequality should relate the fidelity of the {\em code}
(in our example above the fidelity between $QR$ and $Q_2R$, the
error-corrected system) to the loss of the error-correcting channel $L_{12}$.
The derivation of such an inequality is immediate
using the Fano-type inequality derived by Schumacher~\cite{bib_schum1},
which relates
the entropy of the environment of a channel $S(E')$
to the fidelity of entanglement,
\begin{eqnarray}
S(E')\leq H_2[F_e^{QQ'}]\,+\,(1-F_e^{QQ'})\log{(d_Qd_R-1)}\;,\label{eqfano}
\end{eqnarray}
where $d_Q$ and $d_R$ are the Hilbert-space dimensions of $Q$ and $R$
respectively, and $H_2[F]$ is again the dyadic Shannon entropy.
Let us apply this
inequality to an error-correcting channel (decoherence +
error-correction), i.e.,
the chained channel considered above. In that case, the environment is
$E_1E_2$, and the entanglement fidelity is now between $Q$ and $Q_2$, i.e.,
the fidelity of the {\em code}, and we obtain
\begin{eqnarray}
S(E_1E_2)\leq H_2[F_e^{QQ_2}]\,+\,(1-F_e^{QQ_2})\log{(d-1)}\;.
\end{eqnarray}
Here, $d=d_R\,d_{Q_2}$ can be viewed as the Hilbert space dimension of
the code (this is more apparent in superdense coding, discussed in
Section VI).
To derive the required relationship, we simply note that
\begin{eqnarray}
S(E_1E_2) \geq L_{12}/2
\end{eqnarray}
[this is Eq.~(\ref{ineq2}) applied to the composite channel]. This relates
the fidelity of the code $F_e^{QQ_2}$ to the loss $L_{12}$, yielding the Fano
inequality for a quantum code
\begin{eqnarray}
L_{12}\leq 2\left[H_2[F_e^{QQ_2}]+
\left(1-F_e^{QQ_2}\right)\log{(d-1)}\right]\;. \label{fano}
\end{eqnarray}
As we noticed throughout the construction of quantum channels, a factor
of 2 appears also in the quantum Fano inequality, commensurate with
the fact that the loss can be twice the initial entropy.
Inequality (\ref{fano}) puts an upper limit on the fidelity
of a code for any non-vanishing loss $L_{12}$.
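To give a feeling for the numbers (chosen here purely for illustration), a
single encoded qubit purified by a one-qubit reference has $d=4$, so an
entanglement fidelity $F_e^{QQ_2}=0.99$ is compatible with a loss of at most
$L_{12}\leq2\left[H_2[0.99]+0.01\log3\right]\approx0.19$ bits, while
$F_e^{QQ_2}=1$ forces $L_{12}=0$, consistent with the error-correction
condition of Ref.~\cite{bib_schum2}.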
\section{Classical use of quantum channel}
In recent papers~\cite{bib_hausladen,bib_kholevo97}, the capacity for
the transmission of {\em classical} information through quantum channels has
been discussed. Essentially, this capacity is equal to the
maximal accessible information $\chi$ in the system, known as the Kholevo
bound~\cite{bib_kholevo}.
What we show in the following is that the mutual entanglement
introduced in the previous section, i.e., the quantum mutual entropy
$S(R:Q')$ between the ``decohered'' quantum state $Q'$ and the
``reference'' state $R$, reduces to $\chi$ if the quantum state is
measured before it is transmitted, or, equivalently, if Q is prepared
by a classical ``preparer'' $X$. Let the system $QR$ be ``purified'' again
via a Schmidt decomposition as in Eq.~(\ref{eq9}). If we measure
$Q$ in its eigenbasis we can write
\begin{eqnarray}
|RXQ\rangle=\sum_i \sqrt{p_i}\,|r_i\,x_i\, i\rangle\;,
\end{eqnarray}
where $x_i$ are the eigenstates of $X$ (if $X$ is in state $x_i$, $Q$
is in state $i$ etc.). (Figure~\ref{fig_trip} summarizes the
relationship between the respective entropies.)
Naturally then, tracing over $R$ we obtain
\begin{eqnarray}
\rho_{XQ}=\sum_ip_i\,|x_i\rangle\langle x_i|\otimes \rho_i\label{eq28}
\end{eqnarray}
with $\rho_i=|i\rangle\langle i|$, and similarly for $\rho_{RQ}$.
\begin{figure}
\caption{Entanglement between $Q$, $R$, and the ancilla (or preparer)
$X$ after measurement of the initial state of $Q$ by $X$, but prior
to entanglement with the environment. The initial state of $Q$
(before decoherence) is kept in memory, as it were, by $X$ via
classical correlation with $Q$. }
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_trip.ps,width=1.25in,angle=-90}}
\label{fig_trip}
\vskip -0.25cm
\end{figure}
Thus, $X$ and $Q$ are {\em classically}
correlated: each state of the ``preparer'' $X$ represents a state of
$Q$, or alternatively, $X$ reflects (keeps in memory) the initial quantum
state of $Q$. If the entropy of the quantum system $Q$ before
transmission is $S$ (just like in the previous section), the mutual
entropy between $R$ and $Q$ (as well as between $X$ and $Q$) is also
$S$, unlike the value $2S$ found in the quantum use. Decoherence now
affects $Q$ by entangling it with the environment, just like earlier.
Thus,
\begin{eqnarray}
\rho_{XQ}\to\rho_{XQ'}=\sum_ip_i|x_i\rangle\langle x_i|\otimes \rho'_i
\end{eqnarray}
where
\begin{eqnarray}
\rho_i^\prime=\tra_E\left\{U_{QE}\,\left(\rho_i\otimes
|0\rangle\la0|\right)\,U^\dagger_{QE}\right\}\;, \label{eq31}
\end{eqnarray}
and we assumed again that the environment $E$ is in a fixed ``0'' state
before interacting with $Q$. Now our proof proceeds as before, except
that the loss in the ``classical'' channel obeys different
inequalities. The requirement that the entangling
operation $U_{QE}$ does not affect $X$ or $R$ now implies
\begin{eqnarray}
S(X'{\rm:}E'Q')=S(X{\rm:}Q)=S(R{\rm:}Q)= S \label{eq32}
\end{eqnarray}
(see Figure~\ref{classic-fig1}).
\begin{figure}
\caption{Unitary transformation entangling the ``preparer'' (or
alternatively, the classical ``memory'') $X$ with the pure
environment $E$ and the quantum system $Q$. Neither the reference
$R$ nor the preparer $X$ are affected by this operation. As the
ternary Venn diagram between $Q'$, $E'$ and $X'$ is not pure in this
case, mutual entropy between $Q'$ and $X'$ {\em can} be shared by
$E'$.
\label{classic-fig1}}
\vskip 0.25cm
\centerline{\psfig{figure=classic-fig1.ps,width=3.3in,angle=-90}}
\vskip -0.25cm
\end{figure}
Applying the chain rule to the left hand side of Eq.~(\ref{eq32}) leads to
\begin{eqnarray}
S(X'{\rm:}E'Q')= S(X{\rm:}Q')+S(X{\rm:}E'|Q')\;.\label{eq33}
\end{eqnarray}
The quantum mutual entropy between the preparer and the quantum state
after decoherence, $S(X{\rm:}Q')$, can be shown to be equal to the Kholevo
bound $\chi$ (see Ref.~\cite{bib_access}). With $L=S(X{\rm:}E'|Q')$
(the classical loss of the channel) we thus conclude from
Eqs.~(\ref{eq33}) and (\ref{eq32}) that
\begin{eqnarray}
S=\chi+L\;.
\end{eqnarray}
Note that $S(X{\rm:}Q')$ is equal to $S(R{\rm:}Q')$, the mutual
entanglement $I_Q$ introduced earlier, as
$S(X)=S(R)$ and $S(XQ')=S(RQ')$.
Thus,
\begin{eqnarray}
I_Q\equiv S(R:Q')=\chi
\end{eqnarray}
if known quantum states are sent through the
channel, as advertised. It was shown recently by Kholevo~\cite{bib_kholevo97}
that the maximum of the latter quantity indeed plays the role of
channel capacity for classical information transmission
\begin{eqnarray}
C=\max_{p_i}\,\left[S(\rho')-\sum_i p_iS(\rho'_i)\right]\equiv\max_{p_i}\,\chi
\end{eqnarray}
where $\{p_i\}$ is a probability distribution of symbols at the
source, and $\rho'_i$ are the (not necessarily orthogonal) quantum
states received at the output, with the probability
distribution $\{p_i\}$ and $\rho'=\sum_i p_i\rho'_i$.
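For mutually orthogonal pure output states $\rho'_i$, for instance,
$S(\rho'_i)=0$ and $S(\rho')=H[\{p_i\}]$, so that $\chi$ reduces to the
Shannon entropy of the source distribution and the classical noiseless result
is recovered; it is the mixedness and non-orthogonality of the $\rho'_i$
induced by the channel that reduce the capacity below this value.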
Thus, the quantity $C_Q$ that we propose as a capacity for
entanglement/correlation transmission reverts to the capacity for information
transmission $C$ if the unknown quantum states are {\em measured} before
transmission. This represents solid evidence in favor of our interpretation.
Let us now calculate
the quantities introduced here for a specific simple model of quantum noise.
\section{Quantum depolarizing channel}
The quantum depolarizing channel is an idealization of a quantum
storage and transmission process in which the stored quantum state can
undergo bit-flip and phase errors. This is not the most general
one-qubit channel\footnote{A more general depolarizing channel could
be constructed by allowing each of the possible errors a different
probability.}, but appears to be sufficient to examine a number of
interesting aspects of quantum communication.
\subsection{Quantum use}
Imagine a quantum state
\begin{eqnarray} |\Psi\rangle =
\alpha\,|0\rangle + \beta\,|1\rangle\;,
\end{eqnarray}
where the basis states of
the qubit can be taken to be spin-1/2 states polarized in the
$z$-direction, for example. (Specifically, we use the convention
$\sigma_z|1\rangle=|1\rangle$.) The depolarizing channel is
constructed in such a way that, due to an interaction with an
environment, the quantum state survives with probability $1-p$, but is
depolarized with probability $p/3$ by either a pure bit-flip, a pure
phase-error, or a combination of both:
\begin{eqnarray}
|\Psi\rangle &\stackrel{1-p}\longrightarrow & |\Psi\rangle\;,\nonumber\\
|\Psi\rangle& \stackrel{p/3}\longrightarrow & \sigma_x|\Psi\rangle=
\alpha\,|1\rangle+\beta\,|0\rangle\;, \nonumber\\
|\Psi\rangle &\stackrel{p/3}\longrightarrow & \sigma_z|\Psi\rangle=
-\alpha\,|0\rangle+\beta\,|1\rangle\;, \nonumber\\
|\Psi\rangle &\stackrel{p/3}\longrightarrow & \sigma_x\sigma_z|\Psi\rangle=-
\alpha\,|1\rangle+\beta\,|0\rangle \;,
\end{eqnarray}
where the $\sigma$ are Pauli matrices. Such an ``arbitrary'' quantum
state $\Psi$ can, without loss of generality, be considered to be a state
$Q$ that is entangled with a reference state $R$, such that
the marginal density matrix of $Q$ can be written as
\begin{eqnarray}
\rho_Q =q\,|0\rangle\la0|\,+\,(1-q)\,|1\rangle\la1|
\label{rhomix}
\end{eqnarray}
with entropy $S(\rho_Q)=-\tra\rho_Q\log\rho_Q=H_2[q]$ and $q$ a probability
$(0\leq q\leq1$). In other words, the coefficients $\alpha$ and
$\beta$ need not be complex numbers. Conversely,
we can start with such a mixed state at the input, and consider $QR$
as a {\em pure} quantum state that this mixed
state obtains from. For example,
\begin{eqnarray}
|QR\rangle = \sqrt{1-q}\,|10\rangle\, - \,\sqrt q\,|01\rangle\;. \label{qr}
\end{eqnarray}
Naturally then, the mixed state Eq.~(\ref{rhomix}) is obtained by simply
tracing over this reference state. Pure states with real coefficients
such as (\ref{qr}) are not general, but suffice for the depolarizing channel
as $R$ is always traced over.
Let us now construct a basis for $QR$ that interpolates between
completely independent and completely entangled states, and allows us
to choose the initial entropy of $Q$ with a single parameter $q$. We thus
introduce the orthonormal ``$q$-basis'' states
\begin{eqnarray}
|\Phi^-(q)\rangle &=& \sqrt{1-q}\,|00\rangle\,-\,\sqrt q\,|11\rangle\;, \nonumber \\
|\Phi^+(q)\rangle &=& \sqrt q\,|00\rangle\,+\,\sqrt{1-q}\,|11\rangle\;, \nonumber \\
|\Psi^-(q)\rangle &=& \sqrt{1-q}\,|10\rangle\,-\,\sqrt q\,|01\rangle\;, \nonumber \\
|\Psi^+(q)\rangle &=& \sqrt q\,|10\rangle\,+\,\sqrt{1-q}\,|01\rangle\;.
\end{eqnarray}
Note that for $q=0$ or 1, these states are product states,
while for $q=1/2$ they are completely entangled, and $\Psi^\pm(1/2)$
and $\Phi^\pm(1/2)$ are just the usual Bell basis states. The
possibility of quantum decoherence of these states is introduced by
entangling them with an environment in a pure state, taken to be of
the same Hilbert space dimension as $QR$ for simplicity, i.e., a
four-dimensional space for the case at hand. This is the
minimal realization of a depolarizing channel.
Let us assume that $QR$
(for definiteness) is initially in the state $|\Psi^-(q)\rangle$, and the
environment in a superposition
\begin{eqnarray}
|E\rangle &= &\sqrt{1-p}\,|\Psi^-(q)\rangle \nonumber \\
&+&\sqrt{p/3}\left(|\Phi^-(q)\rangle+|\Phi^+(q)\rangle+|\Psi^+(q)\rangle\right)\;.
\end{eqnarray}
The environment and $QR$ are then entangled by means of the unitary operator
$U_{QRE}=U_{QE}\otimes 1_R$, with
\begin{eqnarray}
U_{QE} &=
&1\otimes P_{\Psi^-}(q)+
\sigma_x\otimes P_{\Phi^-}(q)\nonumber\\
&+&(-i\sigma_y)\otimes P_{\Phi^+}(q)+
\sigma_z\otimes P_{\Psi^+}(q)\;,
\end{eqnarray}
where the $P_{\Phi}(q)$ and $P_{\Psi}(q)$ stand for projectors
projecting onto $q$-basis states. Note that the Pauli matrices act
only on the first bit of the $q$-basis states, i.e., the entanglement
operation only involves $Q$ and $E$. Depending on the entanglement
between $Q$ and $R$, however, this operation also affects the
entanglement between $R$ and $E$. Thus, we obtain the state
\begin{eqnarray}
\lefteqn{|Q^\prime R^\prime E^\prime\rangle = U_{QRE}|QR\rangle|E\rangle = }\nonumber \\
&& \sqrt{1-p}\,|\Psi^-_{QR}(q),\,\Psi^-_E(q)\rangle +
\sqrt{p/3}\left(|\Phi^-_{QR}(q)\;,\Phi^-_E(q)\rangle +\right.\nonumber\\
&&\left.|\Phi^+_{QR}(1-q)\;,\Phi^+_E(q)\rangle +
|\Psi^+_{QR}(1-q)\;,\Psi^+_E(q)\rangle\right)
\end{eqnarray}
on account of the relations
\begin{eqnarray}
\sigma_x |\Psi^-_{QR}(q)\rangle & = &
|\Phi^-_{QR}(q)\rangle \;, \\ (-i\,\sigma_y) |\Psi^-_{QR}(q)\rangle & = &
|\Phi^+_{QR}(1-q)\rangle \;,\\ \sigma_z |\Psi^-_{QR}(q)\rangle & = &
|\Psi^+_{QR}(1-q)\rangle\;,
\end{eqnarray}
and with obvious notation to distinguish the environment ($E$) and
quantum system ($QR$) basis states. The (partially depolarized)
density matrix
for the quantum system is obtained by tracing over the environment:
\begin{eqnarray}
\rho_{Q'R'} & = & \tra_E\left(|Q'R'E'\rangle\langle Q'R'E'|\right) =
(1-p)\,P_{\Psi^-}(q) +\nonumber \\
& & p/3\left[P_{\Phi^-}(q)+P_{\Phi^+}(1-q)+P_{\Psi^+}(1-q)\right]\;.
\end{eqnarray}
Its eigenvalues can be obtained to calculate the entropy:
\begin{eqnarray}
S_e(p,q)\equiv S(Q'R') = \nonumber \hspace{4.5cm}\\
H[\frac{2p}3(1-q),\frac{2pq}3,\frac12(1-\frac{2p}3+\Delta),
\frac12(1-\frac{2p}3-\Delta)]\;,
\end{eqnarray}
with $H[p_1,\ldots,p_4]$ the Shannon entropy, and
\begin{eqnarray}
\Delta = \left[(1-2p/3)^2-16/3\,p(1-p)\,q(1-q)\right]^{1/2}\;.
\end{eqnarray}
By tracing over the reference state we obtain the density matrix of
the quantum system after the interaction $\rho_{Q'}$, and its respective
entropy
\begin{eqnarray}
S'(p,q)\equiv S(Q')= H_2[q+\frac{2p}3(1-2q)]\;.
\end{eqnarray}
Together with the entropy of the reference state (which is unchanged
since $R$ was not touched by the interaction), $S(R')=S(R)=H_2[q]$,
this is enough to fill in the ternary entropy diagram reflecting the
dynamics of the channel, Fig.~\ref{fig_loss}. We thus find the
mutual entanglement processed by the channel:
\begin{eqnarray}
I_Q=S(Q'{\rm:}R)=2H_2[q]-L_Q(p,q)\;,
\end{eqnarray}
where the loss is
\begin{eqnarray}
L_Q(p,q)=H_2[q]-H_2[q+\frac{2p}3(1-2q)]+ S_e(p,q)\;.
\end{eqnarray}
The mutual entanglement is plotted in Fig.~\ref{fig_3d},
as a function of the error probability $p$ of the channel
and of the parameter $q$ which determines the initial entropy.
\begin{figure}
\caption{Mutual entanglement between the depolarized state $Q'$ and the
reference system $R'=R$, as a function of error $p$ and
parameter $q$. Note that the channel is 100\% depolarizing
at $p=3/4$. The concavity in $q$ [according to axiom (ii)] as well as
the convexity in $p$ [axiom (iii)] are apparent.}
\vskip 0.25cm
\centerline{\psfig{figure=newmutentpq.ps,width=3.5in,angle=0}}
\label{fig_3d}
\vskip -0.25cm
\end{figure}
The mutual entanglement is maximal when the entropy of the
source is maximal (as in the classical theory), i.e., $q=1/2$. Then:
\begin{eqnarray}
C_Q &=& \max_q\, I_Q\nonumber\\
&=& 2-S_e(p,1/2)
= 2-H_2[p]-p\,\log3\;. \label{depolcap}
\end{eqnarray}
In that case, the maximal rate of entanglement transfer is 2 bits
(error-free transfer, $p=0$). The capacity only vanishes at $p=3/4$,
i.e., the 100\% depolarizing channel. This is analogous to the
vanishing of the classical capacity of the binary symmetric channel at
$p=1/2$. As an example of such a channel, we shall discuss the
transmission of the entanglement present in a Bell state (one out of
four fully entangled qubit pairs) through a ``superdense coding''
channel in Section VI.A. The maximal mutual entanglement and minimal
loss implied by Eq.~(\ref{depolcap}) are plotted in
Fig.~\ref{fig_depol} as a function of $p$. This error rate $p$ can be
related to the fidelity of the channel by
\begin{eqnarray}
F_e^{Q'Q}= 1-p+\frac p3\,(1-2q)^2\;,
\end{eqnarray}
where $F_e^{Q'Q}$ is Schumacher's fidelity of entanglement introduced earlier.
Note that this implies that the Fano inequality Eq.~(\ref{eqfano}) is
saturated at $q=1/2$ for any $p$.
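The expressions above are straightforward to evaluate numerically. As an
illustrative sketch (the use of Python and all function names here are ours,
not part of the analysis), the following code computes $I_Q(p,q)$ from
$S_e(p,q)$, $S'(p,q)$ and the loss, and checks that at $q=1/2$ it agrees with
Eq.~(\ref{depolcap}), giving 2 bits at $p=0$ and zero at $p=3/4$:
\begin{verbatim}
import numpy as np

def H(probs):
    # Shannon entropy (base 2) of a probability vector; 0 log 0 := 0
    p = np.array([x for x in probs if x > 0.0])
    return float(-(p * np.log2(p)).sum())

def S_e(p, q):
    # entropy of Q'R' (= exchange entropy of the environment)
    d2 = (1 - 2*p/3)**2 - (16/3)*p*(1 - p)*q*(1 - q)
    delta = np.sqrt(max(d2, 0.0))   # guard against round-off
    return H([2*p/3*(1 - q), 2*p*q/3,
              0.5*(1 - 2*p/3 + delta), 0.5*(1 - 2*p/3 - delta)])

def I_Q(p, q):
    # mutual entanglement S(R:Q') = 2 H_2[q] - L_Q(p,q)
    S  = H([q, 1 - q])                                        # S(R) = S(Q)
    Sp = H([q + 2*p/3*(1 - 2*q), (1 - q) - 2*p/3*(1 - 2*q)])  # S(Q')
    L  = S - Sp + S_e(p, q)                                   # quantum loss
    return 2*S - L

# check against C_Q(p) = 2 - H_2[p] - p log2(3) at q = 1/2
for p in (0.0, 0.1, 0.75):
    print(p, I_Q(p, 0.5), 2 - H([p, 1 - p]) - p*np.log2(3))
\end{verbatim}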
\begin{figure}
\caption{Maximal entanglement transfer $C(p)$ and minimal loss $L(p)$
as a function of the error probability $p$.}
\vskip 0.25cm
\centerline{\psfig{figure=capacity.ps,width=2.5in,angle=90}}
\label{fig_depol}
\vskip -0.25cm
\end{figure}
\subsection{Classical use}
Now, instead of using the channel to transmit entanglement (sending
unknown quantum states), one could equally
well use it to send classical information (known quantum states) as
outlined in section IV. Here, we calculate the capacity for the
transmission of classical information through the quantum depolarizing
channel and verify that the result is equal to the value obtained by
Calderbank and Shor~\cite{bib_calder} using the Kholevo theorem.
Before entanglement with the environment, let us then measure the mixed state
$Q$ via an ancilla $X$, after which $Q$ and $X$ are classically correlated,
with mutual entropy $H_2[q]$.
Note that this operation leads to an entangled triplet $QRX$
at the outset, as in Fig.~\ref{fig_trip}, with $S=H_2[q]$.
We now proceed with the calculation
as before. The basis states for the system $|QXR\rangle$ are then simply
\begin{eqnarray}
|\Phi_X^-(q)\rangle &=& \sqrt{1-q}\,|000\rangle\,-\,\sqrt q\,|111\rangle\;, \nonumber \\
|\Phi^+_X(q)\rangle &=& \sqrt q\,|000\rangle\,+\,\sqrt{1-q}\,|111\rangle\;, \nonumber \\
|\Psi^-_X(q)\rangle &=& \sqrt{1-q}\,|110\rangle\,-\,\sqrt q\,|001\rangle\;, \nonumber \\
|\Psi^+_X(q)\rangle &=& \sqrt q\,|110\rangle\,+\,\sqrt{1-q}\,|001\rangle\;,
\end{eqnarray}
where we used the index $X$ on the basis states to distinguish them
from the two-qubit basis states introduced earlier. The entanglement operation
is as before, with a unitary operator acting on $Q$ and $E$ only.
Because of the additional trace over the ancilla $X$, however,
we now find for the density matrix $\rho_{Q'R'}$:
\begin{eqnarray}
\rho_{Q'R'}
& = & (1-2p/3)\left[\,(1-q)|10\rangle\la10|\,+\,q|01\rangle\langle 01|\,\right]\nonumber\\
&+&2p/3\left[\,(1-q)|00\rangle\la00|\,+\,q|11\rangle\la11|\,\right]\;.
\end{eqnarray}
Consequently, we find for the mutual information transmitted through the
channel
\begin{eqnarray}
I = S(Q'{\rm:}R)=H_2[q]-L(p,q)\;,
\end{eqnarray}
with the (classical) loss of information
\begin{eqnarray}
L(p,q) &=&H[\frac{2p}3(1-q),\frac{2p}3q,
(1-\frac{2p}3)(1-q),(1-\frac{2p}3)q]\nonumber\\
&-&H_2[q+\frac{2p}3(1-2q)] \;.
\end{eqnarray}
Maximizing over the input distribution as before, we obtain
\begin{eqnarray}
C= \max_q S(Q'{\rm:}R) = 1-H_2[2p/3]\;, \label{classcap}
\end{eqnarray}
the result derived recently for the depolarizing channel simply from
using the Kholevo theorem~\cite{bib_calder}. Note that
Eq.~(\ref{classcap}) is just the Shannon capacity of a binary symmetric
channel~\cite{bib_ash}, with a bit-flip probability of $2p/3$ (of the
three quantum error ``syndromes'', only two are classically detectable
as bit-flips).
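As a simple numerical illustration of Eqs.~(\ref{depolcap}) and
(\ref{classcap}), the two capacities can be tabulated side by side. The
short script below is only meant as such an illustration; the function
and variable names are ours and are not part of the original derivation.
\begin{verbatim}
# Tabulate C_Q(p) = 2 - H2(p) - p*log2(3)   [Eq. (depolcap)]
# and      C(p)   = 1 - H2(2p/3)            [Eq. (classcap)]
import numpy as np

def H2(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)   # avoid log(0) at the endpoints
    return -x*np.log2(x) - (1 - x)*np.log2(1 - x)

for p in np.linspace(0.0, 0.75, 7):
    C_Q = 2 - H2(p) - p*np.log2(3)
    C_cl = 1 - H2(2*p/3)
    print(f"p = {p:5.3f}   C_Q = {C_Q:5.3f}   C = {C_cl:5.3f}")
\end{verbatim}
Both expressions vanish only at $p=3/4$, the 100\% depolarizing channel.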
\section{Interpretation}
\subsection{Quantum capacity and superdense coding}
The interpretation of the capacity suggested here as a quantum
mechanical extension of the classical construction
can be illustrated in an
intuitive manner with the example of the depolarizing channel
introduced above. The idea is that $I_Q$ reflects the capacity for
transmission of quantum mutual entropy (entanglement and/or classical
information) but that the amount transferred in a particular channel
depends on how this channel is used.
A particularly elegant channel
that uses $I_Q$ to its full extent is the noisy ``superdense coding''
channel. There, the entanglement between sender and receiver is used
to transmit two bits of classical information by sending just {\em
one} quantum bit~\cite{bib_superdense,bib_neginfo}. In a general
superdense coding scheme, the initial state $QR$ is one of a set of
entangled states conditionally on classical bits $C$.
This situation
can be related to our previous discussion by noting that all
entropies appearing there are to be understood as
{\em conditional} on the classical bits $C$ that
are to be sent through the channel as shown in Fig.~\ref{fig-super}. The
von Neumann capacity introduced above is then just
\begin{eqnarray}
I_Q=S(R:Q'|C)\;. \label{eq-81}
\end{eqnarray}
It is not immediately obvious that this von Neumann capacity is equal
to the {\em classical} capacity between preparer (usually termed
Alice) and the receiver (Bob). However, it is not difficult to
prove [using the fact that $S(R{\rm:}Q)=S(R{\rm:}C)=S(Q{\rm:}C)=0$] that
Eq.~(\ref{eq-81}) is in fact equal to the maximal
amount of classical information about $C$ extractable from $RQ'$
(after $Q$ decohered),
which is\footnote{That the quantum mutual entropy between a preparer and
a quantum system is an upper bound to the amount of
classical information obtainable by measuring the quantum system
(the Kholevo bound) is shown in Ref.~\cite{bib_access}.}
\begin{eqnarray}
\chi=S(RQ':C)\;.
\end{eqnarray}
Thus, in this example the amount of entanglement processed in a channel
can be viewed as the amount of {\em classical} information about the
``preparer'' of the entangled state $QR$. This amount of information
can reach {\em twice} the entropy of $Q$ (2 bits in standard
superdense coding), which is classically impossible.
(The superdense coding and
teleportation channels will be discussed in detail elsewhere).
\begin{figure}
\caption{Quantum Venn diagram for the noisy superdense coding
channel before decoherence. Conditionally on the classical bits $C$,
$QR$ is in a pure entangled state described by a Venn diagram of the
form $(-S,2S,-S)$. Note that no information about $C$ is contained
in $R$ or $Q$ {\em alone}, i.e., $S(C{\rm:}R)=S(C{\rm:}Q)=0$.
\label{fig-super}}
\vskip 0.25cm
\centerline{\psfig{figure=fig-super.ps,width=1.25in,angle=-90}}
\vskip -0.25cm
\end{figure}
Having established this relation between superdense coding and the
general quantum channels treated here, let us imagine that the qubit
that is sent through the channel (and which is ``loaded'' with
entanglement) is subject to the depolarizing noise of the previous
section. Indeed, if $p=0$ the two classical bits can be decoded
perfectly, achieving the value of the capacity. It has been argued
recently~\cite{bib_neginfo} that this can be understood by realizing
that besides the qubit that is sent forwards in time in the channel,
the entanglement between sender and receiver can be viewed as an
antiqubit sent {\em backwards} in time (which is equivalent to a qubit
sent forwards in time if the appropriate operations are performed on
it in the future). Thus, the quantum mechanics of superdense coding
allows for the time-delayed (error-free) transmission of information,
which shows up as excessive capacity of the respective channel. On the
other hand, it is known that (for un-encoded qubits) superdense coding
becomes impossible if $p\approx0.189$, which happens to be the precise
point at which $I_Q=1$. This is related to the fact that at this point
the ``purification'' of ``noisy'' pairs becomes impossible.
However, the capacity of this channel is not zero. While no
information can be retrieved ``from the past'' in this case, the
single qubit that is sent through the channel still carries
information; indeed, it shares one bit of mutual entropy with the qubit
stored by the receiver. Clearly, this is still a quantum channel: if
it were classical, the transmission of one bit could not take place
with unit rate and perfect reliability, due to the noise level
$p=0.189$. As the receiver possesses both this particle and the one
that was shared earlier, he can perform joint measurements (in the
space $Q'R$) to retrieve at least one of the two classical bits.
An extreme example is the
``dephasing'' channel, which is a depolarizing channel with only
$\sigma_z$-type errors, affecting the phase of the qubit.
As is well known, classical
bits are unaffected by this type of noise, while quantum
superpositions are ``dephased''. The channel becomes useless (for the
storage of superpositions) at $p=0.5$, yet measuring the qubit yields
one {\em classical} bit in an error-free manner. A calculation of
$\max_q S(R:Q')$ for this channel indeed yields
\begin{eqnarray}
I_Q(p)=2-H_2[p]\;.
\end{eqnarray}
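This result can also be read off directly; the following is a minimal
sketch, assuming that the $\sigma_z$ error acts with probability $p$ on
one member of a maximally entangled pair ($q=1/2$):
\begin{eqnarray}
\rho_{Q'R} &=& (1-p)\,|\Phi^+\rangle\langle\Phi^+|
\,+\, p\,(\sigma_z\otimes 1)\,
|\Phi^+\rangle\langle\Phi^+|\,(\sigma_z\otimes 1) \nonumber \\
&=& (1-p)\,|\Phi^+\rangle\langle\Phi^+|
\,+\, p\,|\Phi^-\rangle\langle\Phi^-| \;, \nonumber
\end{eqnarray}
so that $S(Q'R)=H_2[p]$ while $S(Q')=S(R)=1$, and therefore
$S(R{\rm:}Q')=2-H_2[p]$.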
Thus, in this limiting case it appears possible to separate the
classical ($I=1$) from the purely quantum capacity. However, it may
well be that this cannot be achieved in general. Below, we
show that such an ``excessive'' von Neumann capacity (as in superdense
coding) is consistent with a commensurate quantum Hamming bound.
\subsection{Quantum Hamming bounds}
Classically, the Hamming
bound~\cite{bib_ash} is an upper bound on the number $s$ of codewords
(bit-strings of length $n$) for a code to correct $t$ errors:
\begin{eqnarray}
s\,\sum_{i=0}^t {n \choose i}\le 2^n\;. \label{classham}
\end{eqnarray}
This is a necessary (but not sufficient) condition for error-free
coding, which reflects the necessary space to accommodate all the
codewords and associated descendants for all error syndromes.
For $s$ codewords coding for $k$ bits ($s=2^k$), we can
consider the asymptotics of (\ref{classham}) in the limit of
infinitely long messages ($n\rightarrow\infty$), and find that the
rate of error-free transmission is limited by
\begin{eqnarray}
R\le - \frac1n\log \sum_{i=0}^{pn}{n \choose i}
\left(\frac12\right)^i\left(\frac12\right)^{n-i}
\end{eqnarray}
where $R=k/n$ is the transmission rate and $p=t/n$ is the asymptotic
probability of error.
Using
\begin{eqnarray}
\lim_{n\to\infty} &-&\frac1n\log\left\{\sum_{i=0}^{pn} {n \choose i}
\,r^i\,(1-r)^{n-i}\right\} \nonumber \\
&=& p\,\log\frac pr\, + \, (1-p)\,\log\frac{1-p}{1-r}\nonumber\\
&\equiv&H(p,1-p\,\|\,r,1-r)\;,
\end{eqnarray}
where $H(p,1-p\,\|\,r,1-r)$ is the {\em relative} entropy between the
probability distributions $p$ and $r$, we can write
\begin{eqnarray}
R\le H(p,1-p\,\|\,1/2,1/2)=1-H_2(p)\;.
\end{eqnarray}
The relative entropy thus turns out to be just the classical capacity
of the channel, and measures the ``distance''
of the error-probability
of the channel relative to the ``worst case'', i.e., $p=1/2$. Note
that relative entropies are positive semi-definite.
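The finite-$n$ approach to this limit can also be checked numerically.
The short script below (a minimal sketch; the notation is ours) evaluates
the rate bound for $p=0.1$ and compares it with $1-H_2(p)$:
\begin{verbatim}
# Check: 1 - (1/n) log2 [ sum_{i<=pn} C(n,i) ]  ->  1 - H2(p)
from math import comb, log2

def H2(x):
    return -x*log2(x) - (1 - x)*log2(1 - x)

p = 0.1
for n in (100, 1000, 10000):
    s = sum(comb(n, i) for i in range(int(p*n) + 1))
    bound = 1 - log2(s)/n
    print(n, round(bound, 4), round(1 - H2(p), 4))
\end{verbatim}
The bound approaches $1-H_2(0.1)\simeq 0.53$ from above as $n$ grows.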
For quantum channels, the standard quantum Hamming bound for
non-degenerate (orthogonal) codes is written
as~\cite{bib_laflamme,bib_ekert,bib_bdsw}
\begin{eqnarray}
2^k\,\sum_{i=0}^t 3^i{n \choose i}\le 2^n\;,
\end{eqnarray}
which expresses that the number of orthogonal states identifying the
error syndromes on the $2^k$ different messages must be smaller than
$2^n$, the dimension of the Hilbert space of the quantum
state $Q$ ($n$ qubits). In the limit of large $n$, this translates
into an upper bound for the rate of non-degenerate quantum codes
\begin{eqnarray}
R\le -\frac1n\log\left\{ \sum_{i=0}^{pn}{n \choose i}\left(\frac34\right)^i
\left(\frac14\right)^{n-i} \right\} -1 \;,
\end{eqnarray}
which can (as in the classical case) be written in terms of a relative
entropy
\begin{eqnarray}
R\le H(p,1-p\,\|\,3/4,1/4)\,-\,1\,=\,1-S_e(p)\;.\label{usualqhb}
\end{eqnarray}
Thus, the usual quantum Hamming bound limits the rate of
non-degenerate quantum codes by
the capacity based on ``coherent information'' proposed
in~\cite{bib_schum2,bib_lloyd}, which is thought of as the ``purely quantum''
piece of the capacity.
Note that the positivity of
relative entropy does {\em not} in this case guarantee such a capacity
to be positive, which may just be a reflection of the
``inseparability'' of the von Neumann capacity.
The quantum Hamming bound shown above relies on coding the error
syndromes only into the quantum state $Q$ that is processed, or, in
the case of superdense coding, sent through the noisy channel. As we
noted earlier, however, a quantum system that is entangled does not,
as a matter of principle, have a state on its own. Thus, the entangled
reference system $R$ {\em necessarily} becomes part of the quantum
system, even if it is not subject to decoherence. Thus, the Hilbert
space available for ``coding'' automatically becomes as large as $2n$,
the combined Hilbert space of $Q$ and $R$. This is most obvious again
in superdense coding, where the ``decoding'' of the information
explicitly involves joint measurements of the decohered $Q'$ {\em and}
the ``reference'' $R$, shared between sender and receiver (in a
noise-free manner).
The corresponding {\em
entanglement} quantum Hamming bound therefore can be written by
remarking that while the coding space is $2n$, only $n$ qubits are
sent through the channel, and thus
\begin{eqnarray}
2^k\,\sum_{i=0}^t 3^i{n \choose i}\le 2^{2n}\;.
\end{eqnarray}
Proceeding as before, the rate of such quantum codes is limited by
\begin{eqnarray}
R\le H(p,1-p\,\|\,3/4,1/4)\,=\,2-S_e(p)\;, \label{entham}
\end{eqnarray}
the von Neumann capacity $C_Q$ for the depolarizing channel
proposed in this paper, Eqs.~(\ref{quantcap}) and (\ref{depolcap}).
The latter is always positive, and represents the
``distance'' between the error probability $p$ of the channel and the
worst-case error $p=3/4$ (corresponding to a 100\% depolarizing
channel), in perfect analogy with the classical
construction. Eq.~(\ref{entham}) thus guarantees the {\em weak
converse} of the quantum fundamental theorem: that no code can be
constructed that maintains a rate larger than the capacity $C_Q$ with a
fidelity arbitrarily close to one.
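For completeness, the relative-entropy identity underlying
Eqs.~(\ref{usualqhb}) and (\ref{entham}) can be verified directly
(all logarithms are base 2):
\begin{eqnarray}
H(p,1-p\,\|\,3/4,1/4) &=& p\,\log\frac{4p}{3}
\,+\,(1-p)\,\log\left[4(1-p)\right] \nonumber \\
&=& 2 - H_2[p] - p\,\log 3 \;=\; 2-S_e(p)\;, \nonumber
\end{eqnarray}
which makes explicit that the bound (\ref{entham}) exceeds the
non-degenerate bound (\ref{usualqhb}) by exactly one bit per
transmitted qubit.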
\section{Conclusions}
We have shown that the classical concept of information transmission
capacity can be extended to the quantum regime by defining a von
Neumann capacity as the maximum mutual von Neumann entropy between the
decohered quantum system and its reference. This mutual von Neumann
entropy, that describes the amount of information---classical and/or
quantum---processed by the channel, obeys ``axioms'' that any measure
of information should conform to. As for any quantum extension, the
von Neumann capacity reverts to its classical counterpart when the
information is ``classicized'' (i.e., it reverts to the Kholevo
capacity when measured or prepared states are sent), and ultimately to
the Shannon capacity if all quantum aspects of the channel are ignored
(i.e., if orthogonal states are sent and measured). Thus, the von
Neumann capacity of a channel can only vanish when the classical
capacity is also zero, but it can be excessive as entanglement allows
for superdense coding. In order to take advantage of this, however,
both the quantum system that decoheres {\em and} the reference system
it is entangled with need to be accessible. In practical quantum
channels this appears to be impossible, and the rate of practical codes must
then be considerably smaller than the von Neumann capacity. Yet,
because of the inseparability of entangled states, a consistent
definition of channel capacity {\em has} to take into account the full
Hilbert space of the state. Whether a capacity can be
defined {\em consistently} that characterizes the ``purely'' quantum
component of a channel is still an open question.
\acknowledgements
We would like to thank John Preskill and the members of the QUIC group
at Caltech for discussions on the depolarizing channel, as well as
Howard Barnum and Michael Nielsen for discussions during the Quantum
Computation and Quantum Coherence Program at the ITP in Santa Barbara,
where most of this work was done. This research was supported in part
by NSF Grant Nos. PHY 94-12818 and PHY 94-20470 at the Kellogg
Radiation Laboratory, and Grant No. PHY 94-07194 at the ITP in Santa
Barbara.
\section{Introduction}
Pure SU(2) Yang-Mills theory in $3+1$ dimensions
does not possess static, finite energy solutions.
In contrast, SU(2) Yang-Mills-Higgs (YMH) theory
possesses static, finite energy solutions,
and so do SU(2) Einstein-Yang-Mills (EYM)
and Yang-Mills-dilaton (YMD) theory.
YMH theory with a triplet Higgs field
contains a stable static, spherically symmetric solution,
the 't Hooft-Polyakov monopole \cite{thooft},
whereas YMH theory with a doublet Higgs field
contains an unstable static, spherically symmetric solution
\cite{dhn,bog,km}.
This solution represents the electroweak sphaleron
in the limit of vanishing Weinberg angle \cite{km,kkb}.
For large Higgs boson masses the theory contains in addition
a sequence of sphaleron solutions (without parity reflection symmetry)
\cite{kb1,yaffe}.
EYM theory possesses a sequence of unstable static, spherically
symmetric solutions for any finite value of the
coupling constants \cite{bm,strau1,volkov1}.
Within the sequence the solutions are
labelled by the number of nodes $k$ of the
gauge field function.
When the dilaton field is coupled to the system,
the solutions persist in the resulting
Einstein-Yang-Mills-dilaton (EYMD) theory
\cite{don,lav2,maeda,bizon3,neill,kks3}.
Decoupling gravity leads to YMD theory
with the corresponding sequence of unstable static,
spherically symmetric solutions
\cite{lav1,bizon2}.
In YMH theory, beside the 't Hooft-Polyakov monopole,
there exist multimonopoles with magnetic charge $m=n/g$,
where $n>1$ is the topological charge or winding number
and $g$ is the gauge coupling constant.
The spherically symmetric 't Hooft-Polyakov monopole
has $n=1$.
Following a pioneering numerical study \cite{rr},
axially symmetric multimonopoles have been
obtained analytically in the Prasad-Sommerfield limit
\cite{forg}.
In this limit the energy of the multimonopoles satisfies
the Bogomol'nyi bound
$E=4 \pi n \langle \Phi \rangle /g$.
Similarly, beside the electroweak sphaleron,
which carries Chern-Simons number $N_{CS}=1/2$,
there exist axially symmetric multisphalerons in YMH theory
with Chern-Simons number $N_{CS}=n/2$
\cite{kk}.
So it is natural to ask whether analogous
axially symmetric solutions also exist
in EYM or YMD theory.
In this letter we construct
multisphaleron solutions and their excitations
in YMD theory.
The appropriate axially symmetric ansatz for the multisphaleron
solutions is analogous to the ansatz in YMH theory
\cite{rr,kk,man,kkb}.
Like the YMH multisphalerons,
the YMD multisphalerons are labelled
by an integer $n$,
which represents a winding number with respect to the azimuthal angle
$\phi$. While $\phi$ covers the full trigonometric circle once,
the fields wind $n$ times around.
For $n=1$, spherical symmetry and the known sequence of
YMD sphaleron solutions are recovered.
For each value of $n$ we find a sequence of axially
symmetric solutions, which can be labelled by the number of
nodes $k$ of the gauge field functions,
analogous to the spherically symmetric case.
For the limiting solutions,
obtained for $k \rightarrow \infty$,
we give an analytic expression.
The energies of the limiting solutions
satisfy a Bogomol'nyi type relation $E \propto n$,
which represents an upper bound for the energies
of the solutions of the $n$-th sequence.
In section 2 we briefly review the lagrangian,
discuss the ansatz and present the resulting energy functional.
In section 3 we exhibit the multisphaleron
solutions with $n \le 4$ and $k \le 4$.
We discuss the limiting solutions in section 4
and present our conclusions in section 5.
\section{\bf Axially symmetric ansatz}
Let us consider the lagrangian of YMD theory
\begin{equation}
{\cal L} = \frac{1}{2} ( \partial_\mu \Phi \partial^\mu \Phi )
-\frac{1}{2}e^{2 \kappa \Phi} {\rm Tr}( F_{\mu\nu} F^{\mu\nu})
\ \end{equation}
with dilaton field $\Phi$,
SU(2) gauge field $V_\mu$ and field strength tensor
\begin{equation}
F_{\mu\nu}=\partial_\mu V_\nu-\partial_\nu V_\mu
- i g [ V_\mu ,V_\nu ]
\ , \end{equation}
and the coupling constants $\kappa$ and $g$.
This theory possesses a sequence of static
spherically symmetric sphaleron solutions
labelled by an integer $k$,
which counts the number of nodes of the gauge field function
\cite{lav1,bizon2}.
To obtain static axially symmetric multisphaleron solutions,
we choose the ansatz for the SU(2) gauge fields
analogous to the case of multimonopoles \cite{rr,man} and
electroweak multisphalerons \cite{kk}.
We therefore define a set of orthonormal vectors
\begin{eqnarray}
\vec u_1^{(n)}(\phi) & = & (\cos n \phi, \sin n \phi, 0) \ ,
\nonumber \\
\vec u_2^{(n)}(\phi) & = & (0, 0, 1) \ ,
\nonumber \\
\vec u_3^{(n)}(\phi) & = & (\sin n \phi, - \cos n \phi, 0)
\ , \end{eqnarray}
and expand the gauge fields
($V_\mu = V_\mu^a \tau^a /2$) as
\begin{equation}
V_0^a(\vec r) = 0 \ , \ \ \
V_i^a(\vec r) = u_j^{i(1)}(\phi) u_k^{a(n)}(\phi) w_j^k(\rho,z)
\ , \end{equation}
whereas the dilaton field satisfies
\begin{equation}
\Phi(\vec r) = \Phi(\rho,z)
\ . \end{equation}
Invariance under rotations about the $z$-axis
and parity reflections leads to the conditions \cite{rr,kk,kkb}
\begin{equation}
w_1^1(\rho,z)=w_2^1(\rho,z)=w_1^2(\rho,z)=
w_2^2(\rho,z)=w_3^3(\rho,z)=0
\ . \end{equation}
The axially symmetric energy functional
\begin{equation}
E = E_\Phi + E_V =
\int (\varepsilon_\Phi + e^{2 \kappa \Phi} \varepsilon_V )
\, d\phi \, \rho d\rho \, dz
\ \end{equation}
contains the energy densities
\begin{equation}
\varepsilon_\Phi= \frac{1}{2} \left[
(\partial_\rho \Phi )^2 + (\partial_z \Phi )^2 \right]
\ \end{equation}
and
\begin{eqnarray}
\varepsilon_V & = &\frac{1}{2} \left[
(\partial_\rho w_3^1 + {1\over{\rho}} ( n w_1^3 + w_3^1 )
- g w_1^3 w_3^2 )^2
+ (\partial_z w_3^1 + {n\over{\rho}} w_2^3
- g w_2^3 w_3^2 )^2
\right.
\nonumber \\
& + & \left.
(\partial_\rho w_3^2 + {1\over{\rho}} w_3^2
+ g w_1^3 w_3^1 )^2
+ (\partial_z w_3^2
+ g w_2^3 w_3^1 )^2
+ (\partial_\rho w_2^3 - \partial_z w_1^3 )^2
\right]
\ . \end{eqnarray}
It is still invariant under gauge transformations generated by
\cite{rr,kk}
\begin{equation}
U= e^{i\Gamma(\rho,z) \tau^i u_3^{i(n)}}
\ , \end{equation}
where the 2-dimensional scalar doublet
$(w_3^1,w_3^2-n/g\rho)$ transforms with
angle $2 \Gamma(\rho,z)$,
while the 2-dimensional gauge field $(w_1^3,w_2^3)$ transforms
inhomogeneously.
We fix this gauge degree of freedom by choosing the
gauge condition \cite{kk,kkb}
\begin{equation}
\partial_\rho w_1^3 + \partial_z w_2^3 =0
\ . \end{equation}
Changing to spherical coordinates
and extracting the trivial $\theta$-dependence
(present also for spherically symmetric case $n=1$)
we specify the ansatz further \cite{kk}
\begin{eqnarray}
w_1^3(r,\theta) \ &
= & \ \ {1 \over{gr}}(1 - F_1(r,\theta)) \cos \theta \ , \ \ \ \
w_2^3(r,\theta) \
= - {1 \over{gr}} (1 - F_2(r,\theta) )\sin \theta \ ,
\nonumber \\
w_3^1(r,\theta) \ &
= & - {{ n}\over{gr}}(1 - F_3(r,\theta) )\cos \theta \ , \ \ \ \
w_3^2(r,\theta) \
= \ \ {{ n}\over{gr}}(1 - F_4(r,\theta) )\sin \theta
\ . \end{eqnarray}
With $F_1(r,\theta)=F_2(r,\theta)=F_3(r,\theta)=F_4(r,\theta)=w(r)$,
$\Phi(r,\theta)=\varphi(r)$ and $n=1$
the spherically symmetric ansatz of ref.~\cite{bizon2} is recovered.
The above ansatz and gauge choice
yield a set of coupled partial differential equations
for the functions $F_i(r,\theta)$ and $\Phi(r,\theta)$.
To obtain regular solutions with finite energy density
with the imposed symmetries, we take as
boundary conditions for the functions $F_i(r,\theta)$ and $\Phi(r,\theta)$
\begin{eqnarray}
r=0 & : & \ \ F_i(r,\theta)|_{r=0}=1,
\ \ \ \ \ \ \ \ \ i=1,...,4, \
\ \ \partial_r \Phi(r,\theta)|_{r=0}=0,
\nonumber \\
r\rightarrow\infty& :
& \ \ F_i(r,\theta)|_{r=\infty}=F(\infty) , \
\ \ \ i=1,...,4,
\ \ \ \Phi(r,\theta)|_{r=\infty}=\Phi(\infty) ,
\nonumber \\
\theta=0& : & \ \ \partial_\theta F_i(r,\theta)|_{\theta=0} =0,
\ \ \ \ \ \ i=1,...,4, \
\ \ \partial_\theta \Phi(r,\theta)|_{\theta=0}=0,
\nonumber \\
\theta=\pi/2& : & \ \ \partial_\theta F_i(r,\theta)|_{\theta=\pi/2} =0,
\ \ \ i=1,...,4, \
\ \ \ \partial_\theta \Phi(r,\theta)|_{\theta=\pi/2}=0
\ , \label{bc} \end{eqnarray}
(with the exception of $F_2(r,\theta)$ for $n=2$,
which has $\partial_\theta F_2(r,\theta)|_{\theta=0} \ne 0$
\cite{kknew})
where $F(\infty) = \pm 1$ and $\Phi(\infty)=0$.
The boundary conditions for the gauge field functions at infinity
imply that the solutions are magnetically neutral.
A finite value of the dilaton field at infinity
can always be transformed to zero via
$\Phi \rightarrow \Phi - \Phi(\infty)$,
$r \rightarrow r e^{-\kappa \Phi(\infty)} $.
The variational solutions
$F_i(\beta r, \theta)$ and $\Phi(\beta r, \theta)$
lead to $E_\beta = \beta^{-1} E_\Phi + \beta E_V$.
Since the energy functional is minimized for $\beta=1$,
the virial relation \cite{bizon2}
\begin{equation}
E_\Phi = E_V
\ \end{equation}
also holds for general $n$.
We now remove the dependence on the coupling constants
$\kappa$ and $g$ from the differential equations
by changing to the dimensionless coordinate $x=gr /\kappa$ and
the dimensionless dilaton function $\varphi = \kappa \Phi$.
The energy then scales with the factor $1/(\kappa g)$.
The dilaton field satisfies asymptotically the relation
\begin{equation}
\lim_{x \rightarrow \infty} x^2 \varphi' = D
\ , \end{equation}
where $D$ is the dilaton charge.
The energy is related to the dilaton charge via
\begin{equation}
E = \frac{4 \pi}{\kappa g} \lim_{x \rightarrow \infty} x^2 \varphi'
= \frac{4 \pi}{\kappa g} D
\ . \label{dil} \end{equation}
\section{\bf Multisphaleron solutions}
We solve the equations numerically,
subject to the boundary conditions eqs.~(\ref{bc}).
To map spatial infinity to the finite value $\bar{x}=1$,
we employ the radial coordinate $\bar{x} = \frac{x}{1+x}$.
The numerical calculations are based on the Newton-Raphson
method. The equations are discretized on a non-equidistant
grid in $\bar{x}$ and an equidistant grid in $\theta$, where
typical grids used have sizes $50 \times 20$ and $100 \times 20$
covering the integration region $0\leq\bar{x}\leq 1$ and
$0\leq\theta\leq\pi/2$.
The numerical error for the functions is estimated to be
on the order of $10^{-3}$.
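The strategy can be illustrated in a much simpler, one--dimensional
setting. The following sketch (grid size, tolerance and variable names
are ours) applies the same discretize--and--iterate procedure, on a grid
equidistant in $\bar{x}$, to the limiting radial equation (\ref{limn1})
discussed in section 4, using the analytic solution (\ref{limn2}) only
to supply boundary values and to gauge the accuracy:
\begin{verbatim}
# Newton-Raphson solution of (x^2 phi')' = n^2 exp(2 phi)/x^2 on a
# non-equidistant grid in x generated from an equidistant grid in
# xbar = x/(1+x); Dirichlet data are taken from phi = -ln(1 + n/x).
import numpy as np

n_w = 3                                   # winding number n
xbar = np.linspace(0.2, 0.95, 81)
x = xbar/(1.0 - xbar)
h = np.diff(x)
exact = lambda y: -np.log(1.0 + n_w/y)

def residual(phi):
    r = np.zeros_like(phi)
    for i in range(1, len(x) - 1):
        xp, xm = 0.5*(x[i+1] + x[i]), 0.5*(x[i] + x[i-1])
        fp = xp**2*(phi[i+1] - phi[i])/h[i]      # flux at i+1/2
        fm = xm**2*(phi[i] - phi[i-1])/h[i-1]    # flux at i-1/2
        r[i] = (fp - fm)/(0.5*(h[i] + h[i-1])) \
               - n_w**2*np.exp(2.0*phi[i])/x[i]**2
    return r

phi = np.interp(x, [x[0], x[-1]], [exact(x[0]), exact(x[-1])])
for it in range(25):                      # Newton iteration
    r = residual(phi)
    if np.max(np.abs(r)) < 1e-10:
        break
    J = np.zeros((len(x), len(x)))
    J[0, 0] = J[-1, -1] = 1.0             # boundary values held fixed
    for j in range(1, len(x) - 1):        # numerical Jacobian, column j
        d = np.zeros_like(phi); d[j] = 1e-7
        J[:, j] = (residual(phi + d) - r)/1e-7
    phi -= np.linalg.solve(J, r)

print("max |phi - phi_exact| =", np.max(np.abs(phi - exact(x))))
\end{verbatim}
The printed deviation then gauges the discretization error of the grid.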
The energy density $\varepsilon$, defined by
\begin{equation}
E= \frac{1}{\kappa g} \int \varepsilon (\vec x)
x^2 dx \sin \theta d \theta d\phi
\ , \end{equation}
of the axially symmetric multisphaleron solutions
and their excitations
has a strong peak on the $\rho$-axis,
while it is rather flat along the $z$-axis.
Keeping $n$ fixed and varying $k$,
we observe that the ratio of the maximum energy density
$\varepsilon_{\rm max}$
to the central energy density
$\varepsilon(x=0)$
remains almost constant.
Also, for fixed $n$ and increasing $k$, the location of the
peak of the energy density approaches the origin exponentially.
On the other hand, with fixed $k$ and increasing $n$
the peak of the energy density moves outward.
The central energy density $\varepsilon(x=0)$
as well as the maximum energy density $\varepsilon_{\rm max}$
are shown in Table~1
for the sequences $n=1-4$ with node numbers $k=1-4$.
Also shown in Table~1 is the energy E.
In the following we exhibit as one example
the multisphaleron solution for $n=3$ and $k=3$.
In Fig.~1 we show the energy density $\varepsilon$.
In Figs.~2a-d we show the gauge field functions
$F_i$, which go from $F_i(0)=1$ to $F_i(\infty)=-1$,
passing three times zero in any direction.
Finally, in Fig.~3 we show the dilaton function $\varphi$.
\section{Limiting solutions for $k\rightarrow \infty$}
The known sequences of solutions often tend to a simpler
limiting solution.
For the spherically symmetric YMD sequence
with winding number $n=1$, the limiting solution
for $k \rightarrow \infty$ is given by \cite{bizon2}
\begin{equation}
w_\infty=0 \ , \ \ \ \varphi_\infty=- \ln\left( 1+\frac{1}{x} \right)
\ , \label{lim1} \end{equation}
and describes an abelian magnetic monopole with unit charge,
$m=1/g$.
The gauge field functions $w_k$ approach the limiting
function $w_\infty=0$ nonuniformly, because of the boundary
conditions at the origin and at infinity, where $w_k=\pm 1$.
For the axially symmetric sequences with $n>1$ we observe
a similar convergence with node number $k$.
As for $n=1$, the numerical analysis shows, that
the gauge field functions $(F_i)_k$ tend to the constant value zero
in an exponentially increasing region.
For $(F_i)_\infty=0$,
the set of field equations reduces to a single
ordinary differential equation for $\varphi_\infty$
\begin{equation}
\left(x^2 \varphi_\infty' \right)' -
2 e^{2 \varphi_\infty} \frac{n^2}{2 x^2} =0
\ . \label{limn1} \end{equation}
This yields the spherically symmetric limiting solution
\begin{equation}
(F_i)_\infty=0 \ , \ \ \
\varphi_\infty= - \ln\left( 1+\frac{n}{x} \right)
\ , \label{limn2} \end{equation}
corresponding to an abelian
magnetic monopole with $n$ units of charge.
Thus the limiting solution of the sequence is charged,
whereas all members of the sequence are magnetically neutral.
This phenomenon
is also observed in EYM and EYMD theory. (A detailed discussion
of the convergence there is given in ref.~\cite{kks3}.)
To demonstrate the convergence of the sequence of
numerical solutions for $\varphi_k$
to the analytic solution $\varphi_\infty$,
we show the functions $\varphi_k$ for $k=1-4$
together with the limiting function $\varphi_\infty$ in Fig.~4
for $n=3$.
The $k$-th function deviates from the limiting function
only in an inner region, which decreases exponentially with $k$.
The value $\varphi_k(x=0)$ decreases roughly linearly with $k$.
The energy of the limiting solution of the sequence $n$
is given by
\begin{equation}
E = \frac{4 \pi}{\kappa g}
\int_0^\infty \left[ \frac{1}{2} \varphi_\infty'^2
+ e^{2 \varphi_\infty}
\frac{n^2}{2 x^4} \right] x^2 dx = \frac{4 \pi}{\kappa g} n
\ , \label{limn5} \end{equation}
i.~e.~the energy satisfies a Bogomol'nyi type relation,
$E \propto n$.
This relation is in agreement with eq.~(\ref{dil}),
since $D=n$ for the limiting solution.
This limiting value for the energy, eq.~(\ref{limn5}),
represents an upper bound for each sequence,
as observed from Table~1.
The larger $n$, the slower is the convergence to the limiting solution.
Further details on the solutions and the convergence properties
of the sequences will be given elsewhere \cite{kknew}.
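It is instructive to verify eqs.~(\ref{limn2}) and (\ref{limn5})
explicitly. With $\varphi_\infty = -\ln \left( 1+n/x \right)$ one has
\begin{eqnarray}
x^2 \varphi_\infty' \;=\; \frac{n x}{x+n}
\ \ &\Longrightarrow& \ \
\left( x^2 \varphi_\infty' \right)' \;=\; \frac{n^2}{(x+n)^2}
\;=\; e^{2 \varphi_\infty}\, \frac{n^2}{x^2} \ ,
\nonumber \\
\left[ \frac{1}{2} \varphi_\infty'^2
+ e^{2 \varphi_\infty} \frac{n^2}{2 x^4} \right] x^2
\;=\; \frac{n^2}{(x+n)^2}
\ \ &\Longrightarrow& \ \
E \;=\; \frac{4 \pi}{\kappa g} \int_0^\infty \frac{n^2\, dx}{(x+n)^2}
\;=\; \frac{4 \pi}{\kappa g}\, n \ ,
\nonumber
\end{eqnarray}
and $D = \lim_{x \rightarrow \infty} x^2 \varphi_\infty' = n$,
in accordance with eq.~(\ref{dil}).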
\section{\bf Conclusions}
We have constructed sequences of axially symmetric
multisphaleron solutions in YMD theory.
The sequences are characterized by a winding number $n$,
describing the winding of the fields in the azimuthal angle $\phi$,
while the solutions within each sequence are labelled by
the node number $k$ of the gauge field functions.
For $n=1$ the known spherically symmetric sequence is obtained.
The multisphalerons have a torus-like shape.
The maximum of the energy density occurs on the $\rho$-axis.
With fixed $n$ and increasing $k$ the maximum
moves inward along the $\rho$-axis,
whereas with fixed $k$ and increasing $n$ it
moves outward.
For fixed $n$ and $k \rightarrow \infty$ each sequence approaches
an analytically given limiting solution.
The limiting solution has vanishing gauge field functions
and corresponds to an abelian monopole
with $n$ units of magnetic charge.
Because of the coupling to the dilaton, the energy
of this limiting solution is finite.
It satisfies a Bogomol'nyi type relation
with energy $E \propto n$.
The spherically symmetric YMD sphalerons
possess fermion zero modes \cite{lav4}.
Since the electroweak sphaleron and
multisphalerons also have
fermion zero modes \cite{bk,kk,kopen},
we expect zero modes to be present also
for the YMD multisphalerons.
The spherically symmetric sequence of $n=1$ YMD sphaleron solutions
can be continued in the presence of gravity,
yielding a corresponding sequence of EYMD solutions,
which still depends on a coupling constant
\cite{don,lav2,maeda,bizon3,neill,kks3}.
By continuity, we conclude that the axially symmetric $n>1$ YMD
multisphaleron solutions
also exist in the presence of gravity,
representing axially symmetric EYMD solutions.
In the limit of vanishing dilaton coupling constant,
they should reduce to axially symmetric EYM solutions,
and thus axially symmetric
generalizations of the Bartnik-McKinnon solutions
\cite{bm}.
The outstanding question then is whether there also exist the
corresponding EYM and EYMD black hole solutions.
\vfill\eject
\section{Introduction}
The main property of Topological Field Theories (TFT) \cite{topol} is the
fact that
their observables are of topological nature, i.e. they only depend on the
global properties of the manifold on which the theory is defined.
Therefore, these kinds of theories exhibit remarkable ultraviolet
finiteness properties due to the lack of any physical, {\it i.e.} metric
dependent observables. In particular, the two--dimensional BF model
treated here provides an example of a fully finite quantum field theory.
The history of TFT's is closely tied to
the interplay between the problems that arise in the study of
physical systems and the mathematical methods that are needed for
their solution.
A well known example is the relation between the work of Donaldson
concerning the study of the topology of four
dimensional manifolds \cite{donaldson1,donaldson2} and its description
as a Topological Yang--Mills theory due to Witten \cite{yangmills}.
A further example is the study of the knot and
link invariants in the case of three--dimensional Chern--Simons theory
\cite{witten2}.
The Topological Yang--Mills theory and the Chern--Simons theory are
examples of two distinct classes of TFT's, the
former belonging to the Witten type --
{\it i.e.} the whole action is a BRST variation -- and
the latter being of the Schwarz type --
the action splits into an invariant part and a BRST variation term.
There exists a further type of Schwarz class TFT's: the Topological
BF models \cite{bfmod}. They constitute the natural extension of the
Chern--Simons model in an arbitrary number of spacetime dimensions.
These models describe the coupling of an antisymmetric
tensor field to the Yang--Mills field strength \cite{antisym},
\cite{blau}.
The aim of the present work is the quantization of the two--dimensional
BF model in the axial gauge.
It is motivated by similar works done for the Chern--Simons model
\cite{chern3} and the BF model \cite{bf3} both in three spacetime dimensions.
The axial gauge is particularly interesting for these two models since in
this gauge these two theories are obviously ultraviolet
finite due to the complete absence of radiative
corrections. Since, for the two--dimensional case we are interested in,
the choice of the axial gauge allows us to overcome the usual infrared
problem which occurs in the propagator of the dimensionless
fields \cite{blasi2}, it is interesting to see whether the ultraviolet
finiteness properties are also present.
On the other hand, we have already shown that for the three--dimensional
Chern-Simons and BF models \cite{chern3}, \cite{bf3}, the symmetries
completely define the theory, {\it i.e.} the quantum action principle
is no more needed. The latter property relies on the existence of a
topological linear vector supersymmetry \cite{delducgs} besides the BRST
invariance \cite{brst1,brst2}. The generators of the latter together with
the one of the BRST--symmetry form a superalgebra
of the Wess--Zumino type \cite{wesszum1,wesszum2}, which closes
on the translations \cite{sor}. Then the associated Ward identities
can be solved for the Green's functions, exactly and uniquely.
As this linear vector supersymmetry is also present in two--dimensions,
it is natural to ask ourself wether all the Green's functions of the
two--dimensional model are also uniquely determined by symmetry
considerations. We will answer this question by the negative. The
consistency relations let some class of Green's functions undetermined
and therefore one has to use some of the equations of motion.
The present work is organized as follows. The model is introduced in
section \ref{model} and its symmetries are discussed. In section
\ref{symdisc} we will check the consistency conditions between the various
symmetries and thus recover almost all the equations of motion
for the theory. Section \ref{prop_finit} is devoted to the derivation of
the Green's functions keeping in mind that some are consequences of
symmetries only, whereas others are solutions of the field equations.
At the end we draw some conclusions.
\section{The two--dimensional BF model in the axial gauge} \label{model}
The classical BF model in two spacetime dimensions
is defined by
\begin{equation} \label{s_inv}
S_{\mbox{\small inv}} = \frac{1}{2} \,\int\limits_{\cal{M}} d^2 x \,
\varepsilon^{\mu \nu} F^a_{\mu \nu} \phi^a \quad .
\end{equation}
One has to stress that the action (\ref{s_inv}) does not depend on a
metric $g_{\mu \nu}$ which one may introduce on the arbitrary
two--dimensional manifold ${\cal M}$.
In this paper, ${\cal M}$ is chosen to be the flat Euclidean
spacetime\footnote{${\cal M}$ has the metric
$\eta_{\mu \nu} = \mbox{diag}\, (+\, 1 , +\, 1)$.}.
$\varepsilon^{\mu \nu}$ is the totally antisymmetric
Levi--Civita--tensor\footnote{
The tensor $\varepsilon^{\mu \nu}$ is normalized to $\varepsilon^{12} =
\varepsilon_{12} = 1$ and one has
$\varepsilon^{\mu \nu} \varepsilon_{\sigma\tau} =
\delta^\mu_\sigma \delta^\nu_\tau - \delta^\mu_\tau \delta^\nu_\sigma$.}.
The field strength $F^a_{\mu \nu}$ is related to the Yang--Mills gauge
field $A^a_\mu$ by the structure equation
\begin{equation}
F^a_{\mu \nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu
+ f^{abc} A^b_\mu A^c_\nu \quad ,
\end{equation}
and the $B$ field, traditionally denoted by $\phi$ for the two
dimensional case, is a scalar field.
The fields ($A^a_\mu$, $F^a_{\mu \nu}$, $\phi^a$) belong to the adjoint
representation of a simple compact gauge group ${\cal G}$.
The corresponding generators $T^a$ obey
\begin{equation}
\big[ T^a , T^b \big] = f^{abc} T^c \quad ,
\quad Tr(T^a T^b) = \delta^{ab}
\quad ,
\end{equation}
$f^{abc}$ are the totally antisymmetric structure constants
of ${\cal G}$. The field--strength satisfies the Bianchi--identity
\begin{equation} \label{bianchi1}
(D_\rho F_{\mu \nu})^a + (D_\mu F_{\nu \rho})^a +
(D_\nu F_{\rho \mu})^a = 0 \quad ,
\end{equation}
where the covariant derivative $D_\mu$ in the adjoint
representation is
\begin{equation} \label{covariant_der}
(D_\mu \,\cdot \, )^a = (\partial_\mu \, \cdot \, )^a
+ f^{abc} A^b_\mu (\, \cdot \, )^c \quad .
\end{equation}
The equations of motion are
\begin{eqnarray}
\label{eomac} \frac{\delta \, S_{\mbox{\small inv}}}{\delta \,\phi^a}
& = & \frac{1}{2}\, \varepsilon^{\mu \nu}
F^a_{\mu \nu} \; = \; 0 \quad , \\
\label{eomphi} \frac{\delta \, S_{\mbox{\small inv}}}{\delta \, A^a_\mu}
& = & \varepsilon^{\mu \nu} (D_\nu \phi)^a \; = \; 0 \quad .
\end{eqnarray}
Eq. (\ref{eomac}) implies the vanishing curvature condition,
\begin{equation}
F^a_{\mu \nu} = 0 \quad ,
\end{equation}
and from eq. (\ref{eomphi}) follows that the scalar field is confined on
the hypersphere
\begin{equation}
\phi^a \phi^a = \mbox{const.} \quad .
\end{equation}
The action (\ref{s_inv}) is invariant under the infinitesimal gauge
transformations
\begin{equation}\begin{array}{rcl}
\label{deltaa} \delta A^a_\mu & = & -\, (\partial_\mu \theta^a +
f^{abc} A^b_\mu \theta^c) \; = \; -\, (D_\mu \theta)^a \quad , \\[3mm]
\label{deltaphi} \delta \phi^a & = & f^{abc} \theta^b \phi^c \quad ,
\end{array}\eqn{gautrans}
where $\theta^a$ is the infinitesimal local gauge parameter.
We choose to work in the axial gauge
\begin{equation} \label{axgauge}
n^\mu A^a_\mu =0\, .
\end{equation}
and without loss of generality, we fix the axial gauge vector to be
\begin{equation}
n^\mu = ( n^1 , n^2 ) = ( 1 , 0 ) \quad .
\eqn{axdef}
Following the usual BRST procedure, one introduces
a ghost, an antighost and a Lagrange multiplier field
($c^a$, $\bar{c}^a$, $b^a$) in order to construct the gauge fixed
action $S$ in a manifestly BRST--invariant manner,
\begin{equation} \label{total_s}
S = S_{\mbox{\small inv}} + S_{\mbox{\small gf}} \quad ,
\end{equation}
\begin{eqnarray}
S_{\mbox{\small gf}} & = & \int d^2 x \, s (\bar{c}^a n^\mu
A^a_\mu ) \nonumber \\
& = & \int d^2 x \,( b^a n^\mu A^a_\mu + \bar{c}^a n^\mu\partial_\mu
c^a + \bar{c}^a f^{abc} n^\mu A^b_\mu c^c) \quad .
\end{eqnarray}
The nilpotent BRST--transformations are given by
\begin{equation} \label{brst_transf}
\begin{array}{rcl}
s A^a_\mu & = & -\,(D_\mu c)^a \quad , \\[2mm]
s \phi^a & = & f^{abc} c^b \phi^c \quad , \\[2mm]
s c^a & = & \frac{1}{2} \, f^{abc} c^b c^c \quad , \\[2mm]
s \bar{c}^a & = & b^a \quad ,\\[2mm]
s b^a & = & 0 \quad , \\[2mm]
s^2 & = & 0 \quad .
\end{array}
\end{equation}
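The nilpotency $s^2=0$ rests on the Jacobi identity of the structure
constants. For instance, acting twice on the ghost gives
\begin{equation}
s^2 c^a \;=\; \frac{1}{2}\, f^{abc}
\left[ (s c^b)\, c^c - c^b\, (s c^c) \right]
\;=\; \frac{1}{2}\, f^{abc} f^{bde}\, c^d c^e c^c \;=\; 0 \quad ,
\end{equation}
since the anticommuting product $c^d c^e c^c$ projects out the totally
antisymmetric part of $f^{abc} f^{bde}$, which vanishes by the Jacobi
identity; the remaining transformations are checked in the same way.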
Note that the gauge fixing term $S_{\mbox{\small gf}}$ is not metric
independent and therefore not topological.
The canonical dimensions and ghost charges of the fields are collected in
table \ref{table1}, where the canonical dimension of the gauge direction
$n^\mu$ is zero.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l||c|c|c|c|c|c|} \hline
& $A^a_\mu$ & $\phi^a$ & $c^a$ &
$\bar{c}^a$ & $b^a$ \\ \hline \hline
canonical dimension & 1 & 0 & 0 & 1 & 1 \\ \hline
ghost charge ($Q_{\Phi\Pi}$) & 0 & 0 & 1 & -1 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{\label{table1} Canonical dimension and ghost charge of
the fields}
\end{table}
Before investigating further let us make some comments concerning the
axial gauge. It is well known that this choice does not
fix the gauge completely. Indeed (\ref{total_s}) is still invariant under
gauge transformations of the same type as \equ{gautrans} but where
the gauge parameter $\theta} \newcommand{\T}{\Theta^a$ depends only on
$x^1$. This residual gauge invariance will play an important
role in the sequel.
The action (\ref{total_s}) possesses, besides the BRST--symmetry
(\ref{brst_transf}) and the scale invariance, an
additional linear vector supersymmetry. In order to derive the vector
supersymmetry transformations, one considers
the energy--momentum tensor $T_{\mu \nu}$ of the theory.
Since the invariant part of the action (\ref{s_inv}) is
metric--independent, the improved energy--momentum tensor is an exact
BRST--variation,
\begin{equation} \label{tmunu}
T_{\mu \nu} = s \Lambda_{\mu \nu} \quad ,
\end{equation}
where $\Lambda_{\mu \nu}$ is given by
\begin{equation} \label{lmunu}
\Lambda_{\mu \nu} = \eta_{\mu \nu} \,( \bar{c}^a \, n^\lambda
A^a_\lambda) - \bar{c}^a \,(n_\mu A^a_\nu + n_\nu A^a_\mu) \quad .
\end{equation}
Eq.~(\ref{tmunu}) explicitly shows the unphysical character of the
topological BF model.
Using the functional form for the equations of motion
\begin{eqnarray}
\label{eoma} \frac{\delta S}{\delta A^a_\nu} & =
& \varepsilon^{\nu \mu} D_\mu
\phi^a + n^\nu (b^a - f^{abc} \bar{c}^b c^c) \quad , \\
\label{eomf} \frac{\delta S}{\delta \phi^a} & = & \frac{1}{2}
\,\varepsilon^{\mu \nu} F^a_{\mu \nu} \quad , \\
\label{eomc} \frac{\delta S}{\delta c^a} & =
& n^\mu (D_\mu \bar{c})^a \quad , \\
\label{eomcb} \frac{\delta S}{\delta \bar{c}^a} & =
& n^\mu (D_\mu c)^a \quad ,
\end{eqnarray}
and the gauge condition
\begin{equation} \label{gaucond}
\frac{\delta S}{\delta b^a} = n^\mu A^a_\mu \quad ,
\end{equation}
one gets for the divergence of (\ref{lmunu})
\begin{equation}
\partial^\nu \Lambda_{\nu \mu} = \varepsilon_{\mu \nu} n^\nu
\bar{c}^a \, \frac{\delta S}{\delta \phi^a} - A^a_\mu \,
\frac{\delta S}{\delta c^a} + \partial_\mu \bar{c}^a \,
\frac{\delta S}{\delta b^a} + \mbox{total deriv.} \quad .
\end{equation}
An integration over the two--dimensional spacetime yields the
Ward--identity of the linear vector supersymmetry,
\begin{equation} \label{wi_s}
{\cal W}_\mu S = 0 \quad ,
\end{equation}
where
\begin{equation} \label{wi_op}
{\cal W}_\mu = \int d^2x \,\Big( \varepsilon_{\mu \nu} n^\nu
\bar{c}^a \, \frac{\delta}{\delta \phi^a} - A^a_\mu \,
\frac{\delta}{\delta c^a} + \partial_\mu \bar{c}^a \,
\frac{\delta}{\delta b^a} \Big) \quad
\end{equation}
and the transformations read
\begin{eqnarray} \label{susy_on}
\delta_\mu A^a_\nu & = & 0 \quad , \nonumber \\
\delta_\mu \phi^a & = & \varepsilon_{\mu \nu} n^{\nu}
\bar{c}^a \quad , \nonumber \\
\delta_\mu c^a & = & - \, A^a_\mu \quad , \nonumber \\
\delta_\mu \bar{c}^a & = & 0 \quad , \nonumber \\
\delta_\mu b^a & = & \partial_\mu \bar{c}^a \quad .
\end{eqnarray}
{\em Remark:} It can be easily verified that $S_{\mbox{\scriptsize inv}}$
and $S_{\mbox{\scriptsize gf}}$ are not separately invariant under
(\ref{susy_on}); only the combination (\ref{total_s}) is invariant.
The generator $\delta_\mu$ of the linear vector supersymmetry and the
BRST--operator $s$ form a graded algebra of the Wess--Zumino type which
closes on--shell on the translations,
\begin{eqnarray}
\big\{ s , \delta_\mu \big\} A^a_\nu
& = & \partial_\mu A^a_\nu - \varepsilon_{\mu \nu} \,
\frac{\delta S}{\delta \phi^a} \quad , \nonumber \\
\big\{ s , \delta_\mu \big\} \phi^a
& = & \partial_\mu \phi^a + \varepsilon_{\mu \nu} \,
\frac{\delta S}{\delta A^a_\nu} \quad , \nonumber \\
\big\{ s , \delta_\mu \big\} \psi^a & = & \partial_\mu \psi^a \quad ,
\quad \forall \,\psi^a \in \{ c^a , \bar{c}^a , b^a \} \quad .
\end{eqnarray}
Moreover, the following algebraic relations hold,
\begin{equation}
\big\{ \delta_\mu , \delta_\nu \big\} = 0 \quad , \quad
\big[ \delta_\mu , \partial_\nu \big] = 0 \quad .
\end{equation}
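As a simple illustration of the closure on translations, consider the
antighost and the Lagrange multiplier: since $\delta_\mu \bar{c}^a = 0$
and $s \bar{c}^a = b^a$, one finds
\begin{equation}
\big\{ s , \delta_\mu \big\} \bar{c}^a \;=\; \delta_\mu b^a
\;=\; \partial_\mu \bar{c}^a \quad , \qquad
\big\{ s , \delta_\mu \big\} b^a \;=\; s\, \partial_\mu \bar{c}^a
\;=\; \partial_\mu b^a \quad ,
\end{equation}
so that in this sector the algebra closes on the translations without
any use of the equations of motion.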
\section{Consequences of the symmetries} \label{symdisc}
We will now discuss the implications of the
symmetries discussed above independently of the action
(\ref{total_s}). This means that we are going to consider only
the gauge fixing condition, the Ward identity for the vector
supersymmetry and the Slavnov--Taylor identity and look for
their consistency independently from the field equations
(\ref{eoma})--(\ref{eomcb}). Since we are ultimately interested in
deriving all the Green's functions of the theory,
let us first rewrite all these functional identities in
term of the generating functional for the connected Green's functions.
We will see later that due to the absence of loop graphs for this theory,
it is sufficient to restrict ourself to
the tree approximation, {\it i.e.} the Legendre transformation of the
classical action:
\begin{eqnarray}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu \,a}} \, = \, A^a_\mu
& \quad , \quad &
\frac{\delta S}{\delta A^a_\mu} \, = \, -\,j^{\mu \,a} \quad , \\
\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_\phi} \, = \, \phi^a
& \quad , \quad &
\frac{\delta S}{\delta \phi^a} \, = \, -\,j^a_\phi \quad , \\
\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_c} \, = \, c^a
& \quad , \quad &
\frac{\delta S}{\delta c^a} \, = \,\hspace{4mm} j^a_c \quad , \\
\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_{\bar{c}}} \, = \, \bar{c}^a
& \quad , \quad &
\frac{\delta S}{\delta \bar{c}^a} \,
= \,\hspace{4mm} j^a_{\bar{c}} \quad , \\
\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_b} \, = \, b^a
& \quad , \quad &
\frac{\delta S}{\delta b^a} \, = \, -\,j^a_b \quad ,
\end{eqnarray}
\begin{eqnarray}
\label{legendre}
\lefteqn{Z^{\mbox{\small c}}[j^{\mu \,a}, j^a_\phi, j^a_c, j^a_{\bar{c}},
j^a_b] = S[A^a_\mu, \phi^a, c^a, \bar{c}^a, b^a] +} \\
&& +\, \int d^2x \,\Big( j^{\mu \,a} A^a_\mu +
j^a_\phi \phi^a + j^a_c c^a + j^a_{\bar{c}} \bar{c}^a +
j^a_b b^a \Big) \quad , \nonumber
\end{eqnarray}
where the canonical dimensions and ghost charges of the classical
sources $j^{\mu \,a}$, $j^a_\phi$, $j^a_c$, $j^a_{\bar{c}}$, $j^a_b$ are
given in table \ref{table2}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l||c|c|c|c|c|} \hline
& $j^{\mu \,a}$ & $j^a_\phi$ & $j^a_c$ & $j^a_{\bar{c}}$ & $j^a_b$ \\
\hline \hline
canonical dimension & 1 & 2 & 2 & 1 & 1 \\ \hline
ghost charge ($Q_{\Phi \Pi}$) & 0 & 0 & -1 & 1 & 0 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table2} Dimensions and ghost charges of the sources}
\end{table}
The gauge condition (\ref{gaucond}) now reads
\begin{equation}
\label{gauge_cond}
n^\mu \frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu \,a}}
=-\, j^a_b \quad ,
\end{equation}
and the linear vector supersymmetry Ward identity (\ref{wi_s})
\begin{equation}
{\cal W}_\mu Z^{\mbox{\small c}} =
\int d^2x \,\Big[ -\,\varepsilon_{\mu \nu } n^\nu j^a_\phi
\,\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_{\bar{c}}} -
j^a_c \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu \,a}}
- j^a_b \,\partial_\mu
\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_{\bar{c}}} \Big]
\; = \; 0 \quad .
\end{equation}
The BRST--invariance is formally expressed by the Slavnov--Taylor
identity
\begin{eqnarray} \label{stid}
{\cal S}(Z^{\mbox{\small c}}) & = & \int d^2x \, \Big( j^{\mu \,a}
\big[ (D_\mu c)^a \big] \cdot Z^{\mbox{\small c}} - j^a_\phi
\big[ f^{abc} c^b \phi^c \big] \cdot Z^{\mbox{\small c}}+\nonumber \\
&&\hspace{1.7cm} +\, j^a_c \big[ \frac{1}{2} f^{abc} c^b c^c \big]\cdot
Z^{\mbox{\small c}}
+ j^a_{\bar{c}} \frac{\delta Z^{\mbox{\small c}}}{\delta j^a_b} \Big)
\; = \; 0 \quad .
\end{eqnarray}
We have used the notation $[{\cal O} ] \cdot Z^c$ for the generating functional
of the connected Green's functions with the insertions of a local field
polynomial operator ${\cal O} $. Usually, such insertions must be renormalized
and their renormalization is controlled by coupling them to external
sources. But in the case of the axial gauge, these insertions are trivial
due to the fact that the ghost fields decouple from the gauge field as
we will see later.
In order to analyze the consequences of these functional identities, let
us begin by the projection of the supersymmetry Ward--identity
along the axial vector $n^\mu$
\begin{equation} \label{n_wi_z}
n^\mu {\cal W}_\mu Z^{\mbox{\small c}} =
-\int d^2x \, j^a_b
\,\Big(\underbrace{-\, j^a_c + n^\mu \partial_\mu
\frac{\delta Z^{\mbox{\small c}}}{\delta j^a_{\bar{c}}}}_{X^a}
\Big) = 0 \quad ,
\end{equation}
where the gauge condition (\ref{gauge_cond}) has been used.
Locality, scale invariance and ghost charge conservation imply that
$X^a$ is a local polynomial in the classical sources $j$ and their
functional derivatives $\delta/\delta j$ of dimension $2$ and ghost
charge $-1$. The most general form for $X^a$ may depends on a further
term
\begin{equation}
X^a =-j^a_c+n^\mu\partial_\mu\frac{\delta Z^{\mbox{\small c}}}
{\delta j^a_{\bar{c}}}+z^{abc}j^b_b\frac{\delta Z^{\mbox{\small c}}}
{\delta j^c_{\bar{c}}} \quad ,
\eqn{Joe1}
provided $z^{abc}$ is antisymmetric in $a$ and $b$. The latter is thus
proportional to the structure constants $f^{abc}$.
By substituting the general form (\ref{Joe1}) into (\ref{n_wi_z})
\begin{equation}
\int d^2x \, j^a_b\,X^a=0 \quad,
\end{equation}
one gets
\begin{equation}
\label{antighost_z}
n^\mu \partial_\mu \Big( \frac{\delta Z^{\mbox{\small c}}}{\delta
j^a_{\bar{c}}} \Big) - \alpha f^{abc} j^b_b \,\frac{\delta
Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}} = j^a_c
\end{equation}
which, up to the undetermined coefficient $\alpha$, corresponds to the
antighost equation (\ref{eomc}). In order to complete this identification,
let us use the fact that, in any gauge theory with a linear gauge fixing
condition, there exist a ghost equation which in our case follows from the
Slavnov--Taylor identity differentiated with respect to the source for
$n^\mu A_\mu$. Indeed this gives
\begin{equation} \label{ghost_z}
n^\mu \partial_\mu \Big( \frac{\delta Z^{\mbox{\small c}}}{\delta j^a_c}
\Big) -
f^{abc} j^b_b \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_c} =
j^a_{\bar{c}} \quad ,
\end{equation}
which corresponds to (\ref{eomcb}).
At this level, it is clear that consistency between (\ref{antighost_z})
and (\ref{ghost_z}) fixes the value $\alpha=1$.
Therefore, the equations of motion for the ghost sector
(\ref{antighost_z}),
(\ref{ghost_z}) are direct consequences of the symmetries. Furthermore,
these equations show that the ghosts only couple to the source $J_b$ and
therefore, it is possible to factorize out the effect of the ghost fields
from the Slavnov--Taylor identity (\ref{stid}). The latter is thus
replaced by a local gauge Ward identity
\begin{eqnarray} \label{local_gauge}
\lefteqn{ -\partial_\mu j^{\mu \, a} - n^\mu \partial_\mu
\Big( \frac{\delta Z^{\mbox{\small c}}}{\delta j^a_b} \Big)
+f^{abc} j^{\mu \, b} \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu \, c}}
+f^{abc} j^b_\phi \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_\phi} +
}\nonumber \\
&& +\, f^{abc} j^b_c \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_c}
+ f^{abc} j^b_{\bar{c}} \,
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}
+ f^{abc} j^b_b \,\frac{\delta
Z^{\mbox{\small c}}}{\delta j^c_b} = 0 \quad .
\end{eqnarray}
The Ward identity expressing the invariance of the theory under the
residual gauge symmetry discussed in sect. \ref{model}
corresponds to the integration of (\ref{local_gauge}) with respect to
$x^1$. The residual Ward identity reads
\begin{eqnarray} \label{gaugeresidu}
\lefteqn{\int_{-\infty}^{\infty} dx^1 \left\{
f^{abc} j^{\mu \, b} \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu\, c}}
+f^{abc} j^b_\phi \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_\phi}
+\, f^{abc} j^b_c \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_c}+
\right. }\nonumber \\
&& \left.
+ f^{abc} j^b_{\bar{c}} \,
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}
+ f^{abc} j^b_b \,\frac{\delta
Z^{\mbox{\small c}}}{\delta j^c_b}-\partial_2 j^{2\, a}\right\}
=0 \quad .
\end{eqnarray}
It is important to notice that this step is plagued by the bad
long distance behaviour of the field $b$. Indeed, passing from
\equ{local_gauge} to \equ{gaugeresidu} would imply
\begin{equation}
\int_{-\infty}^{\infty} dx^1 \partial_1
\Big( \frac{\delta Z^{\mbox{\small c}}}{\delta j^a_b} \Big)=0
\eqn{irpro}
which is not the case as it can be shown using the
solutions given later. This is the usual IR problem of the axial
gauge. In order to enforce \equ{irpro}, one substitutes
\begin{equation}
\fud{}{J_b}\leftrightarrow e^{-\varepsilon(x^1)^2}\fud{}{J_b} \qquad (\varepsilon >0)
\eqn{bdamped}
which corresponds to a damping factor for the $b$--field along the $x^1$
direction, and takes the
limit $\varepsilon\rightarrow 0$ at the end. It turns out that this simple substitution
is sufficient in order to get \equ{irpro} and that the limit can be done
trivially. The check is straightforward for all the Green's functions
and therefore is left to the reader.
For the gauge sector, let us begin with the transversal
component of the supersymmetry Ward identity and the ghost equation
(\ref{ghost_z}) written as functional operator acting on
$Z^{\mbox{\small c}}$
\begin{eqnarray}
\label{transversal_01}
{\cal W}^{{\rm tr}} Z^{\mbox{\small c}}\equiv
{\cal W}_2 Z^{\mbox{\small c}} =
\int d^2x \,\left[ -\, j^a_\phi \,\frac{\delta
}{\delta j^a_{\bar{c}}}
- j^a_c \,\frac{\delta }{\delta j^{2\, a}}
- j^a_b \,\partial_2
\frac{\delta }{\delta j^a_{\bar{c}}} \right] Z^{\mbox{\small c}}
& = & 0, \\
\label{gh_01}
{\cal G}^a Z^{\mbox{\small c}} =
\left( \partial_1 \frac{\delta }{\delta j^a_c} -
f^{abc} j^b_b \,\frac{\delta }{\delta j^c_c} \right)
Z^{\mbox{\small c}} &=& j^a_{\bar{c}}.
\end{eqnarray}
Then, a direct calculation shows that the consistency condition
\begin{equation} \label{wzcons}
\left\{ {\cal W}_2 , {\cal G}^a \right\} Z^{\mbox{\small c}}
= {\cal W}_2 j^a_{\bar{c}}
\end{equation}
is in fact equivalent to the equation of motion for $A_2$
(\ref{eomf})
\begin{equation} \label{aeomsym}
\left( \partial_1 \frac{\delta }{\delta j^{2\, a}} -
f^{abc} j^b_b \,\frac{\delta }{\delta j^{2\, c}} \right)
Z^{\mbox{\small c}} = j^a_\phi -\partial_2 j_b^a \quad .
\end{equation}
Up to now, we recover the equations of motion for $c$, $\bar{c}$ and
$A_2$ as consistency conditions between the various symmetries but,
contrary to the higher dimensional case,
we did not get any information about the dynamics of $\phi$.
To clarify this point, let us consider the three--dimensional BF model
\cite{bf3}. In this case, it is well known that the field $B$ is a
one--form and therefore is invariant under the so--called reducible
symmetry
\begin{equation}\label{redsym}
s B^a_\mu = -\,(D_\mu \psi )^a
\end{equation}
where $\psi$ is a zero--form. To fix this extra symmetry one needs,
besides the usual Yang-Mills ghost,
antighost and Lagrange multiplier fields $(c, \bar{c}, b)$, a second set
of such fields $(\psi, \bar{\psi}, d)$ related to (\ref{redsym}). The
consequence is
that we exactly double the number of terms because of the exact similarity
between the Yang-Mills part $(A_\mu , c , \bar{c} , b)$ and the reducible
part $(B_\mu , \psi , \bar{\psi} , d)$.
Therefore, it is easy to convince oneself that we recover the two
antighost equations and, by considering the consistency
conditions of the same type as (\ref{wzcons}), that we are in a position to
construct the two
equations of motion for $A_\mu$ and $B_\mu$\footnote{For a detailed
discussion, see eqs. (4.3)--(4.5) in \cite{bf3}}. For the
two--dimensional case investigated here the field $\phi$ is a 0--form and
does not exhibit any reducible symmetry of the type (\ref{redsym}).
The invariance rather than covariance of $\phi$ is the basic difference
between the two--dimensional BF model and
the higher dimensional case.
Finally, in order to clarify the present two--dimensional situation, let
us collect all the functional identities we have obtained from symmetry
requirements alone. These are the gauge condition (\ref{gauge_cond}), the
antighost equation (\ref{antighost_z}) with $\alpha=1$, the ghost
equation (\ref{ghost_z}), the transversal component of the
supersymmetry Ward identity (\ref{transversal_01}), the field equation
for $A_2$ (\ref{aeomsym}), the local Ward
identity (\ref{local_gauge}) and the residual gauge Ward identity
(\ref{gaugeresidu}) which are respectively given by\footnote{We will now
systematically substitute \equ{axdef}.}
\begin{eqnarray}
\label{fgau}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^{1\, a}}
& =&-\, j^a_b \\
\label{fant}
\partial_1 \Big( \frac{\delta Z^{\mbox{\small c}}}{\delta
j^a_{\bar{c}}} \Big) - f^{abc} j^b_b \,\frac{\delta
Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}& =& j^a_c \\
\label{fgho}
\partial_1\Big( \frac{\delta Z^{\mbox{\small c}}}{\delta j^a_c}
\Big) -
f^{abc} j^b_b \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_c}
&=& j^a_{\bar{c}} \\
\label{fsutr}
\int d^2x \,\left\{ -\, j^a_\phi \,\frac{\delta
}{\delta j^a_{\bar{c}}}
- j^a_c \,\frac{\delta }{\delta j^{2\, a}}
- j^a_b \,\partial_2
\frac{\delta }{\delta j^a_{\bar{c}}} \right\} Z^{\mbox{\small c}}
& = & 0 \\
\label{fa}
\left( \partial_1 \frac{\delta }{\delta j^{2\, a}} -
f^{abc} j^b_b \,\frac{\delta }{\delta j^{2\, c}} \right)
Z^{\mbox{\small c}} &= & j^a_\phi
-\partial_2 j^a_b, \\
\label{floc}
\partial_1
\Big( \frac{\delta Z^{\mbox{\small c}}}{\delta j^a_b} \Big)
+f^{abc}j^{\mu\, b}\,\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu\, c}}
+f^{abc} j^b_\phi \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_\phi} +
&&\nonumber \\
+\, f^{abc} j^b_c \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_c}
+ f^{abc} j^b_{\bar{c}} \,
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}
+ f^{abc} j^b_b \,\frac{\delta
Z^{\mbox{\small c}}}{\delta j^c_b} & =& \partial_\mu j^{\mu \, a} \\
\label{fres}
\int_{-\infty}^{\infty} dx^1 \left\{
f^{abc} j^{\mu\, b} \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\mu\, c}}
+f^{abc} j^b_\phi \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_\phi}
+\, f^{abc} j^b_c \,\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_c} +
\right.&&\nonumber \\
\left.
+ f^{abc} j^b_{\bar{c}} \,
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}
+ f^{abc} j^b_b \,\frac{\delta
Z^{\mbox{\small c}}}{\delta j^c_b}-\partial_\mu j^{\mu \, a}\right\}
& =&0 \quad .
\end{eqnarray}
As mentioned before, the procedure described above fails to produce an
important relation. Indeed, the equation of motion
(\ref{eoma})
\begin{equation}\label{fphi}
\left( \varepsilon^{\mu\nu}\partial_\nu \frac{\delta }{\delta j^a_\phi}
+ n^\mu \frac{\delta }{\delta j^a_b}\right) Z^{\mbox{\small c}}
+ \varepsilon^{\mu\nu} f^{abc}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^{\nu\, b}}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_\phi}
-n^\mu f^{abc} \frac{\delta Z^{\mbox{\small c}}}{\delta j^b_c}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}
= - j^{\mu\, a}
\end{equation}
is not a consequence of the symmetries and therefore, has to be derived
from the action (\ref{total_s}). More precisely, the $\mu=1$ component
of \equ{fphi} is the field equation for $\phi$
\begin{equation}
\left( \partial_1 \frac{\delta }{\delta j^a_\phi}
-f^{abc} j^b_b\frac{\delta }{\delta j^c_\phi}\right)
Z^{\mbox{\small c}}= - j^{2\, a}
\eqn{dynfi}
and the $\mu=2$ component of \equ{fphi} is the field equation for $b$
\begin{equation}
\left( \frac{\delta }{\delta j^a_b}-
\partial_2 \frac{\delta }{\delta j^a_\phi}\right) Z^{\mbox{\small c}}
-f^{abc}\frac{\delta Z^{\mbox{\small c}}}{\delta j^{2\, b}}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_\phi}
-f^{abc} \frac{\delta Z^{\mbox{\small c}}}{\delta j^b_c}
\frac{\delta Z^{\mbox{\small c}}}{\delta j^c_{\bar{c}}}
= - j^{1\, a}
\eqn{intcond}
which is nothing else than an integrability condition.
{\em Remarks:}
All the functional identities obtained by symmetry
considerations are linear in the quantum fields and, hence, they are not affected
by radiative corrections. This linearization
originates from the topological supersymmetry.
On the other hand, the equation for $b$ is quadratic and may cause problems.
This point will be treated later.
\section{Calculation of the Green's functions, perturbative finiteness}
\label{prop_finit}
We will now derive the solution to the set of equations
(\ref{fgau}) -- (\ref{fphi}). In turn, this will prove that the tree
approximation (\ref{legendre}) corresponds to the exact solution.
\subsection{Solution of the Gauge Condition}
We already emphasized that the axial gauge allows for the factorization
of the ghost sector. This is illustrated by the solution for the gauge
condition (\ref{fgau})
\begin{equation}
\langle A^a_1(x) \, b^b(y) \rangle
= -\,\delta^{ab} \delta^{(2)}(x-y) \quad .
\end{equation}
This is the only non--vanishing Green's function containing $A^a_1$.
\subsection{Solution of the ghost sector}
Let us first differentiate the antighost equation (\ref{fant})
with respect to the source $j^a_c$. This leads to\footnote{Our
conventions for functional derivatives of even and/or odd objects are
$$
\fud{}{C}\int AB=\int \left(\fud{A}{C}B+(-1)^{{\rm deg}(A){\rm deg}(B)}
\fud{B}{C}A\right)
$$}
\begin{equation}
\partial_1 \frac{\delta^2 Z^{\mbox{\small c}}}{\delta j^a_{\bar{c}}(x)
\,\delta j^b_c(y)} \Bigg|_{j = 0} = \,-\,\delta^{ab} \delta^{(2)}(x - y)
\end{equation}
A subsequent integration yields the propagator
\begin{equation}
\langle \bar{c}^a(x) \, c^b(y) \rangle \, =
\delta^{ab}\left[ - \theta(x^1-y^1) \delta(x^2-y^2) + F(x^2-y^2) \right]
\eqn{cbcgen}
where $\theta$ is the step function,
\begin{equation}
\theta(x-y) = \left\{ \begin{array}{ccl}
1 & , & (x-y) > 0 \\
0 & , & (x-y) < 0 \quad . \end{array} \right.
\end{equation}
and $F(x^2-y^2)$ is a function of $x^2-y^2$, as required by translational
invariance, with canonical dimension $1$. Since the introduction of
any dimensionful parameter in our theory would spoil its topological
character, and since we work in the space of
tempered distributions\footnote{Any expression of the type
$1/(x^2-y^2)$ exhibits short distance singularities and
therefore needs the introduction of a dimensionful UV subtraction point in
order to give it a meaning.}\label{foutbol}, we are left with
\begin{equation} \label{cbarc}
\langle \bar{c}^a(x) \, c^b(y) \rangle \, =
-\,\delta^{ab} \left[ \theta(x^1-y^1)+\alpha\right] \delta(x^2-y^2)
\end{equation}
The analogous calculation starting from the ghost equation (\ref{fgho})
leads to the value
\begin{equation} \label{einhalb}
\alpha \; = \; -\,\frac{1}{2} \quad .
\end{equation}
for the integration constant as a consequence of the Fermi statistics
for the ghost fields and the $c\leftrightarrow\bar{c}$ invariance
of the theory. This implies the principal
value prescription for the unphysical pole $(n^\mu k_\mu)^{-1}$ in the
Fourier--transform of the ghost--antighost propagator.
It is important to note that, contrary to the Landau gauge
\cite{blasi2}, the ghost--antighost propagator is infrared regular
in the axial gauge.
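To make the principal value statement explicit, note that, with the axial
vector $n^\mu$ chosen along the $x^1$ direction, the only non--trivial
dependence of \equ{cbarc} enters through
$\theta(x^1-y^1)-\frac{1}{2}=\frac{1}{2}\,{\rm sgn}(x^1-y^1)$, whose Fourier
transform is a principal value distribution,
\begin{equation}
\int dx^1\, e^{-ik_1 x^1}\,\Big[\theta(x^1)-\frac{1}{2}\Big]
=\frac{1}{2}\int dx^1\, e^{-ik_1 x^1}\,{\rm sgn}(x^1)
={\rm PV}\,\frac{1}{i k_1}\quad ,
\end{equation}
so that the momentum--space ghost--antighost propagator indeed carries the
pole ${\rm PV}\,(n^\mu k_\mu)^{-1}$.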
For the higher order Green's functions, the basic recurrence relation is
obtained by differentiating (\ref{fant}) with respect to the most general
combination of the sources
$(\delta^{(n+m+1)}/(\delta j_c)(\delta j_b)^n(\delta j_\varphi)^m)$
for $n+m\geq 1$ and $\varphi\in \{A_2 , \phi , c, \bar{c}\}$. This
gives the following recursion relation over the number of $b$--fields
\begin{eqnarray} \label{gen_rec}
\lefteqn{\partial_1\vev{\bar{c}^a(x) c^b(y) b^{c_1}(z_1)
\ldots b^{c_n}(z_n)
\varphi^{d_1}(v_1) \ldots\varphi^{d_m}(v_m)} =} \\[3mm]
&&=\sum_{k = 1}^n f^{a c_k c}\,\d^{(2)}(x-z_k)\times \nonumber \\[3mm]
&&\quad\times\vev{\bar{c}^c(z_k) c^b(y)
b^{c_1}(z_1) \ldots \widehat{b^{c_k}(z_k)}
\ldots b^{c_n}(z_n) \varphi^{d_1}(v_1) \ldots
\varphi^{d_m}(v_m)} \nonumber
\end{eqnarray}
where $\widehat{\Phi}$ denotes the omission of the field $\Phi$ in
the Green's functions.
Let us first look at the case $n=0$ where (\ref{gen_rec}) reduces to
\begin{equation}
\partial_1\vev{\bar{c}^a(x) \, c^b(y) \,
\varphi^{d_1}(v_1) \ldots\varphi^{d_m}(v_m)}\; = 0
\eqn{gen_reczero}
The solution is
\begin{equation}
\vev{\bar{c}^a(x) \, c^b(y) \,
\varphi^{d_1}(v_1) \ldots\varphi^{d_m}(v_m)}\; = F(\xi_k)
\qquad 1\leq k\leq M
\eqn{gensolzero}
where $\xi_k$ stands for the $M=1+2m+\frac{m}{2}(m-1)$ differences
$\{x^2-y^2,\,x^2-v_i^2,\,v_i^2-v_j^2,\,y^2-v_i^2\}$, $1\leq i,j\leq m$,
due to translational invariance. The absence of any
dependence on the coordinate $x^1$ comes from the fact that any Green's
function which does not involve $b$--fields obeys a homogeneous equation
similar to (\ref{gen_reczero}) for all its arguments.
Under the same assumptions as for (\ref{cbarc}), $F(\xi_k)$ has the general form
\begin{equation}
F(\xi_k)\sim \d(\xi_k)
\eqn{gensolfxi}
where the coefficients are either constants or proportional to
$\ln(\frac{\xi_k}{\xi_{k'}})$ since this is the only combination which does not break
scale invariance. Using now
canonical dimension arguments (c.f. Tab. \ref{table1}), conservation of
the ghost charge
and residual gauge invariance \equ{fres}, one gets
\begin{equation}
\vev{(\bar{c})^{m_1} \, (c)^{m_2}\, (A_\mu)^{m_3}\, (\phi)^{m_4}}=0
\qquad \forall \ \{ m_1 ,\, m_2 ,\, m_3 ,\, m_4 \} \neq \{ 1,1,0,0 \}
\eqn{genzero}
The next step concerns the Green's functions which involve
$b$--fields. As a consequence of \equ{genzero}, the unique starting point
for the recurrence \equ{gen_rec} is the two--point function
$\vev{c\, \bar{c}}$ \equ{cbarc}. Thus \equ{gen_rec} reduces to the recurrence
relation
\begin{eqnarray} \label{propghost}
\lefteqn{\vev{\bar{c}^a(x) c^b(y) b^{c_1}(z_1) \ldots
b^{c_n}(z_n)} = } \\[3mm]
& & = \sum_{k = 1}^n f^{a c_k c}
[ \theta(x^1-z_k^1) + \alpha^{(n)} ]
\delta(x^2-z_k^2) \vev{\bar{c}^c(z_k)
c^b(y) b^{c_1}(z_1)
\ldots \widehat{b^{c_k}(z_k)} \ldots b^{c_n}(z_n)} \nonumber
\end{eqnarray}
for $n\geq1$.
The integration constants $\alpha^{(n)}$ are also fixed by the Fermi
statistics of the ghost fields to be
\begin{equation}
\alpha^{(n)} = -\,\frac{1}{2} \quad , \quad \forall \, n
\end{equation}
and these solutions correspond to tree graphs
\begin{eqnarray}\label{prop_barc_c_nb}
\lefteqn{\langle \bar{c}^a(x) \, c^b(y) \, b^{c_1}(z_1) \,\ldots
\, b^{c_n}(z_n) \rangle \; = } \\[3mm]
&& = -\sum_{k = 1}^n f^{e c_k c}
\,\Big\langle\bar{c}^a(x)\, c^e(z_k)\Big\rangle
\vev{\bar{c}^c(z_k) \, c^b(y) b^{c_1}(z_1)
\, \ldots \,\widehat{b^{c_k}(z_k)} \,\ldots b^{c_n}(z_n)} \nonumber
\end{eqnarray}
since (\ref{fant},\ref{fgho}) are linear in the quantum fields. Thus, this justifies
the tree approximation \equ{legendre} for this sector.
\subsection{Solution of the gauge sector}
Although we already know that the symmetries fail to produce
an important relation for this sector, let us see how
far we can go in determining the Green's functions
when taking into account only the symmetries of the model.
The most fruitful approach is based on the transversal component of
the supersymmetry Ward identity (\ref{fsutr}).
The two--point functions are found by differentiating (\ref{fsutr})
with respect to $\delta^{(2)}/\delta j^2 \delta j_c$,
$\delta^{(2)}/\delta j_\phi \delta j_c$ and
$\delta^{(2)}/\delta j_b \delta j_c$. They are
\begin{eqnarray}\label{startaa}
\langle A^a_2(y) \, A^b_2(z) \rangle & = &
0 \quad , \\
\label{prop_ccaphi} \langle A^a_2(x) \, \phi^b(y) \rangle
& = & \langle \bar{c}^b(y) \, c^a(x) \rangle \nonumber \\
& = & -\,\delta^{ab} \,\left[\theta(y^1-x^1) - \frac{1}{2}\right]
\,\delta(x^2-y^2) \quad , \\
\label{startab}\langle b^a(x)\, A^b_2(y) \rangle & = & \partial_2
\langle \bar{c}^a(x)\, c^b(y) \rangle \nonumber \\
& = & -\,\delta^{ab} \, \left[\theta(x^1-y^1) - \frac{1}{2}\right] \,
\partial_2 \delta(x^2-y^2) \quad ,
\end{eqnarray}
where (\ref{cbarc}) and (\ref{einhalb}) have been used.
For the higher orders,
(\ref{startaa},\, \ref{prop_ccaphi},\, \ref{startab}) generalize to
\begin{eqnarray}
\vev{A^a_2(y) \, A^b_2(z)\, ({\varphi})^n}&=&0\nonumber \\[3mm]
\vev{A^a_2(x) \, \phi^b(y)\, ({\varphi})^n}&=&
\vev{\bar{c}^b(y) \, c^a(x)\, ({\varphi})^n}\nonumber \\[3mm]
\vev{b^a(x)\, A^b_2(y)\, ({\varphi})^n}&=&
\partial_2\vev{ \bar{c}^a(x)\, c^b(y)\, ({\varphi})^n}\nonumber
\end{eqnarray}
with ${\varphi}\in\{A_2,\phi,c,\bar{c},b\}$.
Since we already have the complete solution for the ghost sector
(\ref{cbarc}, \ref{genzero}, \ref{prop_barc_c_nb}), this proves that in the
axial gauge the supersymmetry completely fixes all the Green's functions
which contain at least one field $A_2$.
\subsubsection{Solution of the Local Gauge Ward--identity}
The solution for $\langle (b)^n \rangle$, $\forall n$ can be derived
from the local gauge Ward--identity (\ref{floc}). Indeed, by
differentiation with respect to $\delta^{(n)}/(\delta j_b)^n$, $n\geq 1$,
one gets directly
\begin{equation}
\langle (b)^{(n+1)} \rangle = 0 \qquad \forall n\geq 1 \label{solbn}
\end{equation}
Since these are the only loop diagrams of the theory, this shows that
the non--linearity of \equ{intcond} has no consequences and that the tree
approximation \equ{legendre} is exact.
\subsubsection{Solution of the field equation for $\phi$}
In the last two subsections, we showed that all the Green's functions
of the form
$$
\vev{A_2 \ldots}\quad , \quad \vev{b^n}
$$
were fixed by symmetry requirements.
The remaining part, formed by the Green's functions of the
type $\langle (\phi)^m(b)^n \rangle$, $m\geq 1$, is determined only through the
use of the equations of motion (\ref{fphi}). In the following, we
will thus look for the general solution of (\ref{fphi}) for Green's functions
with no $A_2$ fields since the latter are already found in the previous
subsection.
For the propagators, (\ref{fphi}) gives
\begin{eqnarray}
\varepsilon^{\mu\nu}\partial_\nu\vev{\phi^a(x) b^b(y) }&=&0\\
\partial_1 \langle \phi^a(x)\phi^b(y)\rangle&=&0 \\
\langle b^b(x) \phi^a(y) \rangle&=&
\partial_2 \langle \phi^a(x)\phi^b(y)\rangle
\end{eqnarray}
which, together with (\ref{solbn}) and translational invariance, are solved by
\begin{eqnarray}
\langle \phi^a(x)\phi^b(y)\rangle&=& F(x^2-y^2) \label{fifi} \\
\langle b^b(x) \phi^a(y) \rangle&=&0 \label{bfi}
\end{eqnarray}
Here $F(x^2-y^2)$ is an arbitrary function with canonical
dimension $0$. Following the same reasoning as for (\ref{cbarc}), the
latter is a constant
\begin{equation}
\vev{ \phi^a(x)\phi^b(y)}={\rm const}\;\d^{ab}
\eqn{vevphi}
It is important to notice that this constitutes the first
solution of the homogeneous equation which is not annihilated by the residual
gauge invariance. This is caused by the bosonic character of the field
$\phi$ of canonical dimension $0$.
The higher orders are generated by functionally differentiating
\equ{dynfi} with respect to
$\delta^{(m+n)}/\delta (j_b)^m \delta (j_\phi)^n$
\begin{eqnarray}
\lefteqn{\partial_1\vev{\phi^a(x)
b(y_1)^{b_1}\ldots b(y_m)^{b_m}
\phi(z_1)^{c_1}\ldots\phi(z_n)^{c_n}}=}\label{genrecbf}\\[3mm]
&&=\sum_{i=1}^{m}f^{ab_ie}\delta^{(2)}(x-y_i) \vev{\phi^e(y_i)
b(y_1)^{b_1}\ldots \widehat{b(y_i)^{b_i}}\ldots b(y_m)^{b_m}
\phi(z_1)^{c_1}\ldots\phi(z_n)^{c_n} }\nonumber
\end{eqnarray}
For the $m=0$ case, the solution which generalizes \equ{fifi}
is
\begin{equation}
F\left(\ln (\frac{x-z_i}{x-z_j}) \right)
\eqn{genso}
but then \equ{intcond} imposes\footnote{See footnote ${}^6$ on
p. \pageref{foutbol}.}
\begin{equation}
\vev{(\phi)^n}=\b_n \qquad \forall n
\eqn{vevgen}
where $\b_n$ is a constant which may depend on $n$.
Physically this corresponds to the only invariant polynomials ${\rm {Tr} \,} \phi^n$.
Furthermore, these solutions are the starting points for the recurrence
\equ{genrecbf} for $m\neq 0$. Nevertheless, the Green's functions obtained
in this way do not satisfy the residual gauge invariance \equ{fres} and
we must set $\b_n=0$.
\section{Conclusion}
We have already emphasized that the main difference of the two--dimensional BF model
with respect to the higher dimensional cases is the absence of the reducible
symmetry, caused by the 0--form nature of the field $\phi$.
The system is thus less constrained, {{\em i.e.},\ } the symmetries do not fix all the
Green's functions: the monomials ${\rm {Tr} \,}(\phi)^n$ remain free.
{\bf Acknowledgments:} One of the authors (S.E.) would like to thank the
``Fonds Turrettini'' and the ``Fonds F. Wurth'' for their financial support
during his stay at the Technische Universit\"at Wien where this work has
been initiated. We are also indebted to Olivier Piguet and Nicola Maggiore
for helpful discussions.
\section{Introduction}
Construction of chiral gauge theory is one
of the long standing problems of lattice field theories.
Because of the fermion doubling problem,
the lattice field theory discretized naively becomes non-chiral.
Several approaches on the lattice have been proposed to overcome this
difficulty, but none of them has been proven to be successful.
Recently Kaplan has proposed a domain-wall model
in order to construct lattice chiral gauge theories\cite{kaplan}.
The model consists of Wilson fermion action
in (2$n$+1) dimensions with a fermion mass term
being the shape of a domain wall in the (extra) (2$n$+1)th dimension.
In the case of free fermions
it is shown for $0 < m_0 < 1$, where $m_0$ denotes
the domain wall mass height, that
a massless chiral state arises as a zero mode
bound to the $2n$-dimensional domain wall
while all doublers have large masses of the lattice cutoff scale.
In a way of introducing dynamical gauge fields
two variants of this model have been proposed:
the waveguide model\cite{waveg} and the overlap formula\cite{overl}.
However, it has been reported that the chiral zero mode disappears
for these two variants even in the weak coupling limit,
due to the roughness of the gauge field\cite{waveg}.
In the original model
the roughness of the gauge field is replaced
with the dynamical gauge field in the extra dimension, and
in the weak coupling limit of the $2n$ dimensional coupling,
$2n$ dimensional links $U_{\mu}(x,s)$ = 1 ($\mu = 1,\cdots ,2n$)
while only extra dimensional links $U_{2n+1}(x,s)$ become dynamical.
We would like to know whether the fermionic zero modes exist
on the domain wall or not in this limit.
This question has already been investigated
in (2+1) dimensions\cite{aokinagai},
but the result, in particular in the symmetric phase,
is not conclusive, due to the peculiarity of the phase transition
in the 2 dimensional U(1) spin model.
Therefore we numerically investigate this model with U(1) gauge field
in (4+1) dimensions and report the result here.
\section{Mean-field analysis}
Before numerical investigation
we estimate the effect of the dynamical gauge field in the extra
dimension using the mean-field analysis.
In our mean-field analysis all link variables in the extra dimension
are replaced with $(x , s)$-independent constant $z$ $(0<z<1)$,
so that the fermion propagator is easily obtained\cite{aokinagai}
\footnote{Note that $G_L$ $(G_R)$ here was denoted $G_R$ $(G_L)$ in Ref.\cite{aokinagai}}
\begin{eqnarray}
G(p)_{s,t} = \left[ \left( -i \sum_{\mu} \gamma_{\mu} {\bar p_{\mu}} + M(z) \right) G_{R}(p) P_L \right. \nonumber \\
+ \left. \left( -i \sum_{\mu} \gamma_{\mu} {\bar p_{\mu}} + M^{\dag}(z) \right) G_L(p) P_R \right]_{s,t} \nonumber ,
\end{eqnarray}
where $\bar p_{\mu} \equiv \sin(p_{\mu})$
and $P_{R/L} = (1 \pm \gamma_5)/2$.
Corresponding fermion masses are obtained from $G_R$ and $G_L$
in the ${\bar p} \rightarrow 0$ limit, and
we found that fermionic zero modes exist only for
$1-z < m_0 < 1$. Thus the critical value of the domain wall mass
is $m_{0}^{c} = 1 - z$ and therefore no zero mode survives if $z=0$.
\section{Numerical analysis}
We investigated the (4+1) dimensional U(1) model numerically,
using a quenched approximation.
At the zero physical gauge coupling
the gauge field action of the model is reduced to
the $2n$ dimensional spin model with many copies:
\begin{equation}
S_G = \beta_s \sum_{s,x,{\hat{\mu}}} {\rm{Re}} {\rm{Tr}}
\left[ U_{D}(x,s) U^{\dag}_{D}(x+{\hat{\mu}},s) \right],
\end{equation}
where $D=2n+1$.
Therefore there exists a phase transition
which separates a broken phase from a symmetric phase.
We calculated the order parameter $v$
using a rotational technique\cite{aokinagai}
and found $\beta_s=0.29$ corresponds to the symmetric phase
and $\beta_s=0.5$ to the broken phase.
We calculated the fermion propagator over 50 configurations
at $\beta_s=0.5$ and 0.29
on $L^3 \times 32\times 16$ lattices with $L=$4, 6, 8.
For the fermion field we take periodic boundary conditions
except the anti-periodic boundary condition in the 4th direction.
The errors are estimated by the single elimination jack-knife method.
At $s=0$
we obtained the inverse propagator $G_{R}^{-1}$ and $G_{L}^{-1}$
for $p_1, p_2, p_3 = 0$ as a function of $p_4$.
We obtain the fermion mass squared, $m_{f}^2$,
by extrapolating $G_{R}^{-1}$ and $G_{L}^{-1}$ linearly to $p_4 = 0$.
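As an illustration of this analysis step (a minimal sketch only, not the
code used for the simulation; the array names and the momentum set are
hypothetical), the single elimination jack-knife estimate of $m_f^2$ from
the linear extrapolation of the inverse propagator, taken here as a
function of $\sin^2 p_4$, could be organized as follows:
\begin{verbatim}
import numpy as np

def mf_squared(ginv, p4):
    # linear fit of G^{-1} against sin^2(p4); the intercept at p4 -> 0
    # is identified with the fermion mass squared m_f^2
    slope, intercept = np.polyfit(np.sin(p4) ** 2, ginv, 1)
    return intercept

def jackknife_mf2(ginv_cfgs, p4):
    # ginv_cfgs: shape (n_cfg, n_p4), G^{-1}(p4) measured on each configuration
    n = len(ginv_cfgs)
    central = mf_squared(ginv_cfgs.mean(axis=0), p4)
    samples = np.array([mf_squared(np.delete(ginv_cfgs, i, axis=0).mean(axis=0), p4)
                        for i in range(n)])
    error = np.sqrt((n - 1) / n * np.sum((samples - samples.mean()) ** 2))
    return central, error
\end{verbatim}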
\subsection{Broken phase}
In Fig.\ref{fig:broken}
we plotted $m_f$ in the broken phase ($\beta_s=0.5$)
as a function of $m_0$.
\begin{figure}[t]
\centerline{\epsfxsize=7.5cm \epsfbox{fig1.ps}}
\vspace*{-12mm}
\caption{$m_f$ vs. $m_0$ in the broken phase}
\label{fig:broken}
\vspace*{-5mm}
\end{figure}
As seen from this figure, the finite size effect is small,
and the left-handed modes are always massive,
while the right-handed modes are massless if $m_0$ is larger than about $0.6$.
Therefore, we conclude that chiral zero modes can exist in the broken phase.
\subsection{Symmetric phase}
Let us show the fermion mass in the symmetric phase ($\beta_s=0.29$)
in Fig.\ref{fig:symmetric}.
In the smallest lattice size the chiral zero modes seem to exist.
\begin{figure}[t]
\centerline{\epsfxsize=7.5cm \epsfbox{fig2.ps}}
\vspace*{-12mm}
\caption{$m_f$ vs. $m_0$ in the symmetric phase}
\label{fig:symmetric}
\vspace*{-5mm}
\end{figure}
However, for large lattices,
the mass difference between the left- and right-handed modes
becomes smaller. This suggests that
the fermion spectrum becomes vector-like in the infinite volume limit.
However, since the fermion mass near $m_0=1.0$ is so small,
we cannot exclude, from these data alone,
the possibility that the critical mass $m_{0}^c$ is
very close to $1.0$.
To make a definite conclusion on the absence of chiral zero modes
in the symmetric phase, we try to fit the fermion propagator
using the form of the mean-field propagator
with the fitting parameter $z$.
We show the quality of the fit in Fig.\ref{fig:propR}.
\begin{figure}[t]
\centerline{\epsfxsize=7.5cm \epsfbox{fig3.ps}}
\vspace*{-12mm}
\caption{$G_{R}^{-1}$ vs. $\sin^2(p)$ in the symmetric phase}
\label{fig:propR}
\vspace*{-5mm}
\end{figure}
This figure shows that
the fermion propagator is well described
by the mean-field propagator.
In Fig.\ref{fig:fittingR} and Fig.\ref{fig:fittingL}
we plotted the parameter $z$ obtained from the above fit
as a function of $1/L$.
\begin{figure}[t]
\centerline{\epsfxsize=7.5cm \epsfbox{fig4.ps}}
\vspace*{-12mm}
\caption{$z$(right-handed) vs. $1/L$ in the symmetric phase}
\label{fig:fittingR}
\vspace*{-5mm}
\end{figure}
\begin{figure}[t]
\centerline{\epsfxsize=7.5cm \epsfbox{fig5.ps}}
\vspace*{-12mm}
\caption{$z$(left-handed) vs. $1/L$ in the symmetric phase}
\label{fig:fittingL}
\vspace*{-5mm}
\end{figure}
The parameters $z$ are almost independent of $m_0$ at each $1/L$
except for the right-handed ones at $m_0=0.99$.
The solid circles represent the order parameter $v$.
The behaviors of $z$ at different $m_0$
are almost identical to each other
and are very similar to that of $v$
except the right-handed ones at $m_0=0.99$.
This suggests that
$z$ can be identified with $v$
and therefore $z$ becomes zero
as the lattice size goes to infinity.
If this is the case
the fermion spectrum of this model becomes vector-like
in the symmetric phase.
\section{Conclusions}
We have carried out the numerical simulations
of the U(1) original domain-wall model
in (4+1) dimensions
in the weak coupling limit of the 4-dimensional
coupling.
In the broken phase,
there exist chiral zero modes on the domain wall
for $m_0 > m_{0}^c$.
The existence of the critical mass $m_{0}^c$
is predicted by the mean-field analysis.
On the other hand,
in the symmetric phase,
the analysis using the mean-field propagator suggests
that this model becomes vector-like.
We should note, however, that
the right-handed modes at $m_0=0.99$ behave differently
and a similar behavior was also found
in the (2+1) dimensional model\cite{aokinagai}.
Therefore in the future,
we must investigate this point in detail,
for example, by increasing the statistics.
Besides this point
the results from both phases suggest that
this model becomes vector-like in the continuum limit,
which should be taken at the critical point
(the point between the two phases).
Therefore,
it seems also difficult
to construct a chiral gauge theory on the lattice
via the original domain-wall model.
\section{Introduction}
Various phenomena associated with phase transitions at the early
stage of the universe have been a subject of great interest in
cosmology for two decades. These phase transitions of the
universe are motivated by the symmetry breaking phenomenon
in high energy particle physics.
One of the decisive problems in high energy particle physics
is how the model of unified theories can be tested.
It is expected that the primary symmetry of unified theories
is broken down at the early universe to yield the theories
with lower symmetries.
There is a possibility to test the model at the early stage
of the universe.
To investigate unified theories, much interest has been taken
in clarifying the mechanism of the spontaneous symmetry breaking
under the circumstance of the early universe.
The dynamics of the strong coupling gauge theory may break
the symmetry of the unified theories without introducing
an elementary scalar field.
This scenario is called dynamical symmetry breaking
\cite{NJL}.
It is considered
that the change of curvature or volume size of the universe
may cause the dynamical symmetry breaking as the evolution
of the universe proceeds.
The studies of such effects on the symmetry
breaking may help understanding unified theories and evolution
of the early universe.
Many works have been done in this field.
By using the weak curvature expansion it is found that the
chiral symmetry is restored for a large positive curvature
and that there is no symmetric phase in a spacetime with
any negative curvature.\cite{IMO,ELOS,Inagaki}
In weakly curved spacetime it is pointed out that non-trivial
topology for the fermion field may drastically change the
phase structure of the four-fermion theory.\cite{ELO}
The higher derivative and gauged four-fermion theories
have also been investigated in weakly curved spacetime.\cite{ENJL}
In some compact spaces, e.g., de Sitter space\cite{IMM,EOS}
and Einstein universe\cite{IIM}, the effective potential
is calculated without any approximation for the spacetime curvature.
It is observed to exhibit the symmetry restoration through the
second order phase transition.
However, in such compact spaces, it is not clear whether the symmetry
restoration is caused by the curvature or by the finite size effect.
An example of a simple compact space with no curvature is
the torus universe.
Since the torus spacetime has only the finite size effect,
an investigation in this
space will indicate which effect, curvature or finite size,
is essential for restoring the symmetry.
We therefore investigate
the dynamical symmetry breaking in compact flat space
with non-trivial topology.
Let us briefly comment on the
cosmological motivations to consider the torus universe.
Several astrophysicists have discussed the possibility
of the torus universe \cite{Zeldovich,SS,FS}, and
recently the topology of the universe has been discussed by
using the observational data of the cosmic microwave background
anisotropies \cite{torusCMB}, which were detected by COBE DMR \cite{smoot}.
Assuming that our universe is the three-torus, they
constrained the cell size of the torus.
According to the results, the size would be larger than the
present horizon scale.
Thus we do not know the topology of our universe at present.
Nevertheless, there are some cosmological motivations to consider
compact flat space with non-trivial topology.
First, quantum cosmologists have argued that small volume
universes have small action, and are more likely to be created
\cite{Atkatz}. In fact, it seems difficult to create an infinite volume
universe in the context of quantum cosmology.
Second, the torus universe, in contrast to the compact $S^3$ universe,
may have long lifetime because the curvature does not collapse
the universe.
In this paper we make a systematic study of the dynamical symmetry
breaking in compact flat space with non-trivial topology,
assuming that the four-fermion theory is an effective
theory which stems from more fundamental theory at GUT era.
The effective potential is calculated from the Feynman
propagator which depends on the spacetime structure.
Evaluating the effective potential, we investigate the
dynamical symmetry breaking induced by the effect of the
spacetime structure.
The dynamical symmetry breaking in torus universe of space-time
dimension, $D=3$, is investigated in Ref.\cite{Kimetal,KNSY,DYS}.
Our strategy to evaluate the effective potential differs
from that in Ref.\cite{Kimetal,KNSY,DYS}. Our method starts
from the Feynman propagator in real space, then it
can be easily applied to compact flat spaces with arbitrary
topology for $D=2,3$ and $4$.
The paper is organized as follows.
In section 2, we show a brief review of four-fermion theory
in curved space. We then extend the formalism
to a useful form in order to investigate the effective potential
in compact flat space with nontrivial topology.
In section 3, we apply the formalism to a 3-dimensional
spacetime with nontrivial spatial sector.
The 4-dimensional case is investigated in section 4.
Section 5 is devoted to summary and discussions.
In the appendix, we show the validity of our method by considering
the $D=2$ case. We prove that our method leads to
well known results previously obtained.
We use the units $\hbar=1$ and $c=1$.
\section{Formalism}
In this section we first give a brief review of the
four-fermion theory
in curved space. We consider the system with the action\cite{GN}
\begin{equation}
S\! =\!\! \int\!\!\! \sqrt{-g} d^{D}\! x\! \left[
-\sum^{N}_{k=1}\bar{\psi}_{k}\gamma^{\alpha}\nabla_{\alpha}\psi_{k}
+\frac{\lambda_0}{2N}
\left(\sum^{N}_{k=1}\bar{\psi}_{k}\psi_{k}\right)^{2}
\right] ,
\label{ac:gn}
\end{equation}
where index $k$ represents the flavors of the fermion field
$\psi$, $N$ is the number of fermion species, $g$ the determinant
of the metric tensor $g_{\mu\nu}$,
and $D$ the spacetime dimension.
For simplicity we neglect the flavor index below.
The action (\ref{ac:gn}) is invariant under the discrete
transformation
$\bar{\psi}\psi \longrightarrow -\bar{\psi}\psi$.
For $D=2,4$ this transformation is realized
by the discrete chiral transformation
$\psi \longrightarrow \gamma_{5}\psi$.
Thus we call this $Z^{2}$ symmetry the discrete chiral symmetry.
The discrete chiral symmetry prohibits the fermion mass term.
If the composite operator constructed from the fermion and
anti-fermion
develops the non-vanishing vacuum expectation value,
$\langle\bar{\psi}\psi\rangle \neq 0$,
a fermion mass term appears in the four-fermion interaction
term and the chiral symmetry is broken down dynamically.
For practical calculations in four-fermion theory
it is more convenient to introduce auxiliary field $\sigma$
and start with the action
\begin{equation}
S_{y} = \!\int\! \sqrt{-g}d^{D}x
\left[-\bar{\psi}\gamma^{\alpha}
\nabla_{\alpha}\psi
-\frac{N}{2\lambda_0}\sigma^{2}-\bar{\psi}\sigma\psi
\right]\, .
\label{ac:yukawa}
\end{equation}
The action $S_{y}$ is equivalent to the action (\ref{ac:gn}).
If the non-vanishing vacuum expectation value is assigned to
the auxiliary field $\sigma$
there appears a mass term for the fermion field $\psi$
and the discrete chiral symmetry (the $Z_{2}$ symmetry)
is eventually broken.
We would like to find a ground state of the system described by
the four-fermion theory.
For this purpose we evaluate an effective potential for the field
$\sigma$.
The ground state is determined by observing the minimum of the
effective potential in the homogeneous and static background
spacetime.
As is known, the effective potential
in the leading order of the $1/N$ expansion is given by\cite{IMO}
\begin{equation}
V(\sigma)={1\over 2\lambda_0}\sigma^2
+{ {\rm Tr} \sqrt{-g} \int_0^\sigma ds S_{F}(x,x';s)
\over \int d^Dx \sqrt{-g}},
\label{def:epot}
\end{equation}
where
\begin{equation}
{\rm Tr}=\int\int d^Dx d^Dx' \delta^{D}(x-x') {\rm tr},
\end{equation}
and $S_{F}(x,x';s)$ is the Feynman propagator for free fermion
with mass $s$, which satisfies
\begin{equation}
(\gamma^\alpha\nabla_\alpha+s)S_{F}(x,x';s)={i\over\sqrt{-g}}
\delta^D(x,x').
\end{equation}
It should be noted that the effective potential
(\ref{def:epot}) is normalized so that $V(0)=0$.
We introduce the Feynman propagator for the scalar
field with mass $s$,
\begin{equation}
(\Box_x-s^2)G_{F}(x,x';s)=\frac{i}{\sqrt{-g}}\delta^D(x,x'),
\end{equation}
which has the relation,
\begin{equation}
S_{F}(x,x';s)=(\gamma^\alpha\nabla_\alpha-s)G_{F}(x,x';s).
\end{equation}
Then, using ${\rm tr}\,\gamma^{\alpha}=0$, we write the effective potential as
\begin{equation}
V(\sigma)={1\over 2\lambda_0}\sigma^2
-\lim_{x'\rightarrow x }{\mbox{\rm tr}\mbox{\bf 1}}\int_0^\sigma ds ~ s~G_{F}(x,x';s),
\label{expVb}
\end{equation}
in flat spacetime.
Here $\mbox{\rm tr}\mbox{\bf 1}$ is the trace of an unit Dirac matrix.
Now we consider the Feynman propagator on compact flat space with
nontrivial topology. We write the Feynman propagator in the
$D$-dimensional Minkowski
space as ${\tilde G}_{F}(x,x';s)=\tilde G(\xi)$, where
$\xi=(x-x')^2=(t-t')^2-({\bf x-x'})^2$ and the explicit expression
is given by Eq.(\ref{exptG}).
That is, ${\tilde G}_{F}(x,x';s)$ has the Lorentz invariance,
and is a function of the variable $\xi$.
Then the Feynman propagator on the $(D-1)$-dimensional spatial
torus whose size is ${\bf L}=(L_1,L_2,\cdots,L_{D-1})$ can be written
\widetext
\begin{equation}
G_{F}(x,x';s)=
\sum_{n_1=-\infty}^{\infty}\sum_{n_2=-\infty}^{\infty}
\cdots \sum_{n_{D-1}=-\infty}^{\infty}\alpha({\bf n}) \tilde G(\xi_{\bf n})
\equiv\sum_{{\bf n}} \alpha({\bf n}) \tilde G(\xi_{\bf n}),
\label{defGF}
\end{equation}
\narrowtext
where
\begin{equation}
\xi_{\bf n}=(t-t')^2-\sum_{i=1}^{D-1}(x_i-x'_i+n_iL_i)^2,
\label{defxin}
\end{equation}
${\bf n}=(n_1,n_2, \cdots, n_{D-1})$, and
$\alpha({\bf n})$ is a phase factor which is determined
in accordance with the boundary condition of
quantum fields (see below).
Throughout this paper we use
a convention $x=(t,{\bf x})=(t,x_1,x_2,\cdots,x_{D-1})$,
$~x'=(t',{\bf x}')=(t',x'_1,x'_2,\cdots,x'_{D-1})$, etc.
Note that the Green function constructed in this way has the invariance
under the replacement $x_i\rightarrow x_i +{ L_i}$,
and satisfies the equation of motion.
\footnote{
We should note that our formalism will be easily extended to
finite temperature theory.
The finite temperature Green function can be obtained
by summing the Euclidean Green function so that
it has periodicity in the direction of time,
in the same way as Eq.(\ref{defGF}).
}
The Feynman propagator in the $D$-dimensional Minkowski space is
(see e.g.\cite{BD})
\widetext
\begin{equation}
{\tilde G}_{F}(x,x';s)=\tilde G(\xi)={\pi\over (4\pi i)^{D/2}}
\biggl({4s^2\over -\xi +i\epsilon}\biggr)^{(D-2)/4}
H^{(2)}_{D/2-1}\Bigl([s^2(\xi-i\epsilon)]^{1/2}\Bigr),
\label{exptG}
\end{equation}
where $H_\nu^{(2)}(z)$ is the Hankel function of the second kind.
Then the effective potential is obtained by substituting Eq.(\ref{exptG})
with (\ref{defxin}), and (\ref{defGF}) into Eq.(\ref{expVb}).
Performing the integration we get
\begin{equation}
V={1\over 2\lambda_0}\sigma^2-\lim_{x'\rightarrow x }
\mbox{\rm tr}\mbox{\bf 1}\sum_{{\bf n}}\alpha({\bf n})
{\pi\over(4\pi i)^{D/2}}
\biggl({4\over -\xi_{\bf n}+i\epsilon}\biggr)^{(D-2)/4}
{1\over (\xi_{\bf n}-i\epsilon)^{1/2}}
\biggl[ s^{D/2} H_{D/2}^{(2)}(s(\xi_{\bf n}-i\epsilon)^{1/2})
\biggr]_{0}^{\sigma}.
\end{equation}
The effective potential should take real values physically.
The imaginary part of the effective potential
must vanish after we take the limit $x \rightarrow x'$.
Thus we consider only the real part of the effective potential, which
is given by
\begin{equation}
V={1\over 2\lambda_0}\sigma^2+\lim_{{\bf x}'\rightarrow {\bf x} }
\mbox{\rm tr}\mbox{\bf 1}\sum_{{\bf n}}\alpha({\bf n})
{1\over(2\pi )^{D/2}}
{1\over {\Delta x}_{\bf n}^D}
\biggl[ (\sigma{\Delta x}_{\bf n})^{D/2} K_{D/2}(\sigma{\Delta x}_{\bf n})
-\lim_{z\rightarrow0} z^{D/2}K_{D/2}(z)\biggr]~,
\label{epb}
\end{equation}
\narrowtext
where we have defined
\begin{equation}
{\Delta x}_{\bf n}=\sqrt{\sum_{i=1}^{D-1}(x_i-x'_i+n_iL_i)^2}~,
\end{equation}
and $K_{\nu}(z)$ is the modified Bessel function.
In deriving the above equation, we have set $t=t'$ and used the relation $H_\nu^{(2)}(-iz)=(i2/\pi)e^{\nu\pi i} K_{\nu}(z)$
(see e.g. \cite{Mag}).
As can be read from Eq.(\ref{epb}), only the ${\bf n}=0$ term
in the summation diverges.
We therefore separate the effective potential into two parts,
\begin{equation}
V={V^{\rm IV}}+{V^{\rm FV}},
\label{def:V:topo}
\end{equation}
where
\widetext
\begin{eqnarray}
&&{V^{\rm IV}}={1\over 2\lambda_0}\sigma^2+
\lim_{{\Delta x}\rightarrow 0}
\mbox{\rm tr}\mbox{\bf 1}{1\over(2\pi )^{D/2}}
{1\over {\Delta x}^D}
\biggl[ (\sigma{\Delta x} )^{D/2} K_{D/2}(\sigma{\Delta x})
-2^{D/2-1}\Gamma(D/2)\biggr],
\label{exViv}
\\
&&{V^{\rm FV}}=\lim_{{\bf x}'\rightarrow {\bf x} }
\mbox{\rm tr}\mbox{\bf 1}\sum_{{\bf n} (\ne 0)}\alpha({\bf n})
{1\over(2\pi )^{D/2}}
{1\over {\Delta x}_{\bf n}^D}
\biggl[ (\sigma{\Delta x}_{\bf n})^{D/2} K_{D/2}(\sigma{\Delta x}_{\bf n})
-2^{D/2-1}\Gamma(D/2)\biggr]~.
\label{exVfv}
\end{eqnarray}
\narrowtext
Here we have set $\alpha({\bf n}=0)=1$,
and used that $\lim_{z\rightarrow0} z^{D/2}K_{D/2}(z)=2^{D/2-1}\Gamma(D/2)$.
In general, we need to regularize the divergence of ${V^{\rm IV}}$
by performing the renormalization procedure.
It is well known that the divergence can be
removed by the renormalization of the
coupling constant for $D<4$.
Employing the renormalization condition,
\begin{equation}
{\partial^2 {V^{\rm IV}}\over \partial \sigma^2}~\bigg\vert_{\sigma=\mu} =
{\mu ^{D-2}\over \lambda_r},
\end{equation}
we find that Eq.(\ref{exViv}) reads
\widetext
\begin{eqnarray}
&&{V^{\rm IV}_{\rm ren}}={1\over{2\lambda_r}}\sigma^2 \mu^{D-2}
\nonumber
\\
&&\hspace{0.5cm}
+\lim_{{\Delta x}\rightarrow0} {\mbox{\rm tr}\mbox{\bf 1}\over(2\pi)^{D/2}}
{1\over {\Delta x}^D}
\biggl[{1\over2}(\sigma{\Delta x})^2(\mu{\Delta x})^{D/2-1}
\Bigl(K_{D/2-1}(\mu{\Delta x})-\mu{\Delta x} K_{D/2-2}(\mu{\Delta x})\Bigr)
\nonumber
\\
&&\hspace{4.6cm}+(\sigma{\Delta x})^{D/2}K_{D/2}(\sigma{\Delta x})
-2^{D/2-1}\Gamma(D/2)
\biggr].
\label{Vivren}
\end{eqnarray}
\narrowtext
As we shall see in the next sections, ${V^{\rm IV}_{\rm ren}}$ reduces to
the well known form of the effective potential in the Minkowski
spacetime. Therefore the effect of the nontrivial configuration
of space on the effective potential is described by ${V^{\rm FV}}$.
Finally in this section, we explain the phase factor $\alpha({\bf n})$.
As is pointed out in Ref.\cite{ISHAM,Dowker,Avis}
there is no theoretical constraint on which boundary
condition one should take for quantum fields in compact flat spaces.
It is possible to consider fields with various boundary
conditions in compact spaces with non-trivial topology.
Thus we consider the fermion fields with periodic and
antiperiodic boundary conditions, and
study whether the finite size effect can be changed by the
boundary condition. For this purpose, it is convenient to
introduce the phase factor $\alpha({\bf n})$ by
\begin{equation}
\alpha({\bf n})=\alpha_1\alpha_2\cdots \alpha_{D-1},
\end{equation}
where $\alpha_i=(-1)^{n_i}$ for antiperiodic boundary condition
in the direction of $x_i$, and $\alpha_i=1$
for periodic boundary condition.
In the following sections we investigate the behavior of the effective
potential at $D=3$ and $D=4$ with the various boundary conditions.
\section{Application in $D=3$}
In this section
we apply the method explained in the previous section to
the case, $D=3$.
In the three dimensional torus spacetime, $R\otimes S^1\otimes S^1$,
it is possible to consider the three kinds of independent boundary
conditions.
To see the effect of the compact space and the boundary condition
of the field on the phase structure of the four-fermion theory,
we evaluate the effective potential and the gap equation
and show the phase structure for all three kinds of boundary conditions.
As is mentioned before, the same problem has been investigated
in Ref.\cite{Kimetal} for the three dimensional flat compact space
with nontrivial topology.
We can compare our results with theirs.
The three dimensional case is instructive because the
modified Bessel functions reduce to elementary functions,
\begin{eqnarray}
&&K_{3/2}(z)=\sqrt{\pi\over 2z} \biggl(1+{1\over z}\biggr)e^{-z},
\label{eqn:bes1}\\
&&K_{1/2}(z)=K_{-1/2}(z)=\sqrt{\pi\over 2z}e^{-z}.
\label{eqn:bes2}
\end{eqnarray}
Substituting the relations (\ref{eqn:bes1}) and (\ref{eqn:bes2})
into Eqs.(\ref{Vivren}) and (\ref{exVfv}),
the effective potential $V$ in three dimensions
becomes
\begin{equation}
V={V^{\rm IV}_{\rm ren}} + {V^{\rm FV}},
\label{eqn:Effpt3D}
\end{equation}
\begin{equation}
{{V^{\rm IV}_{\rm ren}}\over \mu^3}=
\frac{1}{2}\left(\frac{1}{\lambda_{r}}-\frac{\mbox{\rm tr}\mbox{\bf 1}}{2\pi}\right)
\left(\frac{\sigma}{\mu}\right)^{2}
+ \frac{\mbox{\rm tr}\mbox{\bf 1}}{12\pi}\left(\frac{\sigma}{\mu}\right)^{3},
\label{Vivc3}
\end{equation}
\begin{equation}
\frac{{V^{\rm FV}}}{\mu^{3}}=\frac{\mbox{\rm tr}\mbox{\bf 1}}{4\pi}\sum_{{\bf n}\neq0}
\frac{\alpha({\bf n})}{(\mu{\zeta_\bn})^{3}}
\left(\sigma{\zeta_\bn} e^{-\sigma{\zeta_\bn}}+e^{-\sigma{\zeta_\bn}}-1\right),
\label{eqn:Vfv3d}
\end{equation}
where
\begin{equation}
{\zeta_\bn} \equiv\lim_{{\bf x}'\rightarrow{\bf x}}{\Delta x}_{\bf n}
=\sqrt{(n_1L_1)^2+(n_2L_2)^2}.
\label{defzetan}
\end{equation}
Here we note that Eq.(\ref{eqn:Vfv3d}) disappears in the
Minkowski limit $(L_{1},L_{2})\rightarrow (\infty,\infty)$
and the effective potential (\ref{eqn:Effpt3D})
reduces to Eq.(\ref{Vivc3}) which is equal to the effective
potential in the Minkowski space-time.
The gap equation,
$\partial V /\partial\sigma\vert_{\sigma=m}=0$,
which determines the dynamical fermion mass $m$
in the compact flat space with nontrivial topology
reduces to
\begin{equation}
\frac{4\pi}{\lambda_{r}\mbox{\rm tr}\mbox{\bf 1}}-2+\frac{m}{\mu}-
\sum_{{\bf n}\neq0}\alpha({\bf n})\frac{e^{-m{\zeta_\bn}}}{\mu{\zeta_\bn}}=0.
\label{eqn:Gap3D1}
\end{equation}
The effective potential in the Minkowski spacetime (\ref{Vivc3})
has a broken phase in which the discrete
chiral symmetry is broken down, when the coupling
constant is larger than the critical value $\lambda_{cr}=2\pi/\mbox{\rm tr}\mbox{\bf 1}$.
For convenience, we introduce the dynamical fermion mass ${\m_0}$ in
the Minkowski space-time given by
$\partial{V^{\rm IV}_{\rm ren}}/ \partial\sigma\vert_{\sigma={\m_0}}=0$.
The dynamical fermion mass $m_{0}$ in the Minkowski
space-time has a relationship with
the coupling constant as
\begin{equation}
\frac{m_{0}}{\mu}=-\frac{4\pi}{\lambda_{r}\mbox{\rm tr}\mbox{\bf 1}}+2.
\label{eqn:m0vscpl}
\end{equation}
When the system is in the broken phase at the limit of Minkowski space
$(L_{1},L_{2})\rightarrow(\infty,\infty)$,
substituting Eq.(\ref{eqn:m0vscpl})
into Eq.(\ref{eqn:Gap3D1}),
we obtain the gap equation,
\begin{equation}
\frac{m}{m_{0}}-1-
\sum_{{\bf n}\neq0}\alpha({\bf n})\frac{e^{-m{\zeta_\bn}}}{m_{0}{\zeta_\bn}}=0.
\label{eqn:Gap3D2}
\end{equation}
It should be noted that the solution $m$ of the gap equation
coincides with the dynamical fermion mass $m=m_{0}$ at the
limit $(L_{1},L_{2})\rightarrow(\infty,\infty)$.
We expect that the phase transition will occur for
a sufficiently small $L_{i}$.
It is convenient to introduce a new variable $k_{i}$ instead
of $L_{i}$ to investigate the gap equation (\ref{eqn:Gap3D2}) for
small $L_{1}$ and $L_{2}$.
\begin{equation}
k_{i} \equiv \left(\frac{2\pi}{L_{i}}\right)^{2}.
\end{equation}
To investigate the phase structure of the four-fermion theory
in three dimensional flat compact space,
we calculate the effective potential
(\ref{eqn:Effpt3D}) and the gap equation (\ref{eqn:Gap3D2})
numerically, varying the variable $k_{i}$, for the various
boundary conditions below.
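For orientation, a minimal numerical sketch of this procedure (illustrative
only, not the code used for the figures) truncates the sum over ${\bf n}$,
which converges rapidly owing to the exponential damping, and then searches
for a zero of the left-hand side of Eq.(\ref{eqn:Gap3D2}):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def gap_lhs(m, L1, L2, bc=("A", "A"), m0=1.0, nmax=100):
    # m/m0 - 1 - sum_{n != 0} alpha(n) exp(-m zeta_n) / (m0 zeta_n),
    # with zeta_n = sqrt((n1 L1)^2 + (n2 L2)^2) and alpha_i = (-1)^{n_i}
    # for an antiperiodic ("A") direction, +1 for a periodic ("P") one.
    # Masses are measured in units of m0 and lengths in units of 1/m0.
    s = [-1.0 if b == "A" else 1.0 for b in bc]
    total = 0.0
    for n1 in range(-nmax, nmax + 1):
        for n2 in range(-nmax, nmax + 1):
            if n1 == 0 and n2 == 0:
                continue
            zeta = np.hypot(n1 * L1, n2 * L2)
            total += s[0] ** abs(n1) * s[1] ** abs(n2) \
                     * np.exp(-m * zeta) / (m0 * zeta)
    return m / m0 - 1.0 - total

def dynamical_mass(L1, L2, bc=("A", "A"), m0=1.0):
    # non-trivial solution of the gap equation, or 0 if only m = 0 survives
    try:
        return brentq(gap_lhs, 1e-4 * m0, 10.0 * m0, args=(L1, L2, bc, m0))
    except ValueError:
        return 0.0
\end{verbatim}
Scanning $L_1=L_2=L$ with the choice \verb|bc=("A","A")| should reproduce the
behavior found below for the AA-model: the solution decreases with decreasing
$L$ and vanishes at a finite critical size.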
\subsection{Antiperiodic-antiperiodic boundary condition}
First we take the antiperiodic boundary condition for both
compactified directions and call this case AA-model.
The phase factor is chosen as
$\alpha({\bf n}) = (-1)^{n_{1}}(-1)^{n_{2}}$ in this case.
In Fig.\ref{fig:Gap3dAA} the behavior of the gap equation
(\ref{eqn:Gap3D2}) is shown for the AA-model.
As is seen in Fig.\ref{fig:Gap3dAA}, we find that
the symmetry restoration occurs as $L_{1}$ and (or) $L_{2}$
become smaller and that the phase transition is second-order.
In the case of $L_{1}=L_{2}=L$,
the critical value of $L$ where the phase transition takes place is
$L_{cr}\approx 1.62/m_{0} $ ($k_{cr}/{m_{0}}^{2} \approx 15.1$).
In Fig.\ref{fig:EffptAA} the behavior of the effective
potential (\ref{eqn:Effpt3D}) for the AA-model
is plotted as a function of $\sigma/\mu$
for the case of $\lambda_{r}>\lambda_{cr}$ (we take
$\lambda_{r}=2 \lambda_{cr}$ as a typical case).
In plotting Figs.\ref{fig:Gap3dAA} and \ref{fig:EffptAA},
we numerically summed ${\bf n}$ in
Eqs.(\ref{eqn:Vfv3d}) and (\ref{eqn:Gap3D2}).
In Fig.\ref{fig:PhaseAALxLy} we show the phase
diagram for the AA-model in $(L_{1},L_{2})$ plane.
Here we note that in the limit
$L_{1} \rightarrow \infty$ (or $L_{2} \rightarrow \infty$), the
space-time topology $R\otimes S^1 \otimes S^1$,
considered here, should be understood as $R^{2}\otimes S^1$.
In this limit the field theory should have the same structure
as the finite temperature field theory for $D=3$.
In fact, the critical value of $L_{1}$ (or $L_{2}$) is
equal to the inverse critical temperature $\beta_{cr}= 2\ln 2
\approx 1.39$, as expected, which is shown by the dashed line
in the figure.
These results are consistent with those of Ref.\cite{Kimetal}
for the model with the A-A boundary condition.
\subsection{Periodic-antiperiodic boundary condition}
We consider the case where the periodic
and the antiperiodic boundary conditions are adopted
in the $x_{1}$ and $x_{2}$ directions, respectively.
We call this case PA-model, where
the phase is taken as $\alpha({\bf n})=(-1)^{n_{2}}$.
In Fig.\ref{fig:Gap3dPA} we show the behavior of the gap
equation (\ref{eqn:Gap3D2}) for the PA-model.
From Fig.\ref{fig:Gap3dPA}, we find that the symmetry restoration
occurs when $L_{2}$ becomes smaller with $L_{1}$ fixed
and that the symmetry restoration does not occur
when $L_{1}$ becomes smaller with $L_{2}$ fixed.
We also find that the phase transition, when it occurs, is second-order.
Especially in the case of $L_{1}=L_{2}=L$, the symmetry restoration
occurs, and the critical value $L_{cr}$ where the symmetry is restored
is $m_{0}L_{cr} \approx 1.14$ ($k_{cr}/m_{0}^{2} \approx 30.3$).
In Figs.\ref{fig:EffptPA}, \ref{fig:EffptPAKx10KyetcAP} and
\ref{fig:EffptPAKxetcKy10AP}
typical behaviors of the effective potential
for the PA-model are shown as a function of $\sigma/\mu$
for the case of $\lambda_{r}>\lambda_{cr}$
(we take $\lambda_{r}=2 \lambda_{cr}$).
Fig.\ref{fig:EffptPA} is the case of $L_{1}=L_{2}$.
Fig.\ref{fig:EffptPAKx10KyetcAP} is the case with $L_{1}$ fixed,
and Fig.\ref{fig:EffptPAKxetcKy10AP} is the same but with $L_{2}$ fixed.
In Fig.\ref{fig:EffptPAKx10KyetcAP}, where $L_1$ (the size
associated with the periodic boundary condition) is fixed,
we can see that the chiral symmetry is restored as $L_2$ (the size associated
with the anti-periodic boundary condition) becomes smaller.
In contrast, in Fig.\ref{fig:EffptPAKxetcKy10AP}, where $L_2$ is
fixed, the fermion mass becomes larger as $L_1$ becomes smaller.
The phase diagram for the
PA-model is shown in Fig.\ref{fig:PhasePALxLy} in the $(L_{1},L_{2})$ plane.
These results are consistent with those of Ref.\cite{Kimetal}.
\subsection{Periodic-periodic boundary condition}
Finally we take the periodic boundary condition for both compactified
directions and call this case PP-model.
The phase factor is chosen as $\alpha({\bf n})=1$ in this case.
In Fig.\ref{fig:Gap3dPP}, we show the behavior of the gap equation
(\ref{eqn:Gap3D2}) for the PP-model. From this figure,
we find that the symmetry restoration does not occur as $L_{1}$
and (or) $L_{2}$ becomes smaller. In Fig.\ref{fig:EffptPP}, we show
the effective potential (\ref{eqn:Effpt3D}) in the case of
$L_{1}=L_{2}=L$ and $\lambda_{r}>\lambda_{cr}$
(we take $\lambda_{r}=2\lambda_{cr}$) for the PP-model.
Especially, in the case of $L_{1}=L_{2}=L$,
we can analytically prove that the symmetry restoration does not
occur irrespective of the coupling constant $\lambda_{r}$.
To see this, we investigate the differential coefficient
of the effective potential (\ref{eqn:Effpt3D}) of the PP-model
at $\sigma \rightarrow 0+$. The differential coefficient of the
effective potential (\ref{eqn:Effpt3D}) is
\begin{equation}
\frac{1}{\mu^{2}}\frac{dV}{d\sigma}
=
\frac{\sigma}{\mu}\left(
\frac{1}{\lambda_{r}}-\frac{\mbox{\rm tr}\mbox{\bf 1}}{2\pi}+
\frac{\mbox{\rm tr}\mbox{\bf 1}}{4\pi}\frac{\sigma}{\mu}
-\frac{\mbox{\rm tr}\mbox{\bf 1}}{4\pi}\sum_{{\bf n}\neq0}
\frac{e^{-\sigma{\zeta_\bn}}}{\mu{\zeta_\bn}} \right).
\label{eqn:coeff}
\end{equation}
Taking the limit $\sigma \rightarrow 0+$, Eq.(\ref{eqn:coeff})
reduces to
\begin{equation}
\left.
\frac{1}{\mu^{2}}\frac{dV}{d\sigma}
\right|_{\sigma\rightarrow 0+}
=
-\frac{\mbox{\rm tr}\mbox{\bf 1} \sigma}{\mu\pi}\left.
\sum_{n_{1}=1}^{\infty}\sum_{n_{2}=1}^{\infty}
\frac{e^{-\sigma L\sqrt{n_{1}^{2}+n_{2}^{2}}}}{\mu L\sqrt{n_{1}^{2}+n_{2}^{2}}}
\right|_{\sigma\rightarrow 0+}.
\label{eqn:prof1}
\end{equation}
Using an inequality,
\begin{equation}
\frac{n_{1}+n_{2}}{\sqrt{2}} \leq \sqrt{n_{1}^{2}+n_{2}^{2}} < n_{1}+n_{2},
\mbox{for $n_{1}\geq 1$ and $n_{2}\geq 1$},
\end{equation}
we get the inequality
\begin{equation}
\frac{e^{-\sigma L(n_{1}+n_{2})}}{n_{1}+n_{2}} <
\frac{e^{-\sigma L\sqrt{n_{1}^{2}+n_{2}^{2}}}}{\sqrt{n_{1}^{2}+n_{2}^{2}}} \leq
\frac{\sqrt{2}e^{-{\sigma L}(n_{1}+n_{2})/{\sqrt{2}}}}{n_{1}+n_{2}}.
\label{eqn:ineq1}
\end{equation}
Summing up each term in Eq.(\ref{eqn:ineq1})
with respect to $n_{1}$ and $n_{2}$, we find,
\widetext
\begin{equation}
\frac{1}{e^{\sigma L}-1}+\log\left(1-e^{-\sigma L}\right) <
\sum_{n_{1}=1}^{\infty}\sum_{n_{2}=1}^{\infty}
\frac{e^{-\sigma L\sqrt{n_{1}^{2}+n_{2}^{2}}}}{\sqrt{n_{1}^{2}+n_{2}^{2}}} \leq
\frac{\sqrt{2}}{e^{{\sigma L}/{\sqrt{2}}}-1}
+\sqrt{2}
\log\left(1-e^{-{\sigma L}/{\sqrt{2}}}\right).
\label{eqn:ineq2}
\end{equation}
\narrowtext
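Both bounds in Eq.(\ref{eqn:ineq2}) follow from the elementary resummation
(with $x=e^{-\sigma L}$ and $x=e^{-\sigma L/\sqrt{2}}$, respectively)
\begin{equation}
\sum_{n_{1}=1}^{\infty}\sum_{n_{2}=1}^{\infty}\frac{x^{n_{1}+n_{2}}}{n_{1}+n_{2}}
=\sum_{k=2}^{\infty}\frac{(k-1)\,x^{k}}{k}
=\frac{x}{1-x}+\log\left(1-x\right),
\end{equation}
since the number of pairs $(n_{1},n_{2})$ with $n_{1}+n_{2}=k$ is $k-1$.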
According to Eq.(\ref{eqn:ineq2}), we obtain the inequality
\widetext
\begin{equation}
\left.\frac{1}{\mu^{2}}\frac{dV}{d\sigma}
\right|_{\sigma\rightarrow 0+}
<
\left. -\frac{\mbox{\rm tr}\mbox{\bf 1}\sigma}{\pi\mu^{2}L}\left(
\frac{1}{e^{\sigma L}-1}+\log\left(1-e^{-\sigma L}\right)
\right)\right|_{\sigma\rightarrow 0+}
= -\frac{\mbox{\rm tr}\mbox{\bf 1}}{\pi\mu^{2}L^{2}} < 0.
\end{equation}
\narrowtext
We find that the differential coefficient of the effective potential
has a negative value at $\sigma \rightarrow 0+$
irrespective of the coupling constant $\lambda_{r}$
in the case of $L_{1}=L_{2}=L$ for the PP-model.
Thus, we have shown that only a broken phase could exist and
the symmetry restoration does not occur at all.
Though this proof is limited to the special case $L_{1}=L_{2}=L$,
this result is expected to hold in the other cases $L_{1} \neq L_{2}$.
The results in this subsection are different from those of Ref.\cite{Kimetal}
for the PP-model.
Summarizing this section, we examined the phase structure
of the four-fermion theory in three-dimensional compactified space
by evaluating the effective
potential.
The phase structure is altered due to the compactified
space. Our results are consistent with those
of Ref.\cite{Kimetal} except for the periodic-periodic
boundary condition.
In the case of the PP-model, our results indicate that
only a broken phase could exist and the symmetry restoration
does not occur.
The behavior of the dynamical fermion mass $m$ is quite
different according to the imposed boundary condition.
As the size of the compactified direction becomes small,
the dynamical fermion mass becomes large when the
periodic boundary condition is adopted, whereas it becomes small
when the antiperiodic boundary condition is adopted.
Concretely, for the AA-model, where the antiperiodic boundary
condition is imposed in the two directions,
the dynamical fermion mass disappears, and the symmetry is
restored when the size of the compactified space becomes small.
The order of the symmetry restoration is second.
In the PP-model, where the periodic boundary
condition is imposed, the symmetry is not restored
as is seen before.
In the PA-model, where one direction is periodic
boundary condition and the other is antiperiodic one,
two effects compete with each other.
In the special case $L_{1}=L_{2}=L$
the effect of antiperiodic boundary condition
triumphs over that of periodic boundary condition
to restore the symmetry at small $L$.
The order of this symmetry restoration is second.
\section{Application in $D=4$}
In this section we consider the case of $D=4$, i.e.,
the spacetime which has a 3-dimensional spatial sector
with nontrivial topology. Let us start by evaluating ${V^{\rm IV}_{\rm ren}}$.
The special situation in the $D=4$ case is that
renormalization cannot make the effective potential finite
in our theory.
Therefore it is not necessary to consider the renormalization
for $D=4$. Nevertheless, we introduce a ``renormalized'' coupling
constant defined by Eq.(\ref{Vivren}) for convenience, in the same way as
in the case $D=3$.
Then we must regularize ${V^{\rm IV}_{\rm ren}}$ by some method,
e.g., by introducing a cut-off parameter.
Here we examine two methods to regularize ${V^{\rm IV}_{\rm ren}}$.
The first one is to keep ${\Delta x}$ finite and to set $D=4$.
The straightforward calculations lead to
\widetext
\begin{eqnarray}
{{V^{\rm IV}_{\rm ren}}\over \mu^4}={1\over 2\lambda_{r}}\biggl({\sigma\over\mu}\biggr)^2
+ && {\mbox{\rm tr}\mbox{\bf 1}\over4(4\pi)^2}
\biggl[
\biggl({\sigma\over\mu}\biggr)^2
\biggl(-2+12\Bigl(\gamma-\ln{2\over\mu{\Delta x}}\Bigr)\biggr)
\nonumber \\
&& +\biggl({\sigma\over\mu}\biggr)^4
\biggl({3\over2}-2\Bigl(\gamma-\ln{2\over\mu{\Delta x}}+\ln{\sigma\over\mu}
\Bigr)\biggr)
\biggr] +O({\Delta x}),
\label{Vivc}
\end{eqnarray}
from Eq.(\ref{Vivren}).
On the other hand one can adopt the dimensional regularization
as the second method, in which we
set $D=4-2\epsilon$. By expanding the right hand side of
Eq.(\ref{Vivren}) in terms of $1/\epsilon$, we find
\begin{eqnarray}
{{V^{\rm IV}_{\rm ren}}\over \mu^D}={1\over 2\lambda_r}\biggl({\sigma\over\mu}\biggr)^2
+ &&{\mbox{\rm tr}\mbox{\bf 1}\over4(4\pi)^2}
\biggl[
\biggl({\sigma\over\mu}\biggr)^2
\biggl(-2+6\Bigl(\gamma-\ln 4\pi\Bigr) -{6\over\epsilon}\biggr)
\nonumber
\\
&&+\biggl({\sigma\over\mu}\biggr)^4
\biggl({3\over2}-\Bigl(\gamma-\ln4\pi + 2\ln {\sigma\over\mu}\Bigr)
+{1\over\epsilon}\biggr)
\biggr]+O(\epsilon).
\label{Vivcc}
\end{eqnarray}
\narrowtext
The above two methods are related by
\begin{equation}
{1\over\epsilon}=-\gamma+\ln\biggl({1\over (\mu{\Delta x})^2\pi}\biggr).
\end{equation}
According to Refs.\cite{Inagaki,IMO}, the momentum
cut-off parameter $\Lambda$ introduced in those papers
is related by
\begin{equation}
\ln{\Lambda^2\over \mu^2}=\ln\biggl({2\over \mu{\Delta x}}\biggr)^2
+1-2\gamma.
\end{equation}
This effective potential has a broken phase, when the coupling
constant is larger than the critical value $\lambda_{\rm cr}$,
which is given by
\begin{equation}
{1\over\lambda_{\rm cr}}=
{\mbox{\rm tr}\mbox{\bf 1}\over(4\pi)^2}\biggl(3\ln{\Lambda^2\over\mu^2}-2\biggr).
\end{equation}
For convenience,
we introduce the dynamical fermion mass ${\m_0}$ in the Minkowski space by
\begin{equation}
{1\over\lambda_{\rm r}}-{1\over\lambda_{\rm cr}}
+{\mbox{\rm tr}\mbox{\bf 1}\over(4\pi)^2}\biggl(\ln{\Lambda^2\over{\m_0}^2}\biggr)
{{\m_0}^2\over\mu^2}=0,
\end{equation}
as in the case in $D=3$.
In terms of ${\m_0}$, Eq.(\ref{Vivc}) or Eq.(\ref{Vivcc})
can be written as
\begin{equation}
{{V^{\rm IV}_{\rm ren}}\over{\m_0}^4}={\mbox{\rm tr}\mbox{\bf 1}\over4(4\pi)^2}\biggl[
-2\biggl(\ln{\Lambda^2\over{\m_0}^2}\biggr)~{{\sigma^2\over{\m_0}^2}}
+ \biggl(\ln {\Lambda^2\over{\m_0}^2}
+{1\over2}-\ln {\sigma^2\over{\m_0}^2}\biggr)
{\sigma^4\over{\m_0}^4}
\biggr],
\end{equation}
where we used the momentum cut-off parameter.
On the other hand, Eq.(\ref{exVfv}), which represents the effect due to the
compactified space, reduces to
\begin{equation}
{{V^{\rm FV}}\over{\m_0}^4}=
{\mbox{\rm tr}\mbox{\bf 1}\over(2\pi)^2}\sum_{{\bf n}\neq0} {\alpha({\bf n})\over ({\m_0}{\zeta_\bn})^4}
\biggl[ (\sigma{\zeta_\bn})^2 K_2(\sigma{\zeta_\bn})-2\biggr],
\end{equation}
where ${\zeta_\bn}=\sqrt{(n_1L_1)^2+(n_2L_2)^2+(n_3L_3)^2}$.
Then we get the gap equation
\begin{equation}
-\ln{\Lambda^2\over{\m_0}^2}
+{{m}^2\over{\m_0}^2}\biggl(\ln{\Lambda^2\over{\m_0}^2}-
\ln{{m}^2\over{\m_0}^2}\biggr)
-4\sum_{{\bf n}\neq0}{\alpha({\bf n})\over({\m_0}{\zeta_\bn})}
{{m}\over{\m_0}}K_{1}({m}{\zeta_\bn})=0~.
\end{equation}
We can easily solve the above equation numerically.
The advantage of our method is that the equation is
given by a simple sum of modified Bessel functions, which
damp exponentially at large ${\bf n}$.
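As in three dimensions, a short numerical sketch (again illustrative only)
evaluates the left-hand side of this gap equation by truncating the sum over
${\bf n}$ and using the modified Bessel function $K_1$ from a standard
library:
\begin{verbatim}
import numpy as np
from scipy.special import kv  # modified Bessel function K_nu

def gap_lhs_4d(m, L, bc, m0=1.0, Lam=10.0, nmax=30):
    # L  = (L1, L2, L3), with np.inf for an uncompactified direction
    #      (lengths in units of 1/m0, masses in units of m0);
    # bc = ("A" or "P", ...) selects alpha_i = (-1)^{n_i} or +1 per direction.
    log_cut = np.log(Lam ** 2 / m0 ** 2)
    res = -log_cut + (m / m0) ** 2 * (log_cut - np.log(m ** 2 / m0 ** 2))
    ranges = [range(-nmax, nmax + 1) if np.isfinite(Li) else [0] for Li in L]
    signs = [-1.0 if b == "A" else 1.0 for b in bc]
    for n1 in ranges[0]:
        for n2 in ranges[1]:
            for n3 in ranges[2]:
                if n1 == n2 == n3 == 0:
                    continue
                n = (n1, n2, n3)
                zeta = np.sqrt(sum((ni * Li) ** 2
                               for ni, Li in zip(n, L) if ni != 0))
                alpha = np.prod([si ** abs(ni) for si, ni in zip(signs, n)])
                res -= 4.0 * alpha * (m / m0) * kv(1, m * zeta) / (m0 * zeta)
    return res
\end{verbatim}
A root of \verb|gap_lhs_4d| in $m$ (bracketed as in the three-dimensional
sketch) corresponds to the dynamical mass plotted below; the absence of a
root signals the symmetric phase.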
In the $D=4$ case, we have many varieties of models
according to the sizes of the torus and
the boundary conditions of the fermion fields in the three different
directions of the torus.
For convenience, we separate this section into the following
three subsections.
\subsection{Antiperiodic boundary conditions}
Let us first consider the models associated with the antiperiodic
boundary condition for fermion fields, where the phase parameter
is given by $\alpha({\bf n})=(-1)^{n_1}(-1)^{n_2}(-1)^{n_3}$.
In Fig.\ref{fig:figgapA} we show the behavior of the solutions of the gap equation with
the antiperiodic boundary condition on three typical spaces with
non-trivial topology:
the torus with three equal sides ($L_1=L_2=L_3=L$), which we call
the AAA-model;
the torus with one infinite side, i.e., ($L_1=L_2=L$,
$L_3=\infty$),
the AAI-model;
and the space with only one side compactified ($L_1=L$,
$L_2=L_3=\infty$), which we call the AII-model.
The three lines in the figure show
the solutions of the gap equation on the three spaces as functions
of $(2\pi/L m_0)^2$.
For a direction $x_i$ with an infinite side, the sum over $n_i$
reduces to its $n_i=0$ term.
Here we have taken the cut-off parameter $\Lambda/{\m_0}=10$.\footnote{
In what follows, the cut-off parameter $\Lambda/{\m_0}=10$ is adopted
unless otherwise noted.}
In Fig.\ref{fig:figgapA} we have considered models which have
a broken phase for large $L$.
The figure shows that the symmetric phase appears as $L$ becomes
smaller than some critical value. Thus, as expected
from the $D=3$ results, compactifying the space
affects the phase structure, and the
antiperiodic boundary condition for fermion fields always
tends to restore the symmetry.
We show the critical values $L_{\rm cr}$, i.e.,
the values of $L$ at which the symmetry is restored,
as a function of the cut-off parameter in Fig.\ref{fig:figcrtA}. We can read
from the figure that a smaller value of $L$ is needed
to restore the symmetry in the AII-model than in the AAA-model.
\subsection{Periodic boundary conditions}
Next we consider the fields with the periodic boundary condition,
where the phase parameter is taken as $\alpha({\bf n})=(+1)^{n_1}
(+1)^{n_2}(+1)^{n_3}$. In the same way as above, we consider
three typical kinds of spaces,
($L_1=L_2=L_3=L$), ($L_1=L_2=L$, $L_3=\infty$), and
($L_1=L$, $L_2=L_3=\infty$), adopting the periodic boundary
condition for the compactified directions.
We call them the PPP-, PPI-, and PII-models, respectively.
The solutions of the gap equation are shown in Fig.\ref{fig:figgapP}.
In contrast to the antiperiodic case,
the fermion mass in the broken phase becomes larger as $L$ becomes
smaller, and the effect becomes more significant as the scale of
compactification $L$ decreases.
This behavior is the same as in $D=3$.
Fig.\ref{fig:figgapP} shows the case in which the coupling constant is larger
than the critical value in Minkowski spacetime, i.e., $\lambda_{\rm r}
>\lambda_{\rm cr}$, so that the phase is broken in the limit
$L\rightarrow\infty$.
The phase structure of these fields with the
periodic boundary condition has the interesting feature that
the symmetry can be broken by the compactified
space even when the coupling constant is smaller than
the critical value, so that the phase is symmetric in the limit
$L\rightarrow\infty$.
In order to show this, let us introduce a ``mass'' ${m_1}$ instead of
the coupling constant by
\begin{equation}
\biggl\vert{1\over\lambda_{\rm r}}-{1\over\lambda_{\rm cr}}\biggr\vert
+{\mbox{\rm tr}\mbox{\bf 1}\over(4\pi)^2}\biggl(\ln{\Lambda^2\over{m_1}^2}\biggr)
{{m_1}^2\over\mu^2}=0.
\end{equation}
Then the gap equation reduces to
\begin{equation}
\ln{\Lambda^2\over{m_1}^2}
+{{m}^2\over{m_1}^2}\biggl(\ln{\Lambda^2\over{m_1}^2}-
\ln{{m}^2\over{m_1}^2}\biggr)
-4\sum_{{\bf n}\neq0}{\alpha({\bf n})\over({m_1}{\zeta_\bn})}
{{m}\over{m_1}}K_{1}({m}{\zeta_\bn})=0~.
\end{equation}
We show the solution of the gap equation
in Fig.\ref{fig:figgapPII}.
Here the cut-off parameter is chosen as $\Lambda/{m}_1=10$.
The fermion mass takes a non-zero value and the broken
phase appears as $L$ becomes smaller in the compactified spaces.
Thus the compactified space drives the phase to become broken
when the periodic boundary condition is imposed.
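In terms of the numerical sketch given after the gap equation above, this situation corresponds simply to flipping the sign of the leading logarithm (the coupling is now below its critical value, so the mass scale is $m_1$) and taking trivial phases; for instance, assuming that sketch has been loaded,
\begin{verbatim}
# Hypothetical reuse of the earlier sketch for periodic boundary
# conditions in all three directions, in units of m1.
alpha_P = lambda n: 1.0
for L in (5.0, 3.0, 2.0, 1.0):
    print(L, dynamical_mass(L, alpha_P, log_sign=+1.0))
\end{verbatim}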
\subsection{Antiperiodic and periodic boundary conditions}
The above investigation suggests that the dynamical phase of the fields
is significantly affected by the compactification of the spatial sector.
The effect works in different ways according to the
boundary conditions of the fermion fields.
In this subsection, we consider models in which different
boundary conditions are adopted for different directions of the compact
space, in order to examine these features in more detail.
First we consider the model in which the periodic boundary condition is
imposed in the $x_1$-direction with period $L_1={L_{\rm P}}$,
the antiperiodic boundary condition in the $x_2$-direction
with period $L_2={L_{\rm A}}$, and the remaining side is infinite, i.e., $L_3=\infty$.
We call this the PAI-model.
Fig.\ref{fig:phasePAImodel} shows the phase diagram of the PAI-model,
which has a broken phase in the Minkowski limit,
in the (${L_{\rm P}}-{L_{\rm A}}$) plane.
The critical value of ${L_{\rm A}}$ at large ${L_{\rm P}}$ is $m_0L_A\simeq1.20$.
We find behavior similar to that in section 3-B.
We next consider the model in which
the periodic boundary condition is imposed in the $x_1$-direction
with period $L_1={L_{\rm P}}$ and the antiperiodic boundary condition
in the $x_2$- and $x_3$-directions with period $L_2=L_3={L_{\rm A}}$;
we call this the PAA-model.
Fig.\ref{fig:phasePAAmodel} is the phase diagram of the PAA-model
in the (${L_{\rm P}}-{L_{\rm A}}$) plane.
The critical value of ${L_{\rm A}}$ at large ${L_{\rm P}}$ is $m_0L_A\simeq1.38$.
Finally we consider the PPA-model, i.e.,
the periodic boundary condition is imposed in the $x_1$- and $x_2$-directions
with period $L_1=L_2={L_{\rm P}}$ and the antiperiodic boundary condition
in the $x_3$-direction with period $L_3={L_{\rm A}}$.
Fig.\ref{fig:phasePPAmodel} is the phase diagram of the PPA-model.
The critical value of ${L_{\rm A}}$ at large ${L_{\rm P}}$ is the same as that of the PAI-model.
All these models show behavior similar to that in subsection 3-B.
To end this section, we summarize the results.
We have investigated the nature of the
effective potential in the compactified space in four dimensions.
The compactification of the spatial sector
changes the phase structure of the four-fermion theory.
The consequences for the effective potential appear in
different ways according to the boundary conditions of the
fermion fields. The antiperiodic boundary condition
tends to restore the symmetry and the periodic boundary condition
tends to break it, when the effect of the compact space
becomes large. Thus we can construct models in which
the symmetry is either broken or restored in the course of the
expansion of the universe.
\section{Summary and Discussions}
We have investigated the four-fermion theory in compact
flat space with non-trivial topology.
By using the effective potential and the gap equation
in the leading order of the $1/N$ expansion we find
the phase structure of the theory in three and four
spacetime dimensions.
In three dimensions three classes of models are considered,
according to the variety of the boundary conditions
for the fermion fields, and the phase structure
of the theory is examined for each of them.
When the antiperiodic boundary condition is taken,
the broken chiral symmetry tends to be restored for a sufficiently
small $L$. The phase transition is of the second order.
When the periodic boundary condition is taken, the chiral
symmetry tends to be broken down for a small $L$.
In four dimensions we also see that the same effects appear
in the compact flat spaces.
Therefore a drastic change of the phase structure
is induced by the compact space even in the absence of curvature.
In the torus space with the antiperiodic boundary condition,
the finite-size effect decreases the dynamical fermion mass and
the chiral symmetry is restored for a universe smaller
than some critical size ($L < L_{cr}$).
On the other hand, in the torus space with the
periodic boundary condition for fermion fields,
the finite-size effect has the
opposite influence on the phase structure.
The dynamical fermion mass increases as the size
$L$ decreases, and the chiral symmetry may be broken
down for $L < L_{cr}$, even when the phase is
symmetric at $L\rightarrow\infty$.
In some cases only the broken phase is observed
for any finite $L$ ($L_{cr}\rightarrow \infty$),
even if the coupling constant $\lambda_{r}$ of the
four-fermion interactions is sufficiently small.
According to the behavior of the effective potential in section 3,
the value of the vacuum energy in the true vacuum becomes lower
when the space size (volume) of the compactified direction becomes small
in the model associated with the periodic boundary condition.
On the contrary, for the model associated with
the antiperiodic boundary condition,
the value of the vacuum energy in the true vacuum
is raised when the space size (volume) becomes small.
Therefore we find that the effect of the periodic
boundary condition acts attractively (negative pressure)
and that of the antiperiodic boundary
condition acts repulsively (positive pressure).
This resembles the well-known
Casimir effect \cite{CS} found in QED.
The Casimir effect gives rise to the vacuum pressure
due to the effect of the finite volume.
Using the momentum-space representation we can understand
these effects of the boundary conditions in the following way.
In the compactified space the momentum is discretized.
Fermion fields with an antiperiodic boundary condition
cannot carry a momentum smaller than $\pi/L$ ($|p| \geq \pi/L$).
Thus the possible momenta of the internal fermion fields
become larger when $L$ becomes small.
Since the low-momentum fermion modes
play an essential role in breaking the chiral
symmetry, the vacuum expectation value of the
composite operator $\langle\bar{\psi}\psi\rangle$
disappears for a sufficiently small $L$ and the broken
symmetry is restored.
Contrary to this,
fermion fields with a periodic boundary condition
can take a vanishing momentum even if the space is compact.
Hence a finite $L$ has no effect toward restoring the symmetry.
In the compactified space
the fermion field $\psi(x)$ can interact with
the field $\psi(x+nL)$ along a compactified direction.
Thus the finite-size effect seems to make the interaction stronger.
Summing up all the correlations
$\langle\bar{\psi}(x)\psi(x+nL)\rangle$,
the vacuum expectation value of the composite field
becomes larger as $L$ decreases.
We can also understand this as a dimensional reduction.
Compactifying one direction to the size $L$ in a $D$-dimensional
space, the space looks $(D-1)$-dimensional for particles
with Compton wavelength much larger than the size $L$.
In the lower-dimensional space the influence of
the low-momentum fermions is enhanced. Then
the finite-size effect
breaks the chiral symmetry for the model with the
periodic boundary condition.
From the cosmological point of view,
some mechanism, e.g., inflation, is needed to explain the hot and
large universe.
Even in the torus universe, inflation seems to be needed
to solve the horizon problem.
As discussed in Ref.\cite{Branden},
some special idea is needed to realize inflation in the
context of the dynamical symmetry breaking scenario.
An inflation model induced
by a composite fermion field in this
context has been
investigated in a class of supersymmetric particle models \cite{preon}.
Further investigation will be needed to test the
symmetry breaking in the early universe.
\subsection*{ACKNOWLEDGMENTS}
We would like to thank Professor T.~Muta for
useful discussions and comments.
We also thank M.~Siino for helpful comments.
\section[]{Introduction}
\label{sec:I}
The superspace of three-geometries on a fixed manifold \cite{Whe68}
plays an important role in several formulations of quantum gravity.
In Dirac quantization \cite{DeW67}, wave functions on superspace
represent states. In generalized quantum frameworks \cite{Harup},
sets of wave functions on superspace define initial and final
conditions for quantum cosmology. The geometry of superspace is
therefore of interest and has received considerable
attention.\footnote{For some representative earlier articles see
\cite{DeW67,Fis70,DeW70a}. For more recent articles that also review
the current situation see \cite{FH90,Giu95}.} The notion of distance
that defines this geometry is induced from the DeWitt supermetric on
the larger space of three-{\it metrics} \cite{DeW67,DeW70a}. While the
properties of the supermetric on the space of metrics
are explicit, the properties of the
induced metric\footnote{The induced metric on the superspace of three
metrics might also be called the DeWitt metric since it was first
explored by DeWitt \cite{DeW70a}. However, to avoid confusion we
reserve the term ``DeWitt metric'' for the metric on the
space of three-metrics.} on the space of three-{\it geometries} are
only partially understood \cite{FH90,Giu95}. For example, the
signature of the metric on superspace, which is of special interest
for defining spacelike surfaces in superspace, is known only in
certain regions of this infinite dimensional space. In this paper we
explore the signature of the metric on simplicial approximations to
superspace generated by the methods of the Regge calculus
\cite{Reg61}.
Tetrahedra (three-simplices) can be joined together to make a
three-dimensional, piecewise linear manifold. A metric on this
manifold may be specified by assigning a flat metric to the interior
of the simplices and values to their $n_1$ squared edge-lengths. Not
every value of the squared edge-lengths is consistent with a
Riemannian metric (signature +++) on the simplicial manifold. Rather,
the squared edge-lengths must be positive, satisfy the triangle
inequalities, and the analogous inequalities for tetrahedra. The
region of an ${\bf R}^{n_1}$ whose axes are the squared edge-lengths
$t^i$, $i=1, \cdots, n_1$, where these inequalities are satisfied
is a space of simplicial configurations we call {\em simplicial
configuration space}.\footnote{This is the
``truncated'' superspace of \cite{MTW}.}
The DeWitt supermetric induces a metric on
simplicial configuration space. Lund and Regge \cite{LRup} have given
a simple expression for this metric
and its properties have been explored by Piran and Williams
\cite{PW86} and Friedman and Jack \cite{FJ86}. In this paper we
explore the signature of the Lund-Regge metric for several
simplicial manifolds by a combination of analytical and numerical
techniques. In contrast to the continuum problem, we are able to
explore the signature over the whole of the finite dimensional
simplicial configuration spaces.
In Section II we review the construction of the metric on the
superspace of continuum three-geometries and summarize the known
information on its signature. In Section III we show how the
Lund-Regge simplicial metric is induced from the continuum metric
and analytically derive a
number of results limiting its signature. Section IV explores the
signature numerically for a number of elementary, closed, simplicial
manifolds. We study first the surface of a four-simplex. We find that,
throughout its 10-dimensional configuration space, among
a basis of orthogonal vectors there is one timelike
direction and nine spacelike ones. We next study the Lund-Regge metric of a
three-torus at various lattice resolutions (ranging from a
189-dimensional to a 1764-dimensional simplicial configuration space)
in the neighborhood of the single point representing a flat metric.
We find that the Lund-Regge metric can be degenerate, change signature,
and have more than one physical timelike direction.
We conclude with a comparison with known continuum results.
\section[]{Continuum Superspace}
\label{sec:II}
In this section we shall briefly review some of the known properties of
the metric on the superspace of continuum geometries on a fixed
three-manifold $M$. We do this to highlight the main features that we
must address when analyzing the corresponding metric on the superspace
of simplicial geometries. A more detailed account of the continuum situation
can be readily found \cite{Whe68,DeW67,Fis70,DeW70a,FH90,Giu95}.
Geometries on $M$ can be represented by three-metrics $h_{ab}(x)$,
although, of course, different metrics
describe the same geometry when related by a diffeomorphism.
We denote the space of
three-metrics on $M$ by ${\cal M}(M)$. A point in ${\cal M}$ is a
particular metric $h_{ab}(x)$ and we may consider the tangent space of
vectors at a point. Infinitesimal displacements $\delta h_{ab}(x)$ from
one three-metric to another are particular examples of vectors. We
denote such vectors generally by $k_{ab}(x)$, $k^\prime_{ab}(x)$, etc. A
natural class of metrics on ${\cal M}(M)$ emerges from the structure of
the constraints of general relativity. Explicitly they are given by
\begin{equation}
(k^\prime,k) = \int\nolimits_M d^3 x N(x) \bar G^{abcd} (x) k^\prime_{ab}(x)
k_{cd}(x)
\label{twoone}
\end{equation}
where $\bar G^{abcd}(x)$, called the inverse DeWitt supermetric, is
given by
\begin{equation}
\bar G^{abcd}(x) =
\frac{1}{2} h^{\frac{1}{2}}(x) \left[h^{ac} (x) h^{bd} (x) +
h^{ad} (x) h^{bc} (x) - 2 h^{ab} (x) h^{cd}(x)\right]
\label{twotwo}
\end{equation}
and $N(x)$ is an essentially arbitrary but non-vanishing function called
the lapse. Different metrics result from different choices of $N(x)$. In
the following we shall confine ourselves to the simplest choice,
$N(x)=1$.
The DeWitt supermetric (\ref{twotwo}) at a point $x$ defines a metric on the
six-dimensional space of three-metric components at $x$. This metric
has signature $(-,+,+,+,+,+)$ \cite{DeW67}. The signature of the metric
(\ref{twoone}) on ${\cal M}$ therefore has an infinite number of
negative signs and an infinite number of positive signs --- roughly one
negative sign and five positive signs for each point in $M$.
The space of interest, however, is not the space of
three-metrics ${\cal M}(M)$ but rather the superspace of
three-geometries Riem$(M)$ whose ``points'' consist of classes of
diffeomorphically equivalent metrics, $h_{ab}(x)$. A metric on Riem$(M)$
can be induced from the metric on ${\cal M}(M)$, (\ref{twoone}), by
choosing a particular perturbation in the metric $\delta h_{ab}(x)$ to
represent the infinitesimal displacement between two nearby
three-geometries. However, a $\delta h_{ab}$ is not fixed uniquely by
the pair of nearby geometries. Rather, as is well known, there is an
arbitrariness in $\delta h_{ab}(x)$ corresponding to the arbitrariness
in how the points in the two geometries are identified. That
arbitrariness means that, for any vector $\xi^a(x)$,
the ``gauge-transformed'' perturbation
\begin{equation}
\delta h^\prime_{ab}(x) = \delta h_{ab}(x) + D_{(a}\xi_{b)}(x)\ ,
\label{twothree}
\end{equation}
represents the same displacement in superspace as $\delta h_{ab}(x)$
does, where $D_a$ is the derivative in $M$.
The metric (\ref{twoone}) on ${\cal M}(M)$ is not invariant under gauge
transformations of the form (\ref{twothree}) even with $N=1$. Thus we
may distinguish ``{\it vertical}'' directions in ${\cal M}(M)$ which are pure
gauge
\begin{equation}
k^{\rm vertical}_{ab} (x) = D_{(a}\xi_{b)}(x)
\label{twofour}
\end{equation}
and ``{\it horizontal}'' directions which are orthogonal to {\it all} of these
in the metric (\ref{twoone}).
Since the metric (\ref{twoone}) is not invariant under gauge
transformations, there are different notions of distance between points
in superspace depending on what $\delta h_{ab}(x)$ is used to represent
displacements between them. The conventional choice
\cite{DeW70a} for
defining a geometry on superspace has been to choose the {\it minimum}
of such distances between points. That is the same as saying that
distance is measured in ``horizontal'' directions in superspace.
Equivalently, one could say that a gauge for representing displacements
has been fixed. It is the gauge specified by the three conditions
\begin{equation}
D^b \left(k_{ab} - h_{ab} k^c_c\right) = 0\ .
\label{twofive}
\end{equation}
The signature of the metric defined by the above construction is an
obvious first question concerning the geometry of superspace. The
infinite dimensionality of superspace, however, makes this a non-trivial
question to answer. The known results have been lucidly explained by
Friedman and Higuchi \cite{FH90} and Giulini \cite{Giu95} and we briefly
summarize some of them here:
\begin{itemize}
\item At any point in superspace there is always at least one negative
direction represented by constant conformal displacements of the form
\begin{equation}
k_{ab} = \delta\Omega^2 h_{ab}(x)\ .
\label{twosix}
\end{equation}
Evidently (\ref{twosix}) satisfies (\ref{twofive}) so that it is
horizontal, and explicit computation from (\ref{twoone}) shows $(k,
k) \leq 0$;
\item If $M$ is the sphere $S^3$, then for a
neighborhood of the round metric on $S^3$, the signature has one
negative sign corresponding to (\ref{twosix}) and all other orthogonal
directions are positive;
\item Every $M$ admits geometries with negative Ricci curvature
(all eigenvalues strictly negative). In the open region of
superspace defined by negative Ricci curvature geometries the signature
has an {\it infinite} number of negative signs and an {\it infinite}
number of positive signs. On the sphere, these results already
show that there must be points in superspace where the metric is
degenerate.
\end{itemize}
The above results are limited, covering only a small part of the
totality of superspace. In the following we shall show that
more complete results can be obtained in simplicial configuration space.
\section[]{The Lund-Regge Metric}
\label{sec:III}
\subsection{Definition}
\label{sec:A}
In this Section we derive the form of the Lund-Regge metric on
simplicial configuration space together with some analytic results on its
signature. We consider a fixed closed simplicial three-manifold $M$
consisting of $n_3$ tetrahedra (three-simplices) joined together so that
each neighborhood of a point in $M$ is homeomorphic to a region of ${\bf
R}^3$. The resulting collections of $n_k$ $k$-simplices, (vertices,
edges, triangles, and tetrahedra for $k=0,1,2,3$, respectively) we
denote by $\Sigma_k$. A simplicial geometry is fixed by an assignment
of values to the squared edge-lengths of $M,\ t^m, m=1, \cdots, n_1$ and
a flat Riemannian geometry to the interior of each tetrahedron consistent
with those values. The assignment of squared edge-lengths is not
arbitrary. The squared edge-lengths are positive and constrained by the
triangle inequalities and their analogs for the tetrahedra. Specifically
if $V^2_k(\sigma)$ is the squared measure (length, area, volume) of
$k$-simplex $\sigma$ expressed as a function of the $t^m$, we must have
\begin{equation}
V^2_k(\sigma)\geq 0\ , \quad k=1,2,3
\label{threeone}
\end{equation}
for all $\sigma\in \Sigma_k$. The space of three-geometries on $M$ is
therefore the subset of the space of $n_1$ squared edge-lengths $t^m$
in which (\ref{threeone}) is satisfied. We call this {\it simplicial
configuration space} and denote it by ${\cal T}(M)$. A point in ${\cal
T}(M)$ is a geometry on $M$; the $\{t^m\}$ are coordinates locating
points in ${\cal T}(M)$.
Distinct points in ${\cal T}(M)$ correspond to different assignments
of edge-lengths to the simplicial manifold $M$. In general distinct
points correspond to distinct three-geometries and, in this respect,
${\cal T}(M)$ is like a superspace of three-geometries. However, this
is not always the case. Displacements of the vertices of a flat
geometry in a flat embedding space result in a new assignment of the
edge-lengths that corresponds to the same flat geometry. These
variations in edge-lengths that preserve geometry are the simplicial
analogs of diffeomorphisms \cite{Har85,Mor92}. Further, for large
triangulations where the local geometry is near to flat we expect
there to be approximate simplicial diffeomorphisms --- small changes
in the edge-lengths which approximately preserve the geometry
\cite{Har85,LNN86,RW84}. Thus, the continuum limit of ${\cal T}(M)$
is not the superspace of three-geometries but the space of
three-metrics. It is for these reasons that we have used the term
simplicial configuration space rather than simplicial superspace.
We now define a metric on ${\cal T}(M)$ that gives the distance
between points separated by infinitesimal displacements $\delta t^m$
according to
\begin{equation}
\delta S^2 = G_{mn} (t^\ell) \delta t^m \delta t^n\ .
\label{threetwo}
\end{equation}
Such a metric can be induced from the DeWitt metric on the space
${\cal M}(M)$ of continuum three-metrics on $M$ in the following way:
Every simplicial geometry can be represented in ${\cal M}(M)$ by a
metric which is piecewise flat in the tetrahedra and, indeed, there are
many different metrics representing the same geometry. Every
displacement $\delta t^m$ between two nearby three-geometries can be
represented by a perturbation $\delta h_{ab}(x)$ of the metric in ${\cal
M}(M)$. The DeWitt metric (\ref{twoone}) which gives the notion of
distance between nearby metrics in ${\cal M}(M)$ can therefore be used
to induce a notion of distance in ${\cal T}(M)$ through the relation
\begin{equation}
G_{mn} (t^\ell) \delta t^m \delta t^n = \int\nolimits_M d^3 x\, N(x)
\,\bar G^{abcd}(x) \delta h_{ab}(x) \delta h_{cd}(x)\ .
\label{threethree}
\end{equation}
On the right hand side $\bar G^{abcd}(x)$ is (\ref{twotwo}) evaluated at
a piecewise flat metric representing the simplicial geometry and $\delta
h_{ab}(x)$ is a perturbation in the metric representing the change in
that geometry corresponding to the displacement $\delta t^m$.
However, as the discussion of Section II should make clear, many
different metrics on ${\cal T}(M)$ can be induced by the
identification (\ref{threethree}). First, there is the choice of
$N(x)$. We choose $N(x)=1$. Second, since the right hand side of
(\ref{threethree}) is not gauge invariant we must fix a gauge for the
perturbations $\delta h_{ab}$ to determine a metric $G_{mn} (t^\ell)$.
There are two parts to this. First, to evaluate the integral on the right
hand side of (\ref{threethree}) we must at least fix the gauge {\it
inside} each tetrahedron. We shall refer to this as the {\sl Regge
gauge} freedom. It is important to emphasize, however, that any
choice for the Regge gauge does not completely fix the total gauge
freedom available. As discussed above, there still may be variations
of the lengths of the edges which preserve the geometry --- simplicial
diffeomorphisms --- and correspondingly $\cal T(M)$ can still have
both vertical and horizontal directions. Therefore, secondly, this
gauge freedom must also be fixed.
A natural choice for the Regge gauge from the point of view of
simplicial geometry is to require that the $\delta h_{ab}$ are {\it
constant} inside each tetrahedron,
\begin{equation}
D_c \delta h_{ab} (x) = 0\ , \quad {\rm inside\ each}\quad
\tau\in \Sigma_3\ ,
\label{threefour}
\end{equation}
but possibly varying from one tetrahedron to the next. The conditions
(\ref{threefour}) are, of course, more numerous than the three
diffeomorphism conditions permitted at each point, but as we already
mentioned, these two gauges are distinct. Thus, (\ref{threefour}) is
not the Regge calculus counterpart of (\ref{twofive}). Nevertheless,
this choice of Regge-gauge (\ref{threefour}) has a beautiful property
which we shall discuss below in Subsection B.
Assuming (\ref{threefour}), the right hand side of (\ref{threethree})
may be evaluated explicitly. Although not gauge invariant, the
right-hand-side of (\ref{threethree}) {\em is} coordinate invariant. We
may therefore conveniently use coordinates in which the metric
coefficients $h_{ab}(x)$ satisfying (\ref{threefour}) are
constant in each tetrahedron. Then,
using (\ref{twotwo}),
\begin{equation}
G_{mn}(t^\ell)\delta t^m\delta t^n =
\sum\limits_{\tau\in\Sigma_3}
V(\tau)\left\{\delta h_{ab} (\tau) \delta h^{ab}(\tau) - \left[\delta
h^a_a(\tau)\right]^2\right\}
\label{threefive}
\end{equation}
where $V(\tau)$ is the volume of tetrahedron $\tau$ and we have written
$h_{ab}(\tau), \delta h_{ab}(\tau)$, etc for the constant values of
these tensors inside $\tau$.
To proceed further we need explicit expressions for $h_{ab}(\tau)$ in
terms of $t^\ell$, and for $\delta h_{ab}(\tau)$ in terms of $t^\ell$ and
$\delta t^\ell$. One way of making an explicit identification is to
pick a particular vertex in each tetrahedron $(0)$ and consider the
vectors ${\bf e}_a(\tau), a= 1,2,3$ proceeding from this vertex to the other
three vertices $(1, 2, 3)$ along the edges of the tetrahedron. The metric
$h_{ab}(\tau)$ in the basis defined by these vectors is
\begin{eqnarray}
\label{threesix}
h_{ab}(\tau) &=& {\bf e}_a(\tau)\cdot {\bf e}_b(\tau)\nonumber \\
&=& \frac{1}{2} \left(t_{0a}+t_{0b}-t_{ab}\right)
\end{eqnarray}
where $t_{AB}$ is the squared edge-length between vertices
$A$ and $B$. Eq (\ref{threesix}) gives an explicit expression
for the metric in each tetrahedron in terms of its squared edge-lengths in a
basis adapted to its edges. The perturbation of (\ref{threesix}),
\begin{equation}
\delta h_{ab} (\tau) = \frac{1}{2} \left(\delta t_{0a} + \delta t_{0b} - \delta
t_{ab}\right) \ ,
\label{threeseven}
\end{equation}
gives an explicit expression for the perturbation in $h_{ab}(\tau)$
induced by changes in the squared edge-lengths. (In general
(\ref{threeseven}) changes discontinuously from tetrahedron to
tetrahedron.) Eq (\ref{threeseven}) is an explicit realization of the
gauge condition (\ref{threefour}). Only trivial linear transformations
of the form (\ref{twothree}) inside the tetrahedra preserve (\ref{threefour}),
and there are none of these that preserve the simplicial structure in
the sense that $\xi^a(x)$ vanishes on the boundary of the tetrahedra.
In this sense (\ref{threeseven}) fixes the Regge gauge for the
perturbations.
An explicit expression for the $G_{mn}(t^\ell)$ defined by
(\ref{threethree}), (\ref{threefive}), and (\ref{threeseven}) may be
obtained by studying the expression\footnote{See, {\it e.g.}
\cite{Har85} for a
derivation.} for the squared volume of tetrahedron $\tau$,
\begin{equation}
V^2(\tau) = \frac{1}{(3!)^2} {\rm det} [h_{ab}(\tau)]\ .
\label{threeeight}
\end{equation}
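As a quick consistency check of (\ref{threesix}) and (\ref{threeeight}) (added here for convenience), take a regular tetrahedron with all squared edge-lengths equal to $t$. Then
\[
h_{ab}(\tau)=\frac{t}{2}\left(1+\delta_{ab}\right),\qquad
{\rm det}\,[h_{ab}(\tau)]=\frac{t^3}{2},\qquad
V^2(\tau)=\frac{t^3}{72}\ ,
\]
so that $V=a^3/(6\sqrt 2)$ for edge-length $a=\sqrt t$, the familiar volume of a regular tetrahedron.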
Consider a perturbation $\delta t^m$ in the squared edge-lengths. The
left hand side of (\ref{threeeight}) may be expanded (dropping the label
$\tau$) as
\begin{equation}
V^2 (t^\ell + \delta t^\ell)=
V^2(t^\ell) + \frac{\partial
V^2(t^\ell)}{\partial t^m} \delta t^m + \frac{1}{2}\ \frac{\partial^2 V^2
(t^\ell)}{\partial t^m \partial t^n}\ \delta t^m \delta t^n + \cdots\ .
\label{threenine}
\end{equation}
The right hand side may be expanded using the identity
\begin{equation}
{\rm det}\ A = \exp[Tr \log(A)]
\label{threeten}
\end{equation}
as
\begin{eqnarray}
{\rm det} \left(h_{ab} + \delta h_{ab}\right)
&=& {\rm det} (h_{ab}) {\rm det} \left(\delta^a_b + \delta h^a_b\right) \nonumber \\
&=& {\rm det} (h_{ab}) \left\{1 + \delta h^a_a
+ \frac{1}{2}\left[\left(\delta h^a_a\right)^2
- \delta h_{ab} \delta h^{ab}\right]
+\cdots \right\}\ .
\label{threeeleven}
\end{eqnarray}
Equating (\ref{threenine}) and (\ref{threeeleven}) gives, at first order,
the identity
\begin{equation}
\delta h^a_a(\tau) = \frac{1}{V^2}\ \frac{\partial V^2(\tau)}{\partial
t^m}\ \delta t^m\ ,
\label{threetwelve}
\end{equation}
and at second order gives a relation which leads through (\ref{threefive}) to
the following elegant expression for the metric $G_{mn} (t^\ell)$:
\begin{equation}
G_{mn} (t^\ell) = -\sum\limits_{\tau\in\Sigma_3} \frac{1}{V(\tau)}
\ \frac{\partial^2 V^2(\tau)}{\partial t^m\partial t^n}\ .
\label{threethirteen}
\end{equation}
This is the Lund-Regge metric \cite{LRup} on simplicial
configuration space ${\cal T}(M)$. It is an explicit function of the squared
edge-lengths $t^m$ through (\ref{threeeight}) and (\ref{threesix}). The metric
may be reexpressed in a number of other ways of which a useful example
is
\begin{equation}
G_{mn} (t^\ell) = -2 \left[\frac{\partial^2 V_{\rm TOT}}{\partial t^m
\partial t^n} + \sum\limits_{\tau\in\Sigma_3} \frac{1}{V(\tau)} \frac{\partial
V(\tau)}{\partial t^m}\ \frac{\partial V(\tau)}{\partial t^n}\right]
\label{threefourteen}
\end{equation}
where $V_{\rm TOT}$ is the total volume of $M$.
The metric (\ref{threefourteen}) becomes singular at the boundary of
$\cal T(M)$ where $V(\tau)$ vanishes for one or more tetrahedra.
However, locally, since $V^2$ is a third order polynomial in the $t$'s,
$G_{mn} \sim (t^\ell - t^\ell_b)^{-1/2}$ where $t^\ell_b$ is a point
on the boundary. A generic boundary point is therefore a finite distance
from any other point in ${\cal T(M)}$ as measured by the metric $G_{mn}$.
\subsection{Comparison with Nearby Regge Gauge Choices}
\label{sec:B}
The identification of points in a perturbed and unperturbed geometry
is ambiguous up to a displacement $\xi^a(x)$ in the point in the
perturbed geometry identified with $x^a$ in the unperturbed geometry.
As a consequence any two perturbations $\delta h_{ab}(x)$ which
differ by a gauge transformation (\ref{twothree}) represent the same
displacement in the space ${\cal T}(M)$ of three-geometries. In the
continuum case of Section II, we followed DeWitt \cite{DeW70a} and
fixed this ambiguity by minimizing the right-hand-side of
(\ref{threethree}) over all possible gauge transformations $\xi^a(x)$
so that distance between three-geometries was measured along
``horizontal'' directions in ${\cal M}(M)$. In the previous subsection
we fixed
the ambiguity in the comparison of the continuum space of piecewise metrics
to simplicial lattices by requiring perturbations to be constant over
tetrahedra [{\it cf.} (\ref{threefour})]. We can now show that the
distance defined in this way is a local minimum with respect to other
Regge gauge choices that preserve the simplicial structure, in a sense to be
made precise below.
Consider the first variation of the right-hand-side of
(\ref{threethree}) with $N(x)=1$ that is produced by an infinitesimal
gauge transformation
$\xi^a(x)$. This is
\begin{equation}
\int_M d^3 x \bar G^{abcd} (x) \delta h_{ab}(x) D_{(c}\xi_{d)}(x)\ .
\label{threefifteen}
\end{equation}
Integrating by parts and making use of the symmetry of $\bar G^{abcd}$, this
first variation can be written
\begin{equation}
-\sum\limits_{\tau\in\Sigma_3} \int_\tau d^3 x \bar G^{abcd} (x)
D_c \delta h_{ab} (x) \xi_d(x) +
\sum\limits_{\sigma\in\Sigma_2} \int_\sigma d\Sigma\left[\hskip-.01
in\Bigl|n_c
\bar G^{abcd} (x) \delta h_{ab} (x)\Bigr |\hskip-.01 in\right]
\xi_d (x)\ .
\label{threesixteen}
\end{equation}
In this expression, the first term is a sum of volume integrals over
the individual tetrahedra in $M$. The second term is an integral over
triangles where, for a particular triangle, $n^a$ is a unit outward pointing
normal and
$\left[\hskip-.01 in\Bigl|\ \ \Bigr |\hskip-.01 in\right]$ denotes the
discontinuity across the triangle. Such a term must be included since
we do not necessarily assume that the non-gauge invariant argument of
(\ref{threefifteen}) is continuous from tetrahedron to tetrahedron. The
conditions (\ref{threefour}) make the first term vanish. The second
vanishes when $\xi^a(x)$ vanishes on the boundary of every
tetrahedron. That means that the distance defined by the Lund-Regge
metric is an extremum among all re-identifications of points in the
{\it interiors} of the tetrahedra between the perturbed and
unperturbed geometries. It does not appear to necessarily be an
extremum with respect to re-identifying points in the interior of
triangles or edges. The Lund-Regge metric therefore provides the
shortest distance between simplicial three-metrics among all
choices of Regge gauge which vanish on the triangles.
However it is not exactly ``horizontal'' in the sense of the continuum
because of the possibility of simplicial diffeomorphisms.
We shall see explicit consequences of this below.
\subsection{Analytic Results on the Signature}
\label{sec:C}
We are interested in the signature of $G_{mn}$ on ${\cal T}(M)$. We shall
calculate the signature numerically for some simple $M$ in
Section \ref{sec:IV}, but here we give a few analytic results which
characterize it incompletely.
\subsubsection{The Timelike Conformal Direction }
\label{sec:a}
The conformal perturbation defined by
\begin{equation}
\delta t^m = \delta{\Omega^2} \ t^m
\label{threeseventeen}
\end{equation}
is always timelike. This can be seen directly from (\ref{threefive})
by noting that (\ref{threeseven}) and (\ref{threesix}) imply
\begin{equation}
\delta h_{ab}(\tau) = \delta {\Omega^2} \ h_{ab}(\tau)\ .
\label{threeeighteen}
\end{equation}
However, it can also be verified directly from (\ref{threethirteen})
using the fact that $V^2$ is a homogeneous polynomial of degree three
in the $t^m$. Then it follows easily from Euler's theorem that
\begin{equation}
G_{mn} (t^\ell) t^m t^n = - 6V_{\rm TOT} (t^\ell) < 0\ .
\label{threenineteen}
\end{equation}
The timelike conformal direction is not an eigenvector of $G_{mn}$
because
\begin{equation}
G_{mn}t^n=-4 \partial V_{TOT}/\partial t^m\ .
\label{conformal}
\end{equation}
We do not expect $\partial V_{TOT}/\partial t^m $ to be proportional
to $t^m$ except for symmetric assignments of the edge lengths on
highly symmetric triangulations.
The same relation shows that the conformal direction is orthogonal
to any gauge direction $\delta t^n$ because
\begin{equation}
G_{mn}t^m\delta t^n=-4 (\partial V_{TOT}/\partial t^n) \delta t^n =
-4\delta V_{TOT}\ .
\label{conformal1 }
\end{equation}
This vanishes for any change in edge lengths which does not change the
geometry.
\subsubsection{At Least $n_1-n_3$ Spacelike Directions}
\label{sec:b}
Eqs (\ref{threefive}) and (\ref{threetwelve}) can be combined to show
that
\begin{eqnarray}
\widetilde G_{mn} \delta t^m \delta t^n &\equiv& \left[G_{mn} + 4
\sum\limits_{\tau\in\Sigma_3} \, \frac{1}{V(\tau)}\ \frac{\partial V(\tau)}{\partial
t^m}\ \frac{\partial V(\tau)}{\partial t^n}\right] \delta
t^m\delta t^n \nonumber \\
&=&\sum\limits_{\tau\in\Sigma_3} \, V(\tau)\left[\delta h_{ab} (\tau)
\delta h^{ab}(\tau)\right]\geq 0
\label{threetwenty}
\end{eqnarray}
Thus
\begin{equation}
G_{mn} = \widetilde G_{mn} - 4\sum\limits_{\tau\in\Sigma_3}\ \frac{1}{V(\tau)}
\ \frac{\partial V(\tau)}{\partial t^m}
\ \frac{\partial V(\tau)}{\partial t^n}
\label{threetwentyone}
\end{equation}
where $\widetilde G_{mn}$ is positive. Some displacements $\delta t^m$
will leave the volumes of all the tetrahedra unchanged:
\begin{equation}
\frac{\partial V(\tau)}{\partial t^m}\ \delta t^m = 0\quad , \quad \tau
\in \Sigma_3.
\label{threetwentytwo}
\end{equation}
These directions are clearly spacelike from (\ref{threetwentyone}).
Since (\ref{threetwentytwo}) is $n_3$ conditions on $n_1$ displacements
$\delta t^m$
we expect at least $n_1-n_3$ independent spacelike directions.
\subsubsection{Signature of Diffeomorphism Modes}
\label{sec:c}
In general any change $\delta t^m$ in the squared edge-lengths of $M$
changes the three-geometry. A flat simplicial three-geometry is an
exception. Locally a flat simplicial geometry may be embedded in
Euclidean ${\bf R}^3$ with the vertices at positions ${\bf x}_A, A=1,
\cdots, n_0$. Displacements of these locations result in new and
different edge-lengths, but the flat geometry remains unchanged. Such
changes in the edge-lengths $\delta t^m$ are called {\sl gauge directions} in
simplicial configuration space. Each vertex may be displaced in three
directions making a total of $3n_0$ gauge directions. We shall now
investigate whether these directions are timelike, spacelike, or null.
We evaluate $\delta S^2$ defined by (\ref{threetwo}) and
(\ref{threefourteen}) for displacements $\delta{\bf x}_A$ in the
locations of the vertices. If an edge connects vertices $A$ and $B$,
its length is
\begin{equation}
t^{AB} = \left({\bf x}_A - {\bf x}_B\right)^2 \equiv \left({\bf
x}_{AB}\right)^2
\label{threetwentythree}
\end{equation}
and the change in length $\delta t^{AB}$ from a variation in
position $\delta {\bf
x}_A$ follows immediately. The total volume is unchanged by any
variation in position of the ${\bf x}_A$, which means that
\begin{equation}
\frac{\partial V_{TOT}}{\partial x^i_A} = 0 \qquad , \qquad
\frac{\partial^2 V_{TOT}}{\partial x^i_A \partial x^j_B} = 0
\label{threetwentyfour}
\end{equation}
and so on. These derivatives are related to those with respect to the
edge-lengths by the chain rule.
Thus (\ref{threetwentyfour}) does not imply that $\partial^2
V_{TOT}/\partial t^{AC}\partial t^{BC}$ is zero, but only that
\begin{equation}
\frac{\partial^2 V_{TOT}}{\partial t^m \partial t^n} \delta t^m \delta
t^n = -\frac{\partial V_{TOT}}{\partial t^{AB}}
\ \frac{\partial^2t^{AB}}{\partial x^i_A \partial x^j_B} \delta x^i_A
\delta x^j_B
\label{threetwentyseven}
\end{equation}
where $\delta t^m = (\partial t^m/\partial x^i_A )\delta
x^i_A$, with summation over both $A$ and $i$. Inserting
(\ref{threetwentyseven}) and the chain rule relations into
(\ref{threefourteen}), we obtain
\begin{eqnarray}
\lefteqn{\delta S^2 \equiv G_{mn} (t^{\ell}) \delta t^m \delta t^n =}
\\ \nonumber
- 2 \Biggl[-\frac{\partial
V_{TOT}}{\partial t^n}\ \frac{\partial^2t^n}{\partial x^i_A \partial
x^j_B}
&+ & \sum\limits_{\tau\in\Sigma_3} \frac{1}{V(\tau)}
\ \frac{\partial V(\tau)}{\partial t^m}\ \frac{\partial V(\tau)}{\partial t^n}
\ \frac{\partial t^m}{\partial x^i_A}\ \frac{\partial t^n}{\partial
x^j_B}\Biggr] \delta x^i_A \delta x^j_B\ .
\label{threetwentyeight}
\end{eqnarray}
To simplify this, we go from sums over edges to sums over the
corresponding vertices, $(C, D)$, with a factor of $1/2$ for each sum.
Then
\begin{equation}
\delta S^2 = \frac{\partial V_{TOT}}{\partial t^{CD}}
\ \frac{\partial^2t^{CD}}{\partial x^i_A \partial x^j_B}\ \delta x^i_A
\delta x^j_B -
\frac{1}{2} \sum\limits_{\tau\in\Sigma_3} \frac{1}{V(\tau)}
\left[\frac{\partial V(\tau)}{\partial t^{CD}}
\ \frac{\partial t^{CD}}{\partial x^i_A} \delta x^i_A\right]^2\ .
\label{threetwentynine}
\end{equation}
Using the explicit relations (\ref{threetwentythree}), we find
\begin{equation}
\delta S^2 = 2\left\{ \frac{\partial V_{TOT}}{\partial t_{CD}} \left( \delta
{\bf x}_C - \delta {\bf x}_D\right)^2
- \sum\limits_{\tau\in\Sigma_3}\ \frac{1}{V(\tau)}
\left[\frac{\partial V(\tau)}{\partial t_{CD}}
{\bf x}_{CD} \cdot \left(\delta {\bf x}_C - \delta {\bf
x}_D\right) \right]^2\right\}.
\label{threethirty}
\end{equation}
The second term of (\ref{threethirty}) is negative definite, but
the first does not appear to have a definite sign, so that a general
statement on the character of gauge modes does not emerge.
However, more information is available in the specific cases to
be considered below.
\section[]{Numerical Investigation of the Simplicial Supermetric}
\label{sec:IV}
The Lund-Regge metric can be evaluated numerically to give complete
information about its signature over the whole of the simplicial
configuration space which confirms the incomplete but more general
analytic results obtained above. We consider specifically as manifolds the
three-sphere ($S^3$), and the three-torus ($T^3$). For the simplest
triangulation of $S^3$ we investigate the signature over the whole of
its simplicial configuration space. For several triangulations of
$T^3$ we investigate a limited region of their simplicial
configuration space near flat geometries. Even though in both cases
the triangulations are rather coarse, a number of basic and
interesting features emerge.
The details of the triangulations in the two cases are described below
but the general method of calculation is as follows. Initial
edge-lengths are assigned (consistent with the triangle and
tetrahedral inequalities) and the supermetric calculated using (3.13).
The eigenvalues of the metric $G_{mn}$ are then calculated and the
numbers of positive, negative and zero values counted. To explore
other regions of the simplicial configuration space, the edge-lengths
are repeatedly updated (in such a manner as to ensure that the
squared measures (see (3.1)) of all the triangles and tetrahedra are
positive) and the eigenvalues and hence the signature of the
supermetric are found. In the case of the $T^3$ triangulations we
also calculate the deficit angles which give information on the curvature
of the simplicial geometry, and in addition, we explore the geometry
of the neighboring points in simplicial configuration space along each
eigenvector.
We now describe the details of the two numerical calculations for the two
different manifolds.
\subsection{The 3-Sphere}
\label{subsec:IVA}
The simplest triangulation of $S^3$ is the surface of a
four-simplex, which consists of five vertices, five tetrahedra, ten
triangles, and ten edges.
Thus the simplicial configuration
space is 10-dimensional (Figure 1). Each point in this space
represents a particular assignment of lengths to each of the ten edges
of the 4-simplex. To numerically explore all of this 10-dimensional
space would be foolish as the space contains redundant and ill-defined
regions. To avoid the ill-defined parts we need only to restrict
ourselves to those points in the configuration space that satisfy the
two and three dimensional simplicial inequalities (Eq.~3.1). However,
to avoid redundancy we must examine the invariance properties of the
the eigenvalue spectrum of the Lund-Regge metric.
Let us begin by examining an invariance of the Lund-Regge
eigenvalue spectrum under a global scale transformation. The
Lund-Regge metric scales as $G_{mn}\longrightarrow L^{-1}G_{mn}$ under
an overall rescaling of the edges $l_i\longrightarrow Ll_i$ where
$l_i=\sqrt{t^i}$. The signature is scale invariant, so we may use
this invariance to impose one condition on the $l_i$. We found it
most convenient numerically to fix $l_0 = 1$ as the longest edge. The
ten-dimensional space has now collapsed to a nine-dimensional
subspace. As this is the only invariance we have identified, we then
further restrict our investigation to the points in this subspace
which satisfy the various simplicial inequalities. Specifically,
writing $\overline{AB}$ as the edge between vertex $A$
and vertex $B$:
\begin{itemize}
\item $l_0 \equiv \overline{34}=1$ and $l_0$ is the longest edge;
\item $l_3 \equiv \overline{03} \in (0,1]$;
\item $l_4 \equiv \overline{04} \in [(1-l_3),1]$;
\item $l_6 \equiv \overline{13} \in (0,1]$;
\item $l_7 \equiv \overline{14} \in [(1-l_6),1]$;
\item $l_1 \equiv \overline{01} \in (\tau_{-}, \tau_{+})$ where
$\tau_\pm$ is obtained from the tetrahedral inequality applied to
edge $\overline{01}$ of tetrahedron $(0134)$,
\begin{eqnarray}
\tau_\pm^2 & = & \frac{1}{2} \left(-t_0+t_3+t_4+t_6-\frac{t_3t_6}{t_0}
+\frac{t_4t_6}{t_0} \right. \nonumber \\
& + & t_7+\frac{t_3t_7}{t_0}-\frac{t_4t_7}{t_0} \nonumber \\
& & \pm \sqrt{(t_0^2-2t_0t_3+t_3^2-2t_0t_4-2t_3t_4+t_4^2)} \nonumber \\
& & \left. \sqrt{(t_0^2-2t_0t_6+t_6^2-2t_0t_7-2t_6t_7+t_7^2)}/{t_0} \right)
\label{fourone}
\end{eqnarray}
\item $l_8 \equiv \overline{23} \in (0,1]$;
\item $l_9 \equiv \overline{24} \in [(1-l_8),1]$;
\item $l_2 \equiv \overline{02} \in (\tau_{-}, \tau_{+})$, where
$\tau_\pm$ is obtained from a tetrahedral inequality applied to
edge $\overline{02}$ of tetrahedron $(0234)$;
\item $l_5 \equiv \overline{12} \in (s_{-},s_{+})$, where $s_\pm$ is
obtained from the tetrahedral inequality as applied to the three
tetrahedra ( $(1234)$, $(0123)$ and $(0124)$ ) sharing edge
$\overline{12}$, with
\begin{eqnarray}
s_- & = & max\{ \tau_-^{(1234)}, \tau_-^{(0123)}, \tau_-^{(0124)} \}
\nonumber \\
s_+ & = & min\{ \tau_+^{(1234)}, \tau_+^{(0123)}, \tau_+^{(0124)} \}.
\label{fourtwo}
\end{eqnarray}
\end{itemize}
Considering only this region, we were able to sample the whole of the
configuration space. Subdividing the unit interval into ten points
would ordinarily entail a calculation of the eigenvalues for $10^{10}$
points. However, by using the scaling law for the Lund-Regge metric
together with the various simplicial inequalities we only needed to
calculate the eigenvalues for $102160$ points. We observed exactly one
timelike direction and nine spacelike directions for each point in the
simplicial configuration space, even though the distortions of our
geometry from sphericity were occasionally substantial --- up to 10 to 1
deviations in the squared edge-lengths from their symmetric values.
Recall that a conformal displacement is a timelike direction in
simplicial configuration space [{\it cf.}
(\ref{threenineteen})]. However this direction will coincide with the
timelike eigenvector of $G_{mn}$ only when all the edge lengths are
equal as follows from (\ref{conformal}).
There are clearly more than the $n_1-n_3=5$ spacelike
directions required by the general result of Sec.~\ref{sec:b}. We conclude
that in this case the signature of the Lund-Regge metric is
$(-,+,+,+,+,+,+,+,+,+)$ over the whole of simplicial configuration
space.
\begin{figure}
\centerline{\epsfxsize=5.0truein\epsfbox{Fig1.eps}}
\caption{\protect\small\label{Fig1}
{\em The boundary of a 4-simplex as a 10-dimensional simplicial
configuration space model for $S^3$.} This figure shows the five
tetrahedra corresponding to the boundary of the central 4-simplex
(0,1,2,3,4) exploded off around its perimeter. The 4-simplex has 5
vertices, 10 edges, 10 triangles, and 5 tetrahedra. The topology of
the boundary consisting of those tetrahedra is that of a 3-sphere.
The specification of the 10 squared edge-lengths of the 4-simplex
completely fixes its geometry, and represents a single point in the
10-dimensional simplicial configuration space. Here we analyze the geometry of
this space using the Lund-Regge metric and show that there is one, and
only one, timelike direction.}
\end{figure}
More detailed information about the Lund-Regge metric beyond the
signature is contained in the eigenvalues themselves. Predictions of
their degeneracies arise from the symmetry group of the triangulation.
For the boundary of the 4-simplex the symmetry group is the
permutation group on the five vertices, $S_5$. If all the edge
lengths are assigned symmetrically --- all equal edge-lengths in the
present case --- then the eigenvalues may be classified according to
the irreducible representations of $S_5$ and their degeneracies are
given by the dimensions of those representations. This is because a
permutation of the vertices can be viewed as a matrix in the
10-dimensional space of edges which interchanges the edges in
accordance with the permutation of the vertices. The Lund-Regge
metric $G_{mn}$ may be viewed similarly and commutes with all the
elements of $S_5$ for a symmetric assignment of edges. The matrices
representing the elements of $S_5$ give a 10-dimensional reducible
representation of it, which can be decomposed into irreducible
representations by standard methods described in \cite{Har85c}. The
result is that the reducible representation decomposes as $1+4+5=10$,
where the terms in this sum are the dimensions of the irreducible
representations, which we expect to be the multiplicities of the
corresponding eigenvalues of $G_{mn}$ at a symmetric assignment of
edges.
The results of numerical calculations of the eigenvalues are illustrated in
Figure~2 for a slice of simplicial configuration space. When
all the edges are equal to 1 we found one eigenvalue of $-1/2\sqrt{2}$,
four eigenvalues equal to $1/3\sqrt{2}$, and five of $5/6\sqrt{2}$. As
expected, these degeneracies were broken when we departed from the
spherical geometry (Figure~2). Nevertheless, even with aspect ratios on
the order of $10:1$ we always found a single timelike direction in
this simplicial superspace.
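These numbers are easy to reproduce with a few lines of code (a sketch under stated assumptions, not the computation used for the figures): build $G_{mn}$ directly from (\ref{threesix}), (\ref{threeeight}) and (\ref{threethirteen}), taking the Hessian of each $V^2(\tau)$ by central differences, which are exact up to round-off because $V^2$ is cubic in the squared edge-lengths; the conformal-direction identity (\ref{threenineteen}) provides an independent check.
\begin{verbatim}
# Sketch: Lund-Regge metric for the boundary of a 4-simplex at equal
# squared edge-lengths t^m = 1, and its eigenvalues.
import itertools
import numpy as np

vertices = range(5)
edges = list(itertools.combinations(vertices, 2))     # 10 edges
tets  = list(itertools.combinations(vertices, 4))     # 5 tetrahedra
idx = {e: m for m, e in enumerate(edges)}

def vol2(tet, t):
    """V^2 = det(h_ab)/36, h_ab = (t_0a + t_0b - t_ab)/2 (Eqs. 3.6, 3.8)."""
    v0, rest = tet[0], tet[1:]
    tl = lambda a, b: 0.0 if a == b else t[idx[tuple(sorted((a, b)))]]
    h = np.array([[0.5*(tl(v0, a) + tl(v0, b) - tl(a, b)) for b in rest]
                  for a in rest])
    return np.linalg.det(h) / 36.0

def lund_regge(t, d=1e-4):
    """G_mn = -sum_tau (1/V) d^2 V^2 / dt^m dt^n  (Eq. 3.13)."""
    n = len(t)
    G = np.zeros((n, n))
    for tet in tets:
        V = np.sqrt(vol2(tet, t))
        for m in range(n):
            for k in range(n):
                tpp = t.copy(); tpp[m] += d; tpp[k] += d
                tmm = t.copy(); tmm[m] -= d; tmm[k] -= d
                tpm = t.copy(); tpm[m] += d; tpm[k] -= d
                tmp = t.copy(); tmp[m] -= d; tmp[k] += d
                hess = (vol2(tet, tpp) + vol2(tet, tmm)
                        - vol2(tet, tpm) - vol2(tet, tmp)) / (4*d*d)
                G[m, k] -= hess / V
    return G

t = np.ones(len(edges))                 # the symmetric assignment
G = lund_regge(t)
print(np.sort(np.linalg.eigvalsh(G)))   # expected: 1 negative, 9 positive
# Euler-theorem check of Eq. (3.19):  G_mn t^m t^n = -6 V_TOT
print(t @ G @ t, -6*sum(np.sqrt(vol2(tt, t)) for tt in tets))
\end{verbatim}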
\begin{figure}
\centerline{\epsfxsize=5.0truein\epsfbox{Fig2.eps}}
\caption{\protect\small\label{Fig2}
{\em The eigenvalue spectrum along a 1-dimensional cut through the
10-dimensional simplicial configuration space.} We plot the 10
eigenvalues (not necessarily distinct) of the Lund-Regge metric
$G_{mn}$ with nine of the ten edges set to unity and the remaining
squared edge-length ($t$) varying from 0 to 3 (a range obtained
directly from the tetrahedral inequality). When $t=1$ we have maximal
symmetry and there are three distinct eigenvalues for the ten
eigenvectors, with degeneracies 1, 4, and 5 as predicted.
As we move away from this point one can see that the degeneracies are
for the most part broken. However, there remains a twofold degeneracy
in the 5 arising from the remaining symmetries in the assignments of
the edge-lengths. }
\end{figure}
\subsection{The 3-Torus}
\label{subsec:IVB}
In this subsection we analyze the Lund-Regge metric in the neighborhood
of two different flat geometries on a common class of triangulations of the
three-torus, $T^3$. We investigate the metric on triangulations
of varying refinement in this class.
We illustrate degeneracy of the metric and identify a few gauge modes
(vertical directions) in flat space which
correspond to positive eigenvalues.
The class of triangulations of $T^3$ is constructed as follows: A lattice of
cubes, with $n_x$, $n_y$ and $n_z$ cubes in the $x$, $y$ and
$z$-directions, is given the topology of a 3-torus by identifying
opposite faces in each of the three directions. Each cube is divided
into six tetrahedra, by drawing in face diagonals and a body diagonal
(for details, see Ro\v cek and Williams \cite{RW81}). The number of
vertices is then $n_0=n_x n_y n_z$ and there are $7n_0$ edges, $12n_0$
triangles, and $6n_0$ tetrahedra.
The geometry is flat when the squared edge-lengths of the
sides, face diagonals, and body
diagonals take the values 1, 2, and 3 in units of the squared lattice
spacing, respectively.
We refer to this geometry as the {\em
right-tetrahedron lattice}.
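A minimal sketch of this bookkeeping (an illustration only, not the C code used for the runs described below; the particular orientation chosen for the face and body diagonals is an assumed convention) lists the seven edges emanating from each vertex of the periodic cubic lattice and assigns the flat squared edge-lengths just quoted:
\begin{verbatim}
# Sketch: the 7*n0 edges of the cubic 3-torus triangulation with the
# flat right-tetrahedron squared edge-lengths (lattice scale 1).
import itertools

def torus_edges(nx, ny, nz):
    offsets = {                    # offset from a vertex -> squared length
        (1, 0, 0): 1.0, (0, 1, 0): 1.0, (0, 0, 1): 1.0,   # coordinate edges
        (1, 1, 0): 2.0, (0, 1, 1): 2.0, (1, 0, 1): 2.0,   # face diagonals
        (1, 1, 1): 3.0,                                    # body diagonal
    }
    dims = (nx, ny, nz)
    edges = {}
    for v in itertools.product(range(nx), range(ny), range(nz)):
        for off, t in offsets.items():
            w = tuple((v[i] + off[i]) % dims[i] for i in range(3))
            edges[(v, w)] = t       # keyed by the (tail, head) pair
    return edges

print(len(torus_edges(3, 3, 3)))    # 7 * 27 = 189, as quoted below
\end{verbatim}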
The flat 3-torus can also be tessellated by isosceles tetrahedra, each
face of which is an isosceles triangle with a squared base edge of
$1$, the other two squared edge-lengths being $3/4$. We refer to this
as the {\em isosceles-tetrahedron lattice}.
One can obtain this lattice from the right-tetrahedron lattice by the
following construction on each cube. Compress the
cube along its main diagonal in a symmetrical way, keeping all the ``coordinate
edges" at length $1$, until the main diagonal is also of length $1$. The face
diagonals will then have squared edge-lengths of $4/3$. An overall rescaling by a
factor of $\sqrt 3/2$ then converts the lattice of these deformed cubes to the
isosceles-tetrahedron lattice.
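A quick check of these numbers (added for convenience): after the compression the three unit coordinate vectors ${\bf e}_1,{\bf e}_2,{\bf e}_3$ satisfy $|{\bf e}_1+{\bf e}_2+{\bf e}_3|^2=1$, so by symmetry ${\bf e}_i\cdot{\bf e}_j=-1/3$ for $i\neq j$ and
\[
|{\bf e}_i+{\bf e}_j|^2=2+2\,{\bf e}_i\cdot{\bf e}_j=\frac{4}{3}\ ,\qquad i\neq j\ ,
\]
while the subsequent rescaling by $\sqrt 3/2$ multiplies all squared lengths by $3/4$, giving $3/4$, $1$, and $3/4$ for the former coordinate edges, face diagonals, and body diagonal, respectively.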
These two lattices correspond to distinct points in the $7n_0$ dimensional
simplicial configuration space of a triangulation of $T^3$.
They are examples of inequivalent flat geometries on $T^3$.
Even though they can be obtained from each other by the deformation
procedure described, they are associated with distinct flat structures. For
example, consider the geodesic structure. For the right-tetrahedron lattice,
there are three orthogonal geodesics of extremal length at any point, parallel
to the coordinate edges of the cubes of which the lattice is constructed, and
corresponding to the different meridians of the torus. On the other hand,
for the isosceles-tetrahedron lattice, the three extremal geodesics at any
point will not be orthogonal to each other; they will actually be parallel to
the ``coordinate edges'' which in the deformation process have moved into
positions at angles of ${\rm arccos}(-1/3)$ to each other. (It is perhaps easier to
visualize the analogous situation in two dimensions where the geodesics will
be at angles of ${\rm arccos}(-1/2)$ to each other). If the metric structures on the two
lattices we consider were diffeomorphically equivalent, the diffeomorphism
would preserve geometric quantities, like the angles between the
extremal-length geodesics, and this is clearly not the case.
We shall see how
this inequivalence between the flat tori manifests itself in the simplicial
supermetric.
We first turn to a detailed examination of the isosceles-tetrahedron
lattice and, in particular, the eigenvalues of the Lund-Regge metric.
Unlike the relatively simple $S^3$ model described above where we used
Mathematica to calculate the eigenvalue spectrum of the matrix
$G_{mn}$, here we developed a C program utilizing a Householder method
to determine the eigenvalue spectrum and corresponding
eigenvectors. In addition, we calculated the deficit angle
(integrated curvature) associated to each edge. These deficit angles
were used to identify diffeomorphism and conformal directions as well
as to corroborate the analytic results for the continuum described in
the Introduction. We performed various runs ranging from an
isosceles-tetrahedron lattice with $3\times 3\times 3$ vertices and
189 edges, up to a lattice with $6\times 6\times 7$ vertices and 1764
edges. The points (simplicial 3-geometries) in such high dimensional
simplicial configuration spaces cannot be systematically canvassed as
we did in the 4-simplex model. For this reason we chose to search the
neighborhood of flat space in two ways. First we explored the region
around flat space by making random variations (up to 20 percent) in
the squared edge-lengths of the isosceles-tetrahedron lattice, and
secondly we perturbed the edge-lengths a short distance along selected
flat-space eigenvectors.
Movement along the eigenvectors was performed in the following way. We
start with a set of squared edge-lengths corresponding to zero
curvature. All the $7n_0$ deficit angles are zero. We calculate the
$7n_0$ eigenvectors, $v=\{v_j,\ j=1,2,\ldots,7n_0\}$ together with the
corresponding eigenvalues $\lambda_j$ and then adjust the squared
edge-lengths along one of the $v_j$ as specified in:
\begin{equation}
t^i_{new} = t^i_{flat} + \epsilon\, v^i_j,\hspace{.25in} \forall i\in
\{1,2,\ldots, 7n_0\}.
\label{foureight}
\end{equation}
Here we choose $\epsilon\ll 1$. We then calculate the deficit angles
for this new point $t_{new}$ and then repeat this procedure for each
$v_j$ in turn. In this way we can in principle identify which (if any)
of the eigenvectors correspond to gauge directions.
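The following Python sketch illustrates this procedure; it is not the original C code, the symmetric matrix $G$ below is only a random stand-in for the supermetric $G_{mn}$ at the flat-space point, and the evaluation of the deficit angles is left as a comment since it depends on the details of the triangulation.
\begin{verbatim}
import numpy as np

# Hypothetical stand-in for the Lund-Regge supermetric G_mn at the flat point.
n_edges = 189                       # 7*n_0 edges of the 3x3x3 T^3 lattice
rng = np.random.default_rng(0)
A = rng.normal(size=(n_edges, n_edges))
G = 0.5 * (A + A.T)

# numpy's eigh reduces the symmetric matrix to tridiagonal form (Householder
# reflections) before diagonalizing, much as in the C program described above.
eigvals, eigvecs = np.linalg.eigh(G)

t_flat = np.ones(n_edges)           # flat squared edge-lengths (toy values)
eps = 0.01                          # same epsilon as used in the text

for j in range(n_edges):
    # Eq. (foureight): displace the squared edge-lengths along eigenvector v_j
    t_new = t_flat + eps * eigvecs[:, j]
    # ... recompute the 7*n_0 deficit angles of the geometry t_new here ...
\end{verbatim}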
We observe the following:
\begin{itemize}
\item Eigenvectors corresponding to eigenvalues $\lambda=1/2$
appear to be diffeomorphisms to order $\epsilon^3$ in the
sense that the deficit angles are of order $\epsilon^3$.
\item For an $n_0=n\times n\times n$ lattice there are $6n-4$
eigenvectors corresponding to $\lambda=1/2$.
\item There are $n_0+2$ eigenvectors corresponding to $\lambda=1$.
\end{itemize}
A graphical representation of this movement along the eigenvectors away from
the flat isosceles-tetrahedron lattice point is illustrated in Figure~3.
\begin{figure}
\centerline{\epsfxsize=5.0truein\epsfbox{Fig3.eps}}
\label{Fig3}
\caption{\protect\small
{\em The generation of curvature by motion in simplicial configuration
space from a flat-space point to a neighboring point along one of the
flat-space eigenvectors.} Here we analyze the curvature of the
3-geometries of the $3\times 3\times 3$ $T^3$ lattice in the
neighborhood of the flat-space isosceles tetrahedral lattice point.
The simplicial configuration space is 189 dimensional, and there are
189 deficit angles (integrated curvature) used as indicators of
curvature change. This plot shows the deficit-angle spectrum generated by
motion along each of the eigenvectors associated with each of the
eigenvalues $\lambda_k$, cf. Eq.~(\protect\ref{foureight}). Here we chose
$\epsilon=0.01$. In the plot we notice that all the eigenvectors
corresponding to eigenvalue $\lambda=1/2$ correspond to vertical or
diffeomorphism directions. }
\end{figure}
Although the eigenvalues corresponding to those gauge directions we have
identified are positive, we have not been able to establish that all gauge
directions are spacelike.
For the right-tetrahedron
lattice we have simplified the expression (\ref{threethirty}) obtained
for diffeomorphism modes as follows. The derivatives of $V(\tau)$ with
respect to the diagonal edges all vanish, so that only the edges of
the cubic lattice need be included in the $(C, D)$ sum. For lattice
spacing of 1, these derivatives are all $1/12$, giving $\partial
V_{TOT}/\partial t^{CD} \equiv 1/2$, since each such edge is shared by
6 tetrahedra. Thus for this lattice we have
\begin{equation}
\delta S^2 =
\sum_{(C,D)} \left(\delta {\bf x}_C - \delta {\bf
x}_D\right)^2 - \frac{1}{12}\ \sum_{\tau\in\Sigma_3}
\left[\sum\limits_{(C,D)\in
\tau} {\bf x}_{CD} \cdot \left(\delta {\bf x}_C - \delta {\bf
x}_D\right) \right]^2
\label{fourthree}
\end{equation}
where $(C,D)$ implies that $C$ and $D$ are connected by an edge.
Alternatively, in
terms of summation over edges, $m$, the expression is
\begin{equation}
\delta S^2 = 2\left[\sum_m \delta {\bf x}^2_m -\frac{1}{6}
\sum_{\tau\in\Sigma_3} \left(\sum_{m\in\tau} {\bf x}_m
\cdot \delta {\bf x}_m\right)^2\right]
\label{fourfour}
\end{equation}
Even though (\ref{fourfour}) is relatively simple, with definite
signs for each term, we have not managed to prove a general result about
the overall sign. We suspect that it is positive and in the following
special cases have found this to
be true:
\begin{enumerate}
\item If just one vertex moves through $\delta {\bf r}$
\begin{equation}
\delta S^2= 8 \delta {\bf r}^2
\label{fourfive}
\end{equation}
\item If only the edges in one coordinate direction of the lattice
change,
\begin{equation}
\delta S^2 \ge \frac{1}{2} \sum \delta {\bf x}^2_{CD}
\label{foursix}
\end{equation}
\item If all the $\delta {\bf x}_{CD}$'s have the same magnitude
$\delta L$ (but not the same direction)
\begin{equation}
\delta S^2 > (\delta L)^2\ \left(n_1 - \frac{3}{4}\ n_3\right) > 0
\label{fourseven}
\end{equation}
since $n_1 > n_3$
\end{enumerate}
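As an illustration of how (\ref{fourfour}) is evaluated in practice, the Python sketch below sums the two terms once the edge vectors, their variations, and the edge content of each tetrahedron are given; the single right tetrahedron and the small random variations used here are hypothetical input, not the full $T^3$ lattice.
\begin{verbatim}
import numpy as np

def delta_S2(x, dx, tets):
    """Eq. (fourfour): x[m] are edge vectors, dx[m] their variations,
    tets is a list of tetrahedra, each given by its six edge indices."""
    term1 = sum(np.dot(dx[m], dx[m]) for m in range(len(x)))
    term2 = sum(sum(np.dot(x[m], dx[m]) for m in tet) ** 2 for tet in tets)
    return 2.0 * (term1 - term2 / 6.0)

# Toy input: one right tetrahedron spanned by a corner path of the unit cube
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]], dtype=float)
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3), (0, 3)]
x = [verts[b] - verts[a] for a, b in pairs]
rng = np.random.default_rng(0)
dx = [0.01 * rng.normal(size=3) for _ in pairs]   # arbitrary small variations
print(delta_S2(x, dx, tets=[list(range(6))]))
\end{verbatim}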
If it could be proved that $\delta S^2$ is always positive, it would
still remain to calculate the number of independent directions in the
space of edge-lengths, in order to see how many of the positive
eigenvalues do indeed correspond to vertical gauge modes.
We next turn briefly to more general aspects of the
right-tetrahedron lattice.
As mentioned earlier, although the geometry of the right-tetrahedron lattice is flat
like the isosceles-tetrahedron lattice, it is not diffeomorphic to it.
It is therefore no surprise that when we calculate the eigenvalues of
the Lund-Regge metric we find both their values and degeneracies to differ
from the isosceles-tetrahedron case.
Even if the lattices are rescaled so that their total volumes are
equal, the eigenvalue spectra (which scale by the inverse length) are still not
the same.
The right-tetrahedron case allows a particularly easy analysis
of how the degeneracies of the eigenvalue spectrum are connected with
the symmetry group of a lattice. To find the symmetry group of the
right-tetrahedron lattice,
consider the symmetries when one point, the origin say, is
fixed. These are a ``parity" transformation (when ${\bf r}$ is mapped
to $-{\bf r}$), represented by $Z_2$, and a permutation of the three
coordinate directions, represented by $S_3$. The full group is
obtained by combining this $Z_2 \times S_3$ subgroup with the subgroup
of translations (mod 3) in the three coordinate directions. Thus the
symmetry group of the $T^3$ lattice with $3\times 3\times 3$ vertices
is the semi-direct product of the elementary Abelian normal subgroup
of order $3^3$ by the subgroup $Z_2 \times S_3$.
The action of the group on the vertices induces a permutation of the edges.
This 189-dimensional permutation representation of the edges decomposes as
\begin{eqnarray}
3\times 1_1 + 2\times 2_1 &+& 3\times 2_2 + 2\times 4 + 5
\times (6_1 + 6_2 + 6_3 + 6_4) + 2 \times (6_5 + 6_6 + 6_7 + 6_8)
\label{fournine}
\end{eqnarray}
where {\it e.g.} $6_3$ is the third irreducible representation of
dimension $6$; the dimensions indeed sum to $3+4+6+8+120+48=189$, the number
of edges. When this is compared with the multiplicities found for
the eigenvalues of the supermetric for the flat-space decomposition
with right-angled tetrahedra, it can be seen that the numbers agree
precisely provided that the two multiplicities of $8$ are interpreted
as $6 + 2$, and the two multiplicities of $3$ are regarded as $2 +
1$. We have no explanation for this unexpected degeneracy, although it
has been observed before (for example for several triangulations of
$CP^2$ \cite{Har85c,HPup}) and we suspect that there is a deep group
theoretical reason for it. A detailed investigation of the eigenvalues
found for the isosceles-tetrahedron lattice would almost certainly
reveal similar accidental degeneracies, for example for the
multiplicity of 29 found numerically for the eigenvalue $\lambda=1$.
Finally we look at common properties of the isosceles-tetrahedron
lattice and the right-tetrahedron lattice, in particular
the signature of the Lund-Regge metric, which is
the main point of these calculations. For both the isosceles- and
right-tetrahedron lattices with $3\times 3\times 3$ vertices, and
therefore 189 edges, the supermetric (in a flat-space configuration,
and in a neighborhood of flat space) has 176 positive eigenvalues and
13 negative ones (Fig.~4 illustrates these results for the
isosceles-tetrahedron lattice, together with the other 32 runs we
made). These are consistent with our analytical results and can be
interpreted as follows.
\begin{itemize}
\item There are rather more than the required $n_1-n_3=27$
spacelike directions.
\item The negative eigenvalues
include the conformal mode.
\item We have shown that the eigenvalues corresponding to
diffeomorphisms may sometimes be positive, and for the
isosceles-tetrahedron lattice have identified each of
the eigenvectors corresponding to the $\lambda=1/2$
eigenvalue as a generator of a diffeomorphism, or
vertical direction.
\item None of the eigenvectors corresponding to the $13$ negative
eigenvalues are generators of diffeomorphisms. This
indicates that there are $13$ horizontal directions corresponding to
negative eigenvalues.
\end{itemize}
Furthermore, we consistently observe signature change in the
supermetric as we depart from the flat space point. We also observed
a few null eigenvalues for the right-tetrahedron lattice for flat
space at various resolutions, including $4\times 4\times 4$. We are
presumably seeing finite analogs of the infinite number of horizontal
(i.e. non-gauge) directions predicted by Giulini \cite{Giu95} for
regions of superspace in the neighborhood of a flat metric, where
there is an open region with negative Ricci curvature. This
illustrates the important fact that there are still timelike
directions in the simplicial supermetric, beyond the conformal mode,
which are always present.
\vskip .26 in
\begin{figure}
\centerline{\epsfxsize=5.0truein\epsfbox{Fig4.eps}}
\label{Fig4}
\caption{\protect\small
{\em An originally flat 3-torus isosceles-tetrahedron lattice with
additional 10\% random fluctuations induced on the squared
edge-lengths.} These two graphs show the number of positive ($POS$)
and negative ($NEG$) eigenvalues as a function of the number of
vertices ($N_0$). Here we considered 33 different resolution $T^3$
lattices ranging from a $3\times 3\times 3$ lattice with $N_0=27$
vertices to a $6\times 6\times 7$ lattice with $N_0=252$ vertices. The ratio of
negative to positive eigenvalues ranges from $\sim 0.074$ to $\sim
0.123$.}
\end{figure}
\section{Geometric Structure of Superspace: Results and Future Directions}
\label{sec:V}
In this paper we used the simplicial supermetric of Lund and Regge as
a tool for analyzing the geometry of simplicial configuration space,
specifically the signature of the Lund-Regge metric. One way of
summarizing our results is to compare them with the known results for
the metric on continuum superspace described in the Introduction:
The conformal direction (\ref{threeseventeen}) is timelike as we showed
analytically in (\ref{threenineteen}). This coincides with the result
for the continuum conformal displacements.
The simplest simplicial manifold with the topology of the three-sphere
($S^3$) is the surface of a four-simplex. Here, we showed that, in
the 10-dimensional simplicial configuration space, among a set of
orthogonal directions there is always a single timelike direction and
nine spacelike ones, even for regions of simplicial configuration
space corresponding to geometries distorted from spherical symmetry
with aspect ratios exceeding $10:1$ in squared edge-length.
For the continuum such a result
is known only in an arbitrarily small neighborhood of the round metric
(analogous to all equal edges). However, we have no evidence that the
situation of a single timelike direction in an orthogonal set extends to more
refined triangulations of $S^3$ such as the 600-cell. In particular,
preliminary results indicate that there are 628 positive, 92 negative
and no zero eigenvalues for the 600-cell model\cite{BHM96}.
By investigating a neighborhood of the flat geometry in various
triangulations of $T^3$, we exhibited exact simplicial diffeomorphisms
for exactly flat geometries, and approximate simplicial
diffeomorphisms for approximately flat assignments of the squared edge
lengths. We showed that there was more than one orthogonal
timelike non-gauge direction at the flat geometry. We showed that the
Lund-Regge metric can become degenerate and change signature as one
moves away from exactly flat geometries --- a result that might have
been expected at least on large triangulations of $S^3$ from
the combination of the continuum
results on the signature near a round metric and the different
signature on regions that correspond to negative curvature. Here,
signature change is exhibited explicitly for $T^3$.
The principal advantage of casting the DeWitt supermetric into its
simplicial form is to reduce the continuum infinite dimensional
superspace to a finite dimensional simplicial configuration
space. This simplicial configuration space is to be contrasted with
``mini-'' or ``midi''-superspaces. Simplicial configuration space preserves
elements of {\it both} the physical degrees of freedom and the
diffeomorphisms. In the continuum limit of increasingly large
triangulations we expect to recover the full content of both.
Our analysis provides motivation for further research. A potentially
fruitful line of investigation is to define approximate notions of
vertical and horizontal directions in simplicial configuration space
in such a way that they coincide with the exact vertical and
horizontal directions in the continuum limit. We already know that
there are $3n_0$ approximate diffeomorphism degrees of freedom for a
simplicial 3-geometry -- a fact that has been demonstrated
analytically and illustrated numerically in Regge geometrodynamics via
the freedom of choice of a shift vector per vertex\cite{Mil86,KMW88}.
Once armed with such a theory of approximate simplicial
diffeomorphisms, it would be interesting to extend our $S^3$ analysis
to a simplicial model with arbitrarily large number of vertices. In
this way we can be assured that the tessellation will have encoded in
it all of the true dynamic degrees of freedom as well as the full
diffeomorphism freedom.
\section{Acknowledgments}
We are indebted to Benjamin Bromley for his invaluable assistance with
the numerical implementation of the $T^3$ lattice and for numerous
discussions. We are grateful to Jan Saxl for his analysis of the
multiplicities for the flat $T^3$ tessellation. JBH thanks the LANL
Theoretical Division, the Santa Fe Institute, and the Physics
Department at the University of New Mexico for their hospitality while
this work was started. His work was supported in part by NSF grants
PHY90-08502, PHY95-07065, and PHY-94-07194. RMW and WAM acknowledge
support from a Los Alamos National Laboratory LDRD/IP grant. The work
of RMW was also supported in part by the UK Particle Physics
and Astronomy Research Council.
\section*{Introduction}
Since the early 1970's there has not been a space experiment able
to image a wide field in the UV with reasonable angular resolution that
operated for more than about 10 days. The only full sky survey in the UV
was conducted by the TD-1 satellite and resulted in a catalog containing
31,215 sources measured in four spectral bands. Selected regions, to
deeper levels than the TD-1 survey, were observed by telescopes from balloons,
rockets, or dedicated satellites. The deepest such partial surveys
by wide-field imagers are
by the FOCA balloon telescope and by the UIT Shuttle-borne instrument.
Observations in the UV region longward of Lyman $\alpha$ (Ly $\alpha$ \,)
up to the atmospheric
transmission limit of $\sim$3000\AA\, take advantage of the reduced sky
background. This is because of a fortuitous combination of zodiacal
light decreasing shortward of $\sim$3000\AA\, and other backgrounds
remaining low up to near the geocoronal Ly $\alpha$. In this spectral region
it is therefore possible to observe faint astronomical sources with a
high signal-to-noise ratio with a modest telescope (O'Connell 1987).
The sources best studied with small aperture telescopes are QSOs
and AGNs, that radiate significantly in the UV. Other sources of UV
photons are hot stars of various types, the most interesting being
white dwarfs and mixed-type binaries. Young, massive stars, which
emit copious amounts of UV radiation and ionize the interstellar medium,
are important in the context of star formation and evolution of galaxies;
here the advantage of a wide-field imager is obvious. This has been
demonstrated amply by the UIT instrument flown on the Space Shuttle
(Stecher {\it et al.\ } 1992).
The obvious advantages of TAUVEX here are reduced sky background, the
longer observing time per target, and the long duration mission.
In June 1991 it was
proposed that TAUVEX be launched and operated from the Spectrum
R\"{o}ntgen-Gamma (SRG) spacecraft as part of the SODART (Soviet-Danish
R\"{o}ntgen Telescope) experiment. The SRG satellite will be launched
in late 1997 by Russia into a highly elliptical four-day orbit. SRG is the
first of a series of space astronomical observatories being developed
under the sponsorship of the Russian Academy of Sciences with financial
support of the Russian Space Agency. For SRG, the scientific support
comes from the Space Research Institute (IKI) of the Russian Academy of
Sciences, and technical support is given by the Babakin Institute of
the Lavotchkin Association.
SODART (Schnopper 1994) consists of two X-ray imaging telescopes,
each with four focal-plane
instruments, to perform observations in the 0.2-20 keV band. It images
a one-degree field of view with arcmin resolution. TAUVEX will provide SODART
with aspect reconstruction and will assist SRG in pointing and position
keeping. In September 1991 the TAUVEX experiment was officially invited
to join other instruments aboard the SRG spacecraft. ISA agreed in November
1991 to provide SRG with the TAUVEX instrument. The official confirmation
from the Russian side was received in December 1991.
The TAUVEX imagers will operate on the SRG platform alongside numerous
X-ray and $\gamma$-ray experiments. This will be the first scientific
mission providing simultaneous UV-X-$\gamma$ observations of celestial
objects. The instruments on SRG include, apart from SODART, the JET-X
0.2-10 keV imager with 40' FOV and
10-30" resolution, the MART 4-100 keV coded aperture
imager ($6^{\circ}$ FOV and 6' resolution), the two F-UVITA
700-1000\AA\, imagers ($1^{\circ}$ FOV and 10"
resolution), the MOXE all-sky X-ray burst detector in the 3-12 keV band, and
the SPIN all-sky $\gamma$-ray burst detector in the 10keV-10MeV band
with $0^{\circ}$.5 optical localization.
TAUVEX will be bore-sighted with SODART, JET-X, MART and F-UVITA, and will
obtain simultaneous imaging photometry of objects in the UV with three
independent telescopes. A combination of various filters will accommodate
wide, intermediate and narrow spectral bands. These will be selected to
take maximal scientific advantage of the stability of SRG, the image
quality of the optics (90\% of the energy in $\sim$8"), and long staring
times at each SRG pointing. During a single pointing it will be possible
to change filters, so that more than three UV bands can be used in one
observation.
The present design of TAUVEX includes three co-aligned 20 cm diameter
telescopes in a linear array on the same mounting surface. Each telescope
images 54' onto photon-counting position-sensitive
detectors with wedge-and-strip anodes. Such detectors are space-qualified
and have flown in a number of Space Astronomy missions. The TAUVEX detectors
were developed by Delft Electronische Producten (Netherlands) to provide
high UV quantum efficiency at high count rates. Most systems within
TAUVEX are at least doubly-redundant. The choice of three telescopes
with identical optics and detectors adds an intrinsic degree of
redundancy. More safeties are designed into the software. Because of
SRG telemetry constraints, TAUVEX has to accumulate an image on-board,
instead of transmitting time-tagged photons. The drift of the SRG
platform is compensated within the payload by tracking on a moderately bright
(m$_{UV}<10.5$ mag) star in the field of view. The tracking corrections
are used to register the collected events and are supplied to the SRG
orientation and stabilization system.
The payload was
designed and is assembled by El-Op Electro-Optics Industries, Ltd., of
Rehovot, the top electro-optical manufacturer of Israel, with continuous
support and
supervision of Tel Aviv University astronomers.
The development of TAUVEX follows a number of stages, in which the predicted
behavior is verified by extensive tests. El-Op already produced a number
of models of the experiment that were delivered to the Russian constructors
of the spacecraft. The delivered models include a size mockup, a mass and
center of gravity model for satellite vibration tests, and a thermal
simulation model. The latter, in particular, is identical to the flight
model except for its lack of electronics and working detectors. All
construction details and surface finishes were included, the telescopes
have actual aluminized mirrors, etc.
The thermal model was tested at an ESA (European Space Agency) facility
in Germany in late-January 1993 prior to its shipment to Russia. The test
was a full space simulation, including Solar radiation, and the measured
behavior verified the theoretical model developed at El-Op. The thermal
model has now been installed on a model of the entire spacecraft,
which will be submitted to a full environmental test, including shocks
and vibrations appropriate for the PROTON-2 launch.
In April 1993
El-Op completed the engineering model of TAUVEX, which contains
operational electronics. After testing, this model was shipped
to the Russian Space Research Institute in Moscow, where it has been
tested intensively. In particular, during 1996 the SRG instrument
teams conducted a series of Complex Tests, in which instruments were
operated together, as if they were on-board the satellite. Two more such tests
are planned, until the engineering models of the instruments are delivered
early in 1997 to the Lavotchkin Industries to be integrated in a full
spacecraft engineering model. At present, the SRG schedule calls for
a launch by the end of 1997.
In parallel with tests in Russia, the TAUVEX models are passing their
qualification in Israel. During the first half of 1996 the Qualification
Model (identical to the flight model) has been vibrated and submitted to
shocks stronger than expected during the SRG launch. In September 1996 we
expect to proceed with the thermal-vacuum qualification. This test, which
lasts more than one month, checks the behavior of the instrument at
extreme temperatures and in high vacuum conditions. During these tests,
we project various targets onto the telescopes' apertures with a high-precision
60 cm diameter collimator that allows us to fully illuminate one of the three
telescopes. We test for resolution, distortion, photometric integrity,
spectral response, etc.
While the QM is being paced through the qualification process, El-Op
continues building the flight model (FM). The optical module, containing
the three telescopes, has already been built and adjusted. The integration
of the detectors and electronics will follow immediately upon the
completion of the thermal tests. The FM will be submitted to a burn-in
process, a low-level qualification, followed by an extended calibration in the
thermal vacuum chamber of El-Op.
The timetable of the SRG project calls for the upper part of
the SODART telescopes to be integrated with the X-ray mirrors at IABG, near
M\"{u}nchen, in Germany. TAUVEX, which requires very clean assembly conditions
and is connected to a mounting plate on the side of the SODART telescopes,
will be integrated at IABG at the same time. Following the integration,
the entire top part of the SRG spacecraft will be transported to Russia
to be tested at Lavotchkin and integrated with the rest of the scientific
payload.
The combination of long observing periods per source offered by SRG
(typically 4 hours, up to 72 hours), and a high orbit with low radiation
and solar scattered background,
implies that TAUVEX will be able to detect and measure star-like objects
of $\sim$20 mag with S/N=10. This corresponds to V$\simeq$22.5 mag QSOs,
given typical UV-V colors of QSOs; at least 10 such objects are
expected in every TAUVEX field-of-view. During the 3 year guaranteed life of
SRG at least 30,000 QSOs will be observed, provided the targets are distinct
and at high galactic latitude. This is $\sim5\times$
more QSOs than catalogued now. The multi-band observations, combined
with ground-based optical observations, allow the simple separation of
QSOs from foreground stars.
Diffuse objects, such as nearby large galaxies, will be measured to a
surface brightness of m$_{UV}\simeq$20 mag/$\Box"$. A survey of the Local
Group galaxies and nearby clusters of galaxies,
that cannot be conducted with the Hubble Space Telescope because of its
narrow field-of-view, will be a high priority item in the target list
of TAUVEX.
TAUVEX will detect hundreds of faint galaxies in each high latitude field.
The large number of galaxies at faint UV magnitudes is indicated
by balloon-borne observations of the Marseilles CNRS group
(Milliard {\it et al.\ } 1992). Recently it became clear that the UV-bright galaxies
observed by FOCA may be related to those responsible for the Butcher-Oemler
effect in clusters of galaxies. It is even
possible that most faint, high latitude UV sources are galaxies.
Our prediction models indicate that each high-latitude field will
contain similar numbers of galaxies and stars. Allowing for a
reasonable fraction of low-{\bf b} fields, we estimate that TAUVEX
will observe $\sim10^6$ stars, mostly early type and WDs.
The data collection of TAUVEX will represent the deepest
UV-magnitude-limited survey of a significant fraction of the sky.
An additional major contribution of our experiment to astrophysics is
the unique opportunity to study time-dependent phenomena in all energy
ranges, from MeV in the $\gamma$-ray band to a few eV in the UV,
together with the other scientific instruments
on board SRG. The combination of
many telescopes observing the same celestial source in a number of spectral
bands offers unparalleled opportunities for scientific research.
For the first time, it will be possible to study the physics of accretion
disks around black holes and neutron stars, from the hard X-rays to near
the optical region. Other subjects of study include the inner regions of
QSOs and AGNs, where the physics of the accretion phenomenon, probably
powering all such sources, are best studied with simultaneous
multi-wavelength observations.
In preparation for TAUVEX, the science team at Tel Aviv University is
collaborating with the Berkeley Space Astrophysics Group
in analyzing the UV images of the FAUST Shuttle-borne imager.
The analysis is combined with ground-based observations from the
Wise Observatory, to enhance the identification possibilities.
In parallel with the hardware development, the Tel Aviv team is studying
the physics of UV space sources. A predictor model was developed to
calculate the expected number of UV sources to any observation direction.
The model tested well against the few existing data bases of UV sources.
We are also predicting UV properties of {\it normal} sources
from their known optical properties. This will allow us to detect
extraordinary sources, through a comparison of their {\it predicted}
and {\it measured} UV properties. Finally, we are creating at Tel Aviv
University a large and unique data base of UV astronomy, by combining a
number of existing data sets obtained by various space missions.
{\bf Acknowledgements:} I am grateful for support of the TAUVEX
project by the Ministry of Science and Arts, and by the Israel Academy of
Sciences. UV astronomy at Tel Aviv University is supported by
the Austrian Friends of Tel Aviv University. A long collaboration in
the field of UV astronomy with Prof. S. Bowyer from UC Berkeley,
supported now by the US-Israel Binational Science Foundation,
is appreciated. I thank all my colleagues of the TAUVEX
team for dedicated work during these years, and I am grateful to the
Korean Astronomical Observatory for inviting me to attend
this meeting.
{\bf Homepage:} Information on TAUVEX, with pictures, is available at:
http://www.tau.ac.il/\~\,benny/TAUVEX/.
\section* {References}
\begin{description}
\item Milliard, B., Donas, J., Laget, M., Armand, C. and Vuillemin, A.
1992 Astron. Astrophys. {\bf 257}, 24.
\item O'Connell, R.W. 1987 Astron. J. {\bf 94}, 876.
\item Schnopper, H.W. 1994 Proc. SPIE {\bf 2279}, 412.
\item Stecher, T.P {\it et al.\ } 1992 Astrophys. J. Lett. {\bf 395}, L1.
\end{description}
\end{document}
\section{Introduction}
Experimental progress in the exclusive $(e,e'p)$ reaction in recent
years has provided a clear picture of the limitations of the simple
shell-model description of closed-shell nuclei.
Of particular interest is the reduction of the single-particle (sp)
strength for the removal of particles with valence hole
quantum numbers with respect to the simple shell-model estimate
which corresponds to a spectroscopic factor of 1 for such states.
Typical experimental results\cite{data} for closed-shell nuclei
exhibit reductions of about 30\% to 45\% for these spectroscopic
factors. In the case of ${}^{208}{\rm Pb}$, one obtains a
spectroscopic factor for the transition to the ground state of
${}^{207}{\rm Tl}$ of about 0.65 which is associated with the
removal of a $3s{1 \over 2}$ proton. An analysis which uses
information obtained from elastic electron scattering, indicates
that the total occupation number for this state is about 10\%
higher\cite{wagner}, corresponding to 0.75. This additional
background strength should be present at higher missing energy
and is presumed to be highly fragmented. The depletion of more
deeply bound orbitals is expected to be somewhat less as
suggested by theoretical considerations\cite{dimu92} which also
indicate that the strength in the background, outside the main
peak, corresponds to about 10\% (see also \cite{mah91}).
Recent experimental results for ${}^{16}{\rm O}$\cite{leus94}
yield a combined quasihole strength for the $p{1 \over 2}$ and
$p {3 \over 2}$ states corresponding to about 65\% with the
$p{1 \over 2}$ strength concentrated in one peak and the
$p{3 \over 2}$ strength fragmented already over several peaks.
Recent theoretical results yield about 76\% for these $p$
states\cite{geu96} without reproducing the fragmentation of the
$p{3 \over 2}$ strength. This calculation includes the influence
of both long-range correlations, associated with a large
shell-model space, as well as short-range correlations.
Although the inclusion of long-range correlations yields a good
representation of the $l=2$ strength, it fails to account for the
presence of positive parity fragments below the first $p{3\over 2}$
fragment. This suggests that additional improvement of the
treatment of long-range correlations is indicated possibly
including a correct treatment of the center-of-mass
motion\cite{rad94}. The contribution to the depletion of the sp
strength due to short-range correlations is typically about 10\%.
This result is obtained both in nuclear matter calculations, as
reviewed in \cite{dimu92}, and in calculations directly for finite
(medium-)heavy nuclei\cite{rad94,mudi94,mu95,po95,geu96}.
Although the influence of long-range correlations on the
distribution of the sp strength is substantial, it is clear that
a sizable fraction of the missing sp strength is due to
short-range effects. The experimental data\cite{data,leus94}
indicate that only about 70\% of the expected protons in the
nucleus has been detected in the energy and momentum domain
studied so far. It is therefore important to establish precisely
where the protons which have been admixed into the nuclear
ground state due to short-range and tensor correlations, can be
detected in the $(e,e'p)$ reaction and with what cross section.
The influence of short-range correlations on the presence of
high-momentum components in finite (medium-)heavy nuclei has been
calculated in \cite{mudi94,mu95,po95}. In this work the spectral
function for ${}^{16}{\rm O}$ has been calculated from a
realistic interaction without recourse to some form of local
density approximation\cite{sic94,vnec95}. No substantial
high-momentum components are obtained in \cite{mudi94,mu95,po95}
at small missing energy. With increasing missing energy, however,
one recovers the high-momentum components which have been admixed
into the ground state. The physics of these features can be
traced back to the realization that the admixture of
high-momenta requires the coupling to two-hole-one-particle
(2h1p) states in the self-energy for a nucleon with high
momentum. In nuclear matter the conservation of momentum requires
the equality of the 2h1p momentum in the self-energy and the
external high momentum. Since the two-hole state has a relatively
small total pair momentum, one automatically needs an essentially
equally large and opposite momentum for the intermediate
one-particle state to fulfill momentum conservation. As a result,
the relevant intermediate 2h1p states will lie
at increasing excitation energy with increasing momentum.
Considerations of this type are well known for nuclear matter
(see {\it e.g.} \cite{cio91}), but are approximately valid in
finite nuclei as well. Recent experiments on
${}^{208}{\rm Pb}$\cite{bob94} and ${}^{16}{\rm O}$\cite{blo95}
essentially confirm that the presence of high-momentum components
in the quasihole states accounts for only a tiny fraction of
the sp strength.
The theoretical prediction concerning the presence of high-momentum
components at high missing energy remains to be verified
experimentally, however. In order to facilitate and support
these efforts, the present work aims to combine the calculation
of the spectral function at these energies with the description
of both the electromagnetic vertex and final state
interactions (FSI) in order to produce realistic estimates of the
exclusive $(e,e'p)$ cross section under experimental conditions
possible at NIKHEF and Mainz. The impulse approximation has been
adopted for the electromagnetic current operator, which describes
the nonrelativistic reduction (up to fourth order in the inverse
nucleon mass~\cite{gp80}) of the coupling between the external
virtual photon and single nucleons only. The treatment of FSI has
been developed by the Pavia
group~\cite{bc81,bccgp82,bgp82,br91,bgprep} (see also
Ref.~\cite{libro}) and takes into account the average complex
optical potential the nucleon experiences on its way out of the
nucleus. Other contributions to the exclusive $(e,e'p)$ reaction
are present in principle, such as two-step mechanisms in the
final state or the decay of initial collective excitations in the
target nucleus. However, by transferring sufficiently high energy
$\omega$ to the target nucleus and by selecting typical
kinematical conditions corresponding to the so-called quasielastic
peak with $\omega = q^2/2m$ ($q$ the momentum transfer and
$m$ the nucleon mass), these contributions are
suppressed. In these conditions, adopted in the most recent
experiments, the direct knockout mechanism has been shown to be
the dominant contribution~\cite{bgprep} and essentially
corresponds to calculating the combined probability for exciting a
correlated particle (which is ultimately detected) and a
correlated hole such that energy and momentum are conserved but
no further interaction of the particle with the hole is included.
The calculation of the spectral function for ${}^{16}{\rm O}$ is
reviewed in Sec. II. Special attention is given to a separable
representation of the spectral function which facilitates the
practical implementation of the inclusion of FSI. In Sec. III the
general formalism of the Distorted Wave Impulse Approximation
(DWIA) is briefly reviewed. The influence of the FSI is studied
in Sec. IV for the quasihole transitions for which data are
available\cite{leus94,blo95}. Extending the calculation of the
cross section to higher missing energies yields the expected
rise of high missing-momentum components in the cross section
in comparison to the results near the Fermi energy. The
contribution of various partial waves is studied demonstrating the
increasing importance of higher $l$-values with increasing missing
momentum. All these results are discussed in Sec. IV and a brief
summary is presented in Sec. V.
\section{the single-particle spectral function}
The calculation of the cross section for exclusive $(e,e'p)$
processes requires the knowledge of the hole spectral function
which is defined in the following way
\begin{eqnarray}
S({\bf p},m_s,m_{\tau},{\bf p'},m_s^{'},m_{\tau};E)&= & \sum_n
\left \langle \Psi_0^{\rm A} \mid
a^{\dagger}({\bf p'},m_s',m_{\tau})
\mid \Psi_n^{{\rm A} - 1}\right \rangle
\left \langle \Psi_n^{{\rm A} - 1} \mid
a({\bf p},m_s,m_{\tau}) \mid \Psi_0^{\rm A} \right \rangle
\nonumber \\
& & \delta(E-(E_0^{\rm A}-E_n^{{\rm A} - 1})), \label{eq:spec}
\end{eqnarray}
where the summation over $n$ runs over the discrete excited
states as well as over the continuum of the (A-1) particle
system, $\left |\Psi_0^{\rm A} \right\rangle $ is the ground
state of the initial nucleus and $a({\bf p},m_s,m_{\tau})$
$(a^{\dagger}({\bf p'},m_s',m_{\tau}))$ is the annihilation
(creation) operator with the specified sp quantum numbers for
momenta and third component of spin and isospin, respectively.
The spectral function is diagonal in the third component of the
isospin and ignoring the Coulomb interaction between the protons,
the spectral functions for protons and neutrons are identical
for N=Z nuclei. Therefore in the following we have dropped the
isospin quantum number $m_{\tau}$. Note that the energy variable
$E$ in this definition of the spectral function refers to minus
the excitation energy of state $n$ in the A-1 particle system
with respect to the ground-state energy $(E_0^{\rm A})$ of the
nucleus with A nucleons.
To proceed further in the calculations it is useful to introduce a
partial wave decomposition which yields the spectral function
for a nucleon in the sp basis with orbital angular momentum $l$,
total angular momentum $j$, and momentum $p$
\begin{equation}
S_{lj}(p,p';E)= \sum_n \left \langle \Psi_0^{\rm A} \mid
a^{\dagger}_{p'lj} \mid \Psi_n^{{\rm A} - 1}\right \rangle
\left \langle \Psi_n^{{\rm A} - 1} \mid a_{plj} \mid
\Psi_0^{\rm A} \right \rangle
\delta(E-(E_0^{\rm A}-E_n^{{\rm A} - 1})), \label{eq:specl}
\end{equation}
where $a_{plj}$($a^{\dagger}_{p'lj}$) denotes the corresponding
removal (addition) operator. The spectral functions for the
various partial waves, $S_{lj}(p,p';E)$, have been obtained from
the imaginary part of the corresponding sp propagator
$g_{lj}(p,p';E)$. This Green's function solves the Dyson equation
\begin{equation}
g_{lj}(p_1,p_2;E)=g_{lj}^{(0)}(p_1,p_2;E)+\int dp_3 \int dp_4
g_{lj}^{(0)}(p_1,p_3;E) \Delta \Sigma_{lj}(p_3,p_4;E)
g_{lj}(p_4,p_2;E), \label{eq:dyson}
\end{equation}
where $g^{(0)}$ refers to a Hartree-Fock propagator and
$\Delta\Sigma_{lj}$ represents contributions to the real and
imaginary parts of the irreducible self-energy, which go beyond
the Hartree-Fock approximation of the nucleon self-energy used to
derive $g^{(0)}$. Although the evaluation of the self-energy as
well as the solution of the Dyson equation has been discussed in
detail in previous publications \cite{mu95,po95} we include
here a brief summary of the relevant aspects of the method.
\subsection {Calculation of the nucleon self-energy}
The self-energy is evaluated in terms of a $G$-matrix which is
obtained as a solution of the Bethe-Goldstone equation for
nuclear matter choosing for the bare NN interaction the
one-boson-exchange potential B defined by Machleidt
(Ref. \cite{rupr}, Table A.2). The Bethe-Goldstone equation has
been solved for a Fermi momentum $k_{\rm F} = 1.4 \
{\rm fm}^{-1}$ and starting energy $-10$ MeV. The choices for the
density of nuclear matter and the starting energy are rather
arbitrary. It turns out, however, that the calculation of the
Hartree-Fock term (Fig. 1a) is not very sensitive to this
choice \cite{bm2}. Furthermore, we will correct this nuclear
matter approximation by calculating the
two-particle-one-hole (2p1h) term displayed in Fig. 1b directly
for the finite system. This second-order correction, which
assumes harmonic oscillator states for the occupied (hole)
states and plane waves for the intermediate unbound particle
states, incorporates the correct energy and density dependence
characteristic of a finite nucleus $G$-matrix. To evaluate the
diagrams in Fig. 1, we need matrix elements in a mixed
representation of one particle in a bound harmonic oscillator
while the other is in a plane wave state. Using vector bracket
transformation coefficients \cite{vecbr} one can transform matrix
elements from the representation in coordinates of relative and
center-of-mass momenta to the coordinates of sp momenta in the
laboratory frame in which the two particle state is described by
\begin{equation}
\left | p_1 l_1 j_1 p_2 l_2j_2 J T \right \rangle
\label{eq:twostate}
\end{equation}
where $p_i$, $l_i$ and $j_i$ refer to momentum and angular
momenta of particle $i$ whereas $J$ and $T$ define the total
angular momentum and isospin of the two-particle state.
Performing an integration over one of the $p_i$, one obtains a
two-particle state in the mixed representation,
\begin{equation}
\left | n_1 l_1 j_1 p_2 l_2 j_2 J T \right \rangle =
\int_0^{\infty} dp_1 p_1^2 R_{n_1,l_1}(\alpha p_1) \left |
p_1 l_1 j_1 p_2 l_2 j_2 J T \right \rangle .
\label{eq:labstate}
\end{equation}
Here $R_{n_1,l_1} $ stands for the radial oscillator function
and the oscillator length $\alpha = $ 1.72 fm$^{-1}$ has been
chosen to have an appropriate description of the bound sp states
in $^{16}$O. Using the notation defined in
Eqs.~(\ref{eq:twostate}) and (\ref{eq:labstate}), our
Hartree-Fock approximation for the self-energy is obtained in the
momentum representation,
\begin{equation}
\Sigma_{l_1j_1}^{\rm HF}(p_1, p_1') = {{1}\over {2(2j_1+1)}}
\sum_{n_2 l_2 j_2 J T} (2J+1)(2T+1)
\left \langle p_1 l_1 j_1 n_2 l_2 j_2 J T \mid G \mid
p_1' l_1 j_1 n_2 l_2 j_2 J T \right \rangle .
\label {eq:hf}
\end{equation}
The summation over the oscillator quantum numbers is restricted
to the states occupied in the independent particle model of
$^{16}$O. This Hartree-Fock part of the self-energy is real and
does not depend on the energy.
The terms of lowest order in $G$ which give rise to an imaginary
part in the self-energy are represented by the diagrams
displayed in Figs.\ \ref{fig:diag}b and \ref{fig:diag}c,
referring to intermediate 2p1h and 2h1p states respectively.
The 2p1h contribution to the imaginary part is given by
\begin{eqnarray}
{W}^{\rm 2p1h}_{l_1j_1} (p_1,p'_1; E) =&
\frac{-1}{2(2j_1+1)}
\sum\limits_{n_2 l_2 j_2} \; \sum\limits_{l L} \; \sum\limits_{J J_S S T}
\int k^2 dk \int K^2 dK \, (2J+1) (2T+1) \nonumber \\
& \times \left\langle p_1 l_1 j_1 n_2 l_2 j_2 J T
\right| G \left| k l S J_S K L T \right\rangle
\left\langle k l S J_S K L T
\right| G \left| p'_1 l_1 j_1 n_2 l_2 j_2 J T \right\rangle
\nonumber \\
& \times \pi \delta\left(E + \epsilon_{n_2 l_2 j_2} -
{\displaystyle {K^2 \over 4m}} - {\displaystyle {k^2 \over m}} \right),
\label{eq:w2p1h}
\end{eqnarray}
where the ``experimental'' sp energies $\epsilon_{n_2 l_2 j_2}$
are used for the hole states ($-47$ MeV, $-21.8$ MeV, $-15.7$ MeV for
$s {1 \over 2}$, $p {3 \over 2}$ and $p {1 \over 2}$ states,
respectively), while the energies of the particle states are
given in terms of the kinetic energy only. The plane waves
associated with the particle states in the intermediate states
are properly orthogonalized to the bound sp states following the
techniques discussed by Borromeo et al. \cite{boro}. The 2h1p
contribution to the imaginary part
${W}^{\rm 2h1p}_{l_1j_1}(p_1,p'_1; E)$ can be calculated in a
similar way (see also \cite{boro}).
Our choice to assume pure kinetic energies for the particle
states in calculating the imaginary parts of $W^{\rm 2p1h}$
(Eq.~(\ref{eq:w2p1h})) and $W^{\rm 2h1p}$ may not be very
realistic for the excitation modes at low energy. Indeed a
sizable imaginary part in $W^{\rm 2h1p}$ is obtained only for
energies $E$ below $-40$ MeV. As we are mainly interested, however,
in the effects of short-range correlations, which lead to
excitations of particle states with high momentum, the choice
seems to be appropriate. A different approach would be required
to treat the coupling to the very low-lying 2p1h and 2h1p states
in an adequate way. Attempts at such a treatment can be found in
Refs. \cite{brand,rijsd,skou1,skou2,nili96,geu96}. The 2p1h
contribution to the real part of the self-energy can be calculated
from the imaginary part $W^{\rm 2p1h}$ using a dispersion relation
\cite{mbbd}
\begin{equation}
V^{\rm 2p1h}_{l_1j_1}(p_1,p_1';E) = {1 \over \pi} \
{\cal P} \int_{-\infty}^{\infty}
{{W^{\rm 2p1h}_{l_1j_1}(p_1,p_1';E')} \over {E'-E}} dE',
\label{eq:disper1}
\end{equation}
where ${\cal P}$ represents a principal value integral. A similar
dispersion relation holds for $V^{\rm 2h1p}$ and $W^{\rm 2h1p}$.
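A minimal numerical sketch of this dispersion integral, written in Python with a toy Gaussian shape for the imaginary part and under the assumption that $W$ is negligible outside a finite interval, handles the principal value with the standard subtraction trick:
\begin{verbatim}
import numpy as np

def dispersion_V(W, E, a=-200.0, b=0.0, n=4000):
    """V(E) = (1/pi) P int_a^b W(E')/(E'-E) dE', assuming W ~ 0 outside [a,b].
    The integrable part uses [W(E')-W(E)]/(E'-E); the remainder is analytic."""
    Ep = np.linspace(a, b, n)
    integrand = np.where(np.abs(Ep - E) > 1e-12,
                         (W(Ep) - W(E)) / (Ep - E),
                         0.0)          # the E'->E limit is W'(E); set 0 here
    regular = np.trapz(integrand, Ep)
    log_term = W(E) * np.log(abs((b - E) / (a - E)))
    return (regular + log_term) / np.pi

# toy imaginary part peaked near -60 MeV (hypothetical shape, in MeV units)
W_toy = lambda E: -5.0 * np.exp(-((E + 60.0) / 15.0) ** 2)
print(dispersion_V(W_toy, E=-30.0))
\end{verbatim}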
Since the Hartree--Fock contribution $\Sigma^{\rm HF}$ has been
calculated in terms of a nuclear matter $G$-matrix, it already
contains 2p1h terms of the kind displayed in Fig.\
\ref{fig:diag}b. In order to avoid such an overcounting of the
particle-particle ladder terms, we subtract from the real part of
the self-energy a correction term ($V_{\rm c}$), which just
contains the 2p1h contribution calculated in nuclear matter.
Summing up the various contributions we obtain for the
self-energy the following expressions
\begin{equation}
\Sigma = \Sigma^{\rm HF} + \Delta\Sigma = \Sigma^{\rm HF} +
\left( V^{\rm 2p1h} - V_{\rm c} + V^{\rm 2h1p} \right)
+ \left( W^{\rm 2p1h} + W^{\rm 2h1p} \right) . \label{eq:defsel}
\end{equation}
\subsection {Solution of the Dyson equation }
The next step is to solve the Dyson equation (\ref{eq:dyson}) for
the sp propagator. To this aim, we discretize the integrals in
this equation by considering a complete basis within a spherical
box of a radius $R_{\rm box}$. The calculated observables are
independent of the choice of $R_{\rm box}$, if it is chosen to be
around 15 fm or larger. A complete and orthonormal set of regular
basis functions within this box is given by
\begin{equation}
\Phi_{iljm} ({\bf r}) = \left\langle {\bf r} \vert p_i l j m
\right\rangle = N_{il} \ j_l(p_i r) \ {\cal Y}_{ljm}
(\theta, \phi) . \label{eq:boxbas}
\end{equation}
In this equation ${\cal Y}_{ljm}$ represent the spherical
harmonics including the spin degrees of freedom and $j_l$ denote
the spherical Bessel functions for the discrete momenta $p_i$
which fulfill
\begin{equation}
j_l (p_i R_{\rm box}) = 0 .
\label{eq:bound}
\end{equation}
Note that the basis functions defined for discrete values of the
momentum $p_i$ within the box differ from the plane wave states
defined in the continuum with the corresponding momentum just by
the normalization constant, which is
$\sqrt{\textstyle{2 \over \pi}}$ for the latter. This enables us
to determine the matrix elements of the nucleon self-energy in
the basis of Eq.~(\ref{eq:boxbas}) from the results presented in
the preceding Subsection.
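For illustration, the discrete momenta defined by Eq.~(\ref{eq:bound}) can be generated numerically; the Python sketch below (using SciPy, with $R_{\rm box}=15$ fm and a hypothetical number of states) brackets the sign changes of $j_l(pR_{\rm box})$ on a fine grid and refines each zero by root finding.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def box_momenta(l, R_box=15.0, n_max=20):
    """Discrete momenta p_i with j_l(p_i R_box) = 0, cf. Eq. (bound)."""
    f = lambda p: spherical_jn(l, p * R_box)
    grid = np.linspace(1e-6, (n_max + l + 2) * np.pi / R_box, 20000)
    zeros = []
    for a, b in zip(grid[:-1], grid[1:]):
        if f(a) * f(b) < 0.0:
            zeros.append(brentq(f, a, b))
        if len(zeros) == n_max:
            break
    return np.array(zeros)

print(box_momenta(l=0)[:5])   # for l = 0 these are simply n*pi/R_box
\end{verbatim}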
As a first step we determine the Hartree-Fock approximation for the
sp Green's function in the ``box basis''. For that purpose the
Hartree-Fock Hamiltonian is diagonalized
\begin{equation}
\sum_{n=1}^{N_{\rm max}} \left\langle p_i \right|
\frac{p_i^2}{2m} \delta_{in} + \Sigma^{\rm HF}_{lj} \left| p_n
\right\rangle \left\langle p_n \vert \alpha \right\rangle_{lj} =
\epsilon^{\rm HF}_{\alpha lj} \left\langle p_i \vert
\alpha \right\rangle_{lj}. \label{eq:hfequ}
\end{equation}
Here and in the following the set of basis states in the box has
been truncated by assuming an appropriate $N_{\rm max}$. In the
basis of Hartree-Fock states $\left| \alpha \right\rangle$, the
Hartree-Fock propagator is diagonal and given by
\begin{equation}
g_{lj}^{(0)} (\alpha; E) =
\frac{1}{E-\epsilon^{\rm HF}_{\alpha lj} \pm i\eta} ,
\label{eq:green0}
\end{equation}
where the sign in front of the infinitesimal imaginary quantity
$i\eta$ is positive (negative) if $\epsilon^{\rm HF}_{\alpha lj}$
is above (below) the Fermi energy. With these ingredients one can
solve the Dyson equation (\ref{eq:dyson}). One possibility is to
determine first the so-called reducible self-energy, originating
from an iteration of $\Delta\Sigma$, by solving
\begin{equation}
\left\langle \alpha \right| \Sigma^{\rm red}_{lj}(E) \left|
\beta \right\rangle =
\left\langle \alpha \right| \Delta\Sigma_{lj}(E) \left| \beta
\right\rangle
+ \sum_\gamma
\left\langle \alpha \right| \Delta\Sigma_{lj}(E) \left| \gamma
\right\rangle
g_{lj}^{(0)} (\gamma ; E) \left\langle \gamma \right|
\Sigma^{\rm red}_{lj}(E) \left| \beta \right\rangle
\end{equation}
and obtain the propagator from
\begin{equation}
g_{lj} (\alpha ,\beta ;E ) = \delta_{\alpha ,\beta} \
g_{lj}^{(0)} (\alpha ;E ) + g_{lj}^{(0)} (\alpha ;E )
\left\langle \alpha \right| \Sigma^{\rm red}_{lj}(E) \left|
\beta \right\rangle g_{lj}^{(0)} (\beta ;E ) .
\end{equation}
Using this representation of the Green's function one can
calculate the spectral function in the ``box basis'' from
\begin{equation}
\tilde S_{lj}^{\rm c} (p_m,p_n; E) = \frac{1}{\pi} \ \mbox{Im}
\left( \sum_{\alpha, \beta} \left\langle p_m \vert \alpha
\right\rangle_{lj} \ g_{lj} (\alpha ,\beta ;E )
\left\langle \beta \vert p_n \right\rangle_{lj}\right) .
\label{eq:skob}
\end{equation}
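The structure of these equations can be condensed into a few lines of linear algebra. The Python sketch below works directly in the Hartree-Fock basis with toy numbers for the HF energies and for $\Delta\Sigma$ (and a Fermi energy set to zero), so it only shows the order of the operations, not the actual momentum-space calculation.
\begin{verbatim}
import numpy as np

def spectral_function_box(E, eps_hf, dSigma, eta=1.0):
    """Reducible self-energy, dressed propagator and (diagonal) hole spectral
    function in a discretized HF basis; eta is a small width for the poles."""
    N = len(eps_hf)
    # +i*eta above, -i*eta below the Fermi energy (taken as 0 here)
    sign = np.where(eps_hf > 0.0, 1.0, -1.0)
    g0 = 1.0 / (E - eps_hf + 1j * eta * sign)
    # Sigma_red = (1 - dSigma*g0)^(-1) dSigma
    sigma_red = np.linalg.solve(np.eye(N) - dSigma * g0[None, :], dSigma)
    # g = g0*delta + g0 * Sigma_red * g0
    g = np.diag(g0) + g0[:, None] * sigma_red * g0[None, :]
    # diagonal part of Eq. (skob), up to the basis overlaps <p|alpha>
    return np.imag(np.diag(g)) / np.pi

eps_hf = np.array([-47.0, -21.8, -15.7])              # HF energies (toy, MeV)
dSigma = 0.1 * np.eye(3) - 0.5j * np.ones((3, 3))     # toy Delta-Sigma(E)
print(spectral_function_box(E=-30.0, eps_hf=eps_hf, dSigma=dSigma))
\end{verbatim}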
For energies $E$ below the lowest sp energy of a
given Hartree-Fock state (with $lj$)
this spectral function is different from zero
only due to the imaginary part in $\Sigma^{\rm red}$.
This contribution involves the coupling to the continuum of 2h1p
states and is therefore nonvanishing only for energies at which
the corresponding irreducible self-energy $\Delta\Sigma$ has a
non-zero imaginary part. Besides this continuum contribution, the
hole spectral function also receives contributions from the
quasihole states \cite{mu95}. The energies and wave functions of
these quasihole states can be determined by diagonalizing the
Hartree-Fock Hamiltonian plus $\Delta\Sigma$ in the ``box basis''
\begin{equation}
\sum_{n=1}^{N_{\rm max}} \left\langle p_i \right|
\frac{p_i^2}{2m} \delta_{in} + \Sigma^{\rm HF}_{lj} +
\Delta\Sigma_{lj} (E=\epsilon^{\rm qh}_{\Upsilon lj})
\left| p_n \right\rangle \left\langle p_n \vert \Upsilon
\right\rangle_{lj} = \epsilon^{\rm qh}_{\Upsilon lj} \
\left\langle p_i \vert \Upsilon \right\rangle_{lj}.
\label{eq:qhequ}
\end{equation}
Since in the present work $\Delta\Sigma$ only contains a sizable
imaginary part for energies $E$ below
$\epsilon^{\rm qh}_{\Upsilon}$, the energies of the quasihole
states are real and the continuum contribution to the spectral
function is separated in energy from the quasihole contribution.
The quasihole contribution to the hole spectral function is
given by
\begin{equation}
\tilde S^{\rm qh}_{\Upsilon lj} (p_m,p_n; E) = Z_{\Upsilon lj}
{\left\langle p_m \vert \Upsilon \right\rangle_{lj}
\left\langle\Upsilon \vert p_n \right\rangle_{lj}}
\, \delta (E - \epsilon^{\rm qh}_{\Upsilon lj}), \label{eq:skoqh}
\end{equation}
with the spectroscopic factor for the quasihole state given by
\cite{mu95}
\begin{equation}
Z_{\Upsilon lj} =
\bigg( {1-{\partial \left\langle \Upsilon \right|
\Delta\Sigma_{lj}(E) \left| \Upsilon \right\rangle \over
\partial E} \bigg|_{\epsilon^{\rm qh}_{\Upsilon lj}}} \bigg)^{-1} .
\label{eq:qhs}
\end{equation}
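Numerically, Eq.~(\ref{eq:qhs}) amounts to a finite-difference derivative of the diagonal matrix element of $\Delta\Sigma$ at the quasihole energy; a minimal Python sketch, with a purely hypothetical energy dependence for that matrix element, reads
\begin{verbatim}
def spectroscopic_factor(sigma_exp, e_qh, dE=0.1):
    """Z = 1 / (1 - d<Y|DeltaSigma(E)|Y>/dE) at E = e_qh, cf. Eq. (qhs);
    sigma_exp(E) is the (real) diagonal matrix element as a function of E."""
    dSdE = (sigma_exp(e_qh + dE) - sigma_exp(e_qh - dE)) / (2.0 * dE)
    return 1.0 / (1.0 - dSdE)

sigma_toy = lambda E: -2.0 - 0.3 * (E + 20.0)         # toy energy dependence
print(spectroscopic_factor(sigma_toy, e_qh=-15.7))    # -> about 0.77
\end{verbatim}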
Finally, the continuum contribution of Eq.~(\ref{eq:skob}) and
the quasihole parts of Eq.~(\ref{eq:skoqh}), which are obtained
in the basis of box states, can be added and renormalized to
obtain the spectral function in the continuum representation at
the momenta defined by Eq.~(\ref{eq:bound})
\begin{equation}
S_{lj} (p_m,p_n;E) = \frac{2}{\pi} \ \frac{1}{N_{il}^2} \bigl(
\tilde S^{\rm c}_{lj} (p_m,p_n;E) + \sum_{\Upsilon} \tilde
S^{\rm qh}_{\Upsilon lj} (p_m,p_n;E) \bigr).
\label{eq:renor}
\end{equation}
It is useful to have a separable representation of the spectral
function in momentum space. For a given energy, the spectral
function in the box is represented by a matrix in momentum space;
after diagonalizing this matrix one obtains
\begin{equation}
S_{lj}(p_m,p_n;E)= \sum_{i}^{N_{max}} S_{lj} (i) \ \phi_i(p_m)
\ \phi_i(p_n)
\label{eq:separ}
\end{equation}
where $S_{lj}(i)$ are the eigenvalues and $\phi_i$ are the
corresponding eigenfunctions. In all cases considered here, it is
enough to consider the first 5 or 6 largest eigenvalues in
Eq.~(\ref{eq:separ}) for an accurate representation of the
spectral function. These eigenfunctions are in principle sp
overlap functions (see discussion after Eq.~(\ref{eq:myspec})
below). They can be thought of as the
natural orbits at a given energy. In fact, if the diagonalization
is performed after integrating over the energy $E$ one would
precisely obtain the natural orbits associated with the one-body
density matrix and the eigenvalues $S_{lj}(i)$ would be the
natural occupation numbers \cite{po95}.
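In practice the separable form of Eq.~(\ref{eq:separ}) is obtained by diagonalizing the spectral-function matrix at fixed energy and retaining the largest eigenvalues; the Python sketch below uses a random low-rank matrix as a stand-in for $S_{lj}(p_m,p_n;E)$ to show the truncation.
\begin{verbatim}
import numpy as np

def separable_representation(S, n_terms=6):
    """Diagonalize the symmetric matrix S and keep the n_terms largest
    eigenvalues and eigenvectors, as in Eq. (separ)."""
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]          # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    S_approx = sum(vals[i] * np.outer(vecs[:, i], vecs[:, i])
                   for i in range(n_terms))
    return vals[:n_terms], vecs[:, :n_terms], S_approx

# toy stand-in for S_lj(p_m, p_n; E): symmetric and of rank 6 by construction
rng = np.random.default_rng(1)
B = rng.normal(size=(40, 6))
S = B @ B.T
vals, vecs, S6 = separable_representation(S, n_terms=6)
print(np.max(np.abs(S - S6)))               # ~ 0: six terms suffice here
\end{verbatim}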
\section{General formalism of DWIA}
For the scattering of an ultrarelativistic electron with initial
(final) momentum ${\bf p}_{\rm e} \
({\bf p}'_{\rm e})$, while a nucleon is ejected with final
momentum ${\bf p}'_{\rm N}$, the differential cross
section in the one-photon exchange approximation
reads~\cite{bgp82,bgprep}
\begin{equation}
\frac{{\rm d}\sigma}{{\rm d}{\bf p}'_{\rm e}\, {\rm d}{\bf p}'_{\rm N}} =
\frac{e^4}{16 \pi^2} \, \frac{1}{Q^4 p_{\rm e} p'_{\rm e}}
\sum_{\lambda,\lambda'=0,\pm 1} L_{\lambda,\lambda'}
W_{\lambda,\lambda'} , \label{eq:cross}
\end{equation}
where $Q^2 = {\bf q}^2 - \omega^2$ and ${\bf q} =
{\bf p}_{\rm e} - {\bf p}'_{\rm e}, \ \omega =
p^{}_{\rm e} - p'_{\rm e}$ are the momentum and energy
transferred to the target nucleus, respectively. The quantities
$L_{\lambda,\lambda'}, W_{\lambda,\lambda'}$ (usually referred to
as the lepton and hadron tensors, respectively) are expressed
in the basis of unit vectors
\begin{eqnarray}
e_0 &= &\left( 1, 0, 0, 0 \right) , \nonumber \\
e_{\pm 1} &= &\left( 0, \mp {\textstyle \sqrt{{1 \over 2}}}, -
{\textstyle \sqrt{{1 \over 2}}} {\rm i}, 0 \right) ,
\label{eq:basis}
\end{eqnarray}
which define the longitudinal (0) and transverse $(\pm 1)$
components of the nuclear response with respect to the
polarization of the exchanged virtual photon. The components
of the lepton tensor depend only on the electron kinematics,
while $W_{\lambda,\lambda'}$ depend on $q, \omega,
p'_{\rm N}, \cos \gamma = {\bf p}'_{\rm N} \cdot
{\bf q} / p'_{\rm N} q,$ and the angle $\alpha$ between the
$({\bf p}'_{\rm N},{\bf q})$ plane and the electron
scattering plane.
The hadron tensor is defined as~\cite{bgp82,bgprep,frumou}
\begin{equation}
W^{}_{\lambda,\lambda'} = \
{\lower7pt\hbox{$_{\rm i}$}} \kern-7pt {\hbox{\raise7.5pt
\hbox{$\overline \sum$}}} \
\hbox {\hbox {$\sum$} \kern-15pt
{$\displaystyle \int_{\rm f}$\ } }
J^{}_{\lambda} ({\bf q}) J^*_{\lambda'} ({\bf q}) \ \delta
\left( E^{}_{\rm i} - E^{}_{\rm f} \right) ,
\label{eq:hadtens}
\end{equation}
i.e. it involves the average over initial states and the sum over
the final undetected states (compatible with energy-momentum
conservation) of bilinear products of the scattering amplitude
$J_{\lambda} ({\bf q})$.
This basic ingredient of the calculation is defined as
\begin{equation}
J^{}_{\lambda} ({\bf q}) = \int {\rm d} {\bf r} \
{\rm e}^{i {\bf q}
\cdot {\bf r}} \langle \Psi^{\rm A}_{\rm f}\vert
{\hat J}^{}_{\mu} \cdot e^{\mu}_{\lambda} \vert \Psi^{\rm A}_0
\rangle , \label{eq:scattampl}
\end{equation}
where the matrix element of the nuclear charge-current density
operator ${\hat J}_{\mu}$ is taken between the initial,
$\vert \Psi^{\rm A}_0 \rangle$, and the final,
$\vert \Psi^{\rm A}_{\rm f} \rangle$, nuclear states. A natural
choice for $\vert \Psi^{\rm A}_{\rm f} \rangle$ is suggested by
the experimental conditions of the reaction selecting a final
state, which behaves asymptotically as a knocked out nucleon with
momentum $p'_{\rm N}$ and a residual nucleus in a well-defined
state $\vert \Psi^{{\rm A} - 1}_n (E) \rangle$ with energy $E$
and quantum numbers $n$. By projecting this specific channel out
of the entire Hilbert space, it is possible to rewrite
Eq.~(\ref{eq:scattampl}) in a one-body representation
(in momentum space and omitting spin degrees of freedom for
simplicity) as~\cite{bccgp82}
\begin{equation}
J^{}_{\lambda} ({\bf q}) = \int {\rm d} {\bf p} \
\chi^{\left( -\right)\, *}_{p'_{\scriptscriptstyle{\rm N}} E n}
({\bf p} + {\bf q}) \ {\hat J}^{\rm eff}_{\mu}
({\bf p}, {\bf q}) \cdot e^{\mu}_{\lambda} \
\phi^{}_{E n} ({\bf p}) [S_n(E)]^{1\over 2} ,
\label{eq:scattampl1}
\end{equation}
provided that ${\hat J}_{\mu}$ is substituted by an appropriate
effective one-body charge-current density operator
${\hat J}^{\rm eff}_{\mu}$, which guarantees the
orthogonality between $\vert \Psi^{\rm A}_0 \rangle$ and
$\vert \Psi^{\rm A}_{\rm f} \rangle$ besides taking into account
effects due to truncation of the Hilbert space. Actually, the
orthogonality defect is negligible in the standard kinematics
for $(e,e'p)$ reactions and in DWIA ${\hat J}^{\rm eff}_{\mu}$ is
usually replaced by a simple one-body current
operator~\cite{bccgp82,br91,bgprep}.
The functions
\begin{eqnarray}
[S_n (E)]^{1\over 2} \phi^{}_{E n} ({\bf p}) &=
&\langle \Psi^{{\rm A} - 1}_n (E) \vert a ({\bf p})
\vert \Psi^{\rm A}_0 \rangle , \nonumber \\
\chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}} E n}
({\bf p}) &= &\langle \Psi^{{\rm A} - 1}_n (E) \vert a ({\bf p})
\vert \Psi^{\rm A}_{\rm f} \rangle
\label{eq:specampl}
\end{eqnarray}
describe the overlap between the residual state
$\vert \Psi^{{\rm A} - 1}_n (E) \rangle$ and the hole produced in
$\vert \Psi^{\rm A}_0 \rangle$ and
$\vert \Psi^{\rm A}_{\rm f} \rangle$, respectively, by removing a
particle with momentum ${\bf p}$. Both $\phi^{}_{E n},
\chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}} E n}$
are eigenfunctions of a Feshbach-like nonlocal energy-dependent
Hamiltonian referred to the residual nucleus, belonging to the
eigenvalues $E$ and $E+\omega$, respectively~\cite{bc81}.
The norm of $\phi^{}_{E n}$ is 1 and $S_n (E)$ is the
spectroscopic factor associated with the removal process, i.e. it
is the probability that the residual nucleus can indeed be
conceived as a hole produced in the target nucleus. The
dependence of
$\chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}} E n}$
on $p'_{\rm N}$ is hidden in the asymptotic state
$\vert \Psi^{\rm A}_{\rm f} \rangle$ and the boundary conditions
are those of an incoming wave.
Because of the complexity of the eigenvalue problem in the
continuum, the Feshbach hamiltonian is usually replaced by a
phenomenological local optical potential $V({\bf r})$ of
the Woods-Saxon form with complex central and spin-orbit
components. It simulates the mean-field interaction between
the residual nucleus and the emitted nucleon with energy-dependent
parameters determined through a best fit of elastic
nucleon-nucleus scattering data including cross section
and polarizations. Then,
$\chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}} E n}
\sim \chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}}}$
is expanded in partial waves and a Schr\"odinger equation
including $V({\bf r})$ is solved for each component up
to a maximum angular momentum satisfying a $p'_{\rm N}$-dependent
convergence criterion~\cite{bgprep}. The nonlocality of the
original Feshbach hamiltonian is taken into account by
multiplying the optical-model solution by the appropriate Perey
factor~\cite{perey}.
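For orientation, a minimal sketch of such a phenomenological potential is
given below (the Woods-Saxon parameters are placeholders rather than the
Schwandt values quoted below, and the spin-orbit and Coulomb terms as well as
the energy dependence of the parameters are omitted):
\begin{verbatim}
# Minimal sketch of a complex central optical potential of Woods-Saxon form.
import numpy as np

def woods_saxon(r, V0, r0, a0, W0, rw, aw, A=15):
    """Complex central potential V(r) in MeV, residual nucleus of mass A."""
    R0, Rw = r0 * A**(1.0 / 3.0), rw * A**(1.0 / 3.0)
    f  = 1.0 / (1.0 + np.exp((r - R0) / a0))   # real volume form factor
    fw = 1.0 / (1.0 + np.exp((r - Rw) / aw))   # imaginary volume form factor
    return -V0 * f - 1j * W0 * fw

r = np.linspace(0.0, 10.0, 200)                 # radial grid in fm
V = woods_saxon(r, V0=40.0, r0=1.2, a0=0.65, W0=10.0, rw=1.3, aw=0.6)
\end{verbatim}
The partial-wave Schr\"odinger equation is then solved with such a
$V({\bf r})$ for each component of the distorted wave, as described above.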
After summing over the undetected final states with quantum numbers
$n$ of the residual nucleus, the hadron tensor
$W_{\lambda,\lambda'}$ in momentum space becomes
\begin{eqnarray}
W^{}_{\lambda,\lambda'} &\sim &\sum_n \int {\rm d} {\bf p}
{\rm d} {\bf p}' \
\chi^{\left( - \right) \, *}_{p'_{\scriptscriptstyle{\rm N}}}
({\bf p}+{\bf q}) {\hat J}^{}_{\mu} ({\bf p}, {\bf q}) \cdot
e^{\mu}_{\lambda} \ \phi^{}_{E n} ({\bf p})
\phi^*_{E n} ({\bf p}') S_n (E) \nonumber \\
& &\hbox{\hskip 2cm}{\hat J}^{\dagger}_{\nu} ({\bf p}',
{\bf q}) \cdot e^{\nu \, \dagger}_{\lambda'} \
\chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}}}
({\bf p}'+{\bf q}) \nonumber \\
&\equiv &\int {\rm d} {\bf p} {\rm d} {\bf p}' \
\chi^{\left( - \right) \, *}_{p'_{\scriptscriptstyle{\rm N}}}
({\bf p}+{\bf q}) {\hat J}^{}_{\mu} ({\bf p}, {\bf q}) \cdot
e^{\mu}_{\lambda} \ S ({\bf p}, {\bf p}'; E) \nonumber \\
& &\hbox{\hskip 2cm}{\hat J}^{\dagger}_{\nu} ({\bf p}',
{\bf q}) \cdot e^{\nu \, \dagger}_{\lambda'} \
\chi^{\left( - \right)}_{p'_{\scriptscriptstyle{\rm N}}}
({\bf p}'+{\bf q}) , \label{eq:hadtens1}
\end{eqnarray}
where
\begin{equation}
S ({\bf p}, {\bf p}'; E) = \sum_n \ S_n (E) \phi^*_{E n}
({\bf p}') \phi^{}_{E n} ({\bf p}) \label{eq:myspec}
\end{equation}
is the hole spectral function defined in Eq.~(\ref{eq:spec}).
Notice that the spin and isospin indices have been omitted for
simplicity and the summation over $n$ is over the different
partial wave contributions which are present at a given energy
$E$. This sum should not be confused with the separable
representation (Eq.~(\ref{eq:separ})) of the partial wave
contributions to the spectral function $S_{lj}(p,p',E)$ defined
in Eq.~(\ref{eq:specl}). Each $lj$-contribution, coming from
either quasi-hole states (if $E$ is the correct excitation energy)
or from states which are usually unoccupied in the standard
shell model, can be separately computed, so that the total hadron
tensor will look like
\begin{equation}
W^{}_{\lambda,\lambda'} \equiv \sum_{lj} \
W^{lj}_{\lambda,\lambda'} \quad . \label{eq:hadlj}
\end{equation}
Experimental data for the $(e,e'p)$ reaction are usually
collected as ratios between the measured cross section and
$K \sigma_{\rm eN}$, where $K$ is a suitable kinematical factor
and $\sigma_{\rm eN}$ is the elementary (half off-shell)
electron-nucleon cross section. In this way the information
contained in the five-fold differential cross section is reduced
to a two-fold function of the missing energy
$E_{\rm m} = \omega - T_{p'_{\scriptscriptstyle{\rm N}}} -
E_{\rm x}$ ($T_{p'_{\scriptscriptstyle{\rm N}}}$ is the kinetic
energy of the emitted nucleon and $E_{\rm x}$ is the excitation
energy of the residual nucleus) and of the missing momentum
${\bf p}_{\rm m} = {\bf p}'_{\rm N} - {\bf q}$~\cite{data}.
Therefore, in the following Section results will be presented
in the form of the so-called reduced cross section~\cite{bgprep}
\begin{equation}
n ({\bf p}_{\rm m}) \equiv
{ {{\rm d}\sigma} \over {{\rm d}{\bf p}'_{\rm e}
{\rm d}{\bf p}'_{\rm N}} } {1 \over {K \sigma_{\rm eN}}}
\quad . \label{eq:redcross}
\end{equation}
\section{Results}
In this Section we will discuss results for the reduced cross
section defined in Eq.~(\ref{eq:redcross}) for $(e,e'p)$
reactions on $^{16}$O leading both to discrete bound states of the
residual nucleus $^{15}$N and to states in the continuum at higher missing
energy. Distortion of electron and proton waves has been taken into account
through the effective momentum approximation~\cite{GP} and through the
optical potential derived from the Schwandt parametrization~\cite{Schw}
(see Tab. III in Ref.~\cite{leus94}), respectively. All results presented
here have been obtained using the
CC1 prescription\cite{cc1} for the half off-shell elementary
electron-proton scattering amplitude in analogy with what has been
commonly done in the analysis of the experimental data.
We have also considered the nonrelativistic description of this
amplitude\cite{devan}, which would be consistent with the nonrelativistic
calculation of the five-fold differential cross section.
In parallel kinematics, where most of the experimental data are available,
this choice does not produce results appreciably different from the
former and, therefore, will not be considered in the following.
\subsection{Quasihole states}
In Fig.~\ref{fig:fig1} the experimental results
for the transition to the ground state of $^{15}$N are
displayed as a function of the missing momentum
${\bf p}_{\rm{m}}$. These data points have been collected at
NIKHEF choosing the so-called parallel kinematics\cite{leus94},
where the direction of the momentum of the outgoing proton,
${\bf p}_{\rm{N}}'$, has been fixed to be parallel to the
momentum transfer ${\bf q}$. In order to minimize the effects of
the energy dependence of the optical potential describing the
FSI, the data points have been collected at a constant kinetic
energy of 90 MeV in the center-of-mass system of the emitted
proton and the residual nucleus. Consequently, since the momentum
of the ejected particle is also fixed and
\begin{equation}
p_{\rm m} = \vert {\bf p}'_{\rm N} \vert - \vert {\bf q} \vert ,
\label{eq:missmo}
\end{equation}
the missing momentum can
be varied by collecting data at different values of the momentum $q$
transferred by the scattered electron.
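To make this setting explicit, the small sketch below (illustrative only; the
correction from working in the proton-residual center-of-mass system is
neglected) shows which momentum transfers realize a desired set of missing
momenta at fixed proton kinetic energy:
\begin{verbatim}
# Parallel kinematics: |p_N'| fixed by T_kin = 90 MeV, p_m scanned via |q|.
import numpy as np

M_P = 938.272                                   # proton mass in MeV

def proton_momentum(T_kin):
    """Relativistic momentum (MeV/c) for kinetic energy T_kin (MeV)."""
    return np.sqrt(T_kin * (T_kin + 2.0 * M_P))

p_N = proton_momentum(90.0)                     # about 420 MeV/c
p_m = np.linspace(-200.0, 200.0, 9)             # desired missing momenta
q_needed = p_N - p_m                            # |q| realizing each p_m
\end{verbatim}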
The experimental data points for this reduced cross section are
compared to the predictions of the calculations discussed above.
The quasihole part of the spectral function for the
$p{1 \over 2}$ partial wave represents the relevant piece of the
nuclear structure calculation for the proton knockout reaction
leading to the ground state of $^{15}$N. Using the quasihole part
of the spectral function as discussed above (see
Eq.~(\ref{eq:skoqh})) but adjusting the spectroscopic factor for
the quasihole state contribution $Z_{0p{1 \over 2}}$ to fit the
experimental data, we obtain the solid line of
Fig.~\ref{fig:fig1}. Comparing this result with the experimental
data one finds that the calculated spectral function reproduces
the shape of the reduced cross section as a function of the
missing momentum very well. The absolute value for the reduced
cross section can only be reproduced by assuming a spectroscopic
factor $Z_{0p{1 \over 2}} = 0.644$, a value considerably below
the one of 0.89 calculated from Eq.~(\ref{eq:qhs})\cite{mu95}.
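The adjustment of the spectroscopic factor amounts to fitting a single
overall normalization of the calculated curve to the data. A minimal sketch
of such a one-parameter least-squares fit (with hypothetical arrays for the
measured reduced cross section, its errors, and the unit-normalized
theoretical curve) reads:
\begin{verbatim}
# One-parameter fit of the normalization Z of a theoretical curve to data.
import numpy as np

def fit_spectroscopic_factor(n_exp, err, n_theory):
    """Z minimizing chi^2 = sum(((n_exp - Z*n_theory)/err)**2)."""
    n_exp, err, n_theory = map(np.asarray, (n_exp, err, n_theory))
    w = 1.0 / err**2
    Z = np.sum(w * n_exp * n_theory) / np.sum(w * n_theory**2)
    chi2 = np.sum(w * (n_exp - Z * n_theory)**2)
    return Z, chi2
\end{verbatim}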
The phenomenological Woods-Saxon wave functions adjusted to fit
the shape of the reduced cross section require spectroscopic
factors ranging from 0.61 to 0.64 for the lowest
$0p \textstyle{1 \over 2}$ state and from 0.50 to 0.59 for the
$0p \textstyle{3 \over 2}$ state, respectively,
depending upon the choice of the optical potential for the
outgoing proton\cite{leus94}. The fact that the calculated
spectroscopic factor is larger than the one adjusted to the
experimental data may be explained by the observation that the
calculation only reflects the depletion of the quasihole
occupation due to short-range correlations. Further depletion and
fragmentation should arise from long-range correlations due to
collective excitations at low energies\cite{geu96,nili96}. Other
explanations for this discrepancy could be the need for improving
the description of spurious center-of-mass motion\cite{pwp,rad94}
or a different treatment of FSI in terms of a relativistic model
for the optical potential\cite{udias}.
In order to visualize the effects of FSI, Fig.~\ref{fig:fig1}
also displays the results obtained for the quasihole contribution
to the spectral function (with the same spectroscopic factor
$Z_{0p{1 \over 2}} = 0.644$ as before, for sake of consistency)
but ignoring the effects of the optical potential. In this
so-called Plane-Wave Impulse Approximation (PWIA) the reduced cross
section as a function of the missing momentum is identical to the
spectral function at the missing energy of the considered
$0p{1 \over 2}$ state, or, better, to the momentum distribution of
the peak observed at this missing energy with the quantum numbers
of the ground state of $^{15}$N. Therefore, the difference
between the solid and the dashed line in Fig.~\ref{fig:fig1}
corresponds to the difference between the reduced cross section
defined in Eq.~(\ref{eq:redcross}) and the momentum distribution
for the ground state of $^{15}$N. In other words, it illustrates
the effect of all the ingredients entering the present theoretical
description of the $(e,e'p)$ reaction, which are not contained
in the calculation of the spectral function. In particular,
the real part of the optical potential yields a
reduction of the momentum of the outgoing proton $p_{\rm{N}}'$.
According to Eq.~(\ref{eq:missmo}), this implies in parallel
kinematics a redistribution of the strength towards smaller values
of the missing momentum and makes it possible to reproduce the observed
asymmetry of the data around $p_{\rm{m}} = 0$.
This feature cannot be obtained in PWIA (dashed line), where the results
are symmetric around $p_{\rm{m}} = 0$ due to the cylindrical symmetry of the
hadron tensor $W^{}_{\lambda,\lambda'}$
around the direction of ${\bf q}$ when FSI are switched off
(for a general review see Ref.~\cite{bgprep} and references therein).
The imaginary part of the optical potential describes
the absorption of the proton flux due to coherent inelastic
rescatterings, which produces the well known quenching with
respect to the PWIA result.
As a second example for the reduced cross section in $(e,e'p)$
reactions on $^{16}$O leading to bound states of the residual
nucleus, we present in Fig.~\ref{fig:fig2} the data for the
$\textstyle{3 \over 2}^-$ state of $^{15}$N at an excitation
energy of $6.32$ MeV.
Also in this case the experimental data are reproduced very well
if we adjust the spectroscopic factor for the corresponding
quasihole part in the spectral function to $Z_{0p{3 \over 2}} =
0.537$. The discrepancy with the calculated spectroscopic factor
(0.914) is even larger for this partial wave than it is for the
$p{1 \over 2}$ state. A large part of this discrepancy can be
attributed to the long-range correlations, which are not
accounted for in the present study. Note that in the experimental
data three $\textstyle{3 \over 2}^-$ states are observed in
$^{15}$N at low
excitation energies. Long-range correlations yield a splitting
such that 86\% of the total strength going to these
three states is contained in the experimental data displayed in
Fig.~\ref{fig:fig2}. This splitting is not observed in the
theoretical calculations. If one divides the adjusted
spectroscopic factor $Z_{0p{3 \over 2}}$ by 0.86 to account for
the splitting of the experimental strength, one obtains a value
of 0.624 which is close to the total spectroscopic factor
adjusted to describe the knockout of a proton from $p{1 \over 2}$
state.
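Explicitly, writing the corrected value as $Z^{\rm corr}$ (a notation
introduced here only for clarity), this is just
\[
Z^{\rm corr}_{0p{3\over 2}} \;=\; \frac{0.537}{0.86} \;\simeq\; 0.624\;,
\]
to be compared with the value 0.644 obtained above for the
$p{1 \over 2}$ knockout.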
Figure \ref{fig:fig2} also contains the results for the reduced
cross section derived by substituting the overlap
$[S_n (E)]^{1\over 2} \phi^{}_{E n}$ in Eq.~(\ref{eq:scattampl1})
with the variational wave function of Pieper et al.\cite{rad94},
who employed the Argonne potential for the NN
interaction\cite{argon}. Also in this case the shape of the
experimental data is globally reproduced with a slightly better
agreement for small negative values of $p_{\rm m}$ but with a
clear underestimation at larger $p_{\rm m}$. The overall quality
of the fit is somewhat worse than for the Green's function approach
and the required adjusted spectroscopic factor is
$Z_{0p{3 \over 2}} = 0.459$, even below the value of 0.537 needed
in the present calculation.
The analysis of the reduced cross section has been extended to
higher missing momenta by experiments performed at the MAMI
accelerator in Mainz\cite{blo95}, adopting kinematical conditions
different from the parallel kinematics. Using the same
spectroscopic factors for the $p{3 \over 2}$ and the
$p{1 \over 2}$ partial waves, which were adjusted to the NIKHEF
data above, the results of our calculations agree quite well
also with these MAMI data, as displayed in Fig.~\ref{fig:fig2a}.
Although the calculation is somewhat below the data at high
missing momentum, one should keep in mind that the corresponding
difference in sp strength is only an extremely tiny fraction of
the 10\% of the protons which are expected to be associated with
high momenta due to short-range
correlations~\cite{mudi94,mu95,po95}.
\subsection{The contribution of the continuum}
From theoretical studies it is known that an enhancement of the
high-momentum components due to short-range NN correlations
does not show up in knockout experiments leading to states of low
excitation energy in the (A-1) nucleus, but should be seen at
higher missing energies, which correspond to large excitation
energies in the residual nucleus. A careful analysis of
such reactions leading to final states above the threshold for
two-nucleon emission, however, is much more involved. For
example, a description of the electromagnetic vertex beyond the
impulse approximation is needed and two-body current operators
must be adopted which are consistent with the contributions
included in the spectral function. Moreover, the possible further
fragmentation of the (A-1) residual system requires, for a
realistic description of FSI, a coupled-channel formalism with
many open channels. Calculations based on the optical potential
are not satisfactory at such missing energies, because
inelastic rescatterings and multi-step processes will add and
remove strength from this particular channel.
Nevertheless, it should be of interest to analyze the predictions
of the present approach at such missing energies. First of all,
because it represents the first realistic attempt at a complete
calculation of the single-particle channel leading to the final
proton emission, including intermediate states above the Fermi
level up to $l=4$; it therefore provides a realistic estimate
of the relative size of this specific channel. Secondly, because
information on the shape of the reduced cross section as a
function of the missing momentum, or on the relative contribution of
the various partial waves, should remain reliable even at these
missing energies. Due to the problems mentioned above, no
reliable description of the absolute value of the reduced cross
section can be reached in this framework.
In order to demonstrate the energy dependence of the spectral
function and its effect on the cross section, we have calculated
the reduced cross section for the excitation of
$\textstyle{3 \over 2}^-$ states
at $E_{\rm m} = -63$ MeV. For these studies we considered
the so-called perpendicular kinematics, where the kinetic energy of the
emitted proton is kept fixed at 90 MeV and the momentum
transfer at $q \sim 420$ MeV/c (equal to the outgoing proton
momentum). The same optical potential as in Figs.~\ref{fig:fig1},
\ref{fig:fig2} can be adopted to describe FSI and the
missing momentum distribution is obtained by varying the angle
between ${\bf p}_{\rm N}'$ and ${\bf q}$. For a spectral function
normalized to unity (as the absolute result for the cross section
is not reliable), the reduced cross section is represented by the
solid line in Fig.~\ref{fig:fig3}. If, however, we replace the
spectral function derived from the continuum contribution in
Eq.~(\ref{eq:renor}) by the one derived for the
$\textstyle{3 \over 2}^-$
quasihole state at its proper missing energy (but now in the same
kind of perpendicular kinematics and normalized to 1)
we obtain the dashed line. A comparison of these two calculations
demonstrates the enhancement of the high-momentum components in
the spectral function leading to final states at large excitation
energies. Note that the cross section derived from the appropriate
spectral function is about two orders of magnitude larger at
$p_{\rm m} \sim 500$ MeV/c than the one derived from the spectral
function at the quasihole energy.
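For reference, the relation between the missing momentum and the detection
angle in this perpendicular setting is elementary; a small illustrative
sketch (with both momenta simply fixed at the quoted value of about
420 MeV/c) is:
\begin{verbatim}
# Perpendicular kinematics: |p_N'| and |q| fixed, p_m scanned via the angle.
import numpy as np

p_N, q = 420.0, 420.0                               # MeV/c (both fixed)
theta = np.linspace(0.0, 90.0, 10) * np.pi / 180.0  # angle between p_N' and q
p_m = np.sqrt(p_N**2 + q**2 - 2.0 * p_N * q * np.cos(theta))
# an angle near 73 degrees gives |p_m| close to 500 MeV/c for these values
\end{verbatim}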
The discussion so far is of course somewhat academic since it will
be difficult to perform a decomposition of the continuum
contribution to the reduced cross section in terms of the quantum
numbers for angular momentum and parity of the states of
the residual system. Therefore we display in
Figs.~\ref{fig:fig4} and \ref{fig:fig5} the contributions to the
total reduced cross section of the various partial waves
associated with states above the Fermi level and usually unoccupied
in the standard shell model. From Fig.~\ref{fig:fig4} we can
furthermore see that the relative importance of the various
partial waves changes with the missing momentum, emphasizing the
contribution of higher angular momenta at increasing $p_{\rm m}$.
This feature can be observed even better in Fig.~\ref{fig:fig5},
where the percentage of each relative contribution to the total
reduced cross section is displayed as a function of the missing
momentum. For each orbital angular momentum we obtain a
``window'' in $p_{\rm m}$ where its contribution shows a maximum
as compared to other partial waves.
\section{Conclusions}
In the present paper the consequences of the presence of
high-momentum components in the ${}^{16}$O ground state have been
explored in the calculation of the $(e,e'p)$ cross section within
the formalism for the DWIA developed in
Refs.\cite{bc81,bccgp82,bgp82,br91,bgprep,libro}. The spectral
functions have been calculated for the ${}^{16}$O system itself,
by employing the techniques developed and discussed
in Refs.~\cite{boro,mudi94,mu95,po95}. At low missing energies, the
description of the missing momentum dependence of the
$p \textstyle{1 \over 2}$ and $p \textstyle{3 \over 2}$
quasihole states compares favorably with the experimental data
obtained at NIKHEF\cite{leus94} and at the MAMI facility in
Mainz\cite{blo95}. The difference between theory and experiment
at high missing momenta can at most account for a very tiny
fraction of the sp strength which is predicted to be present at
these momenta\cite{mudi94,mu95,po95}. A comparison with the PWIA
result clarifies the influence of FSI in parallel kinematics.
We also compare our results for the $p \textstyle{3 \over 2}$
quasihole state with the results obtained in Ref.~\cite{rad94}
for the Argonne NN interaction. While the shape of the cross
sections is nicely described by our results, the calculated
spectroscopic factors are substantially larger than the values
adjusted to the data. Although a
large fraction of this discrepancy can be ascribed to the
influence of long-range correlations\cite{geu96,nili96}, which
are outside the scope of the present work, a discrepancy
may still remain although it has been suggested
that a correct treatment of the center-of-mass
motion\cite{rad94} may fill this gap.
As discussed previously for nuclear matter (see e.g.
\cite{cio91}) and emphasized in \cite{mudi94,mu95,po95} for
finite nuclei, the admixture of high-momentum components in the
nuclear ground state can only be explored by considering high
missing energies in the $(e,e'p)$ reaction. Although other
processes may contribute to the cross section at these energies,
we have demonstrated in this paper that the expected
emergence of high missing momentum components in the cross section
is indeed obtained and yields substantially larger cross sections
than the corresponding outcome for the quasihole states.
As a result, we conclude that the presence of high-momentum
components leads to a detectable cross section at high missing
energy. In addition, we observe that it is important to include
orbital angular momenta at least up to $l = 4$ in the spectral
function in order to account for all the high missing momentum
components up to about 600 MeV/c. A clear window for the dominant
contribution of each $l$-value as a function of missing
momentum is also established. This feature may help to analyze
experimental data at these high missing energies.
\vspace{1cm}
\centerline{\bf ACKNOWLEDGEMENTS}
This research project has been supported in part by Grant No.
DGICYT, PB92/0761 (Spain), EC Contract No. CHRX-CT93-0323, the
``Graduiertenkolleg Struktur und Wechselwirkung von Hadronen und
Kernen'' under DFG Mu705/3 (Germany), and the U.S. NSF under
Grant No. PHY-9602127.
\section{Introduction} \label{sec:1}
The old idea (in any field theory) of losses of energy and momenta in an
isolated system, due to the presence of radiation reaction forces in the
equations of motion, is of topical interest in the case of the
gravitational field. Notably, gravitational radiation reaction forces
play an important role in astrophysical binary systems of compact
objects (neutron stars or black holes). The electromagnetic-based
observations of the Hulse-Taylor \cite{HT75} binary pulsar PSR~1913+16
have yielded evidence that the binding energy of the pulsar and its
companion decreases because of gravitational radiation reaction
\cite{TFMc79,D83b,TWoDW,T93}.
Even more relevant to the problem of radiation reaction are the future
gravitational-based observations of inspiralling (and then coalescing)
compact binaries. The dynamics of these systems is entirely driven by
gravitational radiation reaction forces. The future detectors such as
LIGO and VIRGO should observe the gravitational waves emitted during the
terminal phase, starting about twenty thousand orbital rotations before the
coalescence of two neutron stars. Because inspiralling compact binaries
are very relativistic, and thanks to the large number of observed
rotations, the output of the detectors should be compared with a very
precise expectation from general relativity \cite{3mn,FCh93,CF94}. In
particular Cutler {\it et al} \cite{3mn} have shown that our {\it a
priori} knowledge of the relativistic (or post-Newtonian) corrections in
the radiation reaction forces will play a crucial role in our ability to
satisfactorily extract information from the gravitational signals.
Basically, the reaction forces inflect the time evolution of the binary's
orbital phase, which can be determined very precisely because of the
accumulation of observed rotations. The theoretical problem of the
phase evolution has been addressed using black-hole perturbation
techniques, valid when the mass of one body is small compared with
the other mass \cite{P93,CFPS93,TNaka94,Sasa94,TSasa94,P95}, and using the
post-Newtonian theory, valid for arbitrary mass ratios
\cite{BDI95,BDIWW95,WWi96,B96pn}. It has been shown \cite{CFPS93,TNaka94,P95}
that post-Newtonian corrections in the radiation reaction forces should
be known up to at least the third post-Newtonian (3PN) order, or
relative order $c^{-6}$ in the velocity of light.
The radiation reaction forces in the equations of motion of a self-gravitating
system arise at the 2.5PN order (or $c^{-5}$ order) beyond the Newtonian
acceleration. Controlling the $n$th post-Newtonian corrections in the
reaction force means, therefore, controlling the $(n+2.5)$th
post-Newtonian corrections in the equations of motion. If $n=3$ this is
very demanding, and beyond our present knowledge. A way out of this
problem is to {\it assume} the validity of a balance equation for
energy, which permits relating the mechanical energy loss in the system to the
corresponding flux of radiation far from the system. Using such a
balance equation necessitates the knowledge of the equations of motion
up to the $n$PN order instead of the $(n+2.5)$PN one. The price to be
paid for this saving is the computation of the far-zone flux up to the
same (relative) $n$PN order. However, this is in general less demanding
than going to $(n+2.5)$PN order in the equations of motion. All the
theoretical works on inspiralling binaries compute the phase evolution
from the energy balance equation. Black-hole perturbations
\cite{TNaka94,Sasa94,TSasa94} reach $n=4$ in this way, and the
post-Newtonian theory \cite{BDI95,BDIWW95,WWi96,B96pn} has $n=2.5$.
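Schematically, and keeping only the dominant (Newtonian) term for the purpose
of illustration, the balance equation in question reads
\[
\frac{dE}{dt} \;=\; -\,{\cal F}\;, \qquad
{\cal F} \;=\; \frac{G}{5c^5}\,
\left\langle \frac{d^3Q_{ij}}{dt^3}\,\frac{d^3Q_{ij}}{dt^3} \right\rangle
\left[\,1 + O\!\left(\frac{1}{c^2}\right)\right]\;,
\]
where $E$ is the mechanical energy of the system, ${\cal F}$ the energy flux
at infinity, $Q_{ij}$ the trace-free quadrupole moment of the source, and the
brackets denote an average over an orbital period. Controlling the balance
equation at the relative $n$PN order then requires both $E$ (through the
equations of motion) and ${\cal F}$ to that same accuracy.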
An important theoretical problem is therefore to improve the present
situation by showing the validity to post-Newtonian order of the balance
equations for energy, and also for linear and angular momenta. This problem
is equivalent to controlling the radiation
reaction forces at the same post-Newtonian order. Arguably,
this problem is also important in its own (not only for applications to
inspiralling compact binaries).
Radiation reaction forces in general relativity have long been investigated
(see \cite{D83a} for a review of works prior to the seventies). In
the late sixties, Burke and Thorne \cite{Bu69,Th69,Bu71}, using a method
of matched asymptotic expansions, introduced a quasi-Newtonian
reactive potential, proportional to the fifth time-derivative of the
Newtonian quadrupole moment of the source. At about the same time,
Chandrasekhar and collaborators \cite{C69,CN69,CE70}, pursuing a
systematic post-Newtonian expansion in the case of extended fluid
systems, found some reactive terms in the equations of motion at the
2.5PN approximation. The reactive forces are different in the two
approaches because of the use of different coordinate systems, but both
yield secular losses of energy and angular momentum in
agreement with the standard Einstein quadrupole formulas (see however
\cite{EhlRGH,WalW80,BD84}). These results, after later confirmation and
improvements \cite{AD75,PaL81,Ehl80,Ker80,Ker80',BRu81,BRu82,S85},
show the validity of the balance equations to {\it Newtonian} order, and
in the case of weakly self-gravitating fluid systems. In the case of
binary systems of compact objects (such as the binary pulsar and
inspiralling binaries), the Newtonian balance equations are also known to be
valid, as the complete dynamics of these systems
has been worked out (by Damour and Deruelle \cite{DD81a,DD81b,D82,D83a})
up to the 2.5PN order where radiation reaction effects appear.
Post-Newtonian corrections in the radiation reaction force can be obtained
from first principles using a combination of analytic approximation
methods. The methods are (i) a post-Minkowskian or non-linear expansion
method for the field in the weak-field domain of the source (including
the regions far from the source), (ii) a multipolar expansion method for
each coefficient of the post-Minkowskian expansion in the domain
exterior to the source, and (iii) a post-Newtonian expansion method (or
expansion when $c\to\infty$) in the near-zone of the source (including its
interior). Then an
asymptotic matching (in the spirit of \cite{Bu69,Th69,Bu71}) permits
us to connect the external field to the field inside the source. Notably, the
methods (i) and (ii) have been developed by Blanchet and Damour
\cite{BD86,B87,BD92} on foundations laid by Bonnor and collaborators
\cite{Bo59,BoR66,HR69}, and Thorne \cite{Th80}. The method (iii) and
matching have also been developed within the present approach
\cite{BD89,DI91a,B95}.
The post-Newtonian correction that is due to gravitational wave tails in
the reaction force was determined first using the latter methods
\cite{BD88}. The tails of waves are produced by scattering of the
linear waves off the static spacetime curvature generated by the total
mass of the source (see e.g. \cite{BoR66,HR69}). Tails appear as
non-local integrals, depending on the full past history of the system, and
modifying its present dynamics by a post-Newtonian correction of 1.5PN order
in the radiation reaction force, corresponding to 4PN order in the
equations of motion \cite{BD88}. It has been shown in
\cite{BD92} that the tail contribution in the reaction force is such
that the balance equation for energy is verified for this particular
effect. This is a strong indication that the balance equations are
actually valid beyond the Newtonian order (1.5PN order in this case).
For completeness we shall include this result in the present paper.
The methods (i)-(ii) have been implemented in [1]
in order to investigate systematically the occurrence and structure of the
contributions in the exterior field which are expected to yield
radiation reaction effects (after application of the method (iii) and the
relevant matching). The present paper is the direct continuation of
the paper [1], which we shall refer to here as paper I.
Working first within the linearized theory, we investigated in paper I
the ``antisymmetric" component of the exterior field, a solution of the
d'Alembertian equation composed of a retarded (multipolar) wave minus
the corresponding advanced wave. Antisymmetric waves in the exterior
field are expected to yield radiation reaction effects in the
dynamics of the source. Indeed, these waves change sign when we reverse
the condition of retarded potentials into the advanced condition (in the
linearized theory), and have the property of being regular all over the
source (when the radial coordinate $r \to 0$). Thus, by matching, the
antisymmetric waves in the exterior field are necessarily
present in the interior field as well, and can be interpreted
as radiation reaction potentials. In a particular coordinate
system suited to the (exterior) near zone of the source (and constructed
in paper~I), the antisymmetric waves define a radiation reaction
{\it tensor} potential in the linearized theory, generalizing the
radiation reaction scalar potential of Burke and Thorne
\cite{Bu69,Th69,Bu71}.
Working to non-linear orders in the post-Minkowskian approximation, we
introduced in paper~I a particular decomposition of the retarded
integral into the sum of an ``instantaneous" integral, and an
homogeneous solution composed of antisymmetric waves (in the same
sense as in the linearized theory). The latter waves are associated
with radiation reaction effects of non-linear origin. For instance,
they contain the non-linear tail contribution obtained previously
\cite{BD88}. At the 1PN order, the non-linear effects lead simply to
a re-definition of the multipole moments which parametrize the
linearized radiation reaction potential. However, the radiation
reaction potential at 1PN order has been derived only in the external
field. Thus, it was emphasized in paper~I that in order to
meaningfully interpret the physical effects of radiation reaction, it
is necessary to complete this derivation by an explicit matching to
the field inside the source.
We perform the relevant matching (at 1PN order) in the present paper.
Namely, we obtain a solution for the field inside the source
(satisfying the non-vacuum field equations), which can be transformed
by means of a suitable coordinate transformation (in the exterior
near-zone of the source) into the exterior field determined in paper
I. The matching yields in particular the multipole moments
parametrizing the reaction potential as explicit integrals over the
matter fields in the source. As the exterior field satisfies
physically sensible boundary conditions at infinity (viz the
no-incoming radiation condition imposed at past-null infinity), the
1PN-accurate radiation reaction potentials are, indeed, appropriate
for the description of the dynamics of an isolated system.
To 1PN order beyond the Burke-Thorne term, the reaction
potential involves a scalar potential, depending on the mass-type
quadrupole and octupole moments of the source, and a vectorial potential,
depending in particular on the current-type quadrupole moment. The
existence of such vectorial component was first noticed in the
physically restricting case where the dominant quadrupolar radiation is
suppressed \cite{BD84}.
A different approach to the problem of radiation reaction has been
proposed by Iyer and Will \cite{IW93,IW95} in the case of binary systems
of point particles. The expression of the radiation reaction force is
deduced, in this approach, from the {\it assumption} that the balance
equations for energy and angular momentum are correct (the angular
momentum balance equation being necessary for non-circular orbits).
Iyer and Will determine in this way the 2.5PN and 3.5PN approximations
in the equations of motion of the binary, up to exactly the freedom left
by the un-specified coordinate system. They also check that the
1PN-accurate radiation reaction potentials of the present paper (and
paper~I) correspond in their formalism, when specialized to binary
systems, to a unique and consistent choice of a coordinate system. This
represents a non-trivial check of the validity of the
1PN reaction potentials.
In the present paper we prove that the 1PN-accurate radiation reaction
force in the equations of motion of a general system extracts energy,
linear momentum and angular momentum from the system at the same rate as
given by the (known) formulas for the corresponding radiation fluxes at
infinity. The result is extended to include the tails at 1.5PN order.
Thus we prove the validity up to the 1.5PN order of the energy and
momenta balance equations (which were previously known to hold at
Newtonian order, and for the specific effects of tails at 1.5PN order).
Of particular interest is the loss of linear momentum, which can be
viewed as a ``recoil'' of the center of mass of the source in reaction
to the wave emission. This effect is purely due to the 1PN corrections
in the radiation reaction potential, and notably to its vectorial
component (the Newtonian reaction potential predicts no recoil). Numerous
authors have obtained this effect by computing the {\it flux} of linear
momentum at infinity, and then by relying on the balance equation to get
the actual recoil \cite{BoR61,Pa71,Bek73,Press77}. Peres \cite{Peres62}
made a direct computation of the linear momentum loss in the source, but
limited to the case of the linearized theory. Here we prove
the balance equation for linear momentum in the full non-linear theory.
The results of this paper apply to a weakly self-gravitating system.
The case of a source made of strongly self-gravitating (compact)
objects is {\it a priori} excluded. However, the theoretical works on
the Newtonian radiation reaction in the binary system PSR~1913+16
\cite{DD81a,DD81b,D82,D83a} have shown that some ``effacement'' of the
internal structure of the compact bodies is at work in general
relativity. Furthermore, the computation of the radiation reaction at
1PN order in the case of two point-masses \cite{IW93,IW95} has shown
agreement with a formal reduction, by means of $\delta$-functions, of
the 1PN radiation reaction potentials, initially derived in this paper
only in the case of weakly self-gravitating systems. These works give
us hope that the results of this paper will remain unchanged in the
case of systems containing compact objects. If this is the case, the
present derivation of the 1.5PN balance equations constitutes a clear
support of the usual way of computing the orbital phase evolution of
inspiralling compact binaries
\cite{3mn,FCh93,CF94,P93,CFPS93,TNaka94,Sasa94,TSasa94,P95,BDI95,BDIWW95,WWi96,B96pn}.
The plan of this paper is the following. The next section (II) is
devoted to several recalls from paper I which are necessary in order
that the present paper be essentially self-contained. In Section III
we obtain, using the matching procedure, the gravitational field
inside the source, including the 1PN reactive contributions. Finally,
in Section IV, we show that the latter reactive contributions, when
substituted into the local equations of motion of the source, yield
the expected 1PN and then 1.5PN balance equations for energy and
momenta.
\section{Time-asymmetric structure of gravitational radiation}\label{sec:2}
\subsection{Antisymmetric waves in the linearized metric}
Let $D_e=\{({\bf x},t), |{\bf x}|>r_e\}$ be the domain exterior to the
source, defined by $r_e>a$, where $a$ is the radius of the
source. We assume that the gravitational field is weak everywhere,
inside and outside the source. In particular $a\gg GM/c^2$, where $M$ is
the total mass of the source. Let us consider, in $D_e$, the
gravitational field at the first approximation in a non-linearity
expansion. We write the components $h^{\mu\nu}$ of the
deviation of the metric density from the Minkowski metric
$\eta^{\mu\nu}$ in the form \cite{N}
\begin{equation}
h^{\mu\nu} \equiv \sqrt{-g} g^{\mu\nu} - \eta^{\mu\nu} =
Gh^{\mu\nu}_{(1)} + O (G^2)\ , \label{eq:2.1}
\end{equation}
where the coefficient of the Newton constant $G$ represents the linearized
field $h_{(1)}^{\mu\nu}$, satisfying the vacuum
linearized field equations in $D_e$,
\begin{equation}
\Box h^{\mu\nu}_{(1)} = \partial^\mu \partial_\lambda h^{\lambda\nu}_{(1)}
+ \partial^\nu \partial_\lambda h^{\lambda\mu}_{(1)}
- \eta^{\mu\nu} \partial_\lambda \partial_\sigma h^{\lambda\sigma}_{(1)}
\ .\label{eq:2.2}
\end{equation}
We denote by $\Box\equiv\eta^{\mu\nu}\partial_\mu\partial_\nu$ the
flat space-time d'Alembertian operator.
The general solution
of the equations (\ref{eq:2.2}) in $D_e$ can be parametrized
(modulo an arbitrary linearized coordinate transformation)
by means of two and only two sets of multipole moments, referred to as the
mass-type moments, denoted $M_L$, and the current-type moments, $S_L$
\cite{Th80}. The capital letter $L$ represents a multi-index composed of
$l$ indices, $L=i_1i_2\cdots i_l$ (see \cite{N} for our notation
and conventions). The multipolarity of the moments is $l\geq 0$
in the case of the mass moments, and $l\geq 1$ in the case of the current
moments. The $M_L$'s and $S_L$'s are symmetric and trace-free
(STF) with respect to their $l$
indices. The lowest-order moments $M$, $M_i$ and $S_i$ are constant, and
equal respectively to the total constant mass (including the energy of the
radiation to be emitted), to the position of the center of
mass times the mass, and to the total angular momentum of the source.
The higher-order moments, having $l\geq 2$, are arbitrary functions of time,
$M_L(t)$ and $S_L(t)$, which encode all the physical properties of the
source as seen in the exterior (linearized) field. In terms
of these multipole moments, the ``canonical'' linearized solution of
Thorne \cite{Th80} reads
\begin{mathletters}
\label{eq:2.3}
\begin{eqnarray}
h^{00}_{\rm can(1)} &&= -{4 \over c^2}
\sum_{l \geq 0}{(-)^{l} \over l !}\partial_ L
\left[{1\over r} M_L\left(t- {r \over c}\right)\right]\ ,\label{eq:2.3a}\\
h^{0i}_{\rm can(1)} &&= {4 \over c^3} \sum_{l \geq 1} {(-)^{l} \over
l !} \partial_{L-1} \left[ {1\over r} M^{(1)}_{iL-1} \left(t- {r
\over c} \right) \right] \nonumber \\ && + {4 \over c^3} \sum_{l
\geq 1} {(-)^{l} l \over (l +1)!} \varepsilon_{iab}\partial_{aL-1}
\left[ {1\over r} S_{bL-1} \left(t- {r\over c}\right)\right]\ ,
\label{eq:2.3b} \\
h^{ij}_{\rm can(1)} &&= - {4 \over c^4}
\sum_{l \geq 2}{(-)^{l} \over l!}\partial_{L-2}
\left[ {1\over r} M^{(2)}_{ijL-2}
\left(t- {r \over c} \right) \right] \nonumber \\
&& - {8 \over c^4} \sum_{l \geq 2} {(-)^{l} l \over (l +1)!}
\partial_{aL-2} \left[{1\over r}\varepsilon_{ab(i} S^{(1)}_{\!j)bL-2}
\left(t- {r \over c} \right) \right] \ . \label{eq:2.3c}
\end{eqnarray}
\end{mathletters}
This solution satisfies (\ref{eq:2.2}) and the condition of harmonic
coordinates (i.e. $\partial_\nu h^{\mu\nu}_{\rm can(1)} =0$). Here we
impose that the multipole moments $M_L(t)$ and $S_L(t)$ are constant
in the remote past, before some finite instant $-\cal T$ in the
past. With this assumption the linearized field (\ref{eq:2.3}) (and
all the subsequent non-linear iterations built on it) is stationary in
a neighborhood of past-null infinity and spatial infinity (it
satisfies time-asymmetric boundary conditions in space-time). This
ensures that there is no radiation incoming on the system, which would
be produced by some sources located at infinity.
In paper I (Ref. \cite{B93}) the contribution in (\ref{eq:2.3}) which
changes sign when we reverse the condition of retarded potentials to the
advanced condition was investigated. This contribution is
obtained by replacing each retarded wave in (\ref{eq:2.3}) by the
corresponding antisymmetric wave, half
difference between the retarded wave and the corresponding advanced one.
The antisymmetric wave changes sign when we reverse the time evolution
of the moments $M_L(t)$ and $S_L(t)$, say $M_L(t)\to M_L(-t)$, and
evaluate afterwards the wave at the reversed time $-t$.
Thus, (\ref{eq:2.3}) is decomposed as
\begin{equation}
h^{\mu\nu}_{{\rm can}(1)} =
\left( h^{\mu\nu}_{{\rm can}(1)} \right)_{\rm sym} + \left(
h^{\mu\nu}_{{\rm can}(1)} \right)_{\rm antisym}\ . \label{eq:2.4}
\end{equation}
The symmetric part is given by
\FL
\begin{mathletters}
\label{2.5}
\begin{eqnarray}
\left( h^{00}_{{\rm can}(1)} \right)_{\rm sym} =&& - {4 \over c^2}
\sum_{l \geq 0} {(-)^l \over l!}
\partial_L \left\{{ M_L (t-r/c) + M_L(t+r/c) \over 2r }\right\}
\ , \label{eq:2.5a} \\
\left( h^{0i}_{{\rm can}(1)} \right)_{\rm sym} =&& {4 \over c^3}
\sum_{l \geq 1} {(-)^l \over l!}
\partial_{L-1} \left\{{ M^{(1)}_{iL-1} (t-r/c) +
M^{(1)}_{iL-1} (t+r/c) \over 2r } \right\} \nonumber \\
&& +\, {4 \over c^3} \sum_{l \geq 1} {(-)^l l \over (l+1)!}
\varepsilon_{iab} \partial_{aL-1} \left\{ { S_{bL-1} (t-r/c) +
S_{bL-1} (t+r/c) \over 2r} \right\}\ , \label{eq:2.5b} \\
\left( h^{ij}_{{\rm can} (1)} \right)_{\rm sym} =&& - {4 \over c^4}
\sum_{l \geq 2} {(-)^l \over l!}
\partial_{L-2} \left\{{ M^{(2)}_{ijL-2} (t-r/c) +
M^{(2)}_{ijL-2} (t+r/c) \over 2r } \right\} \nonumber \\
&& - {8 \over c^4} \sum_{l \geq 2} {(-)^l l \over (l+1)!}
\partial_{aL-2} \left\{ \varepsilon_{ab(i}
{S^{(1)}_{j)bL-2}(t-r/c) + S^{(1)}_{j)bL-2} (t+r/c) \over 2r} \right\}\ .
\label{eq:2.5c}
\end{eqnarray}
\end{mathletters}
The antisymmetric part is given similarly. However, as shown in paper I, it
can be re-written profitably in the equivalent form
\begin{equation}
\left( h^{\mu\nu}_{{\rm can} (1)} \right)_{\rm antisym} =
- {4\over Gc^{2+s}} V^{\mu\nu}_{\rm reac} -\partial^\mu
\xi^\nu -\partial^\nu \xi^\mu +\eta^{\mu\nu}\partial_\lambda \xi^\lambda
\ . \label{eq:2.6}
\end{equation}
The second, third and fourth terms clearly represent a linear gauge
transformation, associated with the gauge vector $\xi^\mu$.
This vector is made of antisymmetric waves, and reads
\FL
\begin{mathletters}
\label{eq:2.7}
\begin{eqnarray}
\xi^0 =&& {2 \over c} \sum_{l \geq 2}
{(-)^l \over l!} { 2 l +1 \over l(l-1)} \partial_L
\left\{{ M^{(-1)}_L (t-r/c) - M^{(-1)}_L (t+r/c) \over 2r } \right\}
\ , \label{eq:2.7a} \\
\xi^i =&& - 2 \sum_{l \geq 2} {(-)^l \over l!}
{(2l+1) (2l+3) \over l(l-1)} \partial_{iL}
\left\{{ M^{(-2)}_L (t-r/c) - M^{(-2)}_L (t+r/c) \over 2r } \right\}
\nonumber \\
&& + {4 \over c^2} \sum_{l \geq 2} {(-)^l \over l!}
{2l+1 \over l-1} \partial_{L-1}
\left\{{ M_{iL-1}(t-r/c) - M_{iL-1}(t+r/c) \over 2r }\right\}\nonumber \\
&& +\, {4 \over c^2} \sum_{l \geq 2} {(-)^ll \over (l+1)!}
{2l+1 \over l-1} \varepsilon_{iab} \partial_{aL-1}
\left\{{ S^{(-1)}_{bL-1} (t-r/c) - S^{(-1)} _{bL-1} (t+r/c) \over 2r }
\right\} \ . \label{eq:2.7b}
\end{eqnarray}
\end{mathletters}
Note that even though we have introduced first and second time-antiderivatives
of the multipole moments, denoted e.g. by
$M_{L}^{(-1)}(t)$ and $M_{L}^{(-2)}(t)$,
the dependence of (\ref{eq:2.7}) on the multipole moments in fact involves
only the time interval between $t-r/c$ and $t+r/c$ (see paper~I).
The first term in (\ref{eq:2.6}) defines, for our purpose, a
radiation reaction tensor potential $V^{\mu\nu}_{\rm reac}$ in the
linearized theory (in this term $s$ takes the values $0,1,2$ according
to $\mu\nu=00,0i,ij$). The components of this potential
are given by \cite{N}
\FL
\begin{mathletters}
\label{eq:2.8}
\begin{eqnarray}
V^{00}_{\rm reac} &&= G
\sum_{l \geq 2} {(-)^l \over l!} {(l+1) (l+2) \over l(l-1)}
\hat\partial_L \left\{{ M_L (t-r/c) - M_L(t+r/c) \over 2r }\right\}
\ , \label{eq:2.8a} \\
V^{0i}_{\rm reac} &&= -c^2 G
\sum_{l \geq 2} {(-)^l \over l!} {(l+2) (2l+1) \over l(l-1)}
\hat \partial_{iL} \left\{{ M^{(-1)} _L (t-r/c) -
M^{(-1)} _L (t+r/c) \over 2r } \right\} \nonumber \\
&& + G \sum_{l \geq 2} {(-)^l l \over (l+1)!}
{l+2 \over l-1} \varepsilon_{iab} \hat\partial_{aL-1}
\left\{ { S_{bL-1} (t-r/c) - S_{bL-1} (t+r/c) \over 2r} \right\}\ ,
\label{eq:2.8b} \\
V^{ij}_{\rm reac} &&= c^4 G
\sum_{l \geq 2} {(-)^l \over l!} {(2l+1) (2l+3) \over l(l-1)}
\hat \partial_{ijL} \left\{{ M^{(-2)}_L(t-r/c) -
M^{(-2)} _L (t+r/c) \over 2r } \right\} \nonumber \\
&& -2c^2 G \sum_{l \geq 2} {(-)^l l \over (l+1)!} {2l+1 \over l-1}
\varepsilon_{ab(i} \hat \partial_{j)aL-1} \left\{ {
S^{(-1)}_{bL-1} (t-r/c) - S^{(-1)}_{bL-1} (t+r/c) \over 2r } \right\}
\ \label{eq:2.8c}
\end{eqnarray}
\end{mathletters}
(see Eqs. (2.19) of paper I).
By adding the contributions of the
gauge terms associated with (\ref{eq:2.7}) to the radiation reaction potential
(\ref{eq:2.8}) one reconstructs precisely, as stated by (\ref{eq:2.6}), the
antisymmetric part $(h^{\mu\nu}_{\rm can (1)})_{\rm antisym}$ of the
linearized field (\ref{eq:2.3}).
The scalar, vector, and tensor components of (\ref{eq:2.8}) generalize,
within the linearized theory, the Burke-Thorne \cite{Bu69,Th69,Bu71}
scalar potential by taking into account all multipolarities of waves,
and, in principle, all orders in the post-Newtonian expansion. Actually
a full justification of this assertion would necessitate a matching to
the field inside the source, such as the one we perform in this paper at
1PN order. At the ``Newtonian'' order, the $00$ component of the potential
reduces to the Burke-Thorne potential,
\begin{equation}
V^{00}_{\rm reac} = - {G\over 5c^5} x^i x^j M^{(5)}_{ij} (t)
+ O \left( {1\over c^7} \right)\ . \label{eq:2.9}
\end{equation}
At this order the $0i$ and $ij$ components make negligible
contributions. Recall that a well-known property of the Burke-Thorne
reactive potential is to yield an energy loss in agreement with the
Einstein quadrupole formula, even though it is derived, in this
particular coordinate system, within the linearized theory (see
\cite{WalW80}). In this paper we shall show that the same property
remains essentially true at the 1PN order. [This property is in general
false for other reactive potentials, valid in other coordinate systems, for
which the non-linear contributions play an important role.]
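For orientation, let us recall how this elementary Newtonian check proceeds
(it is not needed for the 1PN analysis below). The reactive acceleration
following from (\ref{eq:2.9}) is
$\partial_i V^{00}_{\rm reac} = -\,(2G/5c^5)\, x^j M^{(5)}_{ij}(t)$, so that
the work done on the fluid per unit time is
\[
\frac{dE}{dt} \;=\; \int d^3{\bf x}\; \rho\, v^i\, \partial_i V^{00}_{\rm reac}
\;=\; -\,\frac{2G}{5c^5}\, M^{(5)}_{ij}\int d^3{\bf x}\; \rho\, v^i x^j
\;=\; -\,\frac{G}{5c^5}\, M^{(5)}_{ij}\, M^{(1)}_{ij}\;,
\]
where the continuity equation gives
$\int d^3{\bf x}\,\rho\,(v^i x^j + x^i v^j) =
{d\over dt}\int d^3{\bf x}\,\rho\, x^i x^j$, and only the trace-free part of
$\int\rho\, x^i x^j$ (i.e. $M_{ij}$ at this order) contributes since
$M^{(5)}_{ij}$ is trace-free. Averaging over several periods,
$\langle M^{(5)}_{ij} M^{(1)}_{ij}\rangle =
\langle M^{(3)}_{ij} M^{(3)}_{ij}\rangle$, and one recovers the Einstein
quadrupole formula
$\langle dE/dt\rangle = -\,(G/5c^5)\,\langle M^{(3)}_{ij} M^{(3)}_{ij}\rangle$.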
Evaluating the reaction potential $V^{\mu\nu}_{\rm reac}$ at the 1PN
order beyond (\ref{eq:2.9}), we find that both the
$00$ and $0i$ components are to be considered, and are
given by
\FL
\begin{mathletters}
\label{eq:2.10}
\begin{eqnarray}
V^{00}_{\rm reac} = && - {G\over 5c^5} x^a x^b M^{(5)}_{ab} (t) +
{G\over c^7} \left[ {1\over 189} x^a x^b x^c M^{(7)}_{abc} (t)
- {1\over 70} r^2 x^a x^b M^{(7)}_{ab} (t)
\right] + O\left( {1\over c^9} \right)\ , \label{eq:2.10a} \\
V^{0i}_{\rm reac} = && {G\over c^5} \left[ {1\over 21} \hat x^{iab}
M^{(6)}_{ab} (t)-{4\over 45} \varepsilon_{iab} x^a x^c S^{(5)}_{bc} (t)
\right] + O\left( {1\over c^7} \right) \ . \label{eq:2.10b}
\end{eqnarray}
\end{mathletters}
At this order the $ij$ components of the potential can be neglected.
In the next subsection we address the question of the corrections to the
1PN reaction potential (\ref{eq:2.10}) which arise from the
non-linear contributions to the exterior field. Answering this question
means controlling the non-linear metric at the 3.5PN order.
\subsection{The 3.5PN approximation in the exterior metric}
The radiation reaction potential (\ref{eq:2.8})-(\ref{eq:2.10}) represents
the antisymmetric part of the linearized metric in a particular coordinate
system, obtained
from the initial harmonic coordinate system in which
(\ref{eq:2.3}) holds by applying the gauge transformation associated
with (\ref{eq:2.7}). In this coordinate system, the new
linearized metric reads
\begin{equation}
h^{\mu\nu}_{(1)} = h^{\mu\nu}_{\rm can(1)} + \partial^\mu \xi^\nu
+ \partial^\nu \xi^\mu - \eta^{\mu\nu} \partial_\lambda \xi^\lambda\ .
\label{eq:2.11}
\end{equation}
It fulfills, of course, the linearized equations (\ref{eq:2.2}).
Furthermore, since $\Box \xi^\mu =0$, the harmonic coordinate condition
is still satisfied. However, we recall from paper I that the latter
new harmonic coordinate system is not completely satisfying for general
purposes, and should be replaced by a certain modified (non-harmonic)
coordinate system. The reason is that the gauge vector defined by
(\ref{eq:2.7}) is made of antisymmetric waves, and consequently the
metric (\ref{eq:2.11}) contains both retarded {\it and} advanced waves.
In particular, the metric is no longer stationary in the remote past
(before the instant $-{\cal T}$ where the moments are assumed to be
constant). Of course, the advanced waves which have been introduced are
pure gauge. Nevertheless, the non-stationarity of the linearized metric
in the remote past breaks one of our initial assumptions, and this can
be a source of problems when performing the non-linear iterations of the
metric by this method. Therefore, it was found necessary in paper I to
replace the gauge vector (\ref{eq:2.7}) by a modified gauge vector, such
that the modified coordinate system has two properties. First, it
reduces in the near-zone of the source (the domain $D_i$ defined in
Sect.~III), to the un-modified coordinate system given by (\ref{eq:2.7}),
with a given but {\it arbitrary} post-Newtonian precision. Second, it
reduces {\it exactly} to the initial harmonic coordinate system in which the
linearized metric is (\ref{eq:2.3}) in a domain exterior to some {\it
time-like} world tube surrounding the source (in fact, a future-oriented
time-like cone whose vertex is at the event $t={\cal -T}$, ${\bf x}={\bf
0}$). This modified coordinate system is non-harmonic. It has been
suggested to the author by T. Damour in a private communication, and is
defined in Sect. II.C of paper I. By the first property, we see that
if we are interested to the field in the near-zone of the source, and
work at some finite post-Newtonian order (like the 1PN order
investigated in this paper), we can make all computations using the
un-modified gauge vector (\ref{eq:2.7}). Indeed, it suffices to adjust
a certain constant, denoted $K$ in paper I, so that the modified gauge
vector agrees with (\ref{eq:2.7}) with a higher post-Newtonian
precision. Thus, in the present paper, we shall not use explicitly the
modified coordinate system. By the second property, we see that the
standard fall-off behavior of the metric at the various infinities
(notably the standard no-incoming radiation condition at past-null
infinity) is preserved in the modified coordinate system. By
these two properties, one can argue that all that matters is the {\it
existence} of such a modified coordinate system, which permits us to
make the connection with the field at infinity, but that in all
practical computations of the metric in the near-zone one can use the
un-modified coordinate system defined by (\ref{eq:2.7}).
Based on the linearized metric (\ref{eq:2.11}) (or, rather, on the modified
linearized metric (2.29) of paper~I) we built a full non-linear expansion,
\begin{equation}
h^{\mu\nu} = G h^{\mu\nu}_{(1)} + G^2 h^{\mu\nu}_{(2)}
+ G^3 h^{\mu\nu}_{(3)} + ... \ , \label{eq:2.12}
\end{equation}
satisfying the vacuum field equations in a perturbative sense
(equating term by term the coefficients of equal powers of $G$ in both
sides of the equations). The non-linear coefficients
$h_{(2)}^{\mu\nu}$, $h_{(3)}^{\mu\nu}$, ... are like
$h_{(1)}^{\mu\nu}$ in the form of multipole expansions parametrized by
$M_L$ and $S_L$. The construction of the non-linear metric is based on
the method of \cite{BD86}. As we are considering multipole expansions
valid in $D_e$ (and singular at the spatial origin $r=0$), we need to
use at each non-linear iteration of the field equations a special
operator generalizing the usual retarded integral operator when acting
on multipole expansions. We denote this operator by ${\cal
F}\Box^{-1}_R$, to mean the ``Finite part of the retarded integral
operator'' (see \cite{BD86} for its precise definition). The
non-linear coefficients $h_{(2)}^{\mu\nu}$, $h_{(3)}^{\mu\nu}$, ...
are given by
\begin{mathletters}
\label{eq:2.13}
\begin{eqnarray}
h^{\mu\nu}_{(2)} &=& {\cal F}\ \Box^{-1}_R \Lambda_{(2)}^{\mu\nu} (h_{(1)})
+ q^{\mu\nu}_{(2)}\ , \label{eq:2.13a} \\
h^{\mu\nu}_{(3)} &=& {\cal F}\ \Box^{-1}_R \Lambda_{(3)}^{\mu\nu}
(h_{(1)}, h_{(2)})
+ q^{\mu\nu}_{(3)}\ , \quad ...\ , \label{eq:2.13b}
\end{eqnarray}
\end{mathletters}
where the non-linear source terms $\Lambda^{\mu\nu}_{(2)}$,
$\Lambda^{\mu\nu}_{(3)}$,~... represent the field non-linearities in
vacuum, and depend, at each non-linear order, on the
coefficients of the previous orders. The second
terms $q^{\mu\nu}_{(2)}$, $q^{\mu\nu}_{(3)}$,~... ensure the
satisfaction of the harmonic coordinate condition at each non-linear
order (see \cite{BD86}).
When investigating the 3.5PN approximation, we can disregard purely
non-linear effects, such as the tail effect, which give irreducibly
non-local contributions in the metric inside the source. These
effects arise at the 4PN approximation (see \cite{BD88} and Sect.~IV
below). Still there are some non-linear contributions in the metric at
the 3.5PN approximation, which are contained in the first two
non-linear coefficients $h_{(2)}^{\mu\nu}$ and $h_{(3)}^{\mu\nu}$
given by (\ref{eq:2.13}). These contributions involve some non-local
integrals, but which ultimately do not enter the inner metric (after
matching). As shown in paper I, the contributions due to
$h^{\mu\nu}_{(2)}$ and $h^{\mu\nu}_{(3)}$ in the 1PN radiation
reaction potential imply only a modification of the multipole moments
$M_L$ and $S_L$ parametrizing the potential. We define two new sets
of multipole moments,
\begin{mathletters}
\label{eq:2.14}
\begin{eqnarray}
\widetilde M_L (t) =&& M_L(t) + \left\{
\begin{array}{c}
{G\over c^7} m(t) \ {\rm for} \, l=0 \\ \\
{G\over c^5} m_i (t) \ {\rm for} \, l=1 \\ \\
0 \qquad {\rm for}\ l\geq 2
\end{array} \right\}
+ {G\over c^7} T_L (t) + O \left( {1\over c^8} \right)\ ,
\label{eq:2.14a} \\
\widetilde S_L (t) =&& S_L(t) + \left\{
\begin{array}{c}
{G\over c^5} s_i (t) \ {\rm for} \, l=1 \\ \\
0 \quad {\rm for}\ l\geq 2
\end{array} \right\}
+ O \left( {1\over c^6} \right) \ , \label{eq:2.14b}
\end{eqnarray}
\end{mathletters}
where the functions $m$, $m_i$ and $s_i$ are given by the
non-local expressions
\FL
\begin{mathletters}
\label{eq:2.15}
\begin{eqnarray}
m (t) =&& - {1 \over 5}
\int^t_{- \infty} dv\ M^{(3)}_{ab} (v) M^{(3)}_{ab} (v) +
F (t) \ , \label{eq:2.15a}\\
m_i (t) =&& -{2\over 5} M_a M^{(3)}_{ia} (t)
- {2\over 21c^2} \int^t_{- \infty} dv M^{(3)}_{iab} (v) M^{(3)}_{ab} (v)
\nonumber \\
&& + {1\over c^2} \int^t_{-\infty} dv \int^v_{-\infty} dw
\left[ - {2 \over 63} M^{(4)}_{iab} (w) M^{(3)}_{ab} (w)
- {16 \over 45} \varepsilon_{iab} M^{(3)}_{ac} (w) S^{(3)}_{bc} (w)
\right] + {1\over c^2} G_i (t) \ , \nonumber \\
&& \label{eq:2.15b} \\
s_i (t) =&& - {2 \over 5} \varepsilon_{iab}
\int^t_{- \infty} dv\ M^{(2)}_{ac} (v) M^{(3)}_{bc} (v) + H_i (t) \ .
\label{eq:2.15c}
\end{eqnarray}
\end{mathletters}
The function $T_L (t)$ in (\ref{eq:2.14a}), and the functions $F(t)$,
$G_i(t)$ and $H_i(t)$ in (\ref{eq:2.15}), are some local (or
instantaneous) functions, which do not play a very important role
physically (they are computed in paper~I). Then the radiation
reaction potential at the 1PN order, in the non-linear theory, is
given by the same expression as in (\ref{eq:2.10}), but expressed in
terms of the new multipole moments, say $\widetilde{\cal M}
\equiv \{\widetilde{M}_L, \widetilde{S}_L\}$,
\begin{mathletters}
\label{eq:2.16}
\begin{eqnarray}
V^{\rm reac} [\widetilde {\cal M}] =&& - {G\over 5c^5} x^a x^b
\widetilde M^{(5)}_{ab} (t) + {G\over c^7} \left[ {1\over 189} x^{abc}
\widetilde M^{(7)}_{abc}(t) - {1\over 70} r^2 x^{ab} \widetilde M^{(7)}_{ab}
(t) \right] \nonumber \\
&& + O \left( {1\over c^8} \right) \ ,\label{eq:2.16a} \\
V_i^{\rm reac} [\widetilde {\cal M}] =&& {G\over c^5}
\left[ {1\over 21} \hat x^{iab} \widetilde M^{(6)}_{ab} (t)
- {4\over 45}\varepsilon_{iab} x^{ac} \widetilde S^{(5)}_{bc} (t)
\right] + O \left( {1\over c^6} \right) \ \label{eq:2.16b}
\end{eqnarray}
\end{mathletters}
(see Eq. (3.53) in paper~I). In the considered coordinate system, the
metric, accurate to 1PN order as concerns both the usual non-radiative
effects and the radiation reaction effects, reads
(coming back to the usual covariant metric $g^{\rm ext}_{\mu\nu}$)
\FL
\begin{mathletters}
\label{eq:2.17}
\begin{eqnarray}
g^{\rm ext}_{00} =&& - 1 + {2\over c^2}
\left( V^{\rm ext} [\widetilde{\cal M}] + V^{\rm reac}[\widetilde{\cal M}]
\right)
- {2\over c^4} \left( V^{\rm ext} [\widetilde{\cal M}]
+ V^{\rm reac} [\widetilde{\cal M}] \right)^2 \nonumber \\
&&+{1\over c^6}\,{}_6 g^{\rm ext}_{00}+{1\over c^8}\,{}_8 g^{\rm ext}_{00}
+ O \left( {1\over c^{10}} \right)\ , \label{eq:2.17a} \\
g^{\rm ext}_{0i} =&& - {4\over c^3}
\left( V^{\rm ext}_i[\widetilde{\cal M}]+V_i^{\rm reac}
[\widetilde{\cal M}]\right)
+ {1\over c^5}\,{}_5 g^{\rm ext}_{0i}+{1\over c^7}\,{}_7 g^{\rm ext}_{0i}
+ O \left( {1\over c^9} \right)\ , \label{eq:2.17b} \\
g^{\rm ext}_{ij} =&& \delta_{ij} \left[ 1 + {2\over c^2}
\left( V^{\rm ext}[\widetilde{\cal M}]+V^{\rm reac} [\widetilde{\cal M}]
\right)
\right] + {1\over c^4} {}_4g^{\rm ext}_{ij} + {1\over c^6} {}_6g^{\rm ext}_{ij}
+ O \left( {1\over c^8} \right)\ . \label{eq:2.17c}
\end{eqnarray}
\end{mathletters}
The superscript ext is a reminder that the metric is valid in the exterior
domain $D_e$, and will differ from the inner metric by a coordinate
transformation (see Sect.~III). The Newtonian and 1PN approximations are
entirely contained in the external potentials $V^{\rm ext}$ and $V^{\rm
ext}_i$, given by the multipole expansions of {\it symmetric} waves,
\begin{mathletters}
\label{eq:2.18}
\begin{eqnarray}
V^{\rm ext} [\widetilde{\cal M}] = &&G \sum_{l\geq 0} {(-)^l\over l!}
\partial_L
\left\{ {\widetilde M_L \left( t-{r\over c}\right) + \widetilde M_L
\left( t+{r\over c}\right) \over 2r} \right\}\ , \label{eq:2.18a} \\
V^{\rm ext}_i [\widetilde {\cal M}] =&& -G \sum_{l\geq 1} {(-)^l\over l!}
\partial_{L-1}\left\{ {\widetilde M^{(1)}_{iL-1} \left( t-{r\over c}\right)
+ \widetilde M^{(1)}_{iL-1} \left( t+{r\over c}\right) \over 2r} \right\}
\nonumber \\
&& -G \sum_{l\geq 1} {(-)^ll\over (l+1)!} \varepsilon_{iab}
\partial_{aL-1} \left\{ {\widetilde S_{bL-1} \left( t-{r\over c}\right)
+ \widetilde S_{bL-1}\left( t+{r\over c}\right)\over 2r}\right\}\ .
\label{eq:2.18b}
\end{eqnarray}
\end{mathletters}
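For instance, each term of (\ref{eq:2.18}), when expanded in the near zone,
contains only even powers of $1/c$,
\begin{equation}
{\widetilde M_L \left( t-{r\over c}\right) + \widetilde M_L
\left( t+{r\over c}\right) \over 2r}
= \sum_{k\geq 0} {r^{2k-1}\over (2k)!\, c^{2k}}\, \widetilde M_L^{(2k)} (t)\ ,
\end{equation}
consistent with the fact that the potentials (\ref{eq:2.18}) are by
themselves non-dissipative.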
The 2PN and 3PN approximations are not controlled at this stage; they
are symbolized in (\ref{eq:2.17}) by the terms $c^{-n} {}_ng^{\rm
ext}_{\mu\nu}$. However, these approximations are non-radiative (or
non-dissipative), as are the Newtonian and 1PN approximations (the
1PN, 2PN, and 3PN terms are ``even" in the sense that they yield only
even powers of $1/c$ in the equations of motion). A discussion of the
4PN approximation in the exterior metric can be found in Sect.~III.D
of paper I.
As it stands, the metric (\ref{eq:2.17}) is disconnected from the actual
source of radiation. The multipole moments
$\widetilde M_L$ and $\widetilde S_L$ are left as some un-specified
functions of time. Therefore, in order to determine the radiation
reaction potentials (\ref{eq:2.16}) as some {\it explicit} functionals
of the matter variables, we need to relate the exterior metric
(\ref{eq:2.17}) to a metric valid inside the source, solution of the
non-vacuum Einstein field equations. We perform the relevant
computation in the next section, and obtain the multipole moments
$\widetilde M_L$ and $\widetilde S_L$ as integrals over the source.
\section{The 1PN-accurate radiation reaction potentials} \label{sec:3}
\subsection{The inner gravitational field}
The near-zone of the source is defined in the usual way as being
an inner domain $D_i= \{({\bf x},t), |{\bf x}|<r_i\}$, whose radius
$r_i$ satisfies $r_i>a$ ($D_i$ covers entirely the source), and
$r_i\ll\lambda$ (the domain $D_i$ is of small extent as compared with one
wavelength of the radiation). These two demands are possible
simultaneously when the source is slowly moving, i.e. when there exists
a small parameter of the order of $1/c$ when $c\to \infty$, in
which case we can assume $r_i/\lambda =O(1/c)$. Furthermore, we can
adjust $r_e$ and $r_i$ so that $a<r_e<r_i$ (where $r_e$
defines the external domain $D_e$ in Sect.~II).
In this subsection we present the result of the expression of the
metric in $D_i$, to the relevant approximation. In the next subsection
we prove that this metric matches to the exterior metric reviewed in
Sect.~II. The accuracy of the inner metric is 1PN for the usual
non-radiative approximations, and 1PN beyond the dominant radiation
reaction. Thus, the metric enables one to control, in the equations of
motion, the Newtonian acceleration followed by the first relativistic
correction, which is of order $c^{-2}$ or 1PN, then the dominant
``Newtonian'' radiation reaction, of order $c^{-5}$ or 2.5PN, and
finally the first relativistic 1PN correction in the reaction,
$c^{-7}$ or 3.5PN. The intermediate approximations $c^{-4}$ and
$c^{-6}$ (2PN and 3PN) are left un-determined, like in the exterior
metric (2.17). The inner metric, in $D_i$, reads
\begin{mathletters}
\label{eq:3.1}
\begin{eqnarray}
g^{\rm in}_{00} &=& -1 +{2\over c^2} {\cal V}^{\rm in} - {2\over c^4}
({\cal V}^{\rm in})^2 +
{1\over c^6}{}_6 g^{\rm in}_{00} + {1\over c^8} {}_8 g^{\rm in}_{00}
+ O\left({1\over c^{10}}\right)\ , \label{eq:3.1a} \\
g^{\rm in}_{0i} &=& -{4\over c^3} {\cal V}^{\rm in}_i + {1\over c^5} {}_5
g^{\rm in}_{0i} + {1\over c^7} {}_7 g^{\rm in}_{0i}+ O\left({1\over
c^9}\right)\ ,\label{eq:3.1b} \\
g^{\rm in}_{ij} &=& \delta_{ij} \left(1 +{2\over c^2}{\cal V}^{\rm in}\right)
+ {1\over c^4} {}_4 g^{\rm in}_{ij} + {1\over c^6} {}_6 g^{\rm in}_{ij}
+ O\left({1\over c^8}\right) \ . \label{eq:3.1c}
\end{eqnarray}
\end{mathletters}
It is valid in a particular Cartesian coordinate system $({\bf x},t)$,
which is to be determined by matching. Like in (2.17), the terms
$c^{-n} {}_n g^{\rm in}_{\mu\nu}$ represent the 2PN and 3PN
approximations. Note that these terms depend functionally on the
source's variables through some {\it spatial} integrals, extending
over the whole three-dimensional space, but that they do not involve
any non-local integral in time. These terms are ``instantaneous'' (in
the terminology of \cite{BD88}) and ``even'', so they remain invariant
in a time reversal, and do not yield any radiation reaction
effects. The remainder terms in (\ref{eq:3.1}) represent the 4PN and
higher approximations. Note also that some logarithms of $c$ arise
starting at the 4PN approximation. For simplicity we do not indicate
in the remainders the dependence on $\ln c$. The potentials ${\cal
V}^{\rm in}$ and ${\cal V}^{\rm in}_i$ introduced in (\ref{eq:3.1})
are given, like in (\ref{eq:2.17}), as the linear combination of two
types of potentials. With the notation ${\cal V}^{\rm in}_\mu \equiv
({\cal V}^{\rm in}, {\cal V}^{\rm in}_i)$, where the index $\mu$ takes
the values $0,i$ and where ${\cal V}^{\rm in}_0\equiv {\cal V}^{\rm
in}$, we have
\begin{equation}
{\cal V}^{\rm in}_\mu = V^{\rm in}_\mu [\sigma_\nu] + V^{\rm reac}_\mu
[{\cal I}]\ . \label{eq:3.2}
\end{equation}
The first type of potential, $V^{\rm in}_\mu$, is given by an integral
of the {\it symmetric} potentials, i.e. by the half-sum of the
retarded integral and of the corresponding advanced integral. Our
terminology, which here means something slightly different from that of
Sect.~II, should be clear from the context. We are referring here to
the formal structure of the integral, made of the sum of the retarded
and advanced integrals. However, the real behavior of the symmetric
integral under a time-reversal operation may be more complicated than
a simple invariance. The mass and current densities $\sigma_\mu
\equiv (\sigma ,
\sigma_i)$ of the source are defined by
\begin{mathletters}
\label{eq:3.3}
\begin{eqnarray}
\sigma &\equiv & {T^{00} + T^{kk}\over c^2}\ , \label{eq:3.3a}\\
\sigma_i &\equiv & {T^{0i}\over c}\ , \label{eq:3.3b}
\end{eqnarray}
\end{mathletters}
where $T^{\mu\nu}$ denotes the usual stress-energy tensor of the
matter fields (with $T^{kk}$ the spatial trace $\Sigma \delta_{jk}
T^{jk}$). The powers of $1/c$ in (\ref{eq:3.3}) are such that
$\sigma_\mu$ admits a finite non-zero limit when $c\to +\infty$. The
potentials $V^{\rm in}_\mu$ are given by
\begin{equation}
V^{\rm in}_\mu ({\bf x},t) = {G\over 2} \int {d^3{\bf x}'\over |{\bf
x}-{\bf x}'|} \left[ \sigma_\mu \left({\bf x}',t -{1\over c}
|{\bf x}-{\bf x}'|\right) + \sigma_\mu \left({\bf x}',t +{1\over c}
|{\bf x}-{\bf x}'|\right) \right]\ . \label{eq:3.4}
\end{equation}
To lowest order when $c\to +\infty$, $V^{\rm in}$ reduces to the usual
Newtonian potential, and $V^{\rm in}_i$ to the usual gravitomagnetic
potential. It was noticed in \cite{BD89} that when using the mass
density $\sigma$ given by (3.3a), the first (non-radiative)
post-Newtonian approximation takes a very simple form, involving
simply the square of the potential in the $00$ component of the
metric. See also (4.5) below, where we use the post-Newtonian
expansion $V^{\rm in} = U+ \partial^2_t X/2c^2 + O(c^{-4})$.
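Explicitly, Taylor-expanding the half-retarded-plus-half-advanced integral
(\ref{eq:3.4}) when $c\to +\infty$, the odd powers of $1/c$ cancel pairwise,
and the scalar potential takes the form
\begin{equation}
V^{\rm in} ({\bf x},t) = U ({\bf x},t) + {1\over 2c^2}\, \partial^2_t X ({\bf x},t)
+ O\left({1\over c^4}\right)\ , \qquad
X({\bf x},t) \equiv G \int d^3{\bf x}'\, |{\bf x}-{\bf x}'|\, \sigma ({\bf x}',t)\ ,
\end{equation}
where $U$ denotes the usual Newtonian potential (these potentials are written
out again in (\ref{eq:4.3}) below).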
The fact that the inner metric contains some symmetric integrals, and
therefore some advanced integrals, does not mean that the field
violates the condition of retarded potentials. The metric
(\ref{eq:3.1}) is in the form of a post-Newtonian expansion, which is
valid only in the near zone $D_i$. It is well-known that the
coefficients of the powers of $1/c$ in a post-Newtonian expansion
typically diverge at spatial infinity. This is of no concern to us
because the expansion is not valid at infinity (it would give poor
results when compared to an exact solution). Thus, the symmetric
integral (\ref{eq:3.4}) should more properly be replaced by its formal
post-Newtonian expansion, readily obtained by expanding by means of
Taylor's formula the retarded and advanced arguments when $c \to
+\infty$. Denoting $\partial^{2p}_t \equiv (\partial/\partial
t)^{2p}$ we have
\begin{equation}
V^{\rm in}_\mu ({\bf x},t) = G \sum^{+\infty}_{p=0} {1\over (2p)!c^{2p}}
\int d^3 {\bf x}' |{\bf x}-{\bf x}'|^{2p-1} \partial^{2p}_t \sigma_\mu
({\bf x}',t)\ . \label{eq:3.5}
\end{equation}
This expansion could be limited to the precision indicated in
(\ref{eq:3.1}). It involves (explicitly) only {\it even} powers of
$c^{-1}$, and is thus expected to yield essentially non-dissipative
effects. However, the dependence of $V^{\rm in}_\mu$ on $c^{-1}$ is
more complicated than indicated in (\ref{eq:3.5}). Indeed, the mass
and current densities $\sigma_{\mu}$ depend on the metric
(\ref{eq:3.1}), and thus depend on $c^{-1}$ starting at the
post-Newtonian level. Even more, the densities $\sigma_{\mu}$ and
their time-derivatives do contain, through the contribution of the
reactive potentials $V^{\rm reac}_\mu$ (see below), some odd powers of
$c^{-1}$ which are associated ({\it a priori}) to radiation reaction
effects. These ``odd'' contributions in $V^{\rm in}_\mu$ form an
integral part of the equations of motion of binary systems at the
3.5PN approximation \cite{IW95}, but we shall prove in Sect.~IV that
they do not contribute to the losses of energy and momenta by
radiation at the 1PN order, as they enter the balance equations only
in the form of some total time-derivatives. The secular losses of
energy and momenta are driven by the radiation reaction potentials
$V^{\rm reac}_\mu$, to which we now turn.
The 1PN-accurate reaction potentials $V^{\rm reac}_\mu \equiv (V^{\rm
reac}, V^{\rm reac}_i)$ involve dominantly some odd powers of
$c^{-1}$, which correspond in the metric (\ref{eq:3.1}) to the 2.5PN
and 3.5PN approximations $c^{-5}$ and $c^{-7}$ taking place between
the (non-dissipative) 2PN and 3PN approximations. Since the reactive
potentials are added linearly to the potentials $V^{\rm in}_\mu$, the
simple form mentioned above of the 1PN non-radiative approximation
holds also for the 1PN radiative approximation, in this coordinate
system. The $V^{\rm reac}_\mu$'s are given by exactly the same
expressions as obtained in paper~I for the exterior metric (see
(\ref{eq:2.16}) in Sect.~II), but they depend on some specific
``source'' multipole moments ${\cal I}\equiv \{I_L,J_L\}$ instead of
the unknown multipole moments $\widetilde{\cal M}$. Namely,
\begin{mathletters}
\label{eq:3.6}
\begin{eqnarray}
V^{\rm reac}({\bf x},t)&=& -{G\over 5c^5} x_{ij} I^{(5)}_{ij} (t) +
{G\over c^7} \left[ {1\over 189} x_{ijk} I^{(7)}_{ijk} (t) - {1\over
70} {\bf x}^2 x_{ij} I^{(7)}_{ij} (t) \right] + O \left( {1\over
c^8} \right) \ , \label{eq:3.6a}\\ V_i^{\rm reac}({\bf x},t) &=&
{G\over c^5} \left[ {1\over 21} \hat x_{ijk} I^{(6)}_{jk} (t) -
{4\over 45} \varepsilon_{ijk} x_{jm} J^{(5)}_{km} (t) \right] + O
\left( {1\over c^6} \right)\ ,\label{eq:3.6b}
\end{eqnarray}
\end{mathletters}
where we recall our notation $\hat x_{ijk} = x_{ijk} - {1\over 5} {\bf
x}^2 (\delta_{ij} x_k +\delta_{ik}x_j + \delta_{jk}x_i)$ \cite{N}. The
multipole moments $I_{ij}(t)$, $I_{ijk}(t)$ and $J_{ij}(t)$ are some
explicit functionals of the densities $\sigma_{\mu}$. Only the mass
quadrupole $I_{ij}(t)$ in the first term of $V^{\rm reac}$ needs to be
given at 1PN order. The relevant expression is
\begin{equation}
I_{ij} = \int d^3{\bf x} \left\{ \hat x_{ij} \sigma + {1\over 14c^2}
{\bf x}^2 \hat x_{ij} \partial_t^2 \sigma - {20\over 21c^2} \hat x_{ijk}
\partial_t \sigma_k \right\}\ \label{eq:3.7}
\end{equation}
(see (\ref{eq:3.21a}) for the general expression of $I_L$).
The mass octupole and current quadrupole $I_{ijk}(t)$
and $J_{ij}(t)$
take their standard Newtonian expressions,
\begin{mathletters}
\label{eq:3.8}
\begin{eqnarray}
I_{ijk} &=& \int d^3 {\bf x}\,\hat x_{ijk} \sigma + O \left( {1\over c^2}
\right)\ , \label{eq:3.8a}\\
J_{ij} &=& \int d^3 {\bf x}\, \varepsilon_{km<i} \hat x_{j>k} \sigma_m
\ . \label{eq:3.8b}
\end{eqnarray}
\end{mathletters}
The potentials $V^{\rm reac}$ and $V^{\rm reac}_i$ generalize to 1PN
order the scalar reactive potential of Burke and Thorne
\cite{Bu69,Th69,Bu71}, whose form is that of the first term in
(\ref{eq:3.6a}). The vectorial potential $V^{\rm reac}_i$ enters the
equations of motion at the same 3.5PN order as the 1PN corrections in
$V^{\rm reac}$. The first term which is neglected in $V^{\rm reac}$,
of order $c^{-8}$ or 1.5PN, is due to the tails of waves (see
Sect.~IV.C).
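For orientation, the dominant (Burke--Thorne) term of (\ref{eq:3.6a})
corresponds to the familiar reactive force density at the 2.5PN order,
\begin{equation}
\sigma\, \partial_i V^{\rm reac} = -\, {2G\over 5c^5}\, \sigma\, x_j\,
I^{(5)}_{ij} (t) + O\left({1\over c^7}\right)\ ,
\end{equation}
which will appear as the reactive contribution to the force density entering
the equations of motion in Sect.~IV.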
\subsection{Matching to the exterior field}
In this subsection we prove that the inner metric presented above (i)
satisfies the Einstein field equations within the source (in the
near-zone $D_i$), and (ii) matches to the exterior metric
(\ref{eq:2.17}) in the intersecting region between $D_i$ and the
exterior zone $D_e$ (exterior near-zone $D_i\cap D_e$).
Note that during Proof (i) we do not check any boundary conditions
satisfied by the metric at infinity. Simply we prove that the metric
satisfies the field equations term by term in the post-Newtonian
expansion, but at this stage the metric could be made of a mixture of
retarded and advanced solutions. Only during Proof (ii) does one
check that the metric comes from the re-expansion when $c \to \infty$
of a solution of the Einstein field equations satisfying some relevant
time-asymmetric boundary conditions at infinity. Indeed, the exterior
metric has been constructed in paper~I by means of a post-Minkowskian
algorithm valid all over the exterior region $D_e$, and having a
no-incoming radiation condition built into it (indeed, the exterior
metric was assumed to be stationary in the remote past, see Sect.~II).
The proof that (\ref{eq:3.1}) is a solution admissible
in $D_i$ follows immediately from the particular form
taken by the Einstein field equations when
developed to 1PN order \cite{BD89}
\begin{mathletters}
\label{eq:3.9}
\begin{eqnarray}
\Box \ln (-g^{\rm in}_{00}) &=& {8\pi G\over c^2} \sigma +
O_{\rm even}\left({1\over c^{6}}\right)\ , \label{eq:3.9a} \\
\Box g^{\rm in}_{0i} &=& {16\pi G\over c^3} \sigma_i +
O_{\rm even}\left({1\over c^{5}}\right)\ ,\label{eq:3.9b}\\
\Box g^{\rm in}_{ij} &=& -{8\pi G\over c^2} \delta_{ij} \sigma +
O_{\rm even}\left({1\over c^{4}}\right) \ . \label{eq:3.9c}
\end{eqnarray}
\end{mathletters}
The point is that with the introduction of the logarithm of $-g^{\rm
in}_{00}$ as a new variable in (\ref{eq:3.9a}) the equations at the
1PN order take the form of {\it linear} wave equations. The other
point is that the neglected post-Newtonian terms in (\ref{eq:3.9}) are
``even", in the sense that the {\it explicit} powers of $c^{-1}$ they
contain, which come from the differentiations of the metric with
respect to the time coordinate $x^0=c t$, correspond formally to
integer post-Newtonian approximations (to remember this we have added
the subscript ``even'' on the $O$-symbols). This feature is simply
the consequence of the time-symmetry of the field equations, implying
that to each solution of the equations is associated another solution
obtained from it by a time-reversal.
Because the potentials $V^{\rm in}_\mu$ satisfy exactly $\Box V^{\rm
in}_\mu = - 4 \pi G \sigma_{\mu}$, a consistent solution of
(\ref{eq:3.9}) is easily seen to be given by (\ref{eq:3.1}) in which
the reactive potentials $V^{\rm reac}_\mu$ are set to zero. Now the
equations (\ref{eq:3.9}) are linear wave equations, so we can add
linearly to $V^{\rm in}_\mu$ any homogeneous solution of the wave
equation which is regular in $D_i$. One can check from their
definition (\ref{eq:3.6}) that the reactive potentials $V^{\rm
reac}_\mu$ form such a homogeneous solution, as they satisfy
\begin{mathletters}
\label{eq:3.10}
\begin{eqnarray}
\Box V^{\rm reac} = O\left({1\over c^{8}}\right), \label{eq:3.10a}\\
\Box V^{\rm reac}_i = O\left( {1\over c^{6}} \right) . \label{eq:3.10b}
\end{eqnarray}
\end{mathletters}
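For instance, (\ref{eq:3.10a}) can be checked directly, using
$\Box = \Delta - c^{-2}\partial^2_t$ and the trace-free character of the
moments: $\Delta (x_{ij})\, I^{(5)}_{ij} = 2\delta_{ij} I^{(5)}_{ij} = 0$,
$\Delta (x_{ijk})\, I^{(7)}_{ijk} = 0$, while
$\Delta ({\bf x}^2 x_{ij})\, I^{(7)}_{ij} = 14\, x_{ij} I^{(7)}_{ij}$, so that
\begin{equation}
\Box V^{\rm reac} = {G\over 5c^7}\, x_{ij} I^{(7)}_{ij}
- {G\over 70 c^7}\times 14\, x_{ij} I^{(7)}_{ij}
+ O\left({1\over c^9}\right) = O\left({1\over c^9}\right)\ ,
\end{equation}
in agreement with (\ref{eq:3.10a}).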
So we can add $V^{\rm reac}_\mu$ to $V^{\rm in}_\mu$, defining an
equally consistent solution of (\ref{eq:3.9}), which is precisely
(\ref{eq:3.1}), modulo the error terms coming from (\ref{eq:3.10}) and
which correspond to the neglected 4PN approximation. [Note that
$V^{\rm reac}_\mu$ comes from the expansion of the tensor potential
(\ref{eq:2.8}) of the linearized theory, which satisfies {\it exactly}
the source-free wave equation. It would be possible to define $V^{\rm
reac}_\mu$ in such a way that there are no error terms in
(\ref{eq:3.10}). See (\ref{eq:4.33}) for more precise expressions of
$V^{\rm reac}_{\mu}$, satisfying more precisely the wave equation.]
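To make the verification of (\ref{eq:3.9a}) fully explicit at this order,
note that from (\ref{eq:3.1a}) we have
$-g^{\rm in}_{00} = 1 - 2{\cal V}^{\rm in}/c^2 + 2({\cal V}^{\rm in})^2/c^4
+ O(c^{-6})$, so that the quadratic terms cancel in the logarithm,
\begin{equation}
\ln (-g^{\rm in}_{00}) = -{2\over c^2}\, {\cal V}^{\rm in}
+ O\left({1\over c^6}\right) \qquad \Longrightarrow \qquad
\Box \ln (-g^{\rm in}_{00}) = {8\pi G\over c^2}\, \sigma
+ O\left({1\over c^6}\right)\ ,
\end{equation}
where the second equation follows from $\Box V^{\rm in} = -4\pi G\sigma$
and (\ref{eq:3.10a}).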
With Proof (i) done, we undertake Proof (ii). More precisely, we show
that (\ref{eq:3.1}) differs from the exterior metric (\ref{eq:2.17})
by a mere coordinate transformation in $D_i\cap D_e$. This will be
true only if the multipole moments $\widetilde M_L$ and $\widetilde
S_L$ parametrizing (\ref{eq:2.17}) agree, to the relevant order, with
some source multipole moments $I_L$ and $J_L$. Fulfilling these
matching conditions will ensure (in this approximate framework) the
existence and consistency of a solution of the field equations valid
everywhere in $D_i$ and $D_e$. As recalled in the introduction, this
is part of the method to work out first the exterior metric leaving
the multipole moments arbitrary (paper~I), and then to obtain by
matching the expressions of these moments as integrals over the source
(this paper).
To implement the matching we expand the inner metric (\ref{eq:3.1})
into multipole moments outside the compact support of the source. The
comparison can then be made with (\ref{eq:2.17}), which is already in
the form of a multipole expansion. Only the potentials $V^{\rm
in}_\mu$ need to be expanded into multipoles, as the reactive
potentials $V^{\rm reac}_\mu$ are already in the required form. The
multipole expansion of the retarded integral of a compact-supported
source is well known; e.g., the formula has been obtained in
Appendix B of \cite{BD89} using the STF formalism for spherical
harmonics. The multipole expansion corresponding to an advanced
integral follows simply from the replacement $c\to -c$ in the formula.
The script letter ${\cal M}$ will be used to denote the multipole
expansion. ${\cal M} (V^{\rm in}_\mu)$ reads as
\begin{mathletters}
\label{eq:3.11}
\begin{eqnarray}
{\cal M} (V^{\rm in}) &=& G \sum_{l\geq 0} {(-)^{l} \over l !}
\partial_L \left \{ {F_L (t-r/c) + F_L(t+r/c) \over 2r} \right \}\ ,
\label{eq:3.11a}\\
{\cal M} (V_i^{\rm in}) &=& G \sum_{l\geq 0} {(-)^l \over l !}
\partial_L \left\{ {G_{iL} (t-r/c) +G_{iL}(t+r/c)\over 2r} \right\}
\ , \label{eq:3.11b}
\end{eqnarray}
\end{mathletters}
where $F_L(t)$ and $G_{iL}(t)$ are some tensorial functions of time
given by the integrals
\begin{mathletters}
\label{eq:3.12}
\begin{eqnarray}
F_L (t) &=& \int d^3{\bf x}\, \hat x_L \int^1_{-1} dz\,\delta_l
(z) \sigma ({\bf x},t +z|{\bf x}|/c)\ , \label{eq:3.12a}\\
G_{iL} (t) &=& \int d^3{\bf x}\, \hat x_L \int^1_{-1} dz\,\delta_l
(z) \sigma_i ({\bf x},t +z|{\bf x}|/c)\ . \label{eq:3.12b}
\end{eqnarray}
\end{mathletters}
The function $\delta_l (z)$ appearing here takes into account the delays
in the propagation of the waves inside the source. It reads
\begin{equation}
\delta_l (z) = {(2l+1)!!\over 2^{l+1}l !} (1-z^2)^l\ ;
\quad \int^1_{-1} dz \delta_l (z) = 1 \ \label{eq:3.13}
\end{equation}
(see Eq.~(B.12) in \cite{BD89}). Note that the same functions $F_L(t)$
and $G_{iL}(t)$ parametrize both the retarded and the corresponding
advanced waves in (\ref{eq:3.11}). Indeed $\delta_l (z)$ is an even
function of its variable $z$, so the integrals (\ref{eq:3.12}) are
invariant under the replacement $c\to -c$.
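For the lowest multipolarities the weight function (\ref{eq:3.13}) reads simply
\begin{equation}
\delta_0 (z) = {1\over 2}\ , \qquad \delta_1 (z) = {3\over 4} (1-z^2)\ ,
\qquad \delta_2 (z) = {15\over 16} (1-z^2)^2\ ,
\end{equation}
each of which is normalized to unity on the interval $[-1,1]$.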
Using an approach similar to the one employed in \cite{BD89}, we
perform an irreducible decomposition of the tensorial function
$G_{iL}$ (which is STF with respect to its $l$ indices $L$ but not
with respect to its $l+1$ indices $iL$), as a sum of STF tensors of
multipolarities $l+1$, $l$ and $l-1$. The equation (2.17) in
\cite{BD89} gives this decomposition as
\begin{equation}
G_{iL} = C_{iL} - {l\over l+1}
\varepsilon_{ai<i_l} D_{L-1>a} + {2l-1\over 2l+1}
\delta_{i<i_l} E_{L-1>} \ , \label{eq:3.14}
\end{equation}
where the tensors $C_{L+1}$, $D_L$ and $E_{L-1}$ (which are STF with
respect to all their indices) are given by
\begin{mathletters}
\label{eq:3.15}
\begin{eqnarray}
C_{L+1}(t) &=& \int d^3 {\bf x} \int^1_{-1} dz \delta_l (z)
\hat x_{<L} \sigma_{i_{l+1}>} ({\bf x},t+z|{\bf x}|/c)\ ,\label{eq:3.15a}\\
D_L(t) &=& \int d^3 {\bf x} \int^1_{-1} dz \delta_l (z)
\varepsilon_{ab<i_l} \hat x_{L-1>a} \sigma_b ({\bf x},t+z|{\bf
x}|/c)\ , \label{eq:3.15b}\\ E_{L-1}(t) &=& \int d^3 {\bf x}
\int^1_{-1} dz \delta_l (z) \hat x_{aL-1} \sigma_a ({\bf x},t+z|{\bf
x}|/c)\ . \label{eq:3.15c}
\end{eqnarray}
\end{mathletters}
Then by introducing the new definitions of STF tensors,
\begin{mathletters}
\label{eq:3.16}
\begin{eqnarray}
A_L &=& F_L - {4(2l+1)\over c^2 (l+1)(2l+3)} E^{(1)}_L
\ , \label{eq:3.16a}\\
B_L &=& l\, C_L - {l\over c^2 (l+1)(2l+3)} E^{(2)}_L \ ,
\label{eq:3.16b}
\end{eqnarray}
\end{mathletters}
and by using standard manipulations on STF tensors, we can re-write
the multipole expansions (\ref{eq:3.11}) in the new form
\begin{mathletters}
\label{eq:3.17}
\begin{eqnarray}
{\cal M} (V^{\rm in}) &=& -c \partial_t \phi^0+ G
\sum_{l\geq 0} {(-)^l\over l !} \partial_L \left\{ {A_L(t-r/c)
+A_L (t+r/c)\over 2r} \right\}\ , \label{eq:3.17a}\\
{\cal M} (V^{\rm in}_i) &=& {{c^3} \over 4} \partial_i \phi^0 -G
\sum_{l\geq 1} {(-)^l\over l !} \partial_{L-1} \left\{ {B_{iL-1}
(t-r/c) +B_{iL-1} (t+r/c)\over 2r} \right\} \nonumber \\
&& -G\sum_{l\geq 1} {(-)^l\over l !} {l\over l+1}
\varepsilon_{iab}\partial_{aL-1} \left\{ {D_{bL-1}
(t-r/c) +D_{bL-1} (t+r/c)\over 2r} \right\}\ , \label{eq:3.17b}
\end{eqnarray}
\end{mathletters}
where we denote
\begin{equation}
\phi^0 = - {4G \over {c^3}} \sum_{l\geq 0} {(-)^l\over(l+1)!}
{2l+1\over 2l+3}
\partial_L \left\{ {E_L(t-r/c) +E_L (t+r/c)\over 2r} \right\}\ .
\label{eq:3.18}
\end{equation}
Next the moment $A_L$ is expanded when $c \to \infty$ to 1PN order, and
the moments $B_L$, $D_L$ to Newtonian order. The required formula
is (B.14) in \cite{BD89}, which immediately gives
\begin{mathletters}
\label{eq:3.19}
\begin{eqnarray}
A_L &=& \int d^3 {\bf x} \left\{ \hat x_L \sigma+ {1\over 2c^2(2l+3)}
{\bf x}^2 \hat x_L \partial^2_t \sigma - {4(2l+1)\over c^2(l+1)
(2l+3)} \hat x_{iL} \partial_t \sigma_i \right\} \nonumber\\
&&\qquad\quad + O_{\rm even}\left({1\over c^4} \right)\ ,
\label{eq:3.19a} \\ B_L &=& l \int d^3 {\bf x}\, \hat x_{<L-1}
\sigma_{i_l >} + O_{\rm even}\left({1\over c^2}\right)\ ,
\label{eq:3.19b} \\ D_L &=& \int d^3 {\bf x}\,\varepsilon_{ab<i_l}
\hat x_{L-1>a} \sigma_b + O_{\rm even}\left({1\over c^2} \right) \
. \label{eq:3.19c}
\end{eqnarray}
\end{mathletters}
Here the notation $O_{\rm even}(c^{-n})$ for the post-Newtonian
remainders simply indicates that the whole post-Newtonian expansion is
composed only of even powers of $c^{-1}$, like in (\ref{eq:3.5}) (the
source densities $\sigma_\mu$ being considered to be independent of
$c^{-1}$), as clear from (B.14) in \cite{BD89}. We now transform the
leading-order term in the equation for $B_L$ using the equation of
continuity for the mass density $\sigma$. The Newtonian equation of
continuity suffices for this transformation, but one must be careful
about the higher-order post-Newtonian corrections which involve some
reactive contributions. It can be checked that these reactive
contributions arise only at the order $O(c^{-7})$, so that the
equation of continuity reads, with evident notation, $\partial_t\sigma
+\partial_i\sigma_i = O_{\rm even} (c^{-2}) +O(c^{-7})$. From this
one deduces
\begin{equation}
B_L = {d \over {dt}} \left\{ \int d^3 {\bf x} \hat x_L \sigma
\right\} + O_{\rm even}\left({1\over c^2}\right)+ O\left({1\over
c^7}\right)\ .
\label{eq:3.20}
\end{equation}
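In more detail, this result follows from the STF identity
$\partial_i \hat x_L = l\, \delta_{i<i_l} \hat x_{L-1>}$ together with the
equation of continuity quoted above,
\begin{equation}
{d\over dt} \int d^3{\bf x}\, \hat x_L\, \sigma
= -\int d^3{\bf x}\, \hat x_L\, \partial_i \sigma_i + \ldots
= l \int d^3{\bf x}\, \hat x_{<L-1}\, \sigma_{i_l>} + \ldots\ ,
\end{equation}
which, by (\ref{eq:3.19b}), is equivalent to (\ref{eq:3.20}) (the dots stand
for the remainders $O_{\rm even}(c^{-2}) + O(c^{-7})$).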
All the elements are now in hand to compare, in the exterior
near-zone $D_i\cap D_e$, the metrics (\ref{eq:3.1}) and
(\ref{eq:2.17}). The ``source'' multipole moments ${\cal
I}=\{I_L,J_L\}$ are defined by the dominant terms in (\ref{eq:3.19a})
and (\ref{eq:3.19c}),
\begin{mathletters}
\label{eq:3.21}
\begin{eqnarray}
I_L &\equiv& \int d^3 {\bf x} \left\{ \hat x_L \sigma+ {1\over 2c^2(2l+3)}
{\bf x}^2 \hat x_L \partial^2_t \sigma - {4(2l+1)\over c^2(l+1)
(2l+3)} \hat x_{iL} \partial_t \sigma_i \right\}
\ , \label{eq:3.21a} \\
J_L &\equiv& \int d^3 {\bf x}\,\varepsilon_{ab<i_l} \hat x_{L-1>a} \sigma_b
\ . \label{eq:3.21b}
\end{eqnarray}
\end{mathletters}
The mass-type moment $I_L$ includes 1PN corrections, while the
current-type moment $J_L$ is Newtonian. The mass moment $I_L$ was
obtained in \cite{BD89}, where it was shown to parametrize the
asymptotic metric generated by the source at the 1PN order. When
$l=2$ and $l=3$ we recover the moments introduced in
(\ref{eq:3.7})-(\ref{eq:3.8}). With (\ref{eq:3.19})-(\ref{eq:3.21}),
the multipole expansions (\ref{eq:3.17}) become
\begin{mathletters}
\label{eq:3.22}
\begin{eqnarray}
{\cal M} (V^{\rm in}) &=& -c \partial_t \phi^0+ G \sum_{l\geq 0}
{(-)^l\over l !} \partial_L \left\{ {I_L(t-r/c) +I_L (t+r/c)\over
2r} \right\} + O_{\rm even}\left({1\over c^4} \right)\ ,
\label{eq:3.22a}\\ {\cal M}(V^{\rm in}_i) &=&{{c^3}\over 4}
\partial_i\phi^0 -G \sum_{l\geq 1} {(-)^l\over l !} \partial_{L-1}
\left\{ {{I^{(1)}}_{iL-1} (t-r/c) +{I^{(1)}}_{iL-1} (t+r/c)\over 2r}
\right\} \nonumber \\ && -G\sum_{l\geq 1} {(-)^l\over l !} {l\over
l+1} \varepsilon_{iab}\partial_{aL-1} \left\{ {J_{bL-1} (t-r/c)
+J_{bL-1} (t+r/c)\over 2r} \right\}\nonumber \\ && + O_{\rm
even}\left({1\over c^2}\right)+ O\left({1\over c^7}\right)\ .
\label{eq:3.22b}
\end{eqnarray}
\end{mathletters}
Thus, from the definition (\ref{eq:2.18}) of the external potentials
$V^{\rm ext}_\mu$, we obtain the relationships
\begin{mathletters}
\label{eq:3.23}
\begin{eqnarray}
{\cal M} (V^{\rm in}) &=& -c \partial_t \phi^0 + V^{\rm ext}[{\cal
I}] + O_{\rm even}\left({1\over c^4} \right)\ , \label{eq:3.23a}\\
{\cal M} (V^{\rm in}_i) &=& {{c^3} \over 4} \partial_i \phi^0 +
V^{\rm ext}_i[{\cal I}] + O_{\rm even}\left({1\over c^2}\right) +
O\left({1\over c^7}\right) \ , \label{eq:3.23b}
\end{eqnarray}
\end{mathletters}
from which we readily infer that the multipole expansion ${\cal M}
(g^{\rm in}_{\mu\nu})$
of the metric (\ref{eq:3.1}) reads, in $D_i\cap D_e$,
\begin{mathletters}
\label{eq:3.24}
\begin{eqnarray}
{\cal M}( g^{\rm in}_{00}) +{2 \over c}\partial_t\phi^0 &=& -1
+{2\over c^2} (V^{\rm ext}[{\cal I}] +V^{\rm reac}[{\cal I}]) -
{2\over c^4} (V^{\rm ext}[{\cal I}] +V^{\rm reac}[{\cal I}])^2
\nonumber \\ && \qquad + {1\over c^6} {}_6 {\overline g}_{00}^{\rm in}
+ {1\over c^8} {}_8 {\overline g}_{00}^{\rm in} + O\left({1\over
c^{10}}\right)\ ,\label{eq:3.24a} \\ {\cal M}(g^{\rm in}_{0i}) +
\partial_i \phi^0 &=& -{4\over c^3} (V^{\rm ext}_i[{\cal I}] +V^{\rm
reac}_i[{\cal I}]) + {1\over c^5} {}_5 {\overline g}_{0i}^{\rm in} +
{1\over c^7} {}_7 {\overline g}_{0i}^{\rm in}+ O\left({1\over
c^9}\right)\ ,\label{eq:3.24b} \\ {\cal M}( g^{\rm in}_{ij}) &=&
\delta_{ij} \left[ 1 +{2\over c^2} (V^{\rm ext}[{\cal I}] +V^{\rm
reac}[{\cal I}]) \right] +{1\over c^4} {}_4 {\overline g}_{ij}^{\rm
in} + {1\over c^6} {}_6 {\overline g}_{ij}^{\rm in} +O\left({1\over
c^8}\right) \ . \label{eq:3.24c}
\end{eqnarray}
\end{mathletters}
Clearly the terms depending on $\phi^0$ have the form of an
infinitesimal gauge transformation of the time coordinate. We can
check that the corresponding coordinate transformation can be treated,
to the considered order, in a linearized way (recall from
(\ref{eq:3.18}) that $\phi^0$ is of order $c^{-3}$). Finally, in the
``exterior'' coordinates
\begin{mathletters}
\label{eq:3.25}
\begin{eqnarray}
x_{\rm ext}^0 &=& x^0 +\phi^0 (x^\nu) + O_{\rm even}\left({1\over c^5}\right)
+ O\left({1\over c^9}\right)\ ,\label{eq:3.25a}\\
x_{\rm ext}^i &=& x^i\ +O_{\rm even}\left({1\over c^4}\right)
+O\left({1\over c^8}\right)\ ,\label{eq:3.25b}
\end{eqnarray}
\end{mathletters}
the metric (\ref{eq:3.24}) is transformed into the ``exterior'' metric
\begin{mathletters}
\label{eq:3.26}
\begin{eqnarray}
g^{\rm ext}_{00} &=& -1 +{2\over c^2} (V^{\rm ext}[{\cal I}] +V^{\rm
reac}[{\cal I}]) - {2\over c^4} (V^{\rm ext}[{\cal I}] +V^{\rm
reac}[{\cal I}])^2 \nonumber \\ && \qquad + {1\over c^6} {}_6
\overline g^{\rm ext}_{00} + {1\over c^8} {}_8 \overline g^{\rm
ext}_{00} + O\left({1\over c^{10}}\right)\ , \label{eq:3.26a} \\
g^{\rm ext}_{0i} &=& -{4\over c^3} (V^{\rm ext}_i[{\cal I}] +V^{\rm
reac}_i[{\cal I}]) + {1\over c^5} {}_5 \overline g^{\rm ext}_{0i} +
{1\over c^7} {}_7 \overline g^{\rm ext}_{0i}+ O\left({1\over
c^9}\right)\ , \label{eq:3.26b} \\ g^{\rm ext}_{ij} &=& \delta_{ij}
\left[ 1 +{2\over c^2} (V^{\rm ext}[{\cal I}] +V^{\rm reac}[{\cal
I}]) \right] + {1\over c^4} {}_4 \overline g^{\rm ext}_{ij} +
{1\over c^6} {}_6 \overline g^{\rm ext}_{ij}+ O\left({1\over
c^8}\right) \ . \label{eq:3.26c}
\end{eqnarray}
\end{mathletters}
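As a check, at linear order in $\phi^0$ the shift of the time coordinate
(\ref{eq:3.25a}) changes the covariant metric components according to
\begin{equation}
g^{\rm ext}_{00} = g^{\rm in}_{00} + {2\over c}\, \partial_t \phi^0 + \ldots\ ,
\qquad
g^{\rm ext}_{0i} = g^{\rm in}_{0i} + \partial_i \phi^0 + \ldots\ ,
\end{equation}
the neglected terms being quadratic in $\phi^0$ or beyond the controlled
accuracy; these are precisely the combinations appearing on the left-hand
sides of (\ref{eq:3.24}).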
This metric is exactly identical, as concerns the 1PN, 2.5PN, and
3.5PN approximations, to the exterior metric (\ref{eq:2.17}) obtained
in paper~I, except that here the metric is parametrized by the known
multipole moments ${\cal I}$ instead of the arbitrary moments
$\widetilde {\cal M}$. Thus, we conclude that the two metrics
(\ref{eq:3.1}) and (\ref{eq:2.17}) match in the overlapping region
$D_i\cap D_e$ if (and only if) there is agreement between both types
of multipole moments. This determines $\widetilde M_L$ and
$\widetilde S_L$. More precisely, we find that $\widetilde M_L$ and
$\widetilde S_L$ must be related to $I_L$ and $J_L$ given in
(\ref{eq:3.21}) by
\begin{mathletters}
\label{eq:3.27}
\begin{eqnarray}
\widetilde M_L &=& I_L + O_{\rm even} \left( {1 \over c^4} \right)
+ O\left({1\over c^8}\right)\ , \label{eq:3.27a} \\
\widetilde S_L &=& J_L + O_{\rm even} \left( {1 \over c^2} \right)
+ O\left({1\over c^6}\right) \ , \label{eq:3.27b}
\end{eqnarray}
\end{mathletters}
where as usual the relation for $\widetilde M_L$ is accurate to 1PN
order, and the relation for $\widetilde S_L$ is Newtonian (we also
control the parity of some neglected terms). Satisfying this
matching solves the problem at hand, by showing that the inner metric
(\ref{eq:3.1})-(\ref{eq:3.8}) results from the post-Newtonian
expansion of a solution of the (non-linear) field equations subject to a
condition of no incoming radiation.
We emphasize the dependence of the result on the coordinate system.
Of course, the metric (\ref{eq:3.1}), which contains the reactive
potentials (\ref{eq:3.6})-(\ref{eq:3.8}), is valid only in its own
coordinate system. It is a well-known consequence of the equivalence
principle that radiation reaction forces in general relativity are
inherently dependent on the coordinate system (see e.g. \cite{S83}
for a comparison between various expressions of the radiation reaction
force at the Newtonian order). The coordinate system in which the
reactive potentials (\ref{eq:3.6})-(\ref{eq:3.8}) are valid is defined
as follows. We start from the particular coordinate system in which
the linearized metric reads (\ref{eq:2.3}). Then we apply two
successive coordinate transformations. The first one is associated
with the gauge vector $\xi^\mu$ given by (\ref{eq:2.7}), and the
second one is associated with $\phi^\mu$ whose only needed component
is $\phi^0$ given by (\ref{eq:3.18}). The resulting coordinate system
is the one in which $V^{\rm reac}_\mu$ is valid. (Actually, the gauge
vector $\xi^\mu$ should be modified according to the procedure defined
in Sect. II.C of paper I, so that the good fall-off properties of the
metric at infinity are preserved.)
\section{The balance equations to post-Newtonian order}\label{sec:4}
\subsection{Conservation laws for energy and momenta at 1PN order}
Up to the second post-Newtonian approximation of general relativity,
an isolated system admits some conserved energy, linear momentum, and
angular momentum. These have been obtained, in the case of weakly
self-gravitating fluid systems, by Chandrasekhar and Nutku
\cite{CN69}. The less accurate 1PN-conserved quantities were obtained
before, notably by Fock \cite{Fock}. In this subsection we re-derive,
within the present framework (using in particular the mass density
$\sigma$ defined in (\ref{eq:3.3a})), the 1PN-conserved energy and
momenta of the system. The 1PN energy and momenta are needed in the
next subsection, in which we establish their laws of variation during
the emission of radiation at 1PN order (hence we do not need the more
accurate 2PN-conserved quantities).
To 1PN order the metric (\ref{eq:3.1}) reduces to
\begin{mathletters}
\label{eq:4.1}
\begin{eqnarray}
g^{\rm in}_{00} &=& -1 +{2\over c^2} V^{\rm in} - {2\over c^4}
(V^{\rm in})^2 + O\left({1\over c^{6}}\right)\ , \label{eq:4.1a} \\
g^{\rm in}_{0i} &=& -{4\over c^3} V^{\rm in}_i + O\left({1\over
c^5}\right)\ ,\label{eq:4.1b} \\ g^{\rm in}_{ij} &=& \delta_{ij}
\left(1 +{2\over c^2} V^{\rm in}\right) + O\left({1\over c^4}\right)
\ , \label{eq:4.1c}
\end{eqnarray}
\end{mathletters}
where $V^{\rm in}$ and $V^{\rm in}_i$ are given by (\ref{eq:3.4}).
In fact, $V^{\rm in}$ and $V^{\rm in}_i$ are given by their
post-Newtonian expansions (\ref{eq:3.5}), which can be limited here
to the terms
\begin{mathletters}
\label{eq:4.2}
\begin{eqnarray}
V^{\rm in} &=& U + {1\over 2c^2}\, \partial^2_t X
+ O\left({1\over c^4} \right)\ , \label{eq:4.2a} \\
V^{\rm in}_i &=& U_i + O\left( {1\over c^2} \right)\ , \label{eq:4.2b}
\end{eqnarray}
\end{mathletters}
where the instantaneous (Poisson-like) potentials $U$, $X$ and $U_i$ are
defined by
\begin{mathletters}
\label{eq:4.3}
\begin{eqnarray}
U({\bf x},t) &=& G \int {d^3{\bf x}'\over |{\bf x}-{\bf x}'|} \sigma
({\bf x}',t)\ , \label{eq:4.3a}\\
X({\bf x},t) &=& G \int d^3{\bf x}'|{\bf x}-{\bf x}'| \sigma
({\bf x}',t)\ , \label{eq:4.3b}\\
U_i({\bf x},t) &=& G \int {d^3{\bf x}'\over |{\bf x}-{\bf x}'|} \sigma_i
({\bf x}',t)\ . \label{eq:4.3c}
\end{eqnarray}
\end{mathletters}
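Note in passing the standard Poisson-type relations obeyed by these
potentials (the last one being an elementary consequence of
$\Delta |{\bf x}-{\bf x}'| = 2/|{\bf x}-{\bf x}'|$),
\begin{equation}
\Delta U = -4\pi G\, \sigma\ , \qquad \Delta U_i = -4\pi G\, \sigma_i\ ,
\qquad \Delta X = 2\, U\ ,
\end{equation}
so that $X$ plays the role of a superpotential for the Newtonian potential $U$.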
Since $V^{\rm in}$ is a symmetric integral, there are no terms of
order $c^{-3}$ in (\ref{eq:4.2a}) (such a term would be a simple
function of time in the case of a retarded integral). We shall need
(only in this subsection) a metric whose space-space components $ij$
are more accurate than in (\ref{eq:4.1c}), taking into account the
next-order correction term. We introduce an instantaneous potential
whose source is the sum of the matter stresses, say $\sigma_{ij} =
T^{ij}$, and the (Newtonian) gravitational stresses,
\begin{mathletters}
\label{eq:4.4}
\begin{equation}
P_{ij}({\bf x},t) = G \int {d^3{\bf x}'\over |{\bf x}-{\bf x}'|}
\left[ \sigma_{ij} + {1\over 4\pi G} \left( \partial_i U\partial_j U
- {1\over 2} \delta_{ij} \partial_k U\partial_k U\right) \right]
({\bf x}',t)\ . \label{eq:4.4a}
\end{equation}
The spatial trace $P\equiv P_{ii}$ is
\begin{equation}
P({\bf x},t) = G \int {d^3{\bf x}'\over |{\bf x}-{\bf x}'|}
\left[ \sigma_{ii} - {1\over 2} \sigma U \right]
({\bf x}',t) + {{U^2} \over {4}} \ . \label{eq:4.4b}
\end{equation}
\end{mathletters}
The metric which is accurate enough for our purpose reads, in
terms of the instantaneous potentials (\ref{eq:4.3})-(\ref{eq:4.4}),
\begin{mathletters}
\label{eq:4.5}
\begin{eqnarray}
g^{\rm in}_{00} &=& -1 +{2\over c^2} U + {1\over c^4} [\partial_t^2
X - 2U^2] + O\left( {1\over c^6} \right)\ , \label{eq:4.5a} \\
g^{\rm in}_{0i} &=& -{4\over c^3} U_i
+ O\left( {1\over c^5} \right) \label{eq:4.5b}\ , \\
g^{\rm in}_{ij} &=& \delta_{ij}
\left( 1 +{2\over c^2} U + {1\over c^4} [\partial^2_t X +2U^2] \right)
+ {4\over c^4} [P_{ij} -\delta_{ij} P] +
O\left( {1\over c^6} \right)\ . \label{eq:4.5c}
\end{eqnarray}
The square-root of (minus) the determinant of the metric is
\begin{equation}
\sqrt{-g^{\rm in}} = 1 + {2\over c^2} U + {1\over c^4} [\partial^2_t X
+2U^2 - 4P] + O \left( {1\over c^6} \right)\ . \label{eq:4.5d}
\end{equation}
\end{mathletters}
Consider the local equations of motion of the source, which state the
conservation in the covariant sense of the stress-energy tensor $T^{\mu\nu}$
(i.e. $\nabla_\mu T^\mu_\alpha=0$). These equations, written in a form
adequate for our purpose, are
\begin{equation}
\partial_\mu \Pi^\mu_\alpha = {\cal F}_\alpha \ , \label{eq:4.6}
\end{equation}
where the left-hand-side is the divergence in the ordinary sense of the
material stress-energy density
\begin{equation}
\Pi_\alpha^\mu \equiv \sqrt{-g^{\rm in}}\ g^{\rm in}_{\alpha\nu}
T^{\mu\nu} \ , \label{eq:4.7}
\end{equation}
and where the right-hand-side can be viewed as the four-force density
\begin{equation}
{\cal F}_\alpha \equiv {1\over 2}
\sqrt{-g^{\rm in }}\ T^{\mu\nu} \partial_\alpha
g^{\rm in}_{\mu\nu}\ . \label{eq:4.8}
\end{equation}
The 1PN-conserved energy and momenta follow from integration of these
equations over the ordinary three-dimensional space, which yields the
following three laws (using the Gauss theorem to discard some
divergences of compact-supported terms)
\begin{mathletters}
\label{eq:4.9}
\begin{eqnarray}
{d \over dt}\left\{- \int d^3{\bf x}\, \Pi^0_0\right\} &=& -c \int
d^3{\bf x}\,{\cal F}_0\ , \label{eq:4.9a}\\
{d \over dt}\left\{{1\over c}\int d^3{\bf x}\,\Pi^0_i\right\} &=& \int
d^3{\bf x}\,{\cal F}_i\ , \label{eq:4.9b}\\
{d \over dt}\left\{{1\over c} \varepsilon_{ijk} \int d^3{\bf x}\, x_j
\Pi^0_k\right\} &=& \varepsilon_{ijk}
\int d^3{\bf x}\,\bigl( x_j {\cal F}_k +\Pi_k^j \bigr) \ .\label{eq:4.9c}
\end{eqnarray}
\end{mathletters}
The quantities $\Pi_\alpha^\mu$ and ${\cal F}_\alpha$ are then determined.
With (\ref{eq:4.5}) we
obtain, for the various components of $\Pi_\alpha^\mu$,
\begin{mathletters}
\label{eq:4.10}
\begin{eqnarray}
\Pi^0_0 &=& -\sigma c^2 + \sigma_{ii} + {4\over c^2} [\sigma
P-\sigma_i U_i] + O\left( {1\over c^4} \right)\ ,
\label{eq:4.10a}\\ \Pi^0_i &=& c \sigma_i \left( 1+4 {U\over c^2}
\right) - {4\over c} \sigma U_i + O\left( {1\over c^3} \right)\ ,
\label{eq:4.10b}\\ \Pi^i_0 &=& -c \sigma_i \left( 1-4 {P\over c^4}
\right) - {4\over c^3} \sigma_{ij} U_j + O\left( {1\over c^5}
\right)\ , \label{eq:4.10c}\\ \Pi^i_j &=& \sigma_{ij} \left( 1+4
{U\over c^2} \right) - {4\over c^2} \sigma_i U_j + O\left( {1\over
c^4} \right)\ . \label{eq:4.10d}
\end{eqnarray}
\end{mathletters}
Note that $\Pi_0^i$ is determined with a better precision than
$\Pi_i^0$ (but we shall not need this higher precision and give it for
completeness). For the components of ${\cal F}_\alpha$, we find
\begin{mathletters}
\label{eq:4.11}
\begin{eqnarray}
{\cal F}_0 &=& {1\over c} \sigma \partial_t \left( U +{1\over 2c^2}
\partial_t^2 X\right) - {4\over c^3} \sigma_j \partial_t U_j +
O \left( {1\over c^5} \right)\ , \label{eq:4.11a} \\
{\cal F}_i &=& \sigma \partial_i \left( U +{1\over 2c^2}
\partial_t^2 X\right) - {4\over c^2} \sigma_j \partial_i U_j +
O \left( {1\over c^4} \right)\ . \label{eq:4.11b}
\end{eqnarray}
\end{mathletters}
The $\Pi_\alpha^\mu$'s and ${\cal F}_\alpha$'s can now be substituted
into the integrals on both sides of (\ref{eq:4.9}). This is correct
because the support of the integrals is the compact support of the
source, which is, for a slowly-moving source, entirely located within
the source's near-zone $D_i$, where the post-Newtonian expansion is
valid. Straightforward computations permit us to re-express the
right-hand-sides of (\ref{eq:4.9}) into the form of total
time-derivatives. We do not detail here this computation which is
well-known (at 1PN order), but we present in Sect.~IV.B a somewhat
general formula which can be used to reach elegantly the result (see
Eq.~(4.23)). By transferring the total time-derivatives to the
left-hand-sides of (\ref{eq:4.9}), one obtains the looked-for
conservation laws at 1PN order, namely
\begin{eqnarray}
{d E^{\rm 1PN}\over dt} &=& O\left({1\over c^{4}}
\right)\ ,\label{eq:4.14} \\
{d P_i^{\rm 1PN}\over dt} &=& O\left(
{1\over c^{4}}\right)\ , \label{eq:4.15} \\
{d S_i^{\rm 1PN}\over dt} &=& O\left(
{1\over c^{4}}\right)\ , \label{eq:4.16}
\end{eqnarray}
where the 1PN energy $E^{\rm 1PN}$, linear momentum $P_i^{\rm 1PN}$,
and angular momentum $S_i^{\rm 1PN}$ are given by the
integrals over the source
\FL
\begin{eqnarray}
E^{\rm 1PN} &=& \int d^3 {\bf x} \left\{ \sigma c^2 + {1\over 2}
\sigma U - \sigma_{ii} + {1\over c^2} \biggl[ -4\sigma P + 2\sigma_i
U_i + {1\over 2} \sigma \partial^2_t X -{1\over 4} \partial_t \sigma
\partial_t X \biggr] \right\}\ , \label{eq:4.17} \\ P^{\rm 1PN}_i
&=& \int d^3 {\bf x} \left\{ \sigma_i - {1\over 2c^2} \sigma
\partial_i \partial_t X \right\}\ , \label{eq:4.18} \\ S^{\rm 1PN}_i
&=& \varepsilon_{ijk} \int d^3 {\bf x} x_j \left\{ \sigma_k +
{1\over c^2} \left[ 4 \sigma_k U - 4\sigma U_k - {1\over 2} \sigma
\partial_k \partial_t X \right] \right\}\ . \label{eq:4.19}
\end{eqnarray}
The 1PN energy $E^{\rm 1PN}$ can also be written as \cite{R}
\FL
\begin{eqnarray}
E^{\rm 1PN}&=& \int d^3 {\bf x} \left\{ \sigma c^2 + {1\over 2} \sigma U
- \sigma_{ii} + {1\over c^2} \left[ \sigma U^2 -4\sigma_{ii} U
+2 \sigma_i U_i
+ {1\over 2} \sigma \partial^2_t X
- {1\over 4} \partial_t \sigma \partial_t X \right] \right\}\ .
\nonumber \\
\label{eq:4.15'}
\end{eqnarray}
A similar but more precise computation would yield the 2PN-conserved
quantities \cite{CN69}.
\subsection{Secular losses of the 1PN-accurate energy and momenta}
As the reactive potentials $V^{\rm reac}_\mu$ manifestly change sign
in a time reversal, they are expected to yield dissipative effects in
the dynamics of the system, i.e. secular losses of its total energy,
angular momentum and linear momentum. The ``Newtonian'' radiation
reaction force is known to extract energy from the system at the same
rate as given by the Einstein quadrupole formula, both in the case of
weakly self-gravitating systems
\cite{Bu69,Th69,Bu71,C69,CN69,CE70,EhlRGH,WalW80,BD84,AD75,PaL81,Ehl80,Ker80,Ker80',BRu81,BRu82,S85}
and compact binary systems \cite{DD81a,DD81b,D82}. Similarly the
reaction force extracts angular momentum from the system. As concerns
linear momentum the Newtonian reaction force is not precise enough, and
one needs to go to 1PN order.
In this subsection, we prove that the 1PN-accurate reactive potentials
$V^{\rm reac}_\mu$ lead to decreases of the 1PN-accurate energy and
momenta (computed in (\ref{eq:4.17})-(\ref{eq:4.15'})) which are in
perfect agreement with the corresponding far-zone fluxes, known from
the works \cite{EW75,Th80,BD89} in the case of the energy and angular
momentum, and from the works
\cite{Pa71,Bek73,Press77,Th80} in the case of linear momentum.
We start again from the equations of motion
(\ref{eq:4.6})-(\ref{eq:4.8}), which imply, after spatial integration,
the laws (\ref{eq:4.9}) that we recopy here:
\begin{mathletters}
\label{eq:4.20}
\begin{eqnarray}
{d \over dt}\left\{- \int d^3{\bf x}\, \Pi^0_0\right\} &=& -c \int
d^3{\bf x}\,{\cal F}_0\ , \label{eq:4.20a}\\
{d \over dt}\left\{{1\over c}\int d^3{\bf x}\,\Pi^0_i\right\} &=& \int
d^3{\bf x}\,{\cal F}_i\ , \label{eq:4.20b}\\
{d \over dt}\left\{{1\over c} \varepsilon_{ijk} \int d^3{\bf x}\, x_j
\Pi^0_k\right\} &=& \varepsilon_{ijk}
\int d^3{\bf x}\,\bigl( x_j {\cal F}_k +\Pi_k^j \bigr) \ .\label{eq:4.20c}
\end{eqnarray}
\end{mathletters}
The left-hand-sides are in the form of total time-derivatives. To 1PN
order, we have seen that the right-hand-sides can be transformed into
total time-derivatives, which combine with the left-hand-sides to give
the 1PN-conserved energy and momenta. Here we shall prove that the
contributions due to the reactive potentials in the right-hand-sides
cannot be transformed entirely into total time-derivatives, and that
the remaining terms yield precisely the corresponding 1PN fluxes. The
balance equations then follow (modulo a slight assumption and a
general argument stated below).
The right-hand-sides of (\ref{eq:4.20}) are evaluated by substituting
the metric (\ref{eq:3.1}), involving the potentials ${\cal V}^{\rm in}_\mu
= V^{\rm in}_\mu+ V^{\rm reac}_\mu$. The components of the force
density (\ref{eq:4.8}) are found to be
\begin{mathletters}
\label{eq:4.21}
\begin{eqnarray}
{\cal F}_0 &=& {\sigma\over c} \partial_t {\cal V}^{\rm in}
- {4\over c^3} \sigma_j
\partial_t {\cal V}^{\rm in}_j +
{1 \over c^{5}} {}_5 {\cal F}_0 + {1 \over c^{7}}
{}_7 {\cal F}_0 + O\left({1\over c^{9}}\right) \ ,
\label{eq:4.21a} \\
{\cal F}_i &=& \sigma \partial_i {\cal V}^{\rm in} - {4\over c^2} \sigma_j
\partial_i {\cal V}^{\rm in}_j + {1 \over c^{4}} {}_4 {\cal F}_i
+ {1 \over c^{6}}
{}_6 {\cal F}_i + O\left({1\over c^{8}}\right) \ , \label{eq:4.21b}
\end{eqnarray}
\end{mathletters}
where we have been careful to handle correctly the un-controlled 2PN
and 3PN approximations, which lead to the terms symbolized by the
$c^{-n}{}_n {\cal F}_{\mu}$'s. The equations (\ref{eq:4.21}) reduce
to (\ref{eq:4.11}) at the 1PN approximation. They give the components
of the force as linear functionals of ${\cal V}^{\rm in}$ and ${\cal
V}^{\rm in}_i$. The remainders are 4PN at least.
The same
computation using the stress-energy density (\ref {eq:4.7}) yields the
term which is further needed in (\ref{eq:4.20c}),
\begin{equation}
\varepsilon_{ijk} \Pi^j_k = - {4\over c^2} \varepsilon_{ijk} \sigma_j
{\cal V}^{\rm in}_k +{1 \over c^4} {}_4 {\cal T}_i
+ {1 \over c^{6}} {}_6 {\cal T}_i
+ O\left({1\over c^{8}}\right) \ . \label{eq:4.22}
\end{equation}
The ${}_n {\cal T}_i$'s represent the 2PN and 3PN
approximations. Thanks to (\ref{eq:4.21})-(\ref{eq:4.22}) one can now
transform the laws (\ref{eq:4.20}) into
\begin{mathletters}
\label{eq:4.23}
\begin{eqnarray}
{d \over dt}\left\{- \int d^3{\bf x}\, \Pi^0_0\right\}
&=& \int d^3{\bf x}\,\left\{ -\sigma \partial_t {\cal V}^{\rm in}
+ {4\over c^2} \sigma_j \partial_t {\cal V}^{\rm in}_j \right\}
+ {1 \over c^{4}} {}_4 X + {1 \over c^{6}} {}_6 X + O\left({1\over c^{8}}
\right)\ ,\nonumber \\ \label{eq:4.23a} \\
{d \over dt}\left\{{1\over c}\int d^3{\bf x}\,\Pi^0_i\right\}
&=& \int d^3{\bf x}\,\left\{ \sigma \partial_i
{\cal V}^{\rm in} - {4\over c^2} \sigma_j \partial_i {\cal V}^{\rm in}_j
\right\}
+ {1 \over c^{4}} {}_4 Y_i + {1 \over c^{6}} {}_6 Y_i + O\left(
{1\over c^{8}}\right)\ , \nonumber \\ \label{eq:4.23b} \\
{d \over dt}\left\{{1\over c} \varepsilon_{ijk} \int d^3{\bf x}\, x_j
\Pi^0_k\right\} &=& \varepsilon_{ijk} \int d^3{\bf x}\,
\left\{ \sigma x_j \partial_k {\cal V}^{\rm in}
- {4\over c^2} \sigma_m x_j \partial_k {\cal V}^{\rm in}_m - {4\over c^2}
\sigma_j {\cal V}^{\rm in}_k \right\} \nonumber\\
&&\qquad\qquad+ {1 \over c^{4}} {}_4 Z_i +{1 \over c^{6}}{}_6Z_i + O\left(
{1\over c^{8}}\right)\ , \label{eq:4.23c}
\end{eqnarray}
\end{mathletters}
where ${}_n X$, ${}_n Y_i$, and ${}_n Z_i$ denote some spatial
integrals of the 2PN and 3PN terms in (\ref{eq:4.21})-(\ref{eq:4.22}).
Consider first the piece in ${\cal V}^{\rm in}_\mu$ which is composed of
the potential $V_\mu^{\rm in}$, given by the symmetric integral
(\ref{eq:3.4}) or by the Taylor expansion (\ref{eq:3.5}). To 1PN order
$V^{\rm in}_\mu$ contributes to the laws (\ref{eq:4.23})
only in the form of total time-derivatives (see Sect IV.A). Here we
present a more general proof of this result, valid formally up to any
post-Newtonian order. This proof shows that the result is due to the very
structure of the symmetric potential $V_\mu^{\rm in}$ as given by
(\ref{eq:3.5}). The technical formula sustaining the
proof is
\begin{equation}
\sigma_\mu (x) \partial^N_t \sigma_\nu (x') + (-)^{N+1} \sigma_\nu
(x') \partial^N_t \sigma_\mu (x) = {d\over dt} \left\{ \sum^{N-1}_{q=0}
(-)^q \partial^q_t \sigma_\mu (x) \partial^{N-q-1}_t \sigma_\nu (x')
\right\}\ , \label{eq:4.24}
\end{equation}
where $x \equiv ({\bf x},t)$ and $x'\equiv ({\bf x}',t)$ denote two
field points (located in the same hypersurface $t \equiv x^0/c
=$~const), and where $N$ is some integer [we recall the notation
$\sigma_\mu \equiv (\sigma,\sigma_i)$]. The contributions of the
symmetric potential in the energy and linear momentum laws
(\ref{eq:4.23a}) and (\ref{eq:4.23b}) are all of the same type,
involving the spatial integral of $\sigma_\mu (x) \partial_{\alpha}
V^{\rm in}_\mu (x)$ (with no summation on $\mu$ and $\alpha =0,i$).
One replaces into this spatial integral the potential $V^{\rm in}_\mu$
by its Taylor expansion when $c \to \infty$ as given by
(\ref{eq:3.5}). This yields a series of terms involving when the
index $\alpha = 0$ the double spatial integral of $|{\bf x}-{\bf
x}'|^{2p-1} \sigma_\mu (x) \partial^{2p+1}_t \sigma_\mu (x')$, and
when $\alpha = i$ the double integral of $\partial_i |{\bf x}-{\bf
x}'|^{2p-1}
\sigma_\mu (x) \partial^{2p}_t \sigma_\mu (x')$. By symmetrizing the
integrand under the exchange ${\bf x} \leftrightarrow {\bf x}'$ and by
using the formula (\ref{eq:4.24}) where $N = 2p+1$ when $\alpha = 0$ and
$N = 2p$ when $\alpha = 1,2,3$, one finds that the integral is indeed a
total time-derivative. The same is true for the symmetric contributions in
the angular momentum law (\ref{eq:4.23c}), which yield a series of integrals of
$\varepsilon
_{ijk} x_j \partial_k |{\bf x}-{\bf x}'|^{2p-1} \sigma_\mu (x)
\partial^{2p}_t \sigma_\mu (x')$ and $\varepsilon_{ijk} |{\bf x}-{\bf
x}'|^{2p-1} \sigma_j (x) \partial^{2p}_t \sigma_k (x')$, on which one
uses the formula (\ref{eq:4.24}) where $N = 2p$.
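The simplest instance is the purely Newtonian ($p=0$) term in the energy law:
symmetrizing the integrand over ${\bf x}$ and ${\bf x}'$ and applying
(\ref{eq:4.24}) with $N=1$ yields
\begin{equation}
\int d^3{\bf x}\, \sigma\, \partial_t U
= {G\over 2}\, {d\over dt} \int\!\!\int
{d^3{\bf x}\, d^3{\bf x}' \over |{\bf x}-{\bf x}'|}\,
\sigma ({\bf x},t)\, \sigma ({\bf x}',t)
= {d\over dt} \left\{ {1\over 2} \int d^3{\bf x}\, \sigma\, U \right\}\ ,
\end{equation}
whose right-hand side is the time derivative of the term
${1\over 2}\int d^3{\bf x}\, \sigma U$ present in the 1PN energy
(\ref{eq:4.17}).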
Thus the symmetric (inner) potentials $V_\mu^{\rm in}$ contribute to
the right-hand-sides of (\ref{eq:4.23}) only in the form of total
time-derivatives. This is true even though, as noticed earlier, the
potentials $V^{\rm in}_\mu$ contain some reactive (``time-odd'')
terms, through the contributions of the source densities
$\sigma_\mu$. As shown here, these time-odd terms combine with the
other time-odd terms present in the $\sigma_\mu$'s appearing
explicitly in (\ref{eq:4.23}) to form time-derivatives. Such time-odd
terms will ultimately not contribute to the balance equations, but
they do contribute to the complete 3.5PN approximation in the
equations of motion of the system. This fact has been noticed, and
these time-odd terms computed for binary systems, by Iyer and Will
\cite{IW95} (see their equations (3.8) and (3.9)).
The numerous time-derivatives resulting from the symmetric potentials
are then transferred to the left-hand-sides of (\ref{eq:4.23}). To 1PN
order these time-derivatives permit reconstructing the 1PN-conserved
energy and momenta $E^{\rm 1PN}$, $P_i^{\rm 1PN}$ and $S_i^{\rm
1PN}$. We include also the time-derivatives of higher order but they
will be negligible in the balance equations (see below). Therefore,
we have proved that the laws (\ref{eq:4.23}) can be re-written as
\begin{mathletters}
\label{eq:4.25}
\begin{eqnarray}
{d\over dt} \left[ E^{\rm 1PN}+{\overline O}\left({1\over c^{4}}
\right)\right] &=& \int d^3{\bf x}\,\left\{ -\sigma \partial_t
{V}^{\rm reac}+ {4\over c^2} \sigma_j \partial_t {V}^{\rm reac}_j \right\}
+ {1 \over c^{4}} {}_4 X + {1 \over c^{6}} {}_6 X + O\left({1\over c^{8}}
\right)\ , \nonumber \\ \label{eq:4.25a} \\
{d\over dt} \left[ P^{\rm 1PN}_i +{\overline O}\left({1\over c^{4}}
\right)\right] &=& \int d^3{\bf x}\,\left\{ \sigma
\partial_i {V}^{\rm reac} -{4\over c^2} \sigma_j \partial_i {V}^{\rm reac}_j
\right\} + {1 \over c^{4}} {}_4 Y_i + {1 \over c^{6}} {}_6 Y_i +
O\left({1\over c^{8}}\right)\ , \nonumber \\ \label{eq:4.25b} \\
{d\over dt} \left[ S^{\rm 1PN}_i +{\overline O}\left({1\over c^{4}}
\right)\right] &=& \varepsilon_{ijk} \int d^3{\bf x}\,
\left\{ \sigma x_j \partial_k {V}^{\rm reac}
- {4\over c^2} \sigma_m x_j \partial_k {V}^{\rm reac}_m - {4\over c^2}
\sigma_j {V}^{\rm reac}_k \right\} \nonumber\\
&&\qquad\qquad+ {1 \over c^{4}} {}_4 Z_i + {1 \over c^{6}} {}_6 Z_i + O
\left({1\over c^{8}}\right)\ . \label{eq:4.25c}
\end{eqnarray}
\end{mathletters}
The $O$-symbols $\overline O (c^{-4})$ denote the terms, coming in
particular from the symmetric potentials in the right-hand-sides,
which are of higher order than 1PN. We add an overbar on these
remainder terms to distinguish them from other terms introduced below.
Now recall
that the 2PN and 3PN approximations, including in particular the terms
${}_n X$, ${}_n Y_i$, and ${}_n Z_i$ in (\ref{eq:4.25}),
are non-radiative (non-dissipative). Indeed they correspond to ``even''
approximations, and depend instantaneously on the parameters of the
source. In the case of the 2PN approximation, Chandrasekhar and Nutku
\cite{CN69} have proved explicitly that ${}_4X$, ${}_4Y_i$,
and ${}_4Z_i$ can be transformed into total time-derivatives, leading
to the expressions of the 2PN-conserved energy and momenta. Here we
shall assume that the same property holds for the 3PN approximation,
namely that the terms ${}_6X$, ${}_6Y_i$, and ${}_6Z_i$ can also be
transformed into time-derivatives. This assumption is almost
certainly correct. The 3PN approximation is not expected to yield any
secular decrease of quasi-conserved quantities. It can be argued, in
fact, that the 3PN approximation is the last approximation which is
purely non-dissipative. Under this (slight) assumption we can now
transfer the terms ${}_n X$, ${}_n Y_i$, and ${}_n Z_i$ to the
left-hand-sides, where they modify the remainder terms $\overline O
(c^{-4})$. Thus,
\begin{mathletters}
\label{eq:4.26}
\begin{eqnarray}
{d\over dt} \left[ E^{\rm 1PN}+{\widetilde O}\left({1\over c^{4}}
\right)\right] &=& \int d^3 {\bf x} \left\{ -\sigma \partial_t
V^{\rm reac} + {4\over c^2} \sigma_j \partial_t V^{\rm reac}_j \right\}
+ O \left( {1\over c^8}\right)\ , \label{eq:4.26a}\\
{d\over dt} \left[ P^{\rm 1PN}_i +{\widetilde O}\left({1\over c^{4}}
\right)\right] &=& \int d^3 {\bf x}\left\{ \sigma \partial_i
V^{\rm reac} - {4\over c^2} \sigma_j \partial_i V^{\rm reac}_j \right\}
+ O \left( {1\over c^8}\right)\ , \label{eq:4.26b}\\
{d\over dt} \left[ S^{\rm 1PN}_i +{\widetilde O}\left({1\over c^{4}}
\right)\right] &=& \varepsilon_{ijk}
\int d^3 {\bf x} \left\{ \sigma x_j \partial_k V^{\rm reac}
- {4\over c^2} \sigma_m x_j \partial_k V^{\rm reac}_m
- {4\over c^2} \sigma_j V^{\rm reac}_k \right\} \nonumber \\
&& + O \left( {1\over c^8}\right)\ , \label{eq:4.26c}
\end{eqnarray}
\end{mathletters}
where $\widetilde O(c^{-4})$ denotes the modified remainder
terms, which satisfy, for instance,
$E^{1PN}+\widetilde O(c^{-4}) = E^{2PN}+ O(c^{-5})$.
The equations (\ref{eq:4.26}) clarify the way the losses of energy and
momenta are driven by the radiation reaction potentials. However,
these equations are still to be transformed using the explicit
expressions (\ref{eq:3.6})-(\ref{eq:3.8}). When inserting these
expressions into the right-hand-sides of (\ref{eq:4.26}) one is left
with numerous terms. All these terms have to be transformed and
combined together modulo total time-derivatives. Thus, numerous
integrations by parts with respect to the time variable are performed
[i.e. $A\partial_t B =\partial_t (AB) -B\partial_t A$], thereby
producing many time-derivatives which are transferred as before to the
left-hand-sides of the equations, where they modify the ${\widetilde
O} (c^{-4})$'s by some contributions of order $c^{-5}$ at least (since
this is the order of the reactive terms). During the transformation
of the laws (\ref{eq:4.26a}) and (\ref{eq:4.26c}) for the energy and
angular momentum, it is crucial to recognize among the terms the
expression of the 1PN-accurate mass quadrupole moment $I_{ij}$ given
by (\ref{eq:3.7}) (or (\ref{eq:3.21a}) with $l = 2$), namely
\begin{equation}
I_{ij} = \int d^3{\bf x} \left\{ \hat x_{ij} \sigma + {1\over 14c^2}
{\bf x}^2 \hat x_{ij} \partial_t^2 \sigma - {20\over 21c^2} \hat x_{ijk}
\partial_t \sigma_k \right\}\ .\label{eq:4.27}
\end{equation}
And during the transformation of the law (\ref{eq:4.26b}) for linear
momentum, the important point is to remember that the 1PN-accurate
mass dipole moment $I_i$, whose second time-derivative is zero as a
consequence of the equations of motion $[d^2I_i/dt^2 =O(c^{-4})]$,
reads
\begin{mathletters}
\label{eq:4.28}
\begin{equation}
I_i = \int d^3 {\bf x} \left\{ x_i \sigma + {1\over 10c^2}{\bf x}^2 x_i
\partial^2_t \sigma - {6\over 5c^2} \hat x_{ij} \partial_t \sigma_j
\right\}\ . \label{eq:4.28a}
\end{equation}
This moment is also a particular case, when $l = 1$, of the general
formula (\ref{eq:3.21a}). An alternative expression of the dipole moment is
\begin{equation}
I_i = \int d^3 {\bf x}~ x_i \left\{ \sigma + {1\over c^2} \left(
{1\over 2}\sigma U - \sigma_{jj} \right) \right\} + O\left( {1\over
c^4}\right) \ , \label{eq:4.28b}
\end{equation}
\end{mathletters}
which involves the Newtonian conserved mass density (given also by the
first three terms in (4.15)). Finally, the end results of the
computations are some laws involving in the right-hand-sides some
quadratic products of derivatives of multipole moments, most
importantly the 1PN quadrupole moment (\ref{eq:4.27}) (the other
moments being Newtonian), namely
\begin{mathletters}
\label{eq:4.29}
\begin{eqnarray}
{d\over dt} \left[ E^{\rm 1PN}+{\widehat O}\left({1\over c^{4}}
\right)\right] &=& - {G\over c^5} \left\{ {1\over 5} I^{(3)}_{ij}
I^{(3)}_{ij} + {1\over c^2}\left[{1\over 189} I^{(4)}_{ijk} I^{(4)}_{ijk}
+ {16\over 45} J^{(3)}_{ij} J^{(3)}_{ij} \right] \right\}
+ O\left( {1\over c^8}\right) \ , \nonumber \\ \label{eq:4.29a} \\
{d\over dt} \left[ P^{\rm 1PN}_i +{\widehat O}\left({1\over c^{4}}
\right)\right] &=& - {G\over c^7} \left\{ {2\over 63} I^{(4)}_{ijk}
I^{(3)}_{jk} + {16\over 45} \varepsilon_{ijk} I^{(3)}_{jm} J^{(3)}_{km}
\right\} + O\left( {1\over c^9}\right)\ , \label{eq:4.29b} \\
{d\over dt} \left[ S^{\rm 1PN}_i +{\widehat O}\left({1\over c^{4}}
\right)\right] &=& - {G\over c^5} \varepsilon_{ijk} \left\{ {2\over 5}
I^{(2)}_{jm} I^{(3)}_{km}+{1\over c^2}\left[{1\over 63} I^{(3)}_{jmn}
I^{(4)}_{kmn} + {32\over 45} J^{(2)}_{jm} J^{(3)}_{km} \right] \right\}
+ O\left( {1\over c^8}\right)\ . \nonumber \\ \label{eq:4.29c}
\end{eqnarray}
\end{mathletters}
The remainders in the left-hand-sides are such that $\widehat
O(c^{-4}) = \widetilde O(c^{-4})+O(c^{-5})$. The remainders in the
right-hand-sides are $O(c^{-8})$ in the cases of energy and angular
momentum because of tail contributions (see Sect.~IV.C), but are
$O(c^{-9})$ in the case of the linear momentum.
The last step is to argue that the unknown terms in the
left-hand-sides, namely the total time-derivatives of the remainders
${\widehat O}( c^{-4})$, are negligible as compared to the controlled terms
in the right-hand-sides, despite their larger formal post-Newtonian order
($c^{-4}$ vs $c^{-5}$ and $c^{-7}$). When computing, for instance, the
time evolution of the orbital phase of inspiralling compact binaries
\cite{3mn,FCh93,CF94,P93,CFPS93,TNaka94,Sasa94,TSasa94,P95,BDI95,BDIWW95,WWi96,B96pn},
one uses in the left-hand-side of the balance equation the energy
valid at the {\it same} post-Newtonian order as the energy flux in the
right-hand-side. Because the difference between the orders of
magnitude of the two sides of the equations is $c^{-5}$, we need to
show that the time-derivative increases the formal post-Newtonian
order by a factor $c^{-5}$. In (\ref{eq:4.29}) this means $d
{\widehat O}(c^{-4})/dt = O(c^{-9})$ [actually, $O(c^{-8})$ would be
sufficient in (\ref{eq:4.29}), but $O(c^{-9})$ will be necessary in
Sect.~IV.C]. In the case of inspiralling compact binaries, such an
equation is clearly true, because the terms ${\widehat O}(c^{-4})$
depend only on the orbital separation between the two bodies (the
orbit being circular), and thus depend only on the energy which is
conserved at 2PN order (for non-circular orbits one would have also a
dependence on the angular momentum). Thus the time-derivative adds,
by the law of composition of derivatives, an extra factor $c^{-5}$
coming from the time-derivative of the energy itself. More generally,
this would be true for any system whose 2PN dynamics can be
parametrized by the 2PN-conserved energy and angular momentum. This
argument could perhaps be extended to systems whose 2PN dynamics is
integrable, in the sense that the solutions are parametrized by some
finite set of integrals of motion, including the integral of energy.
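For such systems the order counting can be made explicit. Writing
schematically $\widehat O(c^{-4}) = c^{-4}\,f(E^{\rm 2PN})$ for some smooth
function $f$ of the 2PN-conserved energy (and, for non-circular orbits, of
the angular momentum), the chain rule gives
$$ {d\over dt}\,\widehat O\left({1\over c^{4}}\right)
= {1\over c^{4}}\,{\partial f\over \partial E}\,{dE\over dt}
= O\left({1\over c^{4}}\right)\times O\left({1\over c^{5}}\right)
= O\left({1\over c^{9}}\right)\ , $$
since $dE/dt$ is itself of order $c^{-5}$ by (\ref{eq:4.29a}).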
Another argument, which is often presented (see e.g. \cite{BRu81}),
is that the terms $d {\widehat O}\left({c^{-4}}\right)/dt$ are
negligible when taken in average for quasi-periodic systems, for
instance a binary system moving on a quasi-Keplerian orbit. The time
average of a total time-derivative is clearly numerically small for such
systems, but it seems difficult to quantify precisely the gain in
order of magnitude which is achieved in this way, for general systems.
The most general argument, valid for any system, is that the terms
$d\widehat O(c^{-4})/dt$ are numerically small when one looks at the
evolution of the system over long time scales, for instance $\Delta
t\gg \widehat O(c^{-4}) (dE^{\rm 1PN}/dt)^{-1}$ (see Thorne
\cite{Th83}, p. 46).
Adopting here $d\widehat O(c^{-4})/dt = O(c^{-9})$ and the latter
general argument, we can neglect the terms $\widehat O(c^{-4})$ and
arrive at the 1PN energy-momenta balance equations
\begin{eqnarray}
{{d E^{\rm 1PN}} \over {dt}} &=& - {G\over c^5} \left\{ {1\over 5}
I^{(3)}_{ij} I^{(3)}_{ij} + {1\over c^2}\left[{1\over 189}
I^{(4)}_{ijk} I^{(4)}_{ijk} + {16\over 45} J^{(3)}_{ij} J^{(3)}_{ij}
\right] \right\} + O\left( {1\over c^8}\right) \ , \label{eq:4.29*}
\\ {{d P^{\rm 1PN}_i} \over {dt}} &=& - {G\over c^7} \left\{ {2\over
63} I^{(4)}_{ijk} I^{(3)}_{jk} + {16\over 45} \varepsilon_{ijk}
I^{(3)}_{jm} J^{(3)}_{km} \right\} + O\left( {1\over c^9}\right)\ ,
\label{eq:4.30} \\ {{d S^{\rm 1PN}_i} \over {dt}} &=& - {G\over c^5}
\varepsilon_{ijk} \left\{ {2\over 5} I^{(2)}_{jm}
I^{(3)}_{km}+{1\over c^2}\left[{1\over 63} I^{(3)}_{jmn}
I^{(4)}_{kmn} + {32\over 45} J^{(2)}_{jm} J^{(3)}_{km} \right]
\right\} + O\left( {1\over c^8}\right)\ , \label{eq:4.31}
\end{eqnarray}
relating the 1PN-conserved energy and momenta, given by the explicit
integrals over the source (\ref{eq:3.17})-(\ref{eq:3.19}), to some
combinations of derivatives of multipole moments, also given by
explicit integrals over the source (see
(\ref{eq:3.7})-(\ref{eq:3.8})). Note that at this order both sides of
the equations are in the form of compact-support integrals. The
right-hand-sides of (\ref{eq:4.29*})-(\ref{eq:4.31}) agree exactly
with (minus) the fluxes of energy and momenta as computed in the wave
zone of the system. See for instance the equations (4.16'), (4.20')
and (4.23') in \cite{Th80}, when truncated to 1PN order [and recalling
that the quadrupole moment which enters the 1PN fluxes is precisely
the one given by (\ref{eq:4.27})]. Thus, we can conclude on the validity of
the balance equations at 1PN order, for weakly self-gravitating
systems.
These equations could also be recovered, in principle, from the
relations (\ref{eq:2.14})-(\ref{eq:2.15}) (which were obtained in
paper I). Indeed (2.14)-(2.15) involve, besides some instantaneous
contributions such as $T_L(t)$, some non-local (or hereditary)
contributions contained in the functions $m(t)$, $m_i(t)$ and
$s_i(t)$. These contributions modify the constant monopole and dipole
moments $M$, $M_i$ and $S_i$ by some expressions which correspond
exactly to the emitted fluxes. The balance equations could be
recovered (with, though, less precision than obtained in this paper)
by using the constancy of the monopole and dipoles $M$, $M_i$ and
$S_i$ in the equations (2.14) written for $l=0$ and $l=1$, and by
using the matching equations obtained in (\ref{eq:3.27}), also written
for $l=0$ and $l =1$. Related to this, notice the term involving a
single time-antiderivative in the function $m_i(t)$ of
(\ref{eq:2.15b}), which is associated with a secular displacement
of the center of mass position.
\subsection{Tail effects at 1.5PN order}
To 1.5PN order in the radiation reaction force there appears a hereditary
integral (i.e. an integral extending over the whole past history of the
source), which is associated physically with the effects of
gravitational-wave tails. More precisely, it is shown in \cite{BD88},
using the same combination of approximation methods as used in paper I
and this paper, that the dominant hereditary contribution in the inner
post-Newtonian metric $g^{\rm in}_{\mu \nu}$ (valid all over $D_i$)
arises at the 4PN order. At this order, the dynamics of a
self-gravitating system is thus intrinsically dependent on the full
past evolution of the system.
In a particular gauge (defined in \cite{BD88}), the 4PN-hereditary contribution
in $g^{\rm in}_{\mu \nu}$ is entirely located in the 00 component of the
metric, and reads
\begin{equation}
g^{\rm in}_{00}|_{\rm hereditary} = -{8G^2M \over 5c^{10}} x^ix^j
\int^{+\infty}_0 d\lambda~ {\rm \ln} \left({\lambda \over 2} \right)
I^{(7)}_{ij} (t-\lambda) + O\left({1\over c^{11}}\right)\ . \label{eq:4.32}
\end{equation}
The hereditary contributions in the other components of the metric
($0i$ and $ij$) arise at higher order. Note that the hereditary
(tail) integral in (\ref{eq:4.32}) involves a logarithmic kernel. A
priori, one should include in the logarithm a constant time scale {\it
P} in order to adimensionalize the integration variable $\lambda$, say
$\ln (\lambda / 2P)$. However, $\ln P$ would actually be in factor of
an instantaneous term [depending only on the current instant $t$
through the sixth time-derivative $I^{(6)}_{ij}(t)]$, so
(\ref{eq:4.32}) is in fact independent of the choice of time scale. In
(\ref{eq:4.32}) we have chosen for simplicity $P=1$ sec. The presence
of the tail integral (\ref{eq:4.32}) in the metric implies a
modification of the radiation reaction force at the relative 1.5PN
order \cite{BD88}. The other 4PN terms are not controlled at this
stage, but are instantaneous and thus do not yield any radiation
reaction effects (indeed the 4PN approximation is ``even'' in the
post-Newtonian sense). It was further shown
\cite{BD92} that the 1.5PN tail integral in the radiation reaction is
such that there is exact energy balance with
a corresponding integral present in the far-zone
flux. Here we recover this fact and add it up to the results obtained
previously.
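The independence of (\ref{eq:4.32}) of the choice of time scale can be
checked directly. Assuming that the time-derivatives of $I_{ij}$ vanish
sufficiently fast in the remote past, so that $\int^{+\infty}_0 d\lambda\,
I^{(7)}_{ij}(t-\lambda) = I^{(6)}_{ij}(t)$, one has for any constant $P$
$$\int^{+\infty}_0 d\lambda~ \ln \left({\lambda \over 2P}\right)
I^{(7)}_{ij}(t-\lambda) = \int^{+\infty}_0 d\lambda~ \ln \left({\lambda
\over 2}\right) I^{(7)}_{ij}(t-\lambda) - \ln P\; I^{(6)}_{ij}(t)\ ,$$
so that a change of $P$ shifts only the instantaneous (non-hereditary) part
of the metric, as stated above.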
As the gauge transformation yielding (\ref{eq:4.32}) in \cite{BD88}
deals only with 4PN terms, it can be applied to the inner metric
$g^{\rm in}_{\mu \nu}$ given by (\ref{eq:3.1}) without modifying any
of the known terms at the 1PN non-radiative and reactive
approximations. It is clear from (\ref{eq:4.32}) and the reactive
potentials (\ref{eq:3.6}) that after gauge transformation, the inner
metric takes the same form as (\ref{eq:3.1}), except that the reactive
potentials are now more accurate, and given by
\begin{mathletters}
\label{eq:4.33}
\begin{eqnarray}
V^{\rm reac}({\bf x}, t) &=& -{G \over 5c^5} x_{ij} I^{(5)}_{ij} (t) +
{G \over c^7}
\left[{1\over 189}x_{ijk} I^{(7)}_{ijk}(t) - {1\over 70} {\bf x}^2 x_{ij}
I^{(7)}_{ij}(t) \right] \nonumber \\
&& - {4G^2M \over 5c^8}x_{ij} \int^{+\infty}_0
d\lambda \ln \left(\lambda\over 2 \right) I^{(7)}_{ij}(t-\lambda)
+ O\left(1\over c^9 \right) \, \label{eq:4.33a},\\
V^{\rm reac}_i ({\bf x}, t) &=& {G\over c^5}\left[{1\over 21} \hat x_{ijk}
I^{(6)}_{jk}(t)
- {4\over 45} \varepsilon_{ijk}x_{jm} J^{(5)}_{km}(t) \right] + O\left({1\over c^7}
\right)\, \label{eq:4.33b}.
\end{eqnarray}
\end{mathletters}
Still there remain in the metric some un-controlled (even) 4PN terms,
but these are made of {\it instantaneous} spatial integrals over the
source variables, exactly like the un-controlled 2PN and 3PN terms.
[The expressions (\ref{eq:4.33}) can be recovered also from
Sect.~III.D of paper I and a matching similar to the one performed in
this paper.] With (\ref{eq:4.33}) in hands, one readily extends the
balance equations to 1.5PN order. First one obtains (\ref{eq:4.25}), but where
the reactive potentials are given more accurately by (\ref{eq:4.33}),
and where there are some instantaneous 4PN terms $_8X$, $_8Y_i$ and
$_8Z_i$ in the right-hand-sides. Extending the (slight) assumption
made before concerning the similar 3PN terms, we can transform $_8X$,
$_8Y_i$ and $_8Z_i$ into time-derivatives and transfer them to the
left-hand-sides. This yields (\ref{eq:4.26}), except that the remainders in the
right-hand-sides are $O(c^{-9})$ instead of $O(c^{-8})$. Using
(\ref{eq:4.33}), we then obtain (working modulo total
time-derivatives) the laws (\ref{eq:4.29}) augmented by the tail contributions
arising at order $c^{-8}$ in the right-hand-sides. The remainders in
the left-hand-sides are of the order $d \widehat{O} (c^{-4})/dt =
O(c^{-9})$ (arguing as previously), and therefore are negligible as
compared to the tail contributions at $c^{-8}$. In the case of energy
the 1.5PN balance equation is obtained as
\begin{eqnarray}
{dE^{\rm 1PN}\over dt} &=& -{G\over 5c^5} I^{(3)}_{ij} I^{(3)}_{ij}
-{G\over c^7} \left[{1\over 189} I^{(4)}_{ijk} I^{(4)}_{ijk}
+{16 \over 45} J^{(3)}_{ij} J^{(3)}_{ij} \right] \nonumber \\
&& - {4G^2M \over 5c^8} I^{(3)}_{ij}(t) \int^{+\infty}_0 d\lambda \ln
\left(\lambda \over 2\right) I^{(5)}_{ij}(t-\lambda)+O\left({1\over c^9}
\right)\, \label{eq:4.34}.
\end{eqnarray}
Because there are no terms of order $c^{-3}$ in the internal energy
of the system (see Sect. IV.A), the energy $E^{1PN}$ appearing
in the left-hand-side is in fact valid at the 1.5PN order. Finally,
to the required order, one can re-write (\ref{eq:4.34}) equivalently
in a form where the flux of energy is manifestly positive-definite,
\begin{eqnarray}
{dE^{\rm 1PN}\over dt} &=& -{G\over 5c^5} \left({I^{(3)}_{ij}(t)+{2GM\over c^3}
\int^{+\infty}_0 d\lambda \ln \left(\lambda \over 2 \right) I^{(5)}_{ij}
(t-\lambda)} \right)^2 \nonumber \\ && - {G\over c^7} \left[{1\over
189} (I^{(4)}_{ijk})^2 +{16\over 45}(J^{(3)}_{ij})^2
\right]+O\left({1\over c^9}\right) \ .
\label{eq:4.35}
\end{eqnarray}
Under the latter form one recognizes in the right-hand-side the known
energy flux at 1.5PN order. Indeed the effective quadrupole moment
which appears in the parenthesis agrees with the tail-modified {\it
radiative} quadrupole moment parametrizing the field in the far zone
(see Eq. (3.10) in \cite{BD92}). [The term associated with the
(gauge-dependent) constant 11/12 in the radiative quadrupole moment
\cite{BD92} yields a total time-derivative in the energy flux (as
would yield any time scale $P$ in the logarithm), and can be neglected
in (\ref{eq:4.35}).] The 1.5PN balance equation for angular momentum
is proved similarly (it involves as required the same tail-modified
radiative quadrupole moment). The balance equation for linear momentum
does not include any tail contribution at 1.5PN order, and simply
remains in the form (\ref{eq:4.30}).
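As a consistency check, expanding the square in (\ref{eq:4.35}) gives back
(\ref{eq:4.34}) to the required order: the cross term reproduces the tail
contribution at order $c^{-8}$,
$$-{G\over 5c^5}\, 2\, I^{(3)}_{ij}(t)\,{2GM\over c^3} \int^{+\infty}_0
d\lambda \ln \left({\lambda \over 2}\right) I^{(5)}_{ij}(t-\lambda)
= - {4G^2M \over 5c^8}\, I^{(3)}_{ij}(t) \int^{+\infty}_0 d\lambda \ln
\left({\lambda \over 2}\right) I^{(5)}_{ij}(t-\lambda)\ ,$$
while the square of the tail integral is of order $c^{-11}$ and is absorbed
into the remainder $O(c^{-9})$.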
\acknowledgments
The author would like to thank Bala Iyer and Clifford Will for
stimulating discussions, and for their computation of the 1PN
radiation reaction potentials in the case of two-body systems
\cite{IW95}. This prior computation clarified several issues, and
made easier the problem addressed in the present paper. The author
would like also to thank Thibault Damour for discussions at an early
stage of this work, notably on the law of conservation of energy at
1PN order (Sect.~IV.A). Finally, the author is very grateful to a
referee for his valuable remarks which have motivated an improved
version of the paper.
\section{Introduction}
Bosonization techniques have proven useful in solving two--dimensional quantum
field theories. In particular the Schwinger and Thirring models have been
solved in this way \cite{2}. In the path integral framework the solubility
of the
Schwinger model manifests itself in the factorization of the partition function
in terms of a free massive positive metric field, a free negative metric zero
mass field and free massless fermions \cite{2,3}. It has long been
realized that this
factorization can be understood as a chiral change of variables in the path
integral \cite{4}. In this decoupled formulation the physical Hilbert
space of gauge
invariant observables of the original model is recovered by implementing the
BRST conditions associated with the usual gauge fixing procedure and the
chiral change of variables \cite{8}. These conditions are identical to
those originally obtained by Lowenstein and Swieca \cite{5} on the operator
level, stating that the physical Hilbert space be annihilated by the sum of the
currents of the negative metric fields and the free fermions.
Similar bosonization techniques have been applied to Quantum Chromodynamics in
1+1--dimensions (QCD$_2$). A review can be found in \cite{2,6}.
Analogous to the case of the
Schwinger model,
it has recently been shown that for a suitable choice of integration variables
the partition function of QCD$_2$ factorizes in terms of free massless
fermions, ghosts, negative level Wess Zumino Witten fields and fields
describing massive degrees of freedom \cite{7}. In the case of one flavor
the sector
corresponding to the massless fields was found \cite{8} to be equivalent to
that of a
conformally invariant, topological $G/G$ coset model \cite{9} with $G$ the
relevant gauge group. As a result of the
gauge fixing and the decoupling procedure
there were shown to exist several nilpotent charges \cite{13}
associated with BRST-like symmetries. These charges were found to be second
class. This raises the question as to whether all or just some of these
charges are required to annihilate the physical states.
Assuming the ground state(s)
of $QCD_2$ to be given by the state(s) of the conformally invariant sector,
(as is the case in the Schwinger model), the solution of the
corresponding cohomology problem led to the conclusion \cite{8} that the ground
state of
$QCD_2$ with gauge group $SU(2)$ and one flavor is 2 times twofold degenerate,
corresponding to the primaries of the $(U(1) \times SU(2)_1)/SU(2)_1$ coset
describing
the conformal sector (with $U(1)$ playing a spectator role). Since there
are however also
BRST constraints linking the coset-sector to the sector of massive
excitations, the above
hypothesis is not necessarily realized.
In ref. \cite{10}
the idea of smooth bosonization was introduced whereby
two dimensional path integral bosonization is formulated in terms of a
gauge fixing procedure. To accomplish this, a ``bosonization''
gauge symmetry with an associated ``bosonization''
BRST symmetry were introduced.
It was argued in ref. \cite{11} that the most natural choice of gauge to
recover the canonical bosonization dictionary is to ``gauge fix'' the fermions
in a $U(N)/U(N)$ coset model. In fact, non-abelian smooth bosonization has
only been achieved by this choice of ``gauge'' \cite{11,12}. The main result
of this approach to bosonization \cite{11} is that the free fermion partition
function factorizes into the partition function of a $U(N)/U(N)$ coset model
and a WZW model. An interpretation of this result along the lines mentioned
above for the case of $QCD_2$ would lead one to conclude that the spectrum of
free $U(2)$ fermions is two-fold degenerate. This conclusion is, however,
wrong since the ``bosonization'' BRST links the $U(N)/U(N)$ coset model to the
"matter" sector described by the WZW model \cite{11} (see section 3).
Therefore the BRST constraints play an essential role in identifying the
correct spectrum.
The above examples illustrate that extreme care has to be taken with regard
to the
implementation of the BRST
symmetries when identifying the physical states. In particular the
identification of the BRST symmetries
on the decoupled level can be misleading as not all the charges
associated with these symmetries are generally
required to annihilate the physical states in order to ensure
equivalence with the original, coupled
formulation. In section 2 we illustrate this point using a simple
quantum mechanical model for which
the Hilbert space is known and the BRST symmetries and associated cohomology
problem are very
transparent and easy to solve. These considerations generalize to the field
theoretic case. As examples
we discuss the non-abelian bosonization of free fermions in section 3 and some
aspects of QCD$_2$, as
considered in ref. \cite{13}, in section 4.
It is of course well known that the BRST symmetries on the decoupled level
originate from the changes
of variables made to achieve the decoupling, as has been discussed in the
literature before (see e.g. ref.
\cite{14,24,25}). The aspect we want to emphasize here is the role these
symmetries,
and the associated
cohomology,
play in constructing the physical subspace on the decoupled level, i.e., a
subspace
isomorphic to the
Hilbert space of the original coupled formulation. In particular we want
to stress
that the mere existence
of a nilpotent symmetry does not imply that the associated charge must
annihilate the
physical states. We
are therefore seeking a procedure to decide on the latter issue. We
formulate such
a procedure in section 2
and apply it in sections 3 and 4. It amounts to implementing the changes
of variables
by inserting an
appropriate identity in terms of auxiliary fields in the functional
integral. BRST
transformation rules for
these auxiliary fields are then introduced such that the identity amounts
to the
addition of a $Q$-exact
form
to the action. The dynamics is therefore unaltered on the subspace of states
annihilated by the
corresponding BRST charge. We must therefore require that this charge
annihilate
the physical states.
The auxiliary fields are then systematically integrated out and the BRST
transformation rules "followed"
through the use of equations of motion. This simple procedure allows for the
identification of the BRST
charges on the decoupled level which are required to annihilate the physical
subspace isomorphic to the
Hilbert space of the coupled formulation.
The paper is organized as follows: In section 2 we illustrate the points
raised above in terms of a
simple quantum mechanical model. Using this model as example we also
formulate a
general procedure
to identify the BRST charges required to annihilate the physical states
such that
equivalence with the
coupled formulation is ensured on the physical subspace. In sections 3 and 4
we apply this procedure to the non-abelian bosonization of free Dirac fermions
in 1+1 dimensions and to QCD$_2$, respectively.
\section{Quantum mechanics}
In this section we discuss a simple quantum mechanical model to illustrate
the points raised in the introduction. Since
the emphasis is on the structures of the Hilbert spaces on the coupled
and decoupled levels, and the role which the
BRST symmetries and associated cohomology play in this respect,
we introduce the
model in second quantized form
and only then set up a path integral formulation using
coherent states. The model is then bosonized (decoupled)
and the BRST symmetries and cohomologies are
discussed.
Consider the model with Hamiltonian
$$\hat{H} = 2g \, \hat{J} \cdot \hat{J} \eqno{(2.1)}$$
where $\hat{J}^a$ are the generators of a $SU(2)$ algebra. We realize this
algebra on Fermion Fock space in the following way \cite{15}:
$$\begin{array}{rcl} \hat{J}^+ &=& \displaystyle \sum_{m>0} \, a^\dagger_m \,
a^\dagger_{-m} \, , \\[7mm] \hat{J}^- &=& \displaystyle \sum_{m>0} \, a_{-m} \,
a_m \, , \\[7mm] \hat{J}^0 &=& \frac{1}{2} \displaystyle \sum_{m>0} \,
(a^\dagger_m \, a_m - a_{-m} \, a^\dagger_{-m}) \, . \end{array}
\eqno{(2.2)}$$ Here $a^\dagger_m \, (a_m)$, $|m| = 1, 2, \dots N_f$ are
fermion creation (annihilation) operators. The operators (2.2) provide a
reducible representation of the usual commutation relations of angular momentum
and the representations carried by Fermion Fock space are well known \cite{16}.
The index $|m|$ plays the role of flavor with $N_f$ the number of flavors.
The Hamiltonian (2.1) thus describes a $SU(2)$ invariant theory with $N_f$
flavors, as will become clear in the Lagrange formulation discussed below.
The spectrum of the Hamiltonian $\hat{H}$ is completely known,
the eigenvalues of $\hat{H}$ being given by
$$E_j = 2 \, g \, j \, (j + 1) \eqno{(2.3)}$$
where each eigenvalue is $g_j \, (2j + 1)$-fold degenerate, with $g_j$ the
number
of times the corresponding irreducible representation occurs. In particular,
for $N_f=1$ the spectrum of $\hat{H}$ consists of a doublet, as well as two
singlets describing a two-fold degenerate state of energy $E=0$. For
positive $g$
this corresponds to a two-fold degenerate ground state.
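As a simple check of (2.3), the $N_f = 1$ spectrum can also be verified
numerically by representing the operators (2.2) as $4 \times 4$ matrices on
the occupation-number basis and diagonalizing (2.1); a minimal sketch (in
Python, with a basis and mode ordering of our own choosing) reads
\begin{verbatim}
import numpy as np

c  = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation: c|1> = |0>
I2 = np.eye(2)
Z  = np.diag([1., -1.])               # (-1)^n string, enforces anticommutation

a1  = np.kron(c, I2)                  # a_{1}
am1 = np.kron(Z, c)                   # a_{-1}
dag = lambda A: A.conj().T

Jp = dag(a1) @ dag(am1)               # J^+ = a_1^dag a_{-1}^dag
Jm = am1 @ a1                         # J^- = a_{-1} a_1
J0 = 0.5 * (dag(a1) @ a1 - am1 @ dag(am1))

g = 1.0
H = 2.0 * g * (J0 @ J0 + 0.5 * (Jp @ Jm + Jm @ Jp))   # H = 2g J.J
print(np.round(np.linalg.eigvalsh(H), 6))             # [0, 0, 1.5, 1.5]
\end{verbatim}
reproducing the two singlets at $E = 0$ and the doublet at
$E = 2g \cdot \frac{1}{2}(\frac{1}{2}+1) = \frac{3g}{2}$.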
We express the vacuum--to--vacuum amplitude associated with $\hat{H}$
as a functional integral over Grassmann
variables. For this purpose introduce the fermionic coherent state \cite{17}
$$| \, \chi > = \exp \, \left[- \frac{1}{2} \, \sum_{m>0} \, (\chi^\dagger_m \,
\chi_m + \chi^\dagger_{-m} \, \chi_{- m} + \chi_m \, a^\dagger_m + \chi_{-m} \,
a_{-m})\right] | \, 0 > \eqno{(2.4)}$$
where $\chi^\dagger_m$, $\chi_m$ are complex valued Grassmann variables and
$$a_m \, | \, 0 > = a_{-m}^\dagger \, | \, 0 > \, = 0 \, , \quad \forall m > 0
\, . \eqno{(2.5)}$$
We obtain the path integral representation of the vacuum--to--vacuum transition
amplitude by following the usual procedure \cite{18} and using the completeness
relation for the coherent states \cite{17}. We find
$$Z = \int [d \eta^\dagger] \, [d \eta] \, e^{i \int \, dt \, L_F} \,
\eqno{(2.6{\rm a})}$$
where $L_F$ is the "fermionic" Lagrangian
$$L_F = \eta_f^\dagger \, (i \, \partial_t + m) \, \eta_f - g \, {\rm tr} \,
j^2 \eqno{(2.6{\rm b})}$$
and a summation over the flavor index $f$ $(f = 1, 2, \dots N_f)$ is
implied. The
mass $m = - 3g$ arises from normal ordering with respect to the Fock vacuum
$| \, 0
>$ defined in (2.5). Furthermore, $\eta_f$ denotes the two-component spinor
$$
\eta_f = \left(\begin{array}{c}
\chi_f\\
\chi_{-f}
\end{array}\right) \, , \eqno{(2.7{\rm a})}$$
and $j, j^a_f$ are the "currents"
$$\begin{array}{rcl}j &=& \displaystyle \sum_f \, j_f, \,
j_f \; = \; j^a_f \, t^a \, , \\[7mm]
j^a_f &=& \eta^\dagger_f \, t^a \, \eta_f \end{array} \eqno{(2.7{\rm
b})}$$
where the $SU(2)$ generators are normalized as
${\rm tr} \, (t^a \, t^b) = \delta^{ab}$.
Introducing the field
$$B = B^a \, t^a \, , \eqno{(2.8)}$$
we can write the partition function as
$$
Z = \displaystyle \int \, [d \eta^\dagger] \, [d \eta] \, [dB]
\, e^{i \int \, dt \, L} \eqno{ (2.9{\rm a})}
$$
$$
L = \eta^\dagger_f \, (i \, \partial_t + B + m)
\, \eta_f + \frac{1}{2g} \, {\rm tr} \, B^2 \, \eqno{(2.9{\rm b})}
$$
The bosons and fermions can be decoupled by making the change of variables
$$B = V \, i \, \partial_t \, V^{-1} \eqno{(2.10{\rm a})}$$
where $V$ are group valued fields in the fundamental representation of $SU(2)$.
Simultaneously we make the change of variables
$$\psi_f = V^{-1} \, \eta_f \, . \eqno{(2.10{\rm b})}$$
The Jacobian associated with the transformation is (there are no anomalous
contributions in $0 + 1$ dimensions):
$$J = \det \, (D^{adj}_t \, (V)) = \det \, (\partial_t) \eqno{(2.11)}$$
where $D_t^{adj} \, (V)$ is the covariant derivative in the adjoint
representation
$$D^{adj}_t \, (V) = \partial_t + [V \, i \, \partial_t \, V^{-1} \, , ~] \, .
\eqno{(2.12)}$$
Representing the determinant in terms of ghosts we obtain for the partition
function the factorized form
$$Z = Z^{(0)}_F \, Z^{(0)}_{\rm gh} \, Z_V = \int \, [d \psi^\dagger] \, [d
\psi]
\, [d V] \, [db] \, [dc] \, e^{i \int \, dt \, L^{(0)}} \eqno{(2.13{\rm a})}$$
with
$$L^{(0)} = L^{(0)}_F + L^{(0)}_{\rm gh} + L_V \eqno{(2.13{\rm b})}$$
where
$$\begin{array}{rcl}
L^{(0)}_F &=& \psi^\dagger_f \, (i \, \partial_t + m) \,
\psi_f \, , \\[3mm] L^{(0)}_{\rm gh} &=& {\rm tr} \, b \, i \, \partial_t \, c
\, ,
\\[5mm] L_V &=& \frac{1}{2g} \, {\rm tr} \, (V \, i \, \partial_t \, V^{-1})^2
\, . \end{array}
\eqno{(2.13{\rm c})}$$
Here $b$ and $c$ are Lie algebra valued ghost fields $b = b^a \, t^a$ and
$c = c^a
\, t^a$.
In arriving at the above decoupled form of the partition function, we have not
mentioned the effect of the change of variables on the boundary of
the path integral. It is important to realize that the decoupling
of the partition function is not
affected by the implied change of the boundary condition since the
transformation (2.10) is local.
The Hilbert space associated with the factorized partition function is the
direct product of fermion, boson and ghost Fock spaces and is clearly much
larger than that of the original interacting model. It is therefore natural to
ask what conditions should be imposed on this direct product space to recover
a subspace isomorphic to the original Hilbert space. As these conditions are
of a group theoretical nature, it is necessary to first clarify the precise
content of these Hilbert spaces from a representation theory point of view
before the above mentioned isomorphism can be established.
As we have seen the states in the Hilbert space (Fermion Fock space) of the
original interacting model can be labeled by $| \, \alpha \, j \, m >$ where
$j$, $m$ labels the $SU(2)$-flavor representations and weights,
respectively, and $\alpha$ is a multiplicity index.
On the decoupled level the Hilbert space is the direct product of fermion,
boson and ghost Fock spaces. Upon canonical quantization it is again clear
that states in the (free) fermionic sector can be labeled by $| \, \alpha
\, j \, m
>_{\rm F}$ where the allowed values of $\alpha$ and $j$ coincide with those of
the interacting model. The ghosts $c$ and $b$ are canonically conjugate
fields. We take $b$ as the annihilation and $c$ as the creation operator. The
ghost vacuum is then defined by $b^a \, | \, 0 \, > \, = 0$. Defining the
ghost number operator by $N_{\rm gh} = c^a \, b^a \, , \,$
we see that $ b^a$ carries ghost
number $-1$ and $c^a$ ghost number 1. Since we are interested in the physical
sector which is built on the ghost vacuum (ghost number zero), we do not need
to analyze the representation theory content of the ghost sector in more
detail. It is therefore only the bosonic sector that requires a detailed
analysis.
Turning to the bosonic Lagrangian of (2.13c) we note the presence of a left
and right global symmetry
$$\begin{array}{rcl}
V &\longrightarrow& LV \, ,\\
V &\longrightarrow& VR
\end{array} \eqno{(2.14)}$$
where $L$ and $R$ are $SU(2)$ matrices in the fundamental representation
corresponding to
left- and right-transformations, respectively.
Following the Noether construction the conserved currents generating
these symmetries are identified as \cite{19}
$$\begin{array}{rcl} L^a &=& \frac{1}{g} \, {\rm tr} \, V \, i \, \partial_t
\, V^{-1} \, t^a \, ,\\[5mm] R^a &=& \frac{1}{g} \, {\rm tr} \, (i \,
\partial_t \, V^{-1}) \, V \, t^a \, , \end{array} \eqno{(2.15)}$$
respectively. In phase space this reads
$$\begin{array}{rcl}
L^a &=& {\rm tr} \, i \, V \tilde\pi_V \, t^a \, , \\[2mm]
R^a &=& {\rm tr} \, i \, \tilde\pi_V \, V \, t^a \, ,
\end{array} \eqno{(2.16)}$$
with $\pi_V$ the momentum canonically conjugate to $V$, and "tilde" denoting
"transpose". The following Poisson brackets are easily verified
$$\begin{array}{rcl} \{L^a \, , \, L^b \}_{\rm P} &=& - f^{abc} \, L^c \,
,\\[2mm] \{R^a \, , \, R^b \}_{\rm P} &=& f^{abc} \, R^c \, ,\\[2mm] \{L^a \, ,
\, R^b \}_{\rm P} &=& 0 \, ,\\[2mm] \{L^a \, , \, V \}_{\rm P} &=& - i \, t^a
\, V \, ,\\[2mm] \{R^a \, , \, V \}_{\rm P} &=& - i \, V \, t^a \, ,
\end{array} \eqno{(2.17)}$$ where $f^{abc}$ are the $SU(2)$ structure
constants.
Canonical quantization proceeds as usual. The Hilbert space of
this system is well known, and corresponds to that of the rigid rotator
\cite{19}.
Hence the Wigner $D$--functions $D^I_{MK}$ provide a realization in terms of
square integrable functions on the group manifold. It is important to note
that the Casimirs of the left and right symmetries both equal $I \, (I + 1)$.
Furthermore $M$ and $K$ label the weights of the left and right symmetries,
respectively. To determine the allowed values of $I$ one notes from (2.17)
that $V$ transforms as the $j = \frac{1}{2}$ representation under left and
right transformations. Thus $V$ acts as a tensor operator connecting
integer and
half--integer spins. It follows that $I$ can take the values
$I = 0, \frac{1}{2}, 1, \frac{3}{2} \dots$~. We therefore conclude
that on the decoupled
level the states have the structure $| \, \alpha \, j \, m >_{\rm F} \, |
\, I \, M
\, K >_{\rm B} \, | \, gh >$ where the subscripts F, B refer
to the fermionic and bosonic sectors, respectively. The allowed values
of the quantum numbers in these sectors are as discussed above.
Returning to the question as to which conditions are to be imposed on the
direct product space of the decoupled formalism in order
to recover the Hilbert space of
the original model, we note by inspection of (2.13) the existence of three
BRST symmetries (of which only two are independent). One of them acts in all
three sectors and is given by
$$\begin{array}{rcl}
\delta_1 \, \psi &=& c \, \psi \, , \\[2mm]
\delta_1 \, \psi^\dagger &=& \psi^\dagger \, c \, , \\[2mm]
\delta_1 \, V &=& - V \, c \, , \\[2mm]
\delta_1 \, V^{-1} &=& c \, V^{-1} \, , \\[2mm]
\delta_1 \, b &=& - j^{(0)} - R + \{b, \, c\} \, \, , \\[2mm]
\delta_1 \, c &=& \frac{1}{2} \, \{c, \, c\} \, .
\end{array} \eqno{(2.18)}$$
Here $\delta_1$ is a variational derivative graded with respect to Grassmann
number, \{~\} denotes a matrix anti--commutator and
$$\begin{array}{rcl} j^{(0)} &=& \displaystyle \sum_f \, (\psi^\dagger_f \, t^a
\, \psi_f) \, t^a \, ,\\[5mm] R &=& R^a \, t^a \; = \; \frac{1}{g} \, (i \,
\partial_t \, V^{-1}) \, V \, . \end{array} \eqno{(2.19)}$$
The other BRST symmetries act in the fermion--ghost and boson--ghost sectors,
respectively, and are given by
$$\begin{array}{rcl}
\delta_2 \, \psi &=& c \, \psi \, ,\\[2mm]
\delta_2 \, \psi^\dagger &=& \psi^\dagger \, c \, ,\\[2mm]
\delta_2 \, b &=& - j^{(0)} + \{b \, , c\} \, ,\\[2mm]
\delta_2 \, c &=& \frac{1}{2} \, \{c \, , \, c\} \, .
\end{array} \eqno{(2.20)}$$
as well as
$$\begin{array}{rcl}
\delta_3 \, V &=& - V \, c \, ,\\[2mm]
\delta_3 \, V^{-1} &=& c \, V^{-1} \, ,\\[2mm]
\delta_3 \, b &=& - R + \{b \, , \, c\} \, ,\\[2mm]
\delta_3 \, c &=& \frac{1}{2} \, \{c \, , \, c\} \, .
\end{array} \eqno{(2.21)}$$
The above transformations are nilpotent. Note, however, that they do
not commute.
Performing a canonical quantization, we define the ghost current
$$J_{\rm gh} = - : \, \{b \, , c\} \, : \eqno{(2.22)}$$
where : ~: denotes normal ordering with respect to the ghost vacuum. The
nilpotent charges $Q_i$ generating the transformations (2.18) -- (2.21),
i.e., $\delta_i \, \phi = [Q_i \, , \phi]$, with $\phi$ a generic field and
[~,~] a graded commutator, have the general form:
$$\begin{array}{rcl}
Q_1 &=& - {\rm tr} \, [c \, (j^{(0)} + R + \frac{1}{2} \, J_{\rm gh})] \,
,\\[3mm]
Q_2 &=& - {\rm tr} \, [c \, (j^{(0)} + \frac{1}{2} \, J_{\rm gh})] \, ,\\[3mm]
Q_3 &=& - {\rm tr} \, [c \, (R + \frac{1}{2} \, J_{\rm gh})] \, .
\end{array} \eqno{(2.23)}$$
We remarked above that the direct product Hilbert space associated with the
facto\-rized form of the partition function is much larger than that of the
original interacting model. We now inquire as to which of the above BRST
charges are required to vanish on the physical subspace (${\cal H}_{\rm ph}$)
in order to establish the isomorphism to the Hilbert space of the original
model.
We begin by showing that $Q_1$ is required to vanish on ${\cal H}_{\rm ph}$.
In order to illustrate the method, which will be used repeatedly, we briefly
sketch the main steps for the case at hand. To implement the change of
variables (2.10a) we make use of the identity
$$1 = \int \, [dV] \, \delta \, (B - V \, i \, \partial_t \, V^{-1}) \, \det
\, i \, D^{adj}_t \, (V) \eqno{(2.24)}$$
where the covariant derivative was defined in (2.12). Inserting this
identity into (2.9a), using the Fourier representation of the Dirac delta
and lifting the determinant by introducing Lie--algebra valued ghosts
$\tilde{b}$ and $\tilde{c}$ we have
$$Z = \displaystyle \int \, [d \, \eta^\dagger] \, [d \,
\eta] \, [dB] \, [d \, \lambda] \, [dV] \, [d \, \tilde{b}] \, [d \, \tilde{c}]
\, e^{i \int \, dt \, L'}\eqno{(2.25{\rm a})}$$
with
$$L' = L + \Delta L\eqno{(2.25{\rm b})}$$
where
$$\Delta L = {\rm tr} \, (\lambda \, (B - V \, i \, \partial_t \, V^{-1}) -
{\rm tr} \, (\tilde{b} \, i \, D^{adj}_t \, (V) \, \tilde{c}) \, .
\eqno{(2.25{\rm c})}$$
This Lagrangian is invariant under the BRST transformation
$$\begin{array}{rcl} \delta \, \eta &=& \delta \, \eta^\dagger \; = \; \delta
\, B \; = \; \delta \, \lambda \; = \; 0 \, ,\\[2mm] \delta \, \tilde{b} &=&
\lambda \, ,\\[2mm] \delta V &=& \tilde{c} \, V \, , \, \delta \, V^{-1} \; =
\; - V^{-1} \, \tilde{c} \, ,\\[2mm] \delta \, \tilde{c} &=& \frac{1}{2} \,
\{\tilde{c} \, , \, \tilde{c}\} \, . \end{array} \eqno{(2.26)}$$
One readily checks that this symmetry is nilpotent off--shell.
Noting that $\Delta L$ can be expressed as a BRST exact form
$$\Delta L = \delta \, (\tilde{b} \, (B - V \, i \, \partial_t \, V^{-1}))\,,
\eqno{(2.27)}$$
we conclude that equivalence with the original model is ensured on the subspace
of states annihilated by the corresponding BRST charge.
Next we show that the transformation (2.26) is in fact equivalent to the
transformation (2.18). Using the equation of motion for $\lambda$ and $B$ we
obtain the BRST transformation rules:
$$\begin{array}{rcl}
\delta \psi &=& \delta \, \psi^\dagger \; = \; 0 \, ,\\[2mm]
\delta \, \tilde{b} &=& - \frac{1}{g} \, V \, i \, \partial_t \, V^{-1} - j
\, ,\\[5mm]
\delta \, \tilde{c} &=& \frac{1}{2} \, \{\tilde{c} \, , \, \tilde{c}\} \,
,\\[4mm]
\delta \, V &=& \tilde{c} \, V \, ,\\[2mm]
\delta \, V^{-1} &=& - V^{-1} \, \tilde{c} \, .
\end{array} \eqno{(2.28)}$$
One readily checks that this is a symmetry of the action with Lagrangian
$$L = \eta^\dagger_f \, (i \, \partial_t + V \, i \, \partial_t \, V^{-1} + m)
\, \eta_f + \frac{1}{2g} \, {\rm tr} \, (V \, i \, \partial_t \, V^{-1})^2 -
{\rm tr} \, (\tilde{b} \, i \, D^{adj}_t \, (V) \, \tilde{c}) \, .
\eqno{(2.29)}$$
obtained after integrating out $\lambda$ and $B$.
Finally we return to the decoupled partition function (2.13) by transforming
to the free fermions and ghosts
$$\begin{array}{rcl}
\psi_f &=& V^{-1} \, \eta_f \, ,\\[2mm]
c &=& - V^{-1} \, \tilde{c} \, V \, ,\\[2mm]
b &=& V^{-1} \, \tilde{b} \, V \, .
\end{array} \eqno{(2.30)}$$
In terms of these variables the BRST transformations (2.28) become those of
(2.18). This demonstrates our above claim that the BRST charge generating
the transformation (2.18) has to vanish on ${\cal H}_{\rm ph}$ to ensure
equivalence with the original model.
An alternative way of proving the above statement is to note that the decoupled
Lagrangian $L^{(0)}$ of (2.13) can be expressed in terms of the original
fermionic Lagrangian $L_F$ of (2.6b) plus a $\delta_1$ exact part. In terms
of the free fermions $ \psi_f$ and interacting fermions $\eta_f = V^{-1} \,
\psi_f$, we may rewrite $L^{(0)}$ as:
$$L^{(0)} = \eta^\dagger_f \, (i \, \partial_t + m) \, \eta_f + g \, {\rm tr}
\, (R j^{(0)}) + \frac{g}{2} \, {\rm tr} \, R^2 + {\rm tr} \, (b \, i \,
\partial_t \, c) \eqno{(2.31)}$$
which may be put in the form
$$L^{(0)} = \eta^\dagger_f \, (i \, \partial_t + m) \, \eta_f - \frac{g}{2} \,
{\rm tr} \, j^2 + \frac{1}{2} \, {\rm tr} \, (b \, i \, \partial_t \, c) -
\frac{g}{2} \, \delta_1 \, [{\rm tr} \, b \, (R + j^{(0)})] \, .
\eqno{(2.32)}$$
Comparing with (2.6) we see that $L^{(0)}$ and $L_F$ just differ by a BRST
exact term (up to a decoupled free ghost term). Hence we recover the original
fermion dynamics on the sector which is annihilated by the BRST charge $Q_1$.
Therefore only the first of the three BRST symmetries (2.18) -- (2.21)
has to be imposed on the states. To see what this implies, we now solve the
cohomology problem associated with $Q_1$.
As usual \cite{20} we solve the cohomology problem in the zero ghost number
sector
$b^ a \, | \, \Psi >_{\rm ph} \, = 0$. The condition $Q_1 \, | \, \Psi >_{\rm
ph} \, = 0$ is then equivalent to (see (2.22) and (2.23))
$$(j^{(0)} + R) \, | \, \Psi >_{\rm ph} = 0 \, . \eqno{(2.33)}$$
The physical states $| \, \Psi >_{\rm ph}$ are thus singlets under the total
current $J = j^{(0)} + R$. We have already established the general structure of
states on the decoupled level and it is now simple to write down the solution
of the cohomology problem (2.33); it is
$$| \, \alpha \, j \, m >_{\rm ph} \, = \sum_M < j \, M \, j - M \, | \, 00 >
\, | \, \alpha \, j \, M >_{\rm F} \, | \, j - M \, m >_{\rm B} \, | \, 0
>_{\rm gh} \eqno{(2.34)}$$
with $< j \, M \, j - M \, | \, 00 >$ the
Clebsch--Gordan coefficients. We note that (2.34) restricts the a priori
infinite number of $SU(2)$ representations carried by boson Fock space to those
carried by fermion Fock space. Equation (2.34) shows that every state in
Fermion Fock space gives rise to exactly one physical state. This establishes
the isomorphism between the decoupled formulation and the original model on the
physical Hilbert space.
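For instance, for $j = \frac{1}{2}$ the solution (2.34) reads explicitly
$$| \, \alpha \, {\textstyle\frac{1}{2}} \, m >_{\rm ph} \, = \frac{1}{\sqrt{2}}
\left( | \, \alpha \, {\textstyle\frac{1}{2}} \, {\textstyle\frac{1}{2}} >_{\rm F}
\, | \, {\textstyle\frac{1}{2}} \, - {\textstyle\frac{1}{2}} \, m >_{\rm B}
- | \, \alpha \, {\textstyle\frac{1}{2}} \, - {\textstyle\frac{1}{2}} >_{\rm F}
\, | \, {\textstyle\frac{1}{2}} \, {\textstyle\frac{1}{2}} \, m >_{\rm B} \right)
| \, 0 >_{\rm gh}\ ,$$
corresponding to the Clebsch--Gordan coefficients $< \frac{1}{2} \,
\pm\frac{1}{2} \, \frac{1}{2} \, \mp\frac{1}{2} \, | \, 00 > = \pm
\frac{1}{\sqrt{2}}$: the fermionic doublet combines with the bosonic doublet
into a singlet of the total current $J$.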
We note that the BRST condition can also be interpreted as a bosonization rule
which states that on the physical subspace the following replacements may be
made: $j^{(0)}\rightarrow -R$. This bosonization dictionary can be completed
by constructing physical operators, i.e., the operators that commute with the
BRST charge $Q_1$. Once this has been done, a set of rules result according to
which every fermion operator can be replaced by an equivalent bosonic operator.
It is easy to check that the following operators are BRST invariant
$$\begin{array}{rcl} 1, & & \eta^\dagger_f \, \eta_f \, ,\\[3mm] \eta_f &=& V
\, \psi_f \, , \, \eta^\dagger_f \; = \; \psi^\dagger_f \, V^{-1} \, ,\\[3mm]
j^a_f &=& \psi^\dagger_f \, V^{-1} \, t^a \, V \, \psi_f \; = \; \eta^\dagger_f
\, t^a \, \eta_f \, ,\\[3mm] L^a &=& \frac{1}{g} \, {\rm tr} \, (V \, i \,
\partial_t \, V^{-1} \, t^a) \, . \end{array} \eqno{(2.35)}$$
We recognize in $\eta_f$, $L^a$ and $j^a_f$ the (physical) fermion fields,
boson fields $B^a = g \, L^a$, and generators of the $SU(2)$ color symmetry
associated with the partition function (2.6) of the original model. Note
that the currents $R^a = \frac{1}{g} \, {\rm tr} \, (i \, \partial_t \,
V^{-1}) \, V \, t^a$ appearing in the BRST charge are not BRST invariant. Once
the physical operators have been identified, the physical Hilbert space can be
constructed in terms of them. In this way the isomorphism (2.34) can also be
established.
As we have now demonstrated explicitly the only condition that physical states
are required to satisfy in order to ensure the above isomorphism is that $Q_1
\, | \, \Psi >_{\rm ph} = 0$. It is, however, interesting to examine what
further
restrictions would result by imposing that a state be annihilated by all three
nilpotent charges. Since these charges do not commute, it raises the question
as to whether this is a consistent requirement. This leads us to consider the
algebra of those BRST charges. One finds
$$\begin{array}{rcl} [Q_{\alpha} \, , \, Q_{\beta}] &=& K_{(\gamma)}
\;\, (\alpha \, , \, \beta \, , \, \gamma \quad \mbox {\rm cyclic}) \, ,\\[3mm]
K_{(\gamma)} &=& - \frac{1}{2} \, f^{abc} \, J^a_{(\gamma)} \, c^b \, c^c
\end{array} \eqno{(2.36)}$$
where $J^a_{(\gamma)} = j^a \, , \, R^a$ and $j^a + R^a$ for $\gamma = 1 \, ,
\, 2 ~\mbox {\rm and} ~3 \, ,$ respectively. The $K_{(\gamma)}$ are nilpotent
and further have the properties $[K_{(\gamma)} \, , \, Q_{\alpha} ] =
0$ and $[K_{(\gamma)} \, , \, K_{(\gamma^\prime)} ] = 0$.
The $K_{(\gamma)}$ generate the infinitesimal transformation
$$\begin{array}{rclcl} [K_1, \, \psi] &=& -\frac{1}{2} \, \{c \, , \, c \}
\, \psi \, , \quad
[K_1, \, b] &=& \{j \, , \, c\} \, ,\\[5mm]
[K_2, \, V]
&=& \frac{1}{2} \, \{c \, , \, c \} \, V \, , \quad
[K_2, \, b] &=& \{R \, , \,
c\} \end{array} \eqno{(2.37)}$$
with $K_1 + K_2 = K_3$. As before $[~,~]$ denotes a graded commutator and
$\{~,~\}$ a matrix anti-commutator. All other transformations vanish. They
are easily checked to represent a symmetry of the action, as is required by
consistency.
From eq (2.37) we note that the conditions $Q_{\alpha} \, | \, \Psi > \,
= 0 \;
(\alpha = 1 \, , \, 2 \, , \, 3)$ can only be consistently imposed if we
require $K_{(\gamma)} \, | \, \Psi > \, = 0 \; (\gamma = 1 \, , \, 2 \, , \,
3)$ as well. The implementation of all three conditions would
restrict the physical (ghost number zero) states to be
singlets with respect to the physical fermionic currents generating the $SU(2)$
symmetry.
From eq (2.1) we note that for $g > 0$ the ground--state is a singlet.
There is in
fact a double degeneracy since there is a double multiplicity in the singlet
sector for an arbitrary number of flavors. By restricting to this subspace
one is therefore effectively studying the ground--state sector of the model.
\section{Non-abelian bosonization by coset factorization}
Consider the partition function of free fermions in the fundamental
representation of $U(N)$.
As mentioned in the introduction, the approach of ref. \cite{10, 12}
to the bosonization of such fermions in two dimensions
is most naturally implemented by factoring
from the corresponding partition function $Z^{(0)}_F$ a topological
$U(N)/U(N)$ coset carrying the fermion and chiral selection rules
associated with the fermions, but no dynamics \cite{11}.
In factoring out this coset, the bosonization BRST symmetry of ref.
\cite{10,12} is also uncovered.
To emphasize the care with which BRST symmetries have to be implemented
in the identification of
physical states, we note that if we were to ignore the BRST constraint
linking the coset sector to the remaining WZW sector, we would conclude
that the
spectrum of free fermions is not equivalent to a WZW model, but to
the direct product of the coset model and the WZW model. In particular we
would conclude
that this spectrum is $N$-fold degenerate.
A correct interpretation thus requires a careful analysis of
the BRST symmetries associated
with the introduction of additional degrees of freedom in the path integral,
and those associated with changes of variables.
As we show in this section the original spectrum of free fermions
is obtained, if the BRST symmetries of the physical states
are correctly identified.
To discuss the BRST cohomology associated with
the bosonization BRST, it is useful to decouple the coset again, that is,
we work with the fermionic coset in its decoupled form.
The reason for doing this is that it is
more convenient to analyze the physical spectrum of the coset model
in the decoupled formulation
\cite{9}.
As in the quantum mechanical models discussed above, one can proceed with the
bosonization procedure and only after the final action has been obtained,
the BRST symmetries are identified by inspection. The disadvantage of this
procedure, as became abundantly clear in our discussion above, is that one does
not recognize which of these BRST charges should be imposed as symmetries of
the physical states to ensure equivalence with the original free fermion
dynamics. Instead we follow here the procedure used above to identify
the relevant BRST charges from first principles.
In subsection 3.1 we review briefly the main results of \cite{11}, showing
how the bosonization BRST arises by an argument similar to that of section 2.
In subsection 3.2 we proceed to rewrite the
coset in the decoupled form, keeping track of the bosonization BRST and the
new BRST that arises when the decoupling is performed. In the last part of
this section we briefly discuss the structure of the physical Hilbert space.
\subsection{Bosonization BRST}
As explained in ref. \cite{11} the partition function of free Dirac
fermions in the fundamental
representation of $U(N)$ can be written as:
$$
Z_F^{(0)}=Z_{U(N)/U(N)} \times Z_{\rm WZW} \eqno{(3.1.1{\rm a})}
$$
where $Z_{U(N)/U(N)}$ is the partition function of a ${U(N)}/{U(N)}$
coset,
$$\begin{array}{rcl}
Z_{U(N)/U(N)} &=&\int [d\eta][d\bar\eta] \int [d(ghosts)] \int [dB_-]\\
&&\times\, e^{i\int d^2x\{ \eta^\dagger_1 i\partial_+ \eta_1
+\eta^\dagger_2 (i\partial_- + B_-){\eta}_2\}}
e^{i\int d^2x {\rm tr} b_- i\partial_+ c}\end{array}
\eqno{(3.1.1{\rm b})}
$$
and $Z_{WZW}$ is the partition function
$$
Z_{WZW} = \int [dg] e^{i\Gamma[g]} \eqno{(3.1.1{\rm c})}
$$
of a Wess-Zumino-Witten (WZW) field $g$ of level one,
with $\Gamma [g]$ the corresponding action \cite{21}
$$\begin{array}{rcl}
\Gamma[g]& = & \frac{1}{8\pi}\int\!d^2 x\,{\rm tr}(\partial_{\mu}
g\,\partial^{\mu} g^{-1})\\
&& +\frac{1}{12\pi}\int_{\Gamma}\!d^3 x\, \epsilon^{\mu \nu\rho}
{\rm tr}(g\partial_{\mu} g^{-1}g\partial_{\nu}g^{-1}
g\partial_{\rho} g^{-1}) \quad. \end{array}
\eqno{(3.1.2)}
$$
Since, as we have seen in section 2,
the Hilbert space of the bosonic sector
(described by the WZW action in the case in question) is in general
much larger than that of the original fermionic description,
the question arises
as to which constraints must be imposed in order to ensure equivalence
of the two formulations. We now show that if the BRST symmetry of the
physical states is correctly identified, the original spectrum of free
fermions is recovered.
We begin by briefly reviewing the steps leading to the factorized
form (3.1.1), with the objective of establishing systematically which
of the
BRST symmetries should be imposed on the physical states.
We start with the partition function of
free Dirac fermions in the fundamental
representation of $U(N)$,
$$
Z_{\rm F}^{(0)}=\displaystyle\int\,[d\eta][d\bar\eta]
e^{i\int d^2x [\eta^{\dagger}_1 i \partial_+ \eta_1
+ \eta^{\dagger}_2 i \partial_- \eta_2]}\,. \eqno{(3.1.3)}
$$
Following ref. \cite{11} we enlarge
the space by introducing bosonic $U(N)$
Lie algebra valued fields $B_-=B_-^at^a$ (${\rm tr}(t^at^b)=\delta^{ab}$)
via the
identity
$$
1 = \displaystyle \int \, [d \, B_-] \, e^{i\int \, d^2
\, x \, [\eta^\dagger_2 \, B_- \, \eta_2]} \, \delta \, [B_-]\,.\eqno{(3.1.4)}
$$
Using a Fourier representation of the Dirac
Delta functional by introducing an
auxiliary field $\lambda_+$, the partition function
(3.1.3) then takes
the alternative form
$$
Z^{(0)}=\int [d\eta][d\bar\eta]\int [d\lambda_+][dB_-]
e^{i\int\{\eta^\dagger_1i\partial_+\eta_1
+\eta^\dagger_2(i\partial_- +B_-)\eta_2 + {\rm tr} \lambda_+
B_-\}}\eqno{(3.1.5)}
$$
where $\lambda_+$ are again $U(N)$ Lie algebra
valued fields.
We now make the
change of variable $\lambda_+ \to g$ defined by $\lambda_+=\alpha
g^{-1}i\partial_+ g$
where $g$ are $U(N)$ group-valued fields.
The Jacobian associated with this transformation is ambiguous
since we do not have gauge invariance as a guiding principle.
For reasons to become apparent later, we
choose it to be defined with respect
to the Haar measure $g\delta g^{-1}$. Noting that
$$
\delta (g^{-1}i\partial_+ g) = -g^{-1}
i\partial_+(g\delta g^{-1})g\eqno{(3.1.6)}
$$
we have for the corresponding Jacobian,
$$
J=\int[d\tilde b_-][d c_-]
e^{i\int {\rm tr}(g\tilde b_-g^{-1})\partial_+ c_-}\,. \eqno{(3.1.7)}
$$
The partition function (3.1.5) then takes the form
$$\begin{array}{rcl}
Z^{(0)}_F&=&\int [d\eta][d\bar\eta]\int [d(ghosts)]\int [dg][dB_-]
e^{i\int d^2x\{\eta^\dagger_1i\partial_+\eta_1
+ \eta^\dagger_2(i\partial_- +B_-)\eta_2\}}\\
&&\times\, e^{i\int d^2x\{\alpha {\rm tr} (g^{-1}i\partial_+gB_-)+
{\rm tr}\tilde b_-g^{-1}i(\partial_+c_-)g\} }\,.\end{array}
\eqno{(3.1.8)}
$$
There is a BRST symmetry associated with the change of variable $\lambda_+
\to g$.
In order to discover it we systematically
perform this change of variable
by introducing in (3.1.5) the identity
$$
1=\int[dg]J\delta[\lambda_+ -\alpha g^{-1}i\partial_+g] \eqno{(3.1.9)}
$$
where $J$ is the Jacobian defined in (3.1.7).
Using the Fourier representation
for the delta functional we are thus led to the alternative form for the
partition function,
$$\begin {array}{rcl}
Z^{(0)}&=&\int [d\eta][d\bar\eta]\int [d(ghosts)]\int [dg][dB_-]\int
[d\lambda_+][d\rho_-]
e^{iS_{{\rm aux}}}\\
&&\times\,e^{i\int\{\eta^\dagger_1i\partial_+\eta_1
+\eta^\dagger_2(i\partial_- +B_-)
\eta_2+{\rm tr} \lambda_+B_-\}}\end {array}
\eqno{(3.1.10)}
$$
where
$$
S_{{\rm aux}}=\int d^2x {\rm tr}\{\rho_-(\lambda_+ - \alpha
g^{-1}i\partial_+g) +
g\tilde b_-g^{-1}i\partial_+c_- \}\,.\eqno{(3.1.11)}
$$
The auxiliary action, $S_{{\rm aux}}$, is
evidently invariant under the off-shell
nilpotent transformations
$$\begin{array}{rcl}
&&\delta_1B_-=\delta_1\rho_-=\delta_1\lambda_+ =\delta_1\eta_1 =
\delta_1\eta_2 = 0\,,\\
&&\delta_1 g g^{-1} = c_-\,,\\
&&\delta_1 \tilde b_- = \alpha \rho_-,
\quad \delta_1 c_- = {1\over 2}{\{c_- , c_-\}}\,.
\end{array}\eqno{(3.1.12)}
$$
We now observe that $S_{{\rm aux}}$ may be written as
$$
S_{{\rm aux}} = {1\over\alpha}
{\delta_1{\rm tr}\tilde b_- (\lambda_+ - \alpha g^{-1}i\partial_+ g)}\,.
\eqno{(3.1.13)}
$$
Hence $S_{{\rm aux}}$ is BRST exact, so that equivalence of the two partition
functions is guaranteed on the (physical) states invariant under the
transformations (3.1.12).
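Indeed, using $\delta_1 g\, g^{-1} = c_-$ one finds
$$
\delta_1(g^{-1}i\partial_+ g) = -g^{-1}c_-\,i\partial_+ g
+ g^{-1}i\partial_+(c_- g) = g^{-1}(i\partial_+ c_-)\,g\,,
$$
so that, applying the graded Leibniz rule in (3.1.13), the $\rho_-$ term of
(3.1.11) arises from $\delta_1 \tilde b_- = \alpha\rho_-$ and the ghost term
from the variation of $g^{-1}i\partial_+ g$ above.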
Integrating over $\rho_-$ and $\lambda_+$
the BRST transformations (3.1.12) are replaced by
$$\begin{array}{rcl}
&&\delta_1 B_- = \delta_1\eta_1 = \delta_1\eta_2 = 0\,,\\
&&\delta_1g g^{-1} = c_-\,,\\
&&\delta_1 \tilde b_- = -\alpha B_-,
\quad \delta_1 c_- = {1\over 2}{\{c_- , c_-\}}\,.\end{array}
\eqno{(3.1.14)}
$$
and the partition function (3.1.10) reduces to
(3.1.8).
We now further make the change of variables
$$
\eta_2 \to {\eta'}_2 = g\eta_2,
\quad \tilde b_- \to b_- = g\tilde b_- g^{-1}\,.\eqno{(3.1.15)}
$$
The transformation $\tilde b_- \to b_-$ has
Jacobian one. The Jacobian
associated with the transformation
$\eta_2 \to {\eta'}_2$ is, on the other hand, given by
$$
J_F = e^{i\Gamma[g]-\frac
{i}{4\pi}\int d^2x {\rm tr}(B_- g^{-1}i\partial_+ g)}\,. \eqno{(3.1.16)}
$$
Notice that $J_F$ contains the contribution
from the non-abelian, as well as
abelian $U(1)$ anomaly. For the choice $\alpha = {1\over {4\pi}}$ the
second term in $\ln J_F$ cancels the term proportional to $\alpha$ in
(3.1.8).
Noting that $g(i\partial_- + B_-)g^{-1} = i\partial_- + B_-'$ with
$B_-' = gB_-g^{-1} + gi\partial_- g^{-1}$, using $[dB] = [dB']$, and
streamlining the notation by dropping ``primes'' everywhere,
the partition function (3.1.8)
reduces to (3.1.1a), and
the BRST transformations (3.1.14) now read in terms of the new variables,
$$\begin{array}{rcl}
&&\delta_1g g^{-1} = c_-\,,\\
&&\delta_1 \eta_2 = c_- \eta_2, \quad \delta_1 \eta_1 = 0\,,\\
&&\delta_1 b_- = -\frac{1}{4\pi}B_- +
\frac{1}{4\pi}gi\partial_- g^{-1} + \{b_-, c_-\}\,,\\
&&\delta_1 c_- = \frac{1}{2}\{c_-, c_-\}\,,\\
&&\delta_1 B_-= [c_-, B_-] - i\partial_- c_-\,.
\end{array}\eqno{(3.1.17)}
$$
As one readily checks, they represent a symmetry
of the partition function (3.1.1a). We see that these BRST conditions
couple the matter sector ($g$) to the coset sector.
As we have shown, they must be symmetries of the physical states.
\subsection{BRST analysis of coset sector}
It is inconvenient to analyze the cohomology problem with the $U(N)/U(N)$
coset realized in the present form as a constrained
fermion system. Instead it is
preferable to decouple \cite{9} in (3.1.1b) the $B_-$
field from the fermions, in order
to rewrite the coset partition function in
terms of free fermions, negative
level WZW fields and ghosts. As we now show, this procedure
leads to an additional BRST symmetry.
Concentrating now on the coset sector we
introduce in (3.1.1b) the identity
$$
1=\int [d \rho_+][d h] [d \tilde b_+] [d \tilde c_+]
e^{i\tilde S_{\rm aux}}\eqno{(3.2.1)}
$$
with
$$
\tilde S_{\rm aux}= \displaystyle\int\,d^2x {\rm tr}(\rho_+
[B_- - h i\partial_-h^{-1}])
+ {\rm tr}(\tilde b_+ iD_-(h) \tilde c_+)\eqno{(3.2.2)}
$$
where $h$ is a $U(N)$ group-valued field,
$D_-(h)=\partial_-+[h\partial_- h^{-1},]$ and, like the $\tilde
b_-,c_-$--ghosts,
the $\tilde b_+,\tilde c_+$--ghosts
transform in the adjoint representation.
Note that, unlike in the previous case,
the representation of the coset
as a gauged fermionic system has led us
to define the Jacobian with respect
to the Haar measure $h\delta h^{-1}$:
$$
\delta(h\partial_- h^{-1}) = \det D_-(h)h\delta h^{-1} \,.\eqno{(3.2.3)}
$$
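For orientation, the variation underlying (3.2.3) can be worked out explicitly:
writing $\epsilon=h\delta h^{-1}$ (so that $h^{-1}\delta h=-h^{-1}\epsilon h$),
a short computation gives
$$
\delta(h\partial_-h^{-1})=\partial_-\epsilon+[h\partial_-h^{-1},\epsilon]
=D_-(h)\,(h\delta h^{-1})\,,
$$
which is why the Jacobian is naturally defined with respect to the Haar measure
$h\delta h^{-1}$ and equals $\det D_-(h)$.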
$\tilde S_{{\rm aux}}$ is invariant under the off-shell nilpotent
transformation
$$\begin{array}{rcl}
\delta_2 \rho_+ &=& \delta_2 B_- = 0 \,,\\
h\delta_2 h^{-1} &=& \tilde c_+ \,,
\quad
\delta_2 \tilde b_+ =\rho_+\,,
\quad
\delta_2 \tilde c_+ = {-\frac{1}{2}}{\{\tilde c_+,\tilde c_+\}}
\end{array}\eqno{(3.2.4)}
$$
and $\tilde S_{{\rm aux}}$ is readily seen to be an exact form with respect
to this transformation. As before we thus conclude that the
original dynamics is recovered on the subspace
annihilated by the corresponding
BRST charge.
Following the previous steps we find, upon
introducing the identity (3.2.1)
in (3.1.1b) and integrating over $\rho_+$ and $B_-$,
$$\begin{array}{rcl}
Z_{U(N)/U(N)} &=& \int [d\eta][d\bar\eta]\int [dh] [d(ghosts)]
e^{i\int d^2x\{\eta_1^\dagger i\partial_+ \eta_1 +
{\eta}_2^\dagger (i\partial_- + hi\partial_- h^{-1}){\eta}_2\}}\\
&&\times\,e^{i\int d^2x \{{\rm tr}(b_-i\partial_+ c_-) + {\rm tr}(\tilde
b_+ iD_-(h)
\tilde c_+)\}}\,.
\end{array}\eqno{(3.2.5)}
$$
This partition function is seen to be invariant
under the BRST transformation
$$
h\delta_2 h^{-1} = \tilde c_+\,,
\quad
\delta_2 \tilde b_+ = {\eta}_2 {\eta}_2^\dagger \,,
\quad
\delta_2 \tilde c_+ = {-\frac{1}{2}}{\{\tilde c_+,\tilde c_+\}} \,,
\delta_2 \eta_1 = 0, \quad \delta_2 {\eta}_2 = 0\eqno{(3.2.6)}
$$
obtained from (3.2.4) after making use of the
equations of motion associated with a general variation in $\rho_+$ and $B_-$.
We now decouple the fermions by making the change of variable
${\eta}_2 \to \psi_2$ with
${\eta}_2 = h\psi_2$.
Correspondingly the last variation in (3.2.6) is replaced by
$\delta_2 \psi_2 = h^{-1} \tilde c_+ h\,\psi_2$. Taking account of the Jacobian
exp${-i\Gamma[h]}$ associated with this change of variable
and setting $\eta_1 = \psi_1$ to further streamline the notation,
the coset partition function (3.2.5) reduces to
$$\begin{array}{rcl}
Z_{U(N)/U(N)} &=&\int[d\eta][d\bar\eta] \int [d(ghosts)] \int [dh]
e^{-i\Gamma[h]}\\
&&\times\,e^{i\int d^2x
\{\psi_1^\dagger i\partial_+ \psi_1 + \psi_2^\dagger i\partial_- \psi_2
+ {\rm tr}(b_- i\partial_+ c_-) + {\rm tr}(\tilde b_+ iD_-(h) \tilde c_+)\}
}\,. \end{array}\eqno{(3.2.7)}
$$
Finally we also decouple the ghosts $\tilde b_+, \tilde c_+$ by making the
change of variables $\tilde b_+ \to b_+, \tilde c_+ \to c_+$ defined by
$$
\tilde b_+ = hb_+ h^{-1} , \quad \tilde c_+ = hc_+ h^{-1}\,.\eqno{(3.2.8)}
$$
Only the $SU(N)$ part of $h$ contributes a non-trivial Jacobian. Setting
$h = v\hat h$ with $v \,\in\, U(1)$ and $\hat h\, \in\, SU(N)$, we have
$$
[d\tilde b_+][d\tilde c_+] = e^{-iC_V \Gamma[\hat h]} [d b_+][d c_+]\eqno{(3.2.9)}
$$
where $C_V$ is the quadratic Casimir in the adjoint representation. Making
further use of the Polyakov-Wiegmann identity one has
$\Gamma[h] = \Gamma[\hat h] + \Gamma[v]$, and our final result for the coset
partition function reads
$$
Z_{U(N)/U(N)} = \int [d\eta][d\bar\eta]\int [d(ghosts)]\int [dv][d\hat h]
e^{iS_{U(N)/U(N)}} \eqno{(3.2.10)}
$$
with
$$\begin{array}{rcl}
S_{U(N)/U(N)} &=& -\Gamma[v]-(1+C_V)\Gamma[\hat h] + \int d^2x \{
\psi_1^\dagger i\partial_+ \psi_1 + \psi_2^\dagger i\partial_- \psi_2\}\\
&+&\int d^2x\{ {\rm tr}(b_- i\partial_+ c_-) + {\rm tr}(b_+ i\partial_- c_+)
\}\,.\end{array}\eqno{(3.2.11)}
$$
Notice that $v$ and $\hat h$ correspond to level $-1$ and $-(1+C_V)$ fields,
respectively. Notice also that the ghost term contains the $SU(N)$ as well as
$U(1)$ contributions.
In terms of the new variables the BRST conditions (3.2.6) read (notice
in particular the changes with regard to the first and last variations)
$$\begin{array}{rcl}
&&h^{-1} \delta_2 h = -c_+ \,,\\
&&\delta_2\psi_1 = 0 \,,
\quad
\delta_2\psi_2 = c_+\psi_2 \,,\\
&&\delta_2 b_+ = \psi_2\psi_2^\dagger
- \frac{1}{4\pi}v^{-1} i\partial_+ v
- \frac{(1+C_V)}{4\pi}\hat h^{-1} i\partial_+ \hat h
+ \{b_+,c _+\} \,,\\
&&\delta_2 c_+ = \frac{1}{2}\{c_+,c_+\} \,.\\
\end{array}\eqno{(3.2.12)}
$$
Notice that we have included in the
transformation law for $\delta_2 b_+$ an
anomalous piece proportional to $(1+C_V)$, in order to
compensate the contribution
coming from the variation of the (anomalous) first two terms in (3.2.11)
arising from the Jacobians of the transformations.
Finally, returning to the transformation laws (3.1.17), and
recalling that, according to (3.2.2)
$B_- = hi\partial_- h^{-1}$, these transformations
are to be replaced by
$$\begin{array}{rcl}
&&g\delta_1 g^{-1} = -c_- ,\quad
h\delta_1 h^{-1} = -c_- \,,\\
&&\delta_1\psi_1 = \delta_1\psi_2 = 0\,,\\
&&\delta_1b_- = \frac{1}{4\pi} gi\partial_- g^{-1}
-\frac{(1+C_V)}{4\pi}hi\partial_- h^{-1}
+ \{b_-,c_-\} \,,\\
&&\delta_1c_- = \frac{1}{2}\{c_-,c_-\}\,.
\end{array}\eqno{(3.2.13)}
$$
where an anomalous piece has again been included in the variation for $b_-$
in order to compensate for the corresponding contribution coming from the
Jacobian in (3.2.9).
The corresponding BRST charges are obtained
via the usual Noether construction,
and are found to be of the general form
$$
\Omega_{\pm} = {\rm tr} c_{\pm} [\Omega_{\pm} -
\frac{1}{2}\{c_{\pm},c_{\pm}\}]\eqno{(3.2.14)}
$$
for the
$SU(N)$ and $U(1)$ pieces separately. For the $U(1)$
piece the anticommutator of
the ghosts vanishes, of course. Setting
$g = u\hat g$, $u\, \in\, U(1)$ ,
$\hat g \,\in\, SU(N)$ and noting that $\Gamma[g] = \Gamma[u] +
\Gamma[\hat g]$, we find
$$\begin{array}{rcl}
&&\Omega_- = \frac{1}{4\pi}ui\partial_- u^{-1} - \frac{1}{4\pi}vi\partial_-
v^{-1}\,,\\
&&\Omega_+ = {\rm tr}(\psi_2 \psi_2^\dagger) -
\frac{1}{4\pi}v^{-1}i\partial_+ v\,,\\
&&\Omega_-^a = {\rm tr} t^a [\frac{1}{4\pi}gi\partial_- g^{-1}
- \frac{(1+C_V)}{4\pi} hi\partial_- h^{-1} + \{b_-,c_-\}]\,,\\
&&\Omega_+^a = {\rm tr} t^a[\psi_2\psi_2^\dagger -
\frac{(1+C_V)}{4\pi}h^{-1}i\partial_+h
+ \{b_+,c_+\}]\end{array}\eqno{(3.2.15)}
$$
where $a = 1,...,N^2-1$ . All these operators are required to annihilate
the physical states, as we have seen.
By going over to canonical variables
and using the results of ref. \cite{9},
one easily verifies that these constraints
are first class with respect to themselves (vanishing central extension).
Indeed, define (tilde stands for ``transpose'')
$$\begin{array}{rcl}
&&\tilde{\hat\Pi}^{g}=\frac{1}{4\pi}\partial_0 g^{-1}\,,\\
&&\tilde{\hat\Pi}^{h}=-\frac{(1+C_V)}{4\pi}\partial_0
h^{-1}\,.\end{array}\eqno{(3.2.16)}
$$
Canonical quantization then implies the Poisson algebra (see ref.
\cite{2,22} for derivation; $g$ stands for a generic field)
$$\begin{array}{rcl}
&&\{g_{ij}(x),\hat\Pi^{g}_{kl}(y)\}_P=\delta_{ik}\delta
_{jl}\delta(x^1-y^1)\,,\\
&&\{\hat\Pi_{ij}^{g}(x),\ \hat\Pi^{g}_{kl}(y)\}_P
=-\frac{1}{4\pi}\left(\partial_1g^{-1}_{jk}g^{-1}_{li}-g_{jk}
^{-1}\partial_1g^{-1}_{li}\right)\delta (x^1-y^1)\,. \end{array}\eqno{(3.2.17)}
$$
In terms of canonical variables, we have for the constraints (3.2.15)
$$\begin{array}{rcl}
&&\Omega_+= {\rm tr}[\psi_2\psi^\dagger_2
- i\tilde{\hat\Pi}^{h}h -\frac{1}{4\pi}h^{-1}i\partial_1 h]\,,\\
&&\Omega_-= {\rm tr}[ig\tilde\Pi^{g} - \frac{1}{4\pi} gi\partial_1 g^{-1}
+ih\tilde{\hat\Pi}^{h}+\frac{1}{4\pi}hi\partial_1 h^{-1}]\\
&&\Omega^a_+= {\rm tr}t^a[\psi_2\psi^\dagger_2
- i\tilde{\hat\Pi}^{h}h -\frac{(1+C_V)}{4\pi}h^{-1}i\partial_1 h
+ \{b_+,c_+\}]\,,\\
&&\Omega^a_-= {\rm tr}t^a[ig\tilde\Pi^{g} - \frac{1}{4\pi} gi\partial_1 g^{-1}
+ih\tilde{\hat\Pi}^{h}+\frac{(1+C_V)}{4\pi}hi\partial_1 h^{-1}
+ \{b_-,c_-\}]\,.\end{array}\eqno{(3.2.18)}
$$
With the aid of the Poisson brackets (3.2.17) it is straightforward
to verify that $\Omega_{\pm}$ and $\Omega^a_{\pm}$ are
first class:
$$
\left\{\Omega_\pm(x),\Omega_\pm(y)\right\}_P=0, \quad
\left\{\Omega_\pm^a(x),\Omega^b_\pm(y)\right\}_P
=-f_{abc}\Omega^c_\pm\delta(x^1-y^1) \,.\eqno{(3.2.19)}
$$
Hence the corresponding BRST charges are nilpotent.
The physical Hilbert space is now obtained by solving the cohomology problem
associated with the BRST charges $\Omega_\pm$ in the
ghost-number zero sector. This can be done in two ways. One can either
solve the cohomology problem
in the Hilbert space explicitly, i.e., find the states which are annihilated
by the BRST charges, but which
are not exact, as was done in section 2. Alternatively one can construct
the physical operators, which
commute with the BRST charges, in terms of which the physical Hilbert space can
be constructed.
Here we prefer to follow the second approach as it is more transparent.
For completeness let us indicate
how the analysis would proceed in the first approach.
Although technically more involved, the analysis parallels that of section 2.
One first notes that on the
decoupled level the Hilbert space is the direct product of four sectors,
namely, a free fermion sector,
positive - and negative level WZW sectors and a ghost sector. Each of the
sectors
is again the direct
product of left and right moving sectors. For the matter fields the left
and right
moving sectors are not
independent, but have to be combined in a specific way, namely,
they must belong to the same
representation of the Kac-Moody algebra \cite{23}. This is analogous
to the quantum mechanical model
discussed in section 2 where the left and right symmetries also belong to
the same $SU(2)$ representation.
Using this and the results of ref. \cite{9} one finds that the constraints
(3.2.18) relate the representations
and weights of the various sectors in such a way that a one-to-one
correspondence
is established between
the states of the free fermion model and the physical states annihilated by
the BRST charges (3.2.18).
This equivalence is even more transparent in the second approach where one
requires that the
physical operators commute with the constraints.
Making use of the Poisson brackets (3.2.17), we have
$$\begin{array}{rcl}
&&\left\{\Omega_-^a(x),\ g^{-1}(y)\right\}_P=
i(g^{-1}(x)t^a)\delta(x^1-y^1)\,,\\
&&\left\{\Omega_-^a(x),\ h(y)\right\}_P=
-i(t^a h(y))\delta(x^1-y^1)\end{array}\eqno{(3.2.20)}
$$
$$\begin{array}{rcl}
&&\left\{\Omega_+^a(x),\ \psi(y)\right\}_P=
-i(t^a\psi)\delta(x^1-y^1)\,,\\
&&\left\{\Omega_+^a(x),\ h(y)\right\}_P=
i(h(y)t^a)\delta(x^1-y^1)\end{array}\eqno{(3.2.21)}
$$
From the Poisson brackets (3.2.20) it follows that the fields $h$ and $g$
can occur in physical observables only in the combination $g^{-1}h$.
From the other two Poisson brackets (3.2.21) it follows
that the fermion field can
only occur in the combination $h\psi$. Putting things together, we conclude
that the physical fermion field corresponds to the local product $g^{-1}h\psi$.
Tracing our set of transformations back to the original fermion field,
we see that the BRST conditions establish in this way a one-to-one
correspondence between the fields of the decoupled formulation and the
original free fermion field: $\eta = h^{-1}g\psi$.
This establishes the equivalence of the decoupled partition function
(3.1.1a), subject to the BRST conditions, to the fermionic one as given by
(3.1.3). The coset factor in (3.1.1a) merely encodes the selection rules
of the partition function (3.1.3), but carries no dynamics.
\section{$QCD_2$ in the local decoupled formulation}
As a final example we prove deductively that the BRST charges
associated with the currents (2.40) and (2.49) of ref. \cite{13}
must annihilate the physical states in order to ensure
equivalence with the original formulation. To this end
we start from the $QCD_2$ partition function
$$
Z=\int [dA_+][dA_-]\int[d\psi][d\bar\psi] e^{iS[A,\psi,\bar
\psi]}\eqno{(4.1)}
$$
with
$$
S[A,\psi,\bar\psi]=-\frac{1}{4}{\rm tr} F_{\mu\nu}F^{\mu\nu}
+\psi^\dagger_1(i\partial_++eA_+)\psi_1+\psi^\dagger
_2(i\partial_-
+eA_-)\psi_2,\eqno{(4.2)}
$$
where $F_{\mu\nu}$ is the chromoelectric field strength
tensor, and $\partial_\pm=\partial_0\pm
\partial_1$, $A_\pm=A_0\pm A_1$.
We parametrize $A_\pm$ as follows:
$$
eA_+=U^{-1}i\partial_+U,\quad eA_-=Vi\partial_-V^{-1}\eqno{(4.3)}
$$
and change variables from $A_\pm$ to $U,V$
by introducing the identities
$$
\begin{array}{rcl}
&&1=\int[dU]\det iD_+(U)\delta(eA_+
-U^{-1}i\partial_+U)\,,\\
&&1=\int[dV]\det iD_-(V)\delta(eA_-
-Vi\partial_-V^{-1})\end{array}\eqno{(4.4)}
$$
in the partition function (4.1). Here $D_+(U)$ and
$D_-(V)$ are the covariant derivatives in the adjoint
representation:
$$\begin{array}{rcl}
&&D_+(U)=\partial_++[U^{-1}\partial_+U,\,],\\
&&D_-(V)=\partial_-+[V\partial_-V^{-1},\,].\end{array}\eqno{(4.5)}
$$
Exponentiating as usual the corresponding functional
determinants in terms of ghost fields and
representing the delta functions as a Fourier integral, we
obtain for (4.1)
$$\begin{array}{rcl}
Z&=&\int[dA_+][dA_-][d\psi][d\bar\psi]\int[d
U][dV][d\lambda_+][d\lambda_-]\int[d(ghosts)]\\
&&\times e^{iS[A,\psi,\bar\psi]}
\times e^{i\int\lambda_-(eA_+-U^{-1}i\partial_+U)+i\int b_-iD_+
(U)c_-}\\
&&\times e^{i\int\lambda_+(eA_--Vi\partial_-V^{-1})+i\int b_+iD_-
(V)c_+},\end{array}\eqno{(4.6)}
$$
We follow again the procedure of ref. \cite{24}. The partition
function (4.6) is seen to be invariant under the
transformations
$$\begin{array}{rcl}
&&\delta_1\lambda_+=0,\ \delta_1A_-=0\,,\\
&&V\delta_1V^{-1}=c_+\,,\nonumber\\
&&\delta_1\psi_2=0\,,\\
&&\delta_1b_+=\lambda_+\,,\nonumber\\
&&\delta_1c_+=-\frac{1}{2}\{c_+,c_+\}\end{array}\eqno{(4.7{\rm a})}
$$
and
$$\begin{array}{rcl}
&&\delta_2\lambda_-=0,\ \delta A_+=0\,,\nonumber\\
&&U^{-1}\delta_2U=c_-\,,\nonumber\\
&&\delta_2\psi_1=0\,,\\
&&\delta_2b_-=\lambda_-\,,\nonumber\\
&&\delta_2c_-=-\frac{1}{2}\{c_-,c_-\}\,.\end{array}\eqno{(4.7{\rm b})}
$$
These transformations are off-shell nilpotent. It
is easily seen that in terms of the graded variational
derivatives $\delta_{1,2}$, the effective action in
(4.6) can be rewritten as
$$
S_{eff}=S[A,\psi,\bar\psi]+\Delta_1+\Delta_2\eqno{(4.8)}
$$
where
$$\begin{array}{rcl}
&&\Delta_1=\delta_1[b_-(eA_+-U^{-1}i\partial_+U)]\,,\\
&&\Delta_2=\delta_2[b_+(eA_--Vi\partial_-V^{-1})]\end{array}\eqno{(4.9)}
$$
are $Q_1$ and $Q_2$ exact with $Q_1,Q_2$ the
BRST Noether charges associated with the respective
transformations. Hence the physical states must belong
to ${\rm kern}\ Q_1/{\rm Im}\ Q_1$ and ${\rm kern}\ Q_2/{\rm Im}\ Q_2$ if
$S_{eff}$ is to be equivalent to the original action
$S[A,\psi,\bar\psi]$.
Integrating over $A_\pm$ and $\lambda_\pm$ the partition
function and BRST transformations reduce to
$$\begin{array}{rcl}
Z&=&\int [dU][dV]\int[d\psi][d\bar\psi]\int
[d(ghosts)]\\
&&\times e^{\frac{i}{2}\int {\rm tr}(F_{01})^2}
e^{i\int(U\psi_1)^\dagger
i\partial_+(U\psi_1)+i\int(V^{-1}\psi_2)^\dagger i\partial_-
(V^{-1}\psi_2)}\\
&&\times e^{i\int b_-iD_+(U)c_-}e^{i\int b_+iD_-(V)c_+}\end{array}\eqno{(4.10)}
$$
and
$$\begin{array}{rcl}
&&V\delta_1V^{-1}=c_+\,,\\
&&\delta_1\psi_2=0\,,\\
&&\delta_1b_+=-\frac{1}{2}D_+(U) F_{01}+
\psi_2\psi_2^\dagger\,,\\
&&\delta_1c_+=-\frac{1}{2}\{c_+,c_+\}\,,\end{array}\eqno{(4.11{\rm a})}
$$
$$
\begin{array}{rcl}
&&U^{-1}\delta_2U=c_-\,,\\
&&\delta_2\psi_1=0\,,\\
&&\delta_2b_-=\frac{1}{2}D_-(V) F_{01}+
\psi_1\psi_1^\dagger\,,\\
&&\delta_2c_-=-\frac{1}{2}\{c_-,c_-
\}\,,\end{array}\eqno{(4.11{\rm b})}
$$
respectively. As one readily checks, the partition
function (4.10) is invariant under these
(nilpotent) transformations which, as we have
seen, must also leave ${\cal H}_{phys}$ invariant.
We now decouple the fermions and ghosts by defining
$$\begin{array}{lcr}
&&\psi_1^{(0)}\equiv U\psi_1,\quad \psi_2^
{(0)}=V^{-1}\psi_2\,,\\
&&b_-^{(0)}=Ub_-U^{-1},\quad c_-^{(0)}=Uc_-U^{-1}\,,\\
&&b_+^{(0)}=V^{-1}b_+V,\quad c_+^{(0)}=V^{-1}c_+V\,.\end{array}
\eqno{(4.12)}
$$
Making a corresponding transformation in the measure, we
have
$$
\begin{array}{rcl}
[d\psi_1][d\psi_2]&=&e^{-i\Gamma[UV]}[d\psi_1^{(0)}][d
\psi_2^{(0)}]\\[1mm]
[d(ghosts)]&=&e^{-iC_V\Gamma[UV]}[d(ghosts^{(0)})]\end{array}\eqno{(4.13)}
$$
where $\Gamma[g]$ is the Wess-Zumino-Witten (WZW) functional (3.1.2).
We thus arrive at the decoupled partition function \cite{6,7,13}
$$
Z=Z_F^{(0)}Z_{gh}^{(0)}Z_{U,V}\eqno{(4.14{\rm a})}
$$
where
$$
Z_F^{(0)}=\int[d\psi^{(0)}][d\bar\psi^{(0)}]e^{i\int\bar
\psi i\raise.15ex\hbox{$/$}\kern-.57em\hbox{$\partial$}\psi},\eqno{(4.14{\rm b})}
$$
$$
Z_{gh}^{(0)}=\int[d(ghosts)^{(0)}]e^{i\int b_+^{(0)}i\partial_-
c_+^{(0)}}e^{i\int b_-^{(0)}i\partial_+c_-^{(0)}}\eqno{(4.14{\rm c})}
$$
and
$$
Z_{U,V}=\int[dU][dV]e^{-i(1+C_V)\Gamma[UV]}
e^{\frac{i}{2}\int {\rm tr}(F_{01})^2}\,.\eqno{(4.14{\rm d})}
$$
We rewrite $F_{01}$ in terms of $U$ and $V$ by noting that
$$
\begin{array}{rcl}
F_{01}&=&-\frac{1}{2}[D_+(U)Vi\partial_- V^{-1}
-\partial_-(U^{-1}i\partial_+U)]\\
&=&\frac{1}{2}[D_-(V)U^{-1}i\partial_+U-\partial_+(Vi\partial
_-V^{-1})]\end{array}\eqno{(4.15)}
$$
and making use of the identities
$$
\begin{array}{rcl}
&&D_-(V)B=V[\partial_-(V^{-1}BV)]V^{-1}\\
&&D_+(U)B=U^{-1}[\partial_+(UBU^{-1})]U\end{array}\eqno{(4.16)}
$$
as well as
$$
U^{-1}[\partial_+(U\partial_-U^{-1})]U=-\partial_-(U^{-1}\partial
_+U).\eqno{(4.17)}
$$
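These identities are readily checked; for the first relation in (4.16), for
instance,
$$
V[\partial_-(V^{-1}BV)]V^{-1}
=V\partial_-V^{-1}\,B+\partial_-B+B\,\partial_-V\,V^{-1}
=\partial_-B+[V\partial_-V^{-1},B]=D_-(V)B\,,
$$
using $\partial_-V\,V^{-1}=-V\partial_-V^{-1}$; the second relation and (4.17)
follow in the same manner.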
We thus obtain the alternative expressions
$$
\begin{array}{rcl}
F_{01}&=&-\frac{1}{2}U^{-1}[\partial_+(\Sigma\partial_-\Sigma^{-1})]
U\\
&=&\frac{1}{2}V[\partial_-(\Sigma^{-1}\partial_+\Sigma)]V^{-1}\end{array}\eqno{(4.18)}
$$
where $\Sigma$ is the gauge invariant quantity
$\Sigma=UV$.
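Indeed, under a gauge transformation $U\to UG^{-1}$, $V\to GV$ one finds from
(4.3)
$$
eA_+\to G\,(eA_+)\,G^{-1}+G\,i\partial_+G^{-1}\,,\qquad
eA_-\to G\,(eA_-)\,G^{-1}+G\,i\partial_-G^{-1}\,,
$$
as appropriate for gauge potentials, while $\Sigma=UV\to UG^{-1}GV=UV$ remains
unchanged.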
The term $-(1+C_V)\Gamma[UV]$ in the effective action arising
from the change of variables is of quantum origin and must be
explicitly taken into account when rewriting the BRST transformations
laws (4.11a), (4.11b) in terms of the decoupled
variables. Its contribution to these transformations is obtained
by noting that
$(\delta=\delta_1+\delta_2)$
$$
\begin{array}{rcl}
-(1+C_V)\delta\Gamma[UV]&=&\frac{1+C_V}{4\pi}
\int {\rm tr}\left\{\left[(UV)^{-1}i\partial_+(UV)
\right]i\partial_-c_+^{(0)}\right.\\
&+&\left.\left[(UV)i\partial_-(UV)^{-1}\right]
i\partial_+c_-^{(0)}\right\}.\end{array}\eqno{(4.19)}
$$
We thus find, making use of (4.18) and the
identities (4.16), (4.17),
$$
\begin{array}{rcl}
\delta V^{-1}V&=&c_+^{(0)}\,,\\
\delta\psi_2^{(0)}&=&c_+^{(0)}\psi_2^{(0)}\,,\\
\delta b^{(0)}_+&=&-\frac{1}{2}\Sigma^{-1}
\left[\partial^2_+(\Sigma i\partial_-\Sigma^{-1})\right]
\Sigma-\left(\frac{1+C_V}{4\pi}\right)
\Sigma^{-1}i\partial_+\Sigma\\
&&+\psi_2^{(0)}\psi_2^{(0)+}+\left\{b_+^{(0)},c_+^{(0)}
\right\}\,,\\
\delta c_+^{(0)}&=&\frac{1}{2}\left\{c_+^{(0)},c_+^{(0)}\right\}\,
\end{array}\eqno{(4.20{\rm a})}
$$
$$
\begin{array}{rcl}
\delta U^{-1}U&=&c_-^{(0)}\,,\\
\delta\psi_1^{(0)}&=&c_-^{(0)}\psi_1^{(0)}\,,\\
\delta b^{(0)}_-&=&-\frac{1}{2}\Sigma
\left[\partial^2_-(\Sigma^{-1} i\partial_+\Sigma)\right]
\Sigma^{-1}-\left(\frac{1+C_V}{4\pi}\right)
\Sigma i\partial_-\Sigma^{-1}\\
&&+\psi_1^{(0)}\psi_1^{(0)\dagger}+\left\{b_-^{(0)},c_-^{(0)}
\right\}\,,\\
\delta c_-^{(0)}&=&\frac{1}{2}\left\{c_-^{(0)},c_-^{(0)}\right\}.
\end{array}\eqno{(4.20{\rm b})}
$$
Notice the change in the transformation law for $V$ and $U$,
as well as the change in sign in the
transformation of $c_\pm^{(0)}$.
We now perform a gauge transformation $U\to UG^{-1}$,
$V\to GV$, taking us to the gauge $U=1$ $(G=U)$:
$U\to 1,\qquad V\to \Sigma$.
The decoupled fields evidently remain unaffected by this gauge
transformation. The transformation laws for $V$ and $U$
above are replaced by a single transformation law
$$
\delta \Sigma^{-1}\Sigma=c_+^{(0)}-\Sigma^{-1}c_-^{(0)}\Sigma.\eqno{(4.21)}
$$
Making once more use of the identities (4.16) and (4.17),
we finally obtain for the BRST transformations for the decoupled
fields in the $U=1$ gauge ($c_+^{(0)}$ and $c_-^{(0)}$ are
to be regarded as independent ``parameters''):
$$
\begin{array}{rcl}
&&\delta \Sigma^{-1}\Sigma=c_+^{(0)}\,,\\
&&\delta\psi_2^{(0)}=c_+^{(0)}\psi_2^{(0)}\,,\\
&&\delta b^{(0)}_+=-\frac{1}{2}\Sigma^{-1}
\left[\partial^2_+(\Sigma i\partial_-\Sigma^{-1})\right]
\Sigma+\psi_2^{(0)}\psi_2^{(0)+}+\left\{
b_+^{(0)},c_+^{(0)}\right\}\,,\\
&&\delta c_+^{(0)}=\frac{1}{2}\left\{c_+^{(0)},c_+^{(0)}\right\}.
\end{array}\eqno{(4.22{\rm a})}
$$
$$
\begin{array}{rcl}
&&\Sigma\delta \Sigma^{-1}=-c_-^{(0)}\,,\\
&&\delta\psi_1^{(0)}=c_-^{(0)}\psi_1^{(0)}\,,\\
&&\delta b^{(0)}_-=-\frac{1}{2}\Sigma^{-1}
\left[\partial^2_-(\Sigma^{-1} i\partial_+\Sigma)\right]
\Sigma^{-1}+\psi_1^{(0)}\psi_1^{(0)+}+\left\{b_-^{(0)},
c_-^{(0)}\right\}\,,\\
&&\delta c_-^{(0)}=\frac{1}{2}\left\{c_-^{(0)},c_-^{(0)}\right\}.
\end{array}\eqno{(4.22 {\rm b})}
$$
Using again the identities (4.16) and (4.17), we have
$$
\Sigma[\partial_-^2(\Sigma^{-1}\partial_+\Sigma)]\Sigma^{-1}
=D_-(\Sigma)(\partial_+(\Sigma\partial_-\Sigma^{-1})).\eqno{(4.23)}
$$
Comparing our results with those of ref. \cite{13}, we
see that we have recovered the BRST conditions of the local
formulation (eqs. (2.27) and (2.41) of
ref. \cite{13} after identification of $V$ with $\Sigma$ in the
$U=1$ gauge). This establishes that the transformations
(4.22a) and (4.22b) indeed have to be a symmetry
of the physical states, as has been taken for granted
in ref. \cite{13}.
\section{Conclusion}
Much interest has been devoted recently to gauged WZW theories and $QCD_2$ in a
formulation in which various sectors of the theory appear decoupled on the level
of the partition function, and are only linked via BRST conditions associated
with nilpotent charges. In particular, in the case of $QCD_2$ one is thus
led via
the Noether construction to several such conserved charges; however, not all of them
are required to vanish on the physical subspace. In order to gain further
insight
into the question as to which BRST conditions must actually be imposed in
order
to ensure equivalence of the decoupled formulation to the original coupled one,
we have examined this question in the context of simple fermionic models. We
have in
particular exhibited a general procedure for deciding which of the BRST
conditions
are to be imposed, and have thereby shown that this selects in general a subset
of nilpotent charges. By solving the corresponding cohomology problem we have
shown that one recovers the Hilbert space structure of the original models.
We have further demonstrated that the requirement that all
of the nilpotent charges should vanish generally implies a restriction to a
subspace of
the physical Hilbert space. On this subspace the full set of nilpotent
operators,
though non-commuting, could be consistently imposed to vanish.
For $QCD_2$ this means that the vacuum degeneracy
obtained in ref. \cite{8} by solving the cohomology problem in the conformally
invariant sector, described by a G/G topological coset theory,
presumes, a priori, that the ground state of $QCD_2$
lies in the conformally invariant, zero-mass sector of the theory \cite{26}.
We have emphasized the difference in the ``currents'' involved in the
BRST conditions and the currents generating the symmetries of the original
coupled formulation: With respect to the former, physical states have to be
singlets, whereas these states belong to the irreducible representations
with respect to the latter.
Finally, we have clarified the BRST symmetries underlying the non-abelian
bosonization of free fermions. The role these symmetries play in assuring
equivalence with the original free fermion dynamics has also been elucidated.
\section{Acknowledgment}
One of the authors (KDR) would like to thank the Physics Department of the
University of Stellenbosch for their kind hospitality. This work was supported
by a grant from the Foundation of Research development of South Africa.
\section{Introduction}
Statistical properties of the electron eigenfunctions in disordered
quantum
dots have recently become a subject attracting considerable theoretical and
experimental interest \cite{FM} - \cite{PR1995}. One of the reasons is that
the problem of particle motion in a bounded disordered potential comprises a
particular case of general chaotic systems, such as quantum billiards \cite
{Argaman}. On the other hand, the development of the ``microwave-cavity''
technique \cite{Prigodin-et} - \cite{Stein} has opened up a unique
possibility for experimental studies of wave-function statistics.
Furthermore, the mesoscopic conductance fluctuations in a quantum dot in the
Coulomb blockade regime contain fingerprints of the statistics of the
eigenfunctions of confined electrons \cite{FE}, \cite{JSA} - \cite{Alhassid}%
. This also provides experimental access \cite{Chang,Folk} to the
wave-function statistics.
In order to characterize the eigenfunction statistics, we introduce two
distribution functions. The first refers to the local {\em amplitudes} of
electron eigenfunctions $\psi _n$ inside a dot of volume $V$,
\begin{equation}
{\cal P}\left( v\right) =\left\langle \delta \left( v-V\left| \psi _n\left(
{\bf r}\right) \right| ^2\right) \right\rangle \text{,} \label{eq.01}
\end{equation}
while the second distribution function
\begin{equation}
{\cal Q}\left( s\right) =\left\langle \delta \left( s-\sin \varphi _n\left(
{\bf r}\right) \right) \right\rangle \label{eq.02}
\end{equation}
is associated with the eigenfunction {\em phases}, $\varphi _n=\arg \psi _n$%
. Angular brackets $\left\langle ...\right\rangle $ denote averaging over
bulk impurities inside a quantum dot.
When all electronic states are extended (metallic regime), the random matrix
theory (RMT) \cite{Mehta} predicts the Porter-Thomas distribution of
eigenfunction intensity that only depends on the fundamental symmetry of the
quantum dot: ${\cal P}\left( v\right) =\frac{\beta /2}{\Gamma \left( \frac
\beta 2\right) }\left( \frac{\beta v}2\right) ^{\beta /2-1}\exp \left(
-\beta v/2\right) $. The parameter $\beta =1$ for a system having
time-reversal symmetry (orthogonal ensemble), whereas $\beta =2$ when
time-reversal symmetry is broken (unitary ensemble), and $\beta =4$ for a
system having time-reversal symmetry and strong spin-orbit interactions
(symplectic ensemble). The RMT predictions are known to be valid in the
limit of large conductance $g=E_c/\Delta \gg 1$ (here, $E_c=\hbar D/L^2$ is
the Thouless energy, $D$ is the classical diffusion constant, $L$ is the
system size, and $\Delta $ is the mean level spacing) providing the system
size $L$ is much larger than the electron mean free path $l$. Corrections to
the Porter-Thomas distribution calculated in the framework of the $\sigma $%
-model formalism \cite{FM} are of order $g^{-1}$ in the weak-localization
domain.
The limiting case of orthogonal symmetry corresponds to systems with pure
potential electron-impurity scattering, while the case of unitary symmetry
applies to systems in a strong{\em \ }magnetic field that breaks the
time-reversal symmetry completely. For intermediate magnetic fields, a
crossover occurs between pure orthogonal and pure unitary symmetry classes.
This crossover in the distribution function ${\cal P}\left( v\right) $ was
previously studied within the framework of supersymmetry techniques \cite{FE}
and within an approach \cite{Kogan} exploiting the analogy with the
statistics of radiation in the regime of the crossover between ballistic and
diffusive transport.
We consider below the problem of eigenfunction statistics of chaotic
electrons in a quantum dot in an arbitrary magnetic field in the framework
of RMT. Our treatment is related to the case of the metallic regime, and
describes statistical properties of eigenfunctions amplitudes {\em and}
phases in the regime of the orthogonal-unitary crossover. The results are
applied to describe distributions of level widths and conductance peaks for
weakly disordered quantum dots in the Coulomb blockade regime in the
presence of an arbitrary magnetic field. \newpage\
\section{Distribution of local amplitudes and phases of eigenfunctions}
In order to study the statistical properties of electron eigenfunctions
within the RMT approach, we replace the microscopic Hamiltonian ${\cal H}$
of an electron confined in a dot by the $N\times N$ random matrix ${\bf H}%
=S_\beta \widehat{\varepsilon }S_\beta ^{-1}$ that exactly reproduces the
energy levels $\varepsilon _n$ of the electron in a dot for a given impurity
configuration. Here, $\widehat{\varepsilon }=$diag$\left( \varepsilon
_1,...,\varepsilon _N\right) $, $S_\beta $ is a matrix that diagonalizes
matrix ${\bf H}$ (parameter $\beta $ reflects the system symmetry), and it
is implied that $N\rightarrow \infty $. An ensemble of such random matrices
reproduces the electron eigenlevels for microscopically different but
macroscopically identical realizations of the random potential, and
therefore it should describe the level statistics of electrons inside the
dot. As long as we consider the metallic regime, the relevant ensemble of
random matrices is known to belong to the Wigner-Dyson class \cite
{Mehta,Brezin-Zee}. Correspondingly, averaging over randomness of the system
is replaced by averaging over the distribution function $P\left[ {\bf H}%
\right] \propto \exp \left\{ -\text{tr}V\left[ {\bf H}\right] \right\} $ of
the random-matrix elements, where $V\left( \varepsilon \right) $ is a
so-called ``confinement potential'' that grows at least as fast as $\left|
\varepsilon \right| $ at infinity \cite{Weidenmuller,FKY}. In such a
treatment, the columns of a diagonalizing matrix $S_\beta $ contain
eigenvectors of the matrix ${\bf H}$, that is $\psi _j\left( {\bf r}%
_i\right) =\left( N/V\right) ^{1/2}\left( S_\beta \right) _{ij}$, provided
the space inside a dot is divided onto $N$ boxes with radius vectors ${\bf r}%
_i$ enumerated from $1$ to $N$ (the coefficient $\left( N/V\right) ^{1/2}$
is fixed by the normalization condition $\int_Vd{\bf r}\left| \psi _j\left(
{\bf r}\right) \right| ^2=1$).
For the case of pure orthogonal symmetry, the eigenfunctions are real, being
the columns of the orthogonal matrix $S_1$ (up to a normalization constant $%
\left( N/V\right) ^{1/2}$). When time-reversal symmetry is completely
broken, the eigenfunctions are complex and may be thought of as elements of
the unitary matrix $S_2$. We note that in this case, the real and imaginary
parts of the eigenfunction are statistically independent, and $\left\langle
\left| \text{Re}\psi _j\left( {\bf r}\right) \right| ^2\right\rangle
=\left\langle \left| \text{Im}\psi _j\left( {\bf r}\right) \right|
^2\right\rangle $. It is natural to assume that in the transition region
between orthogonal and unitary symmetry classes, the eigenfunctions of
electrons can be constructed as a sum of two independent (real and
imaginary) parts with different weights, such as $\left\langle \left| \text{%
Im}\psi _j\left( {\bf r}\right) \right| ^2\right\rangle /\left\langle \left|
\text{Re}\psi _j\left( {\bf r}\right) \right| ^2\right\rangle =\gamma ^2$,
where the transition parameter $\gamma $ accounts for the strength of the
symmetry breaking. This means that in the crossover region, the
eigenfunctions $\left( V/N\right) ^{1/2}\psi _j\left( {\bf r}_i\right) $ can
be imagined as columns of the matrix $S$,
\begin{equation}
\begin{tabular}{lll}
$S=\frac 1{\sqrt{1+\gamma ^2}}\left( O+i\gamma \widetilde{O}\right) $, & $%
OO^T=1$, & $\widetilde{O}\widetilde{O}^T=1$,
\end{tabular}
\label{eq.03}
\end{equation}
composed of two independent orthogonal matrices $O$ and $\widetilde{O}$. The
parameter $\gamma $ in the parametrization Eq. (\ref{eq.03}) governs the
crossover between pure orthogonal symmetry $\left( \gamma =0\right) $ and
pure unitary symmetry $\left( \gamma =1\right) $. The values $0<\gamma <1$
correspond to the transition region between the two symmetry classes. From a
microscopic point of view, this parameter is connected to an external
magnetic field. (This issue will be discussed below.)
From Eq. (\ref{eq.03}), we obtain the moments
\[
\mu _p\left( \gamma ,N\right) =\left\langle \left| S_{ij}\right|
^{2p}\right\rangle _S=\frac 1{\left( 1+\gamma ^2\right) ^p}
\]
\begin{equation}
\times \sum_{q=0}^p\left(
\begin{array}{l}
p \\
q
\end{array}
\right) \gamma ^{2\left( p-q\right) }\left\langle \left| O_{ij}\right|
^{2q}\right\rangle _O\left\langle \left| \widetilde{O}_{ij}\right| ^{2\left(
p-q\right) }\right\rangle _{\widetilde{O}}\text{.} \label{eq.03a}
\end{equation}
Here $\left\langle ...\right\rangle _O$ stands for integration over the
orthogonal group \cite{Ullah}. The summation in Eq. (\ref{eq.03a}) can be
carried out using the large-$N$ formula
\begin{equation}
\left\langle \left| O_{ij}\right| ^{2p}\right\rangle _O=\left( \frac 2N%
\right) ^p\frac{\Gamma \left( p+\frac 12\right) }{\sqrt{\pi }},
\label{eq.03b}
\end{equation}
leading to
\begin{equation}
\mu _p\left( \gamma ,N\right) =\frac{p!}{N^p}\frac{P_p\left( X\right) }{X^p}%
\text{, }X=\frac 12\left( \gamma +\gamma ^{-1}\right) , \label{eq.04a}
\end{equation}
which is valid over the whole transition region $0\leq \gamma \leq 1$ in the
thermodynamic limit $N\rightarrow \infty $. (Here $P_p\left( X\right) $
denotes the Legendre polynomial). One can easily see from the properties of
Legendre polynomials that the ansatz introduced by Eq. (\ref{eq.03}) leads
to the correct moments in both limiting cases $\gamma =0$ [$\mu _p\left(
0,N\right) =\left( 2p-1\right) !!/N^p$] and $\gamma =1$ [$\mu _p\left(
1,N\right) =p!/N^p$].
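For completeness, both limits can be traced to elementary properties of the
Legendre polynomials: $P_p\left( 1\right) =1$ gives the unitary result
directly, while for $X\rightarrow \infty $ only the leading term of $P_p$
survives,
\[
\frac{P_p\left( X\right) }{X^p}\rightarrow \frac{\left( 2p\right) !}{2^p\left(
p!\right) ^2}\text{,}\qquad \mu _p\left( 0,N\right) =\frac{p!}{N^p}\frac{\left(
2p\right) !}{2^p\left( p!\right) ^2}=\frac{\left( 2p-1\right) !!}{N^p}\text{.}
\]
As a further consistency check of Eq. (\ref{eq.03b}), the case $p=1$ gives
$\left\langle \left| O_{ij}\right| ^2\right\rangle _O=1/N$, as required by the
normalization of the rows of an orthogonal matrix.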
Using the RMT mapping described above, we can write the distribution
function ${\cal P}\left( v\right) $, Eq. (\ref{eq.01}), in the form
\begin{equation}
{\cal P}\left( v\right) =\int \frac{d\omega }{2\pi }\exp \left( -i\omega
v\right) \sum_{p=0}^\infty \frac{\left( i\omega N\right) ^p}{\Gamma \left(
p+1\right) }\left\langle \left| S_{rn}\right| ^{2p}\right\rangle _S\text{,}
\label{eq.05}
\end{equation}
whence we get, with the help of Eq. (\ref{eq.04a})
\begin{equation}
{\cal P}\left( v\right) =X\exp \left( -vX^2\right) I_0\left( vX\sqrt{X^2-1}%
\right) \text{,} \label{eq.10}
\end{equation}
where $I_0$ is the modified Bessel function. Equation (\ref{eq.10}) gives
the distribution function of the local amplitudes of electron eigenfunctions
in a quantum dot [see Fig. 1]. It is easy to confirm that for pure
orthogonal $\left( X\rightarrow \infty \right) $ and pure unitary $\left(
X=1\right) $ symmetries, Eq. (\ref{eq.10}) yields
\[
\left. {\cal P}\left( v\right) \right| _{X\rightarrow \infty }=\frac 1{\sqrt{%
2\pi v}}\exp \left( -\frac v2\right) ,
\]
\begin{equation}
\;\left. {\cal P}\left( v\right) \right| _{X=1}=\exp \left( -v\right) .
\label{eq.18}
\end{equation}
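For orientation, the first of these limits follows from the large-argument
asymptotics $I_0\left( z\right) \simeq e^z/\sqrt{2\pi z}$: for $X\gg 1$,
\[
{\cal P}\left( v\right) \simeq \frac{X\exp \left[ -v\left( X^2-X\sqrt{X^2-1}%
\right) \right] }{\sqrt{2\pi vX\sqrt{X^2-1}}}\rightarrow \frac 1{\sqrt{2\pi v}}%
\exp \left( -\frac v2\right) \text{,}
\]
since $X^2-X\sqrt{X^2-1}\rightarrow 1/2$, while the second limit is immediate
because $I_0\left( 0\right) =1$.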
A different approach to the issue of eigenfunction statistics was recently
proposed in Ref. \cite{Kogan}, whose authors obtained a single-integral
representations for ${\cal P}\left( v\right) $ [see their Eq. (17)]. It can
be shown that our Eq. (\ref{eq.10}) coincides with Eq. (17) of the cited
work provided the parameter $X$ is related to the parameter $Y$ appearing in
Ref. \cite{Kogan} by $X=\sqrt{1+1/Y}$.
The advantage of the random-matrix approach is that it allows one to
calculate, in a rather simple way, the distribution functions for many other
quantities, in particular, the phase distribution given by Eq. (\ref{eq.02}).
Since within the framework of the proposed random-matrix model $\sin \varphi
_j\left( {\bf r}\right) =\gamma \widetilde{O}_{ij}/\sqrt{O_{ij}^2+\gamma ^2%
\widetilde{O}_{ij}^2}$, and $\left\langle \sin ^{2p+1}\varphi _j\left( {\bf r%
}\right) \right\rangle _{O,\widetilde{O}\ }=0$, we have to compute
\begin{equation}
{\cal Q}\left( s\right) =\int \frac{d\omega }{2\pi }\exp \left( -i\omega
s\right) \sum_{p=0}^\infty \frac{\left( -1\right) ^p\left( \gamma \omega
\right) ^{2p}}{\Gamma \left( 2p+1\right) }\sigma _{2p}\left( \gamma \right)
\text{,} \label{eq.22}
\end{equation}
where the average
\begin{equation}
\sigma _{2p}\left( \gamma \right) =\left\langle \frac{\widetilde{O}_{ij}^{2p}%
}{\left( O_{ij}^2+\gamma ^2\widetilde{O}_{ij}^2\right) ^p}\right\rangle _{O,%
\widetilde{O}} \label{eq.22a}
\end{equation}
can be calculated using its integral representation,
\[
\sigma _{2p}\left( \gamma \right) =\frac 1{\Gamma \left( p\right) }
\]
\begin{equation}
\times \left\langle \widetilde{O}_{ij}^{2p}\int_0^\infty dxx^{p-1}\exp
\left\{ -x\left( O_{ij}^2+\gamma ^2\widetilde{O}_{ij}^2\right) \right\}
\right\rangle _{O,\widetilde{O}}\text{.} \label{eq.22b}
\end{equation}
Expanding the exponent in the integrand of Eq. (\ref{eq.22b}) yields
\[
\sigma _{2p}\left( \gamma \right) =\frac 1{\Gamma \left( p\right) }%
\int_0^\infty dxx^{p-1}\sum_{q=0}^\infty \frac{\left( -1\right) ^qx^q}{%
\Gamma \left( q+1\right) }
\]
\begin{equation}
\times \sum_{k=0}^q\left(
\begin{array}{l}
q \\
k
\end{array}
\right) \gamma ^{2\left( q-k\right) }\left\langle \left| O_{ij}\right|
^{2k}\right\rangle _O\left\langle \left| \widetilde{O}_{ij}\right| ^{2\left(
p+q-k\right) }\right\rangle _{\widetilde{O}}\text{.} \label{eq.22c}
\end{equation}
Using Eq. (\ref{eq.03b}), we obtain after straightforward calculations
\begin{equation}
\sigma _{2p}\left( \gamma \right) =\frac 1\pi \int_0^\infty \frac{d\lambda }{%
\sqrt{\lambda }\left( 1+\lambda \right) \left( \gamma ^2+\lambda \right) ^p}%
\text{.} \label{eq.22f}
\end{equation}
Equations (\ref{eq.22}) and (\ref{eq.22f}) yield the following formula for
the phase distribution function in the crossover regime:
\begin{equation}
{\cal Q}\left( s\right) =\frac \gamma {\pi \sqrt{1-s^2}}\frac 1{\gamma
^2+s^2\left( 1-\gamma ^2\right) }\text{.} \label{eq.25}
\end{equation}
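One may check that this distribution is properly normalized on the interval
$-1\leq s\leq 1$: substituting $s=\sin \varphi $,
\[
\int_{-1}^1{\cal Q}\left( s\right) ds=\frac \gamma \pi \int_{-\pi /2}^{\pi /2}%
\frac{d\varphi }{\gamma ^2\cos ^2\varphi +\sin ^2\varphi }=1\text{,}
\]
where the last step uses the elementary integral $\int_{-\pi /2}^{\pi
/2}d\varphi /\left( \gamma ^2\cos ^2\varphi +\sin ^2\varphi \right) =\pi
/\gamma $.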
As can be seen from Eq. (\ref{eq.25}), the limiting case of pure orthogonal
symmetry is characterized by the $\delta $-functional phase distribution
\begin{equation}
\frac 12\sqrt{1-s^2}\left. {\cal Q}\left( s\right) \right| _{\gamma
\rightarrow 0}=\frac 12\delta \left( s\right) \text{,} \label{eq.26a}
\end{equation}
whereas the case of pure unitary symmetry is described by the uniform
distribution
\begin{equation}
\frac 12\sqrt{1-s^2}\left. {\cal Q}\left( s\right) \right| _{\gamma
\rightarrow 1}=\frac 1{2\pi }\text{.} \label{eq.26}
\end{equation}
In the crossover region, the calculated phase distribution $q\left( s\right)
=\frac 12\sqrt{1-s^2}{\cal Q}\left( s\right) $ displays a smooth transition
from a $\delta $-functional distribution in the case of pure orthogonal
symmetry ($X\rightarrow \infty $) to the uniform distribution in the case of
pure unitary symmetry ($X=1$), see Fig. 2.
In order to relate the phenomenological parameter $\gamma $ (or $X$)
entering the parametrization of Eq. (\ref{eq.03}) to the magnetic field
breaking time-reversal symmetry in the real microscopic problem, we compare
the distribution given by Eq. (\ref{eq.10}) with the exact distribution
derived within the framework of microscopic supersymmetry model (Eq. (13) in
Ref. \cite{FE}). Although these distributions have different analytical
forms, good numerical agreement is observed even for the tails of
distribution functions after appropriate rescaling of our phenomenological
parameter $X$ [see inset in Fig. 1, where the distribution function $\varphi
\left( \tau \right) =2\tau {\cal P}\left( \tau ^2\right) $ for different
values of parameter $X$ is plotted]. Analysis of the behavior of $\varphi
\left( \tau \right) $ in both cases in the region of small $\tau $ yields
the following relation between transition parameter $X$ and microscopic
parameter $X_m$ appearing in Ref. \cite{FE}:\widetext
\begin{equation}
X=2X_m^2\exp \left( X_m^2\right) \left\{ \Phi _1\left( X_m\right) \left[
\Lambda _1\left( X_m\right) -\Lambda _2\left( X_m\right) \right] +\Lambda
_2\left( X_m\right) -\frac{1-\Phi _1\left( X_m\right) }{^{X_m^2}}\Lambda
_1\left( X_m\right) \right\} \text{.} \label{eq.20}
\end{equation}
Here $\Phi _1\left( X_m\right) =\left[ \exp \left\{ -X_m^2\right\}
/X_m\right] \int_0^{X_m}dy\exp \left\{ y^2\right\} $, and
\begin{equation}
\Lambda _n\left( X_m\right) =\frac{\sqrt{\pi }\left( 2n-1\right) !!}{%
2^{n+1}X_m^{2n+1}}\left\{ 1-%
\mathop{\rm erf}
\left( X_m\right) +\frac{\exp \left\{ -X_m^2\right\} }{\sqrt{\pi }}%
\sum_{k=0}^{n-1}\frac{2^{n-k}X_m^{2n-2k-1}}{(2n-2k-1)!!}\right\} \text{.}
\label{eq.21}
\end{equation}
\narrowtext\noindent
In accordance with Ref. \cite{FE}, the microscopic parameter $X_m=\left|
\phi /\phi _0\right| \left( \alpha _gE_c/\Delta \right) ^{1/2}$, where $%
\alpha _g$ is a factor depending on the sample geometry, $\phi $ is the
magnetic field flux penetrating into the cross-sectional area of a sample,
and $\phi _0$ is the flux quantum. It can be seen from Eq. (\ref{eq.20})
that for very weak breaking of time-reversal symmetry $\left( X_m\ll
1\right) $, $X\approx 2\sqrt{\pi }/3X_m$. In the opposite limit of weak
deviation from unitary symmetry $\left( X_m\gg 1\right) $, $X\approx
1+1/2X_m^2$.
The connection between parameters $X$ and $X_m$ given by Eqs. (\ref{eq.20})
and (\ref{eq.21}) allows one to use the simple random-matrix model proposed here
for describing various phenomena occurring in quantum dots in arbitrary
magnetic fields.
\section{Statistics of resonance conductance of a quantum dot}
The issue of eigenfunction statistics is closely connected to the problem of
conductance of a quantum dot that is weakly coupled to external leads in the
Coulomb blockade regime. At low temperatures, the conductance of a dot
exhibits sharp peaks as a function of the external gate voltage. The heights
of the conductance peaks fluctuate strongly since the coupling to the leads
depends on the fluctuating magnitudes of electron eigenfunctions near the
leads. Thus far both theoretical \cite{JSA} - \cite{Alhassid} and
experimental \cite{Chang,Folk} studies have been restricted to the pure symmetry
classes (with conserved or completely broken time-reversal symmetry) even in
the simplest case of two pointlike leads. Here we present the analytical
treatment of the problem for arbitrary magnetic fields.
Let us consider non-interacting electrons confined in a quantum dot of
volume $V$ with weak volume disorder (metallic regime). The system is probed
by two pointlike leads weakly coupled to the dot at the points ${\bf r}_L$
and ${\bf r}_R$. The Hamiltonian of the problem can be written in the form
\cite{Iida}
\[
{\cal H}=\frac{\hbar ^2}{2m}\left( i\nabla +\frac e{c\hbar }{\bf A}\right)
^2+U\left( {\bf r}\right)
\]
\begin{equation}
+\frac i{\tau _H}V\left[ \alpha _R\delta \left( {\bf r-r}_R\right) +\alpha
_L\delta \left( {\bf r-r}_L\right) \right] , \label{eq.40}
\end{equation}
where $U$ consists of the confinement potential and the potential
responsible for electron scattering by impurities, $\tau _H$ is the
Heisenberg time, and $\alpha _{R(L)}$ is the dimensionless coupling
parameter of the right (left) lead. We also suppose that the coupling
between the leads and the dot is extremely weak, $\alpha _{R(L)}\ll 1$, so
that the only mechanism for electron transmission through the dot is
tunneling.
It can be shown that the heights of conductance peaks are entirely
determined by the partial level widths
\begin{equation}
\gamma _{\nu R(L)}=\frac{2\alpha _{R(L)}}{\tau _H}V\left| \psi _\nu \left(
{\bf r}_{R(L)}\right) \right| ^2 \label{eq.44}
\end{equation}
in two temperature regimes. When $T\ll \alpha _{R(L)}\Delta $, the heights $%
g_\nu =\frac h{2e^2}G_\nu $ are given by the Breit-Wigner formula
\begin{equation}
g_\nu =\frac{4\gamma _{\nu R}\gamma _{\nu L}}{\left( \gamma _{\nu R}+\gamma
_{\nu L}\right) ^2}. \label{eq.45}
\end{equation}
At higher temperatures, $\alpha _{R(L)}\Delta \ll T\ll \Delta $, the heights
are determined by the Hauser-Feshbach formula
\begin{equation}
g_\nu =\frac \pi {2T}\frac{\gamma _{\nu R}\gamma _{\nu L}}{\gamma _{\nu
R}+\gamma _{\nu L}}\text{.} \label{eq.46}
\end{equation}
Using the distributions $P_X\left( \gamma _{\nu R(L)}\right) $ of the
partial level widths, Eqs. (\ref{eq.44}) and Eq. (\ref{eq.10}),
\[
P_X\left( \gamma _{\nu R(L)}\right) =\frac{X\tau _H}{2\alpha _{R(L)}}\exp
\left( -\frac{X^2\tau _H}{2\alpha _{R(L)}}\gamma _{\nu R(L)}\right)
\]
\begin{equation}
\times I_0\left( \frac{X\sqrt{X^2-1}\tau _H}{2\alpha _{R(L)}}\gamma _{\nu
R(L)}\right) \text{,} \label{eq.47}
\end{equation}
and assuming that the eigenfunctions of electrons near the left and right
contacts are uncorrelated, that is $\left| {\bf r}_R{\bf -r}_L\right| \gg
\lambda $, we can derive analytical expressions for the distribution of the
conductance peaks heights.
\subsection{Breit-Wigner regime}
From Eqs. (\ref{eq.45}), we conclude that the distribution of the
conductance peaks is given by
\[
R_X\left( g_\nu \right) =\int_0^\infty d\gamma _{\nu R}\int_0^\infty d\gamma
_{\nu L}P_X\left( \gamma _{\nu R}\right) P_X\left( \gamma _{\nu L}\right)
\]
\begin{equation}
\times \delta \left( g_\nu -\frac{4\gamma _{\nu R}\gamma _{\nu L}}{\left(
\gamma _{\nu R}+\gamma _{\nu L}\right) ^2}\right) . \label{eq.48}
\end{equation}
A straightforward calculation leads to the following integral representation
\[
R_X\left( g_\nu \right) =\frac{\Theta \left( 1-g_\nu \right) }{\sqrt{1-g_\nu
}}\int_0^\infty d\mu \;\mu
\]
\[
\times \left\{ \prod_{k=\left( \pm \right) }\exp \left( -X\frac{S^{\left(
k\right) }}{f^{\left( k\right) }}\mu \right) I_0\left( \frac{S^{\left(
k\right) }}{f^{\left( k\right) }}\mu \sqrt{X^2-1}\right) \right.
\]
\begin{equation}
\left. +\prod_{k=\left( \pm \right) }\exp \left( -XS^{\left( k\right)
}f^{\left( k\right) }\mu \right) I_0\left( S^{\left( k\right) }f^{\left(
k\right) }\mu \sqrt{X^2-1}\right) \right\} , \label{eq.49}
\end{equation}
where
\[
S^{\left( \pm \right) }=1\pm \sqrt{1-g_\nu }\text{,}
\]
\[
f_{}^{\left( \pm \right) }=a\pm \sqrt{a^2-1}\text{,}
\]
\begin{equation}
a=\frac 12\left( \sqrt{\frac{\alpha _R}{\alpha _L}}+\sqrt{\frac{\alpha _L}{%
\alpha _R}}\right) \text{.} \label{eq.50}
\end{equation}
The integral in Eq. (\ref{eq.49}) can be calculated, yielding the
distribution function for the conductance peaks
\[
R_X\left( g_\nu \right) =\frac X{2\pi }\frac{\Theta \left( 1-g_\nu \right) }{%
\sqrt{1-g_\nu }}
\]
\begin{equation}
\times \sum_{i=1}^2\frac{{\bf E}\left( k_i\right) }{{\cal M}_i\sqrt{{\cal M}%
_i^2+g_\nu \left( X^2-1\right) }}\text{,} \label{eq.51}
\end{equation}
where the following notation has been used:
\[
{\cal M}_i=a+\left( -1\right) ^{i+1}\sqrt{1-g_\nu }\sqrt{a^2-1}\text{,}
\]
\begin{equation}
k_i=\sqrt{X^2-1}\frac{\sqrt{g_\nu }}{\sqrt{{\cal M}_i^2+g_\nu \left(
X^2-1\right) }}\text{,} \label{eq.52}
\end{equation}
and ${\bf E}\left( k\right) $ stands for the elliptic integral. Equation (%
\ref{eq.51}) is valid for arbitrary magnetic field flux, and it interpolates
between the two distributions corresponding to the quantum dot with
completely broken time-reversal symmetry
\begin{equation}
R_{X=1}\left( g_\nu \right) =\frac{\Theta \left( 1-g_\nu \right) }{2\sqrt{%
1-g_\nu }}\frac{1+\left( 2-g_\nu \right) \left( a^2-1\right) }{\left[
1+g_\nu \left( a^2-1\right) \right] ^2} \label{eq.53}
\end{equation}
and with conserved time-reversal symmetry:
\begin{equation}
R_{X\rightarrow \infty }\left( g_\nu \right) =\frac{\Theta \left( 1-g_\nu
\right) }{\pi \sqrt{g_\nu }\sqrt{1-g_\nu }}\frac a{1+g_\nu \left(
a^2-1\right) }\text{.} \label{eq.54}
\end{equation}
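For illustration, in the case of symmetric coupling to the leads, $\alpha
_R=\alpha _L$ (i.e. $a=1$), these two limiting distributions reduce to
\[
R_{X=1}\left( g_\nu \right) =\frac{\Theta \left( 1-g_\nu \right) }{2\sqrt{%
1-g_\nu }}\text{,}\qquad R_{X\rightarrow \infty }\left( g_\nu \right) =\frac{%
\Theta \left( 1-g_\nu \right) }{\pi \sqrt{g_\nu \left( 1-g_\nu \right) }}%
\text{.}
\]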
Comparing Eqs. (\ref{eq.51}) and (\ref{eq.54}), one can see that the
influence of the magnetic field is most drastic in the region of small
heights of the conductance peaks, $g_\nu \approx 0$, [Fig. 3]. Equation (%
\ref{eq.51}) yields
\begin{equation}
R_X\left( 0\right) =\frac X2\left( 2a^2-1\right) \text{.} \label{eq.55}
\end{equation}
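This value follows directly from Eq. (\ref{eq.51}): at $g_\nu =0$ one has
$k_i=0$, ${\bf E}\left( 0\right) =\pi /2$ and ${\cal M}_{1,2}=a\pm \sqrt{a^2-1}%
$, so that ${\cal M}_1{\cal M}_2=1$ and
\[
R_X\left( 0\right) =\frac X4\left( \frac 1{{\cal M}_1^2}+\frac 1{{\cal M}_2^2}%
\right) =\frac X4\left( {\cal M}_1^2+{\cal M}_2^2\right) =\frac X2\left(
2a^2-1\right) \text{.}
\]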
Thus, we conclude that the probability density of zero-height conductance
peaks decreases from infinity to the value $\left( 2a^2-1\right) \sqrt{\pi }%
/3X_m$ at arbitrarily small magnetic fluxes penetrating into the sample.
\subsection{Hauser-Feshbach regime}
When $\alpha _{R(L)}\Delta \ll T\ll \Delta $, the distribution of the
conductance peaks is given by
\[
F_X\left( g_\nu \right) =\int_0^\infty d\gamma _{\nu R}\int_0^\infty d\gamma
_{\nu L}P_X\left( \gamma _{\nu R}\right) P_X\left( \gamma _{\nu L}\right)
\]
\begin{equation}
\times \delta \left( g_\nu -\frac \pi {2T}\frac{\gamma _{\nu R}\gamma _{\nu
L}}{\gamma _{\nu R}+\gamma _{\nu L}}\right) . \label{eq.56}
\end{equation}
Carrying out the double integration and using Eq. (\ref{eq.47}), we obtain
the following formula for the function $T_X\left( \xi \right) =F_X\left(
g_\nu \right) \pi \sqrt{\alpha _L\alpha _R}/2T\tau _H$, where the
dimensionless variable $\xi =2g_\nu T\tau _H/\pi \sqrt{\alpha _L\alpha _R}$:%
\widetext
\[
T_X\left( \xi \right) =\left( \frac X2\right) ^2\xi \exp \left( -aX^2\xi
\right) \int_0^\infty d\mu \left( 1+\frac 1\mu \right) ^2\exp \left[ -\frac{%
X^2}2\xi \left( \mu f^{\left( +\right) }+\frac 1{\mu f^{\left( +\right) }}%
\right) \right]
\]
\begin{equation}
\times I_0\left( \xi \frac{X\sqrt{X^2-1}}2\left( 1+\mu \right) f^{\left(
+\right) }\right) I_0\left( \xi \frac{X\sqrt{X^2-1}}2\frac{\left( 1+\frac 1%
\mu \right) }{f^{\left( +\right) }}\right) \text{.} \label{eq.57}
\end{equation}
\narrowtext\noindent
This equation interpolates between two limiting distributions
\begin{equation}
T_{X=1}\left( \xi \right) =\xi \exp \left( -a\xi \right) \left[ K_0\left(
\xi \right) +aK_1\left( \xi \right) \right] \label{eq.58}
\end{equation}
and
\begin{equation}
T_{X\rightarrow \infty }\left( \xi \right) =\sqrt{\frac{1+a}2}\frac 1{\sqrt{%
\xi }}\exp \left( -\frac{a+1}2\xi \right) \label{eq.59}
\end{equation}
for unitary and orthogonal symmetries, respectively.
The distribution function Eq. (\ref{eq.57}) can be represented as a series
\[
T_X\left( \xi \right) =\frac{X^2}2\xi \exp \left( -aX^2\xi \right)
\]
\[
\times \sum_{n=0}^\infty \sum_{m=0}^\infty \left( \frac 1{n!m!}\right)
^2\left[ \frac{X\sqrt{X^2-1}}4\xi \right] ^{2\left( n+m\right) }\left(
f^{\left( +\right) }\right) ^{2\left( n-m\right) }
\]
\begin{equation}
\times \sum_{s=-1-2n}^{s=1+2m}\left(
\begin{array}{l}
2\left( n+m+1\right) \\
2n+s+1
\end{array}
\right) \left( f^{\left( +\right) }\right) ^sK_s\left( X^2\xi \right)
\label{eq.60}
\end{equation}
that is convenient for the calculation of the distribution of conductance
peaks in the case of weak deviations from unitary symmetry. As in the case
of very low temperatures, the distribution function of the heights of the
conductance peaks given by Eq. (\ref{eq.57}) is most affected by magnetic
field in the region $g_\nu \approx 0$:
\begin{equation}
T_X\left( 0\right) =aX, \label{eq.61}
\end{equation}
and therefore, the probability density of the conductance peaks of zero
height immediately drops from infinity to the value $T_X\left( 0\right)
\approx 2a\sqrt{\pi }/3X_m$ when an arbitrarily small magnetic field $\left(
X_m\ll 1\right) $ is applied to a system.
\section{Conclusions}
We have introduced a one-parameter random matrix model for describing the
eigenfunction statistics of chaotic electrons in a weakly disordered quantum
dot in the crossover regime between orthogonal and unitary symmetry classes.
Our treatment applies equally to the statistics of local amplitudes and
local phases of electron eigenfunctions inside a dot in the presence of an
arbitrary magnetic field. The transition parameter $X$ entering our model is
related to the microscopic parameters of the real physical problem, and
therefore the distributions calculated within the proposed random-matrix
formalism can be used for the interpretation of experiments.
This random matrix model has also been applied to describe the distribution
function of the heights of conductance peaks for a quantum dot weakly
coupled to external pointlike leads in the regime of Coulomb blockade in the
case of crossover between orthogonal and unitary symmetry. We have shown
that the magnetic field exerts a very significant influence on the
distribution of the heights of the conductance peaks in the region of small
heights. The effect of the magnetic field consists of reducing the
probability density of zero-height conductance peaks from infinity to a
finite value for an arbitrarily small magnetic flux.
\begin{center}
{\bf ACKNOWLEDGMENT}
\end{center}
One of the authors (E. K.) gratefully acknowledges the financial support of
The Ministry of Science and The Arts of Israel.
\section*{I. Introduction}
\end{flushleft}
Recently the CLEO collaboration has observed [1] the exclusive radiative
decay $B\rightarrow K^*\gamma$ with a branching fraction of $BR ( B
\rightarrow K^*\gamma ) = ( 4.5 \pm 1.0 \pm 0.9 ) \times 10^{-5}$. The
inclusive $b\rightarrow s\gamma$ branching ratio measured by CLEO [2] is
\begin{equation}
BR ( B\rightarrow X_s\gamma ) = ( 2.32 \pm 0.57 \pm 0.35 ) \times 10^{-4}.
\end{equation}
The newest upper and lower limits of this decay branching ratio are
\begin{equation}
1.0 \times 10^{-4} < BR ( B\rightarrow X_s\gamma ) < 4.2 \times 10^{-4}, at
\ \ 95 \% C.L.
\end{equation}
As a loop-induced flavor changing neutral current ( FCNC ) process the
inclusive decay ( at quark level ) $b\rightarrow s\gamma$ is in particular
sensitive to contributions from those new physics beyond the standard model
( SM ) [3]. There is a vast interest in this decay.
The decay $b\rightarrow s\gamma$ and its large leading log QCD corrections
have been evaluated in the SM by several groups [4]. The reliability of the
calculations of this decay is improving as partial calculations of the next-
to-leading logarithmic QCD corrections to the effective Hamiltonian [5,6]
The great progress in theoretical studies and in experiments achieved
recently encourage us to do more investigations about this decay in
technicolor theories.
Technicolor ( TC ) [7] is one of the important candidates for the mechanism
of naturally breaking the electroweak symmetry. To generate the ordinary fermion
masses, extended technicolor ( ETC ) [8] models have been proposed. The
original ETC models suffer from the problem of predicting too large flavor
changing neutral currents ( FCNC ). It has been shown, however, that this
problem can be solved in walking technicolor ( WTC ) theories [9].
Furthermore, the electroweak parameter $S$ in WTC models is smaller than that
in the simple QCD-like ETC models and its deviation from the standard model
( SM ) value may fall within current experimental bounds [10]. To explain the
large hierarchy of the quark masses, multiscale WTC models ( MWTCM ) are
further proposed [11]. These models also predict a large number of
interesting Pseudo-Goldstone bosons ( PGBs ) which are shown to be
testable in future experiments [12]. So it is interesting to study physical
consequences of these models.
In this paper, we examine the correction to the $b\rightarrow s\gamma$ decay
from charged PGBs in the MWTCM. We shall see that the original MWTCM gives
too large a correction to the branching ratio of $b\rightarrow s\gamma$ due to
the smallness of the decay constant $F_Q$ in this model. We shall show that
if topcolor is further introduced, the branching ratio of $b\rightarrow s
\gamma$ in the topcolor assisted MWTCM can be in agreement with the CLEO data
for a certain range of the parameters.
This paper is organized as follows: In Section II, we give a brief
review of the MWTCM and then calculate the PGBs corrections to $b\rightarrow
s \gamma$ decay, together with the full leading log QCD corrections. In
Section III, we obtain the branching ratio of this decay. The conclusions
and discussions are also included in this Section.
\begin{flushleft}
\section*{II. Charged PGBs of MWTCM and QCD corrections to
$b\rightarrow s\gamma$}
\end{flushleft}
Let us start by considering the MWTCM proposed by Lane and Ramana
[11]. The ETC gauge group in this model is
\begin{equation}
G_{ETC} = SU ( N_{ETC} )_1 \times SU ( N_{ETC} )_2,
\end{equation}
where $N_{ETC} = N_{TC} + N_C + N_L$ in which $N_{TC}$, $N_C$ and $N_L$ stand
for the number of technicolors, the number of ordinary colors and the
doublets of color-singlet technileptons, respectively. In Ref.[11], $N_{TC}$
and $N_L$ are chosen to be the minimal ones guaranteeing the walking of the
TC coupling constant which are: $N_{TC} = N_L = 6$. The group $G_{ETC}$ is
supposed to break down to a diagonal ETC gauge group $SU ( N_{ETC})_{1+2}$ at
a certain energy scale. The decay constant $F_Q$ satisfies the following
constraint [11]:
\begin{equation}
F= \sqrt{F^2_{\psi}+3F^2_Q+N_LF^2_L} = 246 GeV.
\end{equation}
It is found in Ref.[11] that $F_Q = F_L = 20 - 40 GeV$. We shall take
$F_Q=40 GeV$ in our calculation. This model predicts the existence of
a large number of PGBs, whose masses are typically expected to be larger than
100 GeV. In this paper, we shall take the mass of color-singlet PGBs
$m_{p^{\pm}}$ =100 GeV and the mass of color-octet PGBs $m_{p_8^{\pm}}$ =
( 300 $\sim$ 600 ) GeV as the input parameters for our calculation.
The phenomenology of the color-singlet charged PGBs in the MWTCM is similar
to that of the elementary charged Higgs bosons $H^{\pm}$ of the Type-I Two-
Higgs-Doublet model ( 2HDM ) [13]. Consequently, the contributions to the
decay $b\rightarrow s\gamma$ from the color-singlet charged PGBs in the MWTCM
will be similar to those from the charged Higgs bosons in the 2HDM. As for the
color-octet charged PGBs, the situation is more complicated because of the
involvement of the color interactions. Other neutral PGBs do not
contribute to the rare decay $b\rightarrow s\gamma$.
The gauge couplings of the PGBs are determined by their quantum numbers. The
Yukawa couplings of the PGBs to ordinary fermions are induced by ETC
interactions. The relevant couplings needed in our calculation are
\begin{equation}
[ p^+--u_i--d_j ] = i\frac{1}{\sqrt{6}F_Q}V_{u_id_j}[m_{u_i}(1-\gamma_5)-
m_{d_j}(1+\gamma_5)],
\end{equation}
\begin{equation}
[ p_8^+--u_i--d_j ] = i\frac{1}{F_Q}V_{u_id_j}\lambda^a[m_{u_i}(1-\gamma_5)-
m_{d_j}(1+\gamma_5)],
\end{equation}
where $u = ( u, c, t )$, $d = ( d, s, b )$; $V_{u_id_j}$ is the corresponding
element of Kobayashi-Maskawa matrix; $\lambda^a$ are the Gell-Mann
$SU ( 3 )_c$ matrices.
In Fig.1, we draw the relevant Feynman diagram which contributes to the
decay $b\rightarrow s\gamma$, where the blob represents the photonic penguin
operators including the $W$
gauge boson of the SM as well as the charged PGBs in the MWTCM. In the
evaluation we first integrate out the top quark and the weak $W$ bosons at
the scale $\mu=m_W$, generating an effective five-quark theory. Using the
renormalization group equation, we run the effective field theory down to the
$b$-quark scale to obtain the leading log QCD corrections, and at this scale
we calculate the rate of the radiative $b$ decay.
After applying the full QCD equations of motion [14], a complete set of
dimension-6 operators relevant for $b\rightarrow s\gamma$ decay can be chosen
to be
\begin{equation}
\begin{array}{l}
O_1 = ( \overline c_{L\beta}\gamma^{\mu}b_{L\alpha} )( \overline s_{L\alpha}
\gamma_{\mu}c_{L\beta} ),\\
O_2 = ( \overline c_{L\alpha}\gamma^{\mu}b_{L\alpha} ) ( \overline s_{L\beta}
\gamma_{\mu}c_{L\beta} ),\\
O_3 = ( \overline s_{L\alpha}\gamma^{\mu} b_{L\alpha} )\sum\limits_{
q=u,d,s,c,b} ( \overline q_{L\beta}\gamma_{\mu}q_{L\beta} ),\\
O_4 = ( \overline s_{L\alpha}\gamma^{\mu}b_{L\beta} )\sum\limits_{
q=u,d,s,c,b} (\overline q_{L\beta}\gamma_{\mu}q_{L\alpha} ),\\
O_5 = ( \overline s_{L\alpha}\gamma^{\mu}b_{L\alpha} )\sum\limits_{
q=u,d,s,c,b} ( \overline q_{R\beta}\gamma_{\mu}q_{R\beta} ),\\
O_6 = ( \overline s_{L\alpha}\gamma^{\mu}b_{L\beta} )\sum\limits_{
q=u,d,s,c,b}^{} ( \overline q_{R\beta}\gamma_{\mu}q_{R\alpha} ),\\
O_7 = ( e/16\pi^2 )m_b\overline s_L\sigma^{\mu\nu}b_{R}F_{\mu\nu},\\
O_8 = ( g/16\pi^2 )m_b\overline s_L\sigma^{\mu\nu}T^ab_RG^a_{\mu\nu}.
\end{array}
\end{equation}
The effective Hamiltonian at the $W$ scale is given as
\begin{equation}
H_{eff} = \frac{4G_F}{\sqrt{2}}V_{tb}V_{ts}^*\sum_{i=1}^{8}C_i(m_W)O_i(m_W).
\end{equation}
The coefficients of the 8 operators are calculated to be
\begin{equation}
\begin{array}{l}
C_i(m_W) = 0, i = 1, 3, 4, 5, 6, C_2 ( m_W ) = -1,\\
C_7(m_W)=\frac{1}{2}A(x)+\frac{1}{3\sqrt{2}G_FF_{Q}^2}[B(y)+8B(z)],\\
C_8(m_W)=\frac{1}{2}C(x)+\frac{1}{3\sqrt{2}G_FF_{Q}^2}[D(y)
+(8D(z)+E(z))],
\end{array}
\end{equation}
with $x=(\frac{m_t}{m_W})^2$, $y=(\frac{m_t}{m_{p^{\pm}}})^2$, $z=
(\frac{m_t}{m_{p_8^{\pm}}})^2$. The functions $A$ and $C$, which arise from
graphs with $W$ boson exchange, are the known contributions from the SM, while
the functions $B$, $D$ and $E$ arise from diagrams with color-singlet and
color-octet charged PGBs of the MWTCM. They are given by
\begin{equation}
\begin{array}{l}
A(x)=-\frac{x}{12(1-x)^4}[(1-x)(8x^2+5x-7)+6x(3x-2)\ln x ],\\
B(x)=\frac{x}{72(1-x)^4}[(1-x)(22x^2-53x+25)+6(3x^2-8x+4)\ln x ],\\
C(x)=-\frac{x}{4(1-x)^4}[(1-x)(x^2-5x-2)-6x\ln x ],\\
D(x)=\frac{x}{24(1-x)^4}[(1-x)(5x^2-19x+20)-6(x-2)\ln x ],\\
E(x)=-\frac{x}{8(1-x)^4}[(1-x)(12x^2-15x-5)+18x(x-2)\ln x ].
\end{array}
\end{equation}
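As a minimal numerical illustration (not part of the original analysis), the
matching conditions above can be evaluated as in the following sketch; the
value of $G_F$ and the sample color-octet mass are assumptions introduced only
for this sketch, the remaining inputs being those quoted in Section III.
\begin{verbatim}
import numpy as np

# Loop functions from the expressions above
def A(x):
    return -x/(12*(1-x)**4)*((1-x)*(8*x**2+5*x-7) + 6*x*(3*x-2)*np.log(x))
def B(x):
    return  x/(72*(1-x)**4)*((1-x)*(22*x**2-53*x+25) + 6*(3*x**2-8*x+4)*np.log(x))
def C(x):
    return -x/(4*(1-x)**4)*((1-x)*(x**2-5*x-2) - 6*x*np.log(x))
def D(x):
    return  x/(24*(1-x)**4)*((1-x)*(5*x**2-19*x+20) - 6*(x-2)*np.log(x))
def E(x):
    return -x/(8*(1-x)**4)*((1-x)*(12*x**2-15*x-5) + 18*x*(x-2)*np.log(x))

G_F = 1.166e-5              # Fermi constant in GeV^-2 (standard value, assumed)
F_Q = 40.0                  # technifermion decay constant in GeV
m_t, m_W = 174.0, 80.22     # masses in GeV
m_p, m_p8 = 100.0, 400.0    # color-singlet and (sample) color-octet PGB masses

x, y, z = (m_t/m_W)**2, (m_t/m_p)**2, (m_t/m_p8)**2
pgb = 1.0/(3*np.sqrt(2)*G_F*F_Q**2)         # common charged-PGB prefactor
C7_mW = 0.5*A(x) + pgb*(B(y) + 8*B(z))
C8_mW = 0.5*C(x) + pgb*(D(y) + 8*D(z) + E(z))
print(C7_mW, C8_mW)
\end{verbatim}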
The running of the coefficients of the operators from $\mu=m_W$ to $\mu=m_b$
is well described in Refs.[4]. After the renormalization group running we have
the QCD-corrected coefficients of the operators at the scale $\mu=m_b$:
\begin{equation}
C_7^{eff}(m_b) =\varrho^{-\frac{16}{23}}[C_7(m_W)+\frac{8}{3}
(\varrho^\frac{2}{23}-1)C_8(m_W)]+C_2(m_W)\sum\limits_{i=1}\limits^{8}h_i
\varrho^{-a_i},
\end{equation}
with
$$
\begin{array}{l}
\varrho=\frac{\alpha_s(m_b)}{\alpha_s(m_W)},\\
h_i=(\frac{626126}{272277},-\frac{56281}{51730},-\frac{3}{7},-\frac{1}{14},
-0.6494,-0.0380,-0.0186,-0.0057),\\
a_i=(\frac{14}{23},\frac{16}{23},\frac{6}{23},-\frac{12}{23},0.4086,-0.4230,
-0.8994,0.1456).
\end{array}
$$
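A corresponding sketch of the leading log evolution is given below; the
one-loop, five-flavor running of $\alpha_s$ from $\alpha_s(m_Z)=0.117$ is an
assumed scheme, since the text does not specify how $\varrho$ is to be
computed.
\begin{verbatim}
import numpy as np

def alpha_s(mu, alpha_s_mZ=0.117, m_Z=91.19, n_f=5):
    # One-loop, five-flavor running (an assumed scheme; the text gives
    # only alpha_s(m_Z) = 0.117)
    b0 = (33 - 2*n_f)/(12*np.pi)
    return alpha_s_mZ/(1 + 2*b0*alpha_s_mZ*np.log(mu/m_Z))

def C7_eff_mb(C7_mW, C8_mW, m_b=4.8, m_W=80.22):
    # Leading log evolution of C7 from m_W down to m_b
    rho = alpha_s(m_b)/alpha_s(m_W)
    h = np.array([626126/272277, -56281/51730, -3/7, -1/14,
                  -0.6494, -0.0380, -0.0186, -0.0057])
    a = np.array([14/23, 16/23, 6/23, -12/23,
                  0.4086, -0.4230, -0.8994, 0.1456])
    C2_mW = -1.0
    return (rho**(-16/23)*(C7_mW + (8/3)*(rho**(2/23) - 1)*C8_mW)
            + C2_mW*np.sum(h*rho**(-a)))
\end{verbatim}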
\begin{flushleft}
\section*{III. The branching ratio of $B\rightarrow X_s\gamma$ and
phenomenology}
\end{flushleft}
Following Refs.[4] and applying the spectator model, we have
\begin{equation}
BR ( B\rightarrow X_s\gamma )/BR ( B\rightarrow X_ce\overline \nu )\approx
\Gamma ( b\rightarrow s\gamma )/\Gamma ( b\rightarrow ce\overline \nu ).
\end{equation}
If we take the experimental result $BR ( B\rightarrow X_ce\overline\nu ) = 10.8
\%$ [15], the branching ratio of $B\rightarrow X_s\gamma$ is found to be
\begin{equation}
BR ( B\rightarrow X_s\gamma ) \approx 10.8\%\times\frac{\vert V_{tb}V_{ts}^*
\vert^2}{\vert V_{cb}\vert^2}\frac{6\alpha_{QED}\vert C_7^{eff}(m_b)\vert^2}
{\pi g(m_c/m_b)}(1-\frac{2\alpha_s(m_b)}{3\pi}f(m_c/m_b))^{-1}.
\end{equation}
where the phase factor $g ( x )$ is given by
\begin{equation}
g ( x ) = 1 - 8x^2 + 8x^6 - x^8 - 24x^4 \ln x,
\end{equation}
and the one-loop QCD correction factor $f ( m_c/m_b )$ for the semileptonic
decay is
\begin{equation}
f( m_c/m_b) = ( \pi^2 - 31/4 )( 1-m_c^2/m_b^2 ) + 3/2.
\end{equation}
In numerical calculations we always use $m_W$ = 80.22 GeV, $\alpha_s(m_Z)$=
0.117, $m_c$ = 1.5 GeV, $m_b$ = 4.8 GeV and $\vert V_{tb}V_{ts}^*\vert ^2/
\vert V_{cb}\vert ^2$ = 0.95 [15] as input parameters.
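The branching ratio formula can then be evaluated with a short routine such as
the sketch below; $\alpha_{QED}=1/137$ is an assumed value (it is not quoted
in the text), and the illustrative call uses made-up Wilson-coefficient
values.
\begin{verbatim}
import numpy as np

def g(x):
    # Phase-space factor of the semileptonic decay
    return 1 - 8*x**2 + 8*x**6 - x**8 - 24*x**4*np.log(x)

def f(x):
    # One-loop QCD correction to the semileptonic decay
    return (np.pi**2 - 31/4)*(1 - x**2) + 3/2

def BR_bsgamma(C7_eff, alpha_s_mb, m_c=1.5, m_b=4.8,
               ckm_ratio=0.95, BR_semilep=0.108, alpha_QED=1/137.0):
    # Spectator-model branching ratio; alpha_QED = 1/137 is assumed
    x = m_c/m_b
    return (BR_semilep*ckm_ratio*6*alpha_QED*abs(C7_eff)**2
            /(np.pi*g(x))/(1 - 2*alpha_s_mb*f(x)/(3*np.pi)))

# Illustrative call with made-up Wilson-coefficient values:
# BR_bsgamma(C7_eff=-0.3, alpha_s_mb=0.21)
\end{verbatim}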
Fig.2 is a plot of the branching ratio $BR ( B\rightarrow X_s\gamma )$ as a
function of $m_{p_8^{\pm}}$ assuming $m_t$ = 174 GeV, $m_{p^{\pm}}$ = 100
GeV. The Long Dash line corresponds to the newest CLEO upper limit. From
Fig.2 we can see that the contribution of the charged PGBs in the MWTCM is
too large. In view of the above situation, we consider the topcolor assisted
MWTCM. The motivation for introducing topcolor into the MWTCM is the following:
In the MWTCM, it is very difficult to generate the top quark mass as large as
that measured in the Fermilab CDF and D0 experiments [16], even with strong
ETC [17]. Thus, topcolor interactions for the third generation quarks seem to
be required at an energy scale of about 1 TeV [18]. In the present model,
topcolor is taken to be an ordinary asymptotically free gauge theory, while
technicolor is still a walking theory for avoiding large FCNC [19]. As in
other topcolor assisted technicolor theories [19], the electroweak symmetry
breaking is driven mainly by technicolor interactions which are strong near 1
TeV. The ETC interactions give contributions to all quark and lepton masses,
while the large mass of the top quark is mainly generated by the topcolor
interactions introduced to the third generation quarks. The ETC-generated
part of the top quark mass is $m_t' = 66 k$, where $k \sim$ 1 to $10^{-1}$
[19]. In this paper, we take $m_t'$ = 15 GeV and 20 GeV as input parameters
in our calculation ( i.e., in the above calculations $m_t$ = 174 GeV is
replaced by $m_t'$ = 15 GeV or 20 GeV, while the other input parameters and
calculations are the same as in the original MWTCM ). The $BR ( b\rightarrow
s\gamma )$ in topcolor assisted MWTCM is illustrated in Fig.3. From Fig.3
we can see that the obtained $BR ( b\rightarrow s\gamma )$ is in agreement
with the CLEO data for a certain range of the parameters.
In this paper, we have not considered the effects of other possible
uncertainties, such as that of $\alpha ( m_Z )$, next-to-leading-log QCD
contribution [5], the QCD correction from $m_t$ to $m_W$ [6], etc. The
inclusion of these additional uncertainties would weaken the constraints
slightly.
\vspace{1cm}
\noindent {\bf ACKNOWLEDGMENT}
This work was supported in part by the National Natural Science
Foundation of China, and by the funds from
Henan Science and Technology Committee.
\newpage
\begin {center}
{\bf References}
\end {center}
\begin{enumerate}
\item
R. Ammar, $et \ \ al.$, CLEO Collaboration: Phys. Rev. Lett. 71 ( 1993 ) 674
\item
M. S. Alam, $et \ \ al.$, CLEO Collaboration: Phys. Rev. Lett. 74 ( 1995 )
2885
\item
J. L. Hewett, SLAC Preprint: SLAC-PUB-6521, 1994
\item
M. Misiak: Phys. Lett. B 269 ( 1991 ) 161; K. Adel, Y. P. Yao: Mod. Phys.
Lett. A 8 ( 1993 ) 1679; Phys. Rev. D 49 ( 1994 ) 4945; M. Ciuchini $et \ \
al.$: Phys. Lett. B 316 ( 1993 ) 127
\item
A. J. Buras $et \ \ al.$: Nucl. Phys. B 370 ( 1992 ) 69; Addendum: ibid. B
375 ( 1992 ) 501; B 400 ( 1993 ) 37 and B 400 ( 1993 ) 75; M. Ciuchini $et
\ \ al.$: Phys. Lett. B 301 ( 1993 ) 263; Nucl. Phys. B 415 ( 1994 ) 403
\item
C. S. Gao, J. L. Hu, C. D. L\"u and Z. M. Qiu: Phys. Rev. D 52 ( 1995 ) 3978
\item
S. Weinberg: Phys. Rev. D 13 ( 1976 ) 974; D 19 ( 1979 ) 1277; L. Susskind:
Phys. Rev. D 20 ( 1979 ) 2619
\item
S. Dimopoulos and L. Susskind: Nucl. Phys. B 155 ( 1979 ) 237; E. Eichten and
K. Lane: Phys. Lett. B 90 ( 1980 ) 125
\item
B. Holdom: Phys. Rev. D 24 ( 1981 ) 1441; Phys. Lett. B 150 ( 1985 ) 301;
T. Appelquist, D. Karabali and L. C. R. Wijewardhana: Phys. Rev. Lett. 57 (
1986 ) 957
\item
T. Appelquist and G. Triantaphyllou: Phys. Lett. B 278 ( 1992 ) 345;
R. Sundrum and S. Hsu: Nucl. Phys. B 391 ( 1993 ) 127; T. Appelquist and J. Terning:
Phys. Lett. B 315 ( 1993 ) 139.
\item
K. Lane and E. Eichten: Phys. Lett. B 222 ( 1989 ) 274; K. Lane and M. V.
Ramana: Phys. Rev. D 44 ( 1991 ) 2678
\item
E. Eichten and K. Lane: Phys. Lett. B 327 ( 1994 ) 129; V. Lubicz and P.
Santorelli: BUHEP-95-16
\item
J. F. Gunion and H. E. Haber: Nucl. Phys. B 278, ( 1986 ) 449
\item
H. D. Politzer: Nucl. Phys. B 172 ( 1980 ) 349; H. Simma: Preprint, DESY 93-
083
\item
Particle Data Group: Phys. Rev. D 50 ( 1994 ) 1173
\item
F. Abe, $et \ \ al.$, The CDF Collaboration: Phys. Rev. Lett. 74 ( 1995 )
2626; S. Abachi, $et \ \ al.$, The D0 Collaboration: Phys. Rev. Lett. 74 (
1995 ) 2697
\item
T. Appelquist, M. B. Einhorn, T. Takeuchi, and L. C. R. Wijewardhana: Phys.
Lett. B 220 ( 1989 ) 223; R. S. Chivukula, A. G. Cohen and K. Lane: Nucl.
Phys. B 343 ( 1990 ) 554; A. Manohar and H. Georgi: Nucl. Phys. B 234 (
1984 ) 189
\item
C. T. Hill: Phys. Lett. B 266 ( 1991 ) 419; S. P. Martin: Phys. Rev. D 45
( 1992 ) 4283; D 46 ( 1992 ) 2197; Nucl. Phys. B 398 ( 1993 ) 359; M. Lindner
and D. Ross: Nucl. Phys. B 370 ( 1992 ) 30; C. T. Hill, D. Kennedy, T. Onogi
and H. L. Yu: Phys. Rev. D 47 ( 1993 ) 2940; W. A. Bardeen, C. T. Hill and
M. Lindner: Phys. Rev. D 41 ( 1990 ) 1649
\item
C. T. Hill: Phys. Lett. B 345, ( 1995 ) 483; K. Lane and E. Eichten, BUHEP-
95-11, hep-ph/9503433: To appear in Phys. Lett. B
\end{enumerate}
\newpage
\begin{center}
{\bf Figure captions}
\end{center}
Fig.1: The Feynman diagram which contributes to the rare radiative decay $
b\rightarrow s\gamma$. The blob represents the photonic penguin operators
including the $W$ gauge boson of the SM as well as the charged PGBs in
the MWTCM.
Fig.2: The plot of the branching ratio of $ b\rightarrow s\gamma$ versus the
mass of charged color-octet PGBs $m_{p_8^{\pm}}$ assuming $m_t$ = 174 GeV
and the mass of color-singlet PGBs $m_{p^{\pm}}$ =
100 GeV in the MWTCM ( Solid line ). The Long Dash line corresponds to the
newest CLEO upper limit.
Fig.3: The plot of the branching ratio of $b\rightarrow s\gamma$ versus the
mass of charged color-octet PGBs $m_{p_8^{\pm}}$ assuming the color-singlet
PGBs $m_{p^{\pm}}$ = 100 GeV in the topcolor assisted MWTCM. The Solid line
represents the plot assuming $m_t'$ = 15 GeV, and the Dot Dash line
represents the plot assuming $m_t'$ = 20 GeV. The Long Dash line and
Short Dash line correspond to the newest CLEO upper and lower limits,
respectively.
\newpage
\begin{picture}(30,0)
{\bf
\setlength{\unitlength}{0.1in}
\put(15,-15){\line(1,0){20}}
\put(13,-16){b}
\put(36,-16){s}
\multiput(24,-15)(1,1){5}{\line(0,1){1}}
\multiput(23,-15)(1,1){6}{\line(1,0){1}}
\put(24,-15){\circle*{3}}
\put(29,-10){$\gamma$}
\put(24,-30){Fig.1}
}
\
\end{picture}
\end{document}
\section{Introduction}
Genetic Algorithms (GAs) are adaptive search techniques, which can be
used to find low energy states in poorly characterized,
high-dimensional energy landscapes~\cite{Gold,Holl}. They have already
been successfully applied in a large range of domains~\cite{Handbook}
and a review of the literature shows that they are becoming
increasingly popular. In particular, GAs have been used in a number of
machine learning applications, including the design and training of
artificial neural networks~\cite{Fitz,Sch,Yao}.
In the simple GA considered here, each population member is
represented by a genotype, in this case a binary string, and an
objective function assigns an energy to each such genotype. A
population of solutions evolves for a number of discrete generations
under the action of genetic operators, in order to find low energy
(high fitness) states. The most important operators are selection,
where the population is improved through some form of preferential
sampling, and crossover (or recombination), where population members
are mixed, leading to non-local moves in the search space. Mutation is
usually also included, allowing incremental changes to population
members. GAs differ from other stochastic optimisation techniques,
such as simulated annealing, because a population of solutions is
processed in parallel and it is hoped that this may lead to
improvement through the recombination of mutually useful features from
different population members.
A formalism has been developed by Pr\"{u}gel-Bennett, Shapiro and
Rattray which describes the dynamics of a simple GA using methods from
statistical mechanics~\cite{adam,PBS,PBS2,Ratt}. This formalism has
been successfully applied to a number of simple Ising systems and has
been used to determine optimal settings for some of the GA search
parameters~\cite{PBS3}. It describes problems of realistic size and
includes finite population effects, which have been shown to be
crucial to understanding how the GA searches. The approach can be
applied to a range of problems including ones with multiple optima,
and it has been shown to predict simulation results with high
accuracy, although small errors can sometimes be detected.
Under the statistical mechanics formalism, the population is described
by a small number of macroscopic quantities which are statistical
measures of the population. Statistical mechanics techniques are used
to derive deterministic difference equations which describe the
average effect of each operator on these macroscopics. Since the
dynamics of a GA is to be modelled by the average dynamics of an
ensemble of GAs, it is important that the quantities which are used to
describe the system are robust and self-averaging. The macroscopics
which have been used are the cumulants of some appropriate quantity,
such as the energy or the magnetization, and the mean correlation
within the population, since these are robust statistics which average
well over different realizations of the dynamics. There may be small
systematic errors, since the difference equations for evolving these
macroscopics sometimes involve nonlinear terms which may not
self-average, but these corrections are generally small and will be
neglected here.
The statistical mechanics theory is distinguished by the facts that a
macroscopic description of the GA is used and that the averaging is
done such that fluctuations can be included in a systematic way. Many
other theoretical approaches are based on the intuitive idea that
above average fitness building blocks are preferentially sampled by
the GA, which, if they can be usefully recombined, results in highly
fit individuals being produced~\cite{Gold,Holl}. Although this may be
a useful guide to the suitability of particular problems to a GA, it
is difficult to make progress towards a quantitative description for
realistic problems, as it is difficult to determine which are the
relevant building blocks and which building blocks are actually
present in a finite population. This approach has led to false
predictions of problem difficulty, especially when the dynamic nature
of the search is ignored~\cite{Forr,Greff}. A rigorous approach
introduced by Vose \etal describes the population dynamics as a
dynamical system in a high-dimensional Euclidean space, with each
genetic operator incorporated as a transition
tensor~\cite{Vose1,Vose2}. This method uses a microscopic description
and is difficult to apply to specific problems of realistic size due
to high-dimensionality of the equations of motion. More recently, a
number of results have been derived for the performance of a GA on a
class of simple additive problems \cite{Baum,Muhl,Theirens}. These
approaches use a macroscopic description, but assume a particular form
for the distribution of macroscopics which is only applicable in large
populations and for a specific class of problem. It is difficult to
see how to transfer the results to other problems where finite
population effects cannot be ignored.
Other researchers have introduced theories based on averages. A
description of GA dynamics in terms of the evolution of the parent
distribution from which finite populations are sampled was produced by
Vose and Wright~\cite{Vose95}. This microscopic approach provides a
description of the finite population effects which is elegant and
correct. However, like other microscopic descriptions it is difficult
to apply to specific realistic problems due to the enormous
dimensionality of the system. Macroscopic descriptions can result in
low-dimensional equations which can be more easily studied. Another
formalism based on the evolution of parent distributions was developed
by Peck and Dhawan~\cite{Peck}, but they did not use the formalism to
develop equations describing finite population dynamics.
The importance of choosing appropriate quantities to average is
well-known in statistical physics, but does not seem to be widely
appreciated in genetic algorithm theory. In particular, many authors
use results based on properties of the {\em average} probability
distribution; this is insensitive to finite-population fluctuations
and only gives accurate results in the infinite population
limit. Thus, many results are only accurate in the infinite population
limit, even though this limit is not taken explicitly. For example,
Srinivas and Patnaik~\cite{Sri} and Peck and Dhawan~\cite{Peck} both
produce equations for the moments of the fitness distribution in terms
of the moments of the initial distribution. These are moments of the
average distribution. Consequently, the equations do not correctly
describe a finite population and results presented in these papers
reflect that. Other attempts to describe GAs in terms of population
moments (or schema moments or average Walsh coefficients) suffer from
this problem. Macroscopic descriptions of population dynamics are
also widely used in quantitative genetics (see, for example,
reference~\cite{Falconer}). In this field the importance of
finite-population fluctuations is more widely appreciated; the
infinite population limit is usually taken explicitly. Using the
statistical mechanics approach, equations for fitness moments which
include finite-population fluctuations can be derived by averaging the
cumulants, which are more robust statistics.
Here, the statistical mechanics formalism is applied to a simple
problem from learning theory, generalization of a rule by a perceptron
with binary weights. The perceptron learns from a set of training
patterns produced by a teacher perceptron, also with binary weights. A
new batch of training patterns are presented to each population member
each generation which simplifies the analysis considerably, since
there are no over-training effects and each training pattern can be
considered as statistically independent. Baum \etal have shown that
this problem is similar to a paramagnet whose energy is corrupted by
noise and they suggest that the GA may perform well in this case,
since it is relatively robust towards noise when compared to local
search methods~\cite{Baum}. The noise in the training energy is due to
the finite size of the training set and is a feature of many machine
learning problems~\cite{Fitz}.
We show that the noise in the training energy is well approximated by
a Gaussian distribution for large problem size, whose mean and
variance can be exactly determined and are simple functions of the
overlap between pupil and teacher. This allows the dynamics to be
solved, extending the statistical mechanics formalism to this simple,
yet non-trivial, problem from learning theory. The theory is compared
to simulations of a real GA averaged over many runs and is shown to
agree well, accurately predicting the evolution of the cumulants of
the overlap distribution within the population, as well as the mean
correlation and mean best population member. In the limit of weak
selection and large problem size the population size can be increased
to remove finite training set effects and this leads to an expression
for the optimal training batch size.
\section{Generalization in a perceptron with binary weights}
A perceptron with Ising weights $w_i \in \{-1,1\}$ maps an Ising
training pattern $\{\zeta_i^\mu\}$ onto a binary output,
\begin{equation}
O^\mu = {\rm Sgn}\left(\sum_{i=1}^N w_i \zeta_i^\mu\right)
\qquad {\rm Sgn}(x)=\cases{ 1&for $x \geq 0$ \\ -1&for
$x<0$\\}
\end{equation}
where $N$ is the number of weights. Let $t_i$ be the weights of the
teacher perceptron and $w_i$ be the weights of the pupil. The
stability of a pattern is a measure of how well it is stored by the
perceptron and the stabilities of pattern $\mu$ for the teacher and
pupil are $\Lambda_t^\mu$ and $\Lambda_w^\mu$ respectively,
\begin{equation}
\Lambda_t^\mu = \frac{1}{\sqrt{N}}\sum_{i=1}^N t_i \zeta_i^\mu
\hspace{1cm} \Lambda_w^\mu = \frac{1}{\sqrt{N}}\sum_{i=1}^N
w_i \zeta_i^\mu
\end{equation}
The training energy will be defined as the number of patterns the
pupil misclassifies,
\begin{equation}
E = \sum_{\mu=1}^{\lambda N}
\Theta(-\Lambda_t^\mu\Lambda_w^\mu) \qquad
\Theta(x)=\cases{ 1&for $x \geq 0$ \\ 0&for $x<0$\\}
\label{def_E}
\end{equation}
where $\lambda N$ is the number of training patterns presented and
$\Theta(x)$ is the Heaviside function. In this work a new batch of
training examples is presented each time the training energy is
calculated.
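As a concrete illustration (a minimal sketch, not the code used for the
simulations below), the training energy of equation~(\ref{def_E}) can be
estimated numerically for random Ising patterns; the problem size and batch
ratio chosen here are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, lam = 200, 2.0                      # problem size and batch ratio

teacher = np.ones(N, dtype=int)        # t_i = +1 without loss of generality
pupil = rng.choice([-1, 1], size=N)    # a random Ising pupil

def training_energy(pupil, teacher, n_patterns, rng):
    # Number of misclassified patterns in a fresh random batch
    patterns = rng.choice([-1, 1], size=(n_patterns, len(pupil)))
    stab_t = patterns @ teacher/np.sqrt(len(pupil))   # teacher stabilities
    stab_w = patterns @ pupil/np.sqrt(len(pupil))     # pupil stabilities
    return int(np.sum(stab_t*stab_w <= 0))            # Theta(-L_t L_w)

E = training_energy(pupil, teacher, int(lam*N), rng)
\end{verbatim}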
For large $N$ it is possible to calculate the entropy of solutions
compatible with the total training set and there is a first-order
transition to perfect generalization as the size of training set is
increased~\cite{Gyorg,Somp}. This transition occurs for $O(N)$
patterns and beyond the transition the weights of the teacher are the
only weights compatible with the training set. In this case there is
no problem with over-training to that particular set, although a
search algorithm might still fail to find these weights. The GA
considered here will typically require more than $O(N)$ patterns,
since it requires an independent batch for each energy evaluation, so
avoiding any possibility of over-training.
Define $R$ to be the overlap between pupil and teacher,
\begin{equation}
R = \frac{1}{N}\sum_{i=1}^N w_i t_i \label{def_R}
\end{equation}
We choose $t_i = 1$ at every site without loss of generality. If a
statistically independent pattern is presented to a perceptron, then
for large $N$ the stabilities of the teacher and pupil are Gaussian
variables each with zero mean and unit variance, and with covariance
$R$,
\begin{equation}
p(\Lambda_t,\Lambda_w) =
\frac{1}{2\pi\sqrt{1-R^2}}\exp\!\left(\frac{-(\Lambda_t^2 -
2R\Lambda_t\Lambda_w + \Lambda_w^2)}{2(1-R^2)}\right)
\label{stab_dist}
\end{equation}
The conditional probability distribution for the training energy given
the overlap is,
\begin{equation}
p(E|R) = \left\langle \delta\!\left(E - \sum_{\mu =
1}^{\lambda N} \Theta(-\Lambda_t^\mu\Lambda_w^\mu)
\right)\right\rangle_{\!\!\{\Lambda_t^\mu,\Lambda_w^\mu\}}
\label{def_p}
\end{equation}
where the brackets denote an average over stabilities distributed
according to the joint distribution in equation~(\ref{stab_dist}). The
logarithm of the Fourier transform generates the cumulants of the
distribution and using the Fourier representation for the delta
function in $p(E|R)$ one finds,
\begin{eqnarray}
\hat{\rho}(-\i t|R) \:&= \:\int_{-\infty}^\infty \!\!\!\d E\,
p(E|R) \, \e^{tE} \nonumber \\ &=
\:\left\langle \prod_{\mu=1}^{\lambda N}
\exp\left[t\Theta(-\Lambda_t^\mu\Lambda_w^\mu)\right]
\right\rangle \nonumber \\ &= \:\left(1 +
\frac{1}{\pi}(\e^t-1)\cos^{-1}(R)
\right)^{\!\lambda N}
\end{eqnarray}
The logarithm of this quantity can be expanded in $t$, with the
cumulants of the distribution given by the coefficients of the
expansion. The higher cumulants are $O(\lambda N)$ and it turns out
that the shape of the distribution is not critical as long as
$\lambda$ is $O(1)$. A Gaussian distribution will be a good
approximation in this case,
\begin{equation}
p(E|R) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(\frac{-(E -
E_{\rm g}(R))^2}{2\sigma^2}\right)
\label{def_p_g}
\end{equation}
where the mean and variance are,
\begin{eqnarray}
E_{\rm g}(R) \; = \; \frac{\lambda N}{\pi} \cos^{-1}(R)
\label{<E|R>} \\ \sigma^2 \; = \; \frac{\lambda
N}{\pi}\cos^{-1}(R)\left(1 -
\frac{1}{\pi}\cos^{-1}(R)\right)\label{sigma}
\end{eqnarray}
Here, $E_{\rm g}(R)$ is the generalization error, which is the
probability of misclassifying a randomly chosen training example. The
variance expresses the fact that there is noise in the energy
evaluation due to the finite size of the training batch.
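The mean and variance of equations~(\ref{<E|R>}) and (\ref{sigma}) are
straightforward to evaluate, and a noisy training energy can be drawn directly
under the Gaussian approximation; the following sketch uses illustrative
parameter values.
\begin{verbatim}
import numpy as np

def gen_error(R, lam, N):
    # Mean training energy E_g(R)
    return lam*N/np.pi*np.arccos(R)

def noise_var(R, lam, N):
    # Variance of the training energy due to the finite batch
    eps = np.arccos(R)/np.pi          # generalization error per pattern
    return lam*N*eps*(1 - eps)

# A sampled training energy under the Gaussian approximation
rng = np.random.default_rng(1)
R, lam, N = 0.3, 2.0, 200
E_sample = rng.normal(gen_error(R, lam, N), np.sqrt(noise_var(R, lam, N)))
\end{verbatim}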
\section{Modelling the Genetic Algorithm}
\subsection{The Genetic Algorithm}
Initially, a random population of solutions is created, in this case
Ising weights of the form $\{w_1,w_2\ldots,w_N\}$ where the alleles
$w_i$ are the weights of a perceptron. The size of the population is
$P$ and will usually remain fixed, although a dynamical resizing of
the population is discussed in section~\ref{sec_rescale}. Under
selection, new population members are chosen from the present
population with replacement, with a probability proportional to their
Boltzmann weight. The selection strength $\beta$ is analogous to the
inverse temperature and determines the intensity of selection, with
larger $\beta$ leading to a higher variance of selection
probabilities~\cite{Maza,PBS}. Under standard uniform crossover, the
population is divided into pairs at random and the new population is
produced by swapping weights at each site within a pair with some
fixed probability. Here, bit-simulated crossover is used, with new
population members created by selecting weights at each site from any
population member in the original population with equal
probability~\cite{Sysw}. In practice, the alleles at every site are
completely shuffled within the population and this brings the
population straight to the fixed point of standard crossover. This
special form of crossover is only practicable here because crossover
does not change the mean overlap between pupil and teacher within the
population. Standard mutation is used, with random bits flipped
throughout the population with probability $p_{\rm m}$.
Each population member receives an independent batch of $\lambda N$
examples from the teacher perceptron each generation, so that the
relationship between the energy and the overlap between pupil and
teacher is described by the conditional probability defined in
equation~(\ref{def_p}). In total, $\lambda N\!\times\!PG$ training
patterns are used, where $G$ is the total number of generations and
$P$ is the population size (or the mean population size).
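A schematic implementation of one generation of this GA is sketched below; it
is a simplified illustration rather than the code used for the simulations,
and the column-wise shuffle used for bit-simulated crossover assumes a recent
NumPy.
\begin{verbatim}
import numpy as np

def ga_generation(pop, energies, beta, p_m, rng):
    # One generation: Boltzmann selection, bit-simulated crossover, mutation.
    # pop is a (P, N) array of Ising weights, energies the P training energies.
    P, N = pop.shape
    # Boltzmann selection with replacement (energies shifted for stability)
    w = np.exp(-beta*(energies - energies.min()))
    pop = pop[rng.choice(P, size=P, p=w/w.sum())]
    # Bit-simulated crossover: shuffle the alleles at each site independently
    pop = rng.permuted(pop, axis=0)
    # Mutation: flip each weight with probability p_m
    return np.where(rng.random((P, N)) < p_m, -pop, pop)
\end{verbatim}
The operator order here (selection, crossover, mutation) is one natural
choice; the macroscopic equations derived below can be iterated in whichever
sequence the operators are applied.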
\subsection{The Statistical Mechanics formalism}
The population will be described in terms of a number of macroscopic
variables, the cumulants of the overlap distribution within the
population and the mean correlation within the population. In the
following sections, difference equations will be derived for the
average change of a small set of these macroscopics, due to each
operator. A more exact approach considers fluctuations from mean
behaviour by modelling the evolution of an ensemble of populations
described by a set of order parameters~\cite{adam}. Here, it is
assumed that the dynamics average sufficiently well so that we can
describe the dynamics in terms of deterministic equations for the
average behaviour of each macroscopic. This assumption is justified by
the excellent agreement between the theory and simulations of a real
GA, some of which are presented in section~\ref{sec_sim}. Once
difference equations are derived for each macroscopic, they can be
iterated in sequence in order to simulate the full dynamics.
Notice that although we follow information about the overlap between
teacher and pupil, this is of course not known in general. The only
feedback available when training the GA is the training energy defined
in equation~\ref{def_E}. Selection acts on this energy, and it is
therefore necessary to average over the noise in selection which is
due both to the stochastic nature of the training energy evaluation
and of the selection procedure itself.
Finite population effects prove to be of fundamental importance when
modelling the GA. A striking example of this is in selection, where an
infinite population assumption leads to the conclusion that the
selection strength can be set arbitrarily high in order to move the
population to the desired solution. This is clearly nonsense, as
selection could never move the population beyond the best existing
population member. Two improvements are required to model selection
accurately; the population should be finite and the distribution from
which it is drawn should be modelled in terms of more than two
cumulants, going beyond a Gaussian approximation~\cite{PBS}. The
higher cumulants play a particularly important role in selection which
will be described in section~\ref{sec_selapp}~\cite{PBS2}.
The higher cumulants of the population after bit-simulated crossover
are determined by assuming the population is at maximum entropy with
constraints on the mean overlap and correlation within the population
(see \ref{max_ent}). The effect of mutation on the mean overlap and
correlation only requires the knowledge of these two macroscopics, so
these are the only quantities we need to evolve in order to model the
full dynamics. All other relevant properties of the population after
crossover can be found from the maximum entropy ansatz. A more
general method is to follow the evolution of a number of cumulants
explicitly, as in references~\cite{PBS2,Ratt}, but this is unnecessary
here because of the special form of crossover used, which is not
appropriate in problems with stronger spatial interactions.
\subsection{The cumulants and correlation}
The cumulants of the overlap distribution within the population are
robust statistics which are often reasonably stable to fluctuations
between runs of the GA, so that they average well~\cite{PBS2}. The
first two cumulants are the mean and variance respectively, while the
higher cumulants describe the deviation from a Gaussian
distribution. The third and fourth cumulants are related to the
skewness and kurtosis of the population respectively. A population
member, labelled $\alpha$, is associated with overlap $R_\alpha$
defined in equation~(\ref{def_R}). The cumulants of the overlap
distribution within a finite population can be generated from the
logarithm of a partition function,
\begin{equation}
Z = \sum_{\alpha=1}^P \exp(\gamma R_\alpha)
\end{equation}
where $P$ is the population size. If $\kappa_n$ is the $n$th cumulant,
then,
\begin{equation}
\kappa_n = \lim_{\gamma\rightarrow
0}\frac{\partial^n}{\partial\gamma^n}\log Z
\end{equation}
The partition function holds all the information required to determine
the cumulants of the distribution of overlaps within the population.
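In practice the first few cumulants of a given population follow directly from
the overlaps $R_\alpha$, which is equivalent to differentiating $\log Z$ at
$\gamma=0$; a short sketch:
\begin{verbatim}
import numpy as np

def overlap_cumulants(R):
    # First four cumulants of the overlap distribution within one population
    k1 = R.mean()
    c = R - k1
    k2 = np.mean(c**2)
    k3 = np.mean(c**3)
    k4 = np.mean(c**4) - 3*k2**2
    return k1, k2, k3, k4
\end{verbatim}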
The correlation within the population is a measure of the microscopic
similarity of population members and is important because selection
correlates a finite population, sometimes leading to premature
convergence to poor solutions. It is also important in calculating the
effect of crossover, since this involves the interaction of different
population members and a higher correlation leads to less disruption
on average. The correlation between two population members, $\alpha$
and $\beta$, is $q_{\alpha\beta}$ and is defined by,
\begin{equation}
q_{\alpha\beta} = \frac{1}{N}\sum_{i=1}^N w_i^\alpha w_i^\beta
\end{equation}
The mean correlation is $q$ and is defined by,
\begin{equation}
q = \frac{2}{P(P-1)}\sum_{\alpha=1}^P\sum_{\beta > \alpha}
q_{\alpha\beta}
\end{equation}
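For Ising weights ($q_{\alpha\alpha}=1$) the mean correlation can be computed
in $O(PN)$ operations rather than the naive $O(P^2N)$ by first summing the
weights at each site; a sketch:
\begin{verbatim}
import numpy as np

def mean_correlation(pop):
    # Mean pairwise correlation q over distinct pairs, in O(PN) operations
    P, N = pop.shape
    site_sums = pop.sum(axis=0)            # sum over the population at each site
    pair_sum = np.sum(site_sums**2) - P*N  # sum over alpha != beta of w.w
    return pair_sum/(N*P*(P - 1))
\end{verbatim}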
In order to model a finite population we consider that $P$ population
members are randomly sampled from an infinite population, which is
described by a set of infinite population cumulants,
$K_n$~\cite{adam}. The expectation values for the mean correlation and
the first cumulant of a finite population are equal to the infinite
population values. The higher cumulants are reduced by a factor which
depends on the population size, \numparts
\begin{eqnarray}
\kappa_1 & = & K_1 \label{k1} \\ \kappa_2 & = & P_2 K_2
\label{k2} \\ \kappa_3 & = & P_3 K_3 \label{k3} \\ \kappa_4 &
= & P_4 K_4 - 6P_2(K_2)^2/P \label{k4}
\end{eqnarray}
\endnumparts Here, $P_2$, $P_3$ and $P_4$ give finite population
corrections to the infinite population result (see
reference~\cite{PBS2} for a derivation),
\begin{equation}
P_2 = 1 - \frac{1}{P} \qquad P_3 = 1 - \frac{3}{P} +
\frac{2}{P^2} \qquad P_4 = 1 - \frac{7}{P} + \frac{12}{P^2} -
\frac{6}{P^3}
\end{equation}
Although we model the evolution of a finite population, it is more
natural to follow the macroscopics associated with the infinite
population from which the finite population is
sampled~\cite{adam}. The expected cumulants of a finite population can
be retrieved through equations~(\ref{k1}) to (\ref{k4}).
\section{Crossover and mutation}
\label{sec_mutcross}
The mean effects of standard crossover and mutation on the
distribution of overlaps within the population are equivalent to the
paramagnet results given in~\cite{PBS2}. However, bit-simulated
crossover brings the population straight to the fixed point of
standard crossover, which will be assumed to be a maximum entropy
distribution with the correct mean overlap and correlation, as
described in \ref{max_ent}. To model this form of crossover one only
requires knowledge of these two macroscopics, so these are the only
two quantities we need to evolve under selection and mutation.
The mean overlap and correlation after averaging over all mutations
are, \numparts
\begin{eqnarray}
K_1^{\rm m} & = (1 - 2p_{\rm m})K_1 \\ q_{\rm m} & = (1 -
2p_{\rm m})^2q
\end{eqnarray}
\endnumparts where $p_{\rm m}$ is the probability of flipping a bit
under mutation~\cite{PBS2}. The higher cumulants after crossover are
required to determine the effects of selection, discussed in the next
section. The mean overlap and correlation are unchanged by crossover
and the other cumulants can be determined by noting that bit-simulated
crossover completely removes the difference between site averages
within and between different population members. For example, terms
like $\langle w_i^\alpha w_j^\beta\rangle_{i\neq j}$ and $\langle
w_i^\alpha w_j^\alpha \rangle_{i \neq j}$ are equal on average. After
cancelling terms of this form one finds that the first four cumulants
of an infinite population after crossover are, \numparts
\begin{eqnarray} K_1^{\rm c} & = & K_1 \label{k2inf} \\ \vspace{6mm}
K_2^{\rm c} & = & \frac{1}{N}(1-q) \\ K_3^{\rm c} & = &
-\frac{2}{N^2} \left(K_1 - \frac{1}{N}\sum_{i=1}^N\langle
w_i^\alpha \rangle_\alpha^3 \right) \label{k3inf} \\ K_4^{\rm
c} & = & -\frac{2}{N^3}\left(1 - 4q +
\frac{3}{N}\sum_{i=1}^N\langle w_i^\alpha \rangle_\alpha^4
\right) \label{k4inf}
\end{eqnarray}
\endnumparts Here, the brackets denote population averages. The third
and fourth order terms in the expressions for the third and fourth
cumulants are calculated in \ref{max_ent} by making a maximum entropy
ansatz. The expected cumulants of a finite population after crossover
are determined from equations~(\ref{k1}) to (\ref{k4}).
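These mutation and crossover updates are collected in the sketch below; when
the site-wise mean weights are not supplied, the uniform profile $W_i=K_1$
used here is a simplifying assumption rather than the full maximum entropy
ansatz of \ref{max_ent}.
\begin{verbatim}
import numpy as np

def mutate_macroscopics(K1, q, p_m):
    # Average effect of mutation on the mean overlap and correlation
    damp = 1 - 2*p_m
    return damp*K1, damp**2*q

def crossover_cumulants(K1, q, N, W=None):
    # Infinite-population cumulants after bit-simulated crossover.
    # W holds the site-wise mean weights <w_i>; if omitted, a uniform
    # profile W_i = K1 is used (a simplifying assumption, not the full
    # maximum-entropy ansatz).
    if W is None:
        W = np.full(N, K1)
    K2c = (1 - q)/N
    K3c = -2/N**2*(K1 - np.mean(W**3))
    K4c = -2/N**3*(1 - 4*q + 3*np.mean(W**4))
    return K1, K2c, K3c, K4c
\end{verbatim}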
\section{The cumulants after selection}
\label{sec_selcum}
Under selection, $P$ new population members are chosen from the
present population with replacement. Following Pr\"{u}gel-Bennett we
split this operation into two stages~\cite{adam}. First we randomly
sample $P$ population members from an infinite population in order to
create a finite population. Then an infinite population is generated
from this finite population by selection. The proportion of each
population member represented in the infinite population after
selection is equal to its probability of being selected, which is
defined below. The sampling procedure can be averaged out in order to
calculate the expectation values for the cumulants of the overlap
distribution within an infinite population after selection, in terms
of the infinite population cumulants before selection.
The probability of selecting population member $\alpha$ is $p_\alpha$
and for Boltzmann selection one chooses,
\begin{equation}
p_{\alpha} = \frac{\e^{-\beta E_\alpha}}{\sum^P \e^{-\beta
E_\alpha}} \label{p_alpha}
\end{equation}
where $\beta$ is the selection strength and the denominator ensures
that the probability is correctly normalized. Here, $E_\alpha$ is
the training energy of population member $\alpha$.
One can then define a partition function for selection,
\begin{equation}
Z_{\rm s} = \sum_{\alpha=1}^P \exp(-\beta E_\alpha + \gamma
R_\alpha)
\end{equation}
The logarithm of this quantity generates the cumulants of the overlap
distribution for an infinite population after selection,
\begin{equation}
K_n^{\rm s} = \lim_{\gamma\rightarrow 0}
\frac{\partial^n}{\partial\gamma^n}\log Z_{\rm s}
\end{equation}
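For a single finite population, the cumulants of the infinite population
produced by selection follow directly from the selection probabilities
$p_\alpha$, which is equivalent to differentiating $\log Z_{\rm s}$ at
$\gamma=0$; a sketch:
\begin{verbatim}
import numpy as np

def selected_cumulants(R, E, beta):
    # Cumulants of the overlap distribution of the infinite population
    # produced by Boltzmann selection from one finite population
    p = np.exp(-beta*(E - E.min()))
    p /= p.sum()                   # selection probabilities p_alpha
    K1 = np.sum(p*R)
    c = R - K1
    K2 = np.sum(p*c**2)
    K3 = np.sum(p*c**3)
    K4 = np.sum(p*c**4) - 3*K2**2
    return K1, K2, K3, K4
\end{verbatim}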
One can average $\log Z_{\rm s}$ over the population by assuming each
population member is independently selected from an infinite
population with the correct cumulants,
\begin{equation}
\langle \log Z_{\rm s} \rangle = \left( \prod_{\alpha=1}^P
\int\!\!\d R_\alpha\,\d E_\alpha\,
p(R_\alpha)\,p(E_\alpha|R_\alpha) \right) \log Z_{\rm s}
\label{logZ}
\end{equation}
where $p(E|R)$ determines the stochastic relationship between energy
and overlap as defined in equation~(\ref{def_p}) which will be
approximated by the Gaussian distribution in
equation~(\ref{def_p_g}). Following Pr\"{u}gel-Bennett and Shapiro one
can use Derrida's trick and express the logarithm as an integral in
order to decouple the average~\cite{Derrida,PBS}.
\begin{eqnarray}
\langle \log Z_{\rm s} \rangle & = & \int_0^\infty\!\! \d t \:
\frac{\e^{-t} - \langle \e^{-tZ_{\rm s}} \rangle}{t} \nonumber
\\ & = & \int_0^\infty\!\! \d t \: \frac{\e^{-t} -
f^P(t,\beta,\gamma)}{t}
\label{log_Z}
\end{eqnarray}
where,
\begin{equation}
f(t,\beta,\gamma) = \int \!\d R\,\d E\,p(R)\,p(E|R)
\exp\!\left(-t\e^{-\beta E + \gamma R}\right) \label{ft}
\end{equation}
The distribution of overlaps within an infinite population is
approximated by a cumulant expansion around a Gaussian
distribution~\cite{PBS2},
\begin{equation}
p(R) = \frac{1}{\sqrt{2\pi
K_2}}\exp\!\left(\frac{-(R-K_1)^2}{2 K_2}\right)\left[1 +
\sum_{n=3}^{n_c} \frac{K_n}{K_2^{n/2}} \:
u_n\!\!\left(\frac{R-K_1}{\sqrt{K_2}}\right)\right]
\label{cum_exp}
\end{equation}
where $u_n(x) = (-1)^n\e^{\frac{x^2}{2}}\frac{{\rm d}^n}{{\rm d}x^n}
\e^{\frac{-x^2}{2}}/n!$ are scaled Hermite polynomials. Four
cumulants were used for the simulations presented in
section~\ref{sec_sim} and the third and fourth Hermite polynomials are
$u_3(x) = (x^3 - 3x)/3!$ and $u_4(x) = (x^4 - 6x^2 + 3)/4!$. This
function is not a well defined probability distribution since it is
not necessarily positive, but it has the correct cumulants and
provides a good approximation. In general, the integrals in
equations~(\ref{log_Z}) and (\ref{ft}) have to be computed
numerically, as was the case for the simulations presented in
section~\ref{sec_sim}.
\subsection{Weak selection and large $N$}
\label{sec_selapp}
It is instructive to expand in small $\beta$ and large $N$, as this
shows the contributions for each cumulant explicitly and gives some
insight into how the size of the training set affects the
dynamics. Since the variance of the population is $O(1/N)$ it is
reasonable to expand the mean of $p(E|R)$, defined in
equation~(\ref{<E|R>}), around the mean of the population in this
limit ($R \simeq K_1$). It is also assumed that the variance of
$p(E|R)$ is well approximated by its leading term and this assumption
may break down if the gradient of the noise becomes important. Under
these simplifying assumptions one finds,
\begin{eqnarray}
E_{\rm g}(R) \:\simeq \:\frac{\lambda N}{\pi}\left(
\cos^{-1}(K_1) - \frac{(R - K_1)}{\sqrt{1 - K_1^2}} \right)
\label{app_mean}\\ \sigma^2 \:\simeq \:\frac{\lambda
N}{\pi}\cos^{-1}(K_1)\left(1 -
\frac{1}{\pi}\cos^{-1}(K_1)\right) \label{app_var}
\end{eqnarray}
Following Pr\"{u}gel-Bennett and Shapiro~\cite{PBS}, one can expand
the integrand in equation~(\ref{log_Z}) for small $\beta$ (as long as
$\lambda$ is at least $O(1)$ so that the variance of $p(E|R)$ is
$O(N)$),
\begin{equation}
f^P(t,\beta,\gamma) \simeq
\exp(-tP\hat{\rho}_1(\beta,\gamma))\left(1 +
\frac{Pt^2}{2}\left(\hat{\rho}_2(\beta,\gamma) -
\hat{\rho}_1^2(\beta,\gamma)\right)\right)
\end{equation}
where,
\begin{equation}
\hat{\rho}_n(\beta,\gamma) = \int \!\!\d R\,\d E\, p(R) \,
p(E|R) \:\e^{n(-\beta E + \gamma R)} \label{rhohat}
\end{equation}
We approximate $p(E|R)$ by a Gaussian whose mean and variance given in
equations~(\ref{app_mean}) and (\ref{app_var}). Completing the
integral in equation~(\ref{log_Z}), one finds an expression for the
cumulants of an infinite population after selection,
\begin{equation}
K_n^{\rm s} = \lim_{\gamma\rightarrow
0}\frac{\partial^n}{\partial
\gamma^n}\left[\log(P\rho_1(k\beta,\gamma)) -
\frac{\e^{(\beta\sigma)^2}}{2P}\left(\frac{\rho_2(k\beta,\gamma)}{\rho_1^2(k\beta,\gamma)}\right)\right]
\label{kns}
\end{equation}
where,
\begin{eqnarray}
\rho_n(k\beta,\gamma) & = & \int \!\!\d R\, p(R)\e^{nR(k\beta
+ \gamma)} \nonumber \\ & = & \exp\!\left(\sum_{i=1}^\infty
\frac{n^i(k\beta+\gamma)^i K_i}{i!}\right) \label{def_rhon}
\end{eqnarray}
Here, a cumulant expansion has been used. The parameter $k$ is the
constant of proportionality relating the generalization error to the
overlap in equation~(\ref{app_mean}) (constant terms are irrelevant,
as Boltzmann selection is invariant under the addition of a constant
to the energy).
\begin{equation}
k = \frac{\lambda N}{\pi\sqrt{1 - K_1^2}} \label{def_k}
\end{equation}
For the first few cumulants of an infinite population after selection
one finds, \numparts
\begin{eqnarray}
K_1^{\rm s} & = & K_1 + \left(1 -
\frac{\e^{(\beta\sigma)^2}}{P}\right)k\beta K_2 + O(\beta^2)\\
K_2^{\rm s} & = & \left(1 -
\frac{\e^{(\beta\sigma)^2}}{P}\right)K_2 + \left(1 -
\frac{3\e^{(\beta\sigma)^2}}{P}\right)k\beta K_3 +
O(\beta^2)\\ K_3^{\rm s} & = & \left(1 -
\frac{3\e^{(\beta\sigma)^2}}{P}\right)K_3 -
\frac{6\e^{(\beta\sigma)^2}}{P}k\beta K_2^2 + O(\beta^2)
\label{k3s}
\end{eqnarray}
\endnumparts The expected cumulants of a finite population after
selection are retrieved through equations~(\ref{k1}) to (\ref{k4}).
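A sketch of these leading-order updates, including the noise-enhanced
finite-population factor $\e^{(\beta\sigma)^2}/P$, is given below; the
parameter values are left to the caller and terms of $O(\beta^2)$ are dropped,
as in the expansion above.
\begin{verbatim}
import numpy as np

def weak_selection_update(K1, K2, K3, beta, P, lam, N):
    # Leading-order cumulant update under Boltzmann selection with
    # training noise (weak selection, large N)
    k = lam*N/(np.pi*np.sqrt(1 - K1**2))      # effective-strength factor
    eps = np.arccos(K1)/np.pi
    sigma2 = lam*N*eps*(1 - eps)              # training-noise variance
    boost = np.exp(beta**2*sigma2)/P          # noise-enhanced finite-P term
    K1s = K1 + (1 - boost)*k*beta*K2
    K2s = (1 - boost)*K2 + (1 - 3*boost)*k*beta*K3
    K3s = (1 - 3*boost)*K3 - 6*boost*k*beta*K2**2
    return K1s, K2s, K3s
\end{verbatim}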
For the zero noise case ($\sigma = 0$) these expressions are equivalent to selecting
directly on overlaps (with energy $-R$), with selection strength
$k\beta$. We will therefore call $k\beta$ the effective selection
strength. It has previously been shown that this parameter should be
scaled inversely with the standard deviation of the population in
order to make continued progress under selection, without converging
too quickly~\cite{PBS2}. Strictly speaking, we can only use
information about the distribution of energies since the overlaps will
not be known in general, but to first order in $R-K_1$ this is
equivalent to scaling the selection strength inversely to the standard
deviation of the energy distribution. As in the problems considered in
reference~\cite{PBS2}, the finite population effects lead to a reduced
variance and an increase in the magnitude of the third cumulant,
related to the skewness of the population. This leads to an
accelerated reduction in variance under further selection. The noise
due to the finite training set increases the size of the finite
population effects. The other genetic operators, especially crossover,
reduce the magnitude of the higher cumulants to allow further progress
under selection.
\section{The correlation after selection}
\label{sec_cor}
To model the full dynamics, it is necessary to evolve the mean
correlation within the population under selection. This is rather
tricky, as it requires knowledge of the relationship between overlaps
and correlations within the population. To make the problem tractable,
it is assumed that before selection the population is at maximum
entropy with constraints on the mean overlap and correlation within
the population, as discussed in \ref{max_ent}. The calculation
presented here is similar to that presented elsewhere~\cite{Ratt},
except for a minor refinement which seems to be important when
considering problems with noise under selection.
The correlation of an infinite population after selection from a
finite population is given by,
\begin{eqnarray}
q_{\rm s} & = & \sum_{\alpha=1}^P p_\alpha^2(1 -
q_{\alpha\alpha}) + \sum_{\alpha=1}^P\sum_{\beta=1}^P p_\alpha
p_\beta q_{\alpha\beta} \nonumber \\ & = & \Delta q_\d \: + \:
q_\infty \label{q_s}
\end{eqnarray}
where $p_\alpha$ is the probability of selection, defined in
equation~(\ref{p_alpha}). The first term is due to the duplication of
population members under selection, while the second term is due to
the natural increase in correlation as the population moves into a
region of lower entropy. The second term gives the increase in the
correlation in the infinite population limit, where the duplication
term becomes negligible. An extra set of variables $q_{\alpha\alpha}$
is assumed to come from the same statistics as the distribution of
correlations within the population. Recall that the expectation value
for the correlation of a finite population is equal to the correlation
of the infinite parent population from which it is sampled.
\subsection{Natural increase term}
We estimate the conditional probability distribution for correlations
given overlaps before selection $p(q_{\alpha\beta}|R_\alpha,R_\beta)$
by assuming the weights within the population are distributed
according to the maximum entropy distribution described in
\ref{max_ent}. Then $q_\infty$ is simply the correlation averaged over
this distribution and the distribution of overlaps after selection,
$p_{\rm s}(R)$.
\begin{equation}
q_\infty = \int\!\! \d q_{\alpha\beta} \, \d R_\alpha \, \d
R_\beta \, p_{\rm s}(R_\alpha) p_{\rm s}(R_\beta)
p(q_{\alpha\beta}|R_\alpha,R_\beta)\,q_{\alpha\beta}
\label{q_infty} \\
\end{equation}
This integral can be calculated for large $N$ by the saddle point
method and we find that in this limit the result only depends on the
mean overlap after selection (see \ref{app_cond}).
\begin{equation}
q_\infty(y) = \frac{1}{N}\sum_{i=1}^N \left(\ \frac{W_i +
\tanh(y)}{1 + W_i\tanh(y)} \right)^2 \label{qqs}
\end{equation}
where,
\begin{equation}
K_1^{\rm s} = \frac{1}{N}\sum_{i=1}^N \frac{W_i + \tanh(y)}{1
+ W_i\tanh(y)} \label{qk1s}
\end{equation}
The natural increase contribution to the correlation $q_\infty$ is an
implicit function of $K_1^{\rm s}$ through $y$, which is related to
$K_1^{\rm s}$ by equation~(\ref{qk1s}). Here, $W_i$ is the mean weight
at site $i$ before selection (recall that we have chosen the teacher's
weights to be $t_i = 1$ at every site, without loss of generality) and
for a distribution at maximum entropy one has,
\begin{equation}
W_i = \tanh(z + x\eta_i) \\
\end{equation}
The Lagrange multipliers, $z$ and $x$, are chosen to enforce
constraints on the mean overlap and correlation within the population
before selection and $\eta_i$ is drawn from a Gaussian distribution
with zero mean and unit variance (see \ref{max_ent}).
It is instructive to expand in $y$, which is appropriate in the weak
selection limit. In this case one finds,
\begin{eqnarray}
K_1^{\rm s} = K_1^{\rm c} + y(N K_2^{\rm c}) +
\frac{y^2}{2}(N^2 K_3^{\rm c}) + \cdots \\ q_\infty(y) = q -
y(N^2 K_3^{\rm c}) - \frac{y^2}{2}(N^3 K_4^{\rm c}) + \cdots
\label{q_inf}
\end{eqnarray}
where $K_n^{\rm c}$ are the infinite population expressions for the
cumulants after bit-simulated crossover, when the population is
assumed to be at maximum entropy (defined in equations (\ref{k2inf})
to (\ref{k4inf}) up to the fourth cumulant). Here, $y$ plays the role
of the effective selection strength in the associated infinite
population problem, so for an infinite population one could simply set
$y = k\beta/N$, where $k$ is defined in equation~(\ref{def_k}). To
calculate the correlation after selection, we solve
equation~(\ref{qk1s}) for $y$ and then substitute this value into the
equation~(\ref{qqs}) to calculate $q_\infty$. In general this must be
done numerically, although the weak selection expansion can be used to
obtain an analytical result which gives a very good approximation in
many cases. Notice that the third cumulant in equation~(\ref{q_inf})
will be negative for $K_1>0$ because of the negative entropy gradient
and this will accelerate the increased correlation under selection.
\subsection{Duplication term}
\label{sec_dup}
The duplication term $\Delta q_\d$ is defined in
equation~(\ref{q_s}). As in the partition function calculation
presented in section~\ref{sec_selcum}, population members are
independently averaged over a distribution with the correct cumulants,
\begin{eqnarray}
\fl \Delta q_\d = P\left( \prod_{\alpha=1}^P \int \!\!\d R_\alpha \,
\d E_\alpha \,\d q_{\alpha\alpha}p(R_\alpha) \, p(E_\alpha|R_\alpha)
\, p(q_{\alpha\alpha}|R_\alpha,R_\alpha) \right) \frac{(1 -
q_{\alpha\alpha})\e^{-2\beta E_\alpha}}{(\sum_\alpha \e^{-\beta
E_\alpha})^2} \nonumber \\ \lo = P\left( \prod_{\alpha=1}^P \int
\!\!\d R_\alpha \cdots \right)(1 - q_{\alpha\alpha})\exp(-2\beta
E_\alpha) \int_0^\infty \!\!\!\d t\,t\,\exp\left(-t\sum_\alpha
\e^{-\beta E_\alpha}\right)
\end{eqnarray}
Here, $q_{\alpha\alpha}$ is a construct which comes from the same
statistics as the correlations between distinct population members.
The integral in $t$ removes the square in the denominator and
decouples the average,
\begin{equation}
\Delta q_\d \: = \: P\!\!\int_0^\infty \!\!\!\d t\,t\,
f(t)\,g^{P-1}(t) \label{q_d}
\end{equation}
where,
\begin{eqnarray}
f(t) & = & \int \!\!\d R \, \d E \, \d q\, p(R) \, p(E|R) \,
p(q|R,R) \:(1 - q)\exp(-2\beta E - t\e^{-\beta E}) \\ g(t) & =
& \int \!\!\d R \, \d E \, p(R) \, p(E|R) \exp(-t\e^{-\beta E})
\end{eqnarray}
The overlap distribution $p(R)$ will be approximated by the cumulant
expansion in equation~(\ref{cum_exp}) and $p(q|R,R)$ by the
distribution derived in \ref{app_cond}. In general, it would be
necessary to calculate these integrals numerically, but the
correlation distribution is difficult to deal with as it requires the
numerical reversion of a saddle point equation.
Instead, we expand for small $\beta$ and large $N$ as we did for the
selection calculation in section~\ref{sec_selapp} (this approximation
is only used for the term involving the correlation in
equation~(\ref{q_d}) for the simulations presented in
section~\ref{sec_sim}). In this case one finds,
\begin{eqnarray}
f(t)\,g^{P-1}(t) \; & \simeq &\;
\hat{\rho}(2\beta)\exp\!\left[-t\left(
(P-1)\hat{\rho}(\beta) +
\frac{\hat{\rho}(3\beta)}{\hat{\rho}(2\beta)}
\right)\right] \nonumber \\ & & -
\hat{\rho}_q(2\beta)\exp\!\left[-t\left(
(P-1)\hat{\rho}(\beta) +
\frac{\hat{\rho}_q(3\beta)}{\hat{\rho}_q(2\beta)}
\right)\right]
\end{eqnarray}
where,
\begin{eqnarray}
\hat{\rho}(\beta) & = & \int \!\!\d R\,\d E\, p(R) \, p(E|R)
\:\e^{-\beta E} \\ \hat{\rho}_q(\beta) & = & \int \!\!\d R\,\d
E\, p(R) \, p(E|R) \int\!\!\d q \: p(q|R,R) \,q\,\e^{-\beta E}
\end{eqnarray}
Completing the integral in equation~(\ref{q_d}) one finds,
\begin{equation}
\Delta q_\d = \frac{\hat{\rho}(2\beta) -
\hat{\rho}_q(2\beta)}{P\hat{\rho}^2(\beta)} +
O\!\!\left(\frac{1}{P^2}\right)
\end{equation}
We express $\hat{\rho}_q(\beta)$ in terms of the Fourier transform of
the distribution of correlations, which is defined in
equation~(\ref{ftq}),
\begin{equation}
\hat{\rho}_q(\beta) = \lim_{t \rightarrow
0}\frac{\partial}{\partial t} \log\!\left(\int \!\!\d R\,\d
E\, p(R) \, p(E|R) \hat{\rho}(-\i t|R,R) \,\e^{-\beta
E}\right)\!\hat{\rho}(\beta)
\end{equation}
The integrals can be calculated by expressing $p(E|R)$ by the same
approximate form as in section~\ref{sec_selapp} and using the saddle
point method to integrate over the Fourier transform as in
\ref{app_cond}.
Eventually one finds,
\begin{equation}
\Delta q_\d \; = \; \frac{\e^{(\beta\sigma)^2} [1 -
q_\infty(2k\beta/N)] \rho_2(k\beta,0)}{P \rho_1^2(k\beta,0) }
\; + \; O\!\!\left(\frac{1}{P^2}\right) \label{dqd}
\end{equation}
where $q_\infty(y)$ is defined in equation~(\ref{qqs}) and
$\rho_n(k\beta,\gamma)$ is defined in equation~(\ref{def_rhon}).
It is instructive to expand in $\beta$ as this shows the contributions
from each cumulant explicitly. To do this we use the cumulant
expansion described in equation~(\ref{cum_exp}) and to third order in
$\beta$ for three cumulants one finds,
\begin{equation}
\fl \Delta q_\d \; \simeq \; \frac{\e^{(\beta\sigma)^2}}{P} \left[1 -
q_\infty(2k\beta/N)\right]\left( 1 + K_2(k\beta)^2 - K_3(k\beta)^3 +
O(\beta^4) \right)
\end{equation}
The $q_\infty$ term has not been expanded out since it contributes
terms of $O(1/N)$ less than these contributions for each
cumulant. Selection leads to a negative third cumulant (see
equation~(\ref{k3s})), which in turn leads to an accelerated increase
in correlation under further selection. Crossover reduces this effect
by reducing the magnitude of the higher cumulants.
\section{Dynamic population resizing}
\label{sec_rescale}
The noise introduced by the finite sized training set increases the
magnitude of the detrimental finite population terms in selection. In
the limit of weak selection and large problem size discussed in
sections~\ref{sec_selapp} and \ref{sec_dup}, this can be compensated
for by increasing the population size. The terms which involve noise
in equations (\ref{kns}) and (\ref{dqd}) can be removed by an
appropriate population resizing,
\begin{equation}
P = P_0\exp[(\beta\sigma)^2]
\end{equation}
Here, $P_0$ is the population size in the infinite training set, zero
noise limit. Since these are the only terms in the expressions
describing the dynamics which involve the finite population size, this
effectively maps the full dynamics onto the infinite training set
case.
For zero noise the selection strength should be scaled so that the
effective selection strength $k\beta$ is inversely proportional to the
standard deviation of the population~\cite{PBS},
\begin{equation}
\beta = \frac{\beta_{\rm s}}{k\sqrt{\kappa_2}}
\end{equation}
Here, $k$ is defined in equation~(\ref{def_k}) and $\beta_{\rm s}$ is
the scaled selection strength and remains fixed throughout the
search. Recall that $\kappa_2$ is the expected variance of a finite
population, which is related to the variance of an infinite population
through equation~(\ref{k2}). One could also include a factor of
$\sqrt{\log P}$ to compensate for changes in population size, as in
reference~\cite{PBS2}, but this term is neglected here. The resized
population is then,
\begin{eqnarray}
P & = & P_0\exp\left(\frac{(\beta_{\rm s}\sigma)^2}{k^2
\kappa_2}\right) \nonumber \\ & = &
P_0\exp\left(\frac{\beta_{\rm s}^2 (1 -
\kappa_1^2)\cos^{-1}(\kappa_1)(\pi -
\cos^{-1}(\kappa_1))}{\lambda N \kappa_2}\right)
\label{rescale}
\end{eqnarray}
Notice that the exponent in this expression is $O(1)$, so this
population resizing does not blow up with increasing problem size. One
might therefore expect this problem to scale with $N$ in the same
manner as the zero-noise, infinite training set case, as long as the
batch size is $O(N)$.
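As a concrete illustration, the resizing rule in
equation~(\ref{rescale}) is simple to evaluate numerically once the
population cumulants are known. The following Python sketch is
illustrative only: the values of $\kappa_1$, $\kappa_2$, $\lambda$ and
$N$ are placeholders, not quantities taken from the simulations
presented later.
\begin{verbatim}
import numpy as np

def resized_population(P0, beta_s, kappa1, kappa2, lam, N):
    # population size removing the finite-training-set noise, eq. (rescale)
    theta = np.arccos(kappa1)
    exponent = beta_s**2 * (1.0 - kappa1**2) * theta * (np.pi - theta) \
               / (lam * N * kappa2)
    return P0 * np.exp(exponent)

# illustrative placeholder values only
P0, beta_s, lam, N = 60, 0.25, 0.5, 279
kappa1 = 0.3
kappa2 = (1.0 - kappa1**2) / N        # binomial-like guess for the variance
print(resized_population(P0, beta_s, kappa1, kappa2, lam, N))
\end{verbatim}
With a binomial-like variance the exponent is indeed $O(1)$, so the
resized population stays within a modest factor of $P_0$.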
Baum \etal have shown that a closely related GA scales as
$O(N\log_2^2N)$ on this problem if the population size is sufficiently
large so that alleles can be assumed to come from a binomial
distribution~\cite{Baum}. This is effectively a maximum entropy
assumption with a constraint on the mean overlap alone. They use
culling selection, where the best half of the population survives each
generation leading to a change in the mean overlap proportional to the
population's standard deviation. Our selection scaling also leads to a
change in the mean of this order and the algorithms may therefore be
expected to compare closely. The expressions derived here do not rely
on a large population size and are therefore more general.
In the infinite population limit it is reasonable to assume $N\kappa_2
\simeq 1 - \kappa_1^2$ which is the relationship between mean and
variance for a binomial distribution, since in this limit the
correlation of the population will not increase due to duplication
under selection. In this case the above scaling results in a
monotonic decrease in population size, as $\kappa_1$ increases over
time. This is easy to implement by removing the appropriate number of
population members before each selection.
In a finite population the population becomes correlated under
selection and the variance of the population is usually less than the
value predicted by a binomial distribution. In this case the
population size may have to be increased, which could be implemented
by producing a larger population after selection or crossover. This is
problematic, however, since increasing the population size leads to an
increase in the correlation and a corresponding reduction in
performance. In this case the dynamics will no longer be equivalent to
the infinite training set situation.
Instead of varying the population size, one can fix the population
size and vary the size of the training batches. In this case one
finds,
\begin{equation}
\lambda = \frac{\beta_{\rm
s}^2(1-\kappa_1^2)\cos^{-1}(\kappa_1)(\pi -
\cos^{-1}(\kappa_1))}{N\kappa_2\log(P/P_0)}
\label{scale_alpha}
\end{equation}
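Equivalently, with the population held fixed,
equation~(\ref{scale_alpha}) gives the batch size directly. A minimal
sketch, again with placeholder cumulant values, is:
\begin{verbatim}
import numpy as np

def batch_fraction(P, P0, beta_s, kappa1, kappa2, N):
    # training batch size lambda for a fixed population, eq. (scale_alpha)
    theta = np.arccos(kappa1)
    return beta_s**2 * (1.0 - kappa1**2) * theta * (np.pi - theta) \
           / (N * kappa2 * np.log(P / P0))

kappa1 = 0.3
print(batch_fraction(P=90, P0=60, beta_s=0.25, N=279,
                     kappa1=kappa1, kappa2=(1.0 - kappa1**2) / 279))
\end{verbatim}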
Figure~\ref{fig_scale} shows how choosing the batch size each
generation according to equation~(\ref{scale_alpha}) leads to the
dynamics converging onto the infinite training set dynamics where the
training energy is equal to the generalization error. The infinite
training set result for the largest population size is also shown, as
this gives some measure of the potential variability of trajectories
available under different batch sizing schemes. Any deviation from
the weak selection, large $N$ limit is not apparent here. To a good
approximation it seems that the population resizing in
equation~(\ref{rescale}) and the corresponding batch sizing expression
in equation~(\ref{scale_alpha}) are accurate, at least as long as
$\lambda$ is not too small.
\begin{figure}[h]
\setlength{\unitlength}{1.0cm}
\begin{center}
\begin{picture}(8,6)
\put(0,0){\epsfig{figure=scale2.ps,width=8.0cm,height=6.0cm}}
\put(3.7,0.6){\epsfig{figure=alpha2.ps,width=4.0cm,height=2.7cm}}
\put(3.4,-0.6){\mbox{Generation}}
\put(3.5,2.1){\mbox{\small{$\lambda$}}}
\put(-0.5,4.3){\mbox{$\kappa_1$}}
\end{picture}
\end{center}
\caption{The mean overlap between teacher and pupil within the population
is shown each generation for a GA training a binary perceptron to
generalize from examples produced by a teacher perceptron. The results
were averaged over $100$ runs and training batch sizes were chosen
according to equation~(\ref{scale_alpha}), leading to the trajectories
converging onto the infinite training set result where $E=E_{\rm
g}(R)$. The solid curve is for the infinite training set with
$P_0=60$ and the finite training set results are for $P = 90$
(\opensqr), $120 (\diamond)$ and $163 (\triangle)$. Inset is the mean
choice of $\lambda$ each generation. The dashed line is the infinite
training set result for $P = 163$, showing that there is significant
potential variability of trajectories under different batch sizing
schemes. The other parameters were $N=279$, $\beta_{\rm s}=0.25$ and
$p_{\rm m}=0.001$.}
\label{fig_scale}
\end{figure}
\subsection{Optimal batch size}
In the previous section it was shown how the population size could be
changed to remove the effects of noise associated with a finite
training set. If we use this population resizing then it is possible
to define an optimal size of training set, in order to minimize the
computational cost of energy evaluation. This choice will also
minimize the total number of training examples presented when
independent batches are used. This may be expected to provide a useful
estimate of the appropriate sizing of batches in more efficient
schemes, where examples are recycled, as long as the total number of
examples used significantly exceeds the threshold above which
over-training is impossible.
We assume that computation is mainly due to energy evaluation and note
that there are $P$ energy evaluations each generation with computation
time for each scaling as $\lambda$. If the population size each
generation is chosen by equation~(\ref{rescale}), then the computation
time $\tau_c$ (in arbitrary units) is given by,
\begin{equation}
\tau_c \: = \: \lambda
\,\exp\!\left(\frac{\lambda_o}{\lambda}\right) \qquad \lambda_o =
\frac{\beta_{\rm s}^2(1-\kappa_1^2)\cos^{-1}(\kappa_1)(\pi -
\cos^{-1}(\kappa_1))}{N\kappa_2}
\end{equation}
The optimal choice of $\lambda$ is given by the minimum of $\tau_c$,
which is at $\lambda_o$. Choosing this batch size leads to the
population size being constant over the whole GA run and for optimal
performance one should choose,
\begin{eqnarray}
P & = & P_0 \, \e^1 \: \simeq \: 2.72 P_0 \\ \lambda & = &
\lambda_o
\end{eqnarray}
where $P_0$ is the population size used for the zero noise, infinite
training set GA. Notice that it is not necessary to determine $P_0$ in
order to choose the size of each batch, since $\lambda_o$ is not a
function of $P_0$. Since the batch size can now be determined
automatically, this reduces the size of the GA's parameter space
significantly.
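A minimal numerical check of this optimum, using placeholder cumulant
values, is sketched below; the grid minimum of $\tau_c$ falls at
$\lambda_o$ as expected.
\begin{verbatim}
import numpy as np

def lambda_opt(beta_s, kappa1, kappa2, N):
    # batch size minimizing tau_c = lambda * exp(lambda_o / lambda)
    theta = np.arccos(kappa1)
    return beta_s**2 * (1.0 - kappa1**2) * theta * (np.pi - theta) / (N * kappa2)

def tau_c(lam, lam_o):
    return lam * np.exp(lam_o / lam)

kappa1 = 0.3
lam_o = lambda_opt(beta_s=0.25, kappa1=kappa1,
                   kappa2=(1.0 - kappa1**2) / 279, N=279)
grid = np.linspace(0.2, 3.0, 401) * lam_o
print(lam_o, grid[np.argmin(tau_c(grid, lam_o))])   # minimum sits near lam_o
\end{verbatim}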
One of the runs in figure~\ref{fig_scale} is for this choice of $P$
and $\lambda$, showing close agreement to the infinite training set
dynamics ($P = 163 \, \simeq \, P_0\e$). In general, the first two
cumulants change in a non-trivial manner each generation and their
evolution can be determined by simulating the dynamics, as described
in section~\ref{sec_sim}.
\section{Simulating the dynamics}
\label{sec_sim}
In sections~\ref{sec_mutcross}, \ref{sec_selcum} and \ref{sec_cor},
difference equations were derived for the mean effect of each operator
on the mean overlap and correlation within the population. The full
dynamics of the GA can be simulated by iterating these equations
starting from their initial values, which are zero. The equations for
selection also require knowledge of the higher cumulants before
selection, which are calculated by assuming a maximum entropy
distribution with constraints on the two known macroscopics (see
equations (\ref{k2inf}) to (\ref{k4inf})). We used four cumulants and
the selection expressions were calculated numerically, although for
weak selection the analytical results in section~\ref{sec_selapp}
were also found to be very accurate. The largest overlap within the
population was estimated by assuming population members were randomly
selected from a distribution with the correct
cumulants~\cite{PBS2}. This assumption breaks down towards the end of
the search, when the population is highly correlated and the higher
cumulants become large, so that four cumulants may not describe the
population sufficiently well.
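The overall structure of such a simulation is sketched below. This is
only a schematic skeleton: the update rules and the maximum-entropy
closure are toy stand-ins named after the operations described above,
not the actual expressions derived in the preceding sections.
\begin{verbatim}
import numpy as np

def max_entropy_cumulants(kappa1, N):
    # toy closure: binomial-like variance, higher cumulants set to zero
    k2 = (1.0 - kappa1**2) / N
    return k2, 0.0, 0.0

def simulate(G, beta_s, p_m, N):
    kappa1, q = 0.0, 0.0                 # initial mean overlap and correlation
    trajectory = []
    for gen in range(G):
        k2, k3, k4 = max_entropy_cumulants(kappa1, N)
        beta = beta_s / np.sqrt(k2)      # selection-strength scaling
        kappa1 += beta * k2              # toy stand-in for the selection update
        q += beta * k2 * kappa1          # toy stand-in for the duplication term
        kappa1 *= (1.0 - 2.0 * p_m)      # mutation decay of the overlap
        q *= (1.0 - 2.0 * p_m)**2
        trajectory.append((kappa1, k2))
    return trajectory

print(simulate(G=50, beta_s=0.3, p_m=0.005, N=155)[-1])
\end{verbatim}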
Figures~\ref{fig_conv} and \ref{fig_best} show the mean, variance and
largest overlap within the population each generation, averaged over
1000 runs of a GA and compared to the theory. The infinite training
set case, where the training energy is the generalization error, is
compared to results for two values of $\lambda$, showing how
performance degrades as the batch size is reduced. Recall that
$\lambda N$ new patterns are shown to each population member, each
generation, so that the total number of patterns used is $\lambda
N\!\times\! PG$, where $P$ is population size and $G$ is the total
number of generations. The skewness and kurtosis are presented in
figure~\ref{fig_conv34} for one value of $\lambda$, showing that
although there are larger fluctuations in the higher cumulants they
seem to agree sufficiently well to the theory on average. It would
probably be possible to model the dynamics accurately with only three
cumulants, since the kurtosis does not seem to be particularly
significant in these simulations.
\begin{figure}[h]
\setlength{\unitlength}{1.0cm}
\begin{center}
\begin{picture}(8,6)
\put(0,0){\epsfig{figure=conv.ps,width=8.0cm,height=6.0cm}}
\put(3.4,-0.6){\mbox{Generation}}
\put(3.5,4.0){\mbox{$\kappa_1$}}
\put(3.5,2.3){\mbox{$N\kappa_2$}}
\end{picture}
\end{center}
\caption{The theory is compared to averaged results from a GA training
a binary perceptron to generalize from examples produced by a teacher
perceptron. The mean and variance of the overlap distribution within
the population are shown, averaged over $1000$ runs, with the solid
lines showing the theoretical predictions. The infinite training set
result~($\Diamond$) is compared to results for a finite training set
with $\lambda = 0.65$~(\opensqr) and $\lambda =
0.39$~($\triangle$). The other parameters were $N=155$, $\beta_{\rm
s}=0.3$, $p_{\rm m}=0.005$ and the population size was $80$.}
\label{fig_conv}
\end{figure}
\begin{figure}[h]
\setlength{\unitlength}{1.0cm}
\begin{center}
\begin{picture}(8,6)
\put(0,0){\epsfig{figure=best.ps,width=8.0cm,height=6.0cm}}
\put(3.4,-0.6){\mbox{Generation}}
\put(-0.8,4.3){\mbox{$R_{\mbox{\tiny{max}}}$}}
\end{picture}
\end{center}
\caption{The maximum overlap between teacher and pupil is shown each generation,
averaged over the same runs as the results presented in
figure~\ref{fig_conv}. The solid lines show the theoretical
predictions and the symbols are as in figure~\ref{fig_conv}.}
\label{fig_best}
\end{figure}
\begin{figure}[h]
\setlength{\unitlength}{1.0cm}
\begin{center}
\begin{picture}(8,6)
\put(0,0){\epsfig{figure=conv34.ps,width=8.0cm,height=6.0cm}}
\put(3.4,-0.6){\mbox{Generation}}
\put(3.8,5.4){\mbox{$\kappa_4/\kappa_2^2$}}
\put(3.8,1.6){\mbox{$\kappa_3/\kappa_2^{3/2}$}}
\end{picture}
\end{center}
\caption{The skewness and kurtosis of the overlap distribution are shown averaged over the same runs
as the results presented in figure~\ref{fig_conv} for $\lambda =
0.65$. Averages were taken over cumulants, rather than the ratios
shown. The solid lines show the theoretical predictions for mean
behaviour.}
\label{fig_conv34}
\end{figure}
These results show excellent agreement with the theory, although there
is a slight underestimate in the best population member for the
reasons discussed above. This is typical of the theory, which has to
be very accurate in order to pick up the subtle effects of noise due
to the finite batch size. Unfortunately, the agreement is less
accurate for low values of $\lambda$, where the noise is
stronger. This may be due to two simplifications. Firstly, we use a
Gaussian approximation for the noise which relies on $\lambda$ being
at least $O(1)$. This could be remedied by expanding the noise in
terms of more than two cumulants as we have done for the overlap
distribution. Secondly, the duplication term in section~\ref{sec_dup}
uses the large $N$, weak selection approximation which also relies on
$\lambda$ being $O(1)$. The error due to this approximation is
minimized by only using the approximation for the term involving the
correlation in equation (\ref{q_d}), with the other term calculated
numerically. It is expected that good results for smaller values of
$\lambda$ would be possible for larger values of $N$, where the
correlation calculation would be more exact.
\section{Conclusion}
A statistical mechanics formalism has been used to solve the dynamics
of a GA for a simple problem from learning theory, generalization in a
perceptron with binary weights. To make the dynamics tractable, the
case where a new batch of examples was presented to each population
member each generation was considered. For $O(N)$ training examples
per batch the training energy was well approximated by a Gaussian
distribution whose mean is the generalization error and whose variance
increases as the batch size is reduced. The use of bit-simulated
crossover, which takes the population straight to the fixed point of
standard crossover, allowed the dynamics to be modelled in terms of
only two macroscopics: the mean correlation and overlap within the
population. The higher cumulants of the overlap distribution after
crossover were required to calculate the effect of selection and were
estimated by assuming maximum entropy with respect to the two known
macroscopics. By iterating difference equations describing the average
effect of each operator on the mean correlation and overlap the
dynamics of the GA were simulated, showing very close agreement with
averaged results from a GA.
Although the difference equations describing the effect of each
operator required numerical enumeration in some cases, analytical
results were derived for the weak selection, large $N$ limit. It was
shown that in this limit a dynamical resizing of the population maps
the finite training set dynamics onto the infinite training set
situation. Using this resizing it is possible to calculate the most
computationally efficient size of population and training batch, since
there is a diminishing return in improved performance as batch size is
increased. For the case of independent training examples considered
here this choice also gives the minimum total number of examples
presented.
In future work it would be essential to look at the situation where
the patterns are recycled, leading to a much more efficient use of
training examples and the possibility of over-training. In this case,
the distribution of overlaps between teacher and pupil would not be
sufficient to describe the population, since the training energy would
then be dependent on the training set. One would therefore have to
include information specific to the training set, such as the mean
pattern per site within the training set. This might be treated as a
quenched field at each site, although it is not obvious how one could
best incorporate such a field into the dynamics.
Another interesting extension of the present study would be to
consider multi-layer networks, which would present a much richer
dynamical behaviour than the single-layer perceptron considered
here. This would bring the formalism much closer to problems of
realistic difficulty. In order to describe the population in this case
it would be necessary to consider the joint distribution of many order
parameters within the population. It would be interesting to see how
the dynamics of the GA compares to gradient methods in networks with
continuous weights, for which the dynamics of generalization for a
class of multi-layer architectures have recently been solved
analytically in the case of on-line learning~\cite{Saad}. In order to
generalize in multi-layer networks it is necessary for the search to
break symmetry in weight space and it would be of great interest to
understand how this might occur in a population of solutions, whether
it would occur spontaneously over the whole population in analogy to a
phase transition or whether components would be formed within the
population, each exhibiting a different broken symmetry. This would
again require the accurate characterization of finite population
effects, since an infinite population might allow the coexistence of
all possible broken symmetries, which is presumably an unrealizable
situation in finite populations.
\ack We would like to thank Adam Pr\"{u}gel-Bennett for many helpful
discussions and for providing code for some of the numerical work used
here. We would also like to thank the anonymous reviewers for making a
number of useful suggestions. MR was supported by an EPSRC award
(ref. 93315524).
\section{Introduction}
Homodyne detection is a well known
technique in detecting phase--dependent
properties of optical radiation.
In quantum optics it has been
widely used in studies and applications
of squeezed light \cite{squeezing}.
A statistical distribution of the outcomes of a homodyne detector
has recently found novel
applications in the measurement of the quantum state of light
via optical homodyne tomography \cite{tomography} and the direct probing of
quantum phase space by photon counting \cite{banaszek}.
The phase--sensitivity of homodyne detection is achieved by
performing a
superposition of the signal field with a coherent
local oscillator by means of a beamsplitter \cite{YuenShapCQOIV}.
It was an important observation \cite{TwoPapersOL}
that in a balanced scheme, with a 50\%:50\% beamsplitter,
the local oscillator noise
can be cancelled by subtracting the photocurrents of the
detectors facing two outgoing beams. Then, in the limit
of a classical local oscillator, the statistics
of difference photocounts is simply a signal quadrature
distribution rescaled by the amplitude of the local oscillator
\cite{BrauPRA90,VogeGrabPRA93}.
Therefore, balanced homodyne detection is an optical realization
of an abstract quantum mechanical measurement
of the field quadratures described by a quantum observable
$\hat{x}_{\theta}$. The statistical outcomes of an ideal
measurement of $\hat{x}_{\theta}$, whose eigenstates satisfy
$\hat{x}_{\theta}|{x}_{\theta}\rangle = {x}_{\theta} |{x}_{\theta}\rangle$,
are described by the spectral measure
\begin{equation}
p({x}_{\theta}) = \langle\,
| {x}_{\theta}\rangle \langle {x}_{\theta}|
\,\rangle.
\end{equation}
Although the spectral measure contains all
the relevant statistical information about the
homodyne measurement, it corresponds to a
quantity that is measured by an ideal noise--free detector.
Due to this property $\hat{x}_{\theta}$
will be called an {\it intrinsic homodyne quantum observable}.
However analysis of the homodyne setup with imperfect detectors
\cite{VogeGrabPRA93} shows that the relation between the statistics of
the difference counts and the quadrature spectral distribution is in
fact more
complicated. The distribution measured in a real experiment
is smoothed by a convolution with a Gaussian function of width
dependent on the detector efficiency. Consequently,
realistic homodyne detection cannot be straightforwardly
interpreted as a measurement of the intrinsic field quadratures
$\hat{x}_{\theta}$.
A recent experimental application of homodyne detection to the
reconstruction of
the photon number distribution of a weak field from a pulsed diode
laser \cite{MunrBoggPRA95} has shown that a homodyne setup with the
fluctuating phase $\theta$ is a powerful tool in measuring
phase--insensitive properties of light. However, it is not possible
to associate with this setup any spectral measure even in the case of
perfect detectors. Therefore, homodyne detection with the random
phase cannot be described in terms of measuring any intrinsic quantum
observable.
It is the purpose of this paper to
show that homodyne detection provides an
interesting and nontrivial example
of a realistic quantum measurement
leading to operational quantum observables,
i.e., to quantum operators that depend
on properties of a specific experimental setup used in the homodyne
detection.
In particular, these operational observables will depend on the
detector losses described by a quantum efficiency $\eta$ and on the
phase $\theta$ of the local oscillator used to probe the signal
field. Such operational observables provide a natural link between
the quantum formalism and raw data recorded in a realistic homodyne
experiment.
General features of the operational
approach, with references to earlier
literature are given in
\cite{EnglWodkPRA95}. The main conclusions of this
approach, if applied to the homodyne measurement, can be summarized
as follows.
A quantity delivered by the homodyne
experiment is a propensity density
$\text{Pr} (a)$ of a certain classical
variable $a$. This density
is given by an expectation value
of an $a$-dependent positive operator valued measure (POVM), denoted
by $\hat{\cal H} (a)$:
\begin{equation}
\text{Pr} (a) = \langle \hat{\cal H} (a) \rangle.
\end{equation}
Thus, the POVM given by $\hat{\cal H} (a)$ corresponds to
a realistic homodyne detection and is the mathematical
representation of the device dependent measurement. In one way of
looking at quantum measurements, the emphasis is put
on the construction and properties of
such POVMs. In such an approach,
in realistic homodyne detection,
the spectral decomposition
$ \text{d} {x}_{\theta} | {x}_{\theta}\rangle
\langle {x}_{\theta}|$ of the intrinsic
observable $\hat{x}_{\theta}$,
is effectively replaced by the
POVM $ \text{d} a \hat{\cal H} (a)$.
Consequently the moments of $\text{Pr} (a)$
can be represented as
\begin{equation}
\label{OQOdef}
\overline{a^n}
= \int \text{d}a\, a^{n}\text{Pr} (a) = \left\langle
\mbox{$\hat{x}^{(n)}_{\theta}$}_{\cal H} \right\rangle,
\end{equation}
defining in this way a family of {\it operational homodyne quantum
observables}
\begin{equation}
\label{operadef}
\mbox{$\hat{x}^{(n)}_{\theta}$}_{\cal H}
= \int \mbox{d}a\, a^n \hat{\cal H}(a),
\end{equation}
where the index ${\cal H}$ stands for
the homodyne detection scheme associated with
the given POVM. This family characterizes
the experimental device and is independent on a specific state of
the measured system.
In this paper we derive and discuss
the family of operational observables
for balanced homodyne
detection with imperfect photodetectors.
We show that for balanced homodyne
detection an exact reconstruction of
the POVM $\hat{\cal H} (a)$ and of
the corresponding operational quantum
quadratures $\mbox{$\hat{x}^{(n)}_{\theta}$}_{\cal H} $
can be performed. Thus, homodyne
detection provides a nontrivial
measurement scheme for which an
exact derivation of the corresponding POVM
and the operational observables is
possible. The interest in construction of this
operational algebra is due to
the fact that the number of physical
examples where the operational
description can be found explicitly
is very limited \cite{BLM}. We show that the algebraic properties of
the $ \mbox{$\hat{x}^{(n)}_{\theta}$}_{\cal H} $
differ significantly from those of the powers of
$\hat{x}_{\theta}$. In particular
$\mbox{$\hat{x}^{(2)}_{\theta}$}_{\cal H}
\neq (\mbox{$\hat{x}^{(1)}_{\theta}$}_{\cal H})^{2}$.
This property will have immediate
consequences in the discussion of the uncertainty relation with
imperfect detectors.
This paper has the following structure.
First, in Sec.~\ref{Sec:Zlambda}, we derive the POVM and the
generating operator
for the operational observables. Their explicit form is found in the
limit of a classical local oscillator in Sec.~\ref{Sec:xthetaH}.
Given this result, we discuss the operational uncertainty relation
in Sec.~\ref{Sec:Uncertainty}.
In Sec.~\ref{Sec:Random} we derive the family of operational homodyne
observables for the homodyne detector with a random phase between
the signal and the local oscillator fields, and relate them to the
intrinsic photon number operator. These calculations link the homodyne
noise with fluctuations of the photon statistics, and can be useful in
the time--resolved measurement of the properties of pulsed diode lasers.
Finally, Sec.~\ref{Sec:TheEnd} summarizes the results.
\section{Generating operator for homodyne detection}
\label{Sec:Zlambda}
The family of the operational homodyne quantum observables
defined in Eq.~(\ref{operadef})
can be written conveniently with the help of the generating operator
\begin{equation}
\label{Eq:ZHlambdaDef}
\hat{Z}_{\cal H} (\lambda) = \int \text{d}a\,e^{i\lambda a}
\hat{\cal H}(a).
\end{equation}
Operational quantum observables are given by derivatives
of the generating operator at $\lambda = 0$:
\begin{equation}
\hat{x}^{(n)}_{\cal H} = \left. \frac{1}{i^n}
\frac{{\text d}^n}{\mbox{\rm d}\lambda^n}
\hat{Z}_{\cal H} (\lambda)
\right|_{\lambda=0}.
\end{equation}
This compact representation will noticeably
simplify further calculations.
We will start the calculations by finding
the generating operator for the homodyne detector. In a balanced setup,
the signal field
described by an annihilation operator $\hat{a}$,
is superimposed on a local oscillator $\hat{b}$ by means of a 50\%:50\%
beamsplitter. The annihilation operators of the outgoing modes are
given, up to the irrelevant phase factors, by the relation
\begin{equation}
\left(
\begin{array}{c} \hat{c} \\ \hat{d} \end{array}
\right)
=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{rr} 1 & 1 \\ 1 & -1 \end{array}
\right)
\left(
\begin{array}{c} \hat{a} \\ \hat{b} \end{array}
\right).
\end{equation}
We will assume that the local oscillator is in a coherent state
$|\beta\rangle_{LO}$. If another state of the local oscillator is
considered, our formulae can be generalized in a straightforward manner
by averaging the results over an appropriate Glauber's $P$-representation.
A quantity recorded in the experiment is the statistics
of the difference counts between photodetectors facing the
modes $\hat{c}$ and $\hat{d}$. The difference of the counts $\Delta N$
corresponds to the classical variable, denoted before
as $a$, recorded in a homodyne detection experiment.
The POVM $\hat{\cal H}(\Delta N)$ describing this
detection scheme can be easily derived.
It is clear that this POVM is an
operator acting in the Hilbert space of the signal mode.
Its explicit form can be found with the help of
standard theory of photodetection \cite{photodetection}:
\begin{eqnarray}
\hat{\cal H} (\Delta N) & = & \sum_{n_1 - n_2 = \Delta N}
\text{Tr}_{LO} \{ |\beta\rangle\langle\beta|_{LO}
\nonumber \\
& &
: e^{ -
\eta \hat{c}^\dagger \hat{c}} \frac{(\eta \hat{c}^\dagger \hat{c}
)^{n_1}}{n_1!} \, e^{ - \eta \hat{d}^\dagger \hat{d}} \frac{(\eta
\hat{d}^\dagger \hat{d})^{n_2}}{n_2!} : \}
\end{eqnarray}
where $\eta$ is the quantum efficiency, assumed to be identical for
both the detectors. In this formula the partial trace is over the local
oscillator mode and a marginal average with a fixed value
of $\Delta N$ is performed. We will now convert this POVM into the
generating operator according to Eq.~(\ref{Eq:ZHlambdaDef}).
As we discuss later, the POVM $\hat{\cal H}$ and consequently the
generating operator $\hat{Z}_{\cal H}$ have their natural
parametrization, independent of the LO intensity. Before we find this
scaling, we will use $\xi$ instead of $\lambda$ as a parameter
of the generating operator:
\begin{eqnarray}
\hat{Z}_{\cal H} (\xi) & = & \sum_{\Delta N = - \infty}^{\infty}
e^{i\xi \Delta N} \hat{\cal H} ( \Delta N)
\nonumber \\
& = &
\text{Tr}_{LO} \{ |\beta\rangle\langle\beta|_{LO}
\nonumber \\
\label{ZHlambda1}
& & : \exp [ \eta (e^{i\xi} - 1 )
\hat{c}^\dagger \hat{c} + \eta (e^{-i\xi} -1) \hat{d}^\dagger \hat{d}
] : \, \}.
\end{eqnarray}
Let us transform this expression to the form which does not
contain the normal ordering symbol. For this purpose we will use
the technique developed by Yuen and Shapiro \cite{YuenShapCQOIV} consisting
of extending the Hilbert space by two additional modes $\hat{c}_v$ and
$\hat{d}_v$ and constructing the fields annihilation operators
\begin{equation}
\label{cvdv}
\hat{c}_{d} = \sqrt{\eta} \hat{c} + \sqrt{1-\eta} \hat{c}_v, \;\;\;\;\;
\hat{d}_{d} = \sqrt{\eta} \hat{d} + \sqrt{1-\eta} \hat{d}_v.
\end{equation}
The generating operator can be written in the extended four--mode space
using these operators as
\begin{eqnarray}
\label{ZHlambda2}
\hat{Z}_{\cal H} (\xi)
& = & \text{Tr}_{LO,v} \{ |\beta\rangle\langle\beta|_{LO} \otimes
|0 \rangle \langle 0 |_v
\nonumber \\
& &
: \exp[(e^{i\xi} - 1 )
\hat{c}_d^\dagger \hat{c}_d + (e^{-i\xi} -1) \hat{d}_d^\dagger \hat{d}_d
] : \, \},
\end{eqnarray}
where $\text{Tr}_v$ denotes the trace over both the vacuum modes
$\hat{c}_v$ and $\hat{d}_v$. We can now apply the relation \cite{NormOrdExp}
\begin{equation}
\label{normalny}
: \exp[(e^{i\xi} -1) \hat{v}^\dagger
\hat{v}]: \, = \exp(i\xi \hat{v}^{\dagger} \hat{v})
\end{equation}
valid for an arbitrary bosonic
annihilation operator $\hat{v}$, which
finally gives:
\begin{eqnarray}
\label{ZHlambda3}
\lefteqn{\hat{Z}_{\cal H} (\xi)} & & \nonumber \\
& = & \text{Tr}_{LO,v} \{ |\beta\rangle\langle\beta|_{LO} \otimes
|0 \rangle \langle 0 |_v \,
\exp[i\xi (\hat{c}_d^\dagger \hat{c}_d -
\hat{d}_d^\dagger \hat{d}_d)]
\}.
\nonumber \\
& &
\end{eqnarray}
This expression contains the most compact form of the homodyne POVM. The
exponent in Eq.~(\ref{ZHlambda2}) resembles the one from
Eq.~(\ref{ZHlambda1}),
with $\hat{c},\hat{d}$ replaced by $\hat{c}_d, \hat{d}_d$ and
the detector efficiency equal to one. It is known \cite{YuenShapCQOIV},
that there is a physical picture behind this similarity.
An imperfect photodetector can be equivalently
described by an ideal detector
preceded by a beamsplitter with the power
transmissivity equal to the quantum
efficiency of the real detector,
assuming that the vacuum state enters
through the unused port of the beamsplitter.
Mathematically, this construction corresponds to the so called Naimark
extension of the POVM into a projective measure on a larger Hilbert
space \cite{BLM}.
\section{Approximation of a classical local oscillator}
\label{Sec:xthetaH}
When the local oscillator is in a strong coherent state, the
bosonic operators $\hat{b}, \hat{b}^\dagger$ can be replaced by
$c$-numbers $\beta, \beta^\ast$. However this approximation
violates the bosonic commutation relations for the pairs
$\hat{c}, \hat{c}^\dagger$ and $\hat{d}, \hat{d}^\dagger$, which
have been used implicitly several times in the manipulations
involving $\hat{Z}_{\cal H} (\xi)$.
Therefore some care should be taken when considering the
classical limit of the local oscillator.
We will perform the approximation on the exponent of
Eq.~(\ref{ZHlambda3}). We will replace the quantum
average over the state $|\beta\rangle$ by
inserting $\beta, \beta^\ast$ in place of
$\hat{b}, \hat{b}^\dagger$ and keep only the terms linear in
$\beta$. This gives
\begin{equation}
\hat{c}_d^\dagger \hat{c}_d - \hat{d}_d^\dagger \hat{d}_d =
\sqrt{\eta} \beta^{\ast} \left( \sqrt{\eta} \hat{a} + \sqrt{1-\eta}
\frac{\hat{c}_v + \hat{d}_v}{\sqrt{2}}
\right) + \mbox{\rm h.c.}
\end{equation}
The operator in the brackets has a form analogous to Eq.~(\ref{cvdv})
with the combination $(\hat{c}_v + \hat{d}_v)/\sqrt{2}$ as a vacuum mode.
Consequently, the imperfectness of the photodetectors
in the balanced homodyne detection can be
modelled by superposing the signal on a fictitious vacuum
mode before superposing it with the local oscillator
and attenuating the amplitude of the local oscillator field by
$\sqrt{\eta}$. This observation has been originally
made by Leonhardt and Paul \cite{LeonPaulPRA93}, and is an example of
the Naimark extension involving a nonquantized local oscillator.
Under the approximation of a classical local oscillator, it is now easy
to perform the trace over the vacuum modes with the
help of the Baker--Campbell--Hausdorff formula. This yields
\begin{equation}
\label{ZHlambdaClassLO}
\hat{Z}_{\cal H}(\xi) = \exp[-\xi^2
\eta (1- \eta) |\beta|^2/2 ] \exp [ i \xi \eta ( \beta \hat{a}^\dagger +
\beta^\ast \hat{a}) ].
\end{equation}
The exponent $\exp [ - \xi^2 \eta ( 1- \eta) |\beta|^2 /2 ]$
introduces a specific ordering of the
creation and annihilation operators in the generating operator.
Therefore the detector efficiency $\eta$ can be related to the ordering
of the operational observables. For example, for $\eta=1/2$ we
get $\hat{Z}_{\cal H}(\xi) = \exp (i\xi\beta^{\ast} \hat{a}/2)
\exp (i \xi \beta \hat{a}^\dagger/2)$, i.e.\ the generating
operator is ordered antinormally.
The expansion of the generating operator into a power series
of $i\xi$ gives
\begin{equation}
\label{ZExpansion}
\frac{1}{i^n} \left. \frac{\mbox{\rm d}^n \hat{Z}_{\cal H}}{{\mbox{\rm
d}} \xi^n} \right|_{\xi=0} = \left( \frac{1}{i}
\sqrt{\frac{\eta(1-\eta)}{2}} |\beta| \right)^n H_n \left( i
\sqrt{ \frac{\eta}{1-\eta}} \hat{x}_\theta \right),
\end{equation}
where $H_n$ denotes the $n$th Hermite polynomial and
$\hat{x}_\theta$ is the standard quadrature operator
\begin{equation}
\label{xtheta}
\hat{x}_\theta = \frac{e^{i\theta} \hat{a}^\dagger +
e^{- i \theta} \hat{a}}{\sqrt{2}}
\end{equation}
expressed in terms of the creation and annihilation operators
of the signal field and dependent on the local oscillator
phase $\theta$ defined as $\beta = |\beta| e^{i\theta}$.
In the terminology of the operational approach to quantum measurement,
$\hat{x}_\theta$
is called an {\it intrinsic quantum observable}, since
it refers to internal properties of the system independent of
the measuring device \cite{EnglWodkPRA95}.
It will be convenient to change the parameter of
the generating operator in order to make the derivatives
(\ref{ZExpansion}) independent of the
amplitude of the local oscillator.
A scaling factor which can be directly obtained from an
experiment is the square root of
the intensity of the local oscillator field measured by the
photodetector. We will multiply it by $\sqrt{2}$
in order to get the intrinsic quadrature operator (\ref{xtheta})
in the limit $\eta \rightarrow 1$. Thus substituting
$\xi=\lambda/\sqrt{2\eta|\beta|^2}$ yields the generating operator
independent of the amplitude of the local oscillator:
\begin{equation}
\hat{Z}_{\cal H} (\lambda, \theta) = \exp [-\lambda^2(1-\eta)/4 +
i\lambda\sqrt{\eta/2}(e^{i\theta}\hat{a}^\dagger
+ e^{-i\theta}\hat{a})].
\end{equation}
The derivatives of $\hat{Z}_{\cal H}(\lambda,\theta)$
give the final form of the family of the
operational
observables ${\mbox{$\hat{x}$}_\theta^{(n)}}_{\cal H}$
for the homodyne detector:
\begin{equation}
{\mbox{$\hat{x}$}_\theta^{(n)}}_{\cal H} =
\left( \frac{\sqrt{1-\eta}}{2i} \right)^n H_n \left( i \sqrt{
\frac{\eta}{1-\eta} } \hat{x}_\theta \right).
\end{equation}
The algebraic properties of
the operational observables are quite complicated, since
${\mbox{$\hat{x}$}_\theta^{(n)}}_{\cal H}$
is not simply an $n$th power of
${\mbox{$\hat{x}$}_\theta^{(1)}}_{\cal H}$.
Thus a single operator does not suffice to describe the
homodyne detection with imperfect detectors. Complete
characterization of the setup is provided by the whole family of
operational observables. In fact the operators
${\mbox{$\hat{x}$}_\theta^{(n)}}_{\cal H}$ define an infinite algebra of
operational homodyne observables for an arbitrary state of the signal mode.
As mentioned above, for $\eta=50\% $, the general formula reduces
to antinormally ordered powers of the intrinsic quadrature operators:
\begin{equation}
{\mbox{$\hat{x}$}_\theta^{(n)}}_{\cal H}
= \frac{1}{2^{n/2}}
\vdots (\hat{x}_\theta)^{n}\vdots \ .
\end{equation}
This expression shows that the operational operators are in some sense
equivalent to a prescription of ordering of the intrinsic operators. This
prescription is dynamical in character, i.e., it depends on the efficiency
$\eta$ of the detectors used in the homodyne detection. In fact the homodyne
operational algebra is defined by a one-parameter family of dynamical
orderings defined by the generating operator derived in this section.
\section{Operational uncertainty relation}
\label{Sec:Uncertainty}
With explicit forms of operational observables in hand,
we can now analyze their relation to the intrinsic quadrature
operator. For this purpose,
let us look at the first few lowest--order
operational quadrature observables:
\begin{eqnarray}
{\mbox{$\hat{x}$}_\theta^{(1)}}_{\cal H} & = & \eta^{1/2}
\hat{x}_\theta, \nonumber \\
{\mbox{$\hat{x}$}_\theta^{(2)}}_{\cal H} & = & \eta \left(
\hat{x}_\theta^2 + \frac{1 - \eta}{2\eta} \right), \nonumber \\
{\mbox{$\hat{x}$}_\theta^{(3)}}_{\cal H} & = & \eta^{3/2}
\left( \hat{x}_\theta^3 + \frac{3}{2} \frac{1-\eta}{\eta}
\hat{x}_\theta \right).
\end{eqnarray}
The imperfectness of photodetectors influences the operational
observables in two ways. The first one is a trivial rescaling of the
observables by the powers of $\sqrt{\eta}$, the second way is a
contribution of the lower--order terms to the operational
counterparts
of $\hat{x}_\theta^n$.
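These first few expressions are easy to check symbolically. Since the
classical-LO generating operator depends on the single operator
$\hat{x}_\theta$ only, it can be treated as an ordinary commuting
variable for this purpose; the sketch below (assuming the sympy
library is available) differentiates $\hat{Z}_{\cal H}(\lambda,\theta)$
and reproduces the operators listed above.
\begin{verbatim}
import sympy as sp

lam, x = sp.symbols('lambda x', real=True)       # x stands for x_theta
eta = sp.symbols('eta', positive=True)
Z = sp.exp(-lam**2 * (1 - eta) / 4 + sp.I * lam * sp.sqrt(eta) * x)

def x_op(n):
    # n-th operational observable: (1/i^n) d^n Z / d lambda^n at lambda = 0
    return sp.expand(sp.diff(Z, lam, n).subs(lam, 0) / sp.I**n)

for n in (1, 2, 3):
    print(n, x_op(n))
# expected: sqrt(eta)*x,  eta*x**2 + (1 - eta)/2,
#           eta**(3/2)*x**3 + (3/2)*(1 - eta)*sqrt(eta)*x
\end{verbatim}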
In order to see its consequences
let us investigate the rescaled operational variance
$\overline{(\Delta N)^{2}}-(\overline{\Delta N})^{2}$:
\begin{equation}
{\delta x_\theta^2}_{\cal H} =
\frac{1}{2\eta|\beta|^2}
\left(\overline{(\Delta N)^{2}}-
(\overline{\Delta N})^{2}\right)
\end{equation}
From the definitions
of the operational operators it
is clear that this operational variance
involves ${\mbox{$\hat{x}$}_\theta^{(2)}}_{\cal H}$ and
${\mbox{$\hat{x}$}_\theta^{(1)}}_{\cal H}$. The combination of these two
operators is in general different from the intrinsic variance.
Because of this the operational variance of
$x_\theta$ is:
\begin{equation}
\label{deltaxthetaH}
{\delta x_\theta^2}_{\cal H} =
\langle {\mbox{$\hat{x}$}_\theta^{(2)}}_{\cal H} \rangle -
\langle {\mbox{$\hat{x}$}_\theta^{(1)}}_{\cal H} \rangle^2
= \eta \left( \Delta x _\theta^2 + \frac{1-\eta}{2\eta}
\right),
\end{equation}
where $\Delta x_\theta = \sqrt{\langle
\hat{x}_\theta^2 \rangle - \langle
\hat{x}_\theta \rangle^2}$ is the intrinsic quantum dispersion of
the quadrature $x_\theta$. This intrinsic dispersion is
enhanced by a term coming from the imperfectness of the detectors.
Thus, the imperfectness of the photodetectors
introduces an additional noise
to the measurement and deteriorates its resolution.
Using the above
result we can derive the operational uncertainty relation for
the quadratures related to the angles $\theta$ and $\theta'$
\begin{equation}
\label{OperUncRel}
{\delta x_\theta}_{\cal H} {\delta x_{\theta'}}_{\cal H} \ge
\eta \left( {\Delta x_\theta} {\Delta x_{\theta'}} +
\frac{1-\eta}{2\eta} \right).
\end{equation}
Again, an additional term is added to the intrinsic uncertainty
product. This situation is similar to that in Ref.\ \cite{WodkPLA87} where
it was argued that taking into account the measuring device
raises the minimum limit for the uncertainty product. However,
that discussion concerned a {\it simultaneous} measurement of
canonically conjugate variables, which is not the case in homodyne
detection. Using the intrinsic uncertainty relation
$\Delta x_{\theta} \Delta x_{\theta'} \ge |\sin(\theta - \theta')|/2$
we get the result that the right hand side in the operational relation
(\ref{OperUncRel}) is not smaller than $(\eta|\sin(\theta-\theta')|
+1-\eta)/2$.
One may wonder if the definition of squeezing is affected by the operational
operators. Let us consider the dispersions of the two conjugate quadratures,
${\delta x_\theta}_{\cal H}$ and ${\delta x_{\theta+\frac{\pi}{2}}}_{\cal H}$.
In this case the
operational uncertainty,
\begin{equation}
{\delta x_\theta}_{\cal H} {\delta x_{\theta+\frac{\pi}{2}}}_{\cal H} \ge
\frac{1}{2},
\end{equation}
is independent of $\eta$. However, it has to be kept in mind that only
a part of the operational dispersion comes from the field fluctuations.
The easiest way to discuss this is to
rewrite Eq.~(\ref{deltaxthetaH}) in the form
\begin{equation}
{\delta x_\theta}_{\cal H} = \sqrt{\eta (\Delta x_\theta)^2
+ (1-\eta) \left( \frac{1}{\sqrt{2}} \right)^2 },
\end{equation}
which shows that the
operational dispersion is a quadratic average of
the intrinsic field dispersion $\Delta x_\theta$
and the detector noise $1/\sqrt{2}$ that corresponds
to the vacuum fluctuation level. These contributions
enter with the weights $\eta$ and $1-\eta$,
respectively. Therefore if a squeezed quadrature
is measured with imperfect detectors, the
observed dispersion is larger than the intrinsic one.
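As a numerical illustration (with an arbitrarily chosen squeezed value
of the intrinsic dispersion), the sketch below shows how the observed
dispersion creeps back towards the vacuum level $1/\sqrt{2}\simeq 0.71$
as the efficiency decreases.
\begin{verbatim}
import numpy as np

def observed_dispersion(delta_x, eta):
    # quadratic average of field dispersion and vacuum-level detector noise
    return np.sqrt(eta * delta_x**2 + (1.0 - eta) * 0.5)

dx_squeezed = 0.3            # illustrative intrinsic dispersion < 1/sqrt(2)
for eta in (1.0, 0.8, 0.5):
    print(eta, observed_dispersion(dx_squeezed, eta))
\end{verbatim}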
\section{Homodyne detection with random phase}
\label{Sec:Random}
Homodyne detection is used primarily to detect phase--dependent
properties of light. However, it has been recently shown that even a
setup with a random phase between the
signal and local oscillator fields can
be a useful tool in optical experiments \cite{MunrBoggPRA95}.
Although in this
case the phase sensitivity is lost, the homodyne detector can be
applied to measure phase--independent quantities and such a setup
presents some advantages over a single photodetector. First, the
information on the statistics of the field is carried by the
photocurrent difference between the two rather intense fields. Within
existing detector
technology, this quantity can be measured with a significantly better
efficiency than the weak field itself. Secondly, the spatio--temporal
mode that is actually measured by the homodyne detector
is defined by the shape
of the local oscillator field. Consequently, application of
the local oscillator in
the form of a short pulse allows the measurement to be performed with an
ultrafast sampling time. This technique
has been used in Ref.\ \cite{MunrBoggPRA95} to measure the time
resolved photon number statistics
from a diode laser operating below threshold. The achieved
sampling time was significantly shorter
than that of previously used methods.
The photon number distribution and other phase--independent quantities are
reconstructed from the average of the random phase homodyne statistics
calculated with the so--called {\it pattern functions}
\cite{LeonPaulPRA95}.
For commonly used quantities, such as the diagonal elements of the density
matrix in the Fock basis, these pattern functions
take a quite complicated form. In this
section we will consider observables that are related to the
experimental data in the most direct way, the moments of the
homodyne statistics with randomized phase.
We will derive the family of operational
observables and relate them to the powers of the photon number
operator $\hat{n} = \hat{a}^\dagger \hat{a}$.
The generating operator for homodyne detection with random phase
$\hat{Z}_{\cal R}$ ($\cal R$ stands for the random phase) is
obtained readily from $\hat{Z}_{\cal H}$ by averaging it over the
phase $\theta$. This gives
\begin{equation}
\label{Eq:ZRNormOrd}
\hat{Z}_{\cal R} (\lambda) =
\int_0^{2\pi} \frac{\text{d}\theta}{2\pi}
\hat{Z}_{\cal H} (\lambda, \theta)
= e^{-\lambda^2/4}
: J_0 \left(\lambda \sqrt{2\eta \hat{a}^\dagger \hat{a}}
\right): \ ,
\end{equation}
where $J_0$ is the Bessel function of the 0th order.
With the help of the result derived in the Appendix,
the normally ordered form of the Bessel function
can be transformed into the following expression:
\begin{equation}
\hat{Z}_{\cal R} (\lambda) = e^{-\lambda^2/4} L_{\hat{n}}
(\eta\lambda^2/2),
\end{equation}
where the index of the Laguerre polynomial is the photon number
operator. The Laguerre polynomial with an operator-valued index is
defined by its decomposition in the Fock basis.
The family of operational observables is given by the derivatives
of the generating operator
\begin{equation}
\hat{x}^{(n)}_{\cal R} = \left. \frac{1}{i^n}
\frac{{\text d}^n}{\mbox{\rm d}\lambda^n}
\hat{Z}_{\cal R} (\lambda)
\right|_{\lambda=0}.
\end{equation}
Since the homodyne statistics averaged over the phase is even,
the odd derivatives disappear. A straightforward calculation yields
the operators for even $n=2m$:
\begin{eqnarray}
\hat{x}_{\cal R}^{(2m)} & = & \frac{(2m-1)!!}{2^m} : L_m ( -2\eta
\hat{a}^{\dagger} \hat{a}): \nonumber \\
& = & \frac{(2m-1)!!}{2^m} \sum_{k=0}^m
\left( \begin{array}{c} m \\ k \end{array} \right)
\frac{(2\eta)^k}{k!}
\nonumber \\
& & \;\;\;\;\;\;\;\;\; \times
\hat{n} (\hat{n}-1)\ldots(\hat{n}-k+1).
\end{eqnarray}
This formula shows that
$\hat{x}_{\cal R}^{(2m)}$
is a polynomial of $\hat{n}$
of the order of $m$. Therefore the first $m$ moments of the photon
number distribution can be computed from
$\langle\hat{x}_{\cal R}^{(2)}\rangle,
\ldots,
\langle\hat{x}_{\cal R}^{(2m)}\rangle$.
The two lowest--order observables
are given explicitly by
\begin{eqnarray}
\hat{x}_{\cal R}^{(2)} & = & \eta \hat{n} + \frac{1}{2}
\nonumber \\
\hat{x}_{\cal R}^{(4)} & = & \frac{3}{2} \left(
\eta^2 \hat{n}^2 + \eta ( 2- \eta) \hat{n} + \frac{1}{2} \right).
\end{eqnarray}
It is seen that even in the case of ideal noise-free detectors
$\hat{x}_{\cal R}^{(4)} \neq (\hat{x}_{\cal R}^{(2)})^2$ and the
family of the operational observables has nontrivial algebraic
properties. Inversion of the above equations yields:
\begin{eqnarray}
\hat{n} & = & \frac{1}{\eta}
\left(\hat{x}_{\cal R}^{(2)} - \frac{1}{2} \right)
\nonumber \\
\hat{n}^2 & = & \frac{1}{\eta^2}
\left( \frac{2}{3} \hat{x}_{\cal R}^{(4)} - (2-\eta)
\hat{x}_{\cal R}^{(2)} + \frac{1-\eta}{2} \right).
\end{eqnarray}
As an illustration, let us express the normalized photon number
variance $Q = (\langle \hat{n}^2 \rangle
- \langle \hat{n} \rangle^2 - \langle \hat{n} \rangle)/
\langle \hat{n} \rangle$ \cite{MandOL79} in terms of the
expectation values of $\hat{x}_{\cal R}^{(2)}$ and
$\hat{x}_{\cal R}^{(4)}$. This variance is used to characterize the
sub-Poissonian statistics of light. After some simple
algebra we arrive at
\begin{equation}
Q = \frac{1}{\eta}
\frac{\frac{2}{3}\langle\hat{x}_{\cal R}^{(4)}\rangle
- \langle\hat{x}_{\cal R}^{(2)}\rangle^2
-\langle\hat{x}_{\cal R}^{(2)}\rangle + \frac{1}{4}}%
{\langle\hat{x}_{\cal R}^{(2)}\rangle - \frac{1}{2}}.
\end{equation}
Thus, the variance $Q$ can be read out from the two lowest
moments of the homodyne statistics with the randomized phase.
The photodetector efficiency $\eta$ enters into the above formula
only as an overall scaling factor. This result is analogous to that
obtained for the setup with a single imperfect detector, and is due to
the fact that $Q$ describes normally ordered field fluctuations.
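A quick consistency check of this formula can be done numerically.
For a Poissonian state ($\langle\hat{n}^2\rangle =
\langle\hat{n}\rangle^2 + \langle\hat{n}\rangle$, so $Q=0$) one can
build the moments of the random-phase statistics from the expressions
for $\hat{x}^{(2)}_{\cal R}$ and $\hat{x}^{(4)}_{\cal R}$ and invert
them; the photon-number values used below are placeholders chosen for
illustration.
\begin{verbatim}
def q_mandel(m2, m4, eta):
    # Mandel Q from the rescaled 2nd and 4th random-phase homodyne moments
    return ((2.0 / 3.0) * m4 - m2**2 - m2 + 0.25) / (eta * (m2 - 0.5))

eta, n1, n2 = 0.7, 2.0, 6.0        # Poissonian: <n> = 2, <n^2> = 6
m2 = eta * n1 + 0.5
m4 = 1.5 * (eta**2 * n2 + eta * (2.0 - eta) * n1 + 0.5)
print(q_mandel(m2, m4, eta))       # ~0, independently of eta
\end{verbatim}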
\section{Conclusions}
\label{Sec:TheEnd}
We have presented the operational description
of the balanced homodyne detection
scheme with imperfect photodetectors. For homodyne detection it is
possible to derive exact expressions for the POVM and the corresponding
algebra of operational operators. The result of these calculations
shows that a whole family of operational observables rather than a single
operator should be used to discuss a realistic setup. This family
allows one to easily relate the experimentally observed
fluctuations to the intrinsic properties of the system.
\section*{Acknowledgments}
The authors have benefited from
discussions with P. L. Knight and G. Herling.
This work was partially supported by
the US Air Force Phillips
Laboratory Grant No. F29601-95-0209.
K.B. thanks the European Physical
Society for the EPS/SOROS
Mobility Grant.
\section{Introduction and motivations}
\vspace*{-0.5pt}
\noindent
In this report, we review some properties concerning Yang-Mills (YM)
theories in 1+1 dimensions in the light-cone gauge. The reason why YM
theories in 1+1 dimensions are interesting is at least twofold:
\begin{itemlist}
\item The reduction of the dimensions to $D=2$ entails tremendous
simplifications in the theory, so that several important problems can be faced
in this lower dimensional context. We are thinking for instance to the exact
(when possible) evaluation of vacuum to vacuum amplitudes of Wilson loop
operators, that, for a suitable choice of contour and in some specific limit,
provide the potential between two static quarks. Another example is the
spectrum of the Bethe-Salpeter equation, when dynamical fermions are added to
the system.
\item The second reason is that YM theories in $D=2$ have several peculiar
features that are interesting by their own. The most remarkable ones are:
a) in $D=2$ within the same gauge choice (light-cone gauge) two
inequivalent formulations of the theory seem to coexist;
b) $D=2$ is a point of discontinuity for YM theories;
this is an intriguing
feature whose meaning has not been fully understood so far.
\end{itemlist}
\noindent
All the features we have listed are most conveniently studied
if the light-cone
gauge (lcg) is chosen. In such a gauge the Faddeev Popov sector
decouples and the unphysical degrees of freedom content of the theory is
minimal. The price to be paid for these nice features is the presence of the so
called `spurious' poles in the vector propagator.\cite{BNS91}
In fact, in the gauge $nA=0$ with $n^\mu$ a given constant null vector
($n^2=0$), the form of the
propagator in $D$ dimensions turns out to be
\begin{equation}
D^{ab}_{\mu \nu} (k)= {-i \delta^{ab}\over k^2 + i \epsilon} \left( g_{\mu \nu}
- {n_\mu k_\nu + n_\nu k_\mu\over nk}\right)\, .
\label{prop1}
\end{equation}
As we shall see, to handle the spurious pole at $nk=0$ is a
delicate matter; basically all difficulties encountered in the past
within the lcg quantization are related to this problem.
In Sect. 2 we focus on the $D\ne 2$ case, and discuss the so
called `manifestly unitary' and `causal' formulations of the theory. We shall
see that the correct formulation is the causal one: the
manifestly unitary formulation will meet so many inconsistencies to make it
unacceptable. Moreover, even in the causal formulation, the theory looks
discontinuous in the limit $D=2$.
In Sect. 3 we compare the two formulations at strictly $D=2$. Surprisingly, in
this case both seem to coexist, without obvious inconsistencies.
Thus, a natural question arises: are the two quantization schemes equivalent in
$D=2$? Do they provide us with equal results?
The
answer to these questions are given in Sect. 4 where Wilson loop
expectation values are evaluated. We shall find
that the two formulations are indeed inequivalent.
In Sect. 5
the theory will be considered on the cylinder ${\cal R}\times {\cal S}$,
namely with the space variable constrained in an interval, in order to
reach a consistent infrared (IR) regularization. Again the two formulations
behave quite differently.
Finally Sect. 6 contains a discussion
of the bound state integral equation when dynamical fermions are present
and our conclusions.
\textheight=7.8truein
\setcounter{footnote}{0}
\renewcommand{\fnsymbol{footnote}}{\alph{footnote}}
\section{$D\ne 2$: a comparison between manifestly unitary and causal
formulations}
\noindent
A manifestly unitary formulation of YM theories in lcg can be obtained by
quantizing the theory in the so called null-frame formalism, i.e. passing in
light-cone coordinates and interpreting $x^+$ as the evolution coordinate
(time) of the system; the remaining components $x^-, x_\perp$ will be
interpreted as `space' coordinates. Within this quantization scheme, one of
the unphysical components of the gauge potential (say $A_-$) is set equal to
zero by the gauge choice whereas the remaining unphysical component ($A_+$)
is no
longer a dynamical variable but rather a Lagrange multiplier of the secondary
constraint (Gauss' law). Thus, already at the classical level, it is possible
to restrict to the phase space containing only the physical (transverse)
polarization of the gauge fields. Then, canonical quantization on the null
plane provides the answer to the prescription for the spurious pole in the
propagator, the answer being essentially the Cauchy principal value (CPV)
prescription.
Unfortunately, following this scheme, several
inconsistencies arise, all of them being related to the violation of causality
that the CPV prescription entails:
\begin{itemlist}
\item non-renormalizability of the theory: already at the one loop level,
dimensionally regularized Feynman integrals develop loop singularities that
appear as double poles at $D=4$.\cite{CDL85}
\item power counting criterion is lost: the pole structure in the complex $k_0$
plane is such that spurious poles contribute under Wick rotation. As a
consequence euclidean Feynman integrals are not
simply related to Minkowskian ones as an extra contribution shows up
which jeopardizes naive power counting.\cite{BW89}
\item gauge invariance is lost: due to the above mentioned extra
contributions, the $N=4$ supersymmetric version of the theory turns out
not to be finite, at variance with the Feynman gauge result.\cite{CDL85}
\end{itemlist}
Consequently, manifestly unitary theories do not seem to exist. As explained
above, all the bad features of this formulation have their root in the lack of
causality of the prescription for the spurious pole, and the subsequent
failure of the power counting criterion for convergence.
Thus, a natural way to circumvent
these problems is to choose a causal prescription. It was
precisely following these arguments that Mandelstam and
Leibbrandt \cite{ML83}, independently, introduced the ML prescription
\begin{equation}
{1\over k_-}\equiv ML({1\over k_-})= {k_+\over k_+k_- + i \epsilon}={1\over
k_- +
i \epsilon {\rm sign}(k_+)}\, .
\label{ml}
\end{equation}
It can be easily realized that with this choice the position of
the spurious pole is always `coherent' with that of Feynman ones, no
extra terms appearing after Wick rotation which threaten the power counting
criterion for convergence.
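The `coherence' of the ML pole with the Feynman poles can be made
concrete with a trivial numerical sketch: in $D=2$ light-cone
coordinates ($k^2 = 2k_+k_-$) the Feynman denominator places the $k_-$
pole below the real axis for $k_+>0$ and above it for $k_+<0$, exactly
as the ML-regulated spurious pole does, whereas the CPV recipe averages
boundary values from both sides. In the sketch the infinitesimal
$\epsilon$ is given a finite placeholder value.
\begin{verbatim}
import numpy as np

eps = 1e-3                                    # finite stand-in for epsilon

def ml_pole(k_plus):
    # zero of (k_- + i*eps*sign(k_+)), i.e. the ML-regulated spurious pole
    return -1j * eps * np.sign(k_plus)

def feynman_pole(k_plus):
    # zero of (2*k_+ k_- + i*eps) in the complex k_- plane (D = 2)
    return -1j * eps / (2.0 * k_plus)

for kp in (+1.0, -1.0):
    print(kp, ml_pole(kp), feynman_pole(kp))  # same half-plane for both
\end{verbatim}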
How can one justify such a recipe? One year later Bassetto and
collaborators \cite{BDLS85} filled the gap
by showing that ML prescription arises naturally by quantizing the
theory at equal time,
rather than at equal $x^+$. Eventually they succeeded \cite{BDS87} in proving
full renormalizability of
the theory and full agreement with Feynman gauge results in
perturbative
calculations.\cite{BKKN93}
At present the level of
accuracy of the light-cone gauge is indeed comparable with that of the
covariant gauges.
An important point to be stressed is that equal time canonical quantization
in lcg,
leading to the ML prescription for the spurious pole, does not provide us
with a manifestly
unitary formulation of the theory. In fact in this formalism Gauss' laws do not
hold strongly but, rather, the Gauss' operators obey to a free field equation
and entail the presence in the Fock
space of unphysical degrees of freedom. The causal
nature of the ML prescription for the spurious poles is a consequence of
the causal propagation of those `ghosts'. A physical
Hilbert space can be selected by imposing the (weakly) vanishing
of Gauss' operators. This mechanism is similar to the Gupta Bleuler
quantization scheme for electrodynamics in Feynman gauge,
but with the great advantage that it can be naturally extended to the non
abelian case without Faddeev Popov ghosts.\cite{BNS91}
\vfill\eject
\textheight=7.8truein
\setcounter{footnote}{0}
\renewcommand{\fnsymbol{footnote}}{\alph{footnote}}
\section{$D= 2$: a comparison between the manifestly unitary and causal
formulations}
\noindent
The causal formulation of the theory can be straightforwardly extended to {\it
any} dimension, including the case $D=2$. On the other hand, the
manifestly unitary formulation can {\it only} be defined in $D=2$
without encountering obvious inconsistencies. The reason is simple:
all problems
were related to the lack of causality encoded in the $CPV$ prescription.
But at exactly $D=2$ there are no physical degrees of
freedom propagating at all, and then causality is no longer a concern.
Moreover, at exactly $D=2$ and within the lcg, the 3- and 4- gluon vertices
vanish, so that all the inconsistencies related to the perturbative evaluation
of Feynman integrals are no longer present in this case.
A manifestly unitary formulation provides the following
`instantaneous - Coulomb type' form for the only non vanishing component of the
propagator:
\begin{equation}
D^{ab}_{++} (x) = - {i\delta^{ab}\over (2\pi)^2}\int d^2 k \, e^{ikx}
{\partial\over \partial k_-} P\left({1\over k_-}\right) = -i\delta^{ab}
{|x^-|\over 2} \delta(x^+)\, , \label{prcpv2}
\end{equation}
where $P$ denotes CPV prescription,
whereas equal time canonical quantization gives, for the same component of the
propagator,
\begin{equation}
D_{++}^{ab}(x)= {i\delta^{ab}\over \pi^2}\int d^2k\, e^{ikx} {k_+^2\over
(k^2+i\epsilon)^2}={\delta^{ab}(x^-)^2\over \pi ( -x^2 + i \epsilon)}\ .
\label{prml2}
\end{equation}
Thus, it seems we have two different formulations of YM theories in
$D=2$, and within the same gauge choice, the lcg.\cite{BDG94}
Whether they are equivalent and, in
turn, whether they are equivalent to a different gauge choice, such as
Feynman gauge, has to be explicitly verified.
We can summarize the situation according to the content of unphysical degrees
of freedom. Since the paper by 't Hooft in 1974 \cite{TH74},
it is a common belief that
pure YM in
$D=2$ is a theory with no propagating degrees of freedom. This
happens in the
manifestly unitary formulation leading to CPV prescription for the spurious
pole and to the propagator (\ref{prcpv2}).
This formulation, however, cannot be
extended outside $D=2$ without inconsistencies. Alternatively, we have the same
gauge choice but with a different quantization scheme, namely at equal time,
leading to the
causal (ML) prescription for the spurious pole and to the propagator
(\ref{prml2}). Here, even in the pure YM case, some
degrees of freedom survive, as we have propagating ghosts.
Such a formulation is in a better shape when compared to the
previous one as it can be smoothly extended to any dimension,
where consistency
with Feynman gauge has been established.
Feynman gauge validity for
any $D\ne 2$ is unquestionable, while, at strictly $D=2$, the vector
propagator in this gauge
fails to be a tempered distribution. Still, in the spirit of dimensional
regularization, one can always evaluate amplitudes in $D\ne 2$ and take
eventually the limit $D\to 2$. In following this attitude, the number of
degrees of freedom
is even bigger as Faddeev-Popov ghosts are also to be taken into account.
In addition,
in the covariant gauge 3- and 4- gluon vertices do not
vanish and the theory does not look free at all.
\textheight=7.8truein
\setcounter{footnote}{0}
\renewcommand{\fnsymbol{footnote}}{\alph{footnote}}
\section{Wilson loops calculations}
\noindent
To clarify the whole matter we need a test of gauge invariance. In particular,
we want to answer the following three questions:
\begin{romanlist}
\item Is YM theory continuous in the limit $D\to 2$?
\item Is YM theory in $D=2$ a free theory?
\item Are the two lcg formulations in $D=2$ equivalent?
\end{romanlist}
\noindent
To probe gauge invariance and to answer the above questions, following
ref.\cite{BDG94}, we shall evaluate
vacuum to vacuum amplitudes of Wilson loop operators, defined as a
functional of the closed contour $\gamma$ through
\begin{equation}
W[\gamma]= {1\over N} \int dA_\mu \delta (\Phi (A)) {\rm det} [M_\Phi] e^{i\int
dx {\cal L}(x)}\, {\rm Tr} \left\{ {\cal P} e^{i g \oint_\gamma dx^\mu A^a_\mu
T^a}\right\}\, ,\label{wl}
\end{equation}
where for convenience we choose $SU(N)$ as gauge group with hermitean
generators $T^a$. In Eq.(\ref{wl}), $\Phi(A)=0$ denotes the gauge choice and
${\rm det}[M_\Phi]$ the corresponding Faddeev-Popov determinant, that can be
either trivial or not, depending on the gauge $\Phi(A)$. As usual,
${\cal P}$ denotes ordering along the closed path $\gamma$, that we shall
choose to be a light-like rectangle in the plane $(x^+, x^-)$
with length sides $(T,L)$.
For later convenience, we recall that the Casimir constants of the
fundamental and adjoint representations, $C_F$ and $C_A$, are defined through
\begin{equation} C_F= (1/N) {\rm Tr} (T^a T^a)= (N^2-1)/2N\ , \quad {\rm and}
\quad C_A \delta^{ab} = f^{acd}f^{bcd}= \delta^{ab} N \ , \label{casimir}
\end{equation}
$f^{abc}$ being the structure constants for $SU(N)$.
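As a trivial numerical cross-check of Eq.~(\ref{casimir}) (purely illustrative; it uses
the $N=2$ generators $T^a=\sigma^a/2$ and $f^{abc}=\epsilon^{abc}$):
\begin{verbatim}
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)   # Pauli matrices
T = sigma / 2                                          # SU(2) generators
N = 2

C_F = sum(np.trace(T[a] @ T[a]).real for a in range(3)) / N
print(C_F, (N**2 - 1) / (2 * N))        # both give 0.75

f = np.zeros((3, 3, 3))                 # f^{abc} = epsilon^{abc} for SU(2)
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0

C_A = np.einsum('acd,bcd->ab', f, f)[0, 0]
print(C_A, N)                           # both give 2
\end{verbatim}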
First of all we shall check continuity in the $D\to 2$ limit. To this
purpose, we have to choose the lcg in its causal formulation. In fact,
among the
gauge choices we considered, this is the only one whose formulation is
smooth in the $D\to 2$ limit\footnote{In fact, lcg in its manifestly unitary
formulation is acceptable only at $D=2$ and therefore cannot be used to check
continuity; even Feynman gauge cannot be used, as the propagator is divergent
at $D=2$, preventing therefore a calculation at exactly $D=2$.}.
Within this gauge choice, only a perturbative ${\cal O}(g^4)$ calculation is
viable. Performing the calculation in $D$ dimensions and
eventually taking the limit $D\to 2$, the expression for the Wilson loop gives
\begin{equation}
\lim_{D\to2} W_{ML}^{(D)}(\gamma) = 1-i {g^2\over 2}{LTC_F} -{g^4\over 8}(LT)^2
\left[C^2_F-{C_FC_A\over 8\pi^2}\left(1+{\pi^2\over 3}\right)\right] + {\cal
O}(g^6)\ ,\label{wlmld}
\end{equation}
whereas the same quantity evaluated at exactly $D=2$ gives a different answer,
namely
\begin{equation}
W_{ML}^{(D=2)}(\gamma) = 1-i {g^2\over 2}{LTC_F} -{g^4\over 8}(LT)^2
\left[C^2_F-{C_FC_A\over 24}\right] + {\cal
O}(g^6)\ .\label{wlml2}
\end{equation}
Thus, we have a surprising result: YM theories are discontinuous at $D=2$.
The technical reason for such a discontinuity can be easily understood
in terms of
`anomalous' diagrams that survive in the limit $D\to 2$. In strictly $D=2$, as
already stressed, the 3- and 4- gluon vertices vanish in lcg.
Consequently, the free propagator (\ref{prml2}) is the $complete$ two
point Green
function, as there are no radiative corrections. On the other hand, in
$D=2+\varepsilon$ the gluon vertices do not vanish anymore, as `$\varepsilon$'
transverse components couple the gauge fields. Thus, in $D\ne 2$ dimensions
the two-point Green function has radiative corrections. The one loop
correction, ${\cal O}(g^2)$, is the standard `bubble diagram', with two free
propagators connected by two 3-gluon vertices. Obviously, the strength of the
vertices vanishes in the limit $\varepsilon=(D-2) \to 0$; nevertheless, this
correction to the Green function produces a finite contribution in the limit
$\varepsilon \to 0$ due to the matching with the loop pole precisely at
$D=2$. Such a dimensional `anomaly-type' phenomenon is responsible for the
discontinuity of YM theory at $D=2$.
As a matter of fact, it is easy to
evaluate the contribution to the Wilson loop given by this anomalous part of
the Green function, surviving in the limit $D\to 2$: it provides the factor $g^4
(LT)^2 C_FC_A/64\pi^2$, which is indeed the difference between Eq.~(\ref{wlmld})
and Eq.~(\ref{wlml2}). This discontinuity at $D=2$ is a very interesting
phenomenon, whose nature is still unclear; whether it is related to a
true anomaly, $i.e.$ whether there is a classical symmetry violated
at the quantum level,
is still a matter of investigation.
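That the anomalous factor $g^4 (LT)^2 C_FC_A/64\pi^2$ accounts for the whole difference
between Eq.~(\ref{wlmld}) and Eq.~(\ref{wlml2}) is elementary algebra; a short symbolic
check (illustrative only) reads:
\begin{verbatim}
import sympy as sp

g, L, T, CF, CA = sp.symbols('g L T C_F C_A', positive=True)
pi = sp.pi

# O(g^4) terms of Eqs. (wlmld) and (wlml2)
w_limit = -(g**4 / 8) * (L*T)**2 * (CF**2 - CF*CA/(8*pi**2) * (1 + pi**2/3))
w_d2    = -(g**4 / 8) * (L*T)**2 * (CF**2 - CF*CA/24)

anomalous = g**4 * (L*T)**2 * CF * CA / (64 * pi**2)

print(sp.simplify(w_limit - w_d2 - anomalous))   # -> 0
\end{verbatim}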
Consistency with Feynman gauge can be checked by evaluating the
dimensionally regularized Wilson loop: an ${\cal O}(g^4)$
calculation provides exactly the same result of lcg in its causal formulation
for any $D$ and therefore also in the limit $D\to 2$: as expected, we have full
agreement between Feynman and light-cone gauge if the ML prescription for the
spurious poles is adopted. We stress that the `anomalous' self-energy
contribution we have hitherto discussed,
is essential in order to get such an agreement.\cite{BKKN93}
However, both in the dimensionally regularized case
with the limit $D\to 2$ taken at the end and in the strictly $D=2$ case,
within the causal lcg formulation, we realize that the Wilson loop
results do not depend only on $C_F$:
a `genuine' non abelian $C_FC_A$ dependence appears at ${\cal O}(g^4)$.
This means that, although the vertices vanish, YM in $D=2$ dimensions is
$not$ equivalent to an abelian theory.
On the contrary, this feature does not occur in the manifestly unitary
(strictly 2-dimensional) formulation. In fact, due to the contact nature
of the propagator, Eq.~(\ref{prcpv2}), non-abelian $C_A$-dependent terms
do not appear in the expression of Wilson loops. In this case, it is easy to
find that the perturbative result exponentiates in a simple abelian way
\begin{equation}
W_{CPV}^{(D=2)}(\gamma)= e^{-ig^2 LT C_F/2}\, .
\label{wlcpv2}
\end{equation}
Pure YM in its manifestly unitary formulation is essentially free and
equivalent
to an abelian theory. We are led to conclude that the
two light-cone formulations at $D=2$ are indeed inequivalent.
Summarizing, three different evaluations of the same Wilson loop within the
same gauge choice (lcg) provided us with three different
answers! Discrepancy between Eqs. (\ref{wlml2}) and (\ref{wlcpv2})
is explained by the coexistence of two different inequivalent formulations of
YM theory, whereas discrepancy between Eqs.
(\ref{wlmld}) and (\ref{wlml2}) is explained by the discontinuity in
the limit $D\to 2$.
However, in all the cases we considered, we always got at least
a pure `area-law' dependence of the Wilson loop.
Is this a universal property of $D=2$ YM theory? Contrary to a
common belief, we shall show that this is not the case, by providing an
explicit counterexample.\cite{BCN96}
Let us consider again a rectangular
loop $\tilde \gamma$ with area $A=LT$, but now centered at the origin of
the plane $(x^0,x^1)$. For convenience, let us stick to the $D=2$ case
and focus on the two different formulations of lcg. From a
physical point of view, this contour is even more interesting: were one
able to compute the exact value of the Wilson loop amplitude, one could
derive
the potential $V(L)$ between two static quarks separated by a distance $L$
through the well known formula
\begin{equation}
\lim_{T\to \infty} W(\tilde \gamma)= e^{-i T V(L)}
\label{potential}
\end{equation}
In the manifestly unitary (CPV) case, due to the contact nature of the
potential (\ref{prcpv2}), the Wilson loop can again be exactly evaluated
giving, for a finite size of the rectangle,
\begin{equation}
W_{CPV}^{(D=2)}(\tilde \gamma) = e^{-ig^2 C_F LT/2}\
\label{loop2cpv}
\end{equation}
and therefrom a linear confining potential between quarks with string tension
$\sigma= g^2 C_F/2$. However, it should be emphasized that such a confining
result for $QCD_2$ has the same origin as in QED, namely it follows from
the $abelian$ `contact' nature of the potential.
In the causal (ML) case a complete evaluation at all orders is not
viable due to the presence of genuine non abelian terms. Only a
perturbative ${\cal O}(g^4)$
evaluation is possible and, after lengthy calculations, one finds
\begin{eqnarray}
W_{ML}^{(D=2)}(\tilde\gamma)&=& 1-i {g^2\over 2}{LTC_F} -{g^4\over 8}(LT)^2
\left\{C_F^2 + {C_FC_A\over 4\pi^2}\left[ 3 + {2\pi^2\over 3} + 2\beta[1+
\right. \right.\nonumber\\
&&\left.\left. (2+\beta)\ln\beta] - 2 (1+\beta)^2 \ln(1+\beta) - {2\over 3\beta}
\ln^2 (1+\beta) - {1\over 6\beta^2} \times
\right.\right.\nonumber\\
&&\left.\left.(1-\beta)^2\ln^2 (1-\beta) -{1\over
3\beta^2}(1-\beta)^4 \left({\rm Li}(\beta)+{\rm Li}\left(-{\beta\over
1+\beta}\right)\right)- \right.\right.\nonumber\\
&&\left.\left. {1\over \beta} \left({\rm Li} (\beta) + {\rm Li}
\left({\beta\over 1 + \beta}\right)\right)\right]\right\} + {\cal O} (g^6)\ .
\label{loop2ml}
\end{eqnarray}
The Wilson loop amplitude, for finite $L$ and $T$, not only depends on
the area, but also on the dimensionless ratio $\beta = L/T$ through a
complicated factor involving the dilogarithm function ${\rm Li} (z)$.
Obviously,
the fact that in this case we only have a perturbative ${\cal O}(g^4)$
calculation prevents us from interpreting the result in terms of
a potential between static quarks in the large $T$ limit. Nevertheless, it is
remarkable and perhaps not incidental that in such a limit
all the dependence on
$\beta$ cancels, leaving again a pure area dependence
\begin{equation}
\lim_{T\to\infty}W_{ML}^{(D=2)}(\tilde\gamma)= 1-i {g^2\over 2}{LTC_F}
-{g^4\over
8}(LT)^2 \left\{C_F^2 + {C_FC_A\over 12\pi^2}(9 + 2\pi^2) \right\}\
\label{loop2mllarget}
\end{equation}
with finite coefficients.
We stress that again the same theory with the same gauge choice leads to
{\it different} results when using different expressions for the two
point Green function, even in the large T limit.
\section{Wilson loops on the cylinder}
\noindent
While a comparison with Feynman gauge at $D\ne2$ gave a satisfactory result,
a comparison at strictly $D=2$ is impossible owing to the well-known IR
singular behaviour of the vector propagator in Feynman gauge. Then, in
order to achieve a consistent IR regularization, we consider the theory
on the cylinder ${\cal R}\times {\cal S}$, namely we restrict the space variable
to the interval $-L\le x \le L$ with periodic boundary conditions on the
potentials. Time is $not$ compactified. We follow here the treatment given
in ref.\cite{BGN96}.
In so doing new features appear owing to the non trivial topology of the
cylinder, and we find it useful, as a preliminary step, to examine the equal-time quantization
of the pure YM theory in the light-cone gauge $A_{-}=0$. Introduction
of fermions at this stage would not entail particular difficulties,
but would be inessential to our subsequent argument.
We recall that axial-type gauges cannot be defined on compact manifolds
without introducing singularities in the vector potentials (Singer's
theorem).\cite{SI78} Partial compactifications are possible provided they occur
in a direction different from the one of the gauge fixing vector: this
is indeed what happens in the present case.
Starting from the standard lagrangian density (for SU(N))
\begin{equation}
{\cal L}= -1/2\, Tr(F^{\mu\nu}F_{\mu\nu})\, - 2 Tr(\lambda nA),
\label{lagrangian}
\end{equation}
$n_{\mu}={1\over \sqrt{2}}(1,1)$ being the gauge vector and $\lambda$
being Lagrange multipliers, which actually coincide with Gauss' operators,
it is straightforward to derive
the equations of motion
\begin{eqnarray}
&A_{-}=0,\,\,\,{\partial_{-}}^2 A_{+}=0,\nonumber\\
&\partial_{-}\partial_{+}A_{+} -ig[A_{+},\partial_{-}A_{+}]=\lambda.
\label{motion}
\end{eqnarray}
As a consequence we get
\begin{equation}
\label{gausson}
\partial_{-}\lambda=0.
\end{equation}
In a `light-front' treatment (quantization at equal $x^{+}$), this
equation would be a constraint and $\partial_{-}$ might be inverted
(with suitable boundary conditions) to get the `strong' Gauss' laws
\begin{equation}
\label{gauss}
\lambda=0.
\end{equation}
This would correspond in the continuum to the $CPV$ prescription for
the singularity at $k_{-}=0$ in the relevant Green functions.
In equal-time quantization eq.(\ref{gausson}) is an evolution equation. The
Gauss' operators do not vanish strongly: Gauss' laws are imposed
as conditions on the `physical' states of the theory.
In so doing one can show \cite{BGN96} that the only surviving `physical'
degrees of freedom are zero modes of the potentials related
to phase factors of contours winding around the cylinder.
Frequency parts are unphysical, but non vanishing: they contribute
indeed to the causal expression of the vector propagator
\begin{eqnarray}
G_{c}(t,x)&=& G(t,x) -{{it}\over {4L}}P ctg \big({{\pi\sqrt2 x^{+}}
\over {2L}}\big),\nonumber\\
G(t,x)&=&1/2\, |t|\, \big(\delta_{p}(x+t) - {1\over {2L}} \big),
\label{propa}
\end{eqnarray}
$\delta_{p}$ being the periodic generalization of the Dirac distribution.
$G_{c}$ looks like a complex ``potential" kernel, the absorptive part being
related to the presence of ghost-like excitations, which are
essential to recover the ML prescription in the decompactification
limit $L \to \infty$; as a matter of fact
in this limit $G(t,x)$ becomes the
`instantaneous' 't Hooft potential, whereas $G_{c}$ is turned into the
causal ML distribution.
We are now in the position of comparing a Wilson loop on the cylinder
when evaluated according to the 't Hooft potential or using the
causal light-cone propagator.
In order to avoid an immediate interplay with topological features, we
consider a Wilson loop entirely contained in the basic interval
$-L\le x \le L$. We choose again a rectangular
Wilson loop $\gamma$ with light--like sides, directed along the vectors
$n_\mu$ and $n^*_\mu$, with lengths $\lambda$ and $\tau$
respectively, and parametrized according to the equations:
\begin{eqnarray}
\label{quarantasei}
C_1:x^\mu (s) &=& n^{\mu} {\lambda} s, \nonumber\\
C_2:x^\mu (s) &=& n^{\mu} {\lambda}+ n^{* \mu}{\tau} s, \nonumber\\
C_3:x^\mu (s) &=& n^{* \mu}{\tau} + n^{\mu} {\lambda}( 1-s), \\
C_4:x^\mu (s) &=& n^{* \mu}{\tau} (1 - s), \qquad 0 \leq s \leq 1, \nonumber
\end{eqnarray}
with ${\lambda + \tau}<2\sqrt 2 L$.
We are again interested in the quantity
\begin{equation}
\label{quarantasette}
W(\gamma)={1\over N} {\bf \Big< 0}|Tr{\cal T}{\cal P}\Big( exp\Big[ig
\oint_{\gamma}
A dx^+\Big]\Big)|{\bf 0 \Big>},
\end{equation}
where ${\cal T}$ means time-ordering and
${\cal P}$ color path-ordering along $\gamma$.
The vacuum state belongs to the physical Hilbert space as far as
the non vanishing frequency parts are concerned; it is indeed the
Fock vacuum $|{\bf \Omega \Big>}$. Then we
consider its direct product with the lowest eigenstate of the
Hamiltonian concerning zero modes (see \cite{BGN96}).
Due to the occurrence of zero modes, we cannot define a ``bona fide"
complete propagator for our theory:
on the other hand a propagator is not required in
eq.(\ref{quarantasette}).
We shall first discuss
the simpler case of QED, where no color ordering is involved.
Eq.(\ref{quarantasette}) then becomes
\begin{equation}
\label{quarantotto}
W(\gamma)= {\bf \Big< 0}|{\cal T}\Big( exp\Big[ig\oint_{\gamma}
A dx^+\Big]\Big)|{\bf 0 \Big>},
\end{equation}
and a little thought is enough to realize the factorization property
\begin{eqnarray}
\label{quarantanove}
W(\gamma)&=& {\bf \Big< 0}|{\cal T}\Big( exp\Big[{ig \over {\sqrt
{2L}}}\oint_{\gamma}
(b_0 + a_0t) dx^+\Big]\Big)|{\bf 0 \Big>}{\bf \Big< 0}|{\cal T}
\Big( exp\Big[ig \oint_{\gamma}
\hat{A}(t,x) dx^+\Big]\Big)|{\bf 0 \Big>}\nonumber\\
&=& W_0\cdot\hat{W},
\end{eqnarray}
according to the splitting of the potential in zero mode and frequency parts.
In turn the Wilson loop $\hat{W}$ can also be expressed as a Feynman
integral starting
from the QED lagrangian, without the zero mode
\begin{equation}
\label{cinquanta}
\hat{W}(\gamma)={\cal N}^{-1} \Big( exp\Big[g\oint_{\gamma}{\partial
\over {\partial J}}dx^+\Big]\Big)
\Big[\int {\cal D} \hat{A}\, {\cal D}\lambda\, exp\,\,i\Big(\int d^2x({\cal L}+
J\hat{A})\Big)\Big]_{\Big| J=0},
\end{equation}
${\cal N}$ being a suitable normalization factor.
Standard functional integration gives
\begin{eqnarray}
\label{cinquantuno}
\hat{W}(\gamma)&=&{\cal N}^{-1}\Big(exp\Big[g\oint_{\gamma}{\partial
\over {\partial J}}dx^+\Big]\Big) \nonumber\\
&& exp\Big[{i\over 2}\,\int\!\!\!\int d^2\xi d^2\eta
J(\xi) G_{c}(\xi-\eta)
J(\eta)\Big]_{\Big| J=0}.
\end{eqnarray}
and we are led to the expression
\begin{eqnarray}
\label{cinquantasette}
\hat{W}(\gamma)&=&exp\Big[ i\,g^2 \oint_{\gamma}dx^+\,\oint_{\gamma}
dy^+\,{G}_c(x^+-y^+,x^-\,-y^-)\Big]\nonumber\\
&=&exp\Big[ i\,g^2 \oint_{\gamma}dx^+\,\oint_{\gamma}
dy^+\,{G}(x^+-y^+,x^-\,-y^-)\Big]\nonumber\\
&=& exp\Big[-i\,{g^2\,{\cal A} \over 2}\Big]\,
exp\Big[ -i\,g^2 \oint_{\gamma}dx^+\,\oint_{\gamma}
dy^+\,{|x^+ + x^- -y^+ - y^-|\over {4L \sqrt 2}}\Big]
\end{eqnarray}
the absorptive part of the ``potential" averaging to zero in the abelian
case. Therefore the abelian Wilson-loop calculation is unable to
discriminate between the two different Green functions ${G}_c$ and
${G}$. The quantity ${\cal A}=\lambda \tau$ is the area of the loop.
The same result can also be obtained by operatorial
techniques, using Wick's theorem and the canonical algebra.
We are thereby left with the problem of computing $W_0$. In \cite{BGN96}
we have shown that the zero mode contribution
{\it exactly cancels} the last exponential in eq.(\ref{cinquantasette}),
leaving the pure loop area result, only as a consequence of the
canonical algebra, and even in the presence of
a topological degree of freedom. The result coincides
with the one we would have obtained introducing in eq.
(\ref{cinquantasette}) the complete Green's function, $i.e.$ with
the zero mode included. The same area result is obtained also in the non
abelian case if we use the 't Hooft form for the propagator
${1\over 2}|t|\delta_{p}(x+t)$, in spite of
the fact that this form does not have a sound canonical basis and that
factorization in (\ref{quarantanove}) is no longer
justified in the non abelian case. As a matter of fact a little thought
is enough to realize that only planar diagrams survive thanks to the
`contact' nature of the potential, leading to the expression
\begin{equation}
\label{eresia3}
W(\gamma)= exp\Big[-i\,{g^2\,C_F\,\lambda\,\tau \over 2}\Big].
\end{equation}
The area (${\cal A}=\lambda\,\tau$) law behaviour of
the Wilson loop we have found in this case together with the occurrence
of a simple exponentiation in terms
of the Casimir of the fundamental representation, is a quite
peculiar result, insensitive to the decompactification limit
$L\to \infty$. It is rooted in the particularly simple expression
for the ``potential" we have used that coincides with the one
often considered in analogous Euclidean calculations \cite{BR80}.
However canonical quantization suggests that we should rather
use the propagator $G_{c}(t,x)$.
A full resummation of perturbative exchanges is
no longer viable in this case,
owing to the presence of non vanishing cross diagrams, in which
topological excitations mix non trivially with the frequency parts.
Already at ${\cal O}(g^4)$, a tedious but straightforward calculation
of the sum of all the ``cross" diagrams leads to the result
\begin{equation}
\label{follia1}
W_{cr}= -({g^2 \over {4\pi}})^2\, 4\,C_F\, (C_F - {{C_A}\over 2})
({\cal A})^2 \int_{0}^{1}
d\,\xi \int_{0}^{1} d\,\eta\,\log{|\sin\rho(\xi-\eta)|\over
{|\sin\rho\xi|}}\,\log{|\sin\rho(\xi-\eta)|\over
{|\sin\rho\eta|}},
\end{equation}
where $\rho= {\pi\lambda\over{\sqrt 2 L}}$.
One immediately recognizes the appearance of the
quadratic Casimir of the adjoint
representation ($C_A$); moreover, a
dimensionless parameter $\rho$, which measures the ratio of the side
length $\lambda$ to the interval length $L$, explicitly occurs.
In the decompactification limit $\rho\to 0$, the expression of the
cross graph
given in ref. \cite{BDG94}
\begin{equation}
\label{follia2}
W_{cr}=-({g^2 \over {4\pi}})^2\,2 C_F\,(C_F -C_A/2)
({\cal A})^2 {\pi^2\over 3}
\end{equation}
is smoothly recovered. Of course the finite self-energy contribution
found in ref. \cite{BDG94} in the dimensionally regularized theory
cannot appear in a strictly 1+1 dimensional treatment.
It is perhaps not surprising that in the
limit $L \to \infty$
the perturbative result for $W_{cr}$ in the continuum is correctly
reproduced, in spite
of the presence of topological excitations.
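The smooth recovery of Eq.~(\ref{follia2}) can also be checked numerically: for
$\rho\to 0$ the double integral in Eq.~(\ref{follia1}) tends to $\pi^2/6$, which turns
the prefactor $4\,C_F(C_F-C_A/2)$ into the $2\,C_F(C_F-C_A/2)\,\pi^2/3$ of
Eq.~(\ref{follia2}). A rough Monte-Carlo estimate (purely illustrative) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
xi, eta = rng.random(2_000_000), rng.random(2_000_000)

def integrand(xi, eta, rho):
    s = np.abs(np.sin(rho * (xi - eta)))
    return (np.log(s / np.abs(np.sin(rho * xi)))
            * np.log(s / np.abs(np.sin(rho * eta))))

for rho in (0.5, 0.05):                       # the integral at decreasing rho
    print(rho, integrand(xi, eta, rho).mean())

# rho -> 0 limiting integrand: log(|xi-eta|/xi) * log(|xi-eta|/eta)
lim = (np.log(np.abs(xi - eta) / xi) * np.log(np.abs(xi - eta) / eta)).mean()
print(lim, np.pi**2 / 6)                      # both close to 1.6449
\end{verbatim}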
Still the difference between the result obtained with the `contact'
potential and with the causal one is even more striking: in both cases
at large N only planar diagrams survive but they nevertheless give
rise to different expressions for the same Wilson loop.
It seems that $planarity$ is not enough to single out an unambiguous
result.
\section{The 't Hooft bound state equation}
\noindent
In 1974 G. 't Hooft \cite{TH74} proposed a very interesting model to describe
the mesons, starting from a SU(N) Yang-Mills theory in 1+1 dimensions
in the large N limit.
Quite remarkably in this model quarks look confined, while a discrete
set of quark-antiquark bound states emerges, with squared masses lying
on rising Regge trajectories.
The model is solvable thanks to the ``instantaneous'' character of
the potential acting between quark and antiquark.
Three years later such an approach was criticized by T.T. Wu \cite{WU77},
who replaced the instantaneous 't Hooft's potential by an expression
with milder analytical properties, allowing for a Wick's rotation
without extra terms.
Unfortunately this modified formulation led to a quite involved bound
state equation, which could not be solved. An attempt to treat it
numerically in the zero bare mass case for quarks \cite{BS78} led only to
partial answers in the form of a completely different physical
scenario. In particular no rising Regge trajectories were found.
After those pioneering investigations, many interesting papers
followed 't Hooft's approach, pointing out further remarkable
properties of his theory and blooming into the recent achievements
of two dimensional QCD, whereas Wu's approach sank into oblivion.
Still, equal time canonical quantization of Yang-Mills theories
in light-cone gauge \cite{BDLS85} leads precisely in 1+1 dimensions
to the Wu's expression for the
vector exchange between quarks \cite{BDG94}, which is nothing but the 1+1
dimensional version of the Mandelstam-Leibbrandt (ML)
propagator. We have already stressed that this option is mandatory
in order to achieve gauge
invariance and renormalization in 1+(D-1) dimensions.
We follow here the definitions and notations of
refs.\cite{TH74} and \cite{WU77}, which the reader is
invited to consult.
The 't Hooft potential exhibits an infrared singularity
which, in the original formulation, was handled by introducing
an infrared cutoff; a quite remarkable feature of this theory
is that bound state wave functions and related eigenvalues
turn out to be cutoff independent. As a matter of fact in
ref. \cite{CA76}, it has been pointed out that the singularity
at $k_{-}=0$ can also be regularized by a Cauchy principal
value ($CPV$) prescription without finding differences in gauge
invariant quantities. Then, the difference between the two
potentials is represented by the following distribution
\begin{equation}
\label{unoa}
\Delta (k)\equiv {{1}\over {(k_{-}-i\epsilon
sign (k_{+}))^2}} - P\Big({{1}\over {k_{-}^2}}\Big)= - i \pi
sign (k_{+}) \delta^{\prime}(k_{-}).
\end{equation}
In ref.\cite{BG96}, which we closely follow in the sequel,
this quantity has been treated as an insertion
in the Wu's
integral
equations for the quark propagator and for the bound state wave
function, starting from 't Hooft's solutions. $Exactly$
the same planar diagrams of
refs.\cite{TH74} and \cite{WU77}, which are the relevant ones
in the large $N$ limit, are summed.
The Wu's integral equation for the quark self-energy in the Minkowski
momentum space is
\begin{eqnarray}
\label{unob}
\Sigma(p;\eta)&=& i {{g^2}\over {\pi^2}} {{\partial}\over {\partial p_{-}}}
\int dk_{+}dk_{-} \Big[P\Big({{1}\over {k_{-}-p_{-}}}\Big)+
i \eta \pi sign (k_{+}-p_{+}) \delta (k_{-}-p_{-})\Big]\nonumber\\
&\cdot&{{k_{-}}\over {k^2+m^2-k_{-}\Sigma (k;\eta)-i\epsilon}},
\end{eqnarray}
where $g^2=g_0^2 \,N$ and $\eta$ is a real parameter which is used
as a counter of insertions and eventually should be set equal to 1.
Its exact solution with appropriate boundary conditions reads
\begin{eqnarray}
\label{uno}
\Sigma(p;\eta)&=& {{1}\over {2p_{-}}}\Big(\Big[p^2+m^2+(1-\eta){{g^2}\over
{\pi}}\Big]-\Big[p^2+m^2-(1-\eta){{g^2}\over
{\pi}}\Big]\nonumber\\
&\cdot&\sqrt {1- {{4\eta g^2 p^2}\over {\pi(p^2+m^2-(1-\eta){{g^2}\over
{\pi}}-i\epsilon)^2}}}\,\,\Big).
\end{eqnarray}
One can immediately realize that 't Hooft's and Wu's solutions
are recovered for $\eta =0$ and $\eta =1$ respectively.
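The $\eta=0$ limit is easily verified symbolically; the following illustrative snippet
(with the $i\epsilon$ regulator dropped, which suffices for the algebra) reproduces
't~Hooft's self-energy $g^2/(\pi p_-)$ from Eq.~(\ref{uno}):
\begin{verbatim}
import sympy as sp

g, m, eta, p2, pm = sp.symbols('g m eta p2 p_minus', real=True)
pi = sp.pi

A = p2 + m**2 + (1 - eta) * g**2 / pi
B = p2 + m**2 - (1 - eta) * g**2 / pi
Sigma = (A - B * sp.sqrt(1 - 4*eta*g**2*p2 / (pi * B**2))) / (2 * pm)

print(sp.simplify(Sigma.subs(eta, 0)))   # -> g**2/(pi*p_minus), 't Hooft
print(sp.simplify(Sigma.subs(eta, 1)))   # Wu's form, square root retained
\end{verbatim}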
The dressed quark propagator turns out to be
\begin{equation}
\label{due}
S(p;\eta) = - {{i p_{-}}\over {m^2+2 p_{+}p_{-}- p_{-}\Sigma(p;\eta)}}.
\end{equation}
Wu's bound state equation in
Minkowski space, using light-cone coordinates, is
\begin{eqnarray}
\label{tre}
\psi(p,r)&=& {{-ig^2}\over {\pi ^2}} S(p;\eta) S(p-r;\eta)
\int dk_{+}dk_{-} \Big[P\Big({{1}\over
{(k_{-}-p_{-})^2}}\Big)-\nonumber\\
&-&
i \eta \pi sign (k_{+}-p_{+}) \delta^{\prime} (k_{-}-p_{-})\Big]
\psi(k,r).
\end{eqnarray}
We are here considering for simplicity the equal mass case and $\eta$
should be set equal to 1.
Let us denote by $\phi_{k}(x),\,\, 0\le \! x= {{p_{-}}\over {r_{-}}}\le
\!1,\,\, r_{-}>0$,
the 't Hooft's eigenfunction corresponding
to the eigenvalue $\alpha_{k}$ for the quantity ${{-2 r_{+}r_{-}}\over
{M^2}}$, where $M^2= m^2 - {{g^2}\over {\pi}}$.
Those eigenfunctions are
real, of definite parity under the exchange $x \to 1-x$ and vanishing
outside the interval $0<x<1$:
\begin{eqnarray}
\label{quattro}
\phi_{k}(x)&=& \int dp_{+} {{r_{-}}\over {M^2}}
\psi_{k}(p_{+},p_{-},r),\nonumber\\
i \pi\,\psi_{k}&=&\phi_{k}(x) {{M^4}\over {M^2+2r_{-}p_{+}x
-i\epsilon}}\cdot\nonumber\\
&\cdot&{{1-\alpha_{k}x(1-x)}\over {M^2-\alpha_{k}M^2(1-x)-
2r_{-}p_{+}(1-x)- i \epsilon}}.
\end{eqnarray}
They are solutions of eq.(\ref{tre}) for $\eta=0$ and
form a complete set.
We are interested in a first order calculation in $\eta$.
This procedure
is likely to be sensible only in the weak coupling region
${{g_0^2}\over {\pi}}< m^2$.
The integral equation (\ref{tre}), after first order
expansion in $\eta$ of its kernel, becomes
\begin{eqnarray}
\label{cinque}
\psi(p_{+},p_{-},r)&=& {{ig^2}\over {\pi^2}}{{p_{-}}\over
{M^2+2p_{+}p_{-}-i\epsilon}}{{p_{-}-r_{-}}\over
{M^2+2(p_{+}-r_{+})(p_{-}-r_{-})-i\epsilon}}\nonumber\\
\cdot\Big[\Big(1-{{\eta g^2 M^2}\over
{\pi}}&[&(M^2+2p_{+}p_{-}-i\epsilon)^{-2}+(M^2+2(p_{+}-r_{+})(p_{-}-r_{-})
-i\epsilon)^{-2}]\Big)\nonumber\\
&\cdot&\int dk_{+}dk_{-}
P{{1}\over{(k_{-}-p_{-})^2}}\psi(k_{+},k_{-},r)-\nonumber\\
&-&i \pi \eta \int dk_{+}dk_{-} sign(k_{+}-p_{+})
\delta^{\prime}(k_{-}-p_{-})\psi(k_{+},k_{-},r)\Big].
\end{eqnarray}
We integrate this equation over $p_{+}$ with $r_{-}>0$ and look
for solutions with the same support properties of 't Hooft's ones.
We get
\begin{eqnarray}
\label{cinquea}
&&\phi(x,r)= {{g^2}\over{\pi M^2}}{{x(1-x)}\over{1-\alpha
x(1-x)-i\epsilon}}\Big[\Big(1-\eta{{g^2}\over{\pi M^2}}
{{x^2+(1-x)^2}\over{(1-\alpha
x(1-x)-i\epsilon)^2}}\Big)\nonumber\\
\cdot &P&\int_0^1 {{dy}\over {(y-x)^2}}\phi(y,r)
-{{\alpha \eta}\over{2}}\int d\xi \log {{{{1}\over{1-x}}-\alpha (1-\xi)-i
\epsilon}\over {{{1}\over{x}}-\alpha
\xi-i\epsilon}}\psi^{\prime}(\xi,x,r)\Big],
\end{eqnarray}
where $\prime$ means derivative with respect to $x$.
It is now straightforward to check that 't Hooft's solution
$\psi_{k}(p_{+},p_{-},r)$ is indeed a solution also of this
equation when $\alpha$
is set equal to $\alpha_{k}$, for any value of $\eta$, in particular
for $\eta=1$, thanks to a precise cancellation of the contributions
coming from the propagators (``virtual'' insertions) against the
extra term due to the modified form of the ``potential'' (``real''
insertion). In other words the extra piece of the kernel at
$\alpha=\alpha_{k}$ vanishes when acting on $\psi_{k}$ as a perturbation. This
phenomenon is analogous to the one occurring, with respect to the same extra
term, in one loop perturbative four-dimensional calculations concerning
Altarelli-Parisi \cite{BA93} and Balitsky-Fadin-Kuraev-Lipatov
\cite{BR93} kernels. This analogy may have far-reaching consequences.
As a matter of fact, taking 't Hooft's equation
into account, we get
\begin{eqnarray}
\label{sei}
&&\Big[1-{{\eta g^2}\over{\pi M^2
[1-\alpha x(1-x)-i\epsilon]^2}}\Big((1-x)^2+
x^2\Big[1+{{1-\alpha x(1-x)}\over{1-\alpha_{k}
x(1-x)-i\epsilon}}\Big]\Big)\Big]\cdot\nonumber\\
&\cdot&(\alpha_{k}-\alpha)\phi_{k}(x)=
{{\eta g^2}\over {\pi
M^2}}\,\,\phi_{k}^{\prime}(x)\,\, \log {{1-\alpha_{k}x(1-x)-i\epsilon}
\over{1-\alpha x(1-x)-i\epsilon}}.
\end{eqnarray}
There are no corrections from a single insertion in the kernel to
't Hooft eigenvalues and eigenfunctions.
We stress that this result does
not depend on their detailed form,
but only on their general properties.
The ghosts which are responsible for the causal
behaviour of the ML propagator do not modify
the bound state spectrum, as their ``real'' contribution
cancels against the ``virtual'' one in propagators.
Wu's equation for colorless bound states,
although much more involved than the
corresponding 't Hooft's one, might still apply.
This is
the heuristic lesson one learns from a single insertion in the kernel
and is in agreement with the mentioned similar mechanism occurring
in four-dimensional perturbative QCD.
Unfortunately this conclusion holds only at the level of a single
insertion and may be a consequence of one loop unitarity which
tightly relates `real' to `virtual' exchanges. Already when
two insertions are taken into account, deviations are seen from
't Hooft spectrum \cite{BNS96}. This is not a surprise as
Wu's equation is deeply
different from 't Hooft's one and might describe
the theory in a different phase (see for instance
\cite{ZY95}).
Planarity plays a crucial role in both formulations; indeed the two equations
sum exactly the same set of diagrams (the planar ones), which are thought to
be the most important ones in the large N limit. The first lesson one
learns is that planarity by itself is not sufficient to set up unambiguously
a physical scenario.
Now there are good arguments \cite{ZY95} explaining why
planarity should break down in the limit $m\to 0$. The same situation
should occur when $m^2<{{g_0^2}\over \pi}$, which corresponds to a `strong'
coupling situation, where we know that 't Hooft's solution can no longer
be trusted.
What about the `weak' coupling regime? If we believe that
't Hooft's picture correctly describes the physics in two
dimensions, which in turn should be represented by planar diagrams,
we would conclude that in 1+1 dimensions planarity is not a good approximation
in the causal formulation of the theory. Indeed the results we obtain
in the latter case are definitely different from 't Hooft ones.
This is a very basic issue
in our opinion, which definitely deserves further investigation.
This is even more compelling should this situation persist in
higher dimensions where causality is mandatory in order to
obtain an acceptable formulation of the theory.
\vskip .5truecm
\nonumsection{Acknowledgements}
\noindent
We thank L. Griguolo for many useful discussions and friendly
collaboration.
\nonumsection{References}
\noindent
\section{Introduction}
The subject of this report is the production of jets with large transverse
momenta
in diffractive deep inelastic scattering.
We will concentrate on events with two jets in the final state,
i.\,e.\ events with a quark-- and an antiquark jet.
Due to the high photon virtuality $Q^2$ and the large transverse
momenta of the jets we can use perturbative QCD to describe this
process. From a theoretical point of view,
diffractive jet production should allow an even better test of pQCD
than diffractive vector meson production since it does not involve
the uncertainties connected with the wave function of the meson.
After defining the kinematic variables we will present in some
detail the main results of our analysis that was performed for
the production of light quark jets.
A more extensive account of these results can be found
in \cite{BLW,BELW}.
Related and in part similar work on quark--antiquark jet
production has been reported in \cite{Ryskin,NikZak}.
We will close this report with a comment on open charm production.
\section{Kinematics}
We assume the total energy $s$ to be much larger than the photon
virtuality $Q^2=-q^2$ and much larger than the squared
invariant mass $M^2 = (q + x_{I\!\!P}p)^2$ of the jet pair.
\begin{figure}[htb]
\begin{center}
\leavevmode
\input{diffkin80.pstex_t}
\end{center}
\caption{{\it
One of the four diagrams contributing to the hard scattering
$\gamma^* + p \rightarrow q\bar{q} + p$. The outgoing (anti)quark
momenta are held fixed.
}}
\label{kinematics}
\end{figure}
We constrain our analysis to events with a rapidity gap and
we require the momentum fraction $x_{I\!\!P}$ of the proton's
momentum carried by the pomeron to be small, $x_{I\!\!P} \ll 1$.
$x_{I\!\!P}$ can be expressed as
$x_{I\!\!P} = (M^2 + Q^2)/(W^2 + Q^2)$ where $W$ is the invariant
mass of the final state (including the outgoing proton).
We keep the transverse momentum ${\bf k}$ of the jets fixed with
${\bf k}^2 \ge 1\, \mbox{GeV}^2$.
It will be convenient to use also $\beta = x_B/x_{I\!\!P} = Q^2/(M^2+Q^2)$.
The momentum transfer $t$ is taken to be zero because the cross
section strongly peaks at this point. An appropriate $t$--dependence
taken from the elastic proton form factor is put in later by hand.
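For orientation, the kinematic map just described is easily coded; the numbers in the
example are illustrative choices, not values taken from our event selection:
\begin{verbatim}
def diffractive_kinematics(Q2, M2, W):
    """Q2, M2 in GeV^2, W in GeV; returns (x_pomeron, beta, x_Bjorken)."""
    x_pom = (M2 + Q2) / (W**2 + Q2)
    beta = Q2 / (M2 + Q2)
    return x_pom, beta, beta * x_pom

# e.g. Q^2 = 50 GeV^2, M^2 = 25 GeV^2, W = 120 GeV
print(diffractive_kinematics(50.0, 25.0, 120.0))
# -> x_pom ~ 5.2e-3, beta = 2/3, x_B ~ 3.5e-3
\end{verbatim}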
For large energy $s$ (small $x_{I\!\!P}$) the amplitude is dominated
by perturbative two gluon exchange as indicated in
fig.\ \ref{kinematics}, where the kinematic variables are
illustrated.
In fig.\ \ref{angles} we define the angle $\phi$ between
the electron scattering plane and the direction of the quark jet
pointing in the proton hemisphere (jet 1 in the figure).
The angle $\phi$ is
defined in the $\gamma^\ast$--$I\!\!P$ center of mass system
and runs from $0$ to $2 \pi$.
\begin{figure}[htb]
\begin{center}
\leavevmode
\input{swangles80.pstex_t}
\end{center}
\caption{{\it
Definition of the azimuthal angle $\phi$ in the $\gamma^\ast$--$I\!\!P$ CMS
}}
\label{angles}
\end{figure}
\section{Results}
In the double logarithmic approximation (DLA),
the amplitude of the process can be expressed in terms of the
gluon structure function.
The cross section is therefore proportional
to the square of the gluon structure function.
The momentum scale of the latter can be calculated.
We thus find as one of our main results
\begin{equation}
d \sigma \sim \left[ \, x_{I\!\!P} G_p
\left( x_{I\!\!P} ,{\bf k}^2 \frac{Q^2+M^2}{M^2}
\right) \right]^2 \,.
\label{sigmagluon}
\end{equation}
Performing our numerical estimates, however, we include some of the
next--to--leading corrections (proportional to the momentum
derivative of the gluon structure function) which we expect to
be the numerically most important ones.
From (\ref{sigmagluon}) one can deduce that, in our model,
Regge factorization \`a la Ingelman and Schlein \cite{IngSchl}
is not valid, i.\,e.\ the cross section can {\em not}
be written as a $x_{I\!\!P}$--dependent flux factor times a
$\beta$-- and $Q^2$--dependent function.
In the following we present only
the contribution of transversely polarized photons
to the cross section. The corresponding plots for longitudinal
polarization can be found in \cite{BLW,BELW}. As a rule
of thumb, the longitudinal contribution is smaller
by a factor of ten. Only in the region of large $\beta$
it becomes comparable in size.
Figure \ref{fig:xpdep} shows the $x_{I\!\!P}$--dependence of the cross
section for $Q^2 = 50 \,\mbox{GeV}^2$, $\beta=2/3$ and ${\bf k}^2 > 2
\,\mbox{GeV}^2$. We use GRV next--to--leading order parton
distribution functions \cite{GRVNL}. Our prediction is compared with
the cross section obtained in the soft pomeron model of Landshoff and
Nachtmann \cite{Diehl,LNDL} with nonperturbative two--gluon exchange.
Its flat $x_{I\!\!P}$--dependence, characteristic of the soft pomeron,
is quite in contrast to our prediction.
Further, we have included (indicated by {\em hybrid})
a prediction obtained in the framework
of the model by M.\ W\"usthoff \cite{MarkPHD}.
This model introduces a parametrization of the pomeron based on
a fit to small $x_B$ data for the proton structure function $F_2$.
In this fit, the pomeron intercept is made scale dependent in order
to account for the transition from soft to hard regions.
The $x_{I\!\!P}$--dependence is not quite as steep
as ours but comparable in size.
\setlength{\unitlength}{1cm}
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{6cm}
\input{fig2a.pstex_t}
\caption{{\it $x_{I\!\!P}$--spectrum}}
\label{fig:xpdep}
\end{minipage}
\hspace{1cm}
\begin{minipage}{6cm}
\input{fig4a.pstex_t}
\caption{{\it ${\bf k}^2$--spectrum}}
\label{fig:ktdep}
\end{minipage}
\end{center}
\end{figure}
Figure \ref{fig:ktdep} presents the ${\bf k}^2$--spectrum for
different values of $Q^2$ between $15 \,\mbox{GeV}^2$
and $45 \,\mbox{GeV}^2$. Here we have chosen
$x_{I\!\!P} = 5 \cdot 10^{-3}$ and $\beta = 2/3$.
The quantity $\delta$ given with each $Q^2$ value describes
the effective slope of the curves as obtained from a numerical fit
of a power behaviour $\sim ({\bf k}^2)^{-\delta}$. We have taken ${\bf k}^2$ down to
0.5 GeV$^2$. For $\beta=2/3$ the effective momentum scale of the gluon
structure function in (1) equals ${\bf k}^2 /(1-\beta)=1.5$ GeV$^2$.
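As an illustration (not part of the original analysis), the effective scale and the
extraction of the slope $\delta$ from a log--log fit can be sketched as follows; the
cross-section values in the snippet are synthetic placeholders:
\begin{verbatim}
import numpy as np

def gluon_scale(k2, beta):
    """Effective scale k^2 (Q^2+M^2)/M^2 = k^2/(1-beta) of the gluon density."""
    return k2 / (1.0 - beta)

print(gluon_scale(0.5, 2.0/3.0))        # 1.5 GeV^2, as quoted in the text

# slope of a power law ~ (k^2)^(-delta) from a straight-line fit in log-log
k2 = np.array([2.0, 4.0, 8.0, 16.0])
dsigma = 10.0 * k2**-3.4                # toy input with delta = 3.4
delta = -np.polyfit(np.log(k2), np.log(dsigma), 1)[0]
print(delta)                            # recovers 3.4
\end{verbatim}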
Integrating the cross section for different minimal values
of ${\bf k}^2$ we find that the total cross section is dominated
by the region of small ${\bf k}^2$. If we choose, for instance,
$x_{I\!\!P} < 0.01$, $10 \,\mbox{GeV}^2 \le Q^2$ and
$50 \,\mbox{GeV} \le W \le 220 \,\mbox{GeV}$, the total cross section
is
\begin{eqnarray}
\sigma_{\mbox{\scriptsize tot}} &=& 20\,\mbox{pb} \phantom{0}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 5 \,\mbox{GeV}^2 \nonumber\\
\sigma_{\mbox{\scriptsize tot}} &=& 117\,\mbox{pb} \;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 2 \,\mbox{GeV}^2 \,. \nonumber
\end{eqnarray}
In the hybrid model of \cite{MarkPHD} the corresponding numbers are
\begin{eqnarray}
\sigma_{\mbox{\scriptsize tot}} &=& 28\,\mbox{pb} \phantom{0}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 5 \,\mbox{GeV}^2 \nonumber\\
\sigma_{\mbox{\scriptsize tot}} &=& 108\,\mbox{pb} \;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 2 \,\mbox{GeV}^2 \,. \nonumber
\end{eqnarray}
In accordance with fig.\ \ref{fig:ktdep} these numbers show that
the cross section is strongly suppressed with ${\bf k}^2$.
We are thus observing a higher twist effect here.
For comparison, we quote the numbers which are obtained
in the soft pomeron model \cite{Diehl}. With the same cuts the total
cross section is
\begin{eqnarray}
\sigma_{\mbox{\scriptsize tot}} &=& 10.5\,\mbox{pb} \;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 5 \,\mbox{GeV}^2 \nonumber\\
\sigma_{\mbox{\scriptsize tot}} &=& 64 \,\mbox{pb} \phantom{.5}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 2 \,\mbox{GeV}^2 \,. \nonumber
\end{eqnarray}
The $\beta$--spectrum of the cross section is shown in fig.\
\ref{fig:betadep} for three different values of $Q^2$.
Here we have chosen $x_{I\!\!P} = 5 \cdot 10^{-3}$
and ${\bf k}^2 > 2\,\mbox{GeV}^2$.
The curves exhibit maxima which,
for not too large $Q^2$, are located well below $\beta = 0.5$.
For small $\beta$ we expect the production of an extra gluon to become
important. First studies in this direction have been reported
in \cite{MarkPHD,LW}
and in \cite{Levin}, but a complete calculation has not been done yet.
\setlength{\unitlength}{1cm}
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{6cm}
\input{fig5a.pstex_t}
\caption{{\it $\beta$--spectrum}}
\label{fig:betadep}
\end{minipage}
\hspace{1cm}
\begin{minipage}{6cm}
\input{angle.pstex_t}
\caption{{\it Azimuthal angular distribution}}
\label{fig:phidep}
\end{minipage}
\end{center}
\end{figure}
The most striking observation made in \cite{BELW} concerns the
azimuthal angular distribution, i.\,e.\ the $\phi$--distribution
of the jets. It turns out that
the jets prefer a plane {\em perpendicular} to the electron
scattering plane. This behaviour comes as a surprise because
in a boson gluon fusion process the jets appear
dominantly {\em in} the electron scattering plane \cite{BGF}.
The azimuthal angular distribution therefore provides a clear
signal for the two gluon nature of the exchanged pomeron.
This is supported by the fact that a very similar azimuthal
distribution is obtained in the soft pomeron model by M.\ Diehl
\cite{Diehl,Diehl2}.
Figure \ref{fig:phidep} shows the $\phi$--dependence of the
$ep$--cross section for the hard pomeron model, the soft pomeron
model and for a boson gluon fusion process.
We have normalized the cross section to unit integral to
concentrate on the angular dependence.
Thus a measurement of the azimuthal asymmetry of quark--antiquark
jets will clearly improve our understanding of diffractive
processes.
Finally, we would like to mention the interesting issue of
diffractive open charm production.
It is in principle straightforward to extend our calculation to
nonvanishing quark masses \cite{Lotter}.
A similar computation was done in \cite{NikZakcharm}.
A more ambitious calculation has
been performed in \cite{Durham} where also higher order
corrections have been estimated.
The cross section for open charm production is again
proportional to the square of the gluon density, the relevant
scale of which is now modified by the charm quark mass
\begin{equation}
d \sigma \sim \left[ \, x_{I\!\!P} G_p
\left( x_{I\!\!P} ,(m_c^2 +{\bf k}^2)\, \frac{Q^2+M^2}{M^2}
\right) \right]^2 \,.
\end{equation}
Integrating the phase space with the same cuts as above the
charm contribution to the jet cross section is found to be \cite{Lotter}
\begin{eqnarray}
\sigma_{\mbox{\scriptsize tot}} &=& 8\,\mbox{pb} \phantom{0}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 5 \,\mbox{GeV}^2 \nonumber\\
\sigma_{\mbox{\scriptsize tot}} &=& 29\,\mbox{pb} \;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 2 \,\mbox{GeV}^2 \,. \nonumber
\end{eqnarray}
In the case of open charm production one can even integrate down to
${\bf k}^2=0$ since the charm quark mass sets the hard scale. One then
finds for the $c\bar{c}$ contribution to the total diffractive cross section
\begin{eqnarray}
\sigma_{\mbox{\scriptsize tot}} &=& 101\,\mbox{pb} \phantom{0}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 0 \,\mbox{GeV}^2 \;.\nonumber
\end{eqnarray}
In the soft pomeron model the corresponding numbers read \cite{Diehl}
\begin{eqnarray}
\sigma_{\mbox{\scriptsize tot}} &=& 4.8\,\mbox{pb} \;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 5 \,\mbox{GeV}^2 \nonumber\\
\sigma_{\mbox{\scriptsize tot}} &=& 17\,\mbox{pb} \phantom{.}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 2 \,\mbox{GeV}^2 \nonumber\\
\sigma_{\mbox{\scriptsize tot}} &=& 59\,\mbox{pb} \phantom{.}\;\;\;\;\;
\mbox{for} \; {\bf k}^2 \ge 0 \,\mbox{GeV}^2 \,. \nonumber
\end{eqnarray}
In \cite{Durham}, where corrections from gluon radiation
are taken into account,
the relative charm contribution to the
diffractive structure function is estimated to be of the order of
25--30\%.
\section{Summary and Outlook}
We have presented perturbative QCD calculations for the production
of quark--antiquark jets in DIS diffractive dissociation.
The results are parameter free predictions of the corresponding
cross sections, available for light quark jets
as well as for open charm production.
The cross sections are proportional to the square of the gluon
density and the relevant momentum scale of the gluon density
has been determined.
The azimuthal angular distribution of the
light quark jets can serve as a clean
signal for the two--gluon nature of the pomeron in hard processes.
The two--jet final state is only the simplest case of
jet production in diffractive deep inelastic scattering.
The next steps should be the inclusion of order--$\alpha_s$ corrections
and the extension to processes with additional
gluon jets. Such processes become dominant in the large--$M^2$ region.
\section{INTRODUCTION}
In SU(2) lattice gauge theory the plaquette,
$P \equiv \frac{1}{2} \mbox{tr} U_{\Box}$ takes values from $-1$ to $1$.
The action studied is $S_{\Box} = S_W = (1-P)$ if $P \ge c$
and $\infty$ if $P < c$, where
$c =$ cutoff parameter, $-1 \le c \le 1$. For instance, the
positive plaquette action\cite{mack} has $c=0$.
Since the continuum limit is determined only by the behavior of the
action in an infinitesimal region around its minimum which occurs around $P=1$,
this action should have the same continuum limit as the Wilson action
for all values of $c$ except $c=1$.
This study was primarily designed to determine whether a sufficiently
strong cutoff would cause the system to deconfine. To give
the lattices the maximum chance to confine, the strong coupling limit,
$\beta \rightarrow 0^+$, was taken. This removes the Wilson
part of the action completely, leaving only the plaquette restriction.
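For illustration only, a minimal sketch of such a constrained update in the
$\beta \rightarrow 0^+$ limit is given below (Python/NumPy). It is not the production
code used for this study; lattice size, cutoff and proposal step are placeholder values,
and the update simply samples link configurations uniformly subject to $P \ge c$:
\begin{verbatim}
import numpy as np

SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)    # Pauli matrices

def random_su2(eps, rng):
    """SU(2) element near 1: cos(t/2) + i sin(t/2) n.sigma, |t| <= eps."""
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    t = eps * (2.0 * rng.random() - 1.0)
    return (np.cos(t/2) * np.eye(2)
            + 1j * np.sin(t/2) * np.einsum('a,aij->ij', n, SIGMA))

def plaquette(U, x, mu, nu, N):
    """P = (1/2) Re tr U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag."""
    xmu = list(x); xmu[mu] = (xmu[mu] + 1) % N
    xnu = list(x); xnu[nu] = (xnu[nu] + 1) % N
    M = (U[tuple(x)][mu] @ U[tuple(xmu)][nu]
         @ U[tuple(xnu)][mu].conj().T @ U[tuple(x)][nu].conj().T)
    return 0.5 * np.trace(M).real

def restricted_update(U, x, mu, c, eps, N, rng):
    """beta -> 0+ step: the Wilson weight is trivial, so the proposal is
    accepted iff all 6 plaquettes containing the link keep P >= c."""
    old = U[tuple(x)][mu].copy()
    U[tuple(x)][mu] = random_su2(eps, rng) @ old
    for nu in range(4):
        if nu == mu:
            continue
        xb = list(x); xb[nu] = (xb[nu] - 1) % N
        if plaquette(U, x, mu, nu, N) < c or plaquette(U, xb, mu, nu, N) < c:
            U[tuple(x)][mu] = old            # reject: cutoff would be violated
            return False
    return True

N, c, eps = 4, 0.5, 0.6                      # tiny lattice, for illustration
rng = np.random.default_rng(0)
U = np.tile(np.eye(2, dtype=complex), (N, N, N, N, 4, 1, 1))   # cold start

for sweep in range(10):
    acc = 0
    for x in np.ndindex(N, N, N, N):
        for mu in range(4):
            acc += restricted_update(U, list(x), mu, c, eps, N, rng)
    print(sweep, acc / (4 * N**4))           # acceptance of constrained updates
\end{verbatim}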
The study was carried out on symmetric lattices from $6^4$ to $20^4$ in an
attempt to study the zero-temperature system. However, as is well known,
symmetric lattices are not really at zero temperature until one takes the
limit of infinite lattice size. Finite size lattices are
at a finite temperature corresponding to their inverse time extent. In this
sense, finite symmetric lattices can be thought of as small spatial-volume
finite temperature systems. The big question is whether the transition observed
here as a function of the cutoff is a bulk transition which will remain
on the infinite lattice, or the remnant of the usual finite temperature
transition which will disappear by moving to $c=1$ on the infinite lattice.
\section{MOTIVATION}
The purpose of an action restriction is to eliminate lattice
artifacts which occur at strong coupling, and which may give misleading
results that have nothing to do with the continuum limit. For instance,
the original motivation of the positive plaquette action was to eliminate
single-plaquette
$Z_2$ monopoles and strings, which were possibly thought to be responsible
for confinement \cite{mack,grady1}. However,
for strong enough couplings the positive-plaquette
action was shown to still confine \cite{heller}. Eliminating these artifacts, does,
however,
appear to remove the ``dip'' from the beta-function in the crossover region,
possibly improving the scaling properties of the theory, and suggesting that
this dip is indeed an artifact of non-universal
strong coupling aspects of the Wilson action.
For the U(1) theory, all monopoles are eliminated for $c \ge 0.5$, since
in this case six plaquettes cannot carry a visible flux totaling $\pm 2 \pi$,
($\cos (\pi /3) = 0.5$). Therefore a Dirac string cannot end and there are no
monopoles. The theory will be deconfined for all $\beta$ for $c\ge 0.5$,
since confinement is due to
single-plaquette monopoles in this theory.
SU(2) confinement may be related to U(1) confinement through abelian
projection. It is an interesting question whether a similar
action restriction which limits the non-abelian flux carried by an SU(2)
plaquette will eliminate the associated abelian monopoles causing
deconfinement, or whether
surviving large monopoles can keep
SU(2) confining for all couplings. A related question is how can
the SU(2) average plaquette go to unity in
the weak coupling limit, but the corresponding effective U(1)
average plaquette stay in the
strong coupling confining region ($\leq .6$)? Chernodub et.\ al.\ have found
$<\!\! P_{\rm{U(1)}}\!\! > \geq 0.82<\!\! P_{\rm{SU(2)}}\!\! >$
over a wide range of couplings \cite{ch}.
\begin{figure}[htb]
\vspace{9pt}
\framebox[55mm]{\rule[-21mm]{0mm}{50mm}}
\caption{Polyakov Loop Modulus. The lower curves are reduced by the
value that a Gaussian of the same width would be expected to have.}
\label{fig:2}
\end{figure}
\begin{figure}[htb]
\vspace{9pt}
\framebox[55mm]{\rule[-21mm]{0mm}{50mm}}
\caption{Polyakov loop histograms from $8^4$ lattice showing typical symmetry breaking behavior.}
\label{fig:3}
\end{figure}
\section{Results}
A deconfining phase transition is observed on all lattices studied
($6^4$ to $20^4$). The behavior
of the Polyakov loop, $L$, shows the normal symmetry breaking behavior and
is smooth, suggesting a second or higher order transition (Figs 1,2).
The transition point has a clear lattice size (N) dependence (Figs 1,3).
The normalized fourth cumulant, $g_4 \equiv 3 - \!\! <\!\! L^4 \!\! >\!\! /\!\!
< \!\! L^2 \!\! >^2 $,
shows fixed point behavior
(no discernible lattice size dependence) for $c\geq 0.6$
at a non-trivial value around $g_4 = 1.6$ (Fig 3).
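For reference, the cumulant is obtained from the per-configuration Polyakov loop values
as in the following illustrative snippet; the synthetic samples only fix the conventions
(a symmetric Gaussian gives $g_4\simeq 0$, two sharp symmetric peaks give $g_4\simeq 2$):
\begin{verbatim}
import numpy as np

def g4(L):
    """Normalized fourth cumulant g_4 = 3 - <L^4>/<L^2>^2."""
    L = np.asarray(L, dtype=float)
    return 3.0 - np.mean(L**4) / np.mean(L**2)**2

rng = np.random.default_rng(0)
print(g4(rng.normal(0.0, 0.1, 100_000)))        # ~ 0 (symmetric Gaussian)
print(g4(rng.choice([-0.5, 0.5], 100_000)
         + rng.normal(0.0, 0.01, 100_000)))     # ~ 2 (two sharp peaks)
\end{verbatim}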
\begin{figure}[htb]
\vspace{9pt}
\framebox[55mm]{\rule[-21mm]{0mm}{50mm}}
\caption{Normalized fourth cumulant of the Polyakov Loop.}
\label{fig:4}
\end{figure}
This suggests either
a line of critical points for $c \ge 0.6$, e.g. from a massless gluon phase,
or that correlation lengths are so
large that finite lattice size dependence is hidden. For the standard
interpretation to
hold the
normalized fourth cumulant should go to zero as lattice size approaches
infinity
for all values of $c$.
The Polyakov loop susceptibility shows signs of diverging with lattice size ($N$) for
$c \geq 0.6$ and converging to a finite value for $c \leq 0.5$ (Fig. 4). This supports
the idea of a different behavior in these regions in the infinite volume
limit.
\begin{figure}[htb]
\vspace{9pt}
\framebox[55mm]{\rule[-21mm]{0mm}{50mm}}
\caption{Lattice size dependence of the plaquette susceptibility. Constant
large-lattice behavior for lower cutoffs indicates that dividing by $<\! L\! >$
correctly normalizes the changing order parameter (the definition changes
with lattice size).}
\label{fig:6}
\end{figure}
Extrapolations of finite lattice ``critical points'' to infinite lattice
size are consistent with infinite lattice critical points ranging from
around $c^{\infty}=0.55$ to $c^{\infty}=1.0$ depending on scaling function assumed,
so are basically inconclusive (Fig 5). Although a straightforward extrapolation
gives infinite lattice critical cutoff around $c^{\infty}=0.55$, a logarithmic scaling
function can be made consistent with $c^{\infty} = 1.0$ in which case the transition
would no longer exist on the infinite lattice.
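A sketch of how such extrapolations can be carried out is given below; the $c^*(N)$
values and the two ansatz forms are hypothetical placeholders chosen only to illustrate
the procedure, not our measured data:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

N  = np.array([6., 8., 10., 12., 16., 20.])          # placeholder data
cs = np.array([0.28, 0.35, 0.40, 0.43, 0.47, 0.50])

def power_law(N, c_inf, a, p):
    return c_inf - a * N**(-p)

def log_form(N, c_inf, a, b):
    return c_inf - a / np.log(b * N)

for f in (power_law, log_form):
    popt, _ = curve_fit(f, N, cs, p0=(0.6, 1.0, 1.0), maxfev=20000)
    print(f.__name__, 'c_infinity =', popt[0])
# The two ansaetze can give quite different c_infinity from the same points,
# which is the ambiguity described in the text.
\end{verbatim}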
Finally, the pseudo-specific heat from plaquette fluctuations shows a
definite signal
of a broad peak or shoulder beginning to form for the
largest lattice sizes (Fig 6),
near
the location of deconfinement. Interestingly, no noticeable effect
was seen here on the smaller lattices, but the $16^4$ and $20^4$ results are
definitely shifted up. This indicates a rather large
scaling exponent. Preliminary analysis shows an exponent near $4$, i.e.
a peak height diverging according to the space-time volume (if indeed
it is a peak). This would indicate a possible weak first-order transition.
Since this is a bulk (local in space-time) quantity, a divergence here would
be a surprise for a finite-temperature transition.
Clearly larger lattices and better statistics are needed
to get the full picture behind this rich and interesting system.
\begin{figure}[htb]
\vspace{9pt}
\framebox[55mm]{\rule[-21mm]{0mm}{50mm}}
\caption{Extrapolation of critical point to infinite volume. The finite
lattice critical point is arbitrarily defined
by $g_4 (c^*)=0.25.$}
\label{fig:7}
\end{figure}
\begin{figure}[htb]
\vspace{9pt}
\framebox[55mm]{\rule[-21mm]{0mm}{50mm}}
\caption{Pseudo-specific heat from plaquette fluctuations.}
\label{fig:8}
\end{figure}
\section{CONCLUSIONS}
The imposition of an action restriction clearly induces
a deconfining ``phase transition'' on a finite symmetric
lattice,
even in the strong coupling limit (of course there really is no phase transition on
a finite lattice).
The extrapolation to infinite lattice size is, as always, difficult. There are two
possibilities.
Evidence is consistent with deconfinement on the infinite lattice
for all $c \! > \! 0.55$, representing possibly a massless gluon phase in the
continuum limit, similar to the U(1) continuum limit. It should be
remembered that the compact U(1) theory itself undergoes a similar deconfining
transition at $c=0.5$. The mechanism in both cases may be the same,
namely that limiting the flux that can be carried by an individual plaquette
prevents a Dirac string from ending, and spilling its flux out in all
directions in the form of a point monopole. Although it has proven
hard to identify monopoles in the SU(2) theory, the identification of them
with abelian monopoles appearing in the Maximal Abelian Gauge suggests
a possibly similar behavior in SU(2) and U(1).
It is also possible, however, that all signs of critical behavior will
disappear on large enough lattices, nothing will diverge, and the theory
will confine for all values of c, (i.e. $c^{\infty} = 1.0$), the apparent
critical behavior being due to the finite temperature associated with
finite $N$. However, as a small 3-volume
usually {\em masks} critical behavior by smoothing would-be criticalities
it is difficult to see how this could explain the large divergent-looking
finite size effects
being seen here. It is especially difficult to explain the large-lattice
behavior of the pseudo-specific heat.
Due to the small
spatial volume and the fact that one is very far from the scaling region,
however, it is rather difficult to make definite predictions
from this scenario.
One also must remember that our parameter is the cutoff, $c$, and
not the coupling
constant. Although $c$ does seem to play the role of an effective coupling,
(with $c=-1$ corresponding to $\beta=0$ in the Wilson theory, and $c=1$ corresponding
to $\beta=\infty$), it certainly differs in some respects. Incidentally,
the average plaquette at $c=0.5$ is 0.7449, corresponding to a Wilson
$\beta$ of around 3.2. This relatively weak coupling means that rather
large lattices will be needed to disentangle the finite temperature effects
from any possible underlying bulk transition.
It is likely that further light could be shed on this subject by
gauge transforming configurations from restricted action
simulations to the maximum abelian gauge and extracting abelian
monopole loops.
This would answer the question of what effect the SU(2) action restriction
has on the corresponding abelian monopoles.
By studying the strength of the monopole suppression it may be possible
to tell whether a sufficient density of monopoles will survive the
continuum limit to produce a confining theory.
Such a study is underway.
It is clear that gauge configurations from the restricted action will be
relatively smoother on the smallest scale. It should be pointed out, however, that
anything can (and does) still happen at larger scales. For instance with $c=0.5$,
even the $2\times 2$ Wilson loop can take on any value. The maximum flux
carried by a plaquette is only limited to roughly $1/3$ of its unrestricted value,
i.e. around the value expected on a lattice twice as fine as
that in a typical unrestricted simulation. From this point of view, one
should only
need lattices twice as large in linear extent to see the same physics in
a $c=0.5$ restricted simulation as in an unrestricted simulation.
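A heuristic way to see this correspondence is the following sketch, assuming
the restriction acts on the normalized plaquette trace,
${1\over2}{\rm Tr}\,U_p \geq c$. Writing an SU(2) plaquette as
$U_p = \exp(i\theta\, \hat n\cdot\vec\sigma)$, one has
${1\over2}{\rm Tr}\,U_p = \cos\theta$, so $c=0.5$ limits the plaquette angle
to $\theta \leq \pi/3$, one third of the unrestricted maximum $\pi$. Since in
the naive continuum limit $\theta \sim {1\over2}\, g\, a^2 F_{\mu\nu}$, halving
the lattice spacing $a$ reduces the typical flux per plaquette by a factor of
four, comparable to the factor of three imposed by the cutoff.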
The smoother configurations produced by restrictions of this strength
may also eliminate or at least suppress dislocations, possibly
enabling one to study
instantons without the need for cooling.
\section{Analytical Study for Monopole Trajectory in the Multi-instanton Solution}
As 't~Hooft pointed out, a nonabelian gauge theory reduces to
an abelian gauge theory with monopoles by abelian gauge fixing [1,2].
Recent lattice studies suggest abelian dominance
and relevant roles of monopole condensation [3]
for the nonperturbative phenomena: confinement [4],
chiral symmetry breaking [5] and instantons [5-8].
In the abelian gauge,
unit-charge monopoles appear from the hedgehog-like gauge configuration
according to the nontrivial homotopy group,
$\pi_{2}\{{\rm SU}(N_c)/{\rm U}(1)^{N_c-1}\}=Z^{N_c-1}_\infty$ [2].
On the other hand, the instanton is another relevant topological object
in the nonabelian gauge manifold ($\pi_{3}({\rm SU}(N_c))$ =$Z_\infty$).
In the abelian-dominant system,
the instanton seems to lose the topological basis for its existence,
and hence it seems unable to survive in the abelian manifold [7-9].
However, even in the abelian gauge, nonabelian components remain
relatively large around the topological defects, {\it i.e.} monopoles, and
therefore instantons are expected to survive only around the monopole world lines
in the abelian-dominant system [7-9].
We have pointed out such a close relation between instantons and monopoles, and have demonstrated it
in the continuum Yang-Mills theory using the Polyakov gauge, where $A_4(x)$ is diagonalized [7-9].
We summarize our previous analytical works as follows [7-9]. \\
(1) Each instanton center is penetrated by a monopole world line in the Polyakov gauge,
because $A_4(x)$ takes a hedgehog configuration near the instanton center.
In other words, instantons only live along the monopole trajectory. \\
(2) Even at the classical level, the monopole trajectory is unstable against a small fluctuation
of the location or the size of instantons, although it is relatively stable inside the instanton profile. \\
(3) In the two-instanton solution, a loop or folded structure appears in the monopole trajectory
depending on the instanton location and size.\\
(4) In the multi-instanton solution, monopole trajectories become very unstable and complicated. \\
(5) At a high temperature, the monopole trajectories are drastically changed, and become simple lines along
the temporal direction.
To begin with, we study the monopole trajectory in the multi-instanton system
in terms of the topological charge density as a gauge invariant quantity.
We show in Fig.1 an example of the monopole trajectory in the Polyakov gauge
in the multi-instanton system, where all instantons are randomly
put on the $zt$-plane for simplicity [8].
The contour denotes the magnitude of the topological density.
Each instanton is attached to the monopole trajectory.
As the instanton density increases, the monopole trajectory tends to be highly complicated and very long,
which can be regarded as a signal of monopole condensation [3,10].
As a remarkable feature in the Polyakov gauge,
the monopole favors the high topological density region, the ``mountain'':
each monopole trajectory passes across the tops of the mountains [8].
On the other hand, the anti-monopole, with the opposite color-magnetic
charge, favors the low topological density region, the ``valley''.
Thus, the strong local correlation is found between the instanton and
the monopole trajectory [6-9].
\begin{figure}[htb]
\begin{center}
\epsfile{file=lattice1.eps,height=5.8cm}
\end{center}
{
\small
\noindent
Fig.1: The monopole trajectory in the Polyakov gauge in the multi-instanton solution,
where 150 instantons are put on the $zt$ plane.
The contour denotes the magnitude of the topological density.
}
\end{figure}
\section{Instanton and Monopole at Finite Temperature on SU(2) Lattice}
Next, we study the correlation between instantons and monopoles
in the maximally abelian (MA) gauge and in the Polyakov gauge
using the Monte Carlo simulation in the SU(2) lattice gauge theory [5-8].
The SU(2) link variable can be separated into the monopole-dominating (singular) part
and the photon-dominating (regular) part [4,5,7-8].
Using the cooling method, we measure the topological quantities ($Q$ and $I_Q$)
in the monopole and photon sectors as well as in the ordinary SU(2) sector.
Here, $I_Q \equiv \int d^4x |{\rm tr}(G_{\mu\nu} \tilde G_{\mu\nu})|$
corresponds to the total number $N_{\rm tot}$ of instantons and anti-instantons.
(1) On the $16^4$ lattice with $\beta=2.4$, we find that instantons exist only in the monopole part
both in the MA and Polyakov gauges, which means monopole dominance for the topological charge [5,7,8].
Hence, we can expect monopole dominance for the U$_{\rm A}$(1) anomaly and the $\eta'$ mass.
(2) We study the finite-temperature system using the
$16^3 \times 4$ lattice with various $\beta$ around $\beta_c \simeq 2.3$ [8].
We show in Fig.2 the correlation between $I_Q({\rm SU(2)})$ and $I_Q({\rm Ds})$, which are measured
in the SU(2) and monopole sectors, respectively, after 50 cooling sweeps.
The monopole part holds the dominant topological charge in the full SU(2) gauge configuration.
On the other hand, $I_Q(Ph)$, measured in the photon part, vanishes quickly within several cooling sweeps.
Thus, monopole dominance for the instanton is found also in the confinement phase
even at finite temperatures [8].
(3) Near the critical temperature $\beta_c \simeq 2.3$, a large reduction of $I_Q$ is observed.
In the deconfinement phase, $I_Q$ vanishes quickly within several cooling sweeps,
which means the absence of the instanton, in the SU(2) and monopole sectors
as well as in the photon sector [8].
Therefore, the gauge configuration becomes similar to the photon part
in the deconfinement phase.
\begin{figure}[ht]
\begin{center}
\epsfile{file=lattice2.eps,height=5.8cm}
\end{center}
{
\small
\noindent
Fig.2 : Correlation between $I_Q({\rm SU(2)})$ and $I_Q({\rm Ds})$ at $\beta$=2.2 ($\circ$),
2.3 ($\times$), 2.35 ($\bigtriangleup$) after 50 cooling sweeps.
}
\end{figure}
Thus, monopole dominance for instantons is found in the confinement phase even at finite temperatures, and the monopole part includes a dominant amount of instantons
as well as the monopole current [5,7].
At the deconfinement phase transition, both the instanton density and the monopole current are
rapidly reduced, and the QCD-vacuum becomes trivial in terms of the topological nontriviality.
Because of such strong correlation between instantons and monopoles,
monopole dominance for the nonperturbative QCD may be interpreted as
instanton dominance.
\section{Lattice Study for Monopole Trajectory and Instantons}
Finally, we study the correlation between the instanton number and the monopole loop length.
Our analytical studies suggest the appearance of a highly complicated monopole trajectory
in the multi-instanton system even at the classical level [7-9].
Further monopole clustering would be brought about by quantum effects.
We conjecture that the existence of instantons promotes monopole condensation [7-9],
which is characterized by a long complicated monopole trajectory covering ${\bf R}^4$,
in analogy with the argument for the Kosterlitz-Thouless transition [10],
and is also observed in lattice QCD simulations [3,4,8].
To clarify the role of instantons on monopole condensation, we study the SU(2) lattice gauge theory
for the total monopole-loop length $L$ and the integral of the absolute value of the topological density
$I_Q$, which corresponds to the total number $N_{\rm tot}$ of instantons and anti-instantons.
We plot in Fig. 3 the correlation between $I_Q$ and $L$ in the MA gauge after 10 cooling sweeps
on the $16^3 \times 4$ lattice with various $\beta$.
A linear-type correlation is clearly found between $I_Q$ and $L$.
Hence, the monopole-loop length would be largely enhanced in the dense instanton system.
\begin{figure}[htb]
\begin{center}
\epsfile{file=lattice3.eps,height=5.8cm}
\end{center}
{
\small
\noindent
Fig.3 : Correlation between the total monopole-loop length $L$ and $I_Q$
(the total number of instantons and anti-instantons) in the MA gauge.
We plot the data after 10 cooling sweeps on the $16^3 \times 4$ lattice
with various $\beta$.
}
\end{figure}
From the above results, we propose the following conjecture.
Each instanton is accompanied by a small monopole loop nearby,
whose length would be proportional to the instanton size [11-14].
When $N_{\rm tot}$ is large enough, these monopole loops overlap, and there appears
a very long monopole trajectory, which bonds neighboring instantons [8,11].
Such a monopole clustering leads to monopole condensation and color confinement [10].
Thus, instantons would play a relevant role in color confinement by providing a source of
the monopole clustering [7-9,11].
\section*{Abstract}
Within the scalar-tensor theory of gravity with
Higgs mechanism without Higgs particles, we prove that the excited
Higgs potential (the scalar field) vanishes inside and outside the stellar
matter for static spherically symmetric configurations. The field equation
for the metric (the tensorial gravitational field) turns out to be
essentially the Einsteinian one.
\vfill
\noindent
\begin{description}
\item[Keywords]:
Higgs scalar-tensor theory;
Higgs mechanism without Higgs particles;
particle theoretical implications to gravitation theory;
static and spherically symmetric solutions
\end{description}
\clearpage
\pagenumbering{arabic}
\section{Introduction}
A scalar-tensor theory of gravity was developed by Brans and Dicke 1961
in order to introduce some foundation for the inertial mass as well as
the active and passive gravitational mass (i.e., the gravitational `constant'),
by a scalar function determined by the distribution of all other particles
in the universe; the background of this is Mach's principle and the principle
of equivalence.
This introduction of mass by a scalar field can now be regarded as a somewhat
prophetic approach, because in today's Standard Model of particle physics
the masses of the elementary particles are generated via the Higgs mechanism,
thus using also a scalar field, the Higgs field.
The scalar interaction mediated by the Higgs field was investigated by
Dehnen, Frommert, and Ghaboussi 1990.
They showed that any excited Higgs field%
\footnote{
The quanta of this excited Higgs field are the hypothetical Higgs
particles.
}
mediates an attractive scalar interaction%
\footnote{
This interaction is similar to gravity because it couples to the masses
of the particles.
}
of Yukawa type (i.e.\ short range)
between those particles which acquire mass by the corresponding symmetry
breaking (i.e.\ the fermions and the massive $W$ and $Z$ gauge bosons).
The Higgs field of particle physics can also serve as the scalar field in
a scalar-tensor theory of gravity, as was first proposed by Zee 1979 and
deeper investigated by Dehnen, Frommert, and Ghaboussi 1992.
In this theory, in addition to its role in the Standard Model to make the
particles massive, the scalar Higgs field also generates the gravitational
constant G.
Surprisingly however, if the Higgs field of the
$SU(3)\times SU(2)\times U(1)$ Standard Model of the elementary particles
is employed to generate G, the Higgs field loses its source,
i.e.\ it can no longer be generated by fermions and gauge bosons
except in the very weak gravitational channel.
The reader can find the whole formalism of this theory in Dehnen and
Frommert 1993.
\section{Static spherically symmetric solutions of the Higgs scalar-tensor
theory}
For the excited Higgs field $\varphi$, one obtains
the following homogeneous, covariant Klein-Gordon equation%
\footnote{
Throughout this paper we use
$\hbar=c=1$ and the metric signature $(+---)$.
The symbol $(\ldots)_{|\mu}$ denotes the partial,
$(\ldots)_{\|\mu}$ the covariant derivative with respect to the
coordinate $x^\mu$.
}
(see Dehnen, Frommert 1993):
\begin{equation}
\xi^{|\mu}_{\ \ \|\mu} + M^2 \xi = 0\ ,\ \ \ \xi = (1+\varphi)^2-1\ ,
\label{eq:scf}
\end{equation}
where $M$ denotes the mass of the Higgs particles in this theory. The field
equation for the metric as the tensorial gravitational field reads:
\begin{eqnarray}
R_{\mu\nu} - {1 \over 2} R g_{\mu\nu}
&=& - {8\pi\G\over1+\xi} \Biggl[T_{\mu\nu}
+ {v^2\over4\left(1+\xi\right)} \left(\xi_{\vert\mu}\xi_{\vert\nu}
- {1\over2}\xi_{\vert\lambda}\xi^{\vert\lambda} g_{\mu\nu}\right)
+ V(\xi) g_{\mu\nu}\Biggr]
\nonumber \\*
&&{} - {1 \over 1+\xi} \left[\xi_{\vert\mu\Vert\nu}
- \xi^{\vert\lambda}_{\ \ \Vert\lambda} g_{\mu\nu} \right]\ .
\label{eq:gravf3chi}
\end{eqnarray}
with the Ricci tensor $R_{\mu\nu}$
and the Higgs potential
\begin{equation}
V(\xi) = {3\over32\pi\G} M^2 \left(1+{4\pi\over3\alpha}\right) \xi^2
\approx {3 M^2\over32\pi\G} \xi^2\ \ \ \ (\alpha\simeq 10^{33})\ .
\label{eq:pot}
\end{equation}
$T_{\mu\nu}$ is the energy-momentum tensor of matter.
We now look for the exact solution of this equation for the spherically
symmetric and time independent case. This means that the excited Higgs field
is a function of the radius $r$ only, and the metric has the form
\begin{equation}
g_{\mu\nu} =
\left[
\begin{array}{cccc}
e^{\nu(r)} & 0 & 0 & 0 \\
0 & - e^{\lambda(r)} & 0 & 0 \\
0 & 0 & - r^2 & 0 \\
0 & 0 & 0 & - r^2 \sin^2\vartheta
\end{array}
\right]\ .
\label{eq:met}
\end{equation}
Using the Christoffel symbols and the Ricci tensor components
following from the metric (\ref{eq:met})
(see e.g.\ Landau and Lifshitz 1992, \S100, or Tolman 1934, \S98),
the nontrivial field equations for the metric read
(primes denote derivatives with respect to the radial coordinate $r$,
$L=1/M$ the Compton wavelength corresponding to the Higgs mass $M$):
\begin{eqnarray}
R_{00} &=&
- e^{\nu-\lambda} \left(
{\nu^{\prime\prime}\over 2} + {{\nu^\prime}^2\over 4}
- {\nu^\prime\lambda^\prime\over 4} + {\nu^\prime\over r}
\right)
\nonumber
\\
&=& - {e^{\nu-\lambda}\over 1+\xi} \left[
4\pi\G\left(\rho+3p\right) e^\lambda
- {\nu^\prime\xi^\prime\over 2}
+ {\xi\over L^2} \left(1-{3\over 4}\xi\right) e^\lambda
\right]
\\
R_{11} &=&
{\nu^{\prime\prime}\over 2} + {{\nu^\prime}^2\over 4}
- {\nu^\prime\lambda^\prime\over 4} - {\lambda^\prime\over r}
\nonumber
\\
&=& {1\over 1+\xi} \left[
- 4\pi\G\left(\rho-p\right) e^\lambda - \xi^{\prime\prime}
+ {\lambda^\prime\xi^\prime\over 2}
+ {\xi\over L^2} \left(1-{3\over 4}\xi\right) e^\lambda
\right]
\\
R_{22} &=& e^{-\lambda} - 1
+ {r\over2} \left(\nu^\prime-\lambda^\prime\right) e^{-\lambda}
\nonumber
\\
&=& - {1\over 1+\xi} \left[
4\pi\G\left(\rho-p\right) r^2 + r \xi^\prime e^{-\lambda}
- {\xi r^2\over L^2} \left(1-{3\over 4}\xi\right)
\right]
\label{eq:R22}
\end{eqnarray}
and the scalar field equation (\ref{eq:scf}) takes the form
\begin{equation}
{d^ 2\xi(r)\over dr^ 2} +
\left\{{2\over r}+{1\over2}{d\over dr}\left[\nu(r)-\lambda(r)\right]\right\}
{d\xi(r)\over dr}
= M^2 e^{\lambda(r)} \xi(r)\ .
\label{eq:fex}
\end{equation}
\noindent
Because of a continuous and finite matter density, i.e.\ no singularities
such as point masses or infinitely thin massive surfaces, we are looking for
an exact solution for $\xi(r)$ of this equation, which is finite and
continuous together with its first derivative.
We can immediately find the exact solution of equation (\ref{eq:fex}) if the
metric is the Minkowskian one (perhaps with some constant coordinate
transformation). This should be a good approximation for the limit of
large distances from the star ($r\gg R$, where $R$ is the radius of the star)
in the static case.
Equation (\ref{eq:fex}) then gets
linearized and becomes the usual Klein-Gordon equation
for a static, spherically symmetric field:
\begin{equation}
{d^ 2\xi(r)\over dr^ 2} + {2\over r} {d\xi(r)\over dr} - M^2 \xi(r) = 0
\label{eq:kg}
\end{equation}
The bounded solution of this equation is the Yukawa function
\begin{equation}
\xi(r)={A e^ {-r/L} \over r}\ ,\ \ \ r\gg R\ ,
\label{eq:yuk}
\end{equation}
with $A$ an arbitrary real constant; this is the asymptotic solution for
all finite spherically symmetric systems for large values of $r$ which are
asymptotically embedded in flat Minkowski spacetime. The absolute value of
this solution is exponentially {\em decreasing\/} as
$r\longrightarrow\infty$.
On the other hand, the spacetime metric is also asymptotically equivalent to
the flat Minkowskian one for the limiting case%
\footnote{
This follows immediately from the requirement that for our spherically
symmetric configuration the fields should be differentiable, if one takes
a look at an arbitrary straight line through the origin: As our fields
$\nu$, $\lambda$, and $\xi$
must be spherically symmetric, they must be even functions of the distance
from the origin on this line, and thus have vanishing derivatives at $r=0$,
which makes the connection coefficients vanish.
It also follows as the limiting case of a corollary based on Birkhoff's
theorem, that the metric inside an empty central spherical cavity
(of radius $R_i$) in a spherically symmetric system is equivalent to the
flat Minkowski metric, for $R_i\longrightarrow 0$. This corollary is
treated e.g.\ in Weinberg 1972, and is also valid in our scalar-tensor
theory.
}
$r\longrightarrow0$. Therefore, the scalar field near $r=0$ should be given
asymptotically again by a solution of equation (\ref{eq:kg}); in this case,
the solution should behave regularly at $r=0$ to avoid singularities.
The regular solution at $r=0$ of (\ref{eq:kg}) is given by
\begin{equation}
\xi(r)={B\sinh(r/L)\over r}\ ,\ \ \ 0\leq r \ll R
\label{eq:sinh}
\end{equation}
($B$ another arbitrary real constant),
the absolute value of which has a minimum at $r=0$ and is {\em increasing\/}
outward.
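Both asymptotic forms are easily verified to solve equation (\ref{eq:kg}); a
minimal check with a computer algebra system (an illustrative sketch only)
reads:
\begin{verbatim}
import sympy as sp

r, L, A, B = sp.symbols('r L A B', positive=True)
M = 1 / L   # Compton wavelength L = 1/M, as defined in the text

def kg_residual(xi):
    # Left-hand side of the static Klein-Gordon equation,
    # xi'' + (2/r) xi' - M^2 xi, which must vanish for a solution.
    return sp.diff(xi, r, 2) + 2/r*sp.diff(xi, r) - M**2*xi

xi_exterior = A*sp.exp(-r/L)/r    # Yukawa solution, eq. (yuk)
xi_regular  = B*sp.sinh(r/L)/r    # solution regular at r = 0, eq. (sinh)

print(sp.simplify(kg_residual(xi_exterior)))   # -> 0
print(sp.simplify(kg_residual(xi_regular)))    # -> 0
\end{verbatim}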
In addition, we can discuss the limiting case for small values of $r$ more
accurately:
For the interior solution near the origin at $r=0$, it is convenient to
rewrite the field equation (\ref{eq:fex}) after multiplication with $r$:
\begin{equation}
{r\over2} \xi^{\prime\prime} +
\left[1+{r\over4}\left(\nu^\prime-\lambda^\prime\right)\right] \xi^\prime
= {r\over2} M^2 e^{\lambda(r)} \xi\ .
\label{eq:rfex}
\end{equation}
Obviously, for nonsingular fields, $\xi^\prime(r)$ must vanish at $r=0$.
Taylor-expanding $\xi(r)$ as $\xi(r)=\xi_0+\xi_1 r+\xi_2 r^2+\ldots$ yields
\begin{eqnarray}
\xi_1 &=& 0
\\
\xi_2 &=& {M^2\over 6} e^{\lambda_0} \xi_0\ ,\ \ (\lambda_0 = \lambda(r=0))
\end{eqnarray}
which shows that the second derivative $\xi^{\prime\prime}$ of the scalar
field $\xi$ has the same sign at the origin $r=0$ as $\xi(r=0)$.
If $\xi_0=\xi(r=0)$ is not zero, i.e.\ $\xi$ does not vanish identically,
its absolute value in any case increases outward from the center: if
$\xi_0$ is positive, $\xi$ increases, and if it is negative, it decreases
outward.
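For completeness, the order-by-order matching that yields these coefficients
can be sketched as follows (keeping only the lowest orders in $r$ and using
$\nu^\prime(0)=\lambda^\prime(0)=0$): with
$\xi^\prime=\xi_1+2\xi_2 r+\ldots$ and $\xi^{\prime\prime}=2\xi_2+\ldots$,
equation (\ref{eq:rfex}) gives
\begin{eqnarray*}
{\cal O}(r^0): & & \xi_1 = 0\ , \\
{\cal O}(r^1): & & \xi_2 r + 2\,\xi_2 r = {r\over2}\, M^2 e^{\lambda_0}\,\xi_0
\ \ \Longrightarrow \ \
\xi_2 = {M^2\over6}\, e^{\lambda_0}\,\xi_0\ .
\end{eqnarray*}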
One could expect that the complete {\em exact solution} of equation
(\ref{eq:fex})
would have a maximum for every $A>0$ in equation (\ref{eq:yuk})
or $B>0$ in (\ref{eq:sinh}), because it grows when
starting from $r=0$, and vanishes exponentially as $r\longrightarrow\infty$.
On the other hand, its
first derivative would vanish at this extremal point, and then equation
(\ref{eq:fex}) would force the same sign on the solution $\xi(r)$ and its
second derivative.
Because the function $\xi(r)$ is positive, one would obtain a minimum
and not a maximum at this point.
For $A<0$ or $B<0$, we have the analogous situation: one expects
at least one minimum and gets only a maximum.
Therefore, one cannot get the asymptotically bounded exterior solution
(\ref{eq:yuk}) from any nontrivial solution which behaves regularly near
$r=0$.
Thus the only physically permitted static solution is $\xi(r)\equiv0$,
with the constants $A=0$ and $B=0$ for the asymptotic solutions.
\section{Conclusions}
We have shown that the only physically permitted solution for a static,
spherically symmetric configuration in our theory is the trivial one
with respect to the scalar field.
Therefore, the gravitational tensor field equation becomes an ordinary
Einstein equation, so that all calculations for the astronomical objects
obtained from Einstein's General Relativity stay valid.
Of course, this result holds only for the exactly
spherically symmetric and static case without pointlike singularities,
and it does not cover highly dynamic systems
(e.g., cosmological models or black holes).
Yet it is a good approximation for a great many ``normal''
objects like stars, or perhaps for all closed systems, e.g.\ our Solar system;
for all of these our fundamental result should be valid.
As the physical world is dynamic, however, there remains the
possibility of dynamic solutions which asymptotically fit to a
cosmological background (see e.g.\ Frommert, Schoor, and Dehnen 1996).
This may be of interest in the context of the dark matter problem.
\section*{Acknowledgements}
The authors are thankful to
Heinz Dehnen, Sokratis Rahoutis, and Holger Schoor
for helpful hints and discussions.
\section{Introduction}
Many of the most interesting new results in astrophysics
were obtained with survey projects, opening up a new range
of photon energies, lower flux limits, more precise position
measurements, etc. A rapid increase of the power
of computers, and a rapid decrease of their cost, opened
up yet another dimension for the surveys: the number of objects
that can be measured and monitored. The massive searches
for gravitational microlensing have lead to the detection
of over 100 microlensing events by the MACHO, OGLE, DUO, and EROS
collaborations (Paczy\'nski 1996b and references therein). These
projects demonstrated that automated real time processing of photometric
measurements of up to $ \sim 10^7 $ stars every night is possible with
very modest hardware and operating costs. They also demonstrated
that while the original goal of finding the very rare microlensing
events has been reached, much more has been accomplished;
a very large diversity of scientific results was obtained,
related to the galactic structure and stellar variability.
New programs have been stimulated,
providing new ways to improve the cosmic distance scale,
the age estimates of the oldest stellar systems, a robust
determination of the stellar luminosity function and mass function.
The microlensing searches
concentrate on small selected areas, the galactic bulge and the
Magellanic Clouds, and they reach down to $ \sim 20-21 ~ mag $ stars.
Note that the total number of all stars over the whole sky which
are brighter than $ 15 ~ mag $ is $ \sim 2 \times 10^7 $ (Allen 1973),
of which only $ \sim 50\% $ are visible from any given site on any given
night. Therefore, the processing power comparable to that of
the existing microlensing searches can allow nightly photometric
measurements of all stars brighter than $ \sim 15 ~ mag $.
An incremental increase in the data acquisition and data processing rates
can gradually bring into the monitoring programs all
objects brighter than magnitude 16, 17, etc. At every step there
are new and interesting scientific issues to address.
Some recent and planned large scale optical surveys and
searches are described in the next section.
Some examples of possible scientific programs feasible at various
magnitude limits are described in section 3, and some possible technical
implementations are described in section 4.
\section{The Past and Current Massive Monitoring Programs}
Most variable stars which can be found in the catalogues were discovered
with massive photographic monitoring programs, carried out over many
decades at Harvard College Observatory, Sonnenberg Observatory, and
many other observatories. The majority of supernovae
were discovered with photographic plates or films, even though many
were also discovered visually or with CCD detectors.
Recently, a number of massive photometric searches for gravitational
microlensing events were began with either photographic or CCD
detectors: DUO (Alard 1996b), EROS (Aubourg et al. 1993),
MACHO (Alcock et al 1993), and OGLE (Udalski et al. 1992). These
projects not only revealed a number of microlensing events (Alcock
et al. 1993, 1995a,c, Pratt et al. 1996, Aubourg et al. 1993, Alard 1996b,
Udalski et al. 1993, 1994a), but also a huge number of variable stars
(Udalski et al. 1994b, 1995a,b, Alard 1996a, Alcock et al. 1995b, Grison
et al. 1995, Cook et al. 1996, Ka\l u\.zny et al. 1995a).
The total number of variable stars in
the data bases of these four collaborations can be estimated to be about
$ 10^5 $, most of them new.
A very important aspect of these projects is
their very large scale, so the search for variable
objects has to be done, and is done, with computer software, following
some well defined algorithms. This means that the sensitivity of the
searches to detecting variables of any specific kind can be well
calibrated. So far a preliminary calibration was done only for
the sensitivity to detect microlensing events (Udalski et al. 1994a,
Alcock et al. 1995b). A similar procedure can be done, and certainly
will be done, to determine the sensitivity to detect variable stars
of various types, magnitudes, amplitudes, periods, etc. This is a new
possibility, rarely if ever available to the past photographic searches.
In addition to massive searches there are a number of on-going observing
programs of selected groups of known variables, in order to understand
their long term behavior. Many such observations are done with
robotic telescopes, looking for supernovae, optical flashes related
to gamma-ray bursts, near earth asteroids, and many other variable
objects (Hayes \& Genet 1989, Honeycutt \& Turner 1990,
Baliunas \& Richard 1991, Filippenko 1992, Perlmutter et al. 1992,
Schaefer et al. 1994, Akerlof et al. 1994,
Hudec \& Soldan 1994, Kimm et al. 1994, Henry \& Eaton 1995).
Many robotic telescopes are operated by amateur astronomers.
The CCD cameras and computers became so inexpensive that they can be
afforded by non-professional astronomers, or by groups of non-professional
astronomers.
One of the most important massive observational programs is being done with
the satellite Hipparcos, which will provide excellent astrometry and
photometry for the brightest $ 10^5 $ stars over the whole sky, and less
accurate data for another one million stars, as described in many articles
published in the whole issue of Astronomy and Astrophysics,
volume 258, pages 1 -- 222.
There are many new massive searches being planned by many different groups.
These include people who are interested in detecting possible optical
flashes from cosmic gamma-ray bursts (Boer 1994, Otani et al. 1996),
people who search for supernovae, general variable stars, near-earth
asteroids, distant asteroids, Kuiper belt comets, etc. Perhaps the
largest scale survey project is the Sloan Digital Sky Survey (Gunn \&
Knapp 1993, Kent 1994). The All Sky Patrol Astrophysics (ASPA, Braeuer \&
Vogt, 1995) is a project to replace the old photographic sky monitoring
with modern CCD technology. It is not possible
to list all the proposed programs, as there are so many of them. They have
a large variety of scientific goals, but they all have one thing in
common: they are all aimed at the automatic photometry and/or astrometry
of a huge number of objects on a sustained basis, and most of them are
proposing a search for some rare, or extremely rare type of objects or
events.
\section{Scientific Goals}
There are different types of scientific results which will
come out of any major survey, including the future all
sky monitoring programs.
{\bf First}, if a survey is done
in a systematic way which can be calibrated, then it will
generate large complete samples of many types of objects:
ordinary stars of different types, eclipsing binaries,
pulsating stars, exploding stars, stars with large proper
motions, quasars, asteroids, comets, and other types of
objects. Such complete samples are essential for
statistical studies of the galactic structure, the stellar evolution,
the history of our planetary system, etc.
{\bf Second}, the identification
of more examples of various types of objects will make it possible
to study them in great detail with the follow-up dedicated
instruments and will help to improve the empirical calibration of
various relations. Bright detached eclipsing binaries and
bright supernovae are just two examples of objects which call
for the best possible calibration.
{\bf Third}, some very rare objects
or events will be detected. Some may uniquely assist us in
understanding critical stages of stellar evolution. A spectacular
example from the past is FG Sagittae, a nucleus of a planetary
nebula undergoing a helium shell flash in front of our telescopes
(Woodward et al. 1993, and references therein).
Some may provide spectacles which bring astronomy to many
people. A few recent examples are the supernova 1987A in the Large
Magellanic Cloud, a collision of the comet Shoemaker-Levy
with Jupiter in the summer of 1994, and the bright comet
Hyakutake in the spring of 1996.
{\bf Fourth}, fully automated real time data processing will provide
instant alert about variety of unique targets of opportunity:
supernovae, gravitational microlensing events, small asteroids
that collide with earth every year, etc. Such alerts will provide
indispensable information for the largest and most expensive space
and ground based telescopes, which have tremendous light collecting
power and/or resolution but have very small fields of view.
{\bf Fifth}, the archive of photometric measurements will provide
a documentation of the history for millions of objects, some of which
may turn out to be very interesting some time in the future. The
Harvard patrol plates and the Palomar Sky Survey atlas provide an
excellent example of how valuable an astronomical archive can be.
{\bf Sixth}, some unexpected new objects and phenomena may be discovered.
There is no way to know for sure, but it is almost always the case
that when the amount of information increases by an order of magnitude
something new is discovered.
One may envision beginning the all sky variability survey with very low
cost equipment: a telephoto lens attached to a CCD camera, with the cost
at the level of a personal computer. A ``low end'' system can easily
record stars as faint as 14th magnitude. Naturally, many
small units are needed to monitor the whole sky.
It is useful to realize how many stars there are in the sky as a function
of stellar magnitude. This relation is shown with a solid line in Figure 1,
following Allen (1973). There are $ 10^3 $, $ 10^4 $,
$ 10^5 $, $ 10^6 $, $ 10^7 $, $ 10^8 $ stars in the sky
brighter than approximately $ 4.8, ~ 7.1, ~ 9.2, ~ 11.8, ~ 14.3, $ and
$ 17.3 ~ mag $, respectively.
Also shown in the same Figure are the numbers of
know binaries of three types: Algols, contact binaries (W UMa), and
binaries with spotted companions (RS CVn). Note that among the brightest
stars the fraction of these binaries is very high, presumably because
these stars, with $ m < 6 ~ mag $, are studied so thoroughly. As
we go fainter, the incompleteness sets in.
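These counts can be turned into a rough interpolation formula for the expected
number of stars brighter than a given magnitude (a sketch using only the
values quoted above from Allen 1973):
\begin{verbatim}
import numpy as np

# (magnitude limit, log10 of the all-sky number of brighter stars),
# from the values quoted above (Allen 1973).
m    = np.array([4.8, 7.1, 9.2, 11.8, 14.3, 17.3])
logN = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])

def n_brighter(mag):
    # Rough all-sky number of stars brighter than 'mag' (interpolation).
    return 10 ** np.interp(mag, m, logN)

print("N(< 13 mag) ~ %.1e" % n_brighter(13.0))   # of order a few x 10^6
\end{verbatim}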
In the following sub-sections I shall discuss a few specific types of
variables, pointing out various indicators of the incompleteness.
All numbers are taken from the electronic edition of the 4th General
Catalogue of Variable Stars as available on a CD ROM (Kholopov et al. 1988).
\subsection{RS Canum Venaticorum Binaries}
The incompleteness is most dramatically apparent for the
RS CVn type binaries, where very few stars fainter than 5th magnitude
are known. These variables have small amplitudes, typically
only 0.1 or 0.2 $ mag $, so they cannot be found with photographic
searches. A systematic CCD search should reveal lots of such variables
in the magnitude range $ 6 - 10 $, and of course even more among
fainter stars. Periods are typically a few days to a few weeks.
\begin{figure}[p]
\vspace{8cm}
\special{psfile=fig1.ps voffset=-50 hoffset=110
vscale=40 hscale=40 angle=0}
\caption{\small
The number of all stars in the whole sky in one magnitude bins is shown
with a solid line. The number of Algol type binaries listed in the General
Catalog of Variable Stars is shown with open circles. The number of contact
binaries (W UMa stars) and of RS CVn (spotted) binaries is shown with filled
circles and with star symbols, respectively. Note that for $ m > 9 ~ mag $
the fraction of all stars which are either Algol or contact type binaries
decreases rapidly, presumably because of incompleteness of the variable star
catalog. The incompleteness of RS CVn type binaries becomes obvious already
for $ m > 6 ~ mag $.
}
\vspace{8cm}
\special{psfile=fig2.ps voffset=-300 hoffset=-50
vscale=90 hscale=90 angle=0}
\caption{\small
The distribution of known contact binaries (W UMa stars) which are fainter
than $ 12 ~ mag $ is shown in galactic coordinates.
The celestial equator is marked with a dashed line.
The patchy distribution of known systems
is a clear indication of catalog's incompleteness.
}
\end{figure}
The nearly sinusoidal light variations are believed to be caused by
rotation, with the stellar surface covered with large, irregular
spots, similar to those found on the sun, but much larger. The
spot activity varies with time, possibly in a way similar to the solar
cycle. These red subgiant stars have hot winds, with strong X-ray emission.
It is thought that this activity is due to relatively rapid rotation
combined with convective
envelopes. There are related stars of the FK Comae type which are single,
but rapidly rotating. These were presumably produced by mergers of
two components of close binaries. Photometrically it is difficult
to distinguish these two types of variables, unless the presence of
eclipses reveals the binary nature, and therefore the RS CVn classification.
Long term monitoring of
RS CVn and FK Com type variables would provide information about stellar
magnetic cycles, and so it would help us understand the nature of the
solar cycle.
\subsection{Contact Binaries}
Contact binaries, known as W Ursae Majoris stars are very common.
Recent studies
indicate that one out of 140 solar type stars is a member of a contact
binary (Ruci\'nski 1994). The Fourth edition of the General Catalog of
Variable Stars (Kholopov et al. 1988) lists a total of over 28,000
variables, only 561 of them classified as EW type (i.e. W UMa type); 530
of these have binary periods in the range 0.2 - 1.0 days. The number
of these stars is shown as a function of their magnitude in Figure 1.
Also shown is the estimate of the total number of all stars in the
sky according to Allen (1973). At the bright end, for $ m < 9 ~ mag $,
there is one known contact binary per $ \sim 10^3 $ stars. Below
magnitude 9 the fraction of known contact binaries declines rapidly.
Therefore even at $ \sim 10 ~ mag $ we may expect many new W UMa
systems to be discovered.
The incompleteness of current catalogs
becomes striking at $ \sim 12 ~ mag $, as clearly shown in Figure 2,
presenting the distribution of those bright contact binaries in the sky in
the galactic coordinate system. The clustering is caused by the
non-uniform sky coverage by the past searches.
Contact binaries are very easy to find. Their brightness
changes continuously with a typical amplitude of $ \sim 0.6 ~ mag $.
A few dozen photometric measurements are enough to establish the
period and the type of variability. The fraction of stars which
are contact binaries is likely to be reasonably constant well below
the 9th magnitude. A simple inference from Figure 1 is that there
are likely to be $ \sim 10^3 $ new contact binaries to be discovered
which are brighter than 14 mag. If the search is done with a well
defined procedure which can be calibrated then these $ \sim 10^3 $ systems
would be very valuable for a variety of studies.
The origin, structure and evolution of contact binaries are poorly
understood. It will be very important to find out if there
are non-contact binaries which might be precursors of contact
systems, as expected theoretically.
In some theories the thermal evolution of contact systems
should periodically take them out of contact, dramatically
changing the shape of their light curves, but it
is not known if these theories are correct. Only observational
evidence can solve this and many other puzzles of W UMa binaries.
It is very likely that contact systems, once properly understood,
will turn out to be good ''standard candles'', and as such they could
be used as tracers for studies of the galactic structure.
There is already evidence for a fairly good period - color - luminosity
relation (Ruci\'nski 1994).
\subsection{Algols}
Variable stars of the Algol type are eclipsing binaries. They are
most often of the semi-detached type, i.e. one component fills its
Roche lobe and the gas flows from its surface towards the second
component under the influence of tidal forces. The incompleteness
of the current catalogs can be appreciated in many ways. First,
the fraction of stars which are known to be Algols declines as
a function of apparent magnitude, as clearly seen in Figure 1.
Second, the apparent distribution of moderately bright Algols, in
the magnitude range 12-13, is clearly clustered in the sky.
Finally, the distribution of Algols in the eclipse depth - binary
period diagram shows a dramatic difference
between the bright ($ m < 5 ~ mag $), and moderately bright
stars ($ 9.5 < m < 10.0 ~ mag$), with the long period systems
missing from the fainter group - this is most likely caused by
the difficulty of detecting eclipses which are spaced more than a few months
apart. Also, eclipses shallower than $ 0.25 ~ mag $ dominate the bright
sample but are entirely missing from the fainter sample,
because photographic searches could not reliably detect
low amplitude variations.
\subsection{Detached Eclipsing Binaries}
In contrast to contact binaries and most Algols the detached eclipsing
binaries are much more difficult to find as their eclipses are very narrow,
and their brightness remains constant between the eclipses. In detached
systems both stars are much smaller than their separation. Typically
$ \sim 300 $ photometric measurements have to be made to establish
the binary period, and roughly one out of $ 10^3 $ stars
is a detached eclipsing system (Ka\l u\.zny et al. 1995b, 1996a,b).
The fraction of such binaries which are missing in the
catalogs is likely to be even larger than it is for contact systems.
Detached eclipsing binaries are the primary source of information
of stellar masses, radii and luminosities (Anderson 1991, and references
therein). When properly calibrated they are to become the primary
distance and age indicators (Guinan 1996, Paczy\'nski 1996a,d, and references
therein). The first such systems were recently discovered in the Large
Magellanic Cloud (Grison et al. 1995) and in a few globular and
old open clusters (Ka\l u\.zny et al. 1995b, 1996a,b). This will make it
possible to measure directly stellar masses at the
main sequence turn off points in those clusters, thereby leading to
more reliable age estimates than those currently available.
The on-going searches for detached eclipsing binaries in galaxies
of the Local Group will lead to accurate determination of their
distances. However, all these very important tasks will require
very good calibration which is possible to do only for the nearby,
and therefore apparently bright systems. A discovery of any new bright
detached eclipsing binary makes the calibration easier and more reliable.
The brightest system of this class is $ \beta $ Aurigae (Stebbins 1910).
It has a period of 3.96 days and an amplitude of less than $ 0.2 ~ mag $,
and at $ m \approx 2 ~ mag $ it is one of the brightest stars in the sky.
\subsection{Pulsating Stars}
Pulsating stars vary continuously, so their periods are easy to establish.
There are many different types, the best known are long period variables
(Miras), Cepheids (population II cepheids are also known as W Virginis
stars), and RR Lyrae. Their periods are typically months, weeks, and
$ \sim 10 $ hours, and their amplitudes are a few magnitudes, somewhat
in excess of 1 magnitude, and somewhat less than 1 magnitude, respectively.
The number of these variables declines rapidly for $ m > 15 ~ mag $,
probably due to reaching the limit of our galaxy. However, the
incompleteness seems to set in already around $ m \sim 10 ~ mag $.
The clumpiness in the sky distribution is another indicator that
many objects are missing. This shows up strikingly in Figure 3.
The two square regions containing the
majority of RR Lyrae stars catalogued in this general direction
are 5 degrees on a side -- this is the size of an image taken
with a Schmidt camera. The apparent distribution of the RR Lyrae
variables reveals the type of instrument used in the searches.
All types of pulsating variables are very good standard candles,
useful for distance determination, for studies of the galactic structure,
and studies of stellar evolution. All would benefit from
complete inventories of those variables.
\begin{figure}[t]
\vspace{11cm}
\special{psfile=fig3.ps voffset=-80 hoffset=90
vscale=48 hscale=48 angle=0}
\caption{\small
The distribution of known RR Lyrae type pulsating stars
in the magnitude range
$ 15 < m < 16 ~ mag $ is shown in galactic coordinates.
The patchy distribution of known variables
is a clear indication of catalog's incompleteness.
}
\end{figure}
\subsection{Novae and Dwarf Novae}
Among the many types of cataclysmic variables the novae and dwarf novae
are best known. These are binary stars with orbital periods shorter
than one day, with one star being a main sequence dwarf transferring
mass to its white dwarf companion. Novae explode once every $ 10^3 - 10^4 $
years (theoretical estimate) as a result of ignition of hydrogen
accumulated on the white dwarf surface. The amplitudes of light
variation are in the range 10-20 magnitudes, and the stars remain
bright between a week and a year. The source of energy is nuclear.
Dwarf novae brighten
once every few weeks to few years, with the amplitude ranging from 3 to
7 magnitudes, and the stars remain bright between a few days and two
weeks. The increases in brightness are caused by the enhanced viscosity
in the accretion disk around the white dwarf component, and the source
of energy is gravitational.
The sky distribution of novae and dwarf novae
is clumpy, indicating incompleteness. But there is more than that. Every
year new stars are discovered to explode, so the search for new events
has no end. The more explosions we observe, the fewer we miss, the
better our understanding of the nature of these stars, their origin and
their evolution.
\subsection{Supernovae Type Ia}
A supernova explosion is the end of nuclear evolution of a massive
star. This is a very spectacular but very rare event, with the typical
rate of about one explosion per century in a galaxy like ours.
Among many types of supernovae those
of type Ia are most useful as cosmological probes.
They are standard candles with peak magnitudes
$$
m_{B,max} \approx 19.1 + 5 \log (z/0.1) = 17.1 +5 \log (z/0.04) ,
$$
(Branch and Tammann 1992, and references therein).
Most of the scatter may be
removed using the correlation between the absolute peak magnitude and
the initial rate of decline (Phillips 1993). A further result has been
obtained by Riess et al. (1995a, 1996), and Hamuy at al. (1995),
who found that SN Ia light curves form a well ordered one parameter
family, with somewhat different peak luminosities (range about 0.6 mag)
and shapes. Even the exponential declines have somewhat different slopes.
This work seems to indicate that the scatter in the Hubble diagram
of SN Ia in the redshift range $ 0.05 \leq z \leq 0.10 $ can be reduced
down to 0.1 mag in V band.
The rate of SN Ia is approximately 0.6 per $ 10^{10} ~ h^{-2} ~ L_{B, \odot } $
per century (Table 8 of van den Bergh and Tammann, 1991). The luminosity
density in the universe can be estimated with the CfA redshift surveys
(de Lapparent, Geller and Huchra 1989) to be
$ 0.8 \times 10^8 ~ L_{B, \odot } ~ h ~ Mpc^{-3} $. These two numbers can be
combined with Eq.~(1) to obtain the SN Ia rate for the whole sky:
$$
N_{_{SN ~ Ia}} \approx 300 \times 10^{0.6(m_{max} -17)} ~ yr^{-1} .
$$
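The numerical coefficient in Eq.~(2) can be reproduced with the following
back-of-the-envelope sketch; the $h$ dependence cancels, and the only
ingredient beyond the numbers quoted above is a Euclidean Hubble-law volume
with $c/H_0 = 3000 ~ h^{-1} ~ Mpc$:
\begin{verbatim}
import numpy as np

rate_per_L  = 0.6 / 100 / 1e10    # SN Ia per year per L_B,sun
lum_density = 0.8e8               # L_B,sun h Mpc^-3 (CfA surveys)
rate_density = rate_per_L * lum_density   # SN Ia yr^-1 h^3 Mpc^-3

def sn_rate(m_max):
    # All-sky rate of SN Ia brighter than m_max at peak (h factors cancel).
    z = 0.04 * 10 ** (0.2 * (m_max - 17.1))   # from the peak-magnitude relation
    d = 3000.0 * z                            # distance in h^-1 Mpc
    V = 4.0 / 3.0 * np.pi * d ** 3            # volume in h^-3 Mpc^3
    return rate_density * V                   # per year

print("%.0f SN Ia per year" % sn_rate(17.0))  # ~300, as in Eq. (2)
\end{verbatim}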
As a large fraction of the sky cannot be monitored because it is too close
to the sun, and the weather is never perfect, the maximum effective
detection rate is likely to be a factor of $ \sim 2 $ lower than that
given by Eq.~(2). Still, if an all sky variability survey can
reach magnitude 17 then over a hundred type Ia supernovae will be discovered
every year, providing excellent data to improve the Phillips,
Riess et al. and Hamuy et al. relation, and will allow even more
accurate study of the large scale flows (Riess et al. 1995b).
Also, such a survey would
provide a steady stream of alerts of supernovae prior to their
maximum brightness, allowing the most detailed follow-up studies
with the HST, the Keck, and other large telescopes.
\subsection{Quasars and other Active Galactic Nuclei}
A large fraction of active galactic nuclei (AGN) is variable, some with
very large amplitudes. One of the most efficient ways to discover
new quasars is a search for variable objects (Hawkins and V\'ernon 1993).
There are 12 active galactic nuclei listed as variable stars in
the General Catalogue of Variable Stars (Kholopov et al. 1988).
These were first found and catalogued as variables stars, and
subsequently found to be at cosmological distances. The following
is the list, with the observed range of magnitudes given in brackets:
AU CVn (14.2 - 20.0), W Com (11.5 - 17.5), X Com (15.9 - 17.9), GQ Com
(14.7 - 16.1), V1102 Cyg (15.5 - 17), V395 Her (16.1 - 17.7), V396 Her
(15.7 - 16.7), BL Lac (12.4 - 17.2), AU Leo (17. - ), AP Lib (14.0 - 16.7),
UX Psc (16. - ), BW Tau (13.7 - 16.4). No doubt there are many other
bright and variable active galactic nuclei in the sky, and new objects
appear all the time. A search for new AGNs as well as a continuous
monitoring of those which are already known is very important for our
understanding of these enigmatic objects.
\subsection{Gamma-ray Bursts}
One of the main driving forces in the plans for massive photometric
searches has been the desire to find optical counterparts (optical
flashes) associated with gamma-ray bursts (GRBs, cf. a chapter:
{\it Counterparts - General}, pages 382-452 in Fishman et al., 1994).
This is a very ambitious undertaking. There are two broad approaches.
One may wait for a trigger signal from a GRB detector, like BATSE, which
provides the exact time and an approximate direction to look at. An
alternative is to have a wide field non-stop monitoring of the sky
and look later into possible coincidences with GRB detections. In either
case the instrument and the data it will generate can be used for the
searches of all kinds of astronomical objects as described in this paper.
If a system is to work in the first mode then it would be best to have
a GRB detection system which could provide reasonably accurate positions (say
better than a degree) in real time for {\it all} strong GRBs, as these are
the most likely to have detectable counterparts. Unfortunatelly, BATSE
detects only $ \sim 40\% $ of strong bursts and provides instant positions
good to $ \sim 5-10^o $ (cf. Fishman et al. 1992, and references therein).
It would be ideal to have small GRB detectors, like the one on the
Ulysses planetary probe (Hurley et al. 1994), placed on a number of
geo-stationary satellites. Such instruments could provide real time
transmission of the information about every registered $ \gamma $ photon,
and with a time baseline between the satellites of $ \sim 0.2 $ seconds
the positions good to $ \sim 1^\circ $ could be available in real time for
almost all strong bursts. Such a good GRB alert system would put very
modest demands for the optical follow up.
In the other extreme, a blind search for optical flashes from GRBs, the
demand for the optical system capabilities are very severe, well in
excess of any other project mentioned in this paper. It seems reasonable
to expect that the very powerful system that may be required, like the TOMBO
Project (Transient Observatory for Microlensing and Bursting Objects, Otani
et al. 1996), would start small and gradually expand to the data rate
$ \sim $ terabyte per hour, along the way addressing most topics presented
in this paper.
\subsection{Killer Asteroids}
While it is possible that global disasters, like the extinction of
dinosaurs, may be caused by impacts of large asteroids,
such events are extremely rare (Chapman and Morrison 1994, and
references therein).
On the other hand smaller asteroid or cometary impacts which
happen every century may be of a considerable local concern. The
best known example is the Tunguska event (Chyba et al. 1993, and
references therein). In such cases there is no need to destroy
the incoming ``killer asteroid'' in outer space; it is sufficient
to provide an early warning of the impact and evacuate the site.
The less devastating events are much more common. There are
several multi-kiloton explosions in the upper atmosphere when
small asteroids or large meteorites disintegrate, producing very
spectacular displays which are harmless (cf. Chyba 1993, and references
therein). With a sufficiently early warning such events could be
observed and they could even provide considerable entertainment,
like the impact of the comet Shoemaker-Levy
on Jupiter in the summer of 1994, and the bright comet
Hyakutake in the spring of 1996.
An early warning system detecting not so deadly `killer asteroids',
or rather cosmic boulders, may be feasible and inexpensive.
A mini-asteroid with
a diameter of 35 meters ($10^{-5} $ of our moon diameter) would
appear as an object of $ \sim 13 $ magnitude while at the distance
of the moon, in the direction opposite to the sun.
Moving with a typical velocity of $ \sim 10 ~ km ~ s^{-1} $
it would reach earth in $ \sim 10 $ hours. Close
fly-byes would be far more common, and
such events are currently detected with the Spacewatch program
(cf. Rabinowitz et al. 1993a,b, and references therein).
If the relative transverse velocity of the cosmic boulder with
respect to earth is
$ 10 ~ km ~ s^{-1} $ then at the distance of the Moon it corresponds
to the proper motion of $ \sim 5'' ~ s^{-1} $. Of course, if the object
is heading for earth then the proper motion is much reduced, as
the motion is mostly towards the observer.
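The quoted brightness and angular speed can be checked with standard asteroid
photometry; in the sketch below the geometric albedo is an assumed
illustrative value, not a number taken from the text:
\begin{verbatim}
import numpy as np

AU_KM   = 1.496e8
MOON_KM = 3.844e5

def apparent_mag(diameter_m, dist_km, albedo=0.1):
    # Opposition magnitude of an asteroid at dist_km from the observer,
    # ~1 AU from the sun, using the standard diameter-albedo-H relation.
    D_km  = diameter_m / 1000.0
    H     = 5.0 * np.log10(1329.0 / (D_km * np.sqrt(albedo)))
    delta = dist_km / AU_KM
    return H + 5.0 * np.log10(1.0 * delta)

def proper_motion(v_km_s, dist_km):
    # Angular speed in arcsec/s for transverse velocity v at dist_km.
    return v_km_s / dist_km * 206265.0

print("m  ~ %.1f mag" % apparent_mag(35.0, MOON_KM))       # ~12-13 mag
print("mu ~ %.1f arcsec/s" % proper_motion(10.0, MOON_KM)) # ~5 arcsec/s
\end{verbatim}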
According to Rabinowitz (1993a, Fig. 12) one boulder with a diameter
of 30 meters collides with earth once per year, and the more common
10 meter boulders do it ten times a year. The cross section to
come to earth as close as the moon, i.e. within 60 earth radii,
is larger by a factor $ \sim 60^2 = 3,600 $. Therefore, on any given day
we may expect 10 boulders of 30 meter diameter and 100 boulders of
10 meter diameter to pass closer to us than our Moon. At their closest
approach these are brighter than 13 and 15.5 magnitude, respectively.
There may be dozens of nearby cosmic boulders brighter than 16 magnitude
at any time. They are the brightest when looked at in the anti-solar
direction. If a fair fraction of these could be detected and
recognized in real time they would offer a fair amount of excitement.
And we would learn about inhabitants of the solar system as well.
Recently, a $ \sim 300 $ meter diameter asteroid
was detected at the distance $ \sim 450,000 $ kilometers
(Spahr 1996, Spahr \& Hegenrother 1996).
It was expected to be the closest to us on May 19.690, 1996 UT,
and to be 11th magnitude at that time.
\subsection{Other Planetary Systems, Dark Matter}
The first extrasolar planetary system with a few earth-mass planets
has already been discovered (Wolszczan and Frail 1992, Wolszczan 1994).
However, this is considered peculiar, with the planets orbiting a
neutron star. A number of super-Jupiter planets were also found around
a few nearby solar-type stars: 51 Peg (Mayor and Queloz 1995), 70 Vir
(Marcy and Butler 1996), and 47 UMa (Butler and Marcy 1996).
No doubt a detection of earth-mass planets around solar-type stars
would be very important. The only known way to conduct a
search for earth-mass planets
with the technology which is currently available is through
gravitational microlensing (Mao and Paczy\'nski 1991, Gould and Loeb 1992,
Bennett and Rhie 1996, Paczy\'nski 1996a, and references therein).
This project requires a fairly powerful hardware, $ \sim 1 $ meter class
telescopes, and it has to be targeted in the direction where microlensing
is known to be a relatively frequent phenomenon, i.e. the galactic bulge.
It is not known how large an area in the sky is suitable for the search,
and how many stars are detectable there from the ground. Reasonable
estimates are $ \sim 100 $ square degrees and up to $ \sim 10^9 $ stars.
There are various approaches proposed. My preference would be to
look for high magnification events with the amplitude of up to
1 magnitude and a duration of $ \sim 1 $ hour. This would call
for a continuous monitoring program in order to acquire a large number of
photometric measurements well covering short events. If we assume that
every star has one earth-mass planet then the so called optical
depth to microlensing by such planets would be $ \sim 10^{-11} $,
and it would take $ \sim 100 $ hours of continuous photometric monitoring of
$ 10^9 $ stars to detect a single planetary microlensing event, i.e.
up to 20 such events could be detected every year from a good ground
based site. Clearly, this project is very demanding in terms of data
acquisition and data processing.
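The event-rate arithmetic behind these numbers is a simple order-of-magnitude
estimate (a sketch; the yearly monitoring time is an assumed illustrative
value):
\begin{verbatim}
tau     = 1e-11   # optical depth to microlensing by earth-mass planets
n_stars = 1e9     # stars monitored continuously
t_event = 1.0     # typical event duration in hours

# Order-of-magnitude event rate ~ tau * N / t_event.
events_per_hour = tau * n_stars / t_event
hours_per_year  = 2000.0   # assumed useful monitoring time per year

print("one event per ~%.0f hours" % (1.0 / events_per_hour))
print("~%.0f events per year" % (events_per_hour * hours_per_year))
\end{verbatim}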
Such a search could
also either detect dark matter with compact objects in the mass range
$ \sim 10^{-8} - 10^6 ~ M_{\odot} $ (Paczy\'nski 1996 and references
therein), or place very stringent upper limits.
\subsection{Local Luminosity and Mass Functions, Brown Dwarfs}
In general, gravitational lensing provides only statistical information
about the masses of lensing objects (Paczy\'nski 1996b). However,
any very high proper motion star must be nearby, and hence its distance
can be measured with a trigonometric parallax. For a given stellar
trajectory it is possible to predict when the star will come close
enough to a distant source (that is close in angle, in the projection
onto the sky)
to act as a gravitational lens. If the microlensing event can be
detected either photometrically (Paczy\'nski 1995) or astrometrically
(Paczy\'nski 1996c) then the mass of the lens can be directly measured.
The only problem is that in order to have a reasonable chance for
a microlensing event the high proper motion star must be located in
a region of very high density of background sources, i.e. within the
Milky Way. A search for such objects is very difficult because of
crowding. However, once the rare high proper motion objects are
found, the measurement of their masses by means of microlensing is a
fairly straightforward process, as such events can be predicted ahead
of time, just like occultations of stars by known asteroids.
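The prediction itself amounts to simple geometry: given the current angular
offset of the lens from a background source and its proper motion, one can
solve for the epoch and impact parameter of closest approach. A minimal
sketch (with purely illustrative numbers, assuming linear motion) is:
\begin{verbatim}
# closest approach of a high proper motion star to a background source;
# offsets in arcsec, proper motion in arcsec/yr
x0, y0   = 2.0, -1.0      # current lens-source offset (illustrative)
mux, muy = -0.8, 0.6      # proper motion (illustrative)

t_min = -(x0 * mux + y0 * muy) / (mux**2 + muy**2)   # years from now
b_min = ((x0 + mux * t_min)**2 + (y0 + muy * t_min)**2) ** 0.5

print("closest approach in %.1f yr at %.2f arcsec" % (t_min, b_min))
\end{verbatim}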
The recent discovery of very faint nearby objects, most likely field
brown dwarfs, indicates that there may be a significant population
of sub-stellar objects in the galactic disk (Hawkins et al. 1996).
A discovery of such objects in the Milky Way would offer the possibility
of measuring their masses by means of gravitational microlensing.
This project, just as the one described in the previous sub-section,
is very demanding in terms of data acquisition and data processing rate.
\section{Implementation}
The searches for the various objects described in the previous section
cover a very broad range in the required instrumentation. Some,
like the search for variable stars brighter than $ 13 ~ mag $, can
be conducted with a telephoto lens with a CCD camera attached to it. Of
course, in order to cover the whole sky with such a search, dozens
or even hundreds of such simple instruments may be needed.
The searches for nearby asteroids do not demand much larger apertures,
as many of these are expected to be brighter than $ 13 ~ mag $. However,
as they move rapidly, very efficient data processing would be required
in order to notice them before they are gone.
The searches for variable stars are useful at any magnitude limit,
as the current catalogs are not complete, except (perhaps)
for the brightest $ \sim 1000 $ stars. But some variables, like supernovae,
are likely to be faint, as they are very far away. It is unlikely
that a useful supernova search can be conducted with an instrument
with a diameter smaller than about 20 cm. Any project involving a search
for microlensing events calls for a $ \sim 1 $ meter telescope.
A major technical issue facing any massive search is data processing.
The experience of the current microlensing searches has demonstrated that
robust software can be developed to handle billions of photometric
measurements automatically (e.g. Pratt et al. 1996). So far such
software runs on workstations. However, today's personal computers
are as powerful as yesterday's workstations, or as supercomputers
used to be. Therefore, there is no problem in principle in transferring
the know-how to the level of serious amateur astronomers.
Once the local data processing is under control, the second major problem
is communication: how to make those gigabytes (or soon terabytes)
available to the world? Clearly, the Internet is of some help, but not
at this volume, or at least not yet. The problem of effective distribution
of the vast amount of information collected in modern microlensing searches
has not been solved yet. No doubt the solution will be found some day,
hopefully before too long.
Some steps have already been taken on the road
towards this brave new world of massive all-sky searches and monitoring.
For example, the
EROS, MACHO and OGLE collaborations provide up-to-date information
about their microlensing searches and other findings,
and a complete bibliography of their work on the
World Wide Web and by anonymous ftp.
There are other projects under way, some of them active for
a long time, which are now accessible over the Internet.
The members of the American Association of Variable Star Observers (AAVSO)
have been monitoring a large number of variable stars for many decades.
The organization publishes an electronic journal, AAVSO NEWS FLASH.
An important electronic news system is the Variable Star NETwork (VSNET).
Another Internet-based organization, The Amateur Sky Survey
(TASS), has the explicit aim to monitor the whole sky with CCD
detectors, and to provide full access to all data over the Internet.
Yet another fascinating on-line demonstration of what modern technology can do
when combined with human ingenuity is provided by ``Stardial'', set up
by Dr. Peter R. McCullough on the roof
of the astronomy building on the campus of the University of Illinois
at Urbana-Champaign. Stardial is
a stationary weather-proof electronic camera for
recording images of the sky at night autonomously. It is
intended primarily for education, but it may also be of interest to
astronomers, amateur or professional.
At the Stardial site you will find the growing archive of data
coming from the 8x5 degree field-of-view camera. The limiting stellar
magnitude is $ \sim 12.5 $, through an approximately R filter bandpass.
The links providing access to all the systems mentioned above can
be found at: \\
\indent \indent \indent \indent \indent
http://www.astro.princeton.edu/\~\/richmond/surveys.html \\
\noindent
No doubt there are many more groups which are already active, or which
are planning massive photometric and/or astrometric searches, and which
communicate over the Internet. If you know of any
other sites or groups, please let us know by sending e-mail to: \\
\indent \indent \indent \indent \indent
bp@astro.princeton.edu \hskip 1.0cm (Bohdan Paczy\'nski), \\
or to: \\
\indent \indent \indent \indent
richmond@astro.princeton.edu \hskip 1.0cm (Michael Richmond).
\vskip 0.3cm
While the number of new searches increases rapidly, and so do the
volume, diversity and quality of the data, there are
many challenges and many unsolved problems in the areas
of data acquisition, processing, archiving, and distribution.
The volume of data at some sites is already many terabytes,
so there is a need to develop efficient and user-friendly ``search engines''.
There is a demand for new scientific questions which can be asked
in the world of plentiful data. The learning curve is likely to be
very long. The full power of this new approach to observational
astrophysics will be unleashed if monitoring of the whole sky to
ever fainter limits, and ever more frequently, can be sustained for an
indefinite length of time. However, for that to be possible,
very inexpensive solutions have to be found for all aspects of these
projects. Over the years many wonderful programs have
been discontinued for lack of funds. I am optimistic. The
few examples given above show the magic of the Internet, and they
also demonstrate that ingenious people are more important
than big budgets.
\section{Acknowledgments}
I am very grateful to Dr. Michael Richmond for setting up the WWW page
with the links to the information about many on-going massive variability
searches.
This work was supported by the NSF grants AST-9313620 and AST-9530478.
\section{Introduction}
In recent years it has proved possible to solve the problem of
back-reaction in the context of preequilibrium parton production in the
quark--gluon plasma \cite{CEKMS,KES}. This addresses the scenario in which
two ultrarelativistic nuclei collide and generate color
charges on each other, which in turn create a chromoelectric field
between the receding disk-like nuclei. Parton pairs then tunnel
out of this chromoelectric field through the Schwinger mechanism
\cite{Sau,HE,Sch} and may eventually reach thermal
equilibrium if the plasma conditions pertain for a sufficient length of
time. While the tunneling and the thermalizing collisions proceed, the
chromoelectric field accelerates the partons, producing a current which
in turn modifies the field. This back-reaction may eventually set up
plasma oscillations.
This picture for preequilibrium parton production has been studied
in a transport formalism \cite{BC} in which the chromoelectric field is
taken to be classical and abelian and collisions between the partons are
completely ignored so that the only interaction of the partons is with
the classical electric field, this interaction being the source of the
back-reaction. Alternatively, the mutual scattering
between partons has been
considered in an approximation that assumes rapid thermalization and
treats the collisions in a relaxation approximation about the thermal
distribution \cite{KM}; in this study no back-reaction was allowed.
Both interparton collisions and back-reaction were considered in a
calculation \cite{GKM} done in the hydrodynamic limit, which thus took
into account only electric conduction within the parton plasma.
All of these studies focused on the region of central rapidity.
The more recent calculations \cite{CEKMS,KES} carried out a comparison
between the transport formalism for back-reaction and the results of a
field-theory calculation for the equivalent situation (see also
\cite{CM,KESCM1,KESCM2}) and found a remarkable similarity between the
quantal, field-theory results and those of the classical transport
equations using a Schwinger source term. This link has also been
established formally to a certain degree \cite{BE}. (This close
relationship tends to fail, in part, for a system
confined to a finite volume as a dimension of this
volume becomes comparable with the reciprocal parton effective
mass \cite{Eis1}.)
The studies relating field theory with transport formalism were all
carried out under the assumption of a classical, abelian electric field
and no parton--parton scattering. The removal of the assumption of a
classical field, and thus the inclusion of interparticle scattering
through the exchange of quanta, has been considered quite recently
\cite{CHKMPA} for one spatial dimension.
The study reported here is carried out within the framework of the transport
formalism (the parallel field-theory case is also currently under
study \cite{Eis2}) and incorporates both back-reaction and a collision term
in the approximation of relaxation to thermal equilibrium. Thus it
assumes that thermalization takes place fast enough so that it makes
sense to speak of the ongoing tunneling of partons, with back-reaction,
as the collisions produce conditions of thermal equilibrium. It
may be seen as combining the features of the studies of Bia\l as and
Czy\.z \cite{BC} with those of Kajantie and Matsui \cite{KM}, or of
paralleling the calculation \cite{GKM} of Gatoff, Kerman, and Matsui,
but at the level of the transport formalism without further appeal to
hydrodynamics. Along with the other studies noted, it restricts itself
to the region of central rapidity. The study provides a model for
comparing the interplay between the thermalizing effects of particle
collisions and the plasma oscillations produced by back-reaction.
\section{Formalism}
The transport formalism for back-reaction using boost-invariant variables
has been presented previously in considerable detail \cite{CEKMS}
and is modified here
only by the appearance of the collision term in the approximate form
appropriate to relaxation to thermal equilibrium \cite{KM}. The
Boltzmann--Vlasov equation in $3 + 1$ dimensions then reads, in the
notation of \cite{CEKMS},
\begin{equation} \label{BV}
p^\mu\frac{\partial f}{\partial q^\mu}
- ep^\mu F_{\mu\nu} \frac{\partial f}{\partial p_\nu} = S + C,
\end{equation}
where $f = f(q^\mu,p^\mu)$ is the distribution function, $S$ is the
Schwinger source term, and $C$ is the relaxation-approximation collision
term. The electromagnetic field is $F_{\mu\nu}$ and the electric charge
$e.$ The variables we take are
\begin{equation} \label{variables}
q^\mu = (\tau,x,y,\eta),\quad\quad p_\mu = (p_\tau,p_x,p_y,p_\eta),
\end{equation}
where $\tau = \sqrt{t^2-z^2}$ is the proper time and
$\eta = \frac{1}{2}\log[(t+z)/(t-z)]$ is the rapidity. Thus, as usual, the
ordinary, laboratory-frame coordinates are given by
\begin{equation} \label{labcoords}
z = \tau\sinh\eta, \quad\quad t = \tau\cosh\eta.
\end{equation}
The momentum coordinates in eq.~(\ref{variables}) relate to the
laboratory momenta through
\begin{equation} \label{momenta}
p_\tau = (Et - pz)/\tau, \quad\quad p_\eta = -Ez + tp,
\end{equation}
where $E$ is the energy and $p$ is the $z$-component of the momentum, the
$z$-axis having been taken parallel to the initial nucleus--nucleus
collision direction or initial electric field direction.
Inserting expressions \cite{CEKMS,KM} for the source term $S$ and for
the collision term $C$, and restricting to $1 + 1$
dimensions, the Boltzmann--Vlasov equation becomes
\begin{eqnarray} \label{transport}
\frac{\partial f}{\partial\tau}
+ e\tau{\cal E}(\tau)\frac{\partial f}{\partial p_\eta} & = &
\pm(1\pm 2f)e\tau|{\cal E}(\tau)|
\log\left\{1\pm\exp\left[-\frac{\pi m^2}{|e{\cal E}(\tau)|}\right]\right\}
\delta(p_\eta)
\nonumber \\
& - & \frac{f - f_{\rm eq}}{\tau_{\rm c}}.
\end{eqnarray}
Here the upper sign refers throughout to boson production and the lower
sign to the fermion case, and we have incorporated the necessary
\cite{CEKMS} boson enhancement
and fermion blocking factor $(1\pm 2f).$ The electric field is given
for these variables by
\begin{equation} \label{E}
{\cal E}(\tau) = \frac{F_{\eta\tau}}{\tau} = -\frac{1}{\tau}\frac{dA}{d\tau},
\end{equation}
where $A = A_\eta(\tau)$ is the only nonvanishing component of the
electromagnetic four-vector potential in these coordinates. In
eq.~(\ref{transport}) we have assumed, as usual, that pairs emerge
with vanishing $p_\eta,$ which is the boost-invariant equivalent of the
conventional assumption that pairs are produced with zero momentum in the
laboratory frame.
The thermal equilibrium distribution is
\begin{equation} \label{feq}
f_{\rm eq}(p_\eta,\tau) = \frac{1}{\exp[p_\tau/T]\mp 1},
\end{equation}
where $T$ is the system temperature, determined at each moment in proper
time from the requirement \cite{KM}
\begin{equation} \label{T}
\int\frac{dp_\eta}{2\pi}\ f(p_\eta,\tau)\ p_\tau =
\int\frac{dp_\eta}{2\pi}\ f_{\rm eq}[T(\tau);\, p_\eta,\tau]\ p_\tau;
\end{equation}
here and throughout $p_\tau = \sqrt{m^2 + p_\eta^2/\tau^2},$ where $m$
is the parton effective mass, and the independent variables in terms of
which the
transport equations are evolved are $p_\eta$ and $\tau.$ In
eq.~(\ref{transport}), $\tau_{\rm c}$ is the collision time or time for
relaxation to thermal equilibrium.
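As a minimal numerical sketch (not the code used for the results below),
the consistency condition of eq.~(\ref{T}) can be solved for $T$ at a given
proper-time step by matching energy densities on a discretized $p_\eta$ grid;
the grid, the root-finding bracket, and the variable names are illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def energy_density(f, p_eta, tau, m=1.0):
    # integral of f * p_tau dp_eta / (2 pi), Riemann sum on a uniform grid
    p_tau = np.sqrt(m**2 + (p_eta / tau)**2)
    dp = p_eta[1] - p_eta[0]
    return np.sum(f * p_tau) * dp / (2.0 * np.pi)

def f_equilibrium(T, p_eta, tau, m=1.0, bosons=True):
    p_tau = np.sqrt(m**2 + (p_eta / tau)**2)
    return 1.0 / (np.exp(p_tau / T) + (-1.0 if bosons else 1.0))

def temperature(f, p_eta, tau, m=1.0, bosons=True):
    # match the energy density of f with that of f_eq(T)
    target = energy_density(f, p_eta, tau, m)
    g = lambda T: energy_density(
        f_equilibrium(T, p_eta, tau, m, bosons), p_eta, tau, m) - target
    return brentq(g, 1e-3, 50.0)   # bracket chosen by hand
\end{verbatim}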
Back-reaction generates variations in ${\cal E}(\tau)$ as a function of
proper-time through the Maxwell equation
\begin{eqnarray} \label{Maxwell}
-\tau\frac{d{\cal E}}{d\tau} = j_\eta^{\rm cond} + j_\eta^{\rm pol} & = &
2e\int\frac{dp_\eta}{2\pi\tau p_\tau}\ f\ p_\eta \nonumber \\
& \pm & \left[1\pm 2f(p_\eta=0,\tau)\right]
\frac{me\tau}{\pi}{\rm sign}[{\cal E}(\tau)] \nonumber \\
& \times & \log\left\{1\pm
\exp\left[-\frac{\pi m^2}{|e{\cal E}(\tau)|}\right]\right\};
\end{eqnarray}
here the two contributions on the right-hand side are for the conduction
and polarization currents, respectively.
Note that in $1 + 1$ dimensions the units of electric charge $e$ and of
the electric field ${\cal E}$ are both energy. For numerical convenience
\cite{CEKMS} a new variable is introduced, namely,
\begin{equation} \label{u}
u = \log(m\tau), \quad\quad \tau = (1/m)\exp(u).
\end{equation}
Equations (\ref{transport}) and (\ref{Maxwell}) are to be solved as a
system of partial differential equations in the independent variables
$p_\eta$ and $\tau$ for the dependent variables $f$ and ${\cal E},$ determining
the temperature $T$ at each proper-time step from the consistency
condition of eq.~(\ref{T}).
\section{Numerical results and conclusions}
The numerical procedures used here are patterned after those of
ref.~\cite{Eis1}, and involve either the use of a Lax method or a method of
characteristics. In practice the latter is considerably more efficient
in this context and all results reported here are based on it. We note
that these methods are completely different from those used in
ref.~\cite{CEKMS}; as a check on numerical procedures we verified that full
agreement was achieved with the results reported there. All quantities
having dimensions of energy are scaled \cite{CEKMS} here to units of the
parton effective mass $m,$ while quantities with dimensions of
length are given in
terms of the inverse of this quantity, $1/m.$
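For orientation, a single (heavily simplified) proper-time step of such a
characteristics scheme could be sketched as follows; the grid, the smearing
of the $\delta(p_\eta)$ source over one cell, the thermal distribution, and
the accompanying update of ${\cal E}$ from eq.~(\ref{Maxwell}) are
placeholders rather than the actual implementation used here:
\begin{verbatim}
import numpy as np

def step(f, p_eta, E, tau, dtau, e=1.0, m=1.0, tau_c=1.0, bosons=True):
    # 1) advect f along the characteristics dp_eta/dtau = e*tau*E
    f = np.interp(p_eta - e * tau * E * dtau, p_eta, f)

    # 2) Schwinger source with the enhancement/blocking factor (1 +- 2f),
    #    delta(p_eta) smeared over one grid cell of width dp
    pm = 1.0 if bosons else -1.0
    rate = e * tau * abs(E) * np.log(1.0 + pm * np.exp(-np.pi * m**2 / abs(e * E)))
    dp = p_eta[1] - p_eta[0]
    src = np.where(np.abs(p_eta) < dp / 2.0,
                   pm * (1.0 + pm * 2.0 * f) * rate / dp, 0.0)

    # 3) relaxation towards the thermal distribution (computed elsewhere)
    f_eq = np.zeros_like(f)      # placeholder
    return f + dtau * (src - (f - f_eq) / tau_c)
\end{verbatim}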
In order to present a relatively limited number of cases, we fix
all our initial conditions at $u = -2$ in terms of the variable of
eq.~(\ref{u}). At that point in proper time we take ${\cal E} = 4,$ with no
partons present; the charge is set to $e = 1.$ This has
been found \cite{CEKMS} to be a rather representative case; in particular,
little is changed by applying the initial conditions at $u = 0$ rather
than at $u = -2.$ We shall exhibit results for three values of $\tau_{\rm c},$
namely 0.2, 1, and 10.
Our results are presented in fig.~1 for boson production and in fig.~2
for fermions. The uppermost graph in each case shows the temperature
derived from the consistency condition of eq.~(\ref{T}) while the middle
curves are for the electric field ${\cal E}$ and the lower graph gives the
total currents. Both for bosons and for fermions,
the cases with $\tau_{\rm c} = 0.2$ and $\tau_{\rm c} = 1$ involve a collision term
that damps the distributions very rapidly. Thus no signs of plasma
oscillations, which would arise if back-reaction came into play
unhindered, are seen. For these values, the electric field and
total current damp rather quickly to zero, and a fixed value
of $T$ is reached. The temperature peaks at around $1.5 m$ for the boson
cases, and near $2m$ for fermions. The temperature ultimately achieved
depends, of course, on $\tau_{\rm c}.$
For $\tau_{\rm c} = 10,$ the plasma oscillations of back-reaction are clearly
visible in the electric field and in the total current, both for bosons
and for fermions. In fact, these cases are rather similar to their
counterparts without thermalizing collisions \cite{CEKMS}, except for
greater damping, especially of the current, when thermalization is
involved. The plasma frequency is changed only a little by this damping.
The plasma oscillations are reflected very slightly in the
temperature behavior in a ripple at the onset of the oscillations,
where they naturally
have their largest excursion. However, the oscillations have the effect
of pushing off the region at which a constant temperature is reached.
Extending the calculation further out in the variable $u,$ one finds that
the temperature in that case levels off around $u \sim 5$ at a value of
$T \sim 0.38$ for bosons and 0.39 for fermions.
In conclusion, this calculation allows an exploration of the transition
between a domain dominated by parton collisions that bring about rapid
thermalization in the quark--gluon plasma and a domain governed in major
degree by back-reaction. In the first situation, the electric field from
which the parton pairs tunnel, and the current which is produced from
these pairs by acceleration in the field, both decay smoothly to zero and
a terminal temperature is reached. In the latter case, plasma
oscillations set in which delay somewhat the achievement of a final constant
temperature. This qualitative difference between the two situations
occurs for collision times about an order of magnitude larger than the
reciprocal effective parton mass.
{\sl Note added in proof:} After this paper was completed and posted in the
Los Alamos archive, I learned of a similar study carried out by
B. Banerjee, R.S. Bhalerao, and V. Ravishankar [Phys. Lett. B 224 (1989)
16]. The present work has several features that are different
from the previous one, notably, the application to bosons as well
as to fermions and the inclusion of factors for Bose--Einstein enhancement
or Fermi--Dirac blocking in the Schwinger source term. By comparing with the
field theory results, these factors have been found to be of considerable
importance \cite{CEKMS,KES,KESCM1,KESCM2}. The earlier work uses
massless fermions, and, while the initial motion is taken to be
one-dimensional as here, it includes a transverse momentum distribution,
so that it is difficult to make a direct quantitative comparison between
the two studies. There are also a number of technical differences between the
calculations. Qualitatively, very similar behavior is found and the
earlier study points out very clearly the necessity for treating the
interplay between back-reaction and thermalization. I am very grateful
to Professor R.S. Bhalerao for acquainting me with this earlier reference.
It is a pleasure to acknowledge useful conversations with Fred
Cooper, Salman Habib, Emil Mottola, Sebastian Schmidt, and Ben Svetitsky
on the subject matter of this paper. I also wish to express my warm
thanks to Professor Walter Greiner and the Institute for Theoretical
Physics at the University of Frankfurt and to Fredrick Cooper and Emil
Mottola at Los Alamos National Laboratory for their kind hospitality
while this work was being carried out.
This research was funded in part by the U.S.-Israel Binational
Science Foundation, in part by the Deutsche Forschungsgemeinschaft, and
in part by the Ne'eman Chair in Theoretical Nuclear Physics at Tel Aviv
University.
\vfill\eject
\vskip 2 true pc
\section{Introduction}
Reconstructing the top mass is critical to many of the physics goals
of the LHC. The top-quark mass is interesting in its own right as a
fundamental parameter of the standard model (SM), and for its role in
pinning down other aspects of the SM and its extensions. Our ability to
reconstruct $m_t$ also affects new physics search strategies, in which
top can appear as a signal or background. Reconstructing the top mass
on an event by event basis is an important tool for distinguishing top
production from other processes. Estimates for future runs at the
Tevatron, extrapolated from early Run 1 results,
suggest that $m_t$ will be measured at the
4~GeV level \cite{TEV33}. Recent improvements in Tevatron results
\cite{WARSAW} may lead to more optimistic conclusions.
At the LHC, given the enhanced statistics,
experiments may hope for an accuracy of 2~GeV \cite{mtLHC}.
But the ability to do
this will depend on how well systematic effects --- especially those
associated with gluon radiation --- are understood and controlled.
It is therefore crucial to properly simulate the relevant physics.
By virtue of its energy and luminosity, the LHC will be a top factory.
The top cross section at the 14~TeV LHC is more than 100 times larger than
at the 2~TeV Tevatron. This increase in production rate has a price, however:
an increase in gluon radiation. In a $t \bar t$ interaction, 7~TeV
protons can easily radiate quarks and gluons with only a
small penalty to be paid in the parton distributions. As a result, at
the LHC there will be a plethora of radiation associated with top pair
production. This radiation may well be the limiting factor in our
ability to reconstruct the top mass, both on an event by event basis,
and from global shapes. For example, the sheer quantity of radiation
could result in emissions associated with top production (as opposed to decay)
being included in the
$b$-quark jet cone, introducing spurious contributions to the top mass
reconstruction. Additional jets may also introduce difficulties in
choosing the appropriate jets for reconstructing momenta in the
lepton+jets channel.
This paper is an investigation of the effects of gluon radiation in
top events at the LHC. In the next section, we compare $t\bar t$ and
$t\bar t g$ production at the LHC to that at the Tevatron. We then
present in Section 3 a complete tree-level $\alpha_s^3$ calculation of
$p p \rightarrow W^+W^-b \bar b j$. We present distributions of the
radiated jets, with the radiation decomposed into production- and
decay-stage emission. In Section 4 we compare our matrix element
results with those from the parton shower Monte
Carlo HERWIG, and we comment on the
precision to which HERWIG
appears to have this physics implemented for gluon radiation in
top-quark production and decay. In Section 5 we present our
conclusions.
\goodbreak
\section{$t\bar t$ and $t \bar t g$ production at the Tevatron and LHC}
To understand the effects of gluon radiation in top events at the LHC,
it is useful to compare top production there and at the Tevatron. The
most obvious difference is in the $t \bar t$ production cross section,
which is on the order of 100 times higher at the LHC. More relevant for our
purposes is a similarity between the two machines: the fact that for
heavy quark production, the mass of the quark rather than the collider
energy sets the scale for the quark's transverse momentum. This is
illustrated in Figure~\ref{fig:pt}, which shows the transverse momentum of top
quarks produced at the Tevatron\footnote{With center-of-mass energy
1.8 TeV} (solid line) and LHC (dashed line),
normalized to the Tevatron cross section. Despite the factor of seven
difference in collider energy, the transverse momentum of the top
quarks at the two machines is remarkably similar. The only noticeable
effect of the LHC's higher energy is a slight spread in the
distribution at larger $p_T$. (And although we do not show it here,
top quarks are produced at the LHC with broader rapidity distributions.)
Similar results are seen for
$t \bar t j$ production.
This similarity in top quark spectra at the two machines has several
consequences. The most notable is the set of $x$ values at which the parton
distributions of the proton are probed at the two machines. At the
Tevatron, the parton typically has a fraction $x \simeq 0.2$ of the
proton's energy. At the LHC, the typical value is only $x \simeq
0.03$. This results in 90\% of the top quarks produced at the
Tevatron coming from $q \bar q$ annihilation, whereas at the LHC about 90\%
of the top quarks come from the $gg$ initial state. This means, among
other things, that we expect more gluon radiation in top production at
the LHC because of the gluons' larger color charge. A second
consequence of the similarity in top quark spectra is that gluon
emission in top quark {\it decay} at the LHC should be similar to that
at the Tevatron. At the LHC, therefore, gluon radiation is dominated
by production-stage emission.
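(For orientation, the typical momentum fraction needed to produce a $t\bar t$
pair near threshold is roughly $x \sim 2m_t/\sqrt{s}$, which gives
$\simeq 0.2$ for $\sqrt{s}=1.8$~TeV and $\simeq 0.025$ for
$\sqrt{s}=14$~TeV, consistent with the values quoted above.)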
We will discuss the full gluon distributions below, but because of the
production-stage dominance we can draw a few general conclusions about
the importance of radiation in top events by considering $t \bar t j$
production.\footnote{In this calculation and in those below, we have chosen
the strong coupling constant scale as follows. The factors of $\alpha_s$
associated with the lowest-order part of each process are evaluated
at $\sqrt{\hat{s}}$, the total subprocess center-of-mass energy. The
additional factor of $\alpha_s$ associated with emission of the extra jet is
evaluated at the jet's transverse energy $E_{Tj}$. Thus each
$t\bar t j$ cross section contains an overall factor
$\alpha_s^2(\sqrt{\hat{s}}) \alpha_s(E_{Tj})$.}
Figure~\ref{fig:sigmas} shows the ratio of cross sections for $t \bar t
g$ and $t \bar t$ production at the Tevatron (solid line) and LHC
(dashed line) as a function of the minimum transverse energy
$E_T$ of the gluon. The
great enhancement for production stage emission at the LHC can be
attributed to two sources. First, as mentioned above, gluons carry a
larger color charge than quarks. Therefore the color in the $gg$
initial state at the LHC enhances gluon emission by a factor of
approximately $C_A/C_F= 9/4$ over the $q \bar q$ initial state at the
Tevatron. Second, the parton distributions at the Tevatron fall very
steeply in the relevant $x$ range, making it difficult to
provide the additional energy required for a production stage
emission. At the LHC, the additional energy can be obtained with less
of a cost in the parton densities.
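The scale prescription described in the footnote above amounts to an overall
coupling factor $\alpha_s^2(\sqrt{\hat{s}})\,\alpha_s(E_{Tj})$ for each
$t\bar t j$ cross section. Purely for illustration, with one-loop running
(an assumption made here for the sketch, not the exact prescription of the
parton distributions used in the calculation), this factor can be evaluated as:
\begin{verbatim}
import math

def alpha_s(Q, nf=5, alpha_mz=0.118, mz=91.2):
    # one-loop running coupling (illustrative values only)
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_mz / (1.0 + b0 * alpha_mz * math.log(Q**2 / mz**2))

def coupling_factor(sqrt_shat, ETj):
    # alpha_s^2(sqrt(s_hat)) * alpha_s(E_Tj), as in the footnote
    return alpha_s(sqrt_shat)**2 * alpha_s(ETj)
\end{verbatim}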
These effects are illustrated in the table, where we compare the
contributions to $t \bar t$ and $t \bar t j$ production at the Tevatron
and LHC. Cross sections are given in picobarns with cuts on the
extra jet as indicated.
If we compare the ratios $\sigma(t\bar t j)/\sigma(t \bar t)$ for the
$q\bar q$ and $gg$ initial states, we see an enhancement for
$gg$ compared to $q \bar q$, as we would expect due to the larger color
factor in the $gg$ initial state. This happens for both colliders.
A closer look shows that the $gg$
enhancement is larger at the LHC, where we have, for $E_{Tj}>40\ {\rm GeV}$,
$\sigma(gg\to t\bar t g)/\sigma(gg\to t\bar t) = 0.46$ and
$\sigma(q\bar q\to t\bar t g)/\sigma(q\bar q\to t\bar t)=0.16$.
At the Tevatron, these ratios are, respectively, 0.1 and 0.07.
The larger increase in the $gg$ cross section over $q \bar q$ at the LHC is
due to the difference in behavior of the parton distributions discussed
above.
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
\rule[-1.2ex]{0mm}{4ex}
& & $q\bar q$ & $gg$ & $qg, \bar q g$ \\ \hline
Tevatron & $t \bar t$ & 2.4 & 0.2 & - \\
& $t \bar t j, E_{Tj}> 10\ {\rm GeV}$ & 1.1 & 0.2 & 0.1 \\
& $t \bar t j, E_{Tj}> 40\ {\rm GeV}$ & 0.17 & 0.02 & 0.03 \\ \hline
LHC & $t \bar t$ & 50 & 330 & - \\
& $t \bar t j, E_{Tj}> 10\ {\rm GeV}$ & 35 & 590 & 146 \\
& $t \bar t j, E_{Tj}> 40\ {\rm GeV}$ & 8 & 151 & 63 \\
\hline
\end{tabular}
\end{center}
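The ratios quoted above can be read off the table directly; a trivial check
(cross sections in pb copied from the table) is:
\begin{verbatim}
# sigma(t tbar) and sigma(t tbar j, E_Tj > 40 GeV), from the table
sigma_tt  = {"Tev qq": 2.4,  "Tev gg": 0.2,  "LHC qq": 50.0, "LHC gg": 330.0}
sigma_ttj = {"Tev qq": 0.17, "Tev gg": 0.02, "LHC qq": 8.0,  "LHC gg": 151.0}

for key in sigma_tt:
    print(key, round(sigma_ttj[key] / sigma_tt[key], 2))
# -> 0.07, 0.1 (Tevatron); 0.16, 0.46 (LHC)
\end{verbatim}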
Although Fig.~\ref{fig:sigmas} provides an indication of the relative
importance of
gluon radiation at the Tevatron and LHC, it should not be taken too
literally. For example, it should not be translated directly into an
expected number of top events containing an extra gluon. There are
several reasons for this. First, only production-stage radiation is
explicitly included. Second, and more important, it represents a
fixed-order matrix element calculation which includes neither virtual
effects nor effects due to multiple gluon emission, both of which can
be important for low gluon energies.
In fact the figure serves as a guide to the regions where we can and
cannot trust the matrix-element results. Roughly speaking, they are
reliable when the $t \bar t g$ cross section is well below $\sigma(t
\bar t)$. This is satisfied at the Tevatron for all $E_T$ cuts shown.
At the LHC, however, the first-order cross section rises dramatically
with decreasing $E_T$ cut, and the $t \bar t g$ cross section with
gluon transverse energies greater than 10 GeV {\it exceeds} the
lowest-order cross section by a factor of 2, as can be seen in
Fig.~\ref{fig:sigmas} and the table.
Clearly virtual and multi-gluon effects
must be important there. We therefore restrict our LHC analysis to
gluons with transverse momentum greater than 40 GeV in what follows.
\section{Gluon radiation in top production and decay}
It is useful to distinguish between two different types of radiation
in $ t \bar t$ processes, as we have implicitly done above
and as has been discussed in previous work \cite{KOSetc,MOS,OSS}.
Gluons can be radiated in either the top
production or decay stages.
Production-stage emission occurs before the
top quark goes on shell and decay-stage emission occurs only after
the top quark goes on shell. In principle, an event with an extra jet
can be classified as `production' or
`decay' by looking at the invariant mass of the decay products. In
production emission events, the $W$ and $b$ momenta will combine to give
the top momenta. In decay emission events, the gluon momentum must also be
included to reconstruct the top momenta.
This interpretation is exact at the parton level in the narrow width
approximation. Finite top width effects can blur this interpretation due to
interferences between production-and decay-stage emissions. However,
the classification is still useful in our case because the top width
of 1.5~GeV is small compared to the 40~GeV gluon $E_T$ cut imposed in
the matrix element calculations. It should be kept in mind that this
applies at the level of theory. In an experiment, the
production-decay distinction is further blurred by jet energy
resolution and ambiguities associated with combinatorics and the like.
We have performed a complete tree-level $O(\alpha_s^3)$ calculation of
$p p \rightarrow W^+W^-b \bar b j$ at 14 TeV collision energy. The
calculation was performed as in \cite{OSS}, with the exception of the
choice of $\alpha_s$ scale as discussed above. We include all contributing
diagrams\footnote{We include all processes that give rise to an extra
jet: $q\bar{q} \rightarrow b\bar{b} W^+W^- g,$ $gg \to b\bar{b} W^+W^-
g,$ and $q g (\bar{q} g) \to b\bar{b} W^+W^- q (\bar q)$.}
and their interferences (with helicity
amplitudes generated by MadGraph \cite{MADGRAPH}), and all top width
and $b$ mass effects. Note that we do {\it not} include radiation off
the $W$ decay products. We use MRS(A$'$) parton distributions
\cite{MRSA}. The kinematic cuts imposed on the final-state partons
are (the subscript $j$ refers to the extra jet only):\footnote{The
cuts are applied to both the $b$ and $\bar b$ quarks.}
\begin{eqnarray}
|\eta_j| \> & \leq & \> 3 \; ,\nonumber \\
|\eta_b| \> & \leq & \> 2 \; ,\nonumber \\
E_{Tj} \> & \geq & \> 40 \ {\rm GeV} \; , \nonumber \\
E_{Tb} \> & \geq & \> 20 \ {\rm GeV} \; , \nonumber \\
\Delta R_{bj}, \Delta R_{b\bar b} \> & \geq & \> 0.4 \; .
\label{cuts}
\end{eqnarray}
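For concreteness, the acceptance of eq.~(\ref{cuts}) can be encoded as a
simple filter on parton-level kinematics; the following sketch (with
illustrative data structures, not the code actually used for the results
below) shows the logic:
\begin{verbatim}
import math

def delta_R(a, b):
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(a["eta"] - b["eta"], dphi)

def pass_cuts(jet, b, bbar):
    return (abs(jet["eta"]) <= 3.0 and
            abs(b["eta"]) <= 2.0 and abs(bbar["eta"]) <= 2.0 and
            jet["ET"] >= 40.0 and
            b["ET"] >= 20.0 and bbar["ET"] >= 20.0 and
            delta_R(b, jet) >= 0.4 and delta_R(bbar, jet) >= 0.4 and
            delta_R(b, bbar) >= 0.4)
\end{verbatim}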
The resulting distributions for the extra jet at the LHC are shown in
Figures \ref{fig:et}--\ref{fig:dr}.
In each figure the distribution is decomposed into contributions from
production- (dashed line) and decay-stage (solid line) radiation
according to final-state kinematics
as described in \cite{OSS}.
The most obvious feature of these distributions is the dominance of production
over decay emission, due to the enhancements in production
emission discussed above.
The decay contribution does not receive this enhancement because its
behavior is determined not by the collider energy, but by the
phase space of a 175~GeV top-quark decay.
In addition to the relative size, the kinematics of the two types
of emission are also interesting. Figure \ref{fig:et}
shows the jet $E_T$ distribution.
Both contributions fall off with increasing $E_T$, but production emission
extends to much higher values. The smaller values of $E_T$ to which decay
emission is constrained are again the consequence of the top
decay kinematics. Recall that even at the LHC, top quarks are produced with
relatively modest transverse momentum (cf. Fig.~\ref{fig:pt}),
so that gluons from the
decay do not receive much of a boost in $E_T$.
Note also that an increase in the $E_T$ cut on the jet would result in
a further reduction in relative size of the decay contribution compared
to production.
Figure \ref{fig:eta} shows the distribution in pseudorapidity of the extra jet.
Production emission is relatively flat in rapidity, as compared to the
more central decay emission. This is consistent with our basic intuition
that decay-stage radiation, being associated with the final-state particles
--- which tend to appear in the central rapidity region --- is
also likely to be produced centrally. But this decay contribution is
small; the important point to note here is that even in the central region,
it is production-stage radiation that dominates.
The tendency of decay-stage radiation to be associated with the final-state
$b$ quarks might lead one to expect that
if the extra jet is `near' the $b$ jet it should be
included in the mass reconstruction, and if it is not it should be
excluded. Figure \ref{fig:dr}, which shows the distribution in
$\Delta R$ between the jet and the nearest $b$ quark,
confirms that the decay-stage radiation peaks close to the $b$ and
production-stage radiation peaks further away. Unfortunately,
the production contribution is so large that it dominates even at
the low $\Delta R$ cutoff. A higher $E_T$ cut on the jet would make this situation
even worse. The best
choice of what is `near' the $b$ quark will therefore balance the competing
effects of decay emission falling outside the cone, and production
emission falling inside the cone.
It is tempting at this point to provide a prescription for
dealing with the extra jets expected in top events at the LHC, for
example by specifying
how to make the best choice of what is `near' the $b$ quark.
But optimizing this choice at the parton level would be naive, because
effects of multiple emissions, hadronization, and detector resolution
will all affect the results.
We also note that radiation from $W$ decay products has not been included in
our analysis here. Since the best top mass reconstruction is obtained
in the lepton+jets mode, radiation from hadronic $W$ decays must
ultimately be included. This calculation has been done in the soft
gluon approximation \cite{MOS}, and the contribution from a single
hadronically decaying $W$ is found to be substantial --- comparable in
size and shape to the {\it total} decay contributions from radiation off
the $tb$ and $\bar t \bar b$ antennae. The exact calculation
including hadronic $W$ decays is currently in progress \cite{OSSINPROG}.
In practice the effects of gluon radiation are incorporated into
the predictions that are used in experimental fits.
The parton level calculation can and should be used
to ensure that the radiation physics is properly implemented in event
generators used in the experimental analysis.
\section{Comparison with HERWIG}
Because the experimental analysis must rely on the
predictions of Monte Carlo programs --- for example, in fits to three-jet
invariant mass distributions for top mass determination --- it is
important that these programs contain the correct physics.
The Monte Carlo program HERWIG \cite{HERWIG}, which is widely used in
experimental analyses, treats gluon radiation in top production and decay
using parton showers.
In previous work \cite{OSS,OSSBIS}, we compared our results for
radiation at the Tevatron with predictions of version 5.8 of HERWIG.
We found significant discrepancies in regions where the two should
agree. HERWIG appeared to have a deficit in decay-stage radiation
compared to production \cite{OSS}. Further investigation revealed
differences even for $t \bar t g$ production at $e^+e^-$ colliders
\cite{OSSBIS}. Recently HERWIG 5.8 was found to contain a bug
\cite{BUG,HERWIGNEW} which resulted in suppression of decay-stage radiation in
top events.\footnote{The bug appeared only in version 5.8; it was not
present in HERWIG 5.7 \cite{HERWIGNEW}.} Here we continue our comparison of
matrix-element and parton-shower results using
HERWIG 5.9 \cite{HERWIGNEW}, in which the bug has been corrected.
Although we see some improvement in the agreement,
major differences still exist.
We begin by reproducing the LHC jet distributions
using HERWIG 5.9. The details of comparing
a full parton shower Monte Carlo with a fixed-order matrix element
calculation were discussed in previous papers \cite{OSS,OSSBIS}. The
idea is to combine particles from the parton shower into jets, and
compare distributions of these jets to those from the matrix element
calculation. For hadron colliders we use a cone algorithm to combine
partons into jets. We identify events with production- and
decay-stage emission according to the final state kinematics, as
described above.
Results for the jet pseudorapidity are shown in Figure \ref{fig:etahw}.
There is general agreement
between the matrix element calculation and the parton shower.
However, a closer examination reveals two important differences.
Looking at production
emission, we see that the HERWIG distribution is peaked at large
$|\eta|$ and has a dip in the center. In contrast, the matrix element
distribution is relatively flat, as we have seen in Fig.~\ref{fig:eta}.
Since the jets in this case have relatively
strong cuts, the perturbation series should be converging quickly and
the tree-level matrix element distributions should be accurate.
This suggests that the approximations used in HERWIG may be responsible
for this discrepancy.
The second difference between the matrix-element and parton-shower results
is, as before, in the relative amounts of production and decay
emissions. Whereas HERWIG 5.8 with the bug had too {\it little}
decay-stage radiation, the corrected version now seems to have
too {\it much} compared to the matrix element calculation.
This effect is illustrated more clearly in a simpler example. As in
our previous work \cite{OSSBIS}, we simplify the comparison by looking
at $e^+e^-$ machines near $t \bar t$ threshold. While the parton
calculation is an inclusive calculation with a fixed number of final
state particles, HERWIG is an exclusive calculation with an arbitrary
number of final particles. To perform a meaningful comparison we
employ the Durham ($k_T$) successive recombination algorithm to reconstruct
jets from the HERWIG output \cite{DURHAM}. In addition, we impose
cuts ($E_{Tj} > 10$~GeV, $\Delta R > 0.4$) on the jets to ensure the
matrix element is being evaluated in a region where the perturbation
series converges rapidly. The validity of this comparison was
demonstrated in \cite{OSSBIS}.
The results of our comparison for $e^+e^- \rightarrow W^+W^- b \bar b
g$ are shown in Figure \ref{fig:eehw}, where along with the matrix-element
calculation we show results from both the old (5.8) and
corrected (5.9) versions of HERWIG.
The center-of-mass energy of 360 GeV is
chosen just above $t \bar t$ threshold to suppress production-stage
emission, so that almost all of the radiation occurs in the decays.
Fig.~\ref{fig:eehw}(a)
shows the distribution in $\Delta R$ between the closest two
jets, and Fig.~\ref{fig:eehw}(b) shows the minimum $y$ (defined in the Durham
algorithm as $y_{ij} \equiv {{2\min(E_i^2,E_j^2)(1-\cos\theta_{ij})}/
{s}}$) for all jet pairings in the event. We see in both cases
that the old version of HERWIG underestimates the amount of decay
radiation and the new version overestimates it. The
discrepancy even in the corrected version of HERWIG is dramatic.
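For reference, the quantity plotted in Fig.~\ref{fig:eehw}(b) can be obtained
from the reconstructed jets as in the following sketch (jets given as
$(E,p_x,p_y,p_z)$ four-vectors; a schematic helper, not the analysis code
itself):
\begin{verbatim}
import numpy as np

def y_min(jets, s):
    # smallest Durham distance y_ij over all jet pairings in the event
    best = None
    for i in range(len(jets)):
        for j in range(i + 1, len(jets)):
            Ei, pi = jets[i][0], np.array(jets[i][1:])
            Ej, pj = jets[j][0], np.array(jets[j][1:])
            cos_ij = np.dot(pi, pj) / (np.linalg.norm(pi) * np.linalg.norm(pj))
            y = 2.0 * min(Ei**2, Ej**2) * (1.0 - cos_ij) / s
            best = y if best is None else min(best, y)
    return best
\end{verbatim}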
As a technical aside, we note that the normalization of the matrix
element for the $e^+e^-$ case is fixed as in our previous work
\cite{OSSBIS} by choosing the value of $\alpha_s = 0.126$ that gives
agreement between the matrix-element and parton-shower calculations
for the case of $b \bar b$ production. The larger energy scale for top
quark production might suggest the use of a smaller value of
$\alpha_s$, which would make the discrepancy even larger.
The disagreement between the matrix element calculation and HERWIG
seems severe. A detailed study of the discrepancy is in progress and
will appear elsewhere \cite{OSSINPROG}. For the moment, it appears
that an estimate of the magnitude of the effect is the best that can
be hoped for.
We would expect this effect to contribute on the order of a few GeV to
the uncertainty in the measured top mass. While not catastrophic,
clearly such a discrepancy is
unacceptably large, given the precision hoped for in future experiments.
Further work must be done to provide an
accurate event generator for top-quark production.
\section{Conclusions}
Top-quark production will be central to many physics
studies at the LHC, and top mass reconstruction will be the
key for identifying top events.
The large energy of the LHC collider provides a
large top-quark cross section, but it also provides for large amounts of
gluon radiation in the top production process. We have calculated
the cross section for top production and decay in
association with an extra jet to order $\alpha_s^3$, and find
a large probability for gluon radiation at the LHC compared to the
Tevatron. At the LHC, production-stage radiation
dominates over decay-stage emissions; this is also in contrast to
the Tevatron, where the two contributions are roughly comparable.
As shown above, the relative amounts of production- and decay-stage
radiation depend sensitively on the kinematic cuts applied. In
addition, the decay contribution is expected roughly to double if
radiation from hadronic $W$ decays is included.
All of this has important implications for top physics at the LHC.
Even more so than at the Tevatron, gluon radiation at the LHC must be
understood not only because there is more of it, but because
uncertainties in quantities like the top mass
will be dominated by systematic effects due to gluon radiation.
For example, the proliferation of production-stage gluon radiation
means that it will sometimes be included in
the top mass reconstruction, and therefore will limit our ability to
reconstruct the top mass. Quantifying the magnitude of this and similar
effects requires simulations which implement all of the relevant
physics correctly. Unfortunately our comparisons show that even the most
recent version of HERWIG, corrected for the bug in version 5.8,
still does not reproduce the
correct distributions. Apparently a hard gluon correction is needed to
model radiation in the production and decay of very heavy quarks.
It should be
a priority to provide a top-quark event generator with the standard
model physics implemented as accurately as possible.
\section*{\Large\bf Acknowledgements}
\noindent WJS is grateful to the UK PPARC for a Senior Fellowship.
Useful discussions with Tony Liss, Richard Partridge and
Paul Tipton are acknowledged. This work was supported in part by the U.S.\
Department of Energy, under grant DE-FG02-91ER40685 and by the EU
Programme ``Human Capital and Mobility'', Network ``Physics at High
Energy Colliders'', contract CHRX-CT93-0537 (DG 12 COMA).
\goodbreak
\vskip 1truecm
\section{Introduction}
A classical description of relativistic spinning particles is one of
the traditional branches of theoretical physics having a long history \cite
{c1,c2,c3}. By now, several approaches to this problem have been developed.
Most of the researches are based on the enlargement of the Minkowski space
by extra variables, anticommuting \cite{c2} or commuting \cite{c3,c13,c16},
responsible for the spin evolution. Being well adapted for the quantization,
the theories using Grassmann variables encounter, however, difficulties on
attempting to justify them at the classical level. Besides that, the
quantization of these theories lead to the Poincar\'e representation of
fixed spin.
The orbit method, developed in \cite{kir}, is the universal approach for the
description of the elementary systems. The basic
object of this approach is a presymplectic manifold ${\cal E}$, being a
homogeneous transformation space for a certain Lie group $G$, and the system
is considered as "elementary" for this group. The manifold carries
the invariant and degenerate closed
two-form $\Omega $ such that quotient space ${\cal E}/\ker \Omega $ is a
homogeneous symplectic manifold (in fact, it may be identified with some
covering space for coadjoint orbit ${\cal O}$ of group $G$). If $\theta $ is
a potential one--form for $\Omega $ then the first-order action functional
of the system may be written as%
$$
S=\int \theta
$$
Applied to the Poincar\'e group, this method gives the Souriau
classification of the spinning particles. Meanwhile, there is another
line of research describing a spinning particle by means of a traditional
formalism based on an appropriate choice of the configuration space for
spin [1-5].
In a recent paper \cite{c13}, a new model was proposed for a massive
particle of arbitrary spin in $d=4$ Minkowski space, namely a mechanical
system with the configuration space $R^{3,1}\times S^2$, where the two-sphere $%
S^2$ corresponds to the spinning degrees of freedom. It was shown that the
principles underlying the model have a simple physical and geometrical origin.
Quantization of the model leads to the unitary massive representations of
the Poincar\'e group. The model allows a direct extension to the case of a
higher superspin superparticle and a generalization to anti-de Sitter
space.
Despite the apparent simplicity of the model's construction, its higher
dimensional generalization is not so evident, and the most crucial point is
the choice of the configuration space for spin. In this work we describe the
massive spinning particle in six-dimensional Minkowski space $R^{5,1}$, which
may be considered as a first step towards a uniform model construction for
all higher dimensions. It should also be noted that this generalization may
have a certain interest in its own right, since six is one of the four
dimensions 3, 4, 6 and 10 which possess remarkable properties such as the
presence of a two-component spinor formalism or the light-likeness of the spinor
bilinear \cite{ced}. These properties stem from the connection
between the division algebras and the Lorentz groups of these spaces \cite
{kts}%
$$
SL\left( 2,A\right) \sim SO^{\uparrow }\left( \dim A+1,1\right)
$$
where $A$ is one of the division algebras $R,C,H,O$ of real and complex numbers,
quaternions and octonions, respectively. Besides that, these are exactly the
dimensions in which the classical Green-Schwarz superstring theory can be
formulated \cite{gs}.
Let us now sketch the broad outlines of the construction. First of all, for
any even dimension $d$, the model's configuration space is chosen to be the
direct product of Minkowski space $R^{d-1,1}$ and some $m$-dimensional
compact manifold $K^m$ being a homogeneous transformation space for the
Lorentz group $SO(d-1,1)$. Then the manifold $M^{d+m}=R^{d-1,1}\times K^m$
proves to be the homogeneous transformation space for the Poincar\'e group.
The action of the Poincar\'e group on $M^{d+m}$ is unambiguously lifted up to
the action on the cotangent bundle $T^{*}(M^{d+m})$ being the extended phase
space of the model. It is well known that the massive unitary irreducible
representations of the Poincar\'e group are uniquely characterized by the
eigenvalues of $d/2$ Casimir operators%
$$
C_1={\bf P}^2\ ,\ C_{i+1}={\bf W}^{A_1...A_{2i-1}}{\bf W}_{A_1...A_{2i-1}}\
,\quad i=1,...,\frac{d-2}2\ ,
$$
where ${\bf W}_{A_1...A_{2i-1}}=\epsilon _{A_1...A_d}{\bf J}%
^{A_{2i}A_{2i+1}}...{\bf J}^{A_{d-2}A_{d-1}}{\bf P}^{A_d}$ and ${\bf J}_{AB},%
{\bf P}_C$ are the Poincar\'e generators. This leads us to require the
identical (off-shell) conservation for the quantum numbers associated with
the phase space counterparts of Casimir operators. In other words $d/2$
first-class constraints should appear in the theory.
Finally, the dimensionality $m$ of the manifold $K^m$ is specified from the
condition that the reduced (physical) phase space of the model should be a
homogeneous symplectic manifold of the Poincar\'e group (in fact, it should
coincide with the coadjoint orbit of maximal dimension $d^2/2$). A simple
calculation (the extended phase space $T^{*}(M^{d+m})$ has dimension $2(d+m)$,
and the $d/2$ first-class constraints remove $d$ dimensions, so $d+2m=d^2/2$)
leads to $m=d(d-2)/4$. In the case of four-dimensional Minkowski
space this yields $m=2$, and the two-sphere $S^2$ turns out to be the unique
candidate for the internal space of the spinning degrees of freedom. In the
case considered in this paper $d=6$, and hence $m=6$. As will be shown below,
a suggestive choice for $K^6$ is the complex projective space $CP^3$.
The models can be covariantly quantized \`a la Dirac by imposing the
first-class constraints on the physical states, which are the smooth complex
functions on the homogeneous space $M^{d(d+2)/4}=R^{d-1,1}\times
K^{d(d-2)/4} $%
$$
(\widehat{C}_i-\delta _i)\Psi =0\quad ,\qquad i=1,...,\frac d2\ ,
$$
where the parameters $\delta _i$ are the quantum numbers characterizing the
massive unitary representation of the Poincar\'e group. Thus the
quantization of the spinning particle theories reduces to the standard
mathematical problem of harmonic analysis on homogeneous spaces. It should
be remarked that the manifold $M^{d(d+2)/4}$ may be thought of as the ${\it %
minimal}$ (in the sense of its dimensionality) one admitting a non-trivial
dynamics of arbitrary spin, and hence it is natural to expect that the
corresponding Hilbert space of physical states will carry an ${\it %
irreducible}$ representation of the Poincar\'e group.
The paper is organized as follows. Sec.2 deals with the description of the
configuration space geometry, its local structure and various
parametrizations. In sec.3 we derive the model's action functional in the
first order formalism. We also consider the solutions of classical
equations of motion and discuss the geometry of the trajectories.
In sec.4 the second order
formalism for the theory is presented and the different reduced forms of the
Lagrangian are discussed. Here we also investigate the causality conditions
for the theory. Sec.5 is devoted to the quantization of the theory in the
Hilbert space of smooth tensor fields over $M^{12}$. The connection with
relativistic wave equations is stated explicitly. In the conclusion we
discuss the results obtained and further perspectives. In the Appendix we have
collected the basic facts of the half-spinor formalism in six dimensions.
\section{Geometry of the configuration space \newline
and covariant parametrization}
We start by describing a covariant realization of the model's
configuration space chosen as $M^{12}=R^{5,1}\times CP^3.$ The manifold $%
M^{12}$ is a homogeneous transformation space for the Poincar\'e group $P$
and, hence, it can be realized as a coset space $P/H$ for some subgroup $%
H\subset P.$ In order to present the subgroup $H$ in an explicit form it is
convenient to make the Iwasawa decomposition of the six-dimensional Lorentz group $%
SO\left( 5,1\right) $ into the maximal compact subgroup $SO\left( 5\right) $ and
the solvable factor $R$
\begin{equation}
\label{a}SO\left( 5,1\right) =SO\left( 5\right) R
\end{equation}
Then the minimal parabolic subgroup, defined as the normalizer of $R$ in $%
SO\left( 5,1\right) $, coincides with $SO\left( 4\right) R.$ By means of the
decomposition $SO\left( 4\right) =SO\left( 3\right) \times SO\left( 3\right)
$ the subgroup $H$ is identified with $\left[ SO\left( 2\right) \times
SO\left( 3\right) \right] R$. Thus
\begin{equation}
\label{b}M^{12}=R^{5,1}\times CP^3\sim \frac{Poincar\acute e\,group}{\left[
SO\left( 2\right) \times SO\left( 3\right) \right] R}\sim R^{5,1}\times
\frac{SO\left( 5\right) }{SO\left( 2\right) \times SO\left( 3\right) }
\end{equation}
and thereby one has the isomorphism
\begin{equation}
\label{c}CP^3\sim \frac{SO\left( 5\right) }{SO\left( 2\right) \times
SO\left( 3\right) }
\end{equation}
Furthermore, from the sequence of the subgroups
\begin{equation}
\label{d}SO\left( 2\right) \times SO\left( 3\right) \subset SO\left(
4\right) \subset SO\left( 5\right)
\end{equation}
and the obvious isomorphisms $S^4\sim SO\left( 5\right) /SO\left( 4\right)
,S^2\sim SO\left( 3\right) /SO\left( 2\right) $ one concludes that $CP^3$
may be considered as the bundle $CP^3\rightarrow S^4$ with the fibre $S^2$%
. The fibres lie in $CP^3$ as projective lines $CP^1\sim S^2$. Thus, $CP^3$
is locally represented as
\begin{equation}
\label{f}CP^3\stackrel{loc.}{\sim }S^4\times S^2
\end{equation}
Note that the subgroup $H$ contains the solvable factor $R$ (and
hence $H$ is not unimodular), so there is no Poincar\'e invariant
measure on $M^{12}$. Nevertheless, from rel.(\ref{c}) it follows that there is a
quasi-invariant measure which becomes genuinely invariant when the Lorentz
transformations are restricted to the stability subgroup $SO\left( 5\right) $
of a time-like vector.
In spite of the quite intricate structure, the subgroup $H$ admits a simple
realization, namely, it can be identified with all the $SO\left( 5,1\right)
- $transformations multiplying the Weyl spinor $\lambda $ by a complex
factor
\begin{equation}
\label{g}N_a{}^b\lambda _b=\alpha \lambda _a\quad ,\quad \alpha \in
C\backslash \left\{ 0\right\}
\end{equation}
(all the details concerning six-dimensional spinor formalism are collected
in the Appendix). This observation readily leads to the covariant
parametrization of $CP^3$ by a complex Weyl spinor subject to the equivalence
relation
\begin{equation}
\label{h}\lambda _a\sim \alpha \lambda _a\qquad ,\qquad \alpha \in
C\backslash \left\{ 0\right\}
\end{equation}
By construction, the Poincar\'e group generators act on $M^{12}$ by the
following vector fields:
\begin{equation}
\label{i}{\bf P}^A=\partial ^A\quad ,\quad {\bf M}_{AB}=x_A\partial
_B-x_B\partial _A-\left( \left( \sigma _{AB}\right) _a{}^b\lambda _b\partial
^a~+~c.c.\right)
\end{equation}
where $\left\{ x^A\right\} $ are the Cartesian coordinates on $R^{5,1}$. It
is evident that the Poincar\'e generators commute with the projective
transformations (\ref{h}) generated by the vector fields
\begin{equation}
\label{j}d=\lambda _a\partial ^a\qquad ,\qquad \overline{d}=\overline{%
\lambda }_a\overline{\partial }^a
\end{equation}
Then the space of scalar functions on $M^{12}$ is naturally identified with
those functions $\Phi \left( x^A,\lambda _a,\overline{\lambda }_b\right) $
which satisfy the homogeneity conditions
\begin{equation}
\label{k}d\Phi =\overline{d}\Phi =0
\end{equation}
Let us consider the ring of invariant differential operators acting on the
space of scalar functions on $M^{12}$. Such operators should commute with
the Poincar\'e transformations (\ref{i}) and the projective ones (\ref{j}).
It is easy to see that there are only three independent Laplace operators.
They are
\begin{equation}
\label{l}
\begin{array}{c}
\Box =-\partial ^A\partial _A \\
\\
\bigtriangleup _1=\lambda _a\overline{\lambda }_b\partial ^b\overline{%
\partial }^a\quad ,\quad \bigtriangleup _2=\overline{\lambda }_a\lambda
_b\partial ^{ab}\partial _{cd}\overline{\partial }^c\partial ^d
\end{array}
\end{equation}
where $\partial _{ab}=\left( \sigma ^A\right) _{ab}\partial _A$. Casimir
operators of the Poincar\'e group in representation (\ref{i}) can be
expressed through the Laplace operators as follows
\begin{equation}
\label{m}
\begin{array}{c}
C_1=
{\bf P}^A{\bf P}_A=-\Box \\ \\
C_2=\frac 1{24}{\bf W}^{ABC}{\bf W}_{ABC}=\bigtriangleup _2+\Box
\bigtriangleup _1\quad ,\quad C_3=\frac 1{64}{\bf W}^A{\bf W}_A={\bf %
\bigtriangleup }_1\bigtriangleup _2+2\Box \bigtriangleup _1
\end{array}
\end{equation}
where ${\bf W}^A=\epsilon ^{ABCDEF}{\bf M}_{BC}{\bf M}_{DE}{\bf P}_F$ , $%
{\bf W}^{ABC}=\epsilon ^{ABCDEF}{\bf M}_{DE}{\bf P}_F$ are the Pauli-Lubanski
vector and tensor, respectively.
In what follows we present another covariant parametrization of $M^{12}$ in
terms of a non-zero light-like vector $b^A$ and anti-self-dual tensor $%
h^{ABC}$ constrained by the relations
\begin{equation}
\label{n}
\begin{array}{c}
b^Ab_A=0\quad ,\quad b^A\sim ab^A\quad ,\quad h^{ABC}\sim ah^{ABC}\quad
,\quad a\in R\backslash \left\{ 0\right\} \\
\\
h^{ABC}=-\frac 16\epsilon ^{ABCDEF}h_{DEF}\qquad ,\qquad b_Ah^{ABC}=0 \\
\\
h^{ABC}h_{CDE}=\frac 14\delta ^{[A}{}_{[D}b^{B]}b_{E]}
\end{array}
\end{equation}
(Here the anti-self-dual tensor $h^{ABC}$ is chosen real, which is
always possible in $R^{5,1}$.) As a matter of fact, the first two relations
imposed on $b^A$ define $S^4$ as a projective light-cone. With the use of
Lorentz transformations each point on $S^4$ can be brought into another one
parametrized by the vector $\stackrel{o}{b}\!^A=\left( 1,0,0,0,0,1\right) $.
By substituting $\stackrel{o}{b}\,^A$ into the fourth equation one reduces
the ten components of $h^{ABC}$ to the three independent values, for
instance $h_{012},h_{013},h_{014}$. Then the last equation takes the form
\begin{equation}
\left( h_{012}\right) ^2+\left( h_{013}\right) ^2+\left( h_{014}\right)
^2=\frac 14
\end{equation}
i.e. it defines the two-sphere $S^2$. In such a manner we recover the local
structure of $CP^3$ discussed above (\ref{f}). The relationship between
these two parametrizations may be established explicitly with the use of the
following Fierz identity:
\begin{equation}
\label{o}\overline{\lambda }_a\lambda _b=\frac 14\overline{\lambda }%
\widetilde{\sigma }_A\lambda \left( \sigma ^A\right) _{ab}+\frac 1{12}%
\overline{\lambda }\widetilde{\sigma }_{ABC}\lambda \left( \sigma
^{ABC}\right) _{ab}
\end{equation}
Defining $b^A$ and $h^{ABC}$ through $\overline{\lambda }_a$,$\lambda _a$ as
\begin{equation}
\label{p}b^A=\overline{\lambda }\widetilde{\sigma }{}^A\lambda \qquad
,\qquad h^{ABC}=i{}\overline{\lambda }\widetilde{\sigma }{}^{ABC}\lambda
\end{equation}
one can get (\ref{n}). The Poincar\'e generators (\ref{i}) and Laplace
operators (\ref{m}) can straightforwardly be rewritten in terms of $b^A$ and
$h^{ABC}$ but we omit the explicit expressions here since in what follows
the spinor parametrization of $CP^3$ will be mainly used.
\section{Action functional in the first order formalism and classical
dynamics}
We proceed to the derivation of an action functional governing the point-particle
dynamics on $M^{12}.$ The main dynamical principle underlying our
construction is the requirement of identical (off-shell) conservation for
the classical counterparts of three Casimir operators (\ref{m}).
As a starting point, consider the phase space $T^{*}(R^{5,1}\times C^4)$
parametrized by the coordinates $x^A,\lambda _a,\overline{\lambda }_b$ and
their conjugated momenta $p_A,\pi ^a,\overline{\pi }^b$ satisfying the
canonical Poisson-bracket relations
\begin{equation}
\label{q}\left\{ x^A,p_B\right\} =\delta ^A\!_B\quad ,\quad \left\{ \lambda
_a,\pi ^b\right\} =\delta ^b\!_a\quad ,\quad \left\{ \overline{\lambda }_a,%
\overline{\pi }^b\right\} =\delta ^b\!_a
\end{equation}
Obviously, the action of the Poincar\'e group on $M^{12}$ (\ref{i}) is lifted up
to the canonical action on $T^{*}(R^{5,1}\times C^4).$ This action induces a
special representation of the Poincar\'e group on the space of smooth
functions $F$ over the phase space, and the corresponding infinitesimal
transformations can be written via the Poisson brackets as follows
\begin{equation}
\label{r}\delta F=\left\{ F,-a^AP_A+\frac 12K^{AB}J_{AB}\right\}
\end{equation}
Here $a^A$ and $K^{AB}=-K^{BA}$ are the parameters of translations and
Lorentz transformations, respectively, and the Hamilton generators read
\begin{equation}
\label{s}P_A=p_A\qquad ,\qquad J_{AB}=x_Ap_B-x_Bp_A+M_{AB}
\end{equation}
where the spinning part of Lorentz generators is given by%
$$
M_{AB}=-\pi \sigma _{AB}\lambda +\ c.c.
$$
The phase-space counterparts of Casimir operators associated with the
generators (\ref{s}) can be readily obtained from (\ref{m}) by making formal
replacements: $\partial _A\rightarrow p_A,\partial ^a\rightarrow \pi ^a,%
\overline{\partial }^a\rightarrow \overline{\pi }^a$. The result is
\begin{equation}
\label{t}
\begin{array}{c}
C_1=p^2 \\
\\
C_2=p^2\left( \overline{\pi }\lambda \right) \left( \pi \overline{\lambda }%
\right) -\left( \overline{\pi }p\pi \right) \left( \overline{\lambda }%
p\lambda \right) \quad ,\quad C_3=\left( \overline{\pi }\lambda \right)
\left( \pi \overline{\lambda }\right) \left( \overline{\pi }p\pi \right)
\left( \overline{\lambda }p\lambda \right)
\end{array}
\end{equation}
As is seen the Casimir functions $C_2,C_3$ are unambiguously expressed via
the classical analogs of Laplace operators (\ref{l})
\begin{equation}
\label{u}\bigtriangleup _1=\left( \overline{\pi }\lambda \right) \left( \pi
\overline{\lambda }\right) \quad ,\quad \bigtriangleup _2=\left( \overline{%
\pi }p\pi \right) \left( \overline{\lambda }p\lambda \right)
\end{equation}
and, thereby, one may require the identical conservation of $\bigtriangleup
_1,\bigtriangleup _2$ instead of $C_2,C_3$.
Let us now introduce the set of five first-class constraints, three of which
are dynamical
\begin{equation}
\label{v}
\begin{array}{c}
T_1=p^2+m^2\approx 0 \\
\\
T_2=\bigtriangleup _1+\delta _1^2\approx 0\qquad ,\qquad T_3=\bigtriangleup
_2+m^2\delta _2^2\approx 0
\end{array}
\end{equation}
and the other two are kinematical
\begin{equation}
\label{w}T_4=\pi \lambda \approx 0\qquad ,\qquad T_5=\overline{\pi }%
\overline{\lambda }\approx 0
\end{equation}
Here parameter $m$ is identified with the mass of the particle, while the
parameters $\delta _1,\delta _2$ relate to the particle's spin. The role of
kinematical constraints is to perform the Hamiltonian reduction of the extended
phase space to the cotangent bundle $T^{*}\left( M^{12}\right) $. In
configuration space these constraints generate the equivalence relation (\ref
{h}) with respect to the Poisson brackets (\ref{q}). The constraints
$T_1,T_2,T_3$ determine the dynamical content of the model and lead to a
unique choice of the action functional.
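For orientation it is worth writing out explicitly (a simple check we add,
obtained by combining (\ref{t}), (\ref{u}) and (\ref{v})) the values taken by
the Casimir functions on the constraint surface:
\begin{equation}
C_1=-m^2\quad ,\quad C_2=p^2\bigtriangleup _1-\bigtriangleup _2=m^2\left(
\delta _1^2+\delta _2^2\right) \quad ,\quad C_3=\bigtriangleup
_1\bigtriangleup _2=m^2\delta _1^2\delta _2^2
\end{equation}
so the mass and both spin parameters are indeed fixed by the three dynamical
constraints.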
From (\ref{v}) it follows that on the constraint surface the conserved
charges $\bigtriangleup _1$ and $\bigtriangleup _2$ are restricted to be
negative (or zero) constants. These restrictions follow from the
following simple reasoning. Let us introduce the set of three $p$-transversal
tensors
\begin{equation}
\label{x}
\begin{array}{c}
W_{ABC}=\epsilon _{ABCDEF}J^{DE}p^F\ \ ,\ \ W_A=\epsilon
_{ABCDEF}J^{BC}J^{DE}p^F\ \ ,\ \ \\
\\
V_A=M_{AB}p^B
\end{array}
\end{equation}
Since $p$ is a time-like vector (\ref{v}), the full contraction of each
introduced tensor with itself should be non-negative. Then one may check
that the following relations hold
\begin{equation}
\label{y}
\begin{array}{c}
W_{ABC}W^{ABC}=p^2\bigtriangleup _1-\bigtriangleup _2\geq 0\quad ,\quad
W_AW^A=\bigtriangleup _1\bigtriangleup _2\geq 0, \\
\\
V_AV^A=-p^2\bigtriangleup _1-\bigtriangleup _2\geq 0
\end{array}
\end{equation}
Resolving these inequalities we come to the final relation:
\begin{equation}
\label{z}\bigtriangleup _2\leq m^2\bigtriangleup _1\leq 0
\end{equation}
which in turn implies that $\left| \delta _2\right| \geq \left| \delta
_1\right| $. Thus, the set of constraints (\ref{v}) leads to
self-consistent classical dynamics only provided that relation (\ref{z}) holds
true.
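In more detail (our own intermediate step, using $p^2=-m^2$ from (\ref{v})):
the first and third inequalities in (\ref{y}) give $\bigtriangleup _2\leq
-m^2\bigtriangleup _1$ and $\bigtriangleup _2\leq m^2\bigtriangleup _1$, whence
$\bigtriangleup _2\leq 0$; then $\bigtriangleup _1\bigtriangleup _2\geq 0$
forces $\bigtriangleup _1\leq 0$ (with $\bigtriangleup _1=0$ whenever
$\bigtriangleup _2=0$), and the chain (\ref{z}) follows.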
Assuming the theory to be reparametrization invariant, the Hamiltonian of
the model is a linear combination of the constraints and the first-order
(Hamilton) action takes the form:
\begin{equation}
\label{aa}S_H=\int d\tau \left\{ p_A\stackrel{.}{x}^A+\pi ^a\stackrel{.}{%
\lambda }_a+\overline{\pi }^a\stackrel{.}{\overline{\lambda }}%
_a-\sum\limits_{i=1}^5e_iT^i\right\}
\end{equation}
Here $\tau $ is the evolution parameter and $e_i$ are the Lagrange multipliers
associated with the constraints, with $e_4=\overline{e}_5$. Varying (\ref{aa})
one gets the following equations of motion:
\begin{equation}
\label{ab}
\begin{array}{c}
\dot \lambda _a=e_2\left(
\overline{\pi }\lambda \right) \overline{\lambda }_a+e_3\left( \overline{%
\lambda }p\lambda \right) \overline{\pi }^bp_{ba}+e_4\lambda _a \\ \\
\dot \pi ^a=-e_2\left( \pi
\overline{\lambda }\right) \overline{\pi }^a-e_3\left( \overline{\pi }p\pi
\right) \overline{\lambda }_bp^{ba}-e_4\pi ^a \\ \\
\dot x^A=2e_1p^A+e_3\left\{ \left(
\overline{\pi }\sigma ^A\pi \right) \left( \overline{\lambda }p\lambda
\right) +\left( \overline{\pi }p\pi \right) \left( \overline{\lambda }%
\widetilde{\sigma }^A\lambda \right) \right\} \\ \\
\dot p_A=0
\end{array}
\end{equation}
Despite the quite nonlinear structure, the equations are found to be
completely integrable with arbitrary Lagrange multipliers. This fact is not
surprising as the model, by construction, describes a free relativistic
particle possessing a sufficient number of symmetries.
In the spinning sector the corresponding solution looks like:
$$
\begin{array}{c}
\displaystyle{\ \lambda _a=e}^{E_4}{\cos \left( m^2E_3\delta _2\right)
\left( \cos \left( E_2\delta _1\right) \lambda _a^0+\frac{\sin \left(
E_2\delta _1\right) }{\delta _1}\left( \overline{\pi }_0\lambda ^0\right)
\overline{\lambda }_a^0\right) +} \\ \\
\displaystyle{\ +e}^{E_4}{\frac{\left( \overline{\lambda }^0p\lambda
^0\right) }{m^2}\frac{\sin \left( m^2E_3\delta _2\right) }{\delta _2}%
p_{ab}\left( \frac{\sin \left( E_2\delta _1\right) }{\delta _1}\left(
\overline{\pi }_0\lambda ^0\right) \pi _0^b-\cos \left( E_2\delta _1\right)
\overline{\pi }_0^b\right) }
\end{array}
$$
\begin{equation}
\label{ac}{}
\end{equation}
$$
\begin{array}{c}
\displaystyle{\ \pi ^a=e}^{-E_4}{\cos \left( m^2E_3\delta _2\right) \left(
\cos \left( E_2\delta _1\right) \pi _0^a-\frac{\sin \left( E_2\delta
_1\right) }{\delta _1}\left( \pi _0\overline{\lambda }^0\right) \overline{%
\pi }_0^a\right) +} \\ \\
\displaystyle{\ +e}^{-E_4}{\frac{\left( \overline{\pi }_0p\pi _0\right) }{m^2%
}\frac{\sin \left( m^2E_3\delta _2\right) }{\delta _2}p^{ab}\left( \frac{%
\sin \left( E_2\delta _1\right) }{\delta _1}\left( \pi _0\overline{\lambda }%
^0\right) \lambda _b^0+\cos \left( E_2\delta _1\right) \overline{\lambda }%
_b^0\right) }
\end{array}
$$
and for the space-time evolution one gets
\begin{equation}
\label{ad}
\begin{array}{c}
p^A=p_0^A \\
\\
x^A\left( \tau \right) =x_0^A+2\left( E_1+E_3\delta _2^2\right)
p_0^A-m^{-2}V^A\left( \tau \right) \\
\\
V^A\left( \tau \right) =V_1^A\cos \left( 2m^2E_3\delta _2\right) +V_2^A\sin
\left( 2m^2E_3\delta _2\right)
\end{array}
\end{equation}
Here $E_i\left( \tau \right) =\int\limits_0^\tau d\tau ^{\prime }e_i(\tau ^{\prime })$, the vector $%
V^A$ is defined as in (\ref{x}) and the initial data $\lambda _a^0=\lambda
_a\left( 0\right) \ ,\ \pi _0^a=\pi ^a\left( 0\right) \ ,\ p_0^A$ are
assumed to be chosen on the surface of constraints (\ref{v}), (\ref{w}).
Let us briefly discuss the obtained solution. First of all, one may resolve
the kinematical constraints (\ref{w}) by imposing the gauge fixing
conditions of the form
\begin{equation}
\label{ae}e_4=e_5=0\qquad ,\qquad \lambda _0=1 \, ,
\end{equation}
so that $\lambda _i,i=1,2,3$ can be treated as the local coordinates on $%
CP^3 $. Then from (\ref{ac}), (\ref{ad}) we see that the motion of the point
particle on $M^{12}$ is completely determined by an independent evolution of
the three Lagrange multipliers $e_1,e_2,e_3$. The presence of two additional
gauge invariances in comparison with the spinless-particle case causes the
conventional notion of the particle world line, as a geometrical set of
points, to fail. Instead, one has to consider the class of gauge-equivalent
trajectories on $M^{12}$ which, in the case under consideration, is identified
with a three-dimensional surface parametrized by $e_1,e_2,e_3$. The
space-time projection of this surface is represented by a two-dimensional
tube of radius $\rho =\sqrt{\delta _2^2-\delta _1^2}$ along the particle's
momentum $p$, as is seen from the explicit expression (\ref{ad}). This fact
becomes clearer in the rest reference frame $\stackrel{\circ }{p}_A=(m,%
\stackrel{\rightarrow }{0})$ after identifying the evolution parameter $%
\tau $ with the physical time by the law
\begin{equation}
\label{af}x^0=c\tau
\end{equation}
Then eq. (\ref{ad}) reduces to
\begin{equation}
\label{ag}\stackrel{\rightarrow }{x}(\tau )=m^{-2}\stackrel{\rightarrow }{V}%
\left( \tau \right) =\stackrel{\rightarrow }{V}_1\cos \left( 2m^2E_3\delta
_2\right) +\stackrel{\rightarrow }{V}_2\sin \left( 2m^2E_3\delta _2\right)
\end{equation}
where, in accordance with (\ref{y}) $\stackrel{\rightarrow }{V}%
{}^2=m^2\left( \delta _2^2-\delta _1^2\right) $ and hence
\begin{equation}
\label{ah}\stackrel{\rightarrow }{V}_1\!^2=\stackrel{\rightarrow }{V}%
_2{}^2=\delta _2^2-\delta _1^2\ ,\ (\stackrel{\rightarrow }{V}_1,\stackrel{%
\rightarrow }{V}_2)=0
\end{equation}
The remaining gauge arbitrariness, related to the Lagrange multiplier $e_3$, implies
that, at each moment of time, the space-time projection of the motion is
represented by a circle of radius $\rho $. This means that after accounting for
spin, the relativistic particle ceases to be localized at a certain point of
Minkowski space but rather represents a string-like configuration, contracting to
a point only provided that $\delta _1=\delta _2$.
Finally, let us discuss the structure of the physical observables of the
theory. Each physical observable $A$, being a gauge-invariant function on the
phase space, should meet the requirements:
\begin{equation}
\label{ai}\left\{ A,T_i\right\} =0\qquad ,\qquad i=1,..,5
\end{equation}
Due to the obvious Poincar\'e invariance of the constraint surface, the
generators (\ref{s}) automatically satisfy (\ref{ai}) and thereby they are
the observables. On the other hand, it is easy to compute that the dimension
of the physical phase space of the theory is equal to 18. Thus the physical
subspace may covariantly be parametrized by the 21 Poincar\'e generators subject
to the 3 conditions (\ref{t}), and, as a result, any physical observable proves to
be a function of the generators (\ref{s}) modulo constraints. So a general
solution of (\ref{ai}) reads
\begin{equation}
\label{aj}A=A\left( J_{AB},P_C\right) +\sum\limits_{i=1}^5\alpha _iT_i
\end{equation}
with $\alpha _i$ being arbitrary functions of the phase space variables.
In fact, this implies that the physical phase space of the model is embedded
in the linear space of the Poincar\'e algebra through the constraints (\ref
{v}) and therefore coincides with some coadjoint orbit ${\cal O}$ of the
Poincar\'e group.
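A quick tally behind the number 18 quoted above (added here for the reader's
convenience): the extended phase space $T^{*}(R^{5,1}\times C^4)$ has real
dimension $2\times \left( 6+8\right) =28$, and each of the five first-class
constraints (\ref{v}), (\ref{w}) removes two dimensions (the constraint
surface plus the associated gauge direction), so $28-2\times 5=18$;
equivalently, the $6+15=21$ Poincar\'e generators modulo the three relations
(\ref{t}) leave $18$ independent functions.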
\section{Second-order formalism}
In order to obtain a second-order formulation for the model one may proceed
in the standard manner by eliminating the momenta $p_A,\pi ^a,\overline{\pi }%
^a$ and the Lagrange multipliers $e_i$ from the Hamiltonian action (\ref{aa}%
) resolving equations of motion:
\begin{equation}
\label{ak}\frac{\delta S}{\delta p_A}=\frac{\delta S}{\delta \pi ^a}=\frac{%
\delta S}{\delta \overline{\pi }^a}=\frac{\delta S}{\delta e_i}=0
\end{equation}
with respect to the momenta and the multipliers. The corresponding
Lagrangian action will be invariant under global Poincar\'e transformations
and will possess five gauge symmetries associated with the first-class
constraints (\ref{v}), (\ref{w}). The presence of the kinematical ones will result in the
invariance of the Lagrangian action under the local $\lambda $-rescalings: $%
\lambda _a\rightarrow \alpha \lambda _a$. At the same time, by construction,
among the gauge transformations related to the dynamical constraints will
necessarily be the one corresponding to reparametrizations of the particle
world-line $\tau \rightarrow \tau ^{\prime }\left( \tau \right) $.
It turns out, however, that the straightforward resolution of eqs. (\ref
{ak}) is rather cumbersome. Fortunately, in the case at hand there is
another way to recover the covariant second-order formulation exploiting the
symmetry properties of the model. Namely, we can start with the most general
Poincar\'e and reparametrization invariant ansatz for the Lagrangian action
and specify it by requiring the model to be equivalent to that
described by the constraints (\ref{v}).
As a first step we classify all the Poincar\'e invariants of the
world line that are functions over the tangent bundle $TM^{12}$. One may easily
verify that there are only three expressions possessing these properties
\begin{equation}
\label{al}\stackrel{.}{x}^2\qquad ,\qquad \xi =\frac{(\stackrel{.}{\lambda }%
\stackrel{.}{x}\lambda )(\stackrel{.}{\overline{\lambda }}\stackrel{.}{x}%
\overline{\lambda })}{\stackrel{.}{x}^2\left( \overline{\lambda }\stackrel{.%
}{x}\lambda \right) ^2}\qquad ,\qquad \eta =\frac{\epsilon ^{abcd}\stackrel{.%
}{\lambda }_a\overline{\lambda }_b\stackrel{.}{\overline{\lambda }}_c\lambda
_d}{\left( \overline{\lambda }\stackrel{.}{x}\lambda \right) ^2}
\end{equation}
Notice that $\xi $ and $\eta $ are invariant under reparametrizations as
well as under the local $\lambda $-rescalings (\ref{h}), so the kinematical
constraints (\ref{w}) are automatically accounted for.
Then the most general Poincar\'e and reparametrization invariant Lagrangian
on $M^{12}$ reads
\begin{equation}
\label{am}{\cal L=}\sqrt{-\stackrel{.}{x}^2F\left( \xi ,\eta \right) }
\end{equation}
where $F$ is an arbitrary function.
The particular form of the function $F$ entering (\ref{am}) may be found
from the requirement that the Lagrangian lead to the Hamiltonian
constraints (\ref{v}). The substitution of the canonical momenta
\begin{equation}
\label{an}p_A=\frac{\partial {\cal L}}{\partial \stackrel{.}{x}^A}\qquad
,\qquad \pi ^a=\frac{\partial {\cal L}}{\partial \stackrel{.}{\lambda }_a}%
\qquad ,\qquad \overline{\pi }^a=\frac{\partial {\cal L}}{\partial \stackrel{%
.}{\overline{\lambda }}_a}
\end{equation}
to the dynamical constraints $T_1$ and $T_2$ gives the following equations
\begin{equation}
\label{ao}
\begin{array}{c}
\displaystyle{\frac{\partial {\cal L}}{\partial \stackrel{.}{x}^A}\frac{%
\partial {\cal L}}{\partial \stackrel{.}{x}_A}+m^2=0\Rightarrow } \\ \\
\displaystyle{\Rightarrow F^2+\xi \left( \xi +\eta \right) \left( \frac{%
\partial F}{\partial \xi }\right) ^2-2\xi \frac{\partial F}{\partial \xi }%
-2\eta \frac{\partial F}{\partial \eta }+2\xi \eta \frac{\partial F}{%
\partial \xi }\frac{\partial F}{\partial \eta }-m^2F=0}
\end{array}
\end{equation}
and
\begin{equation}
\label{ap}\frac{\partial {\cal L}}{\partial \stackrel{.}{\overline{\lambda }}%
_a}\lambda _a\frac{\partial {\cal L}}{\partial \stackrel{.}{\lambda }_b}%
\overline{\lambda }_b+\delta _1^2=0\Rightarrow \left( \frac{\partial F}{%
\partial \xi }\right) ^2+\delta _1^2F=0
\end{equation}
The integration of these equations results in
\begin{equation}
\label{aq}F=\left( 2\delta _1\sqrt{-\xi }+\sqrt{m^2-4\delta _1^2\eta +4A%
\sqrt{\eta }}\right) ^2\quad ,
\end{equation}
$A$ being an arbitrary constant of integration. Taking into account the remaining
constraint $T_3$ does not contradict the previous equations, but
determines the value of $A$ as
\begin{equation}
\label{ar}A=m\sqrt{\delta _2^2-\delta _1^2}
\end{equation}
Putting everything together, we arrive at the Lagrangian
$$
\displaystyle{{\cal L}=\sqrt{-\stackrel{.}{x}^2\left( m^2-4\delta _1^2\frac{%
\epsilon ^{abcd}\stackrel{.}{\lambda }_a\overline{\lambda }_b\stackrel{.}{%
\overline{\lambda }}_c\lambda _d}{\left( \overline{\lambda }\stackrel{.}{x}%
\lambda \right) ^2}+4m\sqrt{\left( \delta _2^2-\delta _1^2\right) \frac{%
\epsilon ^{abcd}\stackrel{.}{\lambda }_a\overline{\lambda }_b\stackrel{.}{%
\overline{\lambda }}_c\lambda _d}{\left( \overline{\lambda }\stackrel{.}{x}%
\lambda \right) ^2}}\right) }}+
$$
\begin{equation}
\label{as}{}
\end{equation}
$$
\displaystyle{+2\delta _1\left| \frac{\stackrel{.}{\lambda }\stackrel{.}{x}%
\lambda }{\overline{\lambda }\stackrel{.}{x}\lambda }\right| }
$$
It should be stressed that the parameters $\delta _1$ and $\delta _2$
entering the Lagrangian are dimensional and cannot be made dimensionless by
redefinitions involving only the mass of the particle and the speed of light
$c$. However, using the Planck constant we may set
\begin{equation}
\label{at}\delta _1=\frac \hbar c\kappa _1\qquad ,\qquad \delta _2=\frac
\hbar c\kappa _2
\end{equation}
where $\kappa _1$ and $\kappa _2$ are arbitrary dimensionless real numbers
satisfying the inequality $\left| \kappa _1\right| \leq \left| \kappa
_2\right| $. Turning back to the question of particle motion (see (\ref{ad})
and below) we also conclude that the radius $\rho $ of the tube
representing the particle propagation in Minkowski space is proportional to $%
\hbar $. So, this ``non-local'' behavior of the particle is caused by spin
which manifests itself as a pure quantum effect disappearing in the
classical limit $\hbar \rightarrow 0$.
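Explicitly (a one-line substitution of (\ref{at}) into the expression for the
tube radius given above):
\begin{equation}
\rho =\sqrt{\delta _2^2-\delta _1^2}=\frac \hbar c\sqrt{\kappa _2^2-\kappa
_1^2}
\end{equation}
which indeed vanishes in the limit $\hbar \rightarrow 0$.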
As is seen, for a given non-zero spin, the Lagrangian (\ref{as}) has a
complicated structure involving radicals and, hence, the reality condition
for ${\cal L}$ requires special consideration. Similar to the spinless case,
the space-time causality implies that
\begin{equation}
\label{au}\stackrel{.}{x}^2<0\qquad ,\qquad \stackrel{.}{x}^0>0
\end{equation}
Then expression (\ref{as}) is obviously well-defined only provided that
\begin{equation}
\label{av}
\begin{array}{c}
\eta \geq 0 \\
\\
m^2-4\delta _1^2\eta +4m\sqrt{\left( \delta _2^2-\delta _1^2\right) \eta }%
\geq 0
\end{array}
\end{equation}
As will be seen below the first inequality is always fulfilled, while the
second condition is equivalent to
\begin{equation}
\label{aw}0\leq \eta \leq \frac{m^2}{4\delta _1^4}\left( \delta _2+\sqrt{%
\delta _2^2-\delta _1^2}\right) ^2
\end{equation}
Together, eqs. (\ref{au}), (\ref{aw}) may be understood as the full set of
causality conditions for the model of massive spinning particle.
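For completeness, here is a sketch (our own intermediate algebra) of how
(\ref{aw}) follows from the second condition in (\ref{av}): setting
$u=\sqrt{\eta }\geq 0$, it reads $4\delta _1^2u^2-4m\sqrt{\delta _2^2-\delta
_1^2}\,u-m^2\leq 0$, a quadratic inequality in $u$ whose roots are
\begin{equation}
u_{\pm }=\frac m{2\delta _1^2}\left( \sqrt{\delta _2^2-\delta _1^2}\pm
\delta _2\right)
\end{equation}
so that it is satisfied for $0\leq u\leq u_{+}$, i.e. precisely for $\eta $ in
the range (\ref{aw}).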
Passing to the vector parametrization of the configuration space in terms of
$b_A$ and $h_{ABC}$ the basic invariants $\eta $ and $\xi $ take the form
\begin{equation}
\label{ax}
\begin{array}{c}
\displaystyle{\xi =-\frac{4\stackrel{.}{x}_A\stackrel{.}{h}^{ABC}\stackrel{.%
}{h}_{BCD}\stackrel{.}{x}^D+\stackrel{.}{x}^2\stackrel{.}{b}^2-4\left(
\stackrel{.}{x}\stackrel{.}{b}\right) ^2}{16\stackrel{.}{x}^2\left(
\stackrel{.}{x}b\right) ^2}} \\ \\
\displaystyle{\eta =\frac{\stackrel{.}{b}^2}{4\left( \stackrel{.}{x}b\right)
^2}}
\end{array}
\end{equation}
and the corresponding Lagrangian reads
\begin{equation}
\label{ay}
\begin{array}{c}
\displaystyle{{\cal L}=\sqrt{-\stackrel{.}{x}^2\left( m^2-\delta _1^2\frac{%
\stackrel{.}{b}^2}{\left( \stackrel{.}{x}b\right) ^2}+2m\sqrt{\left( \delta
_2^2-\delta _1^2\right) \frac{\stackrel{.}{b}^2}{\left( \stackrel{.}{x}%
b\right) ^2}}\right) }}+ \\ \\
\displaystyle{+\delta _1\sqrt{\frac{4\stackrel{.}{x}_A\stackrel{.}{h}^{ABC}%
\stackrel{.}{h}_{BCD}\stackrel{.}{x}^D+\stackrel{.}{x}^2\stackrel{.}{b}%
^2-4\left( \stackrel{.}{x}\stackrel{.}{b}\right) ^2}{4\left( \stackrel{.}{x}%
b\right) ^2}}}
\end{array}
\end{equation}
where the holonomic constraints (\ref{n}) are assumed to hold. In view of (%
\ref{ax}) the condition (\ref{av}) becomes evident since $\stackrel{.}{b}^A$
is orthogonal to the light-like vector $b^A$ and thereby is space- (or
light-)like. Recalling that the vector $b^A$ parametrizes $S^4$,
condition (\ref{aw}) forbids the particle to move with arbitrarily large
velocity not only in Minkowski space but also on the sphere $S^4$.
Classically the parameters $\delta _1$ and $\delta _2$ can be chosen to be
arbitrary numbers subject only to the restriction $\left| \delta _1\right|
\leq \left| \delta _2\right| $. There are, however, two special cases, $%
\delta _1=\delta _2=0$ and $\delta _1=0$, when the Lagrangian (\ref{ay}) is
considerably simplified. The former option is of no interest as it corresponds
to the case of a spinless massive particle, while the latter leads to the
following Lagrangian
\begin{equation}
\label{az}{\cal L}=\sqrt{-\stackrel{.}{x}^2\left( m^2+2m\delta _2\sqrt{\frac{%
\stackrel{.}{b}{}^2}{\left( \stackrel{.}{x}b\right) ^2}}\right) }
\end{equation}
which is the direct six-dimensional generalization of the $\left( m,s\right)
$-particle model proposed earlier \cite{c13} for $D=4$. The configuration
space of the model (\ref{az}) is represented by the direct product of
Minkowski space $R^{5,1}$ and four-dimensional sphere $_{}S^4$ parametrized
by the light-like vector $b^A$. It is easy to see that the reduced model
cannot describe arbitrary spins, since the third Casimir operator (\ref{m}),
being constructed from the Poincar\'e generators acting on $R^{5,1}\times
S^4 $, vanishes identically. As will be seen below the quantization of this
case leads to the irreducible representations of the Poincar\'e group
realized on totally symmetric tensor fields on Minkowski space.
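In the constraint language this is transparent (a remark we add): setting
$\delta _1=0$ in (\ref{v}) forces $\bigtriangleup _1\approx 0$ and hence, by
(\ref{t}), $C_3=\bigtriangleup _1\bigtriangleup _2\approx 0$, in agreement
with the identical vanishing of the third Casimir operator noted above.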
\section{Generalization to the curved background}
So far we have discussed the model of a spinning particle living in flat
space-time. In this section we generalize it to the case of a
curved background. To this end one can replace the configuration space $%
M^{12}$ by ${\cal M}^6\times CP^3$, where ${\cal M}^6$ is a curved
space-time. Now the action functional should be generalized to remain
invariant under both general coordinate transformations on ${\cal M}^6$ and
local Lorentz transformations on $CP^3$. Let $e_m{}^A$ and $\omega _{mAB}$
be the vielbein and the torsion-free spin connection, respectively. The
minimal covariantization of the Lagrangian (\ref{as}) gives
$$
\displaystyle{\!{\cal L}=\!\sqrt{-\stackrel{.}{x}^2\!\left( \!m^2-\!4\delta
_1^2\frac{\epsilon ^{abcd}\stackrel{\bullet }{\lambda }_a\overline{\lambda }%
_b\stackrel{\bullet }{\overline{\lambda }}_c\lambda _d}{\left( \stackrel{.}{x%
}^me_m{}^A\left( \overline{\lambda }\sigma _A\lambda \right) \right) ^2}+\!4m%
\sqrt{\left( \delta _2^2-\delta _1^2\right) \frac{\epsilon ^{abcd}\stackrel{%
\bullet }{\lambda }_a\overline{\lambda }_b\stackrel{\bullet }{\overline{%
\lambda }}_c\lambda _d}{\left( \stackrel{.}{x}^me_m{}^A\left( \overline{%
\lambda }\sigma _A\lambda \right) \right) ^2}}\right) }}+
$$
\begin{equation}
\label{ca}{}
\end{equation}
$$
\displaystyle{+2\delta _1\left| \frac{\stackrel{.}{x}^me_m{}^A\left(
\stackrel{\bullet }{\lambda }\sigma _A\lambda \right) }{\stackrel{.}{x}%
^me_m{}^A\left( \overline{\lambda }\sigma _A\lambda \right) }\right| }
$$
where
\begin{equation}
\label{cb}\stackrel{\bullet }{\lambda }_a=\stackrel{.}{\lambda }_a-\frac 12%
\stackrel{.}{x}^m\omega _{mAB}(\sigma ^{AB})_a{}^b\lambda _b
\end{equation}
is the Lorentz covariant derivative along the particle's world line.
Proceeding to the Hamiltonian formalism one gets the set of five constraints $%
T_i^{^{\prime }}$, $i=1,...,5$, which may be obtained from $T_i$ (\ref{v},\ref
{w}) by replacing $p_A\rightarrow \Pi _A$, where
\begin{equation}
\label{cc}\Pi _A=e_A{}^m\left( p_m+\frac 12\omega _{mCD}M^{CD}\right)
\end{equation}
Here $e_A{}^m$ is the inverse vielbein and $M^{CD}$ is the spinning part of
Lorentz generators (\ref{s}). The generalized momentum $\Pi _A$ satisfies
the following Poisson brackets relation:
\begin{equation}
\label{cd}\left\{ \Pi _A,\Pi _B\right\} =\frac 12R_{ABCD}M^{CD}
\end{equation}
$R_{ABCD}$ being the curvature tensor of ${\cal M}^6$. Now it is easy to
find that
\begin{equation}
\label{ce}
\begin{array}{c}
\left\{ T_1^{^{\prime }},T_3^{^{\prime }}\right\} =R_{ABCD}q^A\Pi ^BM^{CD}
\\
\\
q^A=(\overline{\lambda }\sigma ^A\lambda )(\overline{\pi }\Pi \pi )+(%
\overline{\lambda }\Pi \lambda )(\overline{\pi }\sigma ^A\pi )
\end{array}
\end{equation}
The other Poisson brackets of the constraints are equal to zero. So, in
general, the constraints $T_1^{^{\prime }},T_3^{^{\prime }}$ are of the
second class, which implies that switching on an interaction destroys the
first-class constraint algebra and, hence, gives rise to unphysical degrees
of freedom in the theory. What is more, the Lagrangian (\ref{ca}) is
explicitly invariant under reparametrizations of the particle's world line,
while the gauge transformations associated with the remaining first-class
constraints $T_2^{^{\prime }},T_4^{^{\prime }},T_5^{^{\prime }}$ do not
generate the full reparametrizations of the theory (the space-time
coordinates $x^m$ on ${\cal M}^6$ remain intact). The last fact indicates
that the equations of motion derived from (\ref{ca}) are contradictory. Thus
the interaction with an external gravitational field is self-consistent only
provided that the r.h.s. of (\ref{ce}) vanishes. This requirement leads to
restrictions on the curvature tensor. Namely, with the use of the identity $%
M^{AB}q_B\approx 0$ one may find that (\ref{ce}) is equal to zero if and
only if $R_{ABCD}$ has the form
\begin{equation}
\label{cf}R_{ABCD}=\frac R{30}\left( \eta _{AC}\eta _{BD}-\eta _{AD}\eta
_{BC}\right)
\end{equation}
where $R$ is a constant (the scalar curvature of the manifold ${\cal M}^6$).
So the minimal coupling to gravity is self-consistent only provided that $%
{\cal M}^6$ is the space of constant curvature.
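Conversely, for the curvature (\ref{cf}) the obstruction (\ref{ce}) indeed
disappears, as the following short verification (added here) shows:
\begin{equation}
R_{ABCD}q^A\Pi ^BM^{CD}=\frac R{30}\left( q_C\Pi _D-q_D\Pi _C\right)
M^{CD}=\frac R{15}\,\Pi _Dq_CM^{CD}\approx 0
\end{equation}
by virtue of the identity $M^{AB}q_B\approx 0$.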
Concluding this section let us also remark that the Lagrangian (\ref{ca})
may be obtained using the group theoretical principles outlined in the
introduction. To this end one should replace the Poincar\'e group by $%
SO\left( 5,2\right) $ or $SO\left( 6,1\right) $ depending on whether $R<0$ or $R>0$
(cf. \cite{c13}).
\section{Quantization}
In Sect. 3 we have seen that the model is completely characterized, at the
classical level, by the algebra of observables associated with the phase
space generators of the Poincar\'e group. We have shown that the observables
${\cal A}=(P_A,J_{AB})$ constitute a basis, so that any gauge-invariant
quantity of the theory can be expressed via the elements of ${\cal A}$.
To quantize this classical system means to construct an irreducible unitary
representation
\begin{equation}
\label{ba}r:{\cal A\rightarrow }End\ {\cal H}
\end{equation}
of the Lie algebra ${\cal A}$ in the algebra $End\ {\cal H}$ of linear
self-adjoint operators in a Hilbert space where the physical subspace ${\cal %
H}$ is identified with the kernel of the first-class constraint operators.
Here by a Lie algebra representation $r$ we mean a linear mapping from $%
{\cal A}$ into $End\ {\cal H}$ such that
\begin{equation}
\label{bb}r(\{{f,g\}})=-i[r(f),r(g)]\qquad ,\qquad \forall \ f,g\in {\cal A}
\end{equation}
where $[r(f),r(g)]$ is the usual commutator of Hermitian operators $r(f)$, $%
r(g).$ Unitarity means that the canonical transformations of the model's
phase space generated by observables from ${\cal A}$ should correspond to
unitary transformations on ${\cal H}.$ Besides that, we should supply the
algebra ${\cal A}$ with the central element 1 and normalize $r$ by the
condition
\begin{equation}
\label{bc}r(1)=id
\end{equation}
i.e. the constant function equal to 1 corresponds under $r$ to the identity
operator on ${\cal H}$.
Now it is seen that {\it the quantization of the model is reduced
to the construction of the unitary irreducible representation of the
Poincar\'e group with the given quantum numbers fixed by the constraints} (%
\ref{v}, \ref{w}).
Within the framework of the covariant operatorial quantization the Hilbert
space of physical states ${\cal H}$ is embedded into the space of smooth
scalar functions on $R^{5,1}\times C^4$ and the phase space variables $%
x^A,p_A,\lambda _a,\pi ^a$ are considered to be Hermitian operators subject
to the canonical commutation relations.
In the ordinary coordinate representation
\begin{equation}
\label{bd}p_A\rightarrow -i\partial _A\qquad ,\qquad \pi ^a\rightarrow
-i\partial ^a\qquad ,\qquad \overline{\pi }^a\rightarrow -i\overline{%
\partial }^a
\end{equation}
the Hermitian generators of the Poincar\'e group (observables) take the form
\begin{equation}
\label{be}{\bf P}_A=-i\partial _A\quad ,\qquad {\bf M}_{AB}=-i\left(
x_A\partial _B-x_B\partial _A+\left( \sigma _{AB}\right) _a\!^b\left(
\lambda _b\partial ^a+\overline{\lambda }_b\overline{\partial }^a\right)
\right)
\end{equation}
By contrast, the quantization of the first-class constraints is not
unambiguous. As is seen from the explicit expressions (\ref{v}, \ref{w})
there is an inherent ambiguity in the ordering of the operators $\widehat{%
\lambda }_a,\widehat{\pi }^b$ and $\widehat{\overline{\lambda }}_a,\widehat{%
\overline{\pi }}^b$. Luckily, as one may verify, a different ordering
prescription for the above operators results only in a renormalization of the
parameters $\delta _1^2,\delta _2^2$ and a modification of the kinematical
constraints by some constants $n$ and $m$. Thus, in general (after omitting
inessential multipliers) the quantum operators for the first-class
constraints may be written as
\begin{equation}
\label{bf}
\begin{array}{c}
\widehat{T}_1=\Box -m^2\quad ,\quad \widehat{T}_2=\bigtriangleup _1-\delta
_1^{^{\prime }2}\quad ,\quad \widehat{T}_3=\bigtriangleup _2-\delta
_2^{^{\prime }2} \\ \\
\widehat{T}_4=d-n\qquad ,\qquad \widehat{T}_5=\overline{d}-m
\end{array}
\end{equation}
where the operators on the r.h.s. of these relations are defined as in (\ref{j}), (%
\ref{l}), and $\delta _1^{^{\prime }2},\delta _2^{^{\prime }2}$ are the
renormalized parameters $\delta _1^2,\delta _2^2$.
The subspace of physical states ${\cal H}$ is then extracted by
conditions
\begin{equation}
\label{bg}\widehat{T}_i\left| \Phi _{phys}\right\rangle =0\qquad ,\qquad
i=1,...,5
\end{equation}
The imposition of the kinematical constraints implies that the physical wave
functions are homogeneous in $\lambda $ and $\overline{\lambda }$ of
bidegree ($n,m$), i.e.
\begin{equation}
\label{bh}\Phi \left( x,\alpha \lambda ,\overline{\alpha }\overline{\lambda }%
\right) =\alpha ^n\overline{\alpha }^m\Phi \left( x,\lambda ,\overline{%
\lambda }\right)
\end{equation}
From the standpoint of the intrinsic $M^{12}$ geometry these functions can
be interpreted as special tensor fields that are scalars on Minkowski
space $R^{5,1}$ and, simultaneously, densities of weight ($n,m$) with
respect to the holomorphic transformations of $CP^3$. For the fields (%
\ref{bh}) to be unambiguously defined on the manifold,
the parameters $n$ and $m$ must be restricted to integer values.
Let us consider the space $^{\uparrow }{\cal H}^{[0]}(M^{12},m)$ of massive
positive frequency fields of the type (0,0) (i.e. the scalar fields on $%
M^{12}$). Such fields satisfy the mass-shell condition
\begin{equation}
\label{bi}\left( \Box -m^2\right) \Phi \left( x,\lambda ,\overline{\lambda }%
\right) =0
\end{equation}
and possess the Fourier decomposition
\begin{equation}
\label{bj}
\begin{array}{c}
\displaystyle{\Phi \left( x,\lambda ,\overline{\lambda }\right) =\int \frac{d%
\stackrel{\rightarrow }{p}}{p_0}e^{i(p,x)}\Phi \left( p,\lambda ,\overline{%
\lambda }\right) } \\ \\
p^2+m^2=0\quad ,\quad p_0>0
\end{array}
\end{equation}
The space $^{\uparrow }{\cal H}^{[0]}(M^{12},m)$ may be endowed with the
Poincar\'e-invariant and positive-definite inner product defined by the rule
\begin{equation}
\label{bk}\langle \Phi _1\left| \Phi _2\right\rangle =\int \frac{d\stackrel{%
\rightarrow }{p}}{p_0}\int\limits_{CP^3}\overline{\omega }\wedge \omega
\overline{\Phi }_1\Phi _2
\end{equation}
where the three-form $\omega $ is given by
\begin{equation}
\label{bl}\omega =\frac{\epsilon ^{abcd}\lambda _ad\lambda _b\wedge d\lambda
_c\wedge d\lambda _d}{\left( \overline{\lambda }p\lambda \right) ^2}
\end{equation}
Then $^{\uparrow }{\cal H}^{[0]}(M^{12},m)$ becomes a Hilbert space and,
as a result, the Poincar\'e representation acting on this space by the
generators (\ref{be}) is unitary. This representation can be readily
decomposed into the direct sum of irreducible ones by means of Laplace
operators $\bigtriangleup _1$ and $\bigtriangleup _2$. Namely, the subspace
of irreducible representation proves to be the eigenspace for both Laplace
operators. This implies the following
\begin{equation}
\label{bn}^{\uparrow }{\cal H}^{[0]}(M^{12},m)=\bigoplus\limits_{\stackrel{%
\scriptstyle{s_1,s_2=0,1,2,...}}{s_1\geq s_2}}{}^{\uparrow }{\cal H}%
_{s_1,s_2}(M^{12},m)
\end{equation}
and the spectrum of Laplace operators is given by the eigenvalues
\begin{equation}
\label{bm}
\begin{array}{c}
\delta _1^{^{\prime }2}=s_2\left( s_2+1\right) \ ,\quad \delta _2^{^{\prime
}2}=m^2s_1\left( s_1+3\right) \\
\\
s_1\geq s_2\qquad ,\qquad s_1,s_2=0,1,2,...
\end{array}
\end{equation}
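For illustration (values simply read off from (\ref{bm})): the lowest
nontrivial cases are $(s_1,s_2)=(1,0)$ with $\delta _1^{^{\prime }2}=0,\
\delta _2^{^{\prime }2}=4m^2$, and $(s_1,s_2)=(1,1)$ with $\delta _1^{^{\prime
}2}=2,\ \delta _2^{^{\prime }2}=4m^2$.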
Consequently, the subspace of physical states satisfying the quantum
conditions (\ref{bg}) is exactly $^{\uparrow }{\cal H}_{s_1,s_2}(M^{12},m)$.
The explicit expression for an arbitrary field from $^{\uparrow }{\cal H}%
_{s_1,s_2}(M^{12},m)$ reads
\begin{equation}
\Phi \left( p,\lambda ,\overline{\lambda }\right) =\Phi \left( p\right)
^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}\frac{\lambda _{a_1}...\lambda
_{a_{s_1}}\overline{\lambda }_{a_{s_1+1}}..\overline{\lambda }_{a_{s_1+s_2}}%
\overline{\lambda }_{b_1}...\overline{\lambda }_{b_{s_1-s_2}}}{\left(
\overline{\lambda }p\lambda \right) ^{s_1}}
\end{equation}
Here the spin-tensor $\Phi \left( p\right)
^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}$ is considered to be $p$%
-transversal
\begin{equation}
\label{bp}p_{a_1b_1}\Phi \left( p\right)
^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}=0
\end{equation}
(for $s_1\neq s_2$) and its symmetry properties are described by the
following Young tableaux:
\unitlength=0.7mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(120.00,23.00)(00.00,122.00)
\put(30.00,140.00){\line(1,0){64.00}}
\put(94.00,140.00){\line(0,-1){8.00}}
\put(94.00,132.00){\line(-1,0){64.00}}
\put(30.00,132.00){\line(0,1){8.00}}
\put(30.00,132.00){\line(0,-1){8.00}}
\put(30.00,124.00){\line(1,0){32.00}}
\put(62.00,124.00){\line(0,1){8.00}}
\put(38.00,140.00){\line(0,-1){16.00}}
\put(54.00,140.00){\line(0,-1){16.00}}
\put(62.00,140.00){\line(0,-1){8.00}}
\put(70.00,140.00){\line(0,-1){8.00}}
\put(86.00,140.00){\line(0,-1){8.00}}
\put(34.00,136.00){\makebox(0,0)[cc]{$a_1$}}
\put(34.00,128.33){\makebox(0,0)[cc]{$b_1$}}
\put(45.80,135.92){\makebox(0,0)[cc]{. . .}}
\put(45.80,127.89){\makebox(0,0)[cc]{. . .}}
\put(77.94,136.10){\makebox(0,0)[cc]{. . . }}
\put(57.94,136.10){\makebox(0,0)[cc]{$a_n$}}
\put(57.94,128.07){\makebox(0,0)[cc]{$b_n$}}
\put(89.90,136.10){\makebox(0,0)[cc]{$a_m$}}
\put(120.00,136.00){\makebox(0,0)[cc]{$n=s_1-s_2$}}
\put(120.00,128.00){\makebox(0,0)[cc]{$m=s_1+s_2$}}
\end{picture}
The field $\Phi \left( p\right) ^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}$ can
be identified with the Fourier transform of a spin-tensor field on Minkowski
space $R^{5,1}$. Together, the mass-shell condition
\begin{equation}
\label{bq}\left( p^2+m^2\right) \Phi \left( p\right)
^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}=0
\end{equation}
and relation (\ref{bp}) constitute the full set of relativistic wave
equations for the mass-$m$, spin-$\left( s_1,s_2\right) $ field in six
dimensions. Thus the massive scalar field on $M^{12}$ generates fields of
arbitrary integer spins on Minkowski space.
In order to describe the half-integer spin representations of the Poincar\'e
group, consider the space $^{\uparrow }{\cal H}^{[1/2]}\left( M^{12},m\right)
$ of massive positive frequency fields of tensor type (1,0). These fields
possess an analogous Fourier decomposition and may be endowed with the following
Hermitian inner product
\begin{equation}
\label{bt}\langle \Phi _1\left| \Phi _2\right\rangle _{1/2}=\int \frac{d%
\overrightarrow{p}}{p_0}\int\limits_{CP^3}\overline{\omega }\wedge \omega
\left( \overline{\lambda }p\lambda \right) ^{-1}\overline{\Phi }_1\Phi _2
\end{equation}
Then the decomposition of the space $^{\uparrow }{\cal H}^{[1/2]}\left(
M^{12},m\right) $ with respect to both Laplace operators reads
\begin{equation}
\label{bu}^{\uparrow }{\cal H}^{[1/2]}(M^{12},m)=\bigoplus\limits_{\stackrel{%
\scriptstyle{s_1,s_2=1/2,3/2,...}}{s_1\geq s_2}}{}^{\uparrow }{\cal H}%
_{s_1,s_2}(M^{12},m)
\end{equation}
where invariant subspaces $^{\uparrow }{\cal H}_{s_1,s_2}(M^{12},m)$ are the
eigenspaces of $\bigtriangleup _1$ and $\bigtriangleup _2$ with
eigenvalues
\begin{equation}
\label{bv}
\begin{array}{c}
\delta _1^{^{\prime }2}=\left( s_2-1/2\right) \left( s_2+3/2\right) \quad
,\qquad \delta _2^{^{\prime }2}=\left( s_1-1/2\right) \left( s_1+7/2\right)
\\
\\
s_1,s_2=1/2,3/2,...\quad ,\qquad s_1\geq s_2
\end{array}
\end{equation}
The explicit structure of an arbitrary field from $^{\uparrow }{\cal H}%
_{s_1,s_2}(M^{12},m)$ is
\begin{equation}
\label{bw}\Phi \left( p,\lambda ,\overline{\lambda }\right) =\Phi \left(
p\right) ^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}\frac{\lambda _{a_1}...\lambda
_{a_{s_1}}\overline{\lambda }_{a_{s_1+1}}..\overline{\lambda }_{a_{s_1+s_2}}%
\overline{\lambda }_{b_1}...\overline{\lambda }_{b_{s_1-s_2}}}{\left(
\overline{\lambda }p\lambda \right) ^{s_1}}
\end{equation}
where $\Phi \left( p\right) ^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}$ is the $p$%
-transversal tensor
\begin{equation}
\label{bx}p_{a_1b_1}\Phi \left( p\right)
^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}=0
\end{equation}
(for $s_1\neq s_2$) and its symmetry properties are described by the above
written Young tableaux. Consequently, from (\ref{bu}), (\ref{bv}) it follows
that the massive type (1,0) field on $M^{12}$ generates fields of arbitrary
half-integer spins on Minkowski space.
It is instructive to rewrite the inner product for two fields from $%
^{\uparrow }{\cal H}_{s_1,s_2}(M^{12},m)$ in terms of spin-tensors $\Phi
\left( p\right) ^{a_1...a_{s_1+s_2}b_1...b_{s_1-s_2}}$. The integration over
spinning variables may be performed with the use of the basic integral
\begin{equation}
\label{br}\int\limits_{CP^3}\overline{\omega }\wedge \omega =\frac{48i\pi ^3%
}{\left( p^2\right) {}^2}
\end{equation}
and the result is
\begin{equation}
\label{bo}\langle \Phi _1\left| \Phi _2\right\rangle =N\int \frac{d\stackrel{%
\rightarrow }{p}}{p_0}\overline{\Phi }_1\left( p\right)
^{a_1...a_{2s_1}}\Phi _2\left( p\right) _{a_1...a_{2s_1}}
\end{equation}
where
\begin{equation}
\label{bs}
\begin{array}{c}
\Phi _2\left( p\right) _{a_1...a_mb_1...b_n}= \\
\\
=\epsilon _{a_1b_1c_1d_1}...\epsilon
_{a_nb_nc_nd_n}p_{a_{n+1}c_{n+1}}...p_{a_mc_m}\Phi _2\left( p\right)
^{c_1...c_md_1...d_n}
\end{array}
\end{equation}
and $N$ is some normalization constant depending on $s_1$ and $s_2$.
\section{Conclusion}
In this paper we have suggested a model for a massive spinning particle in
six-dimensional Minkowski space as a mechanical system with
configuration space $M^{12}=R^{5,1}\times CP^3$. The Lagrangian of the model
is unambiguously constructed from the $M^{12}$ world-line invariants once
identical conservation is required for the classical counterparts of the
Casimir operators. As a result, the theory is characterized by three
genuine gauge symmetries.
The model turns out to be completely solvable, as it must be for a free
relativistic particle. The projection of the class of gauge-equivalent
trajectories from $M^{12}=R^{5,1}\times CP^3$ onto $R^{5,1}$ represents a
two-dimensional cylindrical surface of radius $\rho \sim \hbar $ with
generatrices parallel to the particle's momentum.
Canonical quantization of the model naturally leads to unitary
irreducible representations of the Poincar\'e group. The requirement of the
existence of smooth solutions to the equations for the physical wave
functions results in the quantization of the parameters entering the Lagrangian
or, which is the same, in the quantization of the particle's spin.
It should be noted that switching on an interaction of the particle with an
inhomogeneous external field destroys the first-class constraint
algebra of the model and the theory thereby becomes inconsistent, whereas
a homogeneous background is admissible. The physical cause underlying this
inconsistency is probably that the local nature of the inhomogeneous field
may contradict the nonlocal behavior of the particle's dynamical histories.
A possible method to overcome the obstruction to the interaction is to
involve the Wess-Zumino-like invariant omitted in the action (\ref{as}). It
has the form%
$$
\Gamma =\rho \frac{(\overline{\lambda }_a\stackrel{.}{x}^{ab}\stackrel{.}{%
\lambda }_b)}{\left( \overline{\lambda }_a\stackrel{.}{x}^{ab}\lambda
_b\right) }+\overline{\rho }\frac{(\stackrel{.}{\overline{\lambda }}_a%
\stackrel{.}{x}^{ab}\lambda _b)}{\left( \overline{\lambda }_a\stackrel{.}{x}%
^{ab}\lambda _b\right) }
$$
As is easy to see, $\Gamma $ is invariant under the $\lambda $-rescalings only up
to a total derivative. This fact, however, does not prevent one from
speaking about the particle's dynamics on $M^{12}$. A similar trick solves the
problem of interaction in the case of the $d=4$ spinning particle \cite{c16}.
\section{Acknowledgments}
The authors would like to thank I. A. Batalin, I. V. Gorbunov, S. M.
Kuzenko, A. Yu. Segal and M. A. Vasiliev for useful discussions on various
topics related to the present research. The work is partially supported by
the European Union Commission under the grant INTAS 93-2058. S. L. L. is
supported in part by the grant RBRF 96-01-00482.
\section{Appendix. Half-spinorial formalism in six dimensions}
Our notations are as follows: capital Latin letters are used for Minkowski
space indices and small Latin letters for spinor ones. The metric is chosen
in the form: $\eta _{AB}=diag(-,+,...,+)$. The Clifford algebra of $8\times
8 $ Dirac matrices $\Gamma _A$ reads: $\left\{ \Gamma _A,\Gamma _B\right\}
=-2\eta _{AB}$. The suitable representation for $\Gamma _A$ is
\begin{equation}
\label{ap1}\Gamma _A=\left(
\begin{array}{cc}
0 & (\sigma _A)_{a
\stackrel{.}{a}} \\ (\widetilde{\sigma }_A)^{\stackrel{.}{a}a} & 0
\end{array}
\right) \ ,\quad
\begin{array}{c}
\sigma _A=\left\{ 1,\gamma _0,i\gamma _1,i\gamma _2,i\gamma _3,\gamma
_5\right\} \\
\widetilde{\sigma }_A=\left\{ 1,-\gamma _0,-i\gamma _1,-i\gamma _2,-i\gamma
_3,-\gamma _5\right\}
\end{array}
\end{equation}
where $\gamma _i,i=0,1,2,3,5$ are the ordinary Dirac matrices in four
dimensions. The charge conjugation matrix is defined as
\begin{equation}
\label{ap2}C=\Gamma _2\Gamma _4=\left(
\begin{array}{cc}
I & 0 \\
0 & \widetilde{I}
\end{array}
\right) \ ,\quad I=\widetilde{I}=\left(
\begin{array}{ccc}
\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}
& | & 0 \\
--- & | & --- \\
0 & | &
\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}
\end{array}
\right)
\end{equation}
The spinor representation of $SO(5,1)$ on Dirac spinors $\Psi =\left(
\begin{array}{c}
\lambda _a \\
\overline{\pi }^{\stackrel{.}{b}}
\end{array}
\right) $ is generated by
\begin{equation}
\label{ap3}
\begin{array}{c}
\Sigma _{AB}=-\frac 14\left[ \Gamma _A,\Gamma _B\right] =\left(
\begin{array}{cc}
(\sigma _{AB})_a{}^b & 0 \\
0 & (\widetilde{\sigma }_{AB})^{\stackrel{.}{a}}{}_{\stackrel{.}{b}}
\end{array}
\right) = \\
\\
=\left(
\begin{array}{cc}
-\frac 14\left( \sigma _A{}_{a\stackrel{.}{a}}\widetilde{\sigma }_B{}^{%
\stackrel{.}{a}b}-\sigma _B{}_{a\stackrel{.}{a}}\widetilde{\sigma }_A{}^{%
\stackrel{.}{a}b}\right) & 0 \\
0 & -\frac 14\left( \widetilde{\sigma }_A{}^{\stackrel{.}{a}b}\sigma _B{}_{b%
\stackrel{.}{b}}-\widetilde{\sigma }_B{}^{\stackrel{.}{a}b}\sigma _A{}_{b%
\stackrel{.}{b}}\right)
\end{array}
\right)
\end{array}
\end{equation}
The representation is decomposed into two irreducible ones corresponding to
the left- and right-handed Weyl spinors. It turns out that the
representation (\ref{ap3}) and its complex conjugate are equivalent: ($%
\sigma _{AB}^{*})_{\stackrel{.}{a}}{}^{\stackrel{.}{b}}=I_{\stackrel{.}{a}%
}{}^a(\sigma _{AB})_a{}^bI_b{}^{\stackrel{.}{b}},\ (\widetilde{\sigma }%
_{AB}^{*})^a{}_b=\widetilde{I}{}^a{}_{\stackrel{.}{a}}(\widetilde{\sigma }%
_{AB})^{\stackrel{.}{a}}{}_{\stackrel{.}{b}}\widetilde{I}{}^{\stackrel{.}{b}%
}{}_b$. So, one can convert the dotted spinor indices into undotted ones%
$$
\overline{\lambda }_a=I_a{}^{\stackrel{.}{a}}\stackrel{*}{\lambda }_{%
\stackrel{.}{a}}\quad ,\qquad \overline{\pi }^a=\widetilde{I}{}^a{}_{%
\stackrel{.}{a}}\stackrel{*}{\pi }{}^{\stackrel{.}{a}}
$$
The gradient and contragradient representations, however, are inequivalent
because of the absence of an object raising and/or lowering spinor indices,
in contrast to the four-dimensional case. It is convenient to turn from
the matrices ($\sigma _A)_{a\stackrel{.}{a}},(\widetilde{\sigma }_A)^{%
\stackrel{.}{a}a}$ to ($\sigma _A)_{ab}=(\sigma _A)_{a\stackrel{.}{a}}%
\widetilde{I}{}^{\stackrel{.}{a}}{}_b,(\widetilde{\sigma }_A)^{ab}=%
\widetilde{I}{}^a{}_{\stackrel{.}{a}}(\widetilde{\sigma }_A)^{\stackrel{.}{a}%
a}$. They possess a number of relations
\begin{equation}
\label{ap4}
\begin{array}{c}
\begin{array}{cc}
(\sigma _A)_{ab}=-(\sigma _A)_{ba}{} & {}(\widetilde{\sigma }_A)^{ab}=-(%
\widetilde{\sigma }_A)^{ba}
\end{array}
\\
\\
\begin{array}{cc}
(\sigma _A)_{ab}{}(\sigma ^A)_{cd}=-2\epsilon _{abcd}{} & {}(\widetilde{%
\sigma }_A)^{ab}(\widetilde{\sigma }^A)^{cd}=-2\epsilon ^{abcd}
\end{array}
\\
\\
\begin{array}{cc}
(\sigma _A)_{ab}=-\frac 12\epsilon _{abcd}(\widetilde{\sigma }_A)^{cd}{} &
{}(\widetilde{\sigma }_A)^{ab}=-\frac 12\epsilon ^{abcd}(\sigma ^A)_{cd}
\end{array}
\\
\\
(\sigma _A)_{ab}(
\widetilde{\sigma }^A)^{cd}=2\left( \delta _a{}^c\delta _b{}^d-\delta
_a{}^d\delta _b{}^c\right) \ ,\quad (\sigma _A)_{ab}(\widetilde{\sigma }%
_B)^{ba}=-4\eta _{AB} \\ \\
(\sigma _A)_{ab}(
\widetilde{\sigma }_B)^{bc}+(\sigma _B)_{ab}(\widetilde{\sigma }%
_A)^{bc}=-2\eta _{AB}\delta _a{}^c \\ \\
(\widetilde{\sigma }_A)^{ab}(\sigma _B)_{bc}+(\widetilde{\sigma }%
_B)^{ab}(\sigma _A)_{bc}=-2\eta _{AB}\delta ^a{}_c
\end{array}
\end{equation}
Here we have introduced two invariant tensors $\epsilon _{abcd}$ and $\epsilon
^{abcd}$, totally antisymmetric in their indices, with $\epsilon _{1234}=\epsilon
^{1234}=1$. With the aid of the introduced objects one may convert vector
indices into antisymmetric pairs of spinor ones. E.g. for a given vector $%
p_A$%
\begin{equation}
\label{ap5}p_A\rightarrow p_{ab}=p_A(\sigma ^A)_{ab}\ ,\quad p^{ab}=p_A(%
\widetilde{\sigma }^A)^{ab}\ ,\quad p_A=-\frac 14p_{ab}(\widetilde{\sigma }%
_A)^{ba}=-\frac 14p^{ab}(\sigma _A)_{ba}
\end{equation}
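As an elementary illustration of these conversion rules (a check we add using
(\ref{ap4})): contracting the two spinor forms of one and the same vector gives
\begin{equation}
p_{ab}p^{ba}=p_Ap_B(\sigma ^A)_{ab}(\widetilde{\sigma }^B)^{ba}=-4p^2
\end{equation}
in accordance with the normalization $(\sigma _A)_{ab}(\widetilde{\sigma }%
_B)^{ba}=-4\eta _{AB}$.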
Consider two objects
\begin{equation}
\label{ap6}\left( \sigma _{ABC}\right) _{ab}=\frac 14(\sigma _A\widetilde{%
\sigma }_B\sigma _C-\sigma _C\widetilde{\sigma }_B\sigma _A)_{ab}\ ,\quad (%
\widetilde{\sigma }_{ABC})^{ab}=\frac 14(\widetilde{\sigma }_A\sigma _B%
\widetilde{\sigma }_C-\widetilde{\sigma }_C\sigma _B\widetilde{\sigma }%
_A)^{ab}
\end{equation}
They obey the following properties:
\begin{equation}
\label{ap7}
\begin{array}{c}
\begin{array}{cc}
\left( \sigma _{ABC}\right) _{ab}=\left( \sigma _{ABC}\right) _{ba}{} & {}(%
\widetilde{\sigma }_{ABC})^{ab}=(\widetilde{\sigma }_{ABC}){}^{ba}
\end{array}
\\
\\
\begin{array}{cc}
\left( \sigma _{ABC}\right) _{ab}=\frac 16\epsilon _{ABCDEF}\left( \sigma
^{DEF}\right) _{ab}{} & {}(\widetilde{\sigma }_{ABC})^{ab}=-\frac 16\epsilon
_{ABCDEF}(\widetilde{\sigma }^{DEF})^{ab}
\end{array}
\\
\\
\left( \sigma _{ABC}\right) _{ab}(
\widetilde{\sigma }^{ABC})^{cd}=6\left( \delta _a{}^c\delta _b{}^d+\delta
_a{}^d\delta _b{}^c\right) \\ \\
\left( \sigma _{ABC}\right) _{ab}\left( \sigma ^{ABC}\right) _{cd}=(
\widetilde{\sigma }_{ABC})^{ab}(\widetilde{\sigma }^{ABC})^{cd}=0 \\ \\
\left( \sigma _{ABC}\right) _{ab}(
\widetilde{\sigma }^{DEF})^{ba}=\epsilon _{ABC}{}^{DEF}+\delta _A^{[D}\delta
_B^E\delta _C^{F]} \\ \\
(\widetilde{\sigma }_{ABC})^{ab}\left( \sigma ^{DEF}\right) _{ba}=-\epsilon
_{ABC}{}^{DEF}+\delta _A^{[D}\delta _B^E\delta _C^{F]}
\end{array}
\end{equation}
The brackets around the indices mean antisymmetrization. With the aid of
the introduced objects any antisymmetric Lorentz tensor of the third rank may be
converted into a pair of symmetric bispinors:
\begin{equation}
\label{ap8}
\begin{array}{c}
M_{ABC}=\frac 1{12}(M^{ab}\left( \sigma _{ABC}\right) _{ba}+M_{ab}(
\widetilde{\sigma }_{ABC})^{ba}) \\ \\
M^{ab}=M^{ABC}(\widetilde{\sigma }_{ABC})^{ab}\ ,\quad M_{ab}=M^{ABC}\left(
\sigma _{ABC}\right) _{ab}
\end{array}
\end{equation}
In conclusion we write out the Fierz identities and rules of complex
conjugation for different spinor bilinears. For the sake of simplicity
we omit the contracted spinor indices throughout this paper, e. g. ($\chi
\widetilde{\sigma }_A\psi )=\chi _a(\widetilde{\sigma }_A)^{ab}\psi _b,(\chi
\widetilde{\sigma }_{ABC}\psi )=\chi _a(\widetilde{\sigma }_{ABC})^{ab}\psi
_b$%
\begin{equation}
\label{ap9}
\begin{array}{c}
\psi _a\chi _b=\frac 14(\psi
\widetilde{\sigma }_A\chi )\sigma ^A{}_{ab}+\frac 1{12}(\psi \widetilde{%
\sigma }_{ABC}\chi )\left( \sigma ^{ABC}\right) _{ab} \\ \\
\chi ^b\psi _a=\frac 14\left( \chi \psi \right) \delta _a{}^b-\frac 12\left(
\chi \sigma _{AB}\psi \right) \left( \sigma ^{AB}\right) _a{}^b \\
\\
\left( \psi \chi \right) ^{*}=\left(
\overline{\psi }\overline{\chi }\right) \quad ,\qquad \left( \psi \overline{%
\chi }\right) ^{*}=-\left( \overline{\psi }\chi \right) \\ \\
(\chi
\widetilde{\sigma }_A\psi )^{*}=(\overline{\chi }\widetilde{\sigma }_A%
\overline{\psi })\quad ,\qquad (\overline{\chi }\widetilde{\sigma }_A\psi
)^{*}=-(\chi \widetilde{\sigma }_A\overline{\psi }) \\ \\
(\overline{\chi }\widetilde{\sigma }_{ABC}\psi )^{*}=-(\chi \widetilde{%
\sigma }_{ABC}\overline{\psi })\ ,\quad (\chi \widetilde{\sigma }_{ABC}\psi
)^{*}=(\overline{\chi }\widetilde{\sigma }_{ABC}\overline{\psi })
\end{array}
\end{equation}
Analogous relations hold for spinors with upper indices.
\section{Introduction}
Can we understand galaxy formation at high redshift by studying QSO
absorption systems? Lines-of-sight (LOS) to distant QSOs must frequently
traverse protogalactic environments. If the gas in these regions
has already experienced some metal enrichment the
galaxy formation process should produce a characteristic pattern of metal
absorption lines in the QSO spectrum. If we succeed in identifying
the spectroscopic signature of such regions, high-resolution spectra of
QSOs at high redshift should be able to yield detailed insights into
the physical state of the gas in young galaxies and the gaseous
reservoir from which they form. Once the observable absorption features
have been related to the properties of the absorbing objects one may
hope to be able to actually constrain the galaxy formation
process and cosmological parameters. This paper is mostly concerned with
the first aspect: what {\it are} absorption systems
(in particular: heavy element systems) at high redshift, and can we
model them realistically by adopting a specific
scenario of structure formation?
QSO absorption studies provide only one-dimensional information, so
observations of single LOS cannot give independently
the spatial density and the size of the objects causing the absorption.
Only the product of number density and cross-section is constrained.
Additional information about the spatial distribution of the absorbing
gas has to be sought, e.g. by the identification of heavy element
absorbers with a class of galaxies whose number density is known.
As Bahcall \& Spitzer (1969) and Burbidge et al.~(1977) pointed out,
the cross section of the absorbing gas would have to be
much larger than typical half-light radii of present-day spiral
galaxies if these objects had the same comoving space density
as such galaxies. First observational evidence in favor of large
metal absorption cross-sections came from work by Bergeron (1986)
who discovered low redshift galaxies close to the LOS to QSOs with
known \mbox{Mg{\scriptsize II}}\ absorption systems, coincident in redshift with the
absorbers. Surveys of optically thick (Lyman Limit) absorption
systems and galaxies at intermediate redshift have reported results
consistent with galaxies being surrounded by \mbox{Mg{\scriptsize II}}\ absorbing halos of
radius $\sim 40$ kpc (e.g.~Bergeron (1995), Steidel (1995),
Churchill, Steidel \& Vogt (1996)).
Extending such work to optically thin systems and lower redshifts
Lanzetta et al.~(1995) found evidence for even larger \mbox{H{\scriptsize I}}\ halos with
radii of order 160 kpc. Similarly, damped Ly$\alpha$\ systems were
interpreted by Wolfe and collaborators (e.g. Wolfe 1988,
Lanzetta et al.~1991) as large protodisks, progenitors of present-day
spiral galaxies with significantly larger radii at high redshift.
The large-halo/disk scenario can qualitatively explain the component
and internal structure of heavy element absorption systems,
especially the strong clustering measured for \mbox{C{\scriptsize IV}}\ systems
(Sargent et al.~1979, Sargent, Boksenberg \& Steidel 1988, Petitjean
\& Bergeron 1994). In this
picture the individual absorption components could be clouds orbiting
in a halo or co-rotating in a disk, produced and replenished e.g. by
thermal instability (Bahcall 1975, Fall \& Rees 1985, Mo 1994,
Mo \& Miralda-Escud\'e 1996).
However, unambiguous evidence in favor of large absorption
cross-sections and the identification of the absorbing objects with
massive galaxies is restricted to low redshift observations. Little is
known about the sizes of damped Ly$\alpha$\ absorbers at high redshift (e.g.
M\o ller \& Warren 1995, Djorgovski et al.~1996) and massive disks are
not the only viable dynamical model for the observed velocity
structure. Moreover, observations of galaxies at high redshift are
strongly biased towards objects with high current star formation
rates. Thus it is not clear whether the transverse separations
on the sky of galaxy-absorber pairs coincident in redshift do reliably
indicate the presence and sizes of any (hypothetical) halos/disks.
It is difficult to ascertain that there is not an undetected
(not necessarily faint) galaxy closer to the LOS.
The possibility of ``smaller'' but more numerous objects as sources of
the metal absorption has been explored earlier. Tyson (1988)
suggested identifying damped Ly$\alpha$\ systems with gas-rich dwarf galaxies
instead of large proto-disks. York et al.~(1986) discussed
clusters of such objects to explain the component structure of \mbox{C{\scriptsize IV}}\
systems. Individual galaxy halos cannot produce potential wells deep
enough to explain the largest velocity splittings of \mbox{C{\scriptsize IV}}\
systems (up to 1000\,\mbox{km\,s$^{-1}$}) as
virialized motions. Pettini et al.~(1983) and Morris et
al.~(1986) concluded that the velocity splitting and large
cross-sections are equally difficult to understand if
objects similar to present-day
galaxy clusters placed at high redshift were causing the
absorption.
Earlier attempts at understanding QSO absorption systems have mostly
employed heuristic models. The ionization state of the gas was
calculated with simplified assumptions about the geometry, temperature
and dynamical structure of the gas. Recently, realistic
hydrodynamical simulations of the fate of gas in a universe subject to
hierarchical structure formation (Cen et al.~1994, Hernquist
et al.~1996) have led to a deeper understanding of the large scale
distribution of baryons. In the new picture a coherent filamentary
large scale structure of dark matter and baryons is responsible for
Ly$\alpha$\ forest absorption lines (e.g. Petitjean, M\"ucket \& Kates 1995;
Zhang, Anninos \& Norman 1995; Miralda-Escud\'e et al.~1996).
Denser condensations embedded in the filaments are unstable against rapid
collapse and cooling of the gas and probably form stars at their
centers.
In such a hierarchical picture (cf. Lake 1988) metal absorption
systems at high redshift are more likely to arise from groups of
relatively small, merging protogalactic objects, rather than from
ensembles of clouds in huge virialized halos:
(1) Recent Keck spectroscopy of Ly$\alpha$\ forest systems (Cowie et al.
1995, Tytler et al.~1995) has shown that contamination
of Ly$\alpha$\ forest clouds by carbon is common in high-redshift Ly$\alpha$\
absorbers with \mbox{H{\scriptsize I}}\ column densities as low as $N(\mbox{H{\scriptsize I}}) \sim$ a
few $\times 10^{14}$ cm$^{-2}$. In the numerical simulations
such column densities typically correspond to gas densities
smaller than those expected for
fully collapsed objects at these redshifts (indicating
baryonic overdensities of order ten), so at least some metal
absorption systems appear to occur outside virialized regions.
(2) Typical observed temperatures of \mbox{C{\scriptsize IV}}\ systems are somewhat larger
than expected if the gas were heated only by photoionization
(Rauch et al.~1996, hereafter RSWB). Such an enhancement is
predicted by numerical simulations (Haehnelt, Steinmetz \& Rauch
1996, hereafter paper I) and is most likely due to
shock and (to a smaller extent) adiabatic heating during
the gravitational compression of the gas. Absorber sizes along
the LOS inferred from ionization balance calculations are of
order ten kpc. This is uncomfortably large for a
cloudlet-in-halo model (Haehnelt, Rauch, \& Steinmetz 1996,
hereafter paper II).
(3) Simple scaling laws predict that in a hierarchical scenario
the typical ratio of cooling time to dynamical time
decreases as $(1+z)^{-3/2}$.
At high redshift the gas will generally cool rapidly out to
the virial radius. It will be difficult to maintain large,
massive, hot halos for an extended period of time.
Similarly for fixed circular velocity both the mass and the
size of the dark matter component of typical objects at high
redshift decrease as $(1+z)^{-3/2}$ (see Kauffmann (1996)
and Steinmetz (1996a) for quantitative predictions).
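As an illustrative sketch of the scaling in point (3) (a rough argument, not part of the original analysis, assuming an $\Omega=1$ universe and a fixed virialization overdensity, so that $\rho_{\rm vir}\propto(1+z)^3$ at fixed virial temperature $T\propto v_c^2$):
\[
t_{\rm dyn}\sim\left(G\rho_{\rm vir}\right)^{-1/2}\propto(1+z)^{-3/2},\qquad
t_{\rm cool}\sim\frac{3kT}{2\,n\,\Lambda(T)}\propto(1+z)^{-3},\qquad
\frac{t_{\rm cool}}{t_{\rm dyn}}\propto(1+z)^{-3/2},
\]
while for fixed circular velocity $v_c$ the virial radius and mass scale as
$r_{\rm vir}\sim v_c/H(z)\propto v_c\,(1+z)^{-3/2}$ and
$M\sim v_c^2\,r_{\rm vir}/G\propto v_c^3\,(1+z)^{-3/2}$.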
\medskip
In paper I we used numerical hydrodynamical simulations to
demonstrate that observed \mbox{C{\scriptsize IV}}\ absorption systems can be
qualitatively understood in terms of absorption by protogalactic clumps
(PGCs) formed through gravitational instability in a hierarchical
cosmogony. The \mbox{C{\scriptsize IV}}\ and \mbox{H{\scriptsize I}}\ absorption features caused by groups of
PGCs were found to resemble observed \mbox{C{\scriptsize IV}}\ and \mbox{H{\scriptsize I}}\ absorption systems
if an overall homogeneous abundance of [C/H] $\sim$ $-3$ to $-2$ was
assumed. Here, we will perform a quantitative analysis and extend the
work to a larger set of diagnostically useful ionic species. Section
2 will give the details of the numerical modelling. In section 3 we
discuss the line formation process and the physical nature of the
absorbing structures in regions of ongoing galaxy formation. In section
4 we investigate the observational consequences. Conclusions are drawn
in section 5.
\section{Numerical simulations}
\subsection{The code}
The simulations were performed using GRAPESPH (Steinmetz 1996b).
This code combines a direct summation N-body integrator
with the Smoothed Particle Hydrodynamics technique
(Lucy 1977, Gingold \& Monaghan 1977) and is implemented for the
special purpose hardware GRAPE (Sugimoto et al.~1990).
The version of the code we use here is especially adapted to follow a
mixture of collisionless (dark matter) and collisional (gas) fluids. It
is fully Lagrangian, three-dimensional, and highly adaptive in
space and time as the result of the use of individual smoothing
lengths and individual timesteps. The physical processes treated in
this version of the code include self-gravity, pressure gradients,
hydrodynamical shocks, radiative and Compton cooling and photoheating
by a UV background with a specified spectrum. We do not assume rate
equilibrium for the dominant baryonic species
(H, H$^{+}$, He, He$^{+}$, He$^{++}$, and $e^-$) but
follow self-consistently their non-equilibrium time evolution.
However, this had only a modest effect on the gas temperature in the
lowest density regions. The non-equilibrium time evolution
can generally be neglected for the questions addressed in this paper
but will be important for low column density systems at epochs close
to reionization. We assume that the gas remains optically thin
throughout the calculation and that the background radiation is
uniform in space. Radiative transfer effects have been neglected
(see also Navarro \& Steinmetz 1996).
\subsection{Initial conditions}
The initial conditions are identical to those in Navarro and Steinmetz (1996)
and were originally designed to study the formation of galaxies with circular
velocities between 80\,\mbox{km\,s$^{-1}$} and 200\,\mbox{km\,s$^{-1}$}. The background cosmogony
is a $\Omega=1$, $H_0=50\,$\mbox{km\,s$^{-1}$}\,Mpc$^{-1}$ cold dark matter (CDM)
universe with a
normalization of $\sigma_{8}=0.63$. The baryon fraction is $\Omega_b=0.05$.
Based on a P3M large scale structure simulation in a 30 Mpc box, eight
halos were selected (four each with circular velocity of about
100\,\mbox{km\,s$^{-1}$}\ and 200\,\mbox{km\,s$^{-1}$}) and resimulated at much higher
resolution. The tidal fields of the large scale matter distribution
of the original simulation were still included.
The high-resolution sphere has a radius
of about 2.3 (3.8) Mpc (comoving) for the
systems with a circular velocity of 100\,\mbox{km\,s$^{-1}$}\ (200\,\mbox{km\,s$^{-1}$}). The mass
per gas particle is $4.9 \times 10^6 M_{\odot}$ and $2.3 \times 10^7
M_{\odot}$ in the low- and high-mass systems, respectively. We adopt
a Plummer gravitational softening of $2.5$ ($5$) kpc for the dark
matter and of $1.25$ ($2.5$) kpc for the gas particles in the low
(high) mass systems. All runs are started at $z=21$.
We use several different descriptions for the intensity, redshift
dependence and frequency dependence of the UV background. Most of
the simulations were done using a power-law
spectral energy distribution and the redshift dependence suggested by Vedel,
Hellsten \& Sommer--Larsen (1994). We varied turn-on redshift,
spectral index ($\alpha=1\dots 2$) and normalization
of the background ($J_{-22}=0.3 \dots 30$, in units of
$10^{-22}\,\mbox{erg}\,\mbox{s}^{-1}\,\mbox{cm}^{-2}\,\mbox{Hz}^{-1}\,
\mbox{sr}^{-1}$). The more realistic description
of the UV background proposed by Haardt \& Madau (1996)
was also used. Although varying the UV background had
noticeable effects on the temperature and the ionization of the gas,
the effects on the overall gas
distribution were small. In this paper we will only briefly discuss these
effects and concentrate on simulations with $\alpha = 1.5$ and
$J_{-22}=3$ for which the background was switched on at the
beginning of the simulation ($z=21$). This corresponds to a typical
quasar-like spectrum and is consistent with measurements of the helium
Gunn-Peterson decrement (Davidsen, Kriss \& Zeng 1996).
The initial gas temperature was assumed to take the value for which
photoionization heating balances the cooling due to collisional
processes and Compton scattering.
For simplicity, we take the gas to be homogeneously contaminated by
metals (C,Si,N,O) with solar relative abundances and an absolute
metallicity of $10^{-2}$ solar. Effects of time dependent and/or
inhomogeneous chemical enrichment were neglected. Later we shall
consider different total and relative metallicities and briefly
discuss implications of an inhomogeneous metal contamination.
\subsection{The matter and temperature distribution}
Some of the simulations were run up to the present epoch.
For an individual set of initial
conditions, typically between one and three disk-like galaxies with circular
velocities between 80 and 200\,\mbox{km\,s$^{-1}$} have formed by now. Each
consists of several thousand particles. In addition, a couple of halos
with smaller masses and circular velocities (about 50\,\mbox{km\,s$^{-1}$} ) are
present. Size, mass and circular velocities of these disks are in good
agreement with observed spiral galaxies. Compared to present-day
spiral galaxies, however, the disks are too concentrated. This may
be an artifact due to the neglect of energy and momentum input
by star formation and active galactic nuclei.
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_1a.ps,width=7.5cm,angle=0.}
\hspace{0.5cm}
\psfig{file=haehnelt_1b.ps,width=7.5cm,angle=0.}
}
\vspace{0.5cm}
\caption{\small The upper panel shows a grey-scale plot of the projected \mbox{H{\scriptsize I}}\
column density ($\log N$) of the inner 700 kpc of a simulation box at
$z=3.07$ which will contain three $v_c \sim 100$\,\mbox{km\,s$^{-1}$}
galaxies at $z=0$. The lower panel shows the
temperature ($\log T$, \mbox{H{\scriptsize I}}\ column density weighted) for the same box.
\label{greyscale}}
\end{figure}
Tracing the evolution of a present-day galaxy
backwards in time, one finds that at redshift three it has
``split'' into several progenitors separated by several
hundred kpc. Generally the circular velocities of these progenitors are
slightly smaller than that of the galaxy which they
will later form, but this is not true for the most massive
progenitors. These have the same or even
higher circular velocities even at redshifts of three to five, although
their mass may be a factor ten smaller (Steinmetz 1996a).
The progenitors probably suffer from the same overcooling problem
as their present-day counterparts. This will have small effects
for intermediate column density absorbers
but conclusions regarding damped Ly$\alpha$\ absorbers should be drawn
with some caution.
Figure \ref{greyscale} shows the column density and
temperature distribution (\mbox{H{\scriptsize I}}\ weighted along the LOS) at $z=3.07$
(bright=high column
density/hot) of a typical simulation box in which three galaxies with
circular velocity $v_c \sim$ 100\,\mbox{km\,s$^{-1}$}\ will have formed at redshift
zero. The area shown is 700\,kpc across (proper length). Individual
PGCs are embedded in a filamentary network.
Regions leading to a single $v_c\sim 200$\,\mbox{km\,s$^{-1}$}\ galaxy
look similar, with individual components scattered over
several hundred kpc.
The temperature in the gas rises from less than $10^4$\,K at the center of
expansion-cooled voids (large diffuse dark areas) to a few times 10$^4$\, K for
the gas in the filaments. Higher temperatures up to several $10^5\,$K
occur in spheroidal envelopes within $\sim$ 30 kpc of the PGCs. These
envelopes of hot gas arise when infalling gas is shock-heated to temperatures
of the order of the virial temperature, but the density of this gas is low enough
that cooling times are long compared to dynamical time scales. The
temperature, the location and the mass fraction of this hot gas component
can depend sensitively on the assumed spectral shape of the UV
background and on the metal
enrichment (Navarro \& Steinmetz 1996). In the innermost few kpc
of the PGCs the cooling timescales are always short compared to the
dynamical time scale and the dense gas cools precipitously to
temperatures below $10^4$ K.
\subsection{The ionization state of the gas}
We have used the photoionization code CLOUDY (Ferland 1993) to
calculate the ionization state of the gas for the temperature,
density, and UV background of each SPH particle in the simulation.
We assumed an infinite slab of gas of low metallicity, optically thin to
ionizing radiation and illuminated from both sides by a homogeneous UV
field. (Self-)shielding from UV radiation in optically thick regions
was generally not taken into account. Thus quantitative results for
LOS with column density above $10^{17} \,{\rm cm}^{-2}$ need to be viewed
with some caution.
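Purely for illustration (this sketch is not the pipeline actually used; grid axes, table names and ranges are hypothetical placeholders), a precomputed CLOUDY grid of ion fractions could be applied to the particle densities and temperatures as follows:
\begin{verbatim}
# Sketch only: interpolating a precomputed CLOUDY ion-fraction grid
# onto SPH particle values of density and temperature.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_nH_grid = np.linspace(-6.0, 0.0, 61)   # assumed grid in log10 n_H [cm^-3]
log_T_grid  = np.linspace(3.5, 6.5, 31)    # assumed grid in log10 T  [K]

def ion_fraction(table, log_nH, log_T):
    """Bilinear interpolation of an ion-fraction table (shape 61 x 31)."""
    interp = RegularGridInterpolator((log_nH_grid, log_T_grid), table,
                                     bounds_error=False, fill_value=None)
    return interp(np.column_stack([np.atleast_1d(log_nH),
                                   np.atleast_1d(log_T)]))

def metal_column(N_H, table, log_nH, log_T, abundance):
    """Metal-ion column from the hydrogen column of a pixel, a number
    abundance relative to H, and the interpolated ion fraction."""
    return N_H * abundance * ion_fraction(table, log_nH, log_T)
\end{verbatim}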
Below we will concentrate on the ionic
species \mbox{H{\scriptsize I}}, \mbox{C{\scriptsize II}}, \mbox{C{\scriptsize IV}}, \mbox{Si{\scriptsize IV}}, \mbox{N{\scriptsize V}},
\mbox{O{\scriptsize VI}}\ which are known to show the strongest observable lines in low and
intermediate column density absorbers. As discussed in papers I and II,
observations and SPH simulations indicate that gas temperatures can
significantly deviate from the equilibrium value where photo-heating
balances line cooling processes. It is then important to use the actual gas
temperature to calculate the ionization state of the different species.
Figure \ref{martin_1} shows the ratios of various metal to \mbox{H{\scriptsize I}}\ column
densities as a function of density for our chosen ionic species and a
set of temperatures between $10^{4}$ and $10^{6}$\,K.
At high and intermediate densities the metal column density
generally increases relative to the \mbox{H{\scriptsize I}}\ column density with
increasing temperature while at
small densities the opposite is true. \mbox{Si{\scriptsize IV}}\ and \mbox{C{\scriptsize IV}}\ are especially
temperature sensitive in the temperature and density range most
relevant for intermediate column density absorption systems around
$n_{\rm H} \sim 10^{-4} \,{\rm cm}^{-3}$ and $T\sim 5 \times 10^{4}$\,K.
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_2.ps,width=16.0cm,angle=0.}
}
\caption{\small Column density ratios relative to \mbox{H{\scriptsize I}}\ for six different ionic
species calculated with CLOUDY for fixed density and
fixed temperatures between $10^4$\,K and $10^{6}$\,K
as indicated on the plot. A power law
with $\alpha = -1.5$ and $J_{-22}=3$ was assumed for the background UV field.
\label{martin_1}}
\end{figure}
\begin{figure}[t]
\vskip -2.0cm
\centerline{
\psfig{file=haehnelt_3.ps,width=15.0cm,angle=0.}
}
\vskip -1.0cm
\caption{\small Projected column density of the inner 700 kpc of a
simulation box at $z=3.07$ which will contain three
$v_c \sim 100$\,\mbox{km\,s$^{-1}$} galaxies at $z=0$. Shown are
logarithmic column density contours in steps of 1 dex
for \mbox{H{\scriptsize I}}, \mbox{C{\scriptsize II}}, \mbox{C{\scriptsize IV}}, \mbox{Si{\scriptsize IV}}, \mbox{N{\scriptsize V}}\ and \mbox{O{\scriptsize VI}}. [Z/H] = -2 and solar
relative abundances were assumed.
\label{ioncontours}}
\end{figure}
\clearpage
\noindent
Figure \ref{ioncontours} shows logarithmic column density contours for
all 6 ions in steps of 1 dex. \mbox{C{\scriptsize II}}\ and \mbox{Si{\scriptsize IV}}\ are confined to dense regions
marking likely sites of star formation in the center of the PGCs.
\mbox{C{\scriptsize IV}}\ traces the hotter filaments and halos, while \mbox{O{\scriptsize VI}}\ matches well
the low column density \mbox{H{\scriptsize I}}\ contours and traces low density gas of any
temperature (cf. Chaffee et al.~1985, 1986). In the box shown the
covering factor for detectable hydrogen absorption is 100\%. The
same is true for \mbox{O{\scriptsize VI}}\ if the metal contamination is indeed homogeneous
and does not drop in regions outside the filamentary structure.
Approximately one third of all LOS give detectable \mbox{C{\scriptsize IV}}\ absorption.
\subsection{Generating artificial QSO spectra}
To analyze the appearance of the galaxy forming region in absorption we
drew 1000 LOS and generated artificial spectra for the simulation box
shown in figure 1 (see section 2.3).
The LOS had random orientations and random offsets within $\pm
225$ kpc from the center of the box. Optical depth $\tau(v)$ profiles
along the LOS were constructed projecting the Voigt absorption line
profile caused by the column density of each individual spatial pixel
onto its proper position in velocity space, using the relation (for
the \mbox{H{\scriptsize I}}\ Ly$\alpha$\ line)
\begin{eqnarray}
\tau(v) = \sum_{i}\tau_i(v-v_i) = 1.34\times 10^{-12} \frac{\,{\rm d} N}{\,{\rm d} v}(v)
\end{eqnarray}
(cf. Gunn \& Peterson 1965).
Here $\tau_i(v-v_i)$ is the optical depth of the Voigt profile at the observed velocity $v$, caused
by the column density $\Delta N_i$ in spatial pixel $i$, moving with
velocity $v_i=v_{\rm pec}(i)+v_{\rm Hubble}(i)$.
This can be expressed in terms of the column density per unit velocity,
$\,{\rm d} N/\,{\rm d} v$ at $v$. Units of $N$ are $\,{\rm cm}^{-2}$, $v$ is in $\mbox{km\,s$^{-1}$}$.
The spectra were made to resemble typical Keck data obtainable within a
few hours from a 16-17th magnitude QSO (S/N=50 per 0.043 \AA\ pixel, FWHM
= 8\,\mbox{km\,s$^{-1}$}). These ``data'' were then treated and analyzed in exactly the
same way (by fitting Voigt profiles) as actual observations.
An absorption line was deemed detectable when the equivalent width
had a probability of less than 10$^{-5}$ (corresponding to a $>$ 4.75
$\sigma$ event) to have been caused by a statistical fluctuation.
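The main steps can be sketched as follows (an illustration only, not the code used for this work; the line profile is approximated here by its Gaussian Doppler core, whereas full Voigt profiles are projected in practice):
\begin{verbatim}
# Sketch: optical depth from column density per pixel, then a noisy
# "Keck-like" mock spectrum.
import numpy as np

def tau_lya(v_grid, v_pix, dN_pix, b_pix):
    """Optical depth on v_grid [km/s]; dN_pix in cm^-2, b_pix in km/s.
    Normalization follows tau(v) = 1.34e-12 dN/dv for HI Ly-alpha
    (Gunn & Peterson 1965), as in the relation above."""
    tau = np.zeros_like(v_grid, dtype=float)
    for v_i, dN_i, b_i in zip(v_pix, dN_pix, b_pix):
        phi = np.exp(-((v_grid - v_i) / b_i) ** 2) / (np.sqrt(np.pi) * b_i)
        tau += 1.34e-12 * dN_i * phi
    return tau

def mock_spectrum(tau, dv=1.0, fwhm=8.0, snr=50.0, seed=0):
    """exp(-tau) convolved with a Gaussian of given FWHM [km/s] on a pixel
    grid of size dv [km/s], plus Gaussian noise of amplitude 1/snr."""
    flux = np.exp(-tau)
    sigma = fwhm / 2.3548 / dv
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(flux, kernel, mode="same")
    rng = np.random.default_rng(seed)
    return smoothed + rng.normal(0.0, 1.0 / snr, size=smoothed.shape)
\end{verbatim}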
\section{Absorption line formation in regions of ongoing gravitational
collapse}
Having access to the three-dimensional
gas/temperature distribution and the velocity field of the gas we are
able to study the line formation process as a function of the
characteristic physical properties of the absorbing structures. We
expect the general mechanisms to be similar to those found earlier for
\mbox{H{\scriptsize I}}\ absorption features (Cen et al.~1994; Zhang et al.~1995;
Hernquist et al.~1996; Miralda-Escud\'e et al.~1996). The emphasis of
the present work is, however, on the much narrower absorption lines
from metal ions. These allow us to study the velocity structure in
galactic potential wells, where the corresponding \mbox{H{\scriptsize I}}\ Ly$\alpha$\ lines are
saturated and strongly blended.
The resolution of the present simulations is much higher than that of
previous work, so we can pursue the physical quantities into
regions of larger densities, and study the fate of
objects of small mass.
\subsection{Environments causing specific absorption
patterns}
The following figures illustrate a few ``typical'' situations where
gravitational collapse gives rise to \mbox{C{\scriptsize IV}}\ and other metal absorption
features. The panels give (from top to bottom) the spectra of the six ions
offset vertically by 0.5, the total baryon density, the temperature
and the peculiar velocity. The $x$ axis is the spatial coordinate
along the LOS labeled by the relative velocity each position would
have if following an undisturbed Hubble flow.
In the case of \mbox{C{\scriptsize IV}}, \mbox{Si{\scriptsize IV}}, \mbox{N{\scriptsize V}}, and \mbox{O{\scriptsize VI}}\ only the stronger
transition of each doublet is shown. The dotted lines
connect the spatial positions of overdense regions (selected manually)
to their density weighted positions in velocity space.
\subsubsection{Collapsing regions of moderate overdensity}
Figure 4a shows a LOS producing an intermediate column density
\mbox{H{\scriptsize I}}\ profile. The line is saturated but not yet damped. In LOS passing very
close to at least one PGC, a single sharp density peak generally
dominates the absorption line formation (see the following pictures), but
more often the LOS passes through regions of smaller overdensities
just about to collapse into a single object. The peculiar velocity
field displays a typical infall pattern (redshift with respect to the
Hubble flow for gas falling in from the front,
blueshift for gas falling in from behind, a jump at the location
of the density maximum).
\mbox{C{\scriptsize IV}}\ and \mbox{O{\scriptsize VI}}\ are the only metals which
can be detected. The absorption features are similar in appearance to the
``partial Ly$\alpha$\ limit systems'' studied by Cowie et al.~1995.
In this particular case the individual density peaks have converged
in velocity space to form a caustic, but the enhanced density in physical
space is still much more important for producing the absorption line
(see also Miralda-Escud\'e et al.~1996). In principle velocity caustics
may reduce the contribution of the Hubble expansion to the total
velocity width of absorption lines. We find, however, that this
rarely occurs. In most cases infall velocities overcompensate
the Hubble flow. The ordering of two absorption lines in velocity
space can even be reversed relative to their spatial positions.
\subsubsection{Large velocity spread and chance alignments with
filaments}
Absorption caused by a chance alignment between the LOS and a
filamentary structure is seen in figure 4b. The filaments
generally lie at the boundaries of underdense regions (``voids'')
and expand with velocities in excess of the Hubble flow. This is
apparent from the divergent dotted lines and the ramp-like
increase of the peculiar velocity as a function of real-space
distance. Such structures rather than deep potential wells are
likely to be responsible for the largest observed velocity
splittings of \mbox{C{\scriptsize IV}}\ systems whereas moderate velocity splittings
(up to 200 to 300\,\mbox{km\,s$^{-1}$}) are often due to a complex
temperature distribution or ionization structure
internal to individual PGCs (see also paper I).
\subsubsection{``Damped Ly$\alpha$\ systems''}
Figure \ref{damped} shows two ``damped Lyman $\alpha$ systems''.
Currently we are limited to qualitative statements about the
spatial extent and ionization state of damped Lyman $\alpha$
systems as neither self-shielding nor energy/momentum input by
star-formation or AGN have been taken into account.
Nevertheless, in agreement with Katz et al.~(1996)
we find that for small impact parameters column densities
become large enough to produce damped Ly$\alpha$\ absorption
lines.
Damped systems are usually dominated by one large density maximum.
The density peak corresponds to a local temperature minimum due to
cooling of the dense gas at the center and is surrounded by a shell
of hotter infalling shock-heated gas. This leads to the characteristic
double-hump structure in the temperature diagram of figure 5b.
The absorption lines for the various ions of the system shown in
figure 5a appear rather similar to each other in terms of
position and line shape, with the exception of \mbox{O{\scriptsize VI}}\ which arises
at much larger radii. In general, the maxima of the high ionization
species for these highly optically thick systems need not coincide
with the centers of the \mbox{H{\scriptsize I}}\ lines and those of lower ionization
species. Figure 5b illustrates a situation where the component
structure has a very complex origin. While the strongest
\mbox{C{\scriptsize II}}\ and \mbox{Si{\scriptsize IV}}\ components coincide
with the center of the damped Ly$\alpha$\ line, the \mbox{C{\scriptsize IV}}\ and \mbox{N{\scriptsize V}}\ positions are
far off. This indicates differential motion between the high and low
ionization regions.
\begin{figure}[t]
\vskip -1.0cm
\centerline{\psfig{file=haehnelt_4a.ps,width=8.cm,angle=0.}
\psfig{file=haehnelt_4b.ps,width=8.cm,angle=0.}
}
\vskip 1.0cm
\caption{\small left: spectrum of a LOS through a collapsing region
of moderate overdensity producing intermediate column density \mbox{H{\scriptsize I}}\ and
weak \mbox{C{\scriptsize IV}}\ and \mbox{O{\scriptsize VI}}\ absorption.
right: LOS along a filament expanding
faster than the Hubble flow.
The panels give (from top to bottom) the spectra of the six ions
offset vertically by 0.5, the total baryon
density, the temperature and the peculiar velocity (see section 3.1).
In the case of \mbox{C{\scriptsize IV}}, \mbox{Si{\scriptsize IV}}, \mbox{N{\scriptsize V}}, and \mbox{O{\scriptsize VI}}\ only the stronger
component of each doublet is shown.
\label{postshock}}
\end{figure}
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_5a.ps,width=8.cm,angle=0.}
\psfig{file=haehnelt_5b.ps,width=8.cm,angle=0.}
}
\vskip 1.0cm
\caption{\small ``damped Ly$\alpha$\ systems''.\label{damped}
}
\end{figure}
\clearpage
\begin{figure}[t]
\vskip -2.0cm
\centerline{
\psfig{file=haehnelt_6a.ps,width=9.5cm,angle=0.}
\hspace{-1.5cm}
\psfig{file=haehnelt_6b.ps,width=9.5cm,angle=0.}
}
\vskip -0.5cm
\caption{\small Left: mean projected column density
as a function of overdensity
$\expec{\rho_{\rm bar}}_{\rm LOS}/\bar{\rho}_{\rm bar}$
(column density-weighted along the line-of-sight)
for the simulation box shown in figure 1.
Right: column density distribution (f(N) N) at $z=3$
for six different species in the simulation. Crosses show the
observed column density distribution (Petitjean et al. 1993).
The column density normalization is described in section 4.1.}
\end{figure}
\subsection{The absorption properties of individual protogalactic clumps}
The solid curve in figure 6a shows the good correlation
between the mean log $N$(\mbox{H{\scriptsize I}}) and the baryonic overdensity in a typical
simulation box. Also shown is the mean log $N$ for our canonical set of
ionic species. As expected the strength relative to \mbox{H{\scriptsize I}}\ varies
considerably from species to species. \mbox{C{\scriptsize II}}\ and \mbox{Si{\scriptsize IV}}\ are strong at high
\mbox{H{\scriptsize I}}\ column densities/baryonic over-densities, while \mbox{C{\scriptsize IV}}\ dominates at
intermediate column densities and \mbox{O{\scriptsize VI}}\ probes the low density regime.
We have also plotted a simple self-shielding correction for
large \mbox{H{\scriptsize I}}\ column densities. The correction was calculated
with CLOUDY specifying the total column density and mean density
along the LOS.
Figure \ref{squ_6_3kpc_v} shows how the spectral features change with
distance from a fully collapsed clump. The plot consists of a mosaic of
LOS separated by 3 kpc from each other in the $x$ and $y$ directions on the
sky. The center of the clump is close to the top right corner.
The \mbox{H{\scriptsize I}}\ Ly$\alpha$ line exhibits damping wings within the
central $\approx$ 6 kpc.
To quantify the spatial coherence of the absorption properties in more
detail we have investigated a set of random LOS close to a typical
PGC ($M \approx 1.2\times10^9 $M$_{\odot}$) in the simulation. Figure
\ref{impaccol_ovi} shows the column densities as a function of impact
parameter for a set of randomly oriented LOS. A sharp drop from $\log
N(\mbox{H{\scriptsize I}}) \ge 20$ to $\log N(\mbox{H{\scriptsize I}}) \approx 15$ occurs within the first five to
ten kpc. At \mbox{H{\scriptsize I}}\ column densities $\log N \ga 17$
self-shielding should become important. This will lead to an
even steeper rise
toward smaller radii. After the rapid drop the \mbox{H{\scriptsize I}}\ column density
decreases very gradually, still exceeding $\log N(\mbox{H{\scriptsize I}})\approx$ 14
at 100 kpc. The typical radius of the damped region of a protogalactic
clump in our simulation taking self-shielding into account should be
about five kpc. However, as pointed out previously, this value
might increase once feedback processes are taken into account.
Currently a 10m telescope can detect \mbox{H{\scriptsize I}}\ column densities of order
$10^{12}$ cm$^{-2}$ which means that the detectable \mbox{H{\scriptsize I}}\ radius
extends far beyond 100 kpc. The largest detectable metal
absorption cross-section is subtended by \mbox{C{\scriptsize IV}}\ and probably \mbox{O{\scriptsize VI}}\
(second panel on the left); at a detection threshold
of $N$(\mbox{C{\scriptsize IV}})$\sim 10^{12}$ cm$^{-2}$
(realistic for very high signal-to-noise data),
the radius is $\sim $ 30 kpc; \mbox{Si{\scriptsize IV}}\ at the same significance level
(taking the larger oscillator strength into account)
would be detectable out to about 12 kpc. The decrease of the higher
ions \mbox{C{\scriptsize IV}}\ and \mbox{N{\scriptsize V}}\ at the lowest radii is due to the increase in density;
it should be even more pronounced if radiation transfer were
implemented. Figure \ref{impacratios} gives the ratios of the LOS
integrated column densities for several ions. In particular, the third
panel on the left shows once more the importance of the \mbox{O{\scriptsize VI}}\ ion, which
should be detectable at the largest radii with similar or even higher
significance than the \mbox{C{\scriptsize IV}}\ lines. Thus, in spite of the difficulty of
observing \mbox{O{\scriptsize VI}}\ in the Ly$\alpha$\ forest the \mbox{O{\scriptsize VI}}/\mbox{C{\scriptsize IV}}\ column density ratio should
be the best measure of the impact parameter.
\section{Observational tests}
\subsection{Column density distributions as a test of the gas distribution
and metallicities}
Before discussing the overall column density distribution of \mbox{H{\scriptsize I}}\ and the
metal ionic species in our simulations we stress again that our
simulation boxes were not chosen randomly (see section 2.2). At our
redshift of interest ($z=3$) the mean baryonic density of the simulation
boxes containing low (high) circular velocity halos is a factor
1.27 (2.97) larger than the assumed mean baryonic density of the universe
for the inner $700$ (proper) kpc cube. Furthermore,
\mbox{H{\scriptsize I}}\ column densities projected across the simulation box are
generally larger than a few times $10^{13}$cm$^{-2}$.
Below we will concentrate on
properties of the four boxes containing low circular velocity halos
at $z=3$ for which the overdensity is moderate and which should be
closest to a fair sample.
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_7.ps,width=14.0cm,angle=0.}
}
\vskip 0.5cm
\caption{\small Mosaic of 6$\times$6 lines-of-sight near a collapsed
clump, separated by 3 kpc in each direction on the sky.
The center of the collapsed object is close to the top right corner.
The panels give the spectra of the six ions
offset vertically by 0.5 for clarity.
In the case of \mbox{C{\scriptsize IV}}, \mbox{Si{\scriptsize IV}}, \mbox{N{\scriptsize V}}, and \mbox{O{\scriptsize VI}}\ only the stronger line
of each doublet is shown.
\label{squ_6_3kpc_v}}
\end{figure}
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_8.ps,width=14.cm,angle=-0.}}
\vskip 0.5cm
\caption{\small Integrated column densities along random lines-of-sight
for the six ions as a function of impact parameter from the center
of a protogalactic clump with $1.2 \times 10^{9}$\,\mbox{M$_\odot$}.\label{impaccol_ovi}}
\end{figure}
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_9.ps,width=14.cm,angle=-0.}}
\vskip 0.5cm
\caption{\small Ratios of the integrated column density along random
lines-of-sight for several ions as a function of impact
parameter from the center of a protogalactic clump with
$1.2 \times 10^{9}\,$\mbox{M$_\odot$}.\label{impacratios}}
\end{figure}
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_10.ps,width=14.cm,angle=0.}
}
\vskip 0.5 cm
\caption{\small Ratios of the integrated column densities along random
lines-of-sight for several ionic species as a function of \mbox{H{\scriptsize I}}\
or \mbox{C{\scriptsize IV}}\ column density. \label{los_ionrat_ovi_999}}
\end{figure}
\clearpage
Column density distributions/ratios of ionic species with different
ionization potentials are an important diagnostic of the ionization
parameter, the spectral shape of the ionization field, the relative
element abundances and the temperature of the gas (e.g. Bergeron \&
Stasinska 1986, Chaffee et al.~1986, Bechtold, Green \& York 1987,
Donahue \& Shull 1991, Steidel 1990, Viegas \& Gruenwald 1991, Ferland
1993, Savaglio et al.~1996, Songaila \& Cowie 1996).
In our case the distributions of densities and temperatures are
predicted by the simulation (only weakly dependent on the input
radiation field) so the remaining adjustable parameters are the
elemental abundances and the properties of the ionizing radiation.
\subsubsection{Column density distribution functions}
We define the differential column density distribution as usual
\begin{equation}
f(N) = \frac{\,{\rm d}^{2} {\cal N}}{\,{\rm d} X \,{\rm d} N},
\end{equation}
where ${\cal N}$ is the number of lines and $\,{\rm d} X = (1+z)^{1/2} \,{\rm d} z$
(for $q_0$=0.5).
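A minimal sketch of how this distribution can be estimated from a fitted line list is given below (illustrative only; bin edges and variable names are arbitrary):
\begin{verbatim}
# Sketch: estimating f(N) = d^2 N / (dX dN) from fitted column densities,
# with dX = (1+z)^{1/2} dz for q0 = 0.5 as defined in the text above.
import numpy as np

def absorption_distance(z_lo, z_hi):
    # X(z) = (2/3)[(1+z)^{3/2} - 1], so that dX = (1+z)^{1/2} dz
    X = lambda z: (2.0 / 3.0) * ((1.0 + z) ** 1.5 - 1.0)
    return X(z_hi) - X(z_lo)

def f_of_N(logN_lines, logN_edges, delta_X):
    """logN_lines: fitted log10 column densities; logN_edges: bin edges."""
    counts, edges = np.histogram(logN_lines, bins=logN_edges)
    dN = 10.0 ** edges[1:] - 10.0 ** edges[:-1]   # linear bin widths [cm^-2]
    return counts / (dN * delta_X)
\end{verbatim}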
The curves in figure 6b give $f(N) N$ at $z=3.07$ for the
four simulation boxes shifted in column density by 0.3 dex, whereas the
crosses show the observed distribution (Petitjean et al.~1993).
The data on the metal ions are again for a constant metallicity of
[Z/H]=$-2$. The self-shielding correction for large \mbox{H{\scriptsize I}}\ column densities
mentioned in section 3.2 is also shown. The shape of the observed
distribution is matched reasonably well by the simulations once the
self-shielding correction is applied. As pointed out e.g. by Hernquist
et al.~(1996) and Miralda-Escud\'e et al.~(1996) the \mbox{H{\scriptsize I}}\ column density
should scale with the baryon fraction, the strength of the ionizing
background and the Hubble constant as $\Omega_{b} ^{2} h^{3}/J$.
With the parameters of our simulation the applied shift of 0.3 dex
in column density corresponds to
$(\Omega_b\, h_{50}^{2}/0.05)^{2}\,/(J_{-22}\,h_{50}) \approx 1.5$,
similar to the value found by Hernquist et al.~(1996).
Taking into account that the mean baryonic density of the volume used
to calculate the column density distribution is a factor 1.27 larger
than the assumed overall mean baryonic density would somewhat
lower this value.
We also find a deficit of systems with
\mbox{H{\scriptsize I}}\ column densities around $10^{17} \,{\rm cm}^{-2}$, but it is smaller
than the discrepancy by a factor of ten reported by
Katz et al.~(1996).
\subsubsection{Column density ratios for random lines of sight}
Figure \ref{los_ionrat_ovi_999} shows column density ratios for various
ions, integrated along randomly offset and oriented
LOS through the box described at the end of section 2.3. The most
readily observable ratio is \mbox{C{\scriptsize IV}}/\mbox{H{\scriptsize I}}, which in the simulations has a
maximum of about $-2.2$ in the log at $N$(\mbox{H{\scriptsize I}})=10$^{15} \,{\rm cm}^{-2}$.
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_11.ps,width=12.cm,angle=0.}
}
\vskip 1.0cm
\caption{\small Column density ratios obtained from profile fitting the
simulated data. Absolute metallicity [Z/H]= $-2$, solar
relative abundances and a power law UV spectrum
with $\alpha$=$1.5$ and $J_{-22} =3$ were assumed.
Filled circles denote the ratios of the sums of the
column densities of all individual components along a LOS detectable at
the 4.75 $\sigma$ level. The thin diagonal lines give upper limits
for the simulated lines in cases where only one ion (\mbox{H{\scriptsize I}}\ in the first panel and \mbox{C{\scriptsize IV}}\ in the
other cases) has been detected. Open squares mark observed values (see
text), open circles either observational upper limits or (bottom left panel)
\mbox{O{\scriptsize VI}}\ values undetermined because of blending.
\label{fitresults}}
\end{figure}
\clearpage
\noindent
This is consistent with the range of values measured by
Cowie et al.~(1995), but a better
match can be found with a carbon abundance of [C/H]= $-2.5$ instead
of the originally adopted [C/H]= $-2$. At higher column densities
\mbox{C{\scriptsize IV}}\ recombines to form \mbox{C{\scriptsize III}}\ and \mbox{C{\scriptsize II}}\ with increasing density and
shielding. At lower column densities \mbox{C{\scriptsize IV}}/\mbox{H{\scriptsize I}}\ also declines as carbon
becomes more highly ionized into \mbox{C{\scriptsize V}}. This effect will cause
difficulties for searches for carbon enrichment in the Ly$\alpha$\
forest at column densities much lower than the presently accessible
limits in \mbox{H{\scriptsize I}}\ (10$^{14}$ cm$^{-2}$), as not only the \mbox{H{\scriptsize I}}\
column but also the \mbox{C{\scriptsize IV}}/\mbox{H{\scriptsize I}}\ ratio declines.
Results obtained from Voigt profile fitting a subset of the spectra
(using the software package VPFIT, Carswell et al.~1987) are shown in
figure \ref{fitresults}. Solid dots are measurements from the
simulation. For comparison open boxes give observational data
for \mbox{C{\scriptsize II}}, \mbox{C{\scriptsize IV}}, \mbox{Si{\scriptsize IV}}\ and \mbox{N{\scriptsize V}}\ by Songaila \& Cowie (1996)
and for \mbox{O{\scriptsize VI}}\ from Rauch et al.~(in prep.).
The circles show observational upper limits
or indicate undetermined \mbox{O{\scriptsize VI}}\ in cases of severe
blending with Ly$\alpha$\ forest lines.
The thin diagonal lines indicate the approximate
detection limits for absorption features in the
simulated spectra. Most of the general trends in the
observed column density ratios are well reproduced. The scatter
in the column density ratios between different metal ionic species
is similar to that found observationally. This suggests that the
simulations produce a realistic range of ionization conditions.
There are, however, some interesting discrepancies between simulated
and actual data. First we note that the scatter in \mbox{C{\scriptsize IV}}/\mbox{H{\scriptsize I}}\ (panel on
top left) for the observations is much larger than that in the
simulations (where constant metallicity was assumed). We take this to imply
that the carbon fraction and probably also the absolute
metallicity in the outskirts of protogalactic
objects at $z\sim$ 3 fluctuates over one to two orders of magnitude
throughout the column density range $N$(\mbox{H{\scriptsize I}}) = $10^{14}$ to $10^{18}$
cm$^{-2}$. It is interesting to note that a similar
scatter is seen in [C/Fe] in metal poor stars
(McWilliam et al.~1995, Timmes, Woosley \& Weaver 1995).
There are also some obvious differences between the absolute
levels of the observed and simulated mean column density ratios.
While the abundance ratios for the high ionization species (\mbox{C{\scriptsize IV}}, \mbox{N{\scriptsize V}}, \mbox{O{\scriptsize VI}})
in the simulation agree reasonably well with the observations, \mbox{Si{\scriptsize IV}}\
and \mbox{C{\scriptsize II}}\ are significantly off, with the simulated \mbox{Si{\scriptsize IV}}/\mbox{C{\scriptsize IV}}\ and
\mbox{C{\scriptsize II}}/\mbox{C{\scriptsize IV}}\ ratios lower by a factor ten than the observed values.
\subsubsection{Matching the observed column density ratios}
Some of these discrepancies can be reduced by an appropriate modification
of the relative abundances. So far we have assumed
{\it solar relative abundances}. The generally low abundances
in present-day galactic halo stars (which may
contain a fossil record of the high $z$ gas abundances) as well as
optically thin \mbox{C{\scriptsize IV}}\ systems (Cowie et al.~1995) and high-redshift damped Ly$\alpha$\
systems (Pettini et al.~1994, Lu et al.~1996) indicate that
relative metal abundances like those in very metal poor
stars (McWilliam et al.~1995) may be
more appropriate. Below we will discuss the effect of changing the
abundance pattern from solar values to: [C]=[N]=0; [Si]=[O]=0.4.
The brackets denote the difference from solar abundances in dex.
Some improvement may also be expected from a different intensity
or spectral shape of the ionizing radiation background.
Figure \ref{martin_5} shows the influence of such changes on various
column density ratios. The solid curve shows the values for
the $\alpha$ = 1.5 power law spectrum.
The other curves are for several different spectral shapes and normalizations
of the UV background:
a power law with intensity increased and
decreased by factors of 3; a spectrum taking intergalactic
absorption and emission into account (Haardt \& Madau 1996); and
a power law with a step-shaped break at 4 Rydberg
(flux reduced by a factor 100 at 4 Rydberg, constant flux for higher
energies up to the point where the flux equals that of the underlying
power law). The effect of the absorption edges in the Haardt \& Madau spectrum
on the ion ratios considered here is
generally small for the relevant column densities.
A power law with slope 1.5 is a good approximation.
Changing the normalization has noticeable but moderate effects on the
\mbox{Si{\scriptsize IV}}/\mbox{C{\scriptsize IV}}\ and \mbox{C{\scriptsize II}}/\mbox{C{\scriptsize IV}}\ ratios. Otherwise the changes are again small.
The case of a powerlaw with a 4 Ryd cutoff, however, leads to strong
departures in the ion/\mbox{H{\scriptsize I}}, and in the ion/ion ratios for a given \mbox{H{\scriptsize I}}\
column density. A spectrum of the last kind was suggested by
Songaila \& Cowie (1996) and Savaglio et al.~(1996) to
account for large observed \mbox{Si{\scriptsize IV}}/\mbox{C{\scriptsize IV}}\ ratios at $z>3$. These authors
consider an increasing \mbox{He{\scriptsize II}}\ opacity or an increasing stellar
contribution to the UV background towards higher redshifts as
possible causes for a softer spectrum.
However, for a fixed \mbox{C{\scriptsize IV}}\ column density the power law with
4 Rydberg cutoff actually even lowers the \mbox{Si{\scriptsize IV}}/\mbox{C{\scriptsize IV}}\ and \mbox{C{\scriptsize II}}/\mbox{C{\scriptsize IV}}\
ratios relative to those obtained with the originally
adopted $\alpha$ = 1.5 power law.
Moreover, such a spectrum would imply a factor three to ten smaller
carbon abundance. Apparently, changing the UV radiation field does
not dramatically improve the overall agreement with the observed ion ratios.
The observed and simulated column density
ratios are compared in figure \ref{bestfit} after
adopting some of the changes discussed
above (metallicity reduced by 0.5 dex to [Z/H] =$-2.5$; metal poor
abundance pattern; spectrum as suggested by Haardt \& Madau). The
discrepancy for \mbox{Si{\scriptsize IV}}\ and \mbox{C{\scriptsize II}}\ has now been significantly reduced: the
contribution to the shift in the \mbox{Si{\scriptsize IV}}/\mbox{C{\scriptsize IV}}\ vs. \mbox{C{\scriptsize IV}}\ came in equal parts
from the adjustment in relative and absolute metallicity, whereas the
\mbox{C{\scriptsize II}}/\mbox{C{\scriptsize IV}}\ vs. \mbox{C{\scriptsize IV}}\ ratio improved mostly because of the decrease in
absolute metallicity. Thus the agreement is now quite good for \mbox{Si{\scriptsize IV}},
\mbox{C{\scriptsize II}}\ and \mbox{C{\scriptsize IV}}. \mbox{O{\scriptsize VI}}\ and \mbox{N{\scriptsize V}}\ are slightly further off but --
given the large observational uncertainties in these two ions --
probably consistent with the data.
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_12.ps,width=15.cm,angle=-0.}
}
\vskip 1.0cm
\caption{\small Column density ratios for various
UV background radiation fields. The solid curve are for the
$\alpha$ = 1.5 power law spectrum.
The other curves are for: a power law with
intensity increased and decreased by factors of 3;
a spectrum taking intergalactic absorption and emission
into account (Haardt \& Madau 1996); and a power law with a
step function like break at 4 Rydberg (flux reduced by a factor 100
at 4 Rydberg, constant flux for higher energies up to the point where
the flux equals that of the underlying power law). \label{martin_5}}
\end{figure}
\begin{figure}[t]
\vskip -1.0cm
\centerline{
\psfig{file=haehnelt_13.ps,width=12.cm,angle=0.}
}
\vskip 1.0cm
\caption{\small Column density ratios obtained from profile fitting the
simulated data (filled circles). Compared to figure 11
the absolute metallicity has been changed from [Z/H] = $-2$
to $-2.5$, solar relative abundances have been replaced by those of
metal-poor stars ([C]=[N]=0, [Si]=[O]=0.4) and the spectrum
suggested by Haardt \& Madau (1996) is used instead of a power
law with $\alpha =1.5$.
The diagonal lines give upper limits for the simulated absorption features
in cases where only one ion (\mbox{H{\scriptsize I}}\ in the first panel and \mbox{C{\scriptsize IV}}\ in the
others) has been detected. Open squares mark observed values (see
text), open circles either observational upper limits or undetermined
\mbox{O{\scriptsize VI}}.
\label{bestfit}}
\end{figure}
\clearpage
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_14.ps,width=7.0cm,angle=-90.}
}
\vskip 0.5cm
\caption{\small Left panel: Total \mbox{C{\scriptsize IV}}\ Doppler parameter {\it
measured} by profile fitting versus density weighted \mbox{C{\scriptsize IV}}\ thermal
Doppler parameter computed from the gas temperature in the
simulation. Right: Total measured \mbox{C{\scriptsize IV}}\ Doppler parameter
vs.~the Doppler parameter obtained by adding RMS bulk velocity
dispersion and thermal velocity in quadrature.
\label{bmeasbconstr}}
\end{figure}
\subsection{The Doppler parameter as indicator
of gas temperatures and small-scale bulk motions}
Temperature and bulk velocity measurements from individual absorption lines
are useful discriminants
for the environment of heavy element absorption systems. The actually
observable quantity is the Doppler parameter $b$ (=$\sqrt{2}\sigma$) of
the absorbing line. The line width results from a convolution of
the thermal motion and the small scale bulk motion. Measurements of the
Doppler parameter for ionic species with different atomic weights permit
a decomposition into the contributions from thermal and non-thermal
(bulk) motion. The precise nature of the decomposition depends,
however, on the velocity distribution of the bulk motion.
For the simulations the relative importance of the
contributions to the line formation can be easily studied
as we have full knowledge of the density and peculiar velocity
field along the LOS. Here we proceed as follows: first we
compute the column density weighted temperature and RMS velocity
dispersion in an overdense region selected manually; then we fit the
absorption line closest to the position of the velocity centroid of the
region with a Voigt profile and compare results.
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_15.ps,width=8.cm,angle=0.}
}
\caption{\small Doppler parameter -- column density ($b$--log$N$)
diagrams for \mbox{C{\scriptsize IV}}\ (top), \mbox{O{\scriptsize VI}}\ (middle) and \mbox{H{\scriptsize I}}\ (bottom).
The dotted lines give the
spectral velocity resolution.
Note the different scales of the axes.\label{bcivbovibhi}}
\end{figure}
\clearpage
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_16.ps,width=9.cm,angle=-90.}
}
\vskip 0.5cm
\caption{\small Comparison between the \mbox{C{\scriptsize IV}}\ Doppler parameters from
the simulation (solid line) and the observed distribution from
Rauch et al. 1996 (dotted line). The observed distribution has been
scaled such as to match the total number of lines
of the simulated distribution.\label{bsimobscomp}}
\end{figure}
\noindent
The left panel of
figure \ref{bmeasbconstr} shows a plot of the ``observed'' \mbox{C{\scriptsize IV}}\
Doppler parameter from the fit versus the
purely thermal Doppler parameter computed straight from the temperature
array of the simulation. While the density weighted thermal \mbox{C{\scriptsize IV}}\ Doppler
parameter of many LOS hovers around 7 to 8 $\mbox{km\,s$^{-1}$}$, the ``observed''
values occur over a wide range. The total \mbox{C{\scriptsize IV}}\ Doppler parameter is
obviously not an unbiased measure of the temperature alone. There
are substantial varying contributions from bulk motion. It is not
obvious how temperature and bulk motion should be deconvolved.
Making the simplest possible ansatz of adding them in quadrature (which would be strictly
true if the bulk motion followed a Gaussian velocity distribution) we
arrive at the results shown in the lower panel: the observed line
width does indeed measure the quadratic sum of thermal motion and RMS
velocity dispersion to a reasonable accuracy. Outliers in the plot
are due to small spurious line components sometimes introduced by the
automatic fitting program VPFIT to improve the quality of
the fit.
How realistic are the gas motions in the simulation? To compare the
simulation to real data we have again measured a number of randomly
selected simulated absorption lines fitting Voigt profiles. The
resulting Doppler parameter column density ($b$--log$N$) diagrams for \mbox{C{\scriptsize IV}},
\mbox{O{\scriptsize VI}}, and \mbox{H{\scriptsize I}}\ are shown in figure \ref{bcivbovibhi}. The scatter plots
agree very well with those from observational data
(e.g. RSWB for \mbox{C{\scriptsize IV}}, Hu et al.~1995 for \mbox{H{\scriptsize I}}; there is no comparable
information yet for \mbox{O{\scriptsize VI}}). The presence of a minimum $b$ parameter in
both cases and a slight increase of $b$ with $N$ have both been observed.
There is also excellent quantitative agreement with the observed
mean $b$ values: the mean \mbox{H{\scriptsize I}}\ Doppler parameter in the simulations is
$\expec{b}_{\rm sim}$ = 27.3 $\mbox{km\,s$^{-1}$}$ ($\expec{b}_{\rm obs}$ = 28
$\mbox{km\,s$^{-1}$}$ is observed). The mean \mbox{C{\scriptsize IV}}\ Doppler parameters are
$\expec{b}_{\rm sim}$ = 8.6 $\mbox{km\,s$^{-1}$}$ and $\expec{b}_{\rm obs}$ = 9.3
$\mbox{km\,s$^{-1}$}$, respectively.
Figure \ref{bsimobscomp} shows that the shape of the simulated
\mbox{C{\scriptsize IV}}\ Doppler parameter distribution matches the observed
distribution very well. A decomposition of the measured Doppler
parameters into thermal and turbulent contributions gives
a mean simulated {\it thermal} Doppler parameter
$\expec{b}_{\rm therm} = 7.7 $\,\mbox{km\,s$^{-1}$} (see figure \ref{bmeasbconstr}),
in good agreement with the observed mean thermal \mbox{C{\scriptsize IV}}\ Doppler parameter,
$\expec{b}_{\rm therm} = 7.2$\,\mbox{km\,s$^{-1}$}.
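The two-ion decomposition implied by the quadrature ansatz discussed above can be sketched as follows (an illustration only; the exact procedure applied to the observed lines may differ):
\begin{verbatim}
# Sketch of the two-ion decomposition under the quadrature ansatz
# b^2 = 2kT/m + b_turb^2, using a light ion (e.g. H I) and a heavy
# ion (e.g. C IV).
import numpy as np

K_B = 1.380649e-23     # Boltzmann constant [J/K]
M_U = 1.660539e-27     # atomic mass unit [kg]

def decompose(b_light, b_heavy, A_light=1.0, A_heavy=12.0):
    """b values in km/s; returns (temperature [K], turbulent b [km/s])."""
    b1, b2 = b_light * 1e3, b_heavy * 1e3            # to m/s
    # b1^2 - b2^2 = 2 k T (1/m1 - 1/m2)
    T = (b1**2 - b2**2) / (2.0 * K_B) \
        / (1.0 / (A_light * M_U) - 1.0 / (A_heavy * M_U))
    b_turb2 = b2**2 - 2.0 * K_B * T / (A_heavy * M_U)
    return T, np.sqrt(max(b_turb2, 0.0)) / 1e3
\end{verbatim}
For instance, at $T\simeq5\times10^{4}$\,K the purely thermal \mbox{C{\scriptsize IV}}\ width is
$b=\sqrt{2kT/m}\approx8$\,\mbox{km\,s$^{-1}$}, in line with the thermal values quoted above.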
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_17a.ps,width=5.0cm,angle=-90.}
\hspace{0.5cm}
\psfig{file=haehnelt_17b.ps,width=5.0cm,angle=-90.}
}
\vskip 0.5cm
\caption{\small Strongest \mbox{C{\scriptsize IV}}\ Doppler parameters of each absorption
complex as a function of impact parameter to the nearest PGC (left).
Mass-weighted temperature (right).\label{bciv_bovi_tmass_lin}}
\end{figure}
Can we observe the infalling motion during gravitational collapse
directly? Naively we may expect to see substantial line broadening
due to infalling gas. However, the \mbox{H{\scriptsize I}}\ density weighted
velocity dispersion along the LOS (the physical quantity
relevant for the line broadening) has a median as low as
11.6\,\mbox{km\,s$^{-1}$}, even though the average radius of the regions used for
the computation of this quantity was as large as 40 kpc. Looking at
the peculiar velocity diagrams in figures
\ref{postshock} and \ref{damped} with their large
velocity gradients of close to 200\,\mbox{km\,s$^{-1}$}\ across the size of a PGC, the
quiescent structure may be somewhat surprising. However, the enhanced
density responsible for the line formation is strongly peaked on a
spatial scale much smaller than the infalling region. The line
profile samples mostly the high density gas which has gone through
the shock front, has cooled to temperatures of a few $10^4$\,K, and is more
or less at rest. In principle the signature of infalling gas is
still visible in the form of broad profile wings. In \mbox{C{\scriptsize IV}}, however,
these are almost always too weak to be seen as they are due to low
column density gas beyond the radius of the shock. The situation
may be different for \mbox{H{\scriptsize I}}\ absorption lines (cf. Rauch 1996).
In \mbox{O{\scriptsize VI}}\ with its much larger cross-section and weaker density
dependence we probably also have a better chance to detect
infall. The \mbox{O{\scriptsize VI}}\ Doppler parameter ($\expec{b}_{\rm sim}$=11.5
$\mbox{km\,s$^{-1}$}$) is indeed higher than that of \mbox{C{\scriptsize IV}}, and not lower, as one
could naively expect for thermally dominated gas. Obviously
the \mbox{O{\scriptsize VI}}\ absorbing gas is subject to stronger non-thermal motions.
How do the Doppler parameters vary as a function of impact
parameter? The LOS-integrated temperature distribution of
either \mbox{C{\scriptsize IV}}\ or \mbox{O{\scriptsize VI}}\ does not
show a radial variation that could be used as a measure of the impact
parameter. This is due to the fact that some of the absorption
components are related to different PGCs. Plotting, however, only the
Doppler parameters of the strongest absorption component in each
\mbox{C{\scriptsize IV}}\ complex (left panel, figure \ref{bciv_bovi_tmass_lin}) we obtain a
noticeable anti-correlation of $b$(\mbox{C{\scriptsize IV}}) with impact parameter from the
center of the closest PGC. This seems to reflect the drop in
temperature (right panel) and the coming to rest of the gas at small
radii ($<$ 12 kpc in this case). The trend is not obvious for \mbox{O{\scriptsize VI}}\
(not shown here) which arises from a much larger region
of lower density. It also does not show up in plots
of $b$(\mbox{C{\scriptsize IV}}) when all components, rather than the strongest
in each complex, are considered.
\begin{figure}[t]
\centerline{
\psfig{file=haehnelt_18a.ps,width=4.5cm,angle=-90.}
\hspace{0.5cm}
\psfig{file=haehnelt_18b.ps,width=4.5cm,angle=-90.}
}
\vskip 0.5cm
\caption{\small Left: two-point correlation function for the \mbox{C{\scriptsize IV}}\ systems
detected in a simulation box at $z$=3.07 (to contain three galaxies
with $v_c \sim 100$\,\mbox{km\,s$^{-1}$} at $z=0$). Normalization is by the combined TPCF
from the observations of three QSOs ($\expec{z}$=2.78;
RWSB). \label{tpcf3}
Right: same as above for a simulation box containing a single
galaxy with $v_c \sim 200$\, \mbox{km\,s$^{-1}$}\ by $z=0$.
Note the different scales of the axes.}
\end{figure}
\subsection{The two-point correlation function and the dynamical
state of the absorbers on large scales}
As discussed earlier the observed velocity
spreads among absorption components are difficult to reconcile with a
dynamical velocity dispersion. In paper I it was argued that
occasional large velocity splitting of sometimes more than
1000\,\mbox{km\,s$^{-1}$}\ can be explained by chance alignments of
the LOS with filaments containing several PGCs. The observed
two-point correlation function
(TPCF) can indeed be interpreted in this sense, if there is a
strong small scale clustering of \mbox{C{\scriptsize IV}}\ components on scales of
20\,\mbox{km\,s$^{-1}$}\ in addition to the large scale expansion (RSWB). This is in good
agreement with the earlier result that small scale structure may
explain the ``supra-thermal'' widths commonly found for high column
density Ly$\alpha$\ lines (Cowie et al.~1995), as well as the increasing
clustering of Ly$\alpha$\ forest lines as one goes to higher
column density (Chernomordik 1995, Cristiani et al.~1995,
Fernandez-Soto et al.~1996).
The TPCF for the simulation box described earlier
(containing three $v_c \sim 100$\,\mbox{km\,s$^{-1}$}\
galaxies) is shown in the left panel of figure \ref{tpcf3}. The
velocities of the \mbox{C{\scriptsize IV}}\ systems were obtained by Voigt profile fits to
significant continuum depressions.
The absolute normalization comes from the observed TPCF for
three QSOs ($\expec{z}$=2.78; RSWB).
Similar to the observed TPCF, the correlation function of
\mbox{C{\scriptsize IV}}\ lines in the simulated
spectra exhibits a narrow peak at the origin
(the lowest velocity bin is incomplete) and a long tail
out to velocities of 500-600\,\mbox{km\,s$^{-1}$}\ and beyond. The overall width
is somewhat smaller than that of the observed distribution.
Repeating the analysis for another simulation box containing
one galaxy with $v_c \sim 200$\,\mbox{km\,s$^{-1}$}\ we
obtain the TPCF shown in the right panel of figure \ref{tpcf3}
which has much more power on large scales. Considering the rather small
size of our simulation boxes, such variations from box to box are
not surprising. In fact the observed TPCF is also subject to a
large scatter from QSO to QSO, as some LOS often contain only one or
a few systems providing large velocity splittings.
It seems likely that the observed TPCF can be explained by
averaging over TPCFs from individual galaxy forming regions.
As a formal measure of the velocity dispersion of an individual
protogalactic region we can compute the mean (over all LOS) of the RMS of the
density-weighted total velocity
\begin{eqnarray}
\sigma_v = \bigexpec{\sqrt{\frac{\int (\overline{v} -
v)^2 dn}{\int dn}}}_{\rm LOS},
\end{eqnarray}
where $v$ = $H(z)r + v_{\rm pec}$,
and
\begin{eqnarray}
\overline{v}=\frac{\int v dn}{\int dn}.
\end{eqnarray}
For the standard box, with 3 future galaxies, $\sigma_v = 100$\,\mbox{km\,s$^{-1}$}.
Four other boxes developing into single $\sim 200$\,\mbox{km\,s$^{-1}$}\
galaxies give $\sigma_v =$ 142, 134, 111 and 138\,\mbox{km\,s$^{-1}$}.
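Schematically, $\sigma_v$ can be evaluated with a few lines of Python; in the
minimal sketch below the arrays \texttt{r}, \texttt{v\_pec} and \texttt{n}
(position, peculiar velocity and density along one line of sight) and the
Hubble parameter \texttt{H\_z} are assumed to be extracted from the
simulation output, and the average over lines of sight is taken afterwards.
\begin{verbatim}
import numpy as np

def sigma_v_los(r, v_pec, n, H_z):
    """Density-weighted RMS of the total velocity v = H(z) r + v_pec
    along one line of sight (the weight dn is taken as n dr)."""
    v = H_z * r + v_pec
    vbar = np.trapz(v * n, r) / np.trapz(n, r)
    var = np.trapz((v - vbar)**2 * n, r) / np.trapz(n, r)
    return np.sqrt(var)

# sigma_v quoted in the text is the mean of sigma_v_los over many random LOS.
\end{verbatim}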
\subsection{Metal absorbers in emission}
The observational identification of \mbox{C{\scriptsize IV}}\ absorbers at redshifts $z>2$
with any known type of galaxies has proven difficult. Speculating that
the absorbers may be related to luminous galaxies with an old
population of stars, Aragon-Salamanca et al.~(1994) have obtained K-band
images of QSO sight lines with known strong \mbox{C{\scriptsize IV}}\ absorbers to a K
limiting magnitude of 20.3. A slight excess of objects probably at the
redshift of the QSO was found, but convincing identifications of
individual absorbers were not possible.
Here we will briefly consider stellar
population synthesis models (Bruzual \& Charlot 1993, Charlot 1996)
to investigate the prospects of detecting
the stellar continuum from the PGCs responsible for the metal
absorption systems in our model (see also Katz 1992, Steinmetz 1996a).
We assume that the available gas turns into stars on time scales
between $10^{7}$ and $10^{9}$\,yr, a time span capturing the
range from a short burst to extended star formation. In figure
\ref{martin_8} the apparent brightness (not corrected for Ly$\alpha$\ and dust
absorption) at $z=3.1$ is shown for a total gas mass of $10^{9}$\,\mbox{M$_\odot$}
(typical for a PGC) and three different star formation timescales
as a function of the redshift where star formation began.
Figure \ref{martin_8} shows that our model would explain the
non-detection of \mbox{C{\scriptsize IV}}\ absorbers in K by Aragon-Salamanca et al.~(1994).
Even a very short burst converting $10^{9}$\,\mbox{M$_\odot$}\ of gas reaches a peak K
magnitude of only 22; extended star formation lowers the peak brightness
further, to a K magnitude of about 25. A gas/star mass $\ga 10^{11}$\,\mbox{M$_\odot$} would be
necessary to be permanently detectable in K from the ground.
However, if galaxies start building up at redshifts $z\sim 3$ by
merging of smaller objects, recent star formation should have occurred
at these redshifts. The optical passbands are then much more promising
for a ground-based detection of the stellar continuum. Figure
\ref{martin_8} suggests that with HST the stellar continuum should be
detectable in all wavebands redward of the Lyman break.
Spectroscopic identification will only be possible for
the most massive objects in the case of
extended star formation, and for a fraction of bursting
PGCs of average mass if the star formation timescale is short. Recently
Steidel et al.~(1996) and Giavalisco et al.~(1996) have reported
the detection of a population of star forming galaxies at comparable
redshifts. It is intriguing that these objects could be identical to
either a bursting fraction of average-mass ($10^{9}$\,\mbox{M$_\odot$}) objects or
to the high-mass end of the population of PGCs showing extended star
formation (Steinmetz 1996a).
Finally one should note that confusion with foreground galaxies is a
concern when studying regions of ongoing galaxy formation as described
in this paper. The inner 700 kpc of our simulation box (at $z=3$) shown
in figures 1 and 3 corresponds to an angular size slightly larger than
that of the WFPC2 field. Thus images as large as the Hubble Deep Field
may be required to get a ``complete'' picture of
the progenitors of an individual large $z=0$ galaxy. Superposition
effects are then obviously quite severe.
\begin{figure}[t]
\vspace{-2.0cm}
\centerline{
\psfig{file=haehnelt_19.ps,width=18.cm,angle=-0.}
}
\caption{\small
Bruzual \& Charlot models for the apparent brightness (not corrected
for Ly$\alpha$\ and dust absorption, Salpeter IMF, [Z/H] = $-1.7$)
at $z=3.1$ for a total gas mass of a protogalactic clump
of $10^{9}$\,\mbox{M$_\odot$}, as a function of the redshift where star
formation began. Different line styles indicate star formation
timescales between $10^{7}$ and $10^{9}$ yr. Different panels
are for different Johnson filters. \label{martin_8}}
\end{figure}
\clearpage
\section{Discussion and conclusions}
Gravitationally driven density fluctuations in a universe with
hierarchical structure formation can explain QSO absorption phenomena
at $z\sim 3$ over a wide range of column densities. While neutral
hydrogen shows a rather tight correlation between column density and
total density over a density range from $10^{-6}$ to $10^{-1} \,{\rm cm}^{-3}$,
other ionic species probe different density and temperature
regimes in a way specific to each species. The lowest \mbox{H{\scriptsize I}}\ column
densities ($10^{12}$ to $10^{14} \,{\rm cm}^{-2}$) arise from large-scale
sheet-like density enhancements in the IGM with an overdensity of only
a few compared to the mean density of the universe. In this
diffuse gas (densities around $10^{-5} \,{\rm cm}^{-3}$) high ionization
species are prevalent and \mbox{O{\scriptsize VI}}\ $\lambda 1031$ is often the
strongest metal absorption line. Towards higher \mbox{H{\scriptsize I}}\ column densities we
start probing filaments embedded in the large-scale sheets.
In these regions low column density \mbox{C{\scriptsize IV}}$\lambda\lambda 1548,1550$
lines from infalling gas with densities around $10^{-4} \,{\rm cm}^{-3}$
(overdensities of about 10 to 100) dominate the metal absorption
features. \mbox{C{\scriptsize IV}}\ remains the most easily visible metal ion in the
as yet unvirialized regions around the protogalactic clumps
which are later to merge into present-day galaxies. Still larger
\mbox{H{\scriptsize I}}\ column densities occur for lines-of-sight
approaching the central regions of
PGCs. These give rise to Lyman limit systems and eventually to damped Ly$\alpha$\
absorbers. Total densities here exceed $10^{-4} \,{\rm cm}^{-3}$, and
species like \mbox{C{\scriptsize II}}\ and \mbox{Si{\scriptsize IV}}\ become
increasingly prominent. At densities above $10^{-3} \,{\rm cm}^{-3}$ we have
reached the virialized region which is generally optically thick for
radiation shortward of one Rydberg.
Although current simulations cannot precisely constrain the size of
the damped region in a PGC, the fact that at $z \sim 3$ more than ten
such objects exist in the comoving volume containing one $L_{\star}$
galaxy at $z=0$ considerably reduces the cross section per object
required to explain the observed rate of incidence of damped Ly$\alpha$\ absorbers. If
this picture is correct, there should be no 1:1 correspondence between
present-day galaxies and a high-redshift damped (or Lyman limit) absorber.
There is then no need for hypothetical large disks/halos as high
redshift progenitors of present-day galaxies.
The analysis of artificial spectra generated from our numerical
simulation was carried out using the same methods as for observational data. The
results are in remarkable quantitative agreement with a number of
observed properties:
The predicted shape of the \mbox{H{\scriptsize I}}\ column density distribution
shows good agreement with that of the observed distribution
and a good fit is obtained for
$(\Omega_b\,h_{50}^{2}/0.05)^{2}\,/(J_{-22} \, h_{50})\approx 1.5$.
Detailed information on the strength of accompanying metal
absorption has recently become available for \mbox{H{\scriptsize I}}\ column densities
as small as a few times $10^{14} \,{\rm cm} ^{-2}$. We obtain good overall agreement
between the results from our artificial spectra and the observed properties,
either with a simple power law UV spectrum ($\alpha$=$1.5$, $J_{-22}
=3$) or with the spectrum proposed by Haardt \& Madau (1996). This
seems consistent with the lower end of the range of $J$ values
measured from the proximity effect (e.g. Giallongo et al.~1996,
and refs. therein) and suggests a baryon fraction slightly
exceeding the nucleosynthesis constraint (Walker et al.~1991).
The scatter of the column density ratios for \mbox{Si{\scriptsize IV}}/\mbox{C{\scriptsize IV}}, \mbox{C{\scriptsize II}}/\mbox{C{\scriptsize IV}}, and
\mbox{O{\scriptsize VI}}/\mbox{C{\scriptsize IV}}\ versus \mbox{C{\scriptsize IV}}\ is consistent with the observational results,
so the range of ionization conditions appears to be
well captured by the simulations.
As already apparent from simple photoionization models, the observed
metal line strength corresponds to a {\it mean metallicity} [C/H] =
$-2.5$ for the column density range $10^{14}$ to $10^{17} \,{\rm cm} ^{-2}$.
A homogeneous metal distribution reproduces the observed ion ratios
quite well. This implies either that much of the
as yet unvirialized gas had been subject to a widespread phase of
stellar nucleosynthesis well before redshift three, or
that metal transport outward from fully collapsed regions
has been efficient.
Nevertheless, the observed scatter in [C/H] is larger by a factor
of three to ten than predicted by our numerical simulations where
the metals were distributed homogeneously. This may indicate
that some of the metal enrichment took place {\it in situ}
with incomplete mixing prior to observation. Alternatively there
may be a wider spread in physical conditions (e.g. spatial variations of
strength and spectrum of the UV field) than assumed by the simulations.
The observed column density ratios are matched significantly better if
we use {\it relative abundances appropriate for metal-poor stars}
(we used [C]=[N]=0 and [Si]=[O]=0.4). Damped Ly$\alpha$\ systems at redshifts
$>$ 3 show similarly low metal abundances ($-2.5$ $<$ [Z/H] $<$
$-2.0$) and relative metallicities (Lu et al.~1996, and
refs. therein). This is consistent with the idea that metal
absorption systems at high redshift contain a record of
early nucleosynthesis dominated by supernovae of type II.
The observed distribution of Doppler parameters and the relative
contributions to the line width from thermal and non-thermal motion are
well reproduced by the simulations. Obviously, {\it shock heating} is
a second important heating agent (in addition to photoionization
heating) for regions of the universe with overdensities between ten and
a few hundred. In spite of the large peculiar velocities ($\sim 100$\,\mbox{km\,s$^{-1}$})
of the infalling gas, \mbox{C{\scriptsize IV}}\ absorption lines are typically only
$\sim$ 8 to 10 \mbox{km\,s$^{-1}$}\ wide. This is because the \mbox{C{\scriptsize IV}}\ optical depth
arises mostly in narrow post-shock regions where the shocked gas has
already come to rest and is cooling rapidly.
The contributions of bulk motions to the Doppler parameters
of \mbox{O{\scriptsize VI}}\ and \mbox{H{\scriptsize I}}\ are larger because much of the absorption arises
at larger impact
parameters, where infall of gas is a more important broadening agent. In
this particular model of structure formation, the \mbox{C{\scriptsize IV}}\ Doppler
parameter (at $z=3$) is, to a good approximation, a measure of the
quadratic sum of the thermal and the RMS bulk velocity dispersions.
The large scale structure in velocity space, as measured by the \mbox{C{\scriptsize IV}}\ TPCF,
is consistent with the observed \mbox{C{\scriptsize IV}}\ TPCF. This supports the hypothesis
proposed in paper I that LOS which intersect expanding large scale
filaments with embedded PGCs contribute significantly to the high-velocity
tail of the \mbox{C{\scriptsize IV}}\ TPCF. The existence of a hypothetical
class of abundant deep potential wells at high redshift is not required.
We have discussed the prospects of detecting the stellar continuum
which is expected from protogalactic clumps if at least some
metal enrichment has occurred in situ. These objects should be visible
at all optical wavelengths in deep images with the HST and in all optical
passbands from the ground, at least longward of the Lyman break.
We also suggested interpreting the Lyman-break objects at
redshifts around three reported by Steidel et al.~(1996) as
the high-mass end of the PGCs causing metal absorption systems, or as
a bursting fraction of lower-mass PGCs.
Prospects for identifying metal absorption systems at these redshifts
are good. The best strategy may be a systematic search for
Lyman-break objects within a few arcseconds of the lines of sight to
bright quasars, together with follow-up spectroscopy to the faintest
possible limits. PGCs which are progenitors of a particular $z=0$
galaxy can be scattered over several hundred kpc at $z=3$, an area larger
than the field size of the Hubble WFPC2 camera, so problems with incompleteness and
foreground confusion can arise.
In the future, a large database of metal-line ratios as a function of
redshift should improve constraints on the normalization and spectrum of
the UV background and allow us to distinguish between possible metal
enrichment histories of the IGM. In particular, in the
low density regions probed by the lowest column density Ly$\alpha$\ absorbers,
we expect \mbox{O{\scriptsize VI}}\ to be considerably stronger than \mbox{C{\scriptsize IV}}.
Thus it may be possible to push metallicity determinations with \mbox{O{\scriptsize VI}}\
closer to truly primordial gas than is possible with \mbox{C{\scriptsize IV}}, despite
the severe problems with identifying \mbox{O{\scriptsize VI}}\ in the Ly$\alpha$\ forest.
Another interesting case for further study is \mbox{Si{\scriptsize IV}}.
\mbox{Si{\scriptsize IV}}\ absorption should decrease rapidly towards low densities
and a detection in low \mbox{H{\scriptsize I}}\ column density systems
would indicate that the normalization and/or the spectrum of the UV
background differ significantly from those we have adopted.
A detection of the stellar continuum expected to be associated with
metal absorption systems would allow us to investigate
the radial density and metallicity profile of the PGCs.
In summary, we have found that high-resolution hydrodynamical simulations of
galaxy forming regions can substantially aid the interpretation of the
observed properties of metal absorption systems. Currently these
simulations are best suited for studying regions with overdensities
from ten to a few hundred. Such regions are optically thin to ionizing
radiation. Substantial simplifications are present in the current
work.
However, the good and sometimes excellent agreement of the present model
with observation gives reason to believe that we may have
correctly identified the mechanism underlying many of the
metal absorption systems at high redshift; conversely, we may take
the results presented here as an argument in favor of a hierarchical galaxy
formation scenario.
\noindent
\section {Acknowledgments}
We thank Bob Carswell and John Webb for VPFIT, and Gary Ferland
for making CLOUDY available to us. Thanks are also due to
Len Cowie, Limin Lu, Andy McWilliam, Wal Sargent and Simon White
for their helpful comments. MR is grateful to NASA for support
through grant HF-01075.01-94A
from the Space Telescope Science Institute, which is operated
by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS5-26555. Support by NATO grant CRG 950752 and
the ``Sonderforschungsbereich 375-95 f\"ur Astro-Teilchenphysik der
Deutschen Forschungsgemeinschaft'' is also acknowledged.
\pagebreak
\section{Introduction}
Semiclassical theory, and the quantum--classical correspondence,
are still incompletely understood at
the level of long-time or invariant structures, especially when the classical
dynamics shows exponential sensitivity to initial conditions (instability,
or positive Lyapunov exponents).
We have therefore selected a simple 1-d system, the dilation operator
which quantizes the classical linear hyperbolic Hamiltonian $H(q,p)=qp$,
and collected some detailed analytical properties of its eigenstates.
Elementary as this quantum Hamiltonian may be,
seemingly unreported expressions for its eigenstates are given here,
precisely within the coherent-state (Bargmann or Husimi) formulations
where the semiclassical behaviour ($\hbar \to 0$ asymptotics) of quantum states
is best seen.
Formulae for this ``dilator" should be useful tools to probe
quantum phenomena linked to unstable (here, hyperbolic) classical dynamics.
Here are two actively studied examples involving this Hamiltonian:
- in one dimension, classical hyperbolic dynamics takes place
near a saddle-point (assuming it is generic, i.e., isolated and nondegenerate);
then it is locally equivalent to $H=pq$ in suitable canonical coordinates,
i.e, the classical dilator is the local normal form for this class of problems.
At the quantum level, 1-d saddle-points also challenge semiclassical analysis:
like energy minima, they correspond to critical energy values
(i.e., the phase-space velocity vanishes);
semiclassical analysis is fundamentally harder at critical
than at regular energy values (where simple WKB theory works),
but saddle-points are even harder to understand than minima
and their study has expanded more recently \cite{CdV,bleher,paul}.
- in higher dimensions,
the search for quantum manifestations of unstable classical motion
forms one facet of the ``quantum chaos" problem.
For instance, the role and imprint of unstable periodic orbits upon quantum
dynamics remain actively debated issues.
It is known that they influence both the quantum bound state
energy spectrum (through trace formulas) and wave functions (through scarring).
More specifically, the (stable) orbits of elliptic type generate
quantization formulas and quasimode constructions in a consistent way.
In chaotic systems, by contrast,
trace formulas diverge and scarring occurs rather unpredictably, so that
periodic orbits (now unstable) have incompletely assessed quantum effects;
nevertheless, an essential role must still go to their linearized classical
dynamics, which is of hyperbolic type hence generated by linear dilator(s).
Fully chaotic behaviour, a combination of global instability plus
ergodic recurrence, cannot however be captured by integrable models;
therefore, the place of the 1-d dilator eigenfunctions in quantum structures
corresponding to fully chaotic dynamics remains to be further studied.
At present, these eigenfunctions should primarily find use as microlocal models
for general 1-d eigenstates near saddle-points
(and for separatrix eigenstates in higher-dimensional integrable systems,
by extension).
\section{Coherent-state representations}
\subsection{Bargmann representation}
The Bargmann representation \cite{barg} is a particular coherent-state representation
of quantum wave-functions \cite{klau,perel,gilmore} in terms of entire functions.
Although it can be defined in any dimension,
we will just use it for 1-dimensional problems;
it then transforms Schr\"{o}dinger wave-functions $\psi(q)$
defined over the whole real line
into entire functions $\psi(z)$ of a complex variable $z$, as
\begin{equation}
\psi(z)\;=\;\langle z|\psi\rangle \;=\;{1\over(\pi\hbar)^{1/4}} \int_\Bbb R
\mathop{\rm e}\nolimits^{{1\over \hbar}(-{1\over 2}(z^2+q'^2)+\sqrt{2}zq')}\;\psi(q')\;dq'.
\end{equation}
Here $|z\rangle $ denotes a (Weyl) coherent state localized at the phase space point
$(q,p)$ where $z=2^{-1/2}(q-ip)$, and satisfying $\langle z|z'\rangle =\mathop{\rm e}\nolimits^{z\overline {z'}/\hbar}$;
i.e. these coherent states are not mutually orthogonal,
and their normalization is not $\langle z|z\rangle =1$, allowing instead
the bra vector $\langle z|$ to be a holomorphic function of its label $z$.
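For instance, with these conventions the Gaussian ground state
$\psi_0(q')=(\pi\hbar)^{-1/4}\mathop{\rm e}\nolimits^{-q'^2/2\hbar}$ is mapped onto the constant
function $\psi_0(z)\equiv 1$: completing the square in the exponent of the
integral above leaves a pure Gaussian in $q'$, and the $z$-dependence cancels
exactly.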
On the other hand, a closure formula exists
which makes the Bargmann transform invertible,
\begin{equation}
\label{clos}
\rlap{\ninerm 1}\kern.15em 1 = \int_\Bbb C {d\Re(z)d\Im(z)\over\pi\hbar}\;
\mathop{\rm e}\nolimits^{-{z\overline z/\hbar}}\,|z\rangle \langle z|.
\end{equation}
The Bargmann transformation maps ordinary square-integrable wave-functions
into a Hilbert space of entire functions of order $\le 2$.
We will however mostly deal with generalized wave-functions $\psi(q)$,
which are not $L^2$ but only tempered distributions.
They can then still be Bargmann-transformed by the integral formula (1),
and into entire functions of order $\le 2$ as before, now bounded as
\begin{equation}
|\psi(z)| \leq c(1+|z|^2)^N \mathop{\rm e}\nolimits^{|z|^2\over 2\hbar} \qquad \mbox{for some } N.
\end{equation}
This in turn constrains the distribution of their zeros \cite{boas}:
the counting function
\begin{equation}
n(r)=\#\{\mbox{zeros }\,z_m\,\mbox{ of }\,\psi(z)\,\mbox{ s.t. }\,|z_m|\leq r\}
\end{equation}
(zeros will be always counted with their multiplicities) verifies
\begin{equation}
\limsup_{r\to\infty}\, {n(r)\over r^2} \leq {\mathop{\rm e}\nolimits\over\hbar}.
\end{equation}
It follows that a Bargmann function admits a canonical Hadamard representation
as an (in)finite product over its zeros,
\begin{equation}
\label{prod}
\psi(z) = \mathop{\rm e}\nolimits^{p(z)} \,z^{n(0)} \, \prod_{z_m \ne 0} \Bigl(1-{z\over z_m}\Bigr)
\exp \Bigl({z\over z_m} + {1 \over 2}\Bigl({z\over z_m}\Bigr)^2\Bigr)
\end{equation}
where $p(z)=\hbar^{-1}(a_2z^2 + a_1z + a_0)$, with moreover $|a_2|\leq 1/2$;
the integer $n(0)\ge 0$ is the multiplicity of $z=0$ as a zero of $\psi(z)$.
This decomposition shows that the wave-function is completely determined by
the knowledge of all the Bargmann zeros (with their multiplicities) plus three
coefficients $a_2,a_1$, and $a_0$ (which only fixes a constant factor).
When the Hamiltonian is a polynomial (or more generally an analytic function)
in the variables $(q,p)$, it can be useful to express its quantum version
as a pseudo-differential operator acting on the Bargmann function \cite{barg},
using the rules
\begin{eqnarray}
a^\dagger = {\hat q-i\hat p\over\sqrt2}\ \ \qquad\mbox{(creation operator)}
&\rightarrow& \mbox{multiplication by } z\nonumber\\
a = {\hat q+i\hat p\over\sqrt2} \quad\mbox{(annihilation operator)}
&\rightarrow& \hbar{\partial\over\partial z}.
\end{eqnarray}
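For instance, the Weyl-quantized dilator
$\hat H={1\over2}(\hat q\hat p+\hat p\hat q)$, rewritten with
$\hat q=(a+a^\dagger)/\sqrt2$ and $\hat p=i(a^\dagger-a)/\sqrt2$, becomes
$\hat H={i\over 2}(a^{\dagger\,2}-a^2)$ and therefore acts on Bargmann functions as
\begin{equation}
\hat H \;\longrightarrow\; {i\over2}\Bigl(z^2-\hbar^2{d^2\over dz^2}\Bigr),
\end{equation}
which is precisely the form entering eq. (\ref{bar}) below.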
\subsection{Husimi representation}
An alternative point of interest lies in certain semi-classical densities
on phase space associated to wave-functions.
In particular, the Wigner function is defined as
\begin{equation}
{\cal W}_\psi (q,p) = (2 \pi \hbar)^{-1} \int_\Bbb R
\psi(q-r/2) \overline{\psi}(q+r/2) \mathop{\rm e}\nolimits^{ipr/\hbar} dr
\end{equation}
and the Husimi function \cite{husimi} as the convolution of the Wigner function
with a phase-space Gaussian,
\begin{equation}
{\cal H}_\psi (q,p) =(\pi \hbar )^{-1} \int_{\Bbb R ^2} {\cal W}_\psi(q',p')
\mathop{\rm e}\nolimits^{- \left[(q-q')^2+
(p-p')^2 \right]/ \hbar} dq'dp'.
\end{equation}
The Wigner representation has greater symmetry (invariance under all linear
symplectic transformations),
but Wigner functions show a much less local semiclassical behaviour:
they tend to display huge nonphysical oscillations,
which must be averaged out to reveal any interesting limiting effects.
In the Husimi functions, the spurious oscillations get precisely damped so as
to unravel the actual phase-space concentration of the semiclassical measures,
but at the expense of reducing the invariance group.
The Husimi function is equivalently given by
\begin{equation}
\label{hus}
{\cal H}_\psi(z,\overline z)\,=\,{\langle z|\psi\rangle \langle \psi|z\rangle
\over \langle z|z\rangle }\,=\,|\psi(z)|^2\,\mathop{\rm e}\nolimits^{-z\overline z/\hbar}
\end{equation}
hence it constitutes the density of a positive measure on the phase space;
for the scattering-like eigenfunctions to be studied here,
this measure will not be normalizable.
(For a normalized state, it is a probability measure thanks to the closure
formula (\ref{clos}).)
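For instance, a state whose Bargmann transform is the constant function 1
(the harmonic-oscillator ground state) has the Husimi density
${\cal H}(z,\overline z)=\mathop{\rm e}\nolimits^{-z\overline z/\hbar}$, a Gaussian centered at the
origin of phase space which integrates to unity against the measure of
eq. (\ref{clos}).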
It is interesting to study how the Husimi measure of an eigenfunction
behaves as $\hbar\to 0$. For an energy away from the separatrix,
a standard theorem states that this measure concentrates on the
energy surface, with a Gaussian transversal profile \cite{taka,kurchan,vor89}.
For energies close to a separatrix, careful analyses were performed in
\cite{CdV, bleher,paul}.
Our aim here is to select a simple tractable case,
namely the eigenfunctions of the linear hyperbolic Hamiltonian \cite{taka:hyp}, and to carry further its description by means of the Bargmann representation,
using eq. (\ref{hus}) to derive the Husimi density as a by-product.
\subsection{Stellar representation}
According to the factorized representation (\ref{prod}),
1-d quantum wavefunctions can be essentially parametrized
in a phase-space geometry by the distribution of their Bargmann zeros which,
by eq. (\ref{hus}), is also the pattern of zeros for the Husimi density itself;
it thus constitutes a complementary viewpoint to the previous emphasis put on
the high-density behaviour of the Husimi function.
We refer to this ``reduced" description of a wavefunction by a discrete cloud
of phase-space points as a stellar representation.
It puts quantum mechanics in a new perspective \cite{leb:vor,leb,tualle},
but calls for a finer understanding of both dynamical and asymptotic properties
of Bargmann zeros if new results are to be obtained through eq. (\ref{prod}).
Consequently, our subsequent analysis of ``toy"
eigenfunctions
will largely deal with explicit behaviours of their Bargmann zeros.
\section{Description of the framework}
\subsection{The linear hyperbolic Hamiltonian}
The classical 1-dimensional Hamiltonian of linear dilation is $H(q,p)=pq$,
which is also equivalent to the scattering Hamiltonian $H={1\over2}(P^2 - Q^2)$
upon a symplectic rotation of the coordinates by $\pi/ 4$ according to
\begin{equation}
q={P+Q \over\sqrt{2}}, \qquad p={P-Q \over\sqrt{2}}.
\end{equation}
A classical trajectory at any energy $E \ne 0$ is a hyperbola branch,
\begin{equation}
q(t) =q_0\, \mathop{\rm e}\nolimits^t, \qquad p(t) =p_0\,\mathop{\rm e}\nolimits^{-t} \qquad \mbox{with } E=p_0 q_0
\end{equation}
whereas the $E=0$ set is a separatrix, made of a stable manifold $\{p=0\}$,
an unstable manifold $\{q=0\}$, and the hyperbolic fixed point $(0,0)$.
We study eigenfunctions of the operator obtained by Weyl quantization,
namely $\hat H = {\hbar\over i}(q{d\over dq}+{1\over 2})$.
This quantum Hamiltonian admits two independent stationary wave-functions
for any real energy $E$:
\begin{equation}
\psi_\pm^E = K\,\theta(\pm q)\,\mathop{\rm e}\nolimits^{(i{E\over \hbar}-{1\over 2})\log |q|}
\end{equation}
where $\theta(q)$ is the Heaviside step function; $K\ne 0$ is a complex constant
(having no preferred value, since the solutions are not square-integrable).
Microlocally, each of these wavefunctions is supported by the lagrangian
manifolds $\Lambda_\pm^E =\{pq=E\,, \pm q>0\,\}$, i.e. half of the $E$-energy
surface \cite{CdV}.
In order to obtain more semi-classical information, we will use the Bargmann representation. For instance,
\begin{equation}
\label{int}
\langle z|\psi_+^E\rangle =\psi_+^E(z) = {K\over (\pi\hbar)^{1/4}}\int_0^\infty
\mathop{\rm e}\nolimits^{{1 \over \hbar}(-{1 \over 2}(z^2+q'^2)+\sqrt{2}zq'+iE\log q' )}
{1\over \sqrt {q'}}dq'
\end{equation}
and we have of course
\begin{equation}
\label{sym}
\psi_-^E(z) = \psi_+^E(-z); \qquad
\psi_\pm^{-E}(z) = \overline {\psi_\pm^E(\overline z)}.
\end{equation}
Our aim in this paper is to describe the general eigenfunction of energy $E$
in this representation: up to a global (removable) constant factor,
it reads as $\psi_\lambda^E(z)=\psi_+^E(z)+\lambda \psi_-^E(z)$,
for any complex projective parameter $\lambda$, i.e.
$\lambda \in \overline \Bbb C = \Bbb C \cup\{\infty\}$.
We can immediately restrict attention to $E \ge 0$ due to the second of
eqs. (\ref{sym}).
We will be particularly interested,
on the one hand, in the global profile of these functions,
and on the other hand, in the position of their zeros
because these form the main skeleton of
the Hadamard product representation (\ref{prod}) \cite{leb:vor}.
The motivation is to better describe the eigenfunctions of a general
1-d Hamiltonian for eigenvalues close to a classical saddle-point energy value.
Such an eigenfunction cannot be simply of WKB form near the saddle-point;
instead, it should be microlocally modeled by an eigenvector $\psi_\lambda^E$
of the dilation operator $\hat H$ near $(q,p)=0$ with $E \approx 0$
(up to straightforward displacements, in phase space and in energy).
Thus, in a Bargmann representation, the eigenfunction near $z=0$
ought to behave like $\psi_\lambda^E(z)$ for some $\lambda$, which is the one
quantity whose actual value is determined by global features of the solution.
(In particular, for a parity-symmetric 1-d Hamiltonian
and for a saddle-point located at the symmetry center,
only even or odd solutions ever come into play;
hence parity conservation preselects the two eigenfunctions of $\hat H$
with the special values $\lambda=+1$ and $-1$ respectively.)
\subsection{Main analytical results}
The functions $\psi_\lambda^E(z)$ are also solutions
of the Schr\"odinger equation written in the Bargmann representation,
\begin{equation}
\label{bar}
{i\over2}\Bigl(-\,\hbar ^2 {d^2\over dz^2}\,+\,z^2\Bigr)\psi^E(z) =E\:\psi^E(z).
\end{equation}
It is convenient to use the rotated variables
$Q,P$ and $Z = 2^{-1/2}(Q-iP) =z \mathop{\rm e}\nolimits^{-i\pi /4}$ in parallel with $q,p$, and $z$.
In those variables the quantum Hamiltonian reads as
the quadratic-barrier Schr\"odinger operator
$\hat H = {1\over2}(-\hbar^2 d^2/dQ^2 - Q^2)$.
Its Bargmann transform happens to be exactly the same operator in the $Z$
variable, simply continued over the whole complex plane, so that
the eigenfunction equation can also be written as
\begin{equation}
\label{barrier}
{1\over2}\Bigl(-\,\hbar^2 {d^2\over dZ^2}\,-\,Z^2\Bigr)\Psi^E(Z) = E\:\Psi^E(Z).
\end{equation}
At the same time, the Bargmann representations obtained from $q$ and $Q$
are equivalent under a simple complex rotation,
\begin{equation}
\Psi(Z) \,=\,\psi(z) \qquad \mbox{with } Z=z \mathop{\rm e}\nolimits^{-i\pi /4} .
\end{equation}
Consequently, as a main first result, the above solutions are directly related
to the parabolic cylinder functions $D_{\nu}(y)$, defined for example in \cite{bat}.
As a matter of fact, we have :
\begin{equation}
\label{psi-D}
\Psi_\pm ^E(Z)= \psi_{\pm}^E(z)
= K{\hbar^{iE\over 2\hbar} \over {\pi ^{1/4}}}\,
\Gamma \left({1\over 2} + {iE \over \hbar}\right)\,
D_{- {1\over 2} -{iE \over \hbar}}\biggl(\mp\sqrt{2\over \hbar} z\biggr).
\end{equation}
Up to rescaling, the Bargmann eigenfunction
$\psi_\lambda^E(z)$ is then simply a linear combination of known functions,
$[\lambda D_{\nu}(y)+D_{\nu}(-y)]$ with the notations
$y=\sqrt{2/\hbar}\,z$ and $\nu = -{1\over 2}-i{E\over \hbar}$.
The situation simplifies even further on the separatrix $E=0$,
where parabolic cylinder functions reduce to Bessel functions, as
\begin{eqnarray}
\label{bessel}
D_{-1/2}(y) &=& \left((2\pi)^{-1} y \right)^{1/2}
K_{1/4}\left({y^2 /4}\right) \nonumber\\
D_{-1/2}(y)\,+\,D_{-1/2}(-y) &=& \mathop{\rm e}\nolimits^{+i\pi /8}\, (\pi y)^{1/2}
J_{-1/4}\left(i{y^2 /4}\right)\\
-D_{-1/2}(y)\,+\,D_{-1/2}(-y) &=& \mathop{\rm e}\nolimits^{-i\pi /8}\, (\pi y)^{1/2}
J_{1/4}\left(i{y^2 /4}\right). \nonumber
\end{eqnarray}
By virtue of eq. (\ref{hus}), eqs. (\ref{psi-D}) and (\ref{bessel})
yield the Husimi densities in closed form, for all eigenfunctions;
e.g., for $\psi_+^E (z)$ and $\Psi_{\pm 1}^0 (Z)$ respectively,
\begin{eqnarray}
\label{husi}
{\cal H}_+^E (z,\overline z) &=& {|K|^2 \sqrt \pi \over \cosh (\pi E/\hbar)}
|D_{-{1\over 2}-i{E\over\hbar}}(-\sqrt{2/\hbar}\,z)|^2 \mathop{\rm e}\nolimits^{-z\overline z/\hbar}
\\
{\cal H}_{\pm 1}^0 (Z,\overline Z) &=& |K|^2 \sqrt{2 \pi^3/\hbar}
\,|Z|\,|J_{\mp 1/4} (Z^2 /\, 2 \hbar)|^2 \mathop{\rm e}\nolimits^{-Z\overline Z/\hbar} . \nonumber
\end{eqnarray}
This is an interesting extension of earlier similar formulas for
the Wigner functions; e.g., for $\psi_+^E$, \cite{bal:vor}
\begin{eqnarray}
{\cal W}_+^E (q,p) &=& {|K|^2 \over \hbar \,\cosh (\pi E/\hbar)}\theta(q)
\mathop{\rm e}\nolimits^{2ipq/\hbar} {_1 F_1}\Bigl({1\over 2}+{iE \over \hbar};1;-4ipq/\hbar\Bigr) \\
{\cal W}_+^{E=0}(q,p) &=& |K|^2 \hbar^{-1} \theta(q)\,J_0(2pq/\hbar)
\nonumber
\end{eqnarray}
where ${_1 F_1} (-\nu;1;x) \propto L_\nu (x)$ (Laguerre functions).
These expressions are especially simple (constant along connected orbits)
because the Wigner representation is exactly $\hat H$-invariant
under the symplectic evolution generated by the bilinear Hamiltonian $H$.
Conversely, coherent-state representations can never preserve
such a dynamical invariance of hyperbolic type,
hence closed-form results like (\ref{husi}) are necessarily more intricate
than their Wigner counterparts and not simply transferable therefrom.
Fig. 1 shows contour plots for some of those Husimi densities,
all with the normalizations $K=\hbar=1$.
The first example (${\cal H}_+^E(z,\overline z)$ for $E=+1$) is plotted twice:
once with equally spaced contour levels starting from zero (linear scale),
and once with contour levels in a geometric progression decreasing
from the maximum (logarithmic scale).
The linear plot emphasizes the high-density modulations which control
the measure concentration of the Husimi density;
the logarithmic plot reveals the subdominant structures and especially
the locations of the zeros $z_m$
which determine the Hadamard parametrization.
For the parabolic cylinder function itself as an example,
the factorization formula (\ref{prod}) specifically gives
\begin{eqnarray}
D_\nu(y) &=& \mathop{\rm e}\nolimits^{p(y)} \prod_{y_m \ne 0} \Bigl(1-{y\over y_m}\Bigr)
\exp \Bigl({y\over y_m}+{1 \over 2}\Bigl({y\over y_m}\Bigr)^2\Bigr), \nonumber\\
p(y) &\equiv&
\log D_\nu(0)+(\log D_\nu)'(0)\ y+(\log D_\nu)''(0)\ y^2/2 \nonumber\\
&=&
-\log { \Gamma \bigl({1-\nu \over 2}\bigr) \over 2^{\nu/2} \sqrt \pi}
-\sqrt 2\
{\Gamma\bigl({1-\nu\over 2}\bigr)\over\Gamma\bigl(-{\nu\over 2}\bigr)}\ y
-\left( {\nu \over 2} + {1\over 4}+
\left[{\Gamma\bigl({1-\nu\over 2}\bigr)\over\Gamma\bigl(-{\nu\over 2}\bigr)}
\right]^2
\right) y^2 .
\end{eqnarray}
In order to save figure space, we do not provide the log-plots
used to locate the zeros for the other Husimi densities
but only their linear contour plots, with the zeros superimposed as small dots.
(The same uniform contour level spacing is used throughout
to make comparisons easier.)
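In practice such plots only require a direct numerical evaluation of the
closed-form density (\ref{husi}). A minimal Python sketch (assuming the
\texttt{numpy} and \texttt{mpmath} libraries, the latter providing the
parabolic cylinder function $D_\nu$ as \texttt{pcfd}, here taken to accept a
complex order) is:
\begin{verbatim}
import numpy as np
import mpmath as mp

hbar, E, K = 1.0, 1.0, 1.0            # same normalizations as in fig. 1
nu = -0.5 - 1j * E / hbar             # order of D_nu

def husimi_plus(z):
    # |K|^2 sqrt(pi)/cosh(pi E/hbar) |D_nu(-sqrt(2/hbar) z)|^2 exp(-|z|^2/hbar)
    D = mp.pcfd(nu, -mp.sqrt(2.0 / hbar) * z)
    pref = abs(K)**2 * mp.sqrt(mp.pi) / mp.cosh(mp.pi * E / hbar)
    return float(pref * abs(D)**2 * mp.exp(-abs(z)**2 / hbar))

# phase-space grid, with z = (q - i p)/sqrt(2)
qs = np.linspace(-4.0, 4.0, 81)
ps = np.linspace(-4.0, 4.0, 81)
H = np.array([[husimi_plus((q - 1j * p) / np.sqrt(2.0)) for q in qs]
              for p in ps])
# H can now be passed to a contour plotter, with linear or logarithmic levels.
\end{verbatim}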
\section{Asymptotic expansions}
We can then rely upon the known asymptotic properties of the $D_{\nu}(y)$,
which follow from the integral representation (\ref{int}),
to investigate two asymptotic regimes for the eigenfunctions of $\hat H$.
Firstly, when $E/\hbar$ is kept finite, the
asymptotic expansions will be valid in the limit $y\to\infty$; this corresponds
to eigenenergies very close to the classical separatrix energy $E=0$. Secondly,
if we fix the energy $E$ at a non-vanishing value and let $\hbar \to 0$, we
have to use a different type of asymptotics, namely usual WKB expansions.
\subsection{Energies close to zero}
We use the expansions for $D_\nu (y)$ when $|y|\to\infty$ \cite{bat} at fixed $\nu$,
which are obtained from integral representations using Watson's lemma,
and take different forms in various angular sectors,
\begin{eqnarray}
\label{dexp}
-{\pi \over 2}<\arg(y)<+{\pi \over 2}&:&\quad D_\nu(y) \sim y^\nu \mathop{\rm e}\nolimits^{-y^2/4}
\sum_{n\ge 0} {\Gamma(2n-\nu)\over n!\ \Gamma(-\nu)}{1\over(-2y^2)^n}
\sim y^\nu \mathop{\rm e}\nolimits^{-y^2/4}(1+O(y^{-2})) \\
-{\pi}<\arg(y)<-{\pi \over 2}&:&\quad D_\nu(y)\sim y^\nu\, \mathop{\rm e}\nolimits^{-y^2/4}
(1+O(y^{-2}))\,\,-{\sqrt{2\pi}\over \Gamma(-\nu)}\mathop{\rm e}\nolimits^{-i\nu\pi}\,y^{-\nu-1}\,
\mathop{\rm e}\nolimits^{y^2/4}\,(1+O(y^{-2}))\nonumber\\
+{\pi \over 2}<\arg(y)<{+\pi }&:&\quad D_\nu(y)\sim y^\nu\, \mathop{\rm e}\nolimits^{-y^2/4}
(1+O(y^{-2}))\,\,-{\sqrt{2\pi}\over \Gamma(-\nu)}\mathop{\rm e}\nolimits^{i\nu\pi}\,y^{-\nu-1}\,
\mathop{\rm e}\nolimits^{y^2/4}\,(1+O(y^{-2})). \nonumber
\end{eqnarray}
The sectors are specified here as non-overlapping and bounded by Stokes lines,
i.e. curves of maximal dominance of one exponential factor over the other.
(Each asymptotic expansion actually persists in a larger sector
overlapping with its neighbours, but this extension will not be of use here.)
The above expansions are valid for $|y|\to\infty$ within each sector
and provide approximations to the shape of the eigenfunctions
and the positions of their zeros for large $|y|$;
we will use them to leading order only (up to $O(y^{-2})$ terms).
For the general eigenfunction
$\psi^E_\lambda \propto [\lambda D_{\nu}(y)+D_{\nu}(-y)]$,
eqs. (\ref{dexp}) straightforwardly generate four different expansions
in the four $z$-plane quadrants $S_j,\ j=0,1,2,3$
(named anticlockwise from $S_0=\{0<\arg z <\pi/2\}$).
\subsection{WKB expansions for a fixed non-vanishing energy}
The previous expansions are inapplicable when $\hbar\to 0$
with the classical energy kept fixed at a non-zero value (in the following we
will suppose $E>0$). In this regime, we have to use WKB-type expansions instead.
These can be obtained from the integral representations of the solutions
(\ref{int}), by performing saddle-point approximations;
equivalently, they can be found directly from
the Schr\"odinger equation written in Bargmann variables (\ref{barrier}).
We will use the $Z$ variable for convenience.
The general WKB solution can be written, to first order in $\hbar$, as
\begin{equation}
\Psi ^E(Z) \sim (2E+Z^2)^{-1/4} \left(\alpha(\hbar) \mathop{\rm e}\nolimits^{+i\phi(Z_0,Z)/\hbar}
+ \beta(\hbar) \mathop{\rm e}\nolimits^{-i \phi(Z_0,Z)/\hbar} \right), \qquad
\end{equation}
where the exponents are now the classical action integrals,
taken from an (adjustable) origin $Z_0$,
\begin{equation}
\phi(Z_0,Z) = \int_{Z_0}^Z P_E(Z')dZ', \qquad P_E(Z) \equiv \sqrt{2E+Z^2}
\end{equation}
with the determination of the square root $P_E(Z)$ fixed
by the cuts indicated on fig. 2 (left) and by $P_E(0)>0$.
This approximation is valid for $\hbar\to 0$,
when $Z$ stays far enough from the two turning points
$Z_\pm=\pm i\sqrt{2E}$ in the sense that $|\phi(Z_\pm,Z)| \gg \hbar$.
The coefficients $\alpha$ and $\beta$ a priori depend
upon $\hbar$ and the region of the complex plane where $Z$ lies.
More precisely, the complex $Z$-plane is to be partitioned by the Stokes lines,
specified for each turning point by the condition
\begin{equation}
i \phi(Z_\pm,Z)/\hbar\quad \mbox{real} .
\end{equation}
Three such lines emanate from every turning point.
When the variable $Z$ crosses a Stokes line, the coefficients $\alpha$ and $\beta$ change according to connection rules (see \cite{olver} for instance);
the application of these rules yields the global structure of the solution.
For full consistency, no Stokes line should link two turning points;
this restriction forces us to slightly rotate $\hbar$ into the complex plane,
as $\hbar\to \mathop{\rm e}\nolimits^{i\epsilon}\hbar$,
with the resulting partition of the plane drawn on fig. 2 (left).
The explicit form of exponentially small WKB contributions
is generally sensitive to the choice $\epsilon = \pm 0$.
However, such is not the case for the subsequent results at the order to which they will be expressed, so that we may ultimately reset $\epsilon =0$.
Before studying a particular solution, we introduce the hyperbolic angle variable $\theta = {\rm arcsinh} (Z/\sqrt{2E})$, which allows
the action to be integrated in closed form, as
\begin{equation}
\phi(Z_0,Z)={E\over 2} \left[\sinh(2\theta')+2\theta'\right]_{\theta_0}^{\theta}
={1 \over 2} \left[ Z'\sqrt{2E+Z'^2} + \log (Z'+ \sqrt{2E+Z'^2})\right]_{Z_0}^Z;
\end{equation}
the turning points correspond to $\theta_\pm = \pm i\pi/2$;
the action values $\phi(0,Z_\pm)=\pm i \pi E/2$ are frequently needed.
We fully describe one eigenfunction as an example, $\Psi_+^E(Z)$
(corresponding to $\lambda=0$).
We identify its WKB form first in the regions ${\cal S}'$, ${\cal S}_1$, ${\cal S}_2$,
by noticing that this eigenfunction must be exponentially decreasing
for $Z \to \infty$ in a sector around $\arg Z = -3\pi/4$ (i.e., $z \to -\infty$)
overlapping with those three regions, and then in the remaining regions
by using the connection rules. The result is
\begin{eqnarray}
\label{plus}
\Psi_+^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}\;
\mathop{\rm e}\nolimits^{+i\phi(0,Z)/\hbar}\qquad\qquad
\mbox{in the regions ${\cal S}'$, ${\cal S}_1$, ${\cal S}_2$}\nonumber\\
\Psi_+^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}\; \mathop{\rm e}\nolimits^{-\pi E/\, 2\hbar}
\left( \mathop{\rm e}\nolimits^{+i \phi(Z_+,Z)/\hbar} +i\,\mathop{\rm e}\nolimits^{-i \phi(Z_+,Z)/\hbar}
\right)\qquad\mbox{in ${\cal S}_0$} \\
\Psi_+^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}\; \mathop{\rm e}\nolimits^{+\pi E/\,2\hbar}
\left( \mathop{\rm e}\nolimits^{+i \phi(Z_-,Z)/\hbar} -i\,\mathop{\rm e}\nolimits^{-i \phi(Z_-,Z)/\hbar}\right)
\qquad\mbox{in ${\cal S}_3$}. \nonumber
\end{eqnarray}
The overall normalization factor $C(\hbar)$ is determined by comparison
with the direct saddle-point evaluation of the integral (\ref{int}):
\begin{equation}
C(\hbar) = K(2\pi\hbar)^{1/4}\mathop{\rm e}\nolimits^{-{\pi E/\,4\hbar}} \mathop{\rm e}\nolimits^{-i\pi /8}
\left({E/ \mathop{\rm e}\nolimits}\right)^{iE/\,2\hbar}.
\end{equation}
Eqs. (\ref{plus}) readily yield the WKB expansions for the general solution as
\begin{eqnarray}
\label{pm}
\Psi_\lambda^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}
\left[\mathop{\rm e}\nolimits^{+i \phi(0,Z)/\hbar}+\lambda \mathop{\rm e}\nolimits^{-i \phi(0,Z)/\hbar}\right]
\qquad\qquad\qquad\quad \mbox{in ${\cal S}'$} \nonumber\\
\Psi_\lambda^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}
\left[\mathop{\rm e}\nolimits^{+i \phi(0,Z)/\hbar} + (\lambda-c_-)\mathop{\rm e}\nolimits^{-i\phi(0,Z)/\hbar}\right]
\qquad\qquad\mbox{in ${\cal S}_0$} \nonumber\\
\Psi_\lambda^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}
\left[(1-\lambda c_+ )\mathop{\rm e}\nolimits^{+i \phi(0,Z)/\hbar}
+\lambda \mathop{\rm e}\nolimits^{-i \phi(0,Z)/\hbar}\right]
\qquad\quad \mbox{in ${\cal S}_1$} \\
\Psi_\lambda^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}
\left[(1-\lambda c_-)\mathop{\rm e}\nolimits^{+i \phi(0,Z)/\hbar}
+\lambda\mathop{\rm e}\nolimits^{-i \phi(0,Z)/\hbar}\right] \qquad\quad \mbox{in ${\cal S}_2$} \nonumber\\
\Psi_\lambda^E(Z) &\sim& {C(\hbar)\over P_E(Z)^{1/2}}
\left[\mathop{\rm e}\nolimits^{+i \phi(0,Z)/\hbar} + (\lambda-c_+)\mathop{\rm e}\nolimits^{-i\phi(0,Z)/\hbar}\right]
\qquad\qquad \mbox{in ${\cal S}_3$}, \nonumber
\end{eqnarray}
with the notations
\begin{equation}
\label{cpm}
c_\pm = -\mathop{\rm e}\nolimits^{\pm i \pi \nu} = \pm i\,\mathop{\rm e}\nolimits^{\pm\pi E/\hbar}
\quad (c_-=1/c_+).
\end{equation}
\section{Large values of the Husimi density}
\subsection{In the WKB framework}
We study the particular solution $\Psi_+^E(Z)$ for a fixed positive energy
$E$ as an example. From the WKB
expansions (\ref{plus}), we derive the Husimi density of this solution, using
the hyperbolic angle as variable.
In the regions ${\cal S}'$, ${\cal S}_1$, ${\cal S}_2$, away from the turning points, we obtain
\begin{equation}
\label{husimi}
{\cal H}_+^E(Z,\overline Z) \approx
{|C(\hbar)|^2\over [E(\cosh 2\Re(\theta)+\cos 2\Im(\theta))]^{1/2}}\:
\exp\left\{{E\over\hbar}\Bigl(
-\cosh 2\Re(\theta)[\sin 2\Im(\theta)+1]+\cos 2\Im(\theta)-2\Im(\theta)
\Bigr)\right\}.
\end{equation}
This formula shows that the Husimi measure concentrates semi-classically
along the maxima of the exponential factor.
Since the variable $\theta$ is restricted to the strip $|\Im(\theta)|<\pi/2$,
those maxima occur on the line $\Im(\theta)=-\pi /4$, which corresponds exactly to the branch of hyperbola of energy $E$ in the half-plane $\Im(Z)<0$.
In the region ${\cal S}_0 \cap \{\Im (Z)\le 0\}$, ${\cal H}_+^E(Z,\overline Z)$ also obeys
eq. (\ref{husimi}) up to exponentially small terms,
so that the discussion concerns the whole $E$-hyperbola branch
in the lower $Z$-half-plane.
The above expression simplifies around this maximum curve, according to the following remarks. First of all, the variables $(Z,\overline Z)$ are
(up to a factor $-i$) symplectic transforms of the original variables $(q,p)$,
so the expression of the classical energy $E=-{1\over 2}{\overline Z}^2 \,+\,V(Z)$,
where $V(Z)$ is analytic, implies the following classical velocity along the
$E$-energy curve :
\begin{equation}
\dot Z=i{\partial E\over\partial \overline Z}=-i\overline Z
\end{equation}
and along this curve, we also have $|Z|^2=E\cosh\,2\Re(\theta)$.
Furthermore, if we decompose a small variation $\delta\theta$ as
\begin{eqnarray}
\delta Z_\parallel&=&\sqrt{2E}\cosh\theta\ \Re(\delta\theta)\nonumber\\
\delta Z_\perp&=&i\sqrt{2E}\cosh\theta\ \Im(\delta\theta)
\end{eqnarray}
($\delta Z_\perp$ is a variation of $Z$ perpendicularly to the $E$-hyperbola),
then we obtain the following expression of the Husimi density around this maximum curve:
\begin{equation}
\label{husimi1}
{\cal H}_+^E(Z,\overline {Z})\,\approx\, |K|^2\sqrt{2\pi\hbar}\,{1\over|\dot Z|}
\mathop{\rm e}\nolimits^{-2|\delta Z_\perp|^2 /\hbar};
\end{equation}
that is, the density decreases as a Gaussian of constant width normally to the maximum curve, its height being given by the inverse of the phase-space
velocity. This corresponds semi-classically to a conserved probability flux
along this curve, and confirms earlier predictions \cite{taka}
(cf. fig. 1, top left and bottom right).
As a new feature, by contrast, in the upper $Z$-half-plane
there is a maximum curve in the region ${\cal S}_0$ only,
and well below the anti-Stokes line where the zeros of $\Psi_+^E(Z)$ lie.
This maximum curve is given by $\Im(\theta)=+\pi /4$,
i.e., it is the other branch of the $E$-hyperbola.
Around it, the Husimi density behaviour is precisely eq. (\ref{husimi1})
times the constant factor $\exp(-{2\pi E/\hbar})$,
an exponentially small contribution compared to that from the lower half-plane;
hence this enhancement is semi-classically ``invisible" but can be guessed on
the log-plot in fig. 1, top right.
(The correspondence between the $z$ and $Z$ variables is recalled on
Fig. 2, right.)
\subsection{Energies close to zero}
Now using the expansions (\ref{dexp}), we can analyze the large values of the Husimi density in the case where $E/\hbar$ stays bounded, still in the semi-classical limit $\hbar\to 0$. If we still consider the function $\psi_+^E(z)$, we find a concentration along an invariant subset
of the separatrix, i.e., the positive real and the imaginary $z$-axes.
More precisely, if $|z|^2\gg\hbar$, we have
\begin{eqnarray}
\label{expsep}
{\cal H}_+^E(z,\overline z) &\sim& |K|^2\sqrt{2\pi\hbar}\;
{1\over |\dot z|} \mathop{\rm e}\nolimits^{-2E\arg(z)/\hbar} \mathop{\rm e}\nolimits^{-2\Im(z)^2/\hbar}
\quad\qquad\mbox{when } |\arg(z)|<{\pi\over 4} \\
{\cal H}_+^E(z,\overline z) &\sim&
{|K|^2\over\cosh(\pi E/\hbar)} \sqrt{\pi\hbar\over 2}\;
{1\over|\dot z|} \mathop{\rm e}\nolimits^{2E\arg(-z)/\hbar} \mathop{\rm e}\nolimits^{-2\Re(z)^2/\hbar}
\mbox{ when } |\arg(-z)|<{3\pi\over 4}.\nonumber
\end{eqnarray}
We notice that both the longitudinal dependence of the density,
and its Gaussian decrease away from the separatrix, are exactly the same
as for a regular energy curve. Thus, away from the unstable fixed point $z=0$,
the singular limit $E\to 0$ behaves straightforwardly.
Here, however, the Husimi density is also described exactly
all the way down to the saddle-point (for $z\approx 0$ when $E\approx 0$),
by the explicit formulae (\ref{husi}), e.g.,
\begin{equation}
{\cal H}_+^E(0,0)\,=\,{|K|^2\over\sqrt{8\pi}}\;
\Bigl| \Gamma \Bigl( {1\over 4}+i{E\over 2\hbar}\Bigr) \Bigr| ^2 .
\end{equation}
We can likewise obtain the rough shape of the Husimi density
for a general solution $\psi_\lambda^E$. As before,
whatever type of expansion we use, we find along each of the four half-axes (which are asymptotes to the classical energy curves)
\begin{equation}
{\cal H}_\lambda ^E (z,\overline {z}) \approx
I\, {1\over |\dot z|} \mathop{\rm e}\nolimits^{-2|\delta Z_\perp|^2 /\hbar}
\end{equation}
where the constant $I$, depending on $\lambda$ and on the half-axis we consider,
can be interpreted semi-classically as the invariant intensity
of a flux of particles moving with velocity $\dot z=\overline z$
along this branch of classical curve.
For $E \gg 0$, the flux is separately conserved
along each of the two hyperbola branches: with obvious notations,
\begin{eqnarray}
I_+ &=& I_{-i}\ =\ |K|^2\sqrt{2\pi\hbar}\\
I_- &=& I_{+i}\ =\ |K|^2|\lambda|^2\sqrt{2\pi\hbar} . \nonumber
\end{eqnarray}
When the energy approaches its critical value 0, the intensities become
\begin{eqnarray}
I_+\;&=&\;|K|^2\;\sqrt{2\pi\hbar}\\
I_-\;&=&\;|K|^2\;\sqrt{2\pi\hbar}\;|\lambda|^2\nonumber\\
I_{+i}\;&=&\;{|K|^2\;\sqrt{2\pi\hbar}\over 2\cosh (\pi E/\hbar )} \left\{|\lambda|^2 \mathop{\rm e}\nolimits^{\pi E/\hbar} + 2\Im (\lambda)
+ \mathop{\rm e}\nolimits^{-\pi E /\hbar}\right\}\nonumber\\
I_{-i}\;&=&\;{|K|^2\;\sqrt{2\pi\hbar}\over 2\cosh (\pi E/\hbar )} \left\{|\lambda|^2 \mathop{\rm e}\nolimits^{-\pi E/\hbar} - 2\Im (\lambda)
+ \mathop{\rm e}\nolimits^{\pi E/\hbar}\right\}. \nonumber
\end{eqnarray}
Now the flux is only conserved globally:
it is easy to check that $I_{+i} + I_{-i}=I_+ + I_-$.
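Indeed, the $\pm2\Im(\lambda)$ terms cancel in the sum, leaving
\begin{equation}
I_{+i}+I_{-i}={|K|^2\sqrt{2\pi\hbar}\over2\cosh(\pi E/\hbar)}\,
(1+|\lambda|^2)\bigl(\mathop{\rm e}\nolimits^{\pi E/\hbar}+\mathop{\rm e}\nolimits^{-\pi E/\hbar}\bigr)
=|K|^2\sqrt{2\pi\hbar}\,(1+|\lambda|^2)=I_++I_-\, .
\end{equation}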
\section{Asymptotic study of the zeros}
The asymptotic geometry of the zeros for a general solution can be deduced from
the following general principles.
When a function $f(Z)$ has an asymptotic expansion (within a sector) combining
two exponential behaviours, the function can only vanish
when both exponential factors are of the same order of magnitude.
Thus, the two exponents must have equal real parts: this necessary condition
defines the anti-Stokes lines of the problem in the complex $Z$-plane.
Zeros of the function can then only develop along anti-Stokes lines
(always in the large-$|Z|$ approximation),
and provided both exponentials are present in the given sectorial expansion,
with fixed non-zero prefactors.
\subsection{Energies close to zero - general case}
Here the expansions (\ref{dexp}) have to be used,
and it is simpler to work in the rotated $Z$ variable.
Then the exponential factors read as
$\mathop{\rm e}\nolimits^{+iZ^2/ \,2\hbar}$ and $\mathop{\rm e}\nolimits^{-iZ^2/\, 2\hbar}$,
and the anti-Stokes lines on which they balance each other,
namely the bisecting lines $L_j$ of the quadrants $S_j$,
are simply the real and imaginary $Z$-axes (fig. 2, right).
Now, for each fixed $j$, two independent solutions
$\Phi^{(j)}_\pm$ of eq. (\ref{barrier}) can be specified
by imposing single-exponential asymptotic behaviours along $L_j$, as
$\Phi^{(j)}_\pm \sim f^{(j)}_\pm(Z) \mathop{\rm e}\nolimits^{\pm iZ^2/\, 2\hbar}$.
A general solution $\Psi(Z)$ is then proportional to
$\left[ \Lambda \Phi^{(j)}_+ + \Phi^{(j)}_- \right]$
for some $\Lambda \in \overline \Bbb C$, and satisfies
\begin{equation}
\Psi(Z) \propto
\Lambda f^{(j)}_+(Z) \mathop{\rm e}\nolimits^{+iZ^2/\, 2\hbar}+ f^{(j)}_-(Z) \mathop{\rm e}\nolimits^{-iZ^2/\, 2\hbar},
\qquad Z\to \infty \hbox{ in } S_j ,
\end{equation}
whence the condition $\Psi(Z)=0$ in the sector $S_j$ asymptotically reads as
\begin{equation}
\label{zeq}
Z^2/\hbar \sim (-1)^j 2\pi m + i\log\left[\Lambda \, r^{(j)}(Z)\right], \quad
\mbox{ for } m\to +\infty, \quad r^{(j)}(Z) \equiv -f^{(j)}_+(Z)/f^{(j)}_-(Z).
\end{equation}
This yields an asymptotic sequence of zeros, as:
$Z^{(j)}_m \sim \mathop{\rm e}\nolimits^{i j\pi/2}\sqrt{2\pi m \hbar}$ to leading order;
thereupon, to the order $O(1)$ included,
\begin{equation}
\label{zer}
Z^{(j)\,2}_m/\hbar \sim
(-1)^j 2 \pi m + i \log\ r^{(j)}(\mathop{\rm e}\nolimits^{i j\pi/2}\sqrt{2\pi m \hbar})
+ i \log\Lambda +O(m^{-1}\log m), \qquad m\to +\infty .
\end{equation}
The final square-root extraction is straightforward,
\begin{equation}
Z^{(j)}_m/\sqrt\hbar \sim \mathop{\rm e}\nolimits^{i j\pi/2}\left(\sqrt{2\pi m}
+(-1)^j i \:{\log\ r^{(j)}(\mathop{\rm e}\nolimits^{i j\pi/2}\sqrt{2\pi m \hbar})
+\log \Lambda \over 2 \sqrt{2 \pi m}}
+O\left({\log^2 m\over m^{3/2}}\right) \right)
\end{equation}
so that the simpler form (\ref{zer}) will be preferred for further displays of results.
When the asymptotic analysis concerns a fixed linear combination
given as $[\lambda D_{\nu}(y)+D_{\nu}(-y)]$, $\Lambda$ turns into
a sector-dependent function $\Lambda^{(j)}(\lambda)$, which is
the linear fractional transformation induced by the change of basis
$\{D_\nu(y),D_\nu(-y)\} \longrightarrow \{\Phi^{(j)}_+(Z),\Phi^{(j)}_-(Z)\}$.
The equations for zeros like (\ref{zer}) become singular for the two values
of $\lambda$ which map to $\Lambda^{(j)}=0 \hbox{ or } \infty$,
simply because they yield the pure $\Phi^{(j)}_-$ or $\Phi^{(j)}_+$ solutions
which have no zeros (at least asymptotically) in this sector $S_j$.
\medskip
We now list more explicit results. In the sector $S_0$,
corresponding to $\{0<\arg y<+\pi/2\}$,
\begin{equation}
\lambda D_{\nu}(y)+D_{\nu}(\mathop{\rm e}\nolimits^{-i\pi} y) \sim
(\lambda + \mathop{\rm e}\nolimits^{-i\pi\nu}) y^\nu \mathop{\rm e}\nolimits^{-y^2/4}
+{\sqrt{2\pi} \over \Gamma(-\nu)} y^{-\nu-1}\mathop{\rm e}\nolimits^{+y^2/4} .
\end{equation}
Upon the substitutions $\Lambda^{(0)}(\lambda)r^{(0)}(Z) \equiv
\bigl(\sqrt{2\pi}/\Gamma(-\nu)\bigr) y^{-2\nu-1}/(\lambda + \mathop{\rm e}\nolimits^{-i\pi\nu}),
\ y=\sqrt{2/\hbar}\,\mathop{\rm e}\nolimits^{i\pi/4}Z$ and $\nu=-{1 \over 2}-{iE \over \hbar}$,
the asymptotic equation (\ref{zer}) for zeros becomes
\begin{equation}
\label{zr2}
{Z^{(0)\,2}_m \over \hbar} \sim (2m-1)\pi -{E \over\hbar} \log\, 4m\pi i
-i \,\log { \Gamma({1 \over 2} +i{E \over \hbar}) \over \sqrt{2\pi}}
-i \,\log (\lambda + i \mathop{\rm e}\nolimits^{-\pi E/\hbar}) ,
\qquad -{\pi \over 4}<\arg Z<+{\pi \over 4}.
\end{equation}
A geometrical interpretation will prove useful. Let $C_\pm$ be the two circles
in the $\lambda$-plane respectively specified by the parameters
(cf. eq. (\ref{cpm}) and fig. 3)
\begin{equation}
\label{circ}
\mbox{centers:}\quad c_\pm = \pm i\,\mathop{\rm e}\nolimits^{\pm\pi E/\hbar},
\qquad \mbox{radii:} \quad R_\pm=(1+\mathop{\rm e}\nolimits^{\pm 2\pi E/\hbar})^{1/2}
\end{equation}
(they are both centered on the imaginary axis
and intersect orthogonally at $\lambda=+1$ and $-1$).
Let us also write, for real $t$, $\Gamma({1\over 2} + it)/\sqrt{2\pi}$
in polar form as $(2 \cosh \pi t)^{-1/2} \mathop{\rm e}\nolimits^{i \Theta(t)}$,
defining the phase $ \Theta(t)$ by continuity from $ \Theta(0)=0$.
The formula (\ref{zr2}), and its partners in the other sectors, then become:
\bigskip
\noindent in $S_0=\{-{\pi /4}<\arg Z<+{\pi /4}\}$,
\begin{equation}
\label{z0}
{Z^{(0)\,2}_m \over \hbar} \sim (2m-1)\pi -{E \over\hbar} \log\ 4m\pi
+ \Theta( {E/\hbar} ) -i \,\log { \lambda -c_- \over R_-}
\quad (\lambda \notin \{c_-, \infty\})
\end{equation}
in $S_1=\{+{\pi /4}<\arg Z<+{3\pi /4}\}$,
\begin{equation}
\label{z1}
-{Z^{(1)\,2}_m \over \hbar} \sim (2m-1)\pi +{E \over\hbar} \log\ 4m\pi
- \Theta( {E/\hbar} ) +i \,\log { \lambda^{-1} -c_+ \over R_+}
\quad (\lambda \notin \{0,c_-\})
\end{equation}
in $S_2=\{-{5\pi /4}<\arg Z<-{3\pi /4}\}$,
\begin{equation}
\label{z2}
{Z^{(2)\,2}_m \over \hbar} \sim (2m-1)\pi -{E \over\hbar} \log\ 4m\pi
+ \Theta( {E/\hbar} ) -i \,\log { \lambda^{-1} -c_- \over R_-}
\quad (\lambda \notin \{0,c_+\})
\end{equation}
in $S_3=\{-{3\pi /4}<\arg Z<-{\pi /4}\}$,
\begin{equation}
\label{z3}
-{Z^{(3)\,2}_m \over \hbar} \sim (2m-1)\pi +{E \over\hbar} \log\ 4m\pi
- \Theta( {E/\hbar} ) +i \,\log { \lambda -c_+ \over R_+}
\quad (\lambda \notin \{c_+, \infty\}) .
\end{equation}
For general values of the parameters we cannot extract more precise information
this way, except by going to higher orders (but still only in the asymptotic sense).
As yet we have no information about the position, or even the existence, of small zeros.
Our subsequent strategy will be
to start from very special cases for which the pattern of zeros is well known,
and from there to vary continuously the parameters $E$ and $\lambda$
and keep track of the zeros along these deformations: zeros are topological defects, so they move continuously w.r. to both parameters.
We will then exploit symmetries of eq. (\ref{barrier}),
especially the reality of its solutions (a real function
of a complex variable $t$ is one satisfying $f(\overline t)=\overline{f(t)}$).
Eq. (\ref{barrier}) is a differential equation with real coefficients, hence it
admits real solutions, whose zeros are symmetrical w. r. to the real $Z$-axis.
During a deformation of a real solution, zeros could then only leave (or enter)
the real axis in conjugate pairs,
but at the same time the functions considered here
(solutions of a second-order equation) cannot develop double zeros;
consequently, each zero is bound to stay permanently real (or nonreal)
in the course of a real deformation.
\subsection{Even--odd solutions of zero energy}
We use the $E=0$ expressions eq. (\ref{bessel}) in terms of the Bessel functions
$J_{\pm 1/4}(Z^2/\,2\hbar)$ to view the pattern of zeros more precisely
in this particular case.
We know that, for real $\mu$, $t^{-\mu}J_\mu(t)$ is a real even function
having only real zeros $\pm j_{\mu,m},\ m=1,2,\cdots$, with $ j_{\mu,m}>0$
and $ j_{\mu,m} \sim \pi (m+{\mu\over 2} - {1\over 4})$ for large $m$.
This translates into the $Z$ variable as follows.
For $\lambda =+1$: the even solution (fig. 1, middle right)
\begin{equation}
\label{bessele}
\Psi_{+1}^0(Z)\,=\,K{\left({2\pi^3/\hbar}\right)}^{1/4} Z^{1/2}\; J_{-1/4}\left({Z^2\over 2\hbar}\right)
\end{equation}
(a real function for $K$ real),
is not just even but also invariant under $Z \longrightarrow iZ$;
all its zeros are purely real or imaginary,
the $m$-th positive zero admits the approximation
\begin{equation}
\label{zeven}
Z_m^{(0)}|_{\lambda =+1} \sim \sqrt{\hbar(2m\pi - 3\pi/ 4)},
\end{equation}
and all other zeros follow by the rotational symmetry of order 4.
For $\lambda =-1$: the odd solution (fig. 1, bottom left)
\begin{equation}
\label{besselo}
\Psi_{-1}^0(Z)\,=\,K\,e^{i\pi/ 4}\,{\left({2\pi^3/\hbar}\right)}^{1/4} Z^{1/2}\;J_{1/4}\left({Z^2\over 2\hbar}\right)
\end{equation}
has exactly the same symmetries for its zeros as the even one
(because the auxiliary function $\mathop{\rm e}\nolimits^{-i\pi/ 4}\Psi_{-1}^0(Z)/Z$
has all the symmetries of $\Psi_{+1}^0(Z)$);
besides an obvious zero at the origin,
the $m$-th positive zero of $\Psi_{-1}^0(Z)$ lies approximately at
\begin{equation}
Z_m^{(0)}|_{\lambda =-1} \sim \sqrt{\hbar(2m\pi - \pi/ 4)}.
\end{equation}
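(As a numerical aside not in the original text, these estimates are easy to confirm: the $m$-th positive zero of $J_{1/4}$ can be computed to high accuracy, e.g. with {\tt mpmath}, and compared with the formula above; $\hbar=1$ is an arbitrary sample choice.)
\begin{verbatim}
# Check of the estimate above: Z with J_{1/4}(Z^2/(2 hbar)) = 0 versus
# sqrt(hbar*(2*m*pi - pi/4)).
from mpmath import mp, besseljzero, sqrt, pi

mp.dps = 20
hbar = mp.mpf(1)
for m in range(1, 6):
    Z_exact  = sqrt(2*hbar*besseljzero(mp.mpf('0.25'), m))  # Z^2/(2 hbar) = j_{1/4,m}
    Z_approx = sqrt(hbar*(2*m*pi - pi/4))
    print(m, Z_exact, Z_approx)
\end{verbatim}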
\subsection{Even--odd solutions of non-zero energy}
If we ``switch on'' the energy, keeping $\lambda=+1$, the solutions will not
be Bessel functions any longer, but they still exhibit interesting features.
Equation (\ref{barrier}) has real coefficients,
hence its even and odd solutions are real up to a constant factor,
i.e., up to an adjustment of $\arg K$.
For the even solution $\Psi_{+1}^E(Z)$, the reality condition is
\begin{equation}
\label{Keven}
\Psi_{+1}^E(0)\,=\,K{\left(2\over\pi\right)}^{1/4}(2\hbar)^{iE\over 2\hbar}\;
\Gamma\left({1\over4} + {iE\over 2\hbar}\right) \quad\mbox{real.}
\end{equation}
This even solution can be seen as a real deformation
of the solution (\ref{bessele}).
It maintains the symmetry w.r. to the origin and to the two $Z$ coordinate axes;
only the $\pi/2$ rotation symmetry is lost for $E\ne 0$.
Due to the two mirror symmetries, as explained above,
the only possible motion of the zeros during this deformation is
a ``creeping without crossing'' along the four half-axes,
symmetrically w.r. to the origin.
This can be checked on the $O(1)$ terms of the expansions (\ref{z0}--\ref{z3}),
and on the sequence of plots: fig. 1 (middle right), fig. 6 (right),
fig. 1 (bottom right).
At the same time, this deformation allows us to count the zeros properly
at all energies, by continuity from $E=0$
where eq. (\ref{zeven}) does count the zeros:
in each of the expansions (\ref{z0}--\ref{z3}),
$Z^{(j)}_m$ remains the actual $m$-th zero on the half-axis $L_j$
if the corresponding complex log functions are defined at $\lambda=+1$ as
\begin{equation}
\log (1-c_+) = \log |1-c_+| - i \,\arctan \mathop{\rm e}\nolimits^{+\pi E/\hbar}, \qquad
\log (1-c_-) = \log |1-c_-| + i \,\arctan \mathop{\rm e}\nolimits^{-\pi E/\hbar}
\end{equation}
where the $\arctan$ function has the usual range $(-{\pi/ 2},{\pi/ 2})$.
The same analysis can be performed for the odd real solution $\Psi_{-1}^E(Z)$
of (\ref{barrier}), a deformation of the solution (\ref{besselo}),
for which the reality condition is
\begin{equation}
\label{Kodd}
{d\Psi_{-1}^E\over dZ}(0)\,=\,K{\left({2^7\over\pi}\right)}^{1/4}(2\hbar)^{-{1\over 2}+{iE\over 2\hbar}}\;\;
\Gamma\left({3\over4} + {iE\over 2\hbar}\right)\, \mathop{\rm e}\nolimits^{i\pi/ 4}
\quad\mbox{real.}
\end{equation}
\subsection{Real solutions}
We now consider more general families of real solutions, i.e.,
$\Psi_\lambda ^E(\overline {Z}) =\overline{\Psi_\lambda ^E(Z)},\,\forall Z\in \Bbb C$. These exist only for certain values of $\lambda$, for which we
have to adjust $K$. Since (\ref{barrier}) is a second-order equation with
real coefficients,
it has the real solutions
\begin{equation}
\Psi^E(Z) = \kappa\left( \Psi_{+1}^E(Z)+t \Psi_{-1}^E(Z)\right), \quad\mbox{for}\quad
\kappa \in \Bbb R^\ast,\quad t\in \Bbb R\cup\{\infty\} = \overline{\Bbb R} .
\end{equation}
Under the change of basis
$\{\Psi_{+1}^E(Z),\Psi_{-1}^E(Z)\}\longrightarrow \{D_\nu(y),D_\nu(-y)\}$,
the set $\{t\in\overline{\Bbb R}\}$ is mapped to a circle in the projective
$\lambda$-plane, passing through $\lambda|_{t=0}=+1,\ \lambda|_{t=\infty}=-1$.
The full circle can be determined long-hand
using eqs. (\ref{Keven}--\ref{Kodd}),
but more easily by asking the expansions (\ref{z0}), (\ref{z2}) to yield
asymptotically real zeros as reality demands:
the resulting $\lambda$-circle is $C_-$
(note that $\lambda \in C_- \Longleftrightarrow 1/\lambda \in C_-$).
Let us now analyze the motion of the zeros in the four sectors
as we vary $\lambda$ around $C_-$ anti-clockwise from 1 to $\mathop{\rm e}\nolimits^{2i\pi}$.
We already know that the real zeros cannot leave the real axis by symmetry.
In the expansions (\ref{z0}) and (\ref{z2}) for $S_0$ and $S_2$ respectively,
the only modifications are that $\arg(\lambda - c_-)$ increases by $2\pi$,
and $\arg(\lambda^{-1} - c_-)$ decreases by $2\pi$, inducing
a re-labeling of the large zeros in those two sectors.
Hence each large positive or negative zero creeps to the right
until it reaches the former position of its right neighbour after one cycle.
The small zeros, trapped by reality on a bounded real interval in between,
and unable to cross one another,
can then only follow the same homotopic pattern of behaviour.
In each of the two remaining sectors, by contrast,
the zeros are not confined to the imaginary axis,
but the cycle $C_-$ is homotopically trivial
in the Riemann surface of the relevant logarithmic function.
According to the expansions in $S_1$ (resp. $S_3$),
the large zeros in these sectors
perform a clockwise (resp. anti-clockwise) cycle beginning and ending
at their location on the imaginary axis for $\lambda=1$.
The geometric relation $\overline{\lambda -c_+}=R_+^2(\lambda ^{-1}-c_+)^{-1}$
is the asymptotic remnant of reality: $\{\Psi(Z)=0\Rightarrow \Psi(\overline Z)=0\}$.
This motion in $S_1$ and $S_3$ has thus been shown only for large zeros,
but we may argue a similar behaviour for the smaller ones by homotopy;
moreover, by reality these zeros cannot cross the real axis and
are confined to a bounded region around the origin (contrary to the real zeros,
they cannot migrate to infinity when $\lambda$ keeps revolving around $C_-$); orbits of non-real zeros during the described $\lambda$-cycle are
also symmetrical w.r. to the imaginary axis,
thanks to a $\lambda \leftrightarrow 1/\lambda$ symmetry of the real solutions.
The evolution of all the zeros under the $\lambda$-cycle $C_-$ is globally
depicted for $E=0$ on fig. 4 (left).
Similarly, the solutions corresponding to $\lambda\in C_+$
can be chosen real w.r. to the variable $iZ$ (observing that eq. (\ref{barrier})
is invariant under the change $\{Z\to iZ, E\to -E\}$), and analyzed likewise.
\subsection{Singular values of $\lambda$}
\label{sing}
Each of the expansions (\ref{z0}--\ref{z3}) becomes singular
for two values of $\lambda$ from the set $\{0, \infty, c_-,c_+\}$.
The solutions are then $\psi^E_\pm(z), \psi^{-E}_\pm(iz)$,
corresponding to ``pure $D_\nu$ functions'',
and eqs. (\ref{dexp}) clearly show that $D_\nu(y)$ has large zeros only in two
out of four sectors, namely along the pair of adjacent anti-Stokes lines
$\{\arg y = \pm 3\pi/4\}$. That is why
the zeros in the two other sectors must ``escape to infinity''
as $\lambda$ moves towards one of its special values.
Let, for instance, $\lambda$ decrease from $+1$ (the even solution case)
to the special value $0$ along the interval $[0,1]$.
For simplicity we first restrict ourselves to the case $E=0$,
where the eigenfunctions are combinations of Bessel functions.
The zeros' expansions (\ref{z0}--\ref{z3})
(with $R_\pm |_{E=0}=\sqrt 2$) then become singular in the sectors
$S_1$ and $S_2$, whereas they stay perfectly uniform in the two other sectors.
In $S_2$, eq. (\ref{z2}) holds uniformly as $Z^{(2)\,2}_m /\hbar \to \infty$,
which amounts to $-\log \lambda + 2 \pi m \gg 1$,
and the same conclusion is reached for eq. (\ref{z1}) in $S_1$.
As $\lambda \to +0$, the two formulae merge into a single one,
\begin{equation}
Z_n^2/\hbar \sim i \,\log {\lambda \over \sqrt 2} +(2n-1) \pi
\qquad \mbox{in } S_1\cup S_2 \quad\mbox{for any }n \in \Bbb Z
\end{equation}
where the global counting index $n \in \Bbb Z$ matches with $m$ in the sector
$S_2$ for $n \gg 0$ and with $(-m+1)$ in $S_1$ for $n \ll 0$.
These zeros thus tend to follow the hyperbola branch
$\{2\Re(Z) \Im(Z) = \hbar \log \ \lambda /\sqrt 2,\ \Re(Z)<0\}$,
which itself recedes to infinity as $\lambda \to +0$, as can be seen on
fig. 4 (right) followed by fig. 1 (middle left).
This description can be generalized to the case of a non-vanishing
(but small) energy. The asymptotic condition for zeros in $S_1 \cup S_2$
when $\lambda\to 0$ must now be drawn from eq. (\ref{zeq}) itself and reads as
\begin{equation}
{Z_n^2\over\hbar} \approx (2n-1)\pi
-i\,\log\left({\lambda^{-1}-c_\mp\over R_-}\right)
-{E\over\hbar} \log\left(2 {Z_n^2\over\hbar}\right) + \Theta( {E/\hbar} )
\qquad \mbox{for } n \to \pm \infty .
\end{equation}
For small values of $\lambda$, these zeros tend to follow asymptotically
two half-branches of hyperbolae which differ as $n \to +\infty$ or $-\infty$
(because of the $\Im (\log Z_n^2)$ contribution);
the matching between this set of zeros for small $\lambda$ and the set of zeros
of $\Psi_{+1}^E$ on $\Bbb R_-\cup i \Bbb R_+$ can be done as in the case $E=0$.
\subsection{WKB expansions of zeros at a fixed non-vanishing energy }
We first consider the particular solution $\Psi_+^E(Z)$.
In the semi-classical limit of eqs. (\ref{plus}), $\Psi_+^E(Z)$ can vanish only
within the regions ${\cal S}_0$ and ${\cal S}_3$, and only along anti-Stokes lines,
now defined by the conditions
$\phi(Z_+,Z)/\hbar$ real and $\phi(Z_-,Z)/\hbar$ real, respectively.
From the complete set of anti-Stokes lines
(shown on fig. 5 top left, with $\arg \hbar=0$ henceforth),
the presently relevant ones are:
in ${\cal S}_3$, the imaginary half-axis below $Z_-$; and in ${\cal S}_0$,
the anti-Stokes line from $Z_+$ asymptotic to the positive real axis
(fig. 5, bottom right).
The zeros themselves are given asymptotically by the equations
\begin{eqnarray}
\label{phi}
\phi(Z_+,Z)&=&(+m-1/4)\hbar\pi,\quad m\in\Bbb N,\ m \gg 1 \qquad\mbox{in ${\cal S}_0$}
\nonumber\\
\phi(Z_-,Z)&=&(-m+1/4)\hbar\pi,\quad m\in\Bbb N,\ m \gg 1 \qquad\mbox{in ${\cal S}_3$} .
\end{eqnarray}
Finally, the expansions
\begin{equation}
\phi(Z_+,Z) \sim +{Z^2 / 2}\quad\mbox{when}\quad Z\to +\infty , \qquad
\phi(Z_-,Z) \sim +{Z^2 / 2}\quad\mbox{when}\quad Z\to -i\infty
\end{equation}
restore the former large zero behaviours,
$Z_m^{(j)}\sim \mathop{\rm e}\nolimits^{ij\pi/2}\sqrt{2\hbar m\pi}$ for $j=0,3$ respectively.
We can also see how the zeros in regions ${\cal S}_1$ and ${\cal S}_2$ go to infinity
for a more general eigenfunction $\Psi_\lambda^E(Z)$,
when the parameter $\lambda$ decreases from $1$ to $0$ along $[0,1]$
as in subsection (\ref{sing}), but now for a fixed non-vanishing energy.
We will then be able to compare the results in the two frameworks.
Using the WKB expansions (\ref{pm}) in the different regions of the $Z$-plane,
we study the equation $\Psi_\lambda^E(Z)=0$ in each of them \cite{olv1}.
For $\lambda=0$ we found zeros along only two anti-Stokes lines.
For a general value, there will be zeros in all regions
(and, inasmuch as $\lambda$ varies independently of $\hbar$,
those zeros are not confined near the above anti-Stokes lines).
Moreover, we know from symmetry properties that
for $\lambda=\pm 1$, the zeros can only lie on the real and imaginary axes.
It is also obvious, from the different expansions, that the zeros' pattern
depends on the ratio $\lambda /c_- = \lambda c_+$ (cf. eq. (\ref{cpm})).
In the range $1>\lambda\gg |c_-|$, the differences between the formulae
(\ref{pm}) for regions ${\cal S}'$, ${\cal S}_0$, ${\cal S}_2$ are irrelevant
as far as the position of the zeros is concerned,
so that the first one suffices to localize the zeros in those three regions
by means of the equations
\begin{eqnarray}
\label{tilde}
\Im \,\phi(0,Z) &=& -(\hbar/2) \log\lambda \\
\Re \,\phi(0,Z) &=& \hbar {\pi/ 2} \quad {\rm mod}\ \hbar\pi.\nonumber
\end{eqnarray}
For $\lambda =1$, the zeros are exactly real by symmetry.
When $\lambda$ decreases towards $|c_-|$,
the curve (\ref{tilde}) gets deformed towards $Z_+$,
keeping the real axis as asymptote at both ends.
At the same time, the zeros in ${\cal S}_1 \cup {\cal S}_3$
stay along the imaginary axis.
When $\lambda\approx |c_-|$, there are zeros along all anti-Stokes lines
from $Z_+$ and along the imaginary axis in ${\cal S}_3$.
When $\lambda \ll |c_-|$, the zeros along the anti-Stokes lines in
${\cal S}_0 \cup {\cal S}_3$ stabilize,
whereas the ones in the regions ${\cal S}_1 \cup {\cal S}_2$
lie along the curve (\ref{tilde}), which recedes to infinity when $\lambda\to 0$
as in the previous subsection, keeping $i\Bbb R_+$ and $\Bbb R_-$ as asymptotes.
All these phenomena appear on the sequence of plots in fig. 5.
Once more, we can recover the previous asymptotic large zeros
by expanding the action integrals $\phi(0,Z)$ along the four half-axes.
\section{Conclusion}
The above study should help to better understand more complicated 1-d systems when the eigenenergy is very near a saddle-point value and standard WKB theory fails.
This can be illustrated upon the eigenstates of a quantum Harper model.
The classical Harper Hamiltonian is $H_{\rm H}=-\cos(2\pi P) - \cos(2\pi Q)$
on the torus phase space $(Q \mbox{ mod } 1,P \mbox{ mod } 1)$.
We have quantized it on the Hilbert space of wavefunctions with periodic boundary conditions,
which has a finite dimension $N$ to be identified with $(2 \pi \hbar)^{-1}$,
and taken $N=31$ for calculations.
We have then selected the eigenfunction $\Psi_n$ immediately below the separatrix energy $E=0$; it is an even state having the quantum number $n=14$,
and its Husimi density over the torus is plotted on fig. 6, left
(cf. also \cite{leb:vor} 1990, fig. 2a).
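For the reader's convenience, a minimal numerical sketch of this selection step is given below (it is not the computation behind fig. 6); it assumes the standard position-basis quantization, in which $\cos(2\pi Q)$ is diagonal and $\cos(2\pi P)$ acts as a symmetric nearest-neighbour shift with periodic wrap-around, so the resulting level index may differ from $n=14$ depending on conventions.
\begin{verbatim}
# Minimal sketch: quantized Harper Hamiltonian H = -cos(2 pi P) - cos(2 pi Q)
# on N = 31 sites, assuming the position basis where cos(2 pi Q) is diagonal
# and cos(2 pi P) acts as the symmetric shift (periodic boundary conditions).
import numpy as np

N = 31
k = np.arange(N)
H = -np.diag(np.cos(2*np.pi*k/N))
H -= 0.5*(np.eye(N, k=1) + np.eye(N, k=-1)
          + np.eye(N, k=N-1) + np.eye(N, k=-(N-1)))
E, V = np.linalg.eigh(H)              # eigenvalues in increasing order
n = np.searchsorted(E, 0.0) - 1       # level immediately below E = 0
print(n, E[n])                        # to be compared with n = 14 in the text
\end{verbatim}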
Now, by expanding the Hamiltonian near one saddle-point like $(Q=0, P=-1/2)$
we recover a quadratic-barrier problem. This suggests to compare $\Psi_{14}$
with the even dilator eigenfunction $\Psi_{+1}^E(Z)$
for $E/\hbar \equiv 2 \pi N |E_{14}|/(2 \pi)^2$
and $z \equiv \hbar^{-1/2}\mathop{\rm e}\nolimits^{i\pi/4}(Q-iP)/\sqrt 2 \approx 10\,\mathop{\rm e}\nolimits^{i\pi/4}(Q-iP)$
(see fig. 6 right).
We then observe that not only do the high densities of the two figures
around the saddle-points fit very nicely
(a result to be expected from semiclassical comparison arguments
\cite{taka:hyp,taka}),
but also, more surprisingly, the sequences of zeros of the Harper eigenfunction
match the comparison zeros as well, and not just near the saddle-points but
practically all the way out to the extremal points; at these points the lines
supporting the zeros intersect and the correspondence must fail,
but this happens far beyond its reasonable range of validity anyway.
\section{Introduction}
Interest in the classical solutions of $2+1$-Gravity
\cite{a1}-\cite{a10} has recently revived \cite{a11}-\cite{a13} because of
the discovery of exact moving particle solutions \cite{a12}-\cite{a13}
in a regular gauge of conformal type \cite{a11}. Simplifying features of such a
gauge are the instantaneous propagation ( which makes the ADM decomposition
of space-time explicit and particularly simple ) and the conformal
factor of Liouville type (which can be exactly found at least in the two-body
case).
We have already provided in Ref. \cite{a12}, hereafter referred to as [BCV], the
main results for the case of $N$ moving spinless particles. The
purpose of the present paper is to extend the BCV gauge choice \cite{a11}
to spinning particles and to provide solutions for the metric and the
motion in some particular cases.
Localized spin $s$, in $2+1$ dimensions \cite{a6}, is characterized by the
fact that a Minkowskian frame set up in the neighbourhood of the particle
has a multivalued time, which is shifted by the amount $\delta T = -
s$, when turning around it in a closed loop. A consequence of this
jump ( which is backwards in time for a proper loop orientation ) is that
there are closed timelike curves ( CTC's ) \cite{a14}
around the particles at a distance smaller
than some critical radius $R_0 \sim O(s)$.
This feature suggests that the single-valued time of our gauge, which
is synchronized in a global way, cannot be pushed too close to the particles
themselves. Indeed we shall find that there are ``CTC horizons'' around
the particles of radii $R_i \sim s_i$ which cannot be covered by our gauge
choice \cite{a15}. Nevertheless, we will be able to describe the
motion of the particles themselves on the basis of our ``external
solutions'' to the metric and to the DJH matching conditions.
Technically, the existence of the time shifts mentioned before
modifies the number of ``apparent singularities'' which appear in the
Riemann-Hilbert problem \cite{a16} for the analytic function providing the
mapping to Minkowskian coordinates. Such singularities are not branch points
of the mapping function, but nevertheless appear as poles in its
Schwarzian derivative.
While for $N$ spinless particles there are $2N-1$ singularities ( $N$
for the particles, $1$ at infinity and $N-2$ apparent singularities ),
in the spinning case there are $3N-1$, corresponding to one more apparent
singularity per particle. This means that explicit exact solutions are more
difficult to find.
In the spinless case we found exact solutions for the two-body problem
with any speed ( $3$ singularities ) and for $N$ bodies with small speed.
In the spinning case we find here an exact solution only for the
static ( $N$-body ) case, and we discuss the two-body problem, which
corresponds to five singularities, for the case of small speed only.
The outline of the paper is as follows. In Sec. 2 we recall the general
features of our method in the conformal Coulomb gauge in both first-order
and ADM \cite{a17} formalisms. In particular, we show how the metric
can be found once the mapping function $f(z,t)$ and the meromorphic
function $N(z,t)$ are given. In Sec. $3$ we give an exact solution for
$f$ and $N$, in the
case of spinning particles at rest, characterized by the fact that $N$ has
double poles at the particle sites, with residues proportional to the
spins. We show that such double poles, related to an energy-momentum
density of ${\delta}'$-type, are at the origin of the time shifts, of
the apparent singularities, and of the $CTC$ horizons close to the particles.
In Sec. $4$ we discuss the two-body problem, corresponding to $5$
singularities, at both first-order and second-order in the velocities. The
second-order solution corresponds to the non-relativistic limit and is
of particular interest, even in the spinless case.
Our results and conclusions are summarized in Sec. $5$, and some technical
details are contained in Appendices $A$ and $B$.
\section{General features and gauge choice}
{\bf 2.1 From Minkowskian to single-valued coordinates }
In [BCV] we have proposed a non-perturbative solution for the metric
and the motion of $N$ interacting spinless particles in (2+1)-gravity,
based on the introduction of a new gauge choice which yields an instantaneous
propagation of the gravitational force.
Our gauge choice is better understood in the first-order formalism
which naturally incorporates the flatness property of $(2+1)$ space-time
outside the sources. This feature allows us to choose a global Minkowskian
reference system $X^a \equiv (T, Z, \overline{Z} )$, which however is in
general multivalued, due to the localized curvature at the particle
sources. In order to have well-defined coordinates, cuts should be
introduced along tails departing from each particle, and a Lorentz
transformation should relate the values of $dX^a$'s along the cuts, so
that the line element $ ds^2 = \eta_{ab} dX^a dX^b $ is left
single-valued.
The crucial point of our method is to build a
representation of the $X^a$'s starting from a regular coordinate
system $x^\mu = ( t , z, \overline{z} )$, as follows:
\begin{equation} dX^a = E^a_\mu dx^\mu = E^a_0 dt + E^a_z dz + E^a_{\overline z} d
\overline{z} \end{equation}
Here the dreibein $E^a_\mu$ is multivalued and satisfies the
integrability condition :
\begin{equation} \partial_{[\mu} E^a_{\nu ]} \ = \ 0 \end{equation}
which implies a locally vanishing spin connection, outside the
particle tails \cite{a9}.
Let us choose to work in a Coulomb gauge :
\begin{equation} \partial \cdot E^a = \partial_z E^a_{\overline z} +
\partial_{\overline z} E^a_z = 0 \end{equation}
which, together with the equations of motion (2.2), implies
\begin{equation} \partial_z E^a_{\overline z} = \partial_{\overline z} E^a_z = 0 ,
\end{equation}
so that $ E^a_z ( E^a_{\overline z} ) $ is analytic ( antianalytic ).
Multiplying (2.4) by $E^a_z$ we also get $\partial_{\overline z}
g_{zz} = 0$; we choose to impose the conformal condition $ g_{zz} =
g_{{\overline z}{\overline z}} = 0 $ in order to avoid arbitrary analytic
functions as components of the metric. Hence we can parametrize $E^a_z$,
$E^a_{\overline z}$ in terms of null-vectors:
\begin{equation} E^a_z = N W^a \ , \ \ \ E^a_{\overline z} = {\overline N}
{\widetilde W}^a \end{equation}
where $ W^2 = {\tilde W}^2 = 0 $, and we can assume $N(z,t)$ to be a
single-valued meromorphic function ( with poles at $z = \xi_i$, as we
shall see ).
We have to build $W^a$, ${\tilde W}^a $ in order to represent the DJH \cite{a2}
matching conditions of the $X^a$ coordinates, around the particle
sites $z = \xi_i (t)$:
\begin{equation} (dX^a)_I \rightarrow (dX^a)_{II} = {(L_i)}^a_b (dX^b)_I \ \ \
i = 1, 2, ..., N \end{equation}
where $ L_i = \exp ( i J_a P_i^a ) $ ( $(iJ_a)_{bc} = \epsilon_{abc}$ )
denote the holonomies of the spin connection, which is treated here in
a global way, in order to avoid distributions.
The simplest realization of such $O(2,1)$ monodromies is given by a
spin $\frac{1}{2}$ projective representation:
\begin{equation} f(z,t) \rightarrow \frac{ a_i f(z,t) + b_i}{ b^*_i f(z,t) + a^*_i
},
\ \ \ \ a_i = \cos \frac{m_i}{2} + i \ \gamma_i \ \sin \frac{m_i}{2} , \ \ \
b_i = - i \ \gamma_i \ \overline{V}_i \sin \frac{m_i}{2} \end{equation}
where the mapping function $f(z,t)$ is an analytic function with branch-cuts
at $ z = \xi_i (t) $ and $V_i = P_i/ E_i$ ( $ \gamma_i = {( 1 -
{|V_i|}^2 ) }^{-\frac{1}{2}} $ ) are the constant Minkowskian velocities.
Since the $W$ vectors must transform according to the adjoint representation
of $O(2,1)$, the natural choice, constructed out of the mapping function $f$
defined before, is to set :
\begin{equation} W^a = \frac{1}{f'}( \ f, \ 1, \ f^2 ) \ , \ \ \ \ {\widetilde W}^a =
\frac{1}{\overline f'}( \ {\overline f}, \ {\overline f}^2, \ 1 ) \end{equation}
which gives for the spatial component $g_{z{\overline z}}$ of the
metric tensor the expression
\begin{equation} - 2 g_{z{\overline z}} \ = \ e^{2\phi} \ = \ {|\frac{N}{f'}|}^2 ( 1 - {|
f |}^2 )^2 \end{equation}
in which we recognize the general solution of a Liouville-type
equation \cite{a18}.
We can now integrate (2.1) out of particle 1 :
\begin{equation} X^a = X^a_1 (t) + \int_{\xi_1}^z dz \ N \ W^a (z,t) +
\int_{{\overline \xi}_1}^{\overline z} d {\overline z} \ {\overline N}
{\widetilde W}^a ( {\overline z}, t ) \end{equation}
in terms of the parametrization $X^a_1 (t)$ of one Minkowskian
trajectory, which is left arbitrary.
The $X^a = X^a (x)$ mapping is at this point uniquely determined once
the solution to the monodromy problem (2.7) is found. Since the
coefficients $(a_i, b_i)$ are constants of motion, the monodromy
problem can be recast into a Riemann-Hilbert problem \cite{a16} for an
appropriate second-order differential equation with Fuchsian
singularities, whose solutions are quoted in [BCV] for the spinless case.
For instance, in the two-body case, there are $3$ singularities, which can be
mapped to $\zeta_1 = 0$, $\zeta_2 = 1$, $\zeta_\infty = \infty$,
where $\zeta = \frac{z - \xi_1}{\xi_2 - \xi_1}$, and the mapping
function is the ratio of two hypergeometric functions
\begin{eqnarray}
f(\zeta) & = & \frac{\gamma_{12} {\overline V}_{12}}{\gamma_{12} - 1}
\zeta^{\mu_1} \frac{ {\tilde F}
\left(\frac{1}{2}(1+\mu_\infty+\mu_1-\mu_2),\frac{1}{2}(1-\mu_\infty+\mu_1-\mu_2),
1+\mu_1;\zeta\right)}{ {\tilde F}
\left(\frac{1}{2}(1+\mu_\infty-\mu_1-\mu_2),\frac{1}{2}(1-\mu_\infty-\mu_1-\mu_2),
1-\mu_1;\zeta\right)} \ , \nonumber \\ & & \nonumber \\
{\tilde F}(a,b,c;z) & \equiv & \frac{\Gamma (a) \Gamma (b)}{\Gamma (c)}
F(a,b,c;z) \ , \ \ \ \gamma_{12} \equiv \frac{P_1 P_2}{m_1 m_2}
\end{eqnarray}
whose differences of exponents are $\mu_i = m_i / 2 \pi $ ($i = 1,2$) and $\mu_\infty =
{\cal M} / 2 \pi - 1$, where $\cal M$ is the total mass
\begin{equation} \cos(\frac{\cal M}{2}) = \cos (\frac{m_1}{2}) \cos (\frac{m_2}{2})
\ - \ \frac{P_1 \cdot P_2}{m_1 m_2} \ \sin (\frac{m_1}{2}) \sin
(\frac{m_2}{2}) \end{equation}
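(A trivial numerical illustration, with sample values not taken from the paper: given $m_1$, $m_2$ and $\gamma_{12}=P_1\cdot P_2/m_1 m_2$, the total mass and the exponents entering Eq. (2.11) follow at once.)
\begin{verbatim}
# Sample evaluation of the total mass of Eq. (2.12) and of the exponents
# entering Eq. (2.11), in units 8 pi G = 1.  All input values are illustrative.
from mpmath import mp, cos, sin, acos, pi

mp.dps = 20
m1, m2 = mp.mpf('0.6'), mp.mpf('0.8')    # rest masses (deficit angles)
gamma12 = mp.mpf('1.1')                  # P1.P2/(m1 m2)
cosM2 = cos(m1/2)*cos(m2/2) - gamma12*sin(m1/2)*sin(m2/2)
M = 2*acos(cosM2)                        # total mass
mu1, mu2, muinf = m1/(2*pi), m2/(2*pi), M/(2*pi) - 1
print(M, mu1, mu2, muinf)
\end{verbatim}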
In general, we can set
\begin{equation} f = y_1 / y_2 \ , \ \ \ \ \ \ \ y''_i + q (\zeta) y_i = 0 , \end{equation}
where the potential
\begin{equation} 2 q(\zeta) = \{ f, \zeta \} = {\left( \frac{f''}{f'} \right)}' -
\frac{1}{2} {\left( \frac{f''}{f'} \right)}^2 \end{equation}
is a meromorphic function with double and simple poles at the
singularities $\zeta = \zeta_i$.
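As an elementary orientation (a standard local computation, not spelled out in the text), a pure power behaviour $f \simeq \zeta^{\tilde\mu}$ gives
$$ \frac{f''}{f'} = \frac{\tilde\mu -1}{\zeta} \ , \qquad
\{ f, \zeta \} = -\,\frac{\tilde\mu -1}{\zeta^2} - \frac{{(\tilde\mu -1)}^2}{2 \zeta^2}
= \frac{1-{\tilde\mu}^2}{2 \zeta^2} \ , $$
so that each singular point with difference of exponents $\tilde\mu$ contributes a double pole of strength $(1-{\tilde\mu}^2)/2$ to $\{f,\zeta\}$, i.e. $(1-{\tilde\mu}^2)/4$ to $q$; this is how the double-pole coefficients of the Fuchsian potential are tied to the local exponents throughout the following.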
It turns out that, for more than two particles, ``apparent singularities''
must be added to the differential equation in order to preserve the
constancy of the monodromy matrices as the singularities move, as
first noticed by Fuchs \cite{a19}. Such singularities are zeros of
$f'$, rather
than branch points of $f$, and their position is related to the ones
of the particles in a generally complicated way, determined by the monodromies
. The total number of singularities for $N$ particles is $2N-1$, and
the mapping function was found for $N \ge 3$ in [BCV] in the limiting
case of small velocities.
In order to determine the metric completely, we differentiate (2.10) with respect
to time, and we obtain
\begin{eqnarray} E^a_0 \ & = & \ \partial_t X^a \ = \ \partial_t X^a_1 + \
\partial_t \left(
\int^z_{\xi_1} dz \ N \ W^a + \int^{\overline z}_{{\overline \xi}_1} \
d {\overline z} \ {\overline N} \ {\widetilde W}^a \right) \ = \ \nonumber \\
& = & c^a ( t ) + \
\int^z_{\xi_1} \ dz \ \partial_t (N W^a) +
\int^{\overline z}_{{\overline \xi}_1}
\ d {\overline z} \ \partial_t ( {\overline N} {\widetilde W}^a )
\end{eqnarray}
In terms of the vectors $E_0^a = \partial_t X^a$, $E^a_z = N W^a$,
$E^a_{\overline z} = {\overline N} {\widetilde W}^a $, the components of
the metric are given by :
\begin{eqnarray}
- 2 g_{z{\overline z}} \ & = & \ e^{2\phi} \ = \ {|N|}^2 ( - 2 W \cdot
{\tilde W} ) , \nonumber \\
g_{0z} \ & = & \ \frac{1}{2} {\overline\beta} e^{2\phi} \ = \ N W_a E^a_0 , \ \
g_{0{\overline z}} \ = \ \frac{1}{2} {\beta} e^{2\phi} \ = \ {\overline N}
{\widetilde W}_a E^a_0 \nonumber \\
g_{00} \ & = & \ \alpha^2 - {|\beta|}^2 e^{2\phi} = E^a_0 E^a_0, \ \
\ \ \ \ \alpha = V_a E^a_0
\end{eqnarray}
so that the line element takes the form
\begin{equation} ds^2 \ = \ \alpha^2 dt^2 - e^{2 \phi} {| dz - \beta dt |}^2. \end{equation}
Here we have defined the unit vector
\begin{equation} V^a = \frac{1}{1 - {|f|}^2} ( 1 + {|f|}^2, \ 2{\overline f}, \ 2 f )
\ = \ \epsilon^a_{bc} W^b {\widetilde W}^c {( W \cdot {\widetilde W})}^{-1}
\end{equation}
which represents the normal with respect to the surface $X^a = X^a( t
, z, {\overline z})$, embedded at fixed time in the Minkowskian
space-time $ds^2 = \eta_{ab} dX^a dX^b $. The tangent plane is instead
generated by the vectors
\begin{equation} \partial_z X^a = N W^a , \ \ \ \ \ \ \partial_{\overline z} X^a =
{\overline N} {\widetilde W}^a . \end{equation}
We notice that it is not a priori warranted to have such a well
defined foliation of space-time in terms of surfaces at fixed
time. This probably requires the notion of a universal global time,
which is not valid for universes with closed time-like curves, as it
happens in the case of spinning sources. Hence we can anticipate that
we will have problems in defining our gauge globally for spinning
sources.
{\bf 2.2 The Einstein equations in the ADM formalism }
Quite similarly to what we have discussed now, the starting point of
the ADM formalism is to assume that space-time can be globally
decomposed as $\Sigma (t) \otimes R$, where $\Sigma(t)$ is
a set of space-like surfaces. The
($2+1$)-dimensional metric is then split into ``space'' and ``time''
components:
\begin{equation} g_{00} = \alpha^2 - e^{2\phi} \beta {\overline \beta} , \ \ \
g_{0z} = \frac{1}{2} {\overline\beta} e^{2\phi}, \ \ \ g_{0{\overline z}} = \frac{1}{2}
\beta e^{2\phi} \ \ \ g_{z{\overline z}} = - \frac{1}{2} e^{2\phi} \end{equation}
where $\alpha$ and $\beta$ provide the same parametrization as in Eq. (2.17).
The lapse function $\alpha$ and the shift functions $g_{0i}$ have the
meaning of Lagrange multipliers in the Hamiltonian formalism, since
their conjugate momenta are identically zero.
The ADM space-time splitting can be worked out from the Einstein-Hilbert
action by rewriting the scalar curvature $R^{(3)}$ into its spatial
part $R^{(2)}$, intrinsic to the surfaces $\Sigma (t)$, and an
extrinsic part, coming from the embedding , as follows
\begin{equation} S = - \frac{1}{2} \int \sqrt{|g|} \ R^{(3)} = - \frac{1}{2} \int \sqrt{|g|} \ \left[
R^{(2)} + {( Tr K )}^2 - Tr ( K^2 ) \right] \ d^3 x , \end{equation}
where the equivalence holds up to a boundary term. Here we have introduced
the extrinsic curvature tensor $K_{ij}$ , or second fundamental form
of the surface $\Sigma(t)$, given by :
\begin{equation} K_{ij} = \frac{1}{2} \sqrt{ \frac{|g_{ij}|}{|g|} } \left( \nabla^{(2)}_i g_{0j}
+ \nabla^{(2)}_j g_{0i} - \partial_0 g_{ij} \right) \end{equation}
where we denote by $\nabla^{(2)}_i$ the covariant derivatives with
respect to the spatial part of the metric. The momenta $\Pi^{ij}$,
conjugate to $g_{ij}$ are proportional to $K^{ij} - g^{ij} K$ , which
therefore complete the canonical coordinate system of the Hamiltonian
formalism.
We can generate the ADM decomposition starting from the first-order
formalism foliation $X^a = X^a ( t, z , {\overline z})$. The Coulomb
gauge condition, imposed to fix such a mapping, can be
directly related to the gauge condition of vanishing ``York time'' \cite{a20}
\begin{equation} g_{ij} \Pi^{ij} = K(z, {\overline z}, t) = K_{z{\overline z}} =
\frac{1}{2\alpha} ( \partial_z g_{0{\overline z}} +
\partial_{\overline z} g_{0z} - \partial_0 g_{z{\overline z}} ) = 0
\end{equation}
In fact, by rewriting this combination in terms of the dreibein we
get, by using eq. (2.2):
\begin{equation} E^a_0 \cdot ( \partial_z E^a_{\overline z} + \partial_{\overline
z} E^a_z ) + E^a_{\overline z} \cdot ( \partial_z E^a_0 - \partial_0 E^a_z )
+ E^a_z \cdot ( \partial_{\overline z} E^a_0 - \partial_0 E^a_{\overline z}
) = 0 \end{equation}
We thus see that our gauge choice is defined by the conditions
\begin{equation} g_{zz} = g_{{\overline z}{\overline z}} = K = 0 \end{equation}
and thus corresponds to a conformal gauge, with York time $g_{ij}
\Pi^{ij} = 0 $.
By combining the above conditions, we obtain a new action without time
derivatives, demonstrating that the propagation of the fields
$\alpha$, $\beta$, $\phi$ can be made instantaneous, as appears
from the equations of motion of Ref. \cite{a11}:
\begin{eqnarray}
\nabla^2 \phi + \frac{e^{2\phi}}{\alpha^2} \partial_z {\overline \beta}
\partial_{\overline z} \beta & = & \nabla^2 \phi + N {\overline N}
e^{-2\phi} = - |g| e^{-2{\phi}} T^{00}, \nonumber \\
\partial_{\overline z} \left( \frac{e^{2\phi}}{\alpha} \partial_z
{\overline \beta} \right) & = & \partial_{\overline z} N =
- \frac{1}{2}\alpha^{-1} |g|( T^{0z} - \beta T^{00} ), \nonumber \\
\nabla^2 \alpha - 2 \frac{e^{2\phi}}{\alpha} \partial_z
{\overline \beta} \partial_{\overline z} \beta & = & \alpha^{-1} |g|
( T^{z{\overline z}} - \beta T^{0{\overline z}} - {\overline \beta} T^{0z} +
\beta {\overline \beta} T^{00}).\end{eqnarray}
We understand from Eq. (2.26) that the sources of the meromorphic $N$
function are given by a combination of $\delta$-functions. For two
particles, this leads to the solution of [BCV],
\begin{equation} N = \frac{C {(\xi_{21})}^{-1 -\frac{\cal M}{2\pi}} }{\zeta ( 1 - \zeta )}
\end{equation}
which shows simple poles at $z = \xi_1$ and $z= \xi_2$, and no pole at $\zeta =
\infty$. The $N$ function so determined provides also a feedback in
the first of Eqs. (2.26),
in which it modifies the sources of the ``Liouville field''
$e^{2{\widetilde\phi}} = e^{2\phi} / {|N|}^2 $, which is determined by the
mapping function $f$ in Eq. (2.9).
The $t$-dependence of the trajectories is now provided by the covariant
conservation of the energy-momentum tensor, which in turn implies the
geodesic equations
\begin{equation} \frac{d^2 \xi_i^\mu}{ds^2_i} + {(\Gamma^\mu_{\alpha\beta})}_i
\ \frac{d\xi^\alpha_i}{ds_i} \ \frac{d\xi^\beta_i}{ds_i} = 0 \ \ \ i = 1,
..., N \end{equation}
Remarkably, we can completely solve these geodesic equations in the
first-order formalism, by measuring the distance between two particles
in the $X^a$ coordinates:
\begin{eqnarray}
X^a_2 (t) - X^a_1 (t) & = & B^a_2 - B^a_1 + V^a_2 T_2 - V^a_1 T_1 =
\nonumber \\
& = &\ \int^{\xi_2}_{\xi_1} \ dz \ N \ W^a ( z, t) +
\int^{{\overline\xi}_2}_{{\overline\xi}_1} \ d {\overline z}
{\overline N} \ {\widetilde W}^a ( {\overline z}, t). \end{eqnarray}
The explicit solution is obtained by inverting these relations to
obtain the trajectories $\xi_i(t)$ as functions of the
constants of motion $B_i^a$, $V^a_i$.
{\bf 2.3 Spinning particle metric }
The metric of a spinning particle at rest \cite{a6} is related to the large
distance behaviour of the one of two moving particles which carry
orbital angular momentum $J$. In the latter case,from the two-body solution
\cite{a12}, we find that the Minkowskian time $T$ at large distance from
the sources changes by
\begin{equation} \Delta T = - 8 \pi G J = - J \ \ \ \ \ \ \ ( 8 \pi G = 1 ) \end{equation}
when the angular variable $\theta$ changes by $2\pi$, so that we must identify
times which differ by $8 \pi G J $ to preserve single-valuedness of the $X^a$
mapping.
Analogously, it was realized long ago \cite{a6} that $T$ has a shift proportional to
the spin $s$ , when turning around a spinning source, while the spatial
component $Z$ rotates by the deficit angle $m$. By the transformation
\begin{equation} T = t - \frac{s}{2\pi} \theta , \ \ \
\ \ \ Z = \frac{w^{1-\mu}}{1-\mu} , \ \ \ \ \mu = \frac{m}{2\pi} , \end{equation}
the Minkowskian metric becomes the one of Ref. \cite{a6}:
\begin{equation} ds^2 = {\left( dt - \frac{s}{2\pi} d \theta \right)}^2 - {|w|}^{-2\mu} dw
d{\overline w} \end{equation}
which, however, is not conformal ( $ g_{zz} \neq 0 $ ).
Hence, in order to switch to a conformal type gauge, we must modify the $Z$
mapping of Eq. (2.31) by allowing another term, dependent on $\overline z$ ,
which preserves the polydromy properties of the first term. If we set
\begin{equation} Z = \frac{1}{1-\mu} \left( z^{1-\mu} + A^2 \ {\overline z}^{\mu -
1} \right) + B \end{equation}
(where $B$ is an additional arbitrary constant), we can get a
cancellation of the $g_{zz}$ term in the metric \cite{a13}:
\begin{equation} ds^2 = {\left( dt - \frac{s}{4\pi} \left( \frac{dz}{iz} -
\frac{d{\overline z}}{i {\overline z}} \right) \right)}^2 -
{| z^{-\mu} dz - A^2 {\overline z}^{\mu-2} d{\overline z} |}^2 \end{equation}
by choosing $A \ = \ - s / 4\pi$.
The one-particle solution now looks like:
\begin{equation} ds^2 = dt^2 + \frac{s}{2\pi} ( dt \ i \ \frac{dz}{z} + h.c. ) -
e^{2\phi} dz d {\overline z} \end{equation}
where the conformal factor $e^{2\phi}$ is given by:
\begin{equation} e^{2\phi} = {|z|}^{-2\mu} {( 1 - A^2 {|z|}^{2(\mu-1)} )}^2 =
{|\frac{N}{
f'}|}^2
{( 1 - {|f|}^2)}^2 \end{equation}
In our general solution of Eqs. (2.8)-(2.10) this expression implies the
following choice for the analytic functions $N(z)$, $f(z)$:
\begin{eqnarray}
N(z) & = & - \frac{i A ( \mu - 1) }{z^2} \ , \ \ \ \mu \equiv \frac{m}{2\pi}
\nonumber \\ f(z) & = & - i A z^{\mu -1}
\end{eqnarray}
Let us first note that $f$ has a singular behaviour for $z\rightarrow 0$,
compared to the one of Eq. (2.11), and therefore takes large values for small $z$.
This implies that in $z$ coordinates, there is a ``horizon'' surrounding the
spinning particle which corresponds to $|f| = 1$, and thus to a vanishing
determinant $\sqrt{|g|} \simeq \alpha e^{2\phi} = 0 $ in
Eq. (2.36). It is therefore given by the circle
\begin{equation} {|z|}^2 = r^2_0 = A^{\displaystyle{\frac{2}{1-\mu}}}, \end{equation}
or, in $Z$ coordinates, by the circle
\begin{equation} Z_0 = \frac{2A}{1-\mu} e^{i\theta (1-\mu)} \Rightarrow |Z_0| =
R_0 = \frac{s}{2\pi (1-\mu)}.\end{equation}
The meaning of this ``horizon'' is that values of $Z$ with $|Z| < R_0$ cannot
be obtained from the parametrization (2.33) for any values of
$z$. This in
turn is related to the fact that, due to the symmetry $ z \rightarrow
A^{\frac{2}{1-\mu}} \ / \ {\overline z} $, the inverse of the mapping
(2.33) is not single-valued, and in fact we shall choose the
determination of $z$ such that $z \simeq Z^{ \ \frac{1}{1-\mu}}$ for $Z
\rightarrow \infty$.
The fact that our gauge is unable to describe the internal region $|Z|
< R_0$ is related to the existence, in that region, of CTC's
\cite{a14}, which do
not allow a global time choice. Indeed, closed time-like curves can be
built when the negative time-jump $\Delta T = - s$ cannot be
compensated by the time taken by a light signal to circle the
particle at distance $R$, which is given by $T_{travel} = 2\pi ( 1 - \mu ) R$,
thus implying
\begin{equation} R < \frac{s}{2\pi (1-\mu)} = R_0 , \end{equation}
i.e., the same critical radius as in Eq. (2.39). For this reason we
shall call the sort of horizon just found a ``CTC horizon''.
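(A numerical illustration with sample values, not drawn from the paper, of the horizon radii in the two coordinate systems:)
\begin{verbatim}
# CTC-horizon radius of a single spinning particle, cf. Eqs. (2.38)-(2.39),
# in units 8 pi G = 1.  The spin s and mass m below are sample values.
from math import pi

s, m = 0.2, 1.5
mu = m/(2*pi)
A  = s/(4*pi)               # |A|, cf. the choice A = -s/(4 pi) below Eq. (2.34)
r0 = A**(1/(1 - mu))        # horizon radius in the conformal coordinate z
R0 = s/(2*pi*(1 - mu))      # horizon radius in the Minkowskian coordinate Z
print(mu, r0, R0)
\end{verbatim}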
Secondly, we note that the ratio $ N /{f'}$ is similar to the
spinless case, but the behaviour of $N(z)$ and $f'(z)$ separately is
more singular, i.e.
\begin{equation} N \simeq \frac{\sigma}{z^2} , \ \ \ f' \simeq z^{\mu-2}. \end{equation}
This is to be expected because the source for $N(z)$ in the equation
of motion (2.26) is a more singular distribution, i.e.
\begin{equation} \partial_{\overline z} N(z) \propto \frac{s}{2\pi} \delta' (r)
\end{equation}
in order to allow for a localized angular momentum. The same
$z^{-2}$ behaviour for $N(z)$ can be found in the large
distance limit of the two-body problem.
This more singular behaviour of the metric is an obstacle to define
the $X^a$ coordinates in the vicinity of the particle at
rest. Nevertheless the particle site can be unambiguously obtained by looking
at the center of rotation of the DJH matching conditions \cite{a2} arising
from (2.33), when turning around $z=0$ in the region outside the CTC
horizon, i.e.,
\begin{equation} Z - B \rightarrow e^{-2i \pi \mu } ( Z - B ). \end{equation}
In the following, we shall use this procedure in order to identify the
value of the $Z$ coordinate at the particle site (cf. Appendix A).
\section{Spinning particles at rest}
The function $N(z,t)$ plays an important role in the following
discussion because
its polar structure determines the time shift around each particle,
and from this we can get information about the apparent singularities
which appear in the spinning case.
Let us recall the form (2.27) of $N(z)$ for the two-body problem in
the spinless case:
\begin{equation} N(z,t) = - \frac{R(\xi(t))}{(z-\xi_1)(z-\xi_2)} = \frac{R(\xi)}{\xi^2}
\frac{1}{\zeta (1- \zeta)} \end{equation}
where $\xi \equiv \xi_{21} = \xi_2 - \xi_1 $ is the interparticle
coordinate, and $R(\xi)$ was determined \cite{a12} to be
\begin{equation} R(\xi) = C \xi(t)^{1-\frac{\cal M}{2\pi}} \end{equation}
The imaginary part of this coefficient is related to the asymptotic time shift by
\begin{equation} \Delta T \simeq - R \int \frac{dz}{z^2} \frac{f}{f'} + (h.c.) =
- \frac{4\pi}{1 - \frac{\cal M}{2\pi}} Im R = - J \end{equation}
where $\cal M$ is the total mass of Eq. (2.12).
Therefore, by Eq. (2.30) $R$ is determined in terms of the total
angular momentum of the system, which for the spinless case is purely
orbital ($J = L$), and given by \cite{a12}
\begin{equation} 2 \gamma_{12} |V_2 - V_1| \ B_{21} \ \frac{\sin\pi\mu_1 \sin\pi\mu_2}{\sin
\frac{\cal M}{2} } = L = \frac{4\pi}{1-\frac{\cal M}{2\pi}} \ Im (R) \end{equation}
In the spinning case, we can assume that at large distances $N(z,t)$ has the
same $z^{-2}$ behaviour as in the spinless case. However, around each particle
$N$ should have double poles, as found in Eqs. (2.37) and (2.41). We take
therefore the ansatz
\begin{equation} N(z,t) = \frac{R(\xi )}{\xi^2} \left( \frac{1 - \sigma_1 - \sigma_2}{
\zeta ( 1 - \zeta )} - \frac{\sigma_1}{\zeta^2} - \frac{\sigma_2}{{(\zeta -1
)}^2} \right) , \end{equation}
where $R\sigma_i$ are the double pole residues.
As a consequence of the double poles, logarithmic contributions to $T$ appear
around each particle, which give rise to a time shift proportional to each
spin $s_i$:
\begin{equation} \Delta T_i = \oint_{C_i} \ dz \ \frac{Nf}{f'} + (h.c) = - 4 \pi \frac{
Im ( R \sigma_i)}{1 - \mu_i} = - s_i , \ \ \ \ \ \
\mu_i \equiv \frac{m_i}{2\pi} \end{equation}
where we have used the power behaviour $f|_i \simeq z^{\mu_i-1}$.
Eqs. (3.6) and (3.3) determine the values of the $\sigma_i$'s in terms of spins
and angular momentum:
\begin{equation} \sigma_i = \frac{(1- \mu_i)}{(1 - \frac{\cal M}{2\pi}) }
\frac{s_i}{J} \end{equation}
Furthermore, eq. (3.5) can be rewritten so as to show the presence of two
zeros of $N$, at $\zeta = \eta_1$, $\zeta =\eta_2$, i.e.,
\begin{equation} N(z,t) = - \frac{R(\xi)}{\xi^2} \frac{(\zeta-\eta_1)(\zeta-\eta_2)}{
\zeta^2 {(\zeta-1)}^2 } \end{equation}
with
\begin{equation} \eta_{1,2} = \frac{1}{2} \left( 1 +\sigma_1 -\sigma_2 \pm \sqrt{ 1 - 2 (
\sigma_1 + \sigma_2 ) + {(\sigma_1 - \sigma_2)}^2 } \right) \end{equation}
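(For illustration, with sample values not taken from the paper, the residues $\sigma_i$ and the apparent singularities $\eta_{1,2}$ can be evaluated directly from Eqs. (3.7) and (3.9); note that the $\eta$'s may come out complex.)
\begin{verbatim}
# Sample evaluation of sigma_i (Eq. (3.7), static case J = s1 + s2, M = m1 + m2)
# and of the apparent singularities eta_{1,2} (Eq. (3.9)).  Inputs are
# illustrative; the eta's are in general complex.
from math import pi
from cmath import sqrt

m1, m2 = 0.5, 0.7                 # deficit angles
s1, s2 = 0.05, 0.10               # spins
mu1, mu2 = m1/(2*pi), m2/(2*pi)
muM = (m1 + m2)/(2*pi)            # M/(2 pi), static total mass
sig1 = (1 - mu1)/(1 - muM)*s1/(s1 + s2)
sig2 = (1 - mu2)/(1 - muM)*s2/(s1 + s2)
root = sqrt(1 - 2*(sig1 + sig2) + (sig1 - sig2)**2)
eta1 = 0.5*(1 + sig1 - sig2 + root)
eta2 = 0.5*(1 + sig1 - sig2 - root)
print(eta1, eta2)
print(abs(eta1*eta2 - sig1), abs((1 - eta1)*(1 - eta2) - sig2))  # both ~ 0
\end{verbatim}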
Such zeros turn out to be the apparent singularities of our system. In fact,
in order to avoid zeros of the metric determinant we have to cancel them by
having $f'$ vanish at $\zeta = \eta_i$ too. Therefore, even if $f$
is analytic around $\zeta = \eta_i$, its Schwarzian derivative has
extra double poles with differences of indices ${\tilde \mu} = 2$,
corresponding to the general parametrization
\begin{eqnarray}
\{ f, z \} & = & \frac{1}{2} \frac{ \mu_1 ( 2 - \mu_1 )}{\zeta^2} + \frac{1}{2}
\frac{ \mu_2 ( 2 - \mu_2 )}{{(\zeta - 1)}^2} - \frac{3}{2} \frac{1}{ {(\zeta -
\eta_1)}^2} - \frac{3}{2} \frac{1}{ {(\zeta - \eta_2)}^2} + \nonumber \\
& + & \frac{\beta_1}{\zeta} + \frac{\beta_2}{\zeta -1} +
\frac{\gamma_1}{\zeta-\eta_1} + \frac{\gamma_2}{\zeta-\eta_2} \end{eqnarray}
with the ``accessory parameters'' $\beta's$ and $\gamma's$ so far undetermined.
{\bf 3.1 The two-body case }
There is no known general solution to the Fuchsian problem of Eq. (3.10),
since it contains five singularities. However, in a few limiting
cases a particular solution can be obtained. For example, in the static
two-body case under consideration, the
form of $f'$ is determined by its zeros, and by the known behaviour
around $\zeta = 0$ and $\zeta = 1 $, as follows :
\begin{equation} f' = - \frac{K}{\xi} (\zeta-\eta_1)(\zeta-\eta_2)
\zeta^{\mu_1-2} {(1-\zeta)}^{\mu_2-2}. \end{equation}
Furthermore, it should be integrable to a function $f$ with static
monodromy matrix, and behaviour $f \simeq \zeta^{\mu_1 + \mu_2 - 1}$
at $\zeta = \infty$, of the form:
\begin{equation} f = \frac{K}{\mu_1+\mu_2-1} \zeta^{\mu_1-1} {(1-\zeta)}^{\mu_2-1}
(\zeta-\tau). \end{equation}
The consistency of Eqs. (3.11) and (3.12) gives a constraint on the
possible values of $\eta_1$, $\eta_2$:
\begin{equation} \eta_1 \eta_2 = \sigma_1 = \frac{(1-\mu_1) \tau}{1-\mu_1-\mu_2} \
, \ \ \ \ ( 1 - \eta_1 ) ( 1 - \eta_2 ) = \sigma_2 =
\frac{(1-\mu_2)(1-\tau)}{1-\mu_1-\mu_2} \end{equation}
which, by Eq. (3.7), is satisfied if the total angular momentum is
simply the sum of the two spins $s_i$, i.e. $ J = s_1 + s_2 =
S$, or
\begin{equation} \frac{\sigma_1}{1-\mu_1} + \frac{\sigma_2}{1-\mu_2} = \frac{1}{
1-\mu_1-\mu_2} , \end{equation}
as expected in the static case. This condition also determines the
value of
\begin{equation} R = i \frac{S}{4\pi} \left( 1 - \mu_1 - \mu_2 \right) \end{equation}
In order to determine the constant $K$ in Eq. (3.12) we must use the
analog of Eq. (2.29) which defines the Minkowskian interparticle distance
\begin{eqnarray}
& B_{21} & = Z_2 - Z_1 = \int^2_1 \ dz \ \frac{N}{f'} +
\int^2_1 \ d{\overline z} \
\frac{{\overline N}{\overline f}^2}{\overline f'} =
\left( \frac{R}{K} \int^1_0 \ d\zeta \ \zeta^{-\mu_1} {( 1 - \zeta
)}^{-\mu_2} + \right. \nonumber \\
& + & \left. \frac{{\overline R} {\overline K}}{{(\mu_1 + \mu_2 -1)}^2}
\int^1_0 \ d\zeta \
\zeta^{\mu_1 -2} {(1-\zeta)}^{\mu_2 -2} {(\zeta-\tau)}^2\right)
\end{eqnarray}
We can see that the second integral is not well defined in the
physical range
\begin{equation} 0 < \mu_i < 1 \ \ , \ \ \ \ \ \ 0 < \mu_1 + \mu_2 < 1 \end{equation}
for which, as shown in [BCV], there are no CTC's at large distances. This
fact reflects the existence of $CTC$ horizons close to the particles,
(cfr. Sec. (2.3)), in which the mapping to Minkowskian coordinates is
not well defined.
The rigorous way of overcoming this problem is to solve for the DJH
matching conditions of type (2.43) outside the $CTC$
horizons, thus defining $B_{21}$
as the relevant translational parameter, as explained in Appendix A.
Here we just notice that the integral in question can be defined by
analytic continuation from the region $ 2 > \mu_i > 1$ to the region
(3.17), so as to yield, by Eqs. (3.15) purely imaginary values for $K$
and $R$, with
\begin{eqnarray}
& \ & \frac{1}{|K|} B ( 1-\mu_1, 1-\mu_2 ) - \frac{|K|
B(\mu_1 , \mu_2 )}{{(\mu_1+\mu_2-1)}^2}
\left( 1 - \tau \frac{1-\mu_1-\mu_2}{1-\mu_1} -
(1-\tau) \frac{1-\mu_1-\mu_2}{1-\mu_2} + \right. \nonumber \\
& + & \tau ( 1 - \tau ) \left.
\frac{(1-\mu_1-\mu_2)(2-\mu_1-\mu_2)}{(1-\mu_1)(1-\mu_2)}
\right) = \frac{4\pi \ B_{21}}{(1-\mu_1 -\mu_2) S}
\end{eqnarray}
where $S = J = s_1 + s_2$ denotes the total spin. In this equation the smaller
branch of $K$ should be chosen for the solution to satisfy $|f|<1$
closer to the particles.
In particular, if $S / B_{12} \ll 1$, the acceptable branch of
the normalization $K$ becomes small and of the same order. In general,
however $K$ is not a small parameter, and thus $f$ is not small,
unlike the spinless case in which $f$ is of the order of
$ L / B_{12}$ and is thus infinitesimal in the static limit.
Whatever the value of $K$, for $z$ sufficiently close to the
particles, the critical value $|f|= 1$ is reached, because of the
singular behaviour of Eq. (3.12) for $\mu_i < 1$. Therefore we have two
horizons, one encircling each particle, which may degenerate into a single one
encircling both for sufficiently large values of $S/B_{12}$.
Having found an explicit solution for $f$, its Schwarzian derivative
is easily computed from Eq. (2.14). By using the notation
\begin{eqnarray} {\tilde \mu}_i & = & \mu_i - 1 \ \ \ ( i = 1,2 ) \ , \ \
\ \ {\tilde \mu_3} = {\tilde \mu_4} = 2 \nonumber \\
\eta_1 & = & \zeta_3 , \ \ \eta_2 = \zeta_4 , \ \ \ \ \ \ \gamma_1 =
\beta_3 \ , \ \ \ \gamma_2 = \beta_4 \end{eqnarray}
we find that the residues $\beta_i$ at the simple poles of Eq. (3.10)
(called the accessory parameters) take the quasi-static form of [BCV], i.e.
\begin{equation} \beta_i = - ( 1 - {\tilde \mu}_i ) \sum_{j \neq i} \frac{(1 -
{\tilde \mu}_j ) }{\zeta_i - \zeta_j} \ \ , \ \ \ \ \ \ ( i, j = 1, ...., 4 ). \end{equation}
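(A small numerical illustration, with arbitrary sample positions for the apparent singularities rather than the values fixed by Eq. (3.9): the quasi-static residues of Eq. (3.20) sum to zero, as needed for the Schwarzian to have no $1/\zeta$ term at infinity.)
\begin{verbatim}
# Quasi-static accessory parameters, Eq. (3.20), with the notation (3.19).
# The positions eta_1, eta_2 below are arbitrary sample values (in the text
# they are fixed by Eq. (3.9)); the masses are illustrative.
from math import pi

m1, m2 = 0.5, 0.7
zeta = [0.0, 1.0, 0.4 + 0.3j, 0.4 - 0.3j]         # zeta_1, zeta_2, eta_1, eta_2
tmu  = [m1/(2*pi) - 1, m2/(2*pi) - 1, 2.0, 2.0]   # tilde-mu_i, cf. Eq. (3.19)
beta = [-(1 - tmu[i])*sum((1 - tmu[j])/(zeta[i] - zeta[j])
                          for j in range(4) if j != i) for i in range(4)]
print(beta)
print(abs(sum(beta)))    # vanishes: no 1/zeta term in the Schwarzian at infinity
\end{verbatim}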
{\bf 3.2 The static metric}
From the two-body static solution for $f$ and $N$ we can build the
static dreibein of Eqs. (2.5) and (2.15), which has the components:
\begin{eqnarray}
& E_0^a & = ( 1 , 0 , 0 ) \nonumber \\
& E^a_z & = N W^a = \frac{R }{\xi (\mu_1+\mu_2-1)} \left(
\frac{\zeta-\tau}{\zeta (1-\zeta)} , \frac{\mu_1+\mu_2-1}{K_0}
\zeta^{-\mu_1}{(1-\zeta)}^{-\mu_2} , \right. \nonumber \\
&& \frac{K_0}{(\mu_1+\mu_2-1)} \left.
\zeta^{\mu_1-2} {(1-\zeta)}^{\mu_2-2} {(\zeta-\tau)}^2 \right)
\end{eqnarray}
The corresponding metric has the form
\begin{eqnarray}
g_{00} &=& 1 \nonumber \\
g_{0z} &=& \frac{Nf}{f'} = \frac{R}{\xi (\mu_1+\mu_2-1)}
\frac{\zeta-\tau}{\zeta(1-\zeta)} \nonumber \\
-2 g_{z\overline{z}} & = & e^{2\phi} = \frac{R^2}{\xi^2 |K_0|^2}
{(\zeta{\overline\zeta})}^{-\mu_1}
{((1-\zeta)(1-{\overline\zeta}))}^{-\mu_2} \cdot \nonumber \\
&\cdot&
{\left[ 1 - \frac{K^2_0 {(\zeta\overline\zeta)}^{\mu_1-1}}{{(\mu_1+\mu_2-1)}^2}
{((1-\zeta)(1-\overline\zeta))}^{\mu_2-1}
(\zeta-\tau)(\overline\zeta-\overline\tau) \right]}^2
\end{eqnarray}
and is degenerate whenever $e^{2\phi} = 0$, revealing explicitly the
presence of a singularity line on which the determinant of the metric
is vanishing.
Let us remark that the zeros of $f'$, the apparent singularities, are
geometrically saddle points for the modulus ${|f|}^2$, which instead diverges
on the particle sites. It is easy to realize that in the range $ S/
B_{12} \ll 1$, where $K_0 \simeq S / B_{12}$, the curve $|f|=1$ defines two
distinct horizons, one for each particle.
In the complementary range $ S \ge B_{12} $, with $K_0$ satisfying (3.18)
in its generality, the curve $|f|=1$ defines a line surrounding both
particles.
As a consequence, we can distinguish the two particles and set up the
scattering problem only in the case where $S/B_{12}$ is at most
of order $O(1)$. This restriction is physically motivated by the
presence of closed timelike curves, which make it impossible to reduce
the impact parameter without running into causality problems.
{\bf 3.3 The N-body static case }
In the general static case, with $N$ bodies, we can also provide a
solution for the mapping function by algebraic methods, following the
pattern described above.
Firstly, the meromorphic $N$ function, having simple and double poles
at $\zeta = \zeta_i$ can be parametrized as
\begin{equation} N = R \left( - \sum^N_{i=1} \frac{\sigma_i}{{(\zeta-\zeta_i)}^2}
+ \sum^N_{i=1} \frac{\nu_i}{\zeta-\zeta_i} \right) = R \frac{
\prod^{2N-2}_{j=1} ( \zeta-\eta_j ) }{ \prod^N_{i=1}
{(\zeta-\zeta_i)}^2} \end{equation}
with the following conditions
\begin{eqnarray}
\sum_i \nu_i & = & 0 \ , \ \ \ \ ( N \sim - R z^{-2} \ {\rm for } \ z
\rightarrow \infty ) \nonumber \\
\sum_i \sigma_i & - & \sum_i \zeta_i \nu_i = 1 \ , \ \ \ \
( \ {\rm normalization \ of } \ R \ ). \end{eqnarray}
Therefore, there are $2N-2$ zeros ( or apparent singularities ) given
in terms of the $\sigma$'s and of $N-2$ $\nu$-type
parameters. Furthermore, the $\sigma$'s are given by the time shifts
in terms of $s_i / J$ as in Eq. (3.7).
Secondly, $f'$ shows the same $2N-2$ zeros at $\zeta = \eta_j$ in the form
\begin{equation} \frac{df}{dz} = \frac{1}{\xi} f'(\zeta) = \frac{K}{\xi}
\ \prod^{N}_{i=1} \ {( \zeta - \zeta_i )}^{\mu_i -2}
\prod^{2N-2}_{j=1} \ {(\zeta - \eta_j)} \end{equation}
while the mapping function, having static monodromy and behaviour
$\zeta^{\sum_i \mu_i - 1}$ at $\zeta = \infty$, has only $(N-1)$ zeros
with the form
\begin{equation} f = \frac{K}{\sum_i \mu_i - 1} \ \prod^N_{i=1} {(\zeta-
\zeta_i)}^{\mu_i-1} \ \prod^{N-1}_{k=1} (\zeta - \tau_k) . \end{equation}
At this point, the integrability condition, namely that (3.25) is just the
$z$-derivative of (3.26), provides $2N-2$ conditions for a total of
$N-2 + N-1 = 2N-3$ parameters. Therefore, all the $\eta$ parameters
are determined as functions of the $\sigma$'s, and there is one extra
condition among the $\sigma$'s, namely that
\begin{equation} \sum^N_{i=1} \ \frac{\sigma_i}{1-\mu_i} = \frac{1}{1- \sum_i
\mu_i} \end{equation}
which is verified, by Eq. (3.7) because $\sum_i s_i = S = J$ in the
static case.
Finally, the normalization $K$ and the $N-2$ ``shape parameters''
$\zeta_j = \frac{\xi_{j1}}{\xi_{21}}$ are determined from the $N-1$
``equations of motion''
\begin{equation} B_j - B_1 = \int^j_1 \ \frac{N}{f'} \ dz \ + \ \int^j_1 \
\frac{{\overline N}{\overline f}^2}{\overline f'} , \end{equation}
similarly to the two-body case.
We conclude that the static $N$-body case with spin provides a
solvable example of non-vanishing mapping function with static
monodromies and total mass ${\cal M} = \sum_{i=1}^N m_i$, having a
Schwarzian with a total of $3N-1$ singularities.
\section{Spinning particles in slow motion}
For two moving spinning particles, the Fuchsian Riemann-Hilbert
problem for the mapping function is in principle well defined by
Eqs. (2.13), (2.14) and (3.10). Indeed, the location of the apparent
singularities is fixed in general by Eqs. (3.5), (3.8) and (3.9) and
it is possible to see that all accessory parameters in the Schwarzian
of Eq. (3.10) are also fixed in terms of the invariant mass ${\cal M}$
of Eq. (2.12) and of the spins.
The fact that the potential of the Fuchsian problem is determined
follows from some general conditions that the accessory parameters
should satisfy, which were described in [BCV], and are the following.
Firstly, the point at $\zeta = \infty$ is regular, with difference of
exponents given by $\mu_\infty = \frac{\cal M}{2\pi} - 1$. This yields
two conditions:
\begin{eqnarray}
& \ & \sum^2_{i=1} \ ( \beta_i + \gamma_i ) \ = \ 0 , \nonumber \\
& \ & \sum^2_{i=1} \ \mu_i ( 2 - \mu_i ) - 6 + 2 \sum^2_{i=1} \ ( \beta_i
\zeta_i + \gamma_i \eta_i ) = 1 - \mu^2_\infty .
\end{eqnarray}
Secondly, there is no logarithmic behaviour \cite{a16} of the solutions $y_i$
at the apparent singularities $\eta_j$. This yields two more
conditions:
\begin{eqnarray}
- \gamma^2_1 \ = \ \sum^2_{j=1} \frac{\mu_j (2-\mu_j)}{{(\eta_1 - \zeta_j)}^2}
- \frac{3}{{(\eta_1 - \eta_2)}^2} + \sum_j \frac{2 \beta_j}{\eta_1 -
\zeta_j} + \frac{2\gamma_2}{\eta_1-\eta_2} , \nonumber \\
- \gamma^2_2 \ = \ \sum^2_{j=1} \frac{\mu_j (2-\mu_j)}{{(\eta_2 - \zeta_j)}^2}
- \frac{3}{{(\eta_1 - \eta_2)}^2} + \sum_j \frac{2 \beta_j}{\eta_2 -
\zeta_j} + \frac{2\gamma_1}{\eta_2-\eta_1}.
\end{eqnarray}
The four algebraic ( nonlinear ) Eqs. (4.1)-(4.2) determine
$\beta_1$, $\beta_2$ and $\gamma_1$, $\gamma_2$ in terms of ${\cal M}$
and of the $\sigma$'s, similarly to what happens in the ( much
simpler ) spinless case. However, since no general solution to this
Fuchsian problem with $5$ singularities is known, one should resort to
approximation methods in order to provide the mapping function
explicitly.
The idea is to expand $f$ in the (small) Minkowskian velocities $V_i
\ll 1$, around the static solution, which is exactly known. One should,
however, distinguish two cases, according to whether $S / B_{12} \ll 1$
is of the same order as the $V$'s, or instead $S / B_{12} = O(1)$,
where $B_{12}= B_1 - B_2$ is defined as the relative impact parameter of the
Minkowskian trajectories
\begin{equation} Z_1 = B_1 + V_1 T_1 \ \ \ \ \ \ Z_2 = B_2 + V_2 T_2 . \end{equation}
In the first case, which we call ``peripheral'',
the expansion we are considering is effectively in
both the $V_i$'s and $f$ itself, at least in a region sufficiently far
away from the horizons, which do not overlap, as noticed before. This
is a situation of peripheral scattering with respect to the scale
provided by the horizon, and will be treated in the following to first
(quasi-static) and second (non-relativistic) order.
The second case, which we call ``central''
($S / B_{12} = O(1)$), has nonlinear features even to
first order in the velocities, and the horizons may overlap. We shall
only treat the quasi-static case.
{\bf 4.1 Peripheral quasi-static case ($S \ll B_{12}$)}
Since in this case both $f$ and $V_i$ can be considered small, at
first order we just have to look for a mapping function which solves
the linearized monodromies of Eq. (2.7) around the particles ($i = 1,2$)
\begin{equation} {\tilde f}_i = \frac{a_i}{a^{*}_i} f (\zeta) +
\frac{b_i}{a^{*}_i} \ \ \ \ \ ( a_i = e^{i \pi \mu_i} \ , \ \ b_i = -
i {\overline V}_i \sin \pi \mu_i ) \end{equation}
and, furthermore, has the two apparent singularities in Eq. (3.9) and
the behaviour in Eq. (2.41) at $\zeta =0$ and $\zeta = 1$. Since, by
Eq. (4.4), $f'$ has static monodromies, we can set
\begin{equation} f' = K \zeta^{\mu_1-2} {(1-\zeta)}^{\mu_2-2} (\zeta-\eta_1)
(\zeta-\eta_2), \end{equation}
as in Eq. (3.11).
However, in the moving case, $f'$ is not integrable and $f$ has the
quasi-static monodromy (4.4), which contains a translational part. We
then set
\begin{equation} f(\zeta) = f(i) + \int^{\zeta}_i \ dt \ f' (t) \end{equation}
where the $i$-th integral is understood as an analytic continuation from
$\mu_i > 1$, as explained in Appendix A.
Since the $i$-th integral has purely static monodromy around the $i$-th particle,
we obtain from
Eq. (4.6)
\begin{equation} ( {\tilde f}(\zeta) - f(i) ) = e^{2 i \pi \mu_i} ( f (\zeta) - f(i)
) \end{equation}
and thus Eq. (4.4) is satisfied if $f(i) = - \frac{\overline V_i}{2}$,
which yields the condition
\begin{eqnarray}
\frac{{\overline V}_2 - {\overline V}_1}{2} & = & K \int^1_0 \ dt \
t^{\mu_1 -2} {(1-t)}^{\mu_2 -2} (t-\eta_1) (t-\eta_2) = \nonumber \\
& = & K B(\mu_1, \mu_2 ) \left( 1 - \sigma_1
\frac{1-\mu_1}{1-\mu_1-\mu_2} - \sigma_2 \frac{1-\mu_2}{1-\mu_1-\mu_2} \right)
\end{eqnarray}
By then using Eq. (3.7), we determine the normalization
\begin{equation} K = \frac{{\overline V}_{21}}{2} B^{-1}(\mu_1, \mu_2) {\left[ 1 -
\frac{s_1+s_2}{J} \right]}^{-1} = B^{-1}(\mu_1, \mu_2) \frac{J}{L}
\frac{{\overline V}_{21}}{2} \end{equation}
where $L$ is the orbital angular momentum of the system, given in
Eq. (3.4).
Eqs. (4.5), (4.6) and (4.9) yield the (peripheral) quasi-static solution with
spin. In particular, Eq. (4.9) shows the existence of two regimes,
according to whether the spin is small or large with respect to the
orbital angular momentum $L$.
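The structure of Eqs. (4.5), (4.6) and (4.9) can also be explored
numerically. The sketch below (all parameter values are assumed for
illustration) works in the window $1<\mu_i<2$, where the quadrature converges
without the analytic continuation mentioned above, takes the apparent
singularities $\eta_1,\eta_2$ as given by Eq. (3.9), evaluates $K$ from
Eq. (4.9), and checks that $f(\zeta)-f(0)$ grows as $\zeta^{\mu_1-1}$ near the
particle at $\zeta=0$, which is the local behaviour behind the static
monodromy factor of Eq. (4.7):
\begin{verbatim}
from mpmath import mp, quad, beta

mp.dps = 25
mu1, mu2 = 1.3, 1.4        # assumed values in the convergence window 1 < mu_i < 2
s1, s2, J = 0.1, 0.2, 1.0  # assumed spins and total angular momentum
V21bar = 0.05              # assumed (conjugated) relative velocity
eta1, eta2 = 0.3, 0.7      # apparent singularities, taken here as given by Eq. (3.9)

# normalization of Eq. (4.9)
K = 0.5*V21bar/beta(mu1, mu2)/(1 - (s1 + s2)/J)

# integrand of Eq. (4.5) and the primitive of Eq. (4.6), based at the particle zeta = 0
fp = lambda t: K * t**(mu1 - 2) * (1 - t)**(mu2 - 2) * (t - eta1) * (t - eta2)
df = lambda z: quad(fp, [0, z])

for z in (1e-2, 1e-3, 1e-4):
    print(z, df(z)/z**(mu1 - 1))
# the ratio approaches K*eta1*eta2/(mu1 - 1): locally f - f(0) ~ zeta^(mu1-1),
# which under zeta -> exp(2 pi i) zeta produces the factor exp(2 pi i mu1) of Eq. (4.7)
\end{verbatim}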
If the spin $S$ is small ($S \ll L$), so are the $\sigma$'s; the
apparent singularities of Eq. (3.9) become degenerate with the particles
\begin{equation} \eta_1 \simeq \sigma_1 , \ \ \ \eta_2 \simeq 1 - \sigma_2 \end{equation}
and the normalization $K \sim {\overline V}_{21}$ is vanishingly small
in the static limit, thus recovering the spinless quasi-static limit
of I.
If instead $B_{21} \gg S \gg L$, the parameters $\sigma$ and $\eta$ are
of order unity and, by Eq. (3.4), the normalization becomes
\begin{equation} K \simeq B^{-1}(\mu_1,\mu_2) \frac{S}{2L} {\overline V}_{21} =
\frac{S}{4\pi} \frac{(1-\mu_1-\mu_2) B(1-\mu_1,1-\mu_2)}{B_{21}} , \end{equation}
in agreement with the static case relation (3.18) for $S \ll
B_{21}$. In the latter case the mapping function becomes nontrivial in
the static limit, as discussed in the previous section.
In the general case, for any $S/L$ of order unity, the mapping
function $f$ is of first order in the small parameters and its
Schwarzian derivative does not change with respect to the static case,
except for the actual values of the $\sigma$'s and $\eta$'s, so that
the accessory parameters are provided by Eq. (3.20). The first
non-trivial change of $\{ f , \zeta \}$ is at second order in the small
parameters, as we shall see (Secs. 4.2 and 4.3).
{\bf 4.2 Central quasi-static case ($S / B_{12} \simeq O(1)$)}
In this case we have to expand the monodromies in Eq. (2.7) in the
$b_i$ parameters only, thus keeping possible nonlinear terms in
$f$. By expanding around the quasi-static solution $f^{(0)}$ of Eq. (4.5)
we can write
\begin{equation} f = f^{(0)} + \delta f + .... \end{equation}
where, around the generic particle,
\begin{equation} {\tilde f} = \frac{a_i}{a^{*}_i} f + \frac{b_i}{a^{*}_i} -
\frac{a_i b^{*}_i}{{(a^{*}_i)}^2} {(f^{(0)}_s)}^2 , \ \ \ \ \ \ \ i = 1,2 \end{equation}
and we have introduced in the last term the static limit $f^{(0)}_s$
of Eq. (3.12). These monodromy conditions, unlike the ones in Eq. (4.4), are
nonlinear. However, it is not difficult to check that they linearize for the
function
\begin{equation} h = \frac{1}{f^{(0)}_s}\frac{f'}{f'^{(0)}} \ \ , \end{equation}
which satisfies the first-order monodromy conditions
\begin{equation} {\tilde h} = e^{-2 i \pi \mu_i} h - V_i ( 1 - e^{- 2 i \pi \mu_i}
) . \end{equation}
Furthermore, from the boundary conditions for $f$, we derive the
following ones for $h$
\begin{equation}
h \simeq \left\{ \begin{array}{cc} - V_i + O ( {( \zeta -
\zeta_i)}^{1-\mu_i} ) \ \ , & ( \zeta \simeq \zeta_i, \ i = 1, 2 ) , \\
\zeta^{1-\mu_1-\mu_2} \ \ , & ( \zeta \rightarrow \infty ) \\
{( \zeta - \tau )}^{-1} \ \ , & ( \zeta \rightarrow \tau )
\end{array} \right. \end{equation}
The solution for $h$ can be found from the ansatz
\begin{eqnarray}
h & = & - V_1 + ( V_1 - V_2 ) I_0 ( \zeta ) + A \frac{
\zeta^{1-\mu_1} {(1-\zeta)}^{1-\mu_2}}{K_0 ( \zeta - \tau )} \nonumber \\
I_0 & = & \frac{1}{B(1-\mu_1, 1- \mu_2)} \int^{\zeta}_0 \ dt \ t^{-\mu_1}
\ {(1-t)}^{-\mu_2} \end{eqnarray}
by noticing that the first two terms automatically satisfy the
boundary conditions at $\zeta = 0, 1, \infty$. The last term is
proportional to ${(f^{(0)}_s)}^{-1}$, contains the pole at $\zeta = \tau$,
and the constant $A$ is determined so as to satisfy the translational
part of the monodromy (4.13). Similarly to Eq. (4.8) we get the equation
for $A$
\begin{equation} \frac{{\overline V}_2 - {\overline V}_1}{2} = A \int^1_0 d\zeta \, {f'}^{(0)}
(\zeta) + \int^1_0 d\zeta \, {f'}^{(0)}_s (\zeta) [ - V_1 f^{(0)}_s
(\zeta) + (V_1-V_2) f^{(0)}_s (\zeta) I_0 (\zeta)] \end{equation}
where the integrals are understood as analytic continuations from $2 >
\mu_i > 1$, and the non-linear terms represent higher order
contributions in the parameter $S/B_{12}$.
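For real $\zeta\in(0,1)$ and $\mu_i<1$, the function $I_0(\zeta)$ entering the
ansatz above is just the regularized incomplete Beta function, so it can be
evaluated directly. The following sketch (with assumed values of
$\mu_1,\mu_2$) only illustrates this identification; complex $\zeta$ would
require a contour quadrature instead:
\begin{verbatim}
from scipy.special import betainc, beta
from scipy.integrate import quad

mu1, mu2 = 0.3, 0.4     # assumed mass parameters with mu_i < 1

def I0_quad(z):         # direct quadrature of the definition of I_0
    val, _ = quad(lambda t: t**(-mu1) * (1 - t)**(-mu2), 0.0, z)
    return val / beta(1 - mu1, 1 - mu2)

for z in (0.1, 0.5, 0.9):
    # betainc(a, b, x) is the regularized incomplete Beta function,
    # i.e. exactly I_0(zeta) with a = 1 - mu1, b = 1 - mu2
    print(z, I0_quad(z), betainc(1 - mu1, 1 - mu2, z))
\end{verbatim}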
By using the normalization condition (4.8), an integration by parts then
gives, for velocities along the $x$-axis,
\begin{equation} A = 1 - \int^1_0 \ d\zeta \ {(f^{(0)}_s)}^2 ( \zeta ) I^{'}_{0}
\end{equation}
This means that the coefficient of the first-order quasi-static
solution $f^{(0)}$ in Eq. (4.12) is renormalized by higher orders in
$S/ B_{12}$.
Furthermore, from the expression (4.14) of $f'/ f'_{(0)} = h f^{(0)}_s$ we
obtain the correction to the Schwarzian derivative
\begin{eqnarray}
\{ f , \zeta \}& - &\{ f, \zeta \}^{(0)} \simeq {( h f^{(0)}_s )}'' -
\frac{{f''}_{(0)}}{{f'}_{(0)}} {( h f^{(0)}_s )}^{'} = \nonumber \\
&=& \frac{K_0 ( V_1 - V_2 )}{(\mu_1+\mu_2-1)B(1-\mu_1,1-\mu_2)}
\frac{\zeta-\tau}{\zeta(1-\zeta)} \left( \frac{2}{\zeta-\tau} -
\frac{1}{\zeta-\eta_1} - \frac{1}{\zeta-\eta_2} \right)
\end{eqnarray}
which turns out to be of order $O(V) \cdot O(S / B_{12})$, i.e.,
formally of first order in both parameters.
The quasi-static solution for general spin values just obtained is particularly
interesting because it allows one to understand how the trajectory
equations (4.3) can make sense, despite the multivaluedness of the
Minkowskian time.
In fact, we can solve for the mapping from regular to Minkowskian coordinates
by expanding around some arbitrary point $\xi_0 \neq \xi_i$ to get,
instead of Eq. (2.10)
\begin{equation} X^a = X^a_0 (t) + \int^{\xi}_{\xi_0} \ dz \ N W^a + \int^{\overline\xi}_{\overline
\xi_0} \ d {\overline z} \ {\overline N} \ {\widetilde W}^a . \end{equation}
We can then explicitly check that, to first order in the velocities,
the combinations $Z - V_1 T$ and $Z - V_2 T$ are well defined
at particle $1$ and particle $2$, respectively.
For this we need the static time
\begin{equation} T = t + \int^{\zeta}_{\zeta_0} \ dz \ \frac{N f_{(0)}}{f'_{(0)}}
+ (c.c.) = t - \frac{R}{1-\mu_1-\mu_2} \left( (1-\tau) \log 2(1-\zeta) +
\tau \log 2\zeta \right) + (c.c.) \end{equation}
which shows the logarithmic singularities at $z = \xi_i$; we also
need the $Z$ coordinate up to first order in $V$,
\begin{equation} Z = Z_0 (t) + \int^{\zeta}_{\zeta_0} dz \ \frac{N}{f'_{(0)}}
\left( 1 - \frac{\delta f'}{f'_{(0)}} \right)
+ \int^{\overline \zeta}_{\overline \zeta_0} \ d {\overline z} \
\frac{\overline N}{\overline f'_0} \left( {\overline f}^2_{(0)} + 2
{\overline f}^{(0)} \delta {\overline f} - \frac{\delta {\overline f'}}{
{\overline f'}_{(0)}} {\overline f}^2_{(0)} \right)
\end{equation}
Since, by Eqs. (4.14) and (4.17),
$ \delta f' / {f'_{(0)}} = - V_1 f^{(0)} + O (
{(\zeta - \zeta_i)}^0)$ and $ \delta {\overline f} = - V_1 / 2 + O (
f^2_{(0)} V )$, the $Z$-coordinate also has logarithmic singularities,
which cancel in the combination
\begin{equation} \lim_{\xi \rightarrow \xi_1} ( Z - V_1 T ) = B_1 = Z_0 (t) - V_1
t + \int^{\zeta_1}_{\zeta_0} \ dz \ \frac{N}{f'_{(0)}} ( 1 + O (V)) +
\int^{{\overline \zeta}_1}_{{\overline \zeta}_0} \ d {\overline z} \
\frac{{\overline N}
{\overline f}^2_{(0)}}{{\overline f'}_{(0)}} \ ( 1 + O(V)) \end{equation}
which is thus well defined. From the similar relation for particle
$2$, and using the expression of $R$ in Eq. (3.2),
we obtain the quasi-static equations of motion
\begin{eqnarray}
i ( B_1 - B_2 ) - ( V_1 - V_2 ) t & = & \left( \int^{\xi_1}_{\xi_2} \ dz
\frac{N}{f'_{(0)}} + \int^{\overline \xi_1}_{\overline \xi_2} d
{\overline z} \frac{{\overline N }{\overline f}^2_{(0)}}{{\overline f'}_{(0)}}
\right) ( 1 + O(V)) = \nonumber \\
& = & \alpha \ \xi^{1-\frac{{\cal M}}{2 \pi}} + \beta \ {\overline
\xi}^{1-\frac{{\cal M}}{2\pi}} = \nonumber \\
& = & \frac{C}{{K}_0} \xi^{1-\frac{{\cal M}}{2\pi}} B(1-\mu_1, 1-\mu_2)
+ {\overline C} {\overline K}_0 {\overline \xi}^{1-\frac{{\cal M}}{2\pi}}
\frac{B(\mu_1,\mu_2)}{{(\mu_1 + \mu_2 - 1)}^2}
( 1 - \tau \frac{1-\mu_1-\mu_2}{1-\mu_1} - \nonumber \\
& - & (1 - \tau ) \frac{1-\mu_1-\mu_2}{1-\mu_2}
+ \tau ( 1 - \tau ) \frac{(1-\mu_1-\mu_2)
(2-\mu_1-\mu_2)}{(1-\mu_1)(1-\mu_2)} )
\end{eqnarray}
where we have assumed the velocities to lie along the $x$ axis and the
impact parameters along the $y$ axis. Equation (4.25) can be
inverted to give
\begin{equation} C \xi^{1-\frac{{\cal M}}{2\pi}} = \frac{V_{21} t}{\alpha + \beta} +
i \frac{S}{4\pi} (1-\mu_1-\mu_2) \end{equation}
From Eq. (4.26) we learn that the spins renormalize the
constants describing the trajectory, but not the exponent, which
instead determines the scattering angle; the latter is therefore
unaffected and given by $ \theta = \frac{\cal M}{2} {( 1 - \frac{\cal M}{2\pi} )}^{-1}$ as
in the spinless case. The constant term on the r.h.s. of Eq. (4.26) is expected
to be proportional to $J$, but only the spin part is determined here,
because we have neglected the $O(V)$ terms in Eq. (4.25).
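The statement about the scattering angle can be checked with a small numerical
sketch (parameter values assumed for illustration, with $\alpha+\beta$ and $C$
set to unity since only the exponent matters): parametrize the right-hand side
of Eq. (4.26) as a straight line in time, invert the power to recover $\xi(t)$,
and compare the change of asymptotic direction with
$\theta = \frac{\cal M}{2}(1-\frac{\cal M}{2\pi})^{-1}$:
\begin{verbatim}
import numpy as np

M = 1.2                      # assumed invariant mass, in units with M < 2*pi
p = 1.0/(1.0 - M/(2*np.pi))  # exponent obtained by inverting Eq. (4.26)
V21 = 1.0                    # assumed relative velocity, along the x axis
spin_term = 0.3j             # assumed constant i*S/(4*pi)*(1 - mu1 - mu2)

t = np.array([-1e8, 1e8])            # asymptotic past and future
w = V21*t + spin_term                # r.h.s. of Eq. (4.26), with alpha + beta -> 1
phi_in, phi_out = np.angle(w)        # asymptotic phases of w
turning = p*(phi_in - phi_out)       # xi ~ w**p, so phases are rescaled by p
deflection = turning - np.pi         # deviation from free straight-line motion

theta = 0.5*M/(1.0 - M/(2.0*np.pi))  # the quoted scattering angle
print(deflection, theta)             # the two numbers agree
\end{verbatim}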
{\bf 4.3 The peripheral non-relativistic case ($S \ll B_{12}$)}
We now expand the projective monodromy transformations of Eq. (2.7) up to
the next nontrivial order in both $f$ and $V_i$. By referring to a generic
particle and by defining
\begin{equation} f = f^{(1)} + f^{(3)} + \cdots , \end{equation}
with similar notation for $a$'s and $b$'s, we obtain
\begin{eqnarray}
{\tilde f}^{(1)} & = & \left. \frac{a}{a^*}\right)^{(0)} f^{(1)} + \left.
\frac{b}{a^*}\right)^{(1)} \ , \nonumber \\
{\tilde f}^{(3)} & = & \left. \frac{a}{a^*}\right)^{(0)} f^{(3)} - \left.
\frac{a b^*}{a^{*2}} \right)^{(1)} {(f^{(1)})}^2 + {\left( \frac{a}{a^*} -
\frac{|b|^2}{a^{*2}} \right)}^{(2)} f^{(1)} + \left. \frac{b}{a^{*}} \right)^{(3)}
. \end{eqnarray}
The first equation yields the quasi-static solution described in
Sec. 4.1. The second equation (which is down by two orders) is
nonlinear, but this time it linearizes for the function
\begin{equation} h_1 = \frac{1}{f'^{(1)}} \left( \frac{ f'^{(3)}}{ f'^{(1)} }
\right)' \end{equation}
which satisfies the first-order monodromy conditions
\begin{equation} \left. {\tilde h}_1 \right)_i = e^{-2 i \pi \mu_i } h_1 - V_i ( 1
- e^{- 2 i \pi \mu_i} ) . \end{equation}
Furthermore, from the boundary conditions for $f$ we derive the
following ones for $h_1$
\begin{equation} h_1 \simeq \left\{ \begin{array}{cc}
- V_i + O ( {( \zeta - \zeta_i )}^{2-\mu_i} ) , & \zeta \simeq
\zeta_i, \ i = 1,2 \\
\zeta^{1-\mu_1-\mu_2} \ , \ \ \ \zeta \rightarrow \infty \\
{(\zeta- \eta_i )}^{-1} , \ \ \ \zeta \rightarrow \eta_i \end{array}
\right.\end{equation}
where the values $\left. h_1\right)_i = - V_i$ are needed to realize
the translational part of the monodromy (4.28).
The solution for $h_1$ can be found from the ansatz
\begin{equation} h_1 = - V_1 + ( V_1 - V_2 ) \left[ I_0 (\zeta ) + \frac{1}{B(1-\mu_1, 1
-\mu_2)} \frac{\zeta^{1-\mu_1} {(1-\zeta)}^{1-\mu_2} ( A_1 ( \zeta -1
) + A_2 \zeta )}{(\zeta-\eta_1)(\zeta-\eta_2)} \right] \end{equation}
by noticing that the first two terms automatically satisfy the
translational part of the monodromy (4.28), so that the third one should
have the purely static monodromies $e^{-2 i \pi \mu_i }$, besides the
poles at $ \zeta = \eta_i $. The constants $A_1$ and $A_2$ can then be chosen
so as to satisfy $h_1 + V_i \sim {( \zeta - \zeta_i )}^{2-\mu_i}$.
Since, by (3.9), $\eta_1 \eta_2 = \sigma_1$, $ (1-\eta_1) (1-\eta_2)
= \sigma_2$, we find
\begin{equation} A_1 = \frac{\sigma_1}{1-\mu_1} = \frac{s_1}{(1-\mu_1-\mu_2) J} \
, \ \ \ \ A_2 = \frac{\sigma_2}{1-\mu_2} = \frac{s_2}{(1-\mu_1-\mu_2)J} \end{equation}
thus making the spinless limit particularly transparent.
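A quick numerical check (with assumed values of the $\mu_i$, $V_i$ and
$\sigma_i$, and with $\eta_1,\eta_2$ reconstructed from
$\eta_1\eta_2=\sigma_1$, $(1-\eta_1)(1-\eta_2)=\sigma_2$) confirms that this
choice of $A_1$ removes the leading $\zeta^{1-\mu_1}$ term of $h_1+V_1$ near
$\zeta=0$, as required by the boundary condition
$h_1+V_i\sim(\zeta-\zeta_i)^{2-\mu_i}$:
\begin{verbatim}
import numpy as np
from scipy.special import betainc, beta

mu1, mu2 = 0.3, 0.4            # assumed mass parameters, mu_i < 1
V1, V2 = 0.10, -0.05           # assumed small velocities
sig1, sig2 = 0.12, 0.20        # assumed spin parameters sigma_i

# eta_1, eta_2 from eta1*eta2 = sigma_1 and (1 - eta1)(1 - eta2) = sigma_2
eta1, eta2 = np.roots([1.0, -(1.0 + sig1 - sig2), sig1])

A1, A2 = sig1/(1 - mu1), sig2/(1 - mu2)
B = beta(1 - mu1, 1 - mu2)

def h1(z, A1=A1, A2=A2):
    I0 = betainc(1 - mu1, 1 - mu2, z)     # regularized incomplete Beta = I_0(zeta)
    rat = z**(1 - mu1)*(1 - z)**(1 - mu2)*(A1*(z - 1) + A2*z)/((z - eta1)*(z - eta2))
    return -V1 + (V1 - V2)*(I0 + rat/B)

for z in (1e-2, 1e-3, 1e-4):
    good = (h1(z) + V1)/z**(1 - mu1)             # -> 0: the zeta^(1-mu1) term cancels
    bad  = (h1(z, A1=0.5*A1) + V1)/z**(1 - mu1)  # with a wrong A1 it stays finite
    print(z, good, bad)
\end{verbatim}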
From the form (4.29) of $h_1$, from its definition and from the form of
$f^{(1)}$ in Eqs. (4.5) and (4.6), we then find the result
\begin{eqnarray}
\frac{f'^{(3)}}{f'^{(1)}} & = & - V_1 f^{(1)} + \ {\rm const.}
+ \frac{( 1 -\mu_1 - \mu_2 )}{(1-\frac{S}{J})}
\frac{\delta {\cal M} }{2\pi} \int^{\zeta}_0 dt
\left( - \frac{A_1}{t} + \frac{A_2}{1-t} + \right. \nonumber \\
& + & t^{\mu_1-2} \left.
{(1-t)}^{\mu_2-2} (t-\eta_1) (t-\eta_2) \int^t_0 d\tau \tau^{-\mu_1}
{(1-\tau)}^{-\mu_2} \right) , \end{eqnarray}
where we have defined the parameter
\begin{equation} \delta {\cal M} = {|V_{21}|}^2 \frac{\sin \pi\mu_1 \sin \pi
\mu_2}{\sin \pi ( \mu_1 + \mu_2 )} = {\left[ {\cal M} - ( m_1 + m_2 )
\right]}^{(2)} , \end{equation}
representing the nonrelativistic contribution to the total invariant
mass. In fact, the perturbative large $\zeta$ behaviour of $f'$
provided by Eq. (4.31) is
\begin{equation} \frac{f'_{(1)} + f'_{(3)}}{f'_{(1)}} \sim 1 + \frac{\delta {\cal
M}}{2\pi} \log \zeta \sim \zeta^{\frac{\delta {\cal M}}{2\pi}} \end{equation}
as expected from the behaviour $f' \sim \zeta^{\frac{\cal M}{2\pi} -
2}$ of the full solution.
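As a side remark (with the $\mu_i$ scaled uniformly to zero), the trigonometric
factor in the definition of $\delta{\cal M}$ reduces at leading order to the
reduced-mass-like combination $\pi\mu_1\mu_2/(\mu_1+\mu_2)$; a one-line SymPy
verification:
\begin{verbatim}
import sympy as sp

eps, a, b = sp.symbols('epsilon a b', positive=True)

# trigonometric factor of delta M, with mu1 = eps*a and mu2 = eps*b
factor = sp.sin(sp.pi*eps*a)*sp.sin(sp.pi*eps*b)/sp.sin(sp.pi*eps*(a + b))

# leading behaviour as the mass parameters are scaled to zero
print(sp.limit(factor/eps, eps, 0))   # pi*a*b/(a + b)
\end{verbatim}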
We further notice that Eq. (4.31) provides a non-trivial change of the
Schwarzian derivative, similarly to what was observed in the previous
sections. Since
\begin{equation} \{ f , \zeta \} = L'' + \frac{1}{2} L'^2 \ , \ \ \ \ \ L \equiv \log
f'^{(1)} + \frac{f'^{(3)}}{f'^{(1)}} + ... \end{equation}
we have, after some algebra, the non-relativistic correction to the
Schwarzian (cf. App. B)
\begin{eqnarray} \{ f , \zeta \} & - &
\{ f , \zeta \}^{(0)} = f'^{(1)} h'_1 +
... = \nonumber \\ & = & \frac{\delta {\cal M}}{2\pi} \frac{1}{1- \frac{S}{J}}
\left[ -
\frac{1}{\zeta (1 - \zeta )} \left( 1 - \mu_1 - \mu_2 - ( 3 - \mu_1 - \mu_2
)\frac{s_1 + s_2}{J} \right) + \right. \nonumber \\
& + & \left. \left( \frac{s_1}{J} \frac{1}{\zeta} -
\frac{s_2}{J} \frac{1}{1-\zeta} \right) \left( \frac{1}{\zeta-\eta_1} +
\frac{1}{\zeta-\eta_2} \right) \right] , \end{eqnarray}
where $\{ f, \zeta \}^{(0)}$ denotes the static expression in Eqs.
(3.10) and (3.20).
The same expression (4.38) could have been obtained by expanding
Eqs. (4.1)-(4.2) in the parameter $\delta {\cal M}$, around the static
solution of Eq. (3.20) (Appendix B).
Let us note that the solution considered here contains more
information than just the terms of order $O(V^2)$, because the expansion
of the $1 - \frac{S}{J}$
denominator can give rise, in the small-velocity limit, to a mixed
perturbation in both $O(V^2)$ and $O(V) \cdot O( S / B_{12})$. In
fact, from Eq. (4.38) we can rederive, to this order, the Schwarzian given in
Eq. (4.20).
\section{Discussion}
We have shown here that the BCV gauge can be extended to the case of spinning
particles in (2+1)-Gravity, in the region external to the ``CTC
horizons'' that occur around the particles themselves.
Let us note that the existence of the BCV gauge is in general related
to the lack of CTC's. In fact, in our conformal Coulomb gauge the
proper time element is of the form \cite{a11}
\begin{equation} ds^2 = \alpha^2 dt^2 - e^{2\phi} {| dz - \beta dt |}^2 \end{equation}
and, if some instantaneous motion is possible ($dt = 0$), the proper
time
\begin{equation} ds^2 = - e^{2\phi} {|dz|}^2 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ( dt = 0 )\end{equation}
is necessarily spacelike, unless
\begin{equation} e^{2\phi} \simeq |g| \simeq {( 1 - {| f |}^2 )}^2 = 0 , \end{equation}
a case in which it becomes lightlike.
Therefore, the (closed) curves defined by $|f| = 1$ are at the same
time the boundary for the existence of CTC's and for the existence
of the gauge itself, because the metric determinant vanishes there. This is
not surprising, because our gauge allows the definition of a
single-valued global time, which is expected to be impossible in the
presence of CTC's.
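For orientation, these boundaries can be located numerically: for a static
two-particle mapping function of the form of Eq. (3.26), with assumed values
of $K$, $\tau$ and of the $\mu_i$, $f$ diverges at the particles (the
exponents $\mu_i-1$ are negative) and vanishes at infinity when
$\mu_1+\mu_2<1$, so the curves $|f|=1$ close around each particle. The sketch
below simply scans a grid to exhibit this:
\begin{verbatim}
import numpy as np

mu1, mu2 = 0.3, 0.4          # assumed mass parameters
K, tau = 0.1, 0.4 + 0.2j     # assumed normalization and zero of f
z1, z2 = 0.0, 1.0            # particles placed at zeta = 0 and zeta = 1

def f(z):                    # static mapping function of the form of Eq. (3.26)
    return K/(mu1 + mu2 - 1) * (z - z1)**(mu1 - 1) * (z - z2)**(mu2 - 1) * (z - tau)

x = np.linspace(-1.02, 1.98, 61)   # grids chosen so as not to hit the particles
y = np.linspace(-0.99, 1.01, 41)
Z = x[None, :] + 1j*y[:, None]
inside = np.abs(f(Z)) > 1.0        # region bounded by the curves |f| = 1
print(inside.sum(), "grid points lie inside the |f| = 1 curves")
print(np.abs(f(0.01 + 0j)), np.abs(f(5.0 + 0j)))  # large near a particle, small far away
\end{verbatim}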
We have provided explicit solutions for the mapping function in
various cases: firstly for $N$ spinning particles at rest, a case in which
the Schwarzian shows $3N-1$ singularities, and in particular for
$N=2$, a case in which we have also given a closed form for the metric
(Eq. (3.22)).
Secondly, we have described the metric and the motion for two spinning
particles, in the quasi-static and the non-relativistic cases. In
particular, we have shown that it is possible to determine the motion
of the particle sites $\xi_i (t)$ (as singularities of the Schwarzian)
by imposing the $O(2,1)$ monodromies on the exterior solution
(Sec. 4.2 and Appendix A).
Actually, since the Minkowskian coordinates are sums of analytic and
antianalytic functions of the regular ones in the BCV gauge, it turns
out that each of them can be continued into the interior
region. This suggests that the gauge could perhaps be extended to the interior
region by relaxing the conformal condition.
We feel, however, that the clear delimitation of CTC horizons, with
sizes related to the spins, is actually quite a physical feature of
our gauge, and points toward the conclusion that a pointlike spin in (2+1)-Gravity
is not really a self-consistent concept.
{\bf Acknowledgments}
We are happy to thank Alessandro Bellini for his collaboration in the
early stages of this work, and for valuable discussions.
This work is supported in part by M.U.R.S.T., Italy.